CALIBRATED DEVICES. SHARED TIME. ONE CANVAS.
pixelmesh maps individual devices into a shared spatial grid. Each endpoint renders against a synchronised clock. Effects propagate across distributed hardware as if it were one continuous surface.
Devices normally operate independently. pixelmesh treats them as shared infrastructure. Once time is shared and position is indexed, a crowd becomes a display. Coordination becomes visible.
A simple indexed grid. A shared clock. A travelling wave. At scale, this becomes light moving across a crowd.
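That travelling wave can be sketched as a single function of indexed position and shared time. The function and parameter names below are illustrative, not pixelmesh's actual code:

```python
import math

def wave_brightness(x, t, speed=2.0, wavelength=8.0):
    """Brightness (0..1) of the device at grid column x at shared time t.

    A sine wave travelling in +x. Every device evaluates the same
    function, so no per-device pixel data ever crosses the network.
    """
    phase = 2 * math.pi * (x - speed * t) / wavelength
    return 0.5 * (1 + math.sin(phase))

# Each device plugs in its own calibrated x and the shared clock t.
row_at_t0 = [round(wave_brightness(x, t=0.0), 2) for x in range(8)]
```

Because the wave is a pure function of (x, t), two devices with synchronised clocks agree on the output without ever talking to each other.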
Devices first undergo spatial calibration, assigning each endpoint a stable coordinate within a shared grid. This coordinate becomes its identity within the system.
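The core of blink-based spatial calibration is locating each device in a camera frame while it flashes. pixelmesh's pipeline uses OpenCV; the sketch below shows only the underlying idea with NumPy, and every name in it is hypothetical:

```python
import numpy as np

def locate_device(dark_frame, lit_frame):
    """Return (row, col) of the brightest change between a frame where
    the device is dark and one where it flashes. A real pipeline
    (e.g. OpenCV) would add blurring, thresholding and lens-distortion
    correction on top of this.
    """
    diff = lit_frame.astype(np.int32) - dark_frame.astype(np.int32)
    return np.unravel_index(np.argmax(diff), diff.shape)

# Toy example: a 6x6 "camera frame" where one device flashes at (2, 4).
dark = np.zeros((6, 6), dtype=np.uint8)
lit = dark.copy()
lit[2, 4] = 255
coord = locate_device(dark, lit)  # becomes this device's grid identity
```

Flashing devices one at a time (or with unique blink codes) and repeating this lookup yields the stable coordinate each endpoint carries from then on.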
A central controller maintains a shared time reference and broadcasts lightweight effect parameters rather than pixel data. Each device computes its own output locally using indexed position and time, similar to a distributed shader.
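One way to picture that controller/device split, with hypothetical parameter names rather than pixelmesh's real protocol: the controller broadcasts a few effect parameters, and each device evaluates them against its own coordinate and the shared clock.

```python
import math

# Controller side: one compact message describes the whole effect;
# no per-pixel data crosses the network.
effect = {"kind": "pulse", "cx": 10.0, "cy": 5.0, "speed": 3.0, "width": 2.0}

def render_locally(x, y, shared_t, p):
    """Device side: derive this endpoint's brightness from its own
    calibrated (x, y) and the shared clock -- one invocation of a
    distributed shader."""
    r = math.hypot(x - p["cx"], y - p["cy"])  # distance to pulse centre
    d = abs(r - p["speed"] * shared_t)        # distance to the wavefront
    return max(0.0, 1.0 - d / p["width"])     # bright near the front

# Same broadcast, different positions, different outputs.
levels = [render_locally(x, 0, shared_t=2.0, p=effect) for x in range(20)]
```

The broadcast stays a few bytes regardless of how many devices join, which is what keeps bandwidth flat as the surface grows.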
This keeps bandwidth low and behaviour deterministic. The crowd effectively becomes a distributed rendering engine.
pixelmesh draws inspiration from MIT's Junkyard Jumbotron and Seb Lee-Delisle's PixelPhones. Both explored how independent screens could form a coherent visual surface.
pixelmesh blends distributed systems engineering with projection mapping and real-time computation. Infrastructure behaving like art.
pixelmesh exists because coordination is invisible.
Distributed systems power everything around us, yet we rarely see them behave as one. I wanted to explore what happens when infrastructure becomes visible: when independent devices share time, share space, and render as a single surface.
This is not a product demo. It is an experiment in synchronisation, scale and emergent behaviour.
Infrastructure behaving like art.
pixelmesh combines computer vision, distributed computation and cloud infrastructure. Calibration is powered by OpenCV in Python. Synchronisation and orchestration run via a lightweight controller hosted on DigitalOcean.