It was actually quite exciting to see what people came up with. And equally exciting was seeing the looks on their faces as they interacted with the iPads and then waited with anticipation to see how their drawings would be interpreted by the foreign shapes appearing on the screen.
To pull off this stunt we used nothing but open web technologies: a web page running my “DrawPad” Canvas application, which let people draw and captured the movements of their fingers; storage of the stroke data as JSON on the backend (thanks to Tim Lucas); and visualisation of those strokes in 3D using WebGL, via Mr. Doob’s wonderful three.js library.
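If you’re curious about that middle step, here’s roughly the shape of the data involved. The field names and the /strokes endpoint below are purely illustrative, not the actual DrawPad format:

```js
// Illustrative only: a stroke as an array of timestamped points, posted
// to a hypothetical /strokes endpoint as JSON. The real format lives in
// the DrawPad source.
var stroke = {
  colour: '#ff6600',
  points: [
    { x: 120, y: 245, t: 0 },
    { x: 131, y: 252, t: 16 },
    { x: 145, y: 260, t: 33 }
  ]
};

var xhr = new XMLHttpRequest();
xhr.open('POST', '/strokes', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(stroke));
```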
To get people drawing, I took a look at 37signals’ Chalk, but it lacked one important feature: multi-touch drawing. So I decided to write my own drawing app.
If you’ve never done multi-touch event handling it can be a mysterious process (it certainly was to me), but once you get your head around the notion of an event object that contains multiple points of interaction, it’s actually quite fun.
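To illustrate the idea, here’s a stripped-down sketch (not the DrawPad code itself, and it assumes the canvas fills the page so pageX/pageY can be used directly). Each touch in the event carries its own identifier, which lets you track several fingers, and therefore several strokes, at once:

```js
// Minimal sketch of multi-touch drawing on a canvas: each entry in
// event.changedTouches is one finger, identified by touch.identifier,
// so several strokes can be drawn at the same time.
var canvas = document.getElementById('drawpad');
var ctx = canvas.getContext('2d');
var lastPoints = {}; // last known position per finger

canvas.addEventListener('touchstart', function (event) {
  event.preventDefault();
  for (var i = 0; i < event.changedTouches.length; i++) {
    var touch = event.changedTouches[i];
    lastPoints[touch.identifier] = { x: touch.pageX, y: touch.pageY };
  }
}, false);

canvas.addEventListener('touchmove', function (event) {
  event.preventDefault(); // stop the page from scrolling while drawing
  for (var i = 0; i < event.changedTouches.length; i++) {
    var touch = event.changedTouches[i];
    var last = lastPoints[touch.identifier];
    if (!last) continue;
    // Draw a segment from this finger's previous position to its new one.
    ctx.beginPath();
    ctx.moveTo(last.x, last.y);
    ctx.lineTo(touch.pageX, touch.pageY);
    ctx.stroke();
    lastPoints[touch.identifier] = { x: touch.pageX, y: touch.pageY };
  }
}, false);
```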
I’ve uploaded the source code for the drawing app (and the 3D visualiser) to a DrawPad GitHub project if you want to take a closer look (or improve it in the countless ways it could be improved). Certainly the 3D visualiser is a product of tight deadlines. I really would have loved to create the stroke paths as true 3D meshes, but had to settle for a series of spheres that follow the path of each stroke instead. (Kind of like voxels.)
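To give a sense of that sphere approach, here’s a rough three.js sketch. It’s not the actual visualiser code; it assumes a scene has already been set up and that points holds the {x, y} positions captured for one stroke:

```js
// Rough sketch of the "spheres along the stroke" approach with three.js,
// assuming `scene` already exists and `points` is an array of {x, y}
// positions for one stroke. Not the real visualiser code, just the gist.
var material = new THREE.MeshLambertMaterial({ color: 0xff6600 });
var geometry = new THREE.SphereGeometry(2, 8, 8); // small, low-poly spheres

points.forEach(function (point, i) {
  var sphere = new THREE.Mesh(geometry, material);
  // Map the 2D stroke coordinates into 3D space; the z offset here is
  // just an invented way of giving the path a little depth.
  sphere.position.set(point.x, -point.y, i * 0.5);
  scene.add(sphere);
});
```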
But at the end of the day the technology didn’t matter. What mattered was the outcome: an easy-to-use way for people to draw what they wanted, and a pretty-as-a-picture translation of what they drew. The fact that it was done in a browser made no difference at all.