Turning a camera into an instrument (of sorts).

I worked with Aaron Arntz to produce this little experiment. The idea is that the camera splits the screen into blocks, then captures color values for each block, and then plays music whenever the colors shift… It’s a bit of a mess right now but it works!

I dealt with image processing and display, and Aaron used the Minim library to produce the sounds.

The code’s over on github.
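The repo has the real thing, but the block idea boils down to averaging the camera pixels inside each grid cell and firing a note when a cell's value jumps. Here's a rough sketch in plain Java (names and thresholds are mine, not the actual project code):

```java
// Minimal sketch of the block-averaging idea: divide a frame's pixels
// into a grid of square cells, average each cell's brightness, and
// flag cells whose value shifted enough to trigger a sound.
public class BlockGrid {
    // Average brightness (0-255) of one grid cell in a row-major
    // 0xRRGGBB pixel array of width w. cell is the cell size in pixels.
    static int cellBrightness(int[] px, int w, int cellX, int cellY, int cell) {
        long sum = 0;
        for (int y = cellY * cell; y < (cellY + 1) * cell; y++) {
            for (int x = cellX * cell; x < (cellX + 1) * cell; x++) {
                int c = px[y * w + x];
                int r = (c >> 16) & 0xFF, g = (c >> 8) & 0xFF, b = c & 0xFF;
                sum += (r + g + b) / 3;
            }
        }
        return (int) (sum / (long) (cell * cell));
    }

    // A note fires when a cell's brightness moves past a threshold
    // compared to the previous frame.
    static boolean shifted(int prev, int cur, int threshold) {
        return Math.abs(cur - prev) >= threshold;
    }
}
```

Each frame you'd recompute every cell, compare against the last frame's values, and hand the shifted cells to the sound side.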

And here’s a video of the project in action:
[vimeo https://vimeo.com/75279998 w=640]

As you can see something went terribly wrong/right with my recording setup…

“I can always make more”

Here are the results of my collaboration with Billy Dang and Yiyang Liang, tentatively called “I can always make more”. The audio clips used are from field recordings we made at MoMA (including recordings of audio installation pieces), interviews with strangers and friends, and sounds of the city.

[soundcloud url="http://api.soundcloud.com/tracks/111100701" params="" width="100%" height="166" iframe="true" /]

Here are some pics of the recording process:

[four photos from the recording session, Sep 13, 2013]


For my second computational media project I worked with Brian Clifton to make this prism extruding thingy. We were inspired by the code we found here.

Click here to see the sketch on openprocessing.org. You can mouse over to extrude the triangles, click them to change their color, or move the slider (or mouse wheel) to change their max height.

Here’s a vid:

[vimeo https://vimeo.com/74787712 w=640]

And here’s the code:
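The heart of it is just easing each triangle's height toward a target while the mouse is over it, and back down to zero otherwise. A minimal illustration in plain Java (the names are mine, not the original source):

```java
// Minimal sketch of the mouse-over extrusion: each frame, a hovered
// triangle eases its height toward maxHeight (set by the slider or
// mouse wheel); an un-hovered one eases back toward zero.
public class Prism {
    float height = 0;

    // One animation step; an easing factor of 0.1 gives a smooth
    // rise and fall instead of an instant jump.
    float step(boolean hovered, float maxHeight) {
        float target = hovered ? maxHeight : 0;
        height += (target - height) * 0.1f;
        return height;
    }
}
```

In the actual sketch this runs once per triangle per draw() call, with a click handler swapping the triangle's color.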

What is Interactivity?

A few thoughts on Bret Victor’s “The Future of Interaction Design”

Victor begins his invective against current trends in interaction design with a video produced by Microsoft depicting various lonely souls swiping at paper-thin screens in a not-too-distant future. It makes for a good target and feels at home in Microsoft’s line of depressing videos about the current and could-be future state of technology. The characters in it seem so alienated and the overall mood so melancholy that I was actually surprised that the man at the subway station at 1:52 doesn’t just go ahead and step in front of the oncoming train. He really looks like he’s considering it:

[youtube http://www.youtube.com/watch?v=a6cNdhOKwi0&t=1m45s&w=640]

Victor laments the lack of vision in this version of the future, and notes that the paradigm of the tablet, which has only recently become a reality, was in fact initiated by Alan Kay in 1968. The vision has not advanced significantly since then, and the tools that we use do not come close to leveraging our expressive capacity (although I do think that the glass smartphone does, ironically, enhance human capability: in this case the capability to swipe ineffectually at a world you are alienated from).

Chris Crawford describes interactivity as a kind of infinite loop of “listen, think, speak” between two actors. Victor, I think, would want to collapse that loop, make it entirely invisible to the actors. Victor has described his work almost as a cure for blindness, saying that good interactivity allows the user to “see what you’re doing” and “try ideas as you think of them”. This type of interaction, which seeks to enhance understanding, generate unexpected ideas, and aid in creativity, requires tools that are able to leverage a full range of sensory input and output. Not merely of the eyes, but also the ears, the hands, and ultimately the entire body.