Thanks to Reid Spice, the second edition of the BaconWood Festival is coming soon, and I started wondering what I'm going to showcase this year.

Thinking about it reminded me of what I did last year. I set up a video installation I now refer to as "the psychedelic mirror": basically a camera / projector setup, with Processing transforming the image in real time through trippy algorithmic filters I wrote. I set it up on the dance floor and people ended up dancing with an ever-changing psychedelic image of themselves.

Luckily, my friend Jim Wiggins had taken a couple of short videos that I dug up and put together recently. See what it looked like here:


Psychedelic Mirror - Real time video processing.org from mistercrunch on Vimeo.

The webcam was slightly modified to work in the near-infrared spectrum, and I had an infrared spotlight. A sheet hung in front of a window served as the projector screen. I had nothing to hold the projector in a safe place, so I strapped it to a garbage can sitting on a table, itself sitting on top of two other tables (so the beam would clear the heads of the dancers). Kinda ghetto, but it turned out pretty well!

The software was designed with two modes: one that cycles automatically through the different patterns, and another where the events and patterns are triggered from a wireless keyboard.
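For the curious, the mode logic boils down to something like this in Processing (the keys, timing and pattern count below are placeholders for illustration, not the actual installation code):

```java
// Minimal sketch of just the mode logic: auto-cycling vs. keyboard-triggered patterns.
int currentPattern = 0;
int patternCount = 5;
boolean autoMode = true;
int lastSwitch = 0;
int cycleMillis = 10000;  // auto-advance every 10 seconds

void setup() {
  size(640, 480);
  colorMode(HSB, patternCount, 100, 100);
}

void draw() {
  if (autoMode && millis() - lastSwitch > cycleMillis) {
    currentPattern = (currentPattern + 1) % patternCount;
    lastSwitch = millis();
  }
  // stand-in for the real filters: tint the screen differently for each pattern
  background(currentPattern, 80, 40);
}

void keyPressed() {
  if (key == 'a') autoMode = !autoMode;   // toggle automatic cycling
  if (key >= '1' && key <= '5') {         // wireless keyboard triggers a specific pattern
    currentPattern = key - '1';
    autoMode = false;
  }
}
```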

I'm thinking of using this idea as a prototype for something much better. Stay tuned!

So first things first: excuse the poor video quality. I couldn't use screen capture software since my Core 2 Duo was too busy rendering balls and piano sounds, so I had to film my computer screen with my cheap Canon camera. I'll post a better video as soon as I find a way.



I've always had mixed feelings about sound visualization on computers. While the results usually look extremely cool, effective spectrum analysis and beat detection are hard to program, and the results never fully convey the feeling of looking at music.


So I decided to try a totally different approach and work with General MIDI instead. While the public perception may be that MIDI sounds terrible (a legacy of the AdLib & Sound Blaster sound card days), sound synthesis has come a long way, and MIDI, on top of being extremely practical, can now sound amazing. Coding sound visualization with MIDI as the input gives me full information about which note is played, at what velocity, when, and how long it is held. The downside, of course, is that it only works with MIDI instruments (keyboards and electronic drums, and that's about it...).


So here's how I approached the design, and how I represented the different dimensions:

  • Pitch: Notes are displayed left to right, as on a piano. Black keys appear slightly higher.
  • Velocity: Size; harder strokes are drawn bigger than lighter ones.
  • Time: The Y axis represents time. When a note is hit, a ball appears at the top of the screen and stays there until the key is released; the ball then drops under a gravity-like force and starts shrinking until it disappears. For trippiness purposes, the balls split into three when they hit the floor and bounce around.
  • Harmony: Colors show where the note sits on the scale. Consonant intervals are drawn with similar colors, dissonant intervals with opposing colors, basically mapping the circle of fifths onto the hue of an HSB color (see the sketch after this list).
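The mapping itself is pretty simple. Here's a rough, self-contained Processing sketch of just that mapping, with random notes standing in for real MIDI input, so this is an illustration of the idea rather than the actual piece:

```java
// Rough sketch of the note-to-visual mapping: pitch -> x position, velocity -> size,
// circle of fifths -> hue. Random "notes" stand in for real MIDI input.
void setup() {
  size(800, 400);
  colorMode(HSB, 360, 100, 100);  // hue carries the circle-of-fifths color
  noStroke();
  frameRate(4);
}

void draw() {
  background(0);
  int pitch = (int) random(21, 109);     // fake a note in the 88-key piano range
  int velocity = (int) random(30, 128);  // fake a velocity

  float x = map(pitch, 21, 108, 0, width);    // pitch -> left/right, like a keyboard
  float size = map(velocity, 0, 127, 5, 60);  // velocity -> ball size
  int fifths = (pitch * 7) % 12;              // position on the circle of fifths
  float hue = fifths * 30;                    // 12 steps around the 360-degree hue wheel

  fill(hue, 80, 100);
  ellipse(x, 40, size, size);                 // ball appears near the top of the screen
}
```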





On the technology side, I'm using processing.org (Java), a MIDI library and some OpenGL. A sequencer generates the sounds and routes the MIDI to the Processing application. Everything is rendered in real time.
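If you want to wire MIDI into a Java/Processing program without any extra library, the standard javax.sound.midi classes can do it. The snippet below is a bare-bones stand-in to show the idea; it is not the Ruin & Wesen library I'm actually using:

```java
import javax.sound.midi.*;

// Bare-bones MIDI listener: prints note-on events with pitch and velocity.
// Generic stand-in for illustration, not the library used in the piece.
public class MidiPrinter {
  public static void main(String[] args) throws Exception {
    for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
      MidiDevice device = MidiSystem.getMidiDevice(info);
      if (device.getMaxTransmitters() == 0) continue;  // skip devices with no MIDI output
      device.open();
      device.getTransmitter().setReceiver(new Receiver() {
        public void send(MidiMessage msg, long timeStamp) {
          if (msg instanceof ShortMessage) {
            ShortMessage sm = (ShortMessage) msg;
            if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
              System.out.println("note " + sm.getData1() + " velocity " + sm.getData2());
            }
          }
        }
        public void close() {}
      });
    }
    Thread.sleep(60000);  // listen for a minute
  }
}
```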

Thanks to these indirect contributors:
* Frédéric Chopin
* Ruin & Wesen, who wrote the Midi library for processing.org that I'm using
* All processing & java peeps! w00t.



Oh, and I'm looking for a performer who would be interested in teaming up to expand on the idea and do live performances using related technologies. Think a concert hall, a great piano player, and a giant projector screen. I have other MIDI visualizations in the pipeline and more ideas. Let's innovate and bring digital art and virtuoso piano performance together.

Edit 11/23/2010: Source code!
https://github.com/mistercrunch/MidiVisualization

Edit 3/13/2011:

Robert Turenne has shown interest in this project and provided this screen capture. Thanks!


3D Vertices fed from webcam from mistercrunch on Vimeo.


So here's my latest little hack: I'm using the webcam feed in real time to draw 3D lines that form a grid of *vertices*. I'm using processing.org (Java) and OpenGL. The Z axis is based on brightness, and a simple color mapping adds an interesting effect.

Moving the mouse cursor rotates the grid, and buttons allow changing the resolution (the size of the squares). Another button engages a steady rotation along the Y axis.
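The core of the idea fits in a few lines of Processing. This is a rough reconstruction, not the exact code behind the video:

```java
import processing.video.*;

// Sample the webcam on a coarse grid and push each point along Z based on its brightness.
Capture cam;
int step = 10;  // grid resolution, i.e. the size of the squares

void setup() {
  size(640, 480, P3D);
  cam = new Capture(this, 320, 240);
  cam.start();  // not needed on older Processing versions, where capture starts on its own
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  background(0);

  // rotate the grid around its center based on the mouse position
  translate(width / 2, height / 2);
  rotateY(map(mouseX, 0, width, -PI / 3, PI / 3));
  translate(-cam.width, -cam.height);

  stroke(255);
  noFill();
  for (int y = 0; y < cam.height; y += step) {
    beginShape();
    for (int x = 0; x < cam.width; x += step) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      float z = map(b, 0, 255, -100, 100);  // brighter pixels come toward the viewer
      vertex(x * 2, y * 2, z);              // scale the 320x240 feed up to the window
    }
    endShape();
  }
}
```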

This is an attempt at contour detection using a basic algorithm.




Basic contour detection with Processing.org from mistercrunch on Vimeo.


I simply look at each pixel, compare it with its 8 surrounding neighbors, and sum the grayscale differences. A threshold is then applied to decide whether or not to display the pixel.
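In Processing terms, that's just a couple of nested loops over the pixel array. Here's a rough reconstruction; the threshold value is made up and would need tuning:

```java
import processing.video.*;

// Neighbor-difference edge detector: keep a pixel only if it differs enough from its 8 neighbors.
Capture cam;
float threshold = 120;  // made-up value, tune to taste

void setup() {
  size(320, 240);
  background(0);
  cam = new Capture(this, width, height);
  cam.start();  // not needed on older Processing versions
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  loadPixels();
  for (int y = 1; y < cam.height - 1; y++) {
    for (int x = 1; x < cam.width - 1; x++) {
      float here = brightness(cam.pixels[y * cam.width + x]);
      float diff = 0;
      // sum the grayscale difference with the 8 surrounding neighbors
      for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
          if (dx == 0 && dy == 0) continue;
          diff += abs(here - brightness(cam.pixels[(y + dy) * cam.width + (x + dx)]));
        }
      }
      // display the pixel only if it sits on enough of an edge
      pixels[y * width + x] = diff > threshold ? color(255) : color(0);
    }
  }
  updatePixels();
}
```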



I was expecting something a bit less noisy and more cartoonish but I love the results. Somehow it looks as though there's some image compression happening somewhere between my webcam and my pixel array.



This should work pretty well for the video installation I'm working on (projector + infrared webcam + infrared spotlight). See it at BaconWood in May, and hopefully at Burning Man 2010.

Shit, I just realized that I never posted anything on this blog of mine about the Blink Buggy: our Burning Man art car. I designed and assembled most of the electronics and wrote the software for this project. This is definitely one of the most interesting things I've ever worked on.

Here are a few of the best videos and pictures that have managed somehow to make their way back from Burning Man:






Pics:









Most of the pics and vids haven't been very satisfying because cameras just don't behave the way human eyes do. Anyhow, these are as good as it gets. Thanks to Reid Spice for the pics!

I've been playing around lately with my webcam and processing.org, the 2D/3D Java-based IDE.

Here's what that looks like:


Webcam Distortion from mistercrunch on Vimeo.


What you see is basically a frame-differencing algorithm with a palette-animated color applied to it. I compare the current frame with the previous one, pixel by pixel, and apply a color based on a cycling saturated hue. The alpha of that color depends on how different the pixel is from the previous frame. The trails left behind the movement come from applying a semi-transparent layer on every frame, which makes previous drawings slowly disappear.
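The whole effect boils down to a fairly small loop. Here's a rough sketch of the idea, with made-up thresholds rather than the exact code in the video:

```java
import processing.video.*;

// Frame differencing with a cycling hue; a semi-transparent layer each frame leaves trails.
Capture cam;
int[] previous;

void setup() {
  size(640, 480);
  colorMode(HSB, 360, 100, 100, 100);
  cam = new Capture(this, width, height);
  cam.start();  // not needed on older Processing versions
  previous = new int[width * height];
  background(0);
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();

  // semi-transparent black layer: previous drawings slowly disappear
  noStroke();
  fill(0, 0, 0, 8);
  rect(0, 0, width, height);

  float hue = (frameCount * 2) % 360;  // cycling, fully saturated color
  for (int i = 0; i < cam.pixels.length; i++) {
    // brightness() follows the current colorMode, so values here range 0-100
    float diff = abs(brightness(cam.pixels[i]) - brightness(previous[i]));
    if (diff > 10) {
      // alpha scales with how different the pixel is from the previous frame
      stroke(hue, 100, 100, map(diff, 10, 100, 10, 100));
      point(i % width, i / width);
    }
    previous[i] = cam.pixels[i];
  }
}
```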

I am planning a nighttime outdoor installation for BaconWood, an outdoor festival in California happening a few weeks from now. I will use a video projector, a white screen, an infrared webcam and an infrared spotlight to create a trippy "mirror".

I modified my webcam for near-infrared perception by removing the little infrared filter located in front of the sensor and sliding in a small cutout of black film negative to filter out visible light. I'm looking into buying an infrared spotlight of some sort to stealthily light the subjects, who will most likely be dancing people or passersby.

I discovered Processing (www.processing.org) a few months back and it's been my new favorite toy.

"Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production."

I've been working on a few projects involving image processing, real-time video distortion, fluid dynamics and a whole lot of basic vector math.

Here's a small screen capture of the latest thing I've been working on: