Archives for January 2010

January 9, 2010 - Comments Off on Flight Risk

Flight Risk

“Feedback” is the result of experimentation with beat detection, dynamic threshold levels, and 3D video feedback loops. The audio track (“Flight Risk”) was created by Bit Shifter using a Nintendo Game Boy and Nanoloop v1.1. Everything else was created with Processing (processing.org). A feedback loop is created by mapping the contents of the screen onto a three-dimensional space. The beat triggers control the position and rotation of the camera, as well as the field of view and the style of feedback.
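The post doesn't show the beat-detection code, but one common dynamic-threshold scheme fires a trigger when the current frame's signal energy jumps above a multiple of the recent average. Here's a minimal sketch of that idea in plain Java (the class name, history size, and sensitivity constant are my own, not from the original project):

```java
import java.util.Arrays;

// Dynamic-threshold beat detection: a "beat" fires when the
// instantaneous energy exceeds a multiple of the recent average.
public class BeatDetector {
    private final double[] history;   // ring buffer of recent energies
    private int index = 0;
    private final double sensitivity; // e.g. 1.4 = beat at 140% of average

    public BeatDetector(int historySize, double sensitivity) {
        this.history = new double[historySize];
        this.sensitivity = sensitivity;
    }

    // Feed one frame's signal energy; returns true when a beat is detected.
    public boolean onFrame(double energy) {
        double avg = Arrays.stream(history).average().orElse(0.0);
        boolean beat = avg > 0 && energy > avg * sensitivity;
        history[index] = energy;               // update the ring buffer
        index = (index + 1) % history.length;
        return beat;
    }

    public static void main(String[] args) {
        BeatDetector d = new BeatDetector(8, 1.4);
        for (int i = 0; i < 8; i++) d.onFrame(1.0); // quiet baseline
        System.out.println(d.onFrame(5.0));          // sudden spike -> true
    }
}
```

Because the threshold tracks the average, the same detector works on quiet and loud passages without retuning, which is what makes it "dynamic."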

Audio by Bit Shifter ("Flight Risk"). Mastered by Chris Burke at Bong & Dern.
Originally released on Various Artists: Sonic Acts XI • The Anthology Of Computer Art DVD, 2005.

I came up with the concept for the video when I started playing around with feedback loops in Processing. First, I would construct a large box that the whole scene exists inside of. Inside that box, I would place some objects. Finally, I would take an image of the entire scene and use it as a texture map for the sides of the big box. I would repeat this process every frame, and depending on the rotation of the box and the size of the objects, you can get some really nice glimpses of infinity. Below are a couple of sample images I created for a presentation I gave at OFFF in Barcelona.

January 7, 2010 - Comments Off on Turbulence

Turbulence

This is another offshoot from the Magnetosphere project. Here, I am locking the particle positions onto the surface of a sphere. They are still free to move about and push on each other, but none can leave the surface of that sphere.
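One simple way to implement the constraint described above is to run the particle physics as usual, then renormalize each position vector back to the sphere's radius at the end of every step. A small sketch of that math in plain Java (the function and names are illustrative, not the project's actual code):

```java
// Sphere constraint: after each physics step, each particle's position
// is rescaled to the sphere's radius, so particles can push each other
// around but can never leave the surface.
public class SphereConstraint {
    static double[] constrain(double[] p, double radius) {
        double len = Math.sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
        double s = radius / len;           // scale back onto the surface
        return new double[]{p[0]*s, p[1]*s, p[2]*s};
    }

    public static void main(String[] args) {
        // A particle at distance 5 gets pulled onto a radius-2 sphere.
        double[] p = constrain(new double[]{3, 0, 4}, 2.0);
        System.out.printf("%.2f %.2f %.2f%n", p[0], p[1], p[2]); // 1.20 0.00 1.60
    }
}
```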

I found pretty quickly that the particles, even when being dynamically modified with audio, really just want to find their place and stay put. There isn't much activity. I remedied this problem by adding a rotation velocity.
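A rotation velocity can be as simple as spinning the whole particle field around one axis by a small angle each frame, so motion never fully dies out even after the particles settle. A toy version of that rotation step (axis choice and names are my own assumptions):

```java
// Spin the field: rotate a position around the Y axis by a small
// per-frame angle so settled particles keep drifting.
public class SpinField {
    static double[] rotateY(double[] p, double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[]{ c*p[0] + s*p[2], p[1], -s*p[0] + c*p[2] };
    }

    public static void main(String[] args) {
        double[] p = rotateY(new double[]{1, 0, 0}, Math.PI / 2); // quarter turn
        System.out.printf("%.1f %.1f %.1f%n", p[0], p[1], p[2]);  // 0.0 0.0 -1.0
    }
}
```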

Audio by Ganga (Autumn).

Audio by Goldfrapp (Deer Stop).

And finally, Turbulence with flares. I saved this one for last because it was the most successful of the video renders. Audio by Thievery Corporation (Le Monde).

January 7, 2010 - Comments Off on Flocking, for Nervo

Flocking, for Nervo

More often than not, the pipeline for a project gets overshadowed by the end result. The process of getting from point A to B is often overlooked because point B is just sooo damn sexy. But it doesn’t have to be that way. No really, it doesn’t.

In the summer of 2007, Nando Costa approached the Barbarian Group and asked if we would help out with a project his new motion graphics company, Nervo, was working on for Fox Movies Japan. He showed us the boards, and instantly we knew we wanted to collaborate because his vision for this project was quite beautiful and surreal.

What he needed from us was videos of flocking behavior. He had seen the previous experiments I had done with Perlin noise flocking and thought it would work well for this project. All he wanted was a couple of videos of flocking using a 3D crow (or is it a raven?) he would provide. Simple enough. But given the tight deadline, the usual cycle of rendering, posting, waiting for approval or changes, implementing the changes, then re-rendering and re-posting didn’t make sense for this project, so we decided to deliver them an application instead.

Using Processing, we started playing around with the flocking behavior to make it more customizable. The original version of the flocking experiment had very few controls, and they had to be hard-coded; there was no run-time adjustment. This was the first thing addressed. Several new parameters were added, including population density, gravity, drag, collision avoidance, flight range, camera position and tracking, and a few toggles such as tethering strings, a floor plane, and bezier curves. Once the parameters were tweaked to the user’s liking, they needed only to hit the spacebar and an image sequence of PNGs would start saving to the hard drive.
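To give a feel for two of those parameters, here is a heavily simplified flocking step in plain Java showing collision avoidance (boids steer apart when too close) and drag (velocity is damped each frame). The constants and structure are illustrative only; the real app had many more forces and controls:

```java
// Minimal 2D flocking step: separation force plus drag, Euler-integrated.
public class Boids {
    static final double AVOID_RADIUS = 2.0, AVOID_FORCE = 0.5, DRAG = 0.9;

    static void step(double[][] pos, double[][] vel) {
        int n = pos.length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dx = pos[i][0] - pos[j][0];
                double dy = pos[i][1] - pos[j][1];
                double d = Math.sqrt(dx*dx + dy*dy);
                if (d > 0 && d < AVOID_RADIUS) {   // too close: steer away
                    vel[i][0] += AVOID_FORCE * dx / d;
                    vel[i][1] += AVOID_FORCE * dy / d;
                }
            }
            vel[i][0] *= DRAG;                      // drag damps velocity
            vel[i][1] *= DRAG;
            pos[i][0] += vel[i][0];
            pos[i][1] += vel[i][1];
        }
    }

    public static void main(String[] args) {
        double[][] pos = {{0, 0}, {1, 0}};          // two boids, 1 unit apart
        double[][] vel = {{0, 0}, {0, 0}};
        step(pos, vel);
        System.out.println(pos[1][0] > pos[0][0]);  // they moved apart: true
    }
}
```

Exposing constants like these as run-time sliders and key toggles is what made it practical to hand the client an application instead of renders.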

Once he had the exported image sequence, it was pretty easy for him to bring it into a post-processing application and work his magic. See one of the final spots below, or view them on the Nervo.tv website. The birds were used in the top three spots.

January 6, 2010 - Comments Off on Magnetosphere, part 3

Magnetosphere, part 3

I eventually got tired of additive blending. The next step was to get a handle on how to control object depth and introduce some occlusion. This next image shows my first attempts at using depth testing and depth masking to put some solid objects in the scene. Technically, it is a failure. I had no idea what I was doing, and my process essentially involved adding a line of code, then removing it and adding another, over and over until I got something with some promise.

Over the next few months, I continued to refine this code. I created a couple dozen renders of varying success. The following videos are some of the more successful from the bunch.

Audio by Ganga (Autumn).

My apologies, I do not remember who created this audio. I will credit it properly as soon as it comes to me.

Audio by Royksopp ("In Space" off Melody A.M.).

January 6, 2010 - Comments Off on Magnetosphere, part 2

Magnetosphere, part 2

Here is where things start to get interesting. Additive Blending entered my life. Simply put, additive blending makes things seem to glow. Instead of blocking pixels that lie behind an image, it brightens them. This gives you a glowy look that I will be the first to admit I overused and abused.
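The per-pixel math behind that glow is tiny: instead of the usual "over" compositing, the source color is simply added to whatever is already on screen and clamped, so overlapping sprites brighten toward white. A toy per-channel version (just to show the equation, not how the GPU does it):

```java
// Additive blending, one 8-bit channel at a time:
// result = min(destination + source, 255). Overlaps always get brighter.
public class AdditiveBlend {
    static int blend(int dst, int src) {
        return Math.min(255, dst + src);
    }

    public static void main(String[] args) {
        System.out.println(blend(200, 100)); // clamps at 255 (pure white)
        System.out.println(blend(40, 40));   // 80: two dim sprites stack up
    }
}
```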

I definitely got carried away with the additive blending. It took a while for me to understand that moderation is useful when applying new effects. But for now, make it glow! Like the stars in the heavens above! Everything should glow!

The other reason I relied so heavily on additive blending is that it works as an easy way to get around the depth-sorting issue that plagues semi-transparent image rendering. In OpenGL, if you want to draw a bunch of transparent PNGs, you need to draw them from back to front. When you have a dynamic camera system and a bunch of objects, it can be tricky or time-consuming to figure out which objects are behind and which are in front. If you use additive blending, you don't need to worry about depth sorting, because addition is order-independent: the pixels sum to the same result no matter which sprite is drawn first.
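For reference, the sort that additive blending lets you skip looks something like this: order the sprites by decreasing squared distance from the camera, so the farthest draws first. A small sketch in plain Java (names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

// Back-to-front depth sort for alpha-blended sprites: farthest from the
// camera draws first. Squared distance avoids a needless sqrt per sprite.
public class DepthSort {
    static void sortBackToFront(double[][] sprites, double[] cam) {
        Arrays.sort(sprites, Comparator.comparingDouble(
            (double[] p) -> {
                double dx = p[0] - cam[0], dy = p[1] - cam[1], dz = p[2] - cam[2];
                return -(dx*dx + dy*dy + dz*dz);  // negate: farthest first
            }));
    }

    public static void main(String[] args) {
        double[][] sprites = {{0, 0, 1}, {0, 0, 5}, {0, 0, 3}};
        sortBackToFront(sprites, new double[]{0, 0, 0});
        // Now ordered by z: 5, 3, 1 (farthest first)
        System.out.println(sprites[0][2] + " " + sprites[2][2]);
    }
}
```

With a dynamic camera this sort has to rerun every frame, which is exactly the cost additive blending avoids.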

Audio by Helios (Sons of Light and Darkness). Note: this piece builds really slowly, so don't be put off by the very, very dark intro.