
Legs on the Wall – Open Source Residency Blog 4

The accelerometer and wearable work progressed smoothly, without significant hitches, and Donna’s wearable was completed in the final week. A few issues are worth mentioning here. She found the conductive thread and ribbon somewhat problematic: great care was needed to prevent threads touching and shorting out (which reset the boards), and the thread’s high resistance also posed problems. This led Donna to replace most of the conductive thread with conventional insulated hookup wire. Whilst she was initially concerned about its appearance, the hookup wire actually looked great. Take a look.

Donna – showing off finished wearable prototype with insulated wire

A second issue was the total power draw of the sensors and Xbee wireless system. Donna found that the single AA battery could not supply enough power to run the system properly. With insufficient time to source and fit a higher capacity battery system before the public showing of our work, she decided to power the system from a USB cable and make the necessary modifications to the battery system following the residency.

To everyone’s surprise, when we connected the wearable to our test audio and video patches for the first time, the results exceeded our expectations. The wearable was highly responsive and ‘playable’, and Donna reported a fine sense of control over the media. She felt immersed in the media, and the interface was highly intuitive, providing a rich set of possibilities for gestural control. Here is a video showing the initial hookup of the wearable interface to a test audio patch, with the triaxial accelerometer and flex sensor mapped to audio filter parameters.

Macrophonics – first wearable trial
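
For a rough sense of what the LilyPad side of this setup is doing, here is a minimal sketch of the sensor read loop. The pin assignments (A0–A3 for the three accelerometer axes and the flex sensor) and the frame format are assumptions for illustration, not Donna’s actual wiring; the LilyPad XBee simply forwards whatever appears on the serial port to the receiver at the computer.

// Hypothetical LilyPad sketch: read a triaxial accelerometer (X, Y, Z)
// and a flex sensor, and stream the readings out over serial.
// The XBee radio transparently carries the serial stream to the computer.
const int X_PIN = A0;     // accelerometer X axis (assumed wiring)
const int Y_PIN = A1;     // accelerometer Y axis
const int Z_PIN = A2;     // accelerometer Z axis
const int FLEX_PIN = A3;  // flex sensor in a voltage divider

void setup() {
  Serial.begin(9600);     // must match the XBee/receiver baud rate
}

void loop() {
  int x = analogRead(X_PIN);      // raw 0-1023 readings
  int y = analogRead(Y_PIN);
  int z = analogRead(Z_PIN);
  int flex = analogRead(FLEX_PIN);

  // One comma-separated frame per line, for easy parsing on the computer.
  Serial.print(x);    Serial.print(",");
  Serial.print(y);    Serial.print(",");
  Serial.print(z);    Serial.print(",");
  Serial.println(flex);

  delay(20);                      // roughly 50 frames per second
}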

In the final week we experimented with a range of sensing techniques for picking up the position of performers on a stage. We tried both the cv.jit and Cyclops systems within MaxMSP, and both proved excellent. However, as video tracking systems they are inherently light dependent, so their output and behaviour are fundamentally affected by changing light states and conditions.

Cyclops patch detail
Cyclops grid and hotspot ‘zones’

All of this is manageable with precise light control and programming (or the use of infrared cameras). However, in the absence of more sophisticated lighting and camera resources, we decided to use ultrasonic range finders on the stage to locate the position of performers.

Ultrasonic rangefinder

Two of these devices were placed on either side of the stage, each outputting a stream of continuous controller data based on the performers’ proximity to it. The sensors were connected to Tim’s computer via an Arduino Uno, giving us a simple proximity sensing system on stage. The system was robust with respect to light, but suffered from occasional random noise/jitter that would last for a few seconds without obvious cause. This meant we had to apply heavy signal conditioning to the source data in Max/MSP to smooth out these ‘errors’, which in turn introduced a significant amount of latency. Given these constraints, we used these sensors to drive events which did not have critical timing dependencies and could ramp in and out more gradually.
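
For reference, the Arduino Uno side of this is very simple. The sketch below is an illustration only, assuming an analog-output rangefinder (such as a MaxBotix LV-style sensor) on pin A0, with a small moving average applied before sending; in our setup the heavier smoothing described above was done downstream in Max/MSP.

// Hypothetical Uno sketch: read an analog ultrasonic rangefinder and
// send a lightly smoothed value over serial to the computer.
const int RANGE_PIN = A0;   // analog output of the rangefinder (assumed)
const int WINDOW = 8;       // moving-average window, purely illustrative

int readings[WINDOW];
int idx = 0;
long total = 0;

void setup() {
  Serial.begin(9600);       // baud rate must match the receiving patch
  for (int i = 0; i < WINDOW; i++) readings[i] = 0;
}

void loop() {
  total -= readings[idx];                   // drop the oldest sample
  readings[idx] = analogRead(RANGE_PIN);    // raw 0-1023 reading
  total += readings[idx];
  idx = (idx + 1) % WINDOW;

  Serial.println(total / WINDOW);           // simple moving average
  delay(50);                                // about 20 readings per second
}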

Julian spent a number of days in the final week programming the relationships between the sensing systems and the video elements, working with ‘retro’ 70s scanline/raster style video synth processing and time domain manipulations of QuickTime movies. Julian’s computer also operated as a kind of ‘data central’: all incoming sensor data arrived at his machine and was mapped/directed out to the other computers from a central patch in MaxMSP+Jitter.

Detail of Jitter patch for Open Source residency

 

Legs on the Wall – Open Source Residency Blog 3

While Donna works on building the wearable top with sensors, Julian is working on making ‘musical scenes’. These scenes are modular, with a range of musical elements which can be triggered or manipulated in an improvisational manner. They are designed for non-musician physical theatre performers/acrobats. The wearable top will have a triaxial accelerometer (reading the X, Y and Z axes) mounted on the right arm, a flex sensor on the left elbow and some buttons. Julian has been using a Wii remote to simulate the accelerometer and buttons, so he can work on parameter mapping and prototype some a/v scenes while Donna builds the wearable interface. The Wii remote acts as a handheld device containing much of the functionality of the wearable (accelerometer and buttons).

Nintendo Wii Remote (Wiimote) puts out pitch, roll, yaw and z (g force) data

Julian is using the fantastic OSCulator software to take the Bluetooth Wiimote data and convert it to MIDI. He then streams the MIDI into a patch in MaxMSP, which allows him to condition, scale and route the data streams before they are sent to Ableton Live to control audio.
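
For readers who don’t use Max, the kind of conditioning and scaling the patch performs can be expressed in a few lines of ordinary C++. The parameter names and ranges below are hypothetical; this is just a sketch of the idea (clamp the incoming controller value, map it onto a target range, smooth it), not the actual patch logic.

// Illustration of conditioning/scaling an incoming 0-127 controller value
// before it is routed on to an audio parameter (e.g. a filter cutoff).
#include <algorithm>
#include <iostream>

struct SmoothedScaler {
  double outMin, outMax;   // target parameter range (hypothetical)
  double smoothing;        // 0 = no smoothing, closer to 1 = heavier smoothing
  double state = 0.0;

  double process(int cc) {
    cc = std::clamp(cc, 0, 127);                            // guard the input range
    double scaled = outMin + (outMax - outMin) * cc / 127.0; // linear scaling
    state = smoothing * state + (1.0 - smoothing) * scaled;  // one-pole smoothing
    return state;
  }
};

int main() {
  SmoothedScaler cutoff{200.0, 8000.0, 0.8};   // e.g. cutoff from 200 Hz to 8 kHz
  const int testValues[] = {0, 32, 64, 96, 127};
  for (int cc : testValues) {
    std::cout << "CC " << cc << " -> " << cutoff.process(cc) << " Hz\n";
  }
}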

The system is very robust, and today Julian had it working at a distance of over 15 metres. Let’s hope the wireless link from the LilyPad is as solid. Here is an example of the Wii remote being used to control audio. The accelerometer XYZ outputs control various audio filter processing parameters and volume changes, while the buttons are used to trigger audio events.

Macrophonics. Open Source. Wiimote test from Julian Knowles on Vimeo.

On the video side, Julian has been building a realtime video processing system in the Jitter environment.

The following example shows the Wii remote accelerometer XYZ parameters mapped to video processing. No audio processing is taking place here; Julian is just listening to some music while he programs and tests the video system. The Wiimote is driving a patch Julian has written in Jitter, with a QuickTime movie as input being processed in real time. Julian has also extended the Jitter patch to allow it to process a live camera input.

Macrophonics. Open source. Prototyping video processing with wii remote

Donna has been working away on the wearable interface that will contain the functionality of the above (plus more). She has been designing the layout of the sensors and working out how to connect everything within the given constraints of the LilyPad system. The conductive thread that can be used to sew the sensors in and connect to the main board has quite a high resistance, and so runs need to be kept short. Likewise the run between the battery board and the Lilypad/Xbee board needs to be short, so as to keep maximum current available. Runs of conductive thread cannot be crossed over or they will short out.

Donna – roughing out the sensors on a black sports top

We’re hoping to get the wearable interface completed in the next day or two so we can start to test it out with the modular musical materials. For the purposes of the showing, we’ll present three a/v ‘scenes’ in sequence, demonstrating different approaches to, and relationships between, gesture and media.

The first scene will be drone/video synthesis based (with the performers’ stage positions driving processing). This state will have a very strong correlation between the audio and video processing gestures and will allow for multiple performers moving in relation to the ultrasonic range finder sensors.

The second scene will involve the wearable interface, with direct/detailed gestural control of audio and video elements from a solo performer.

The third will involve a complex interplay of sensors and parameters. The wearable interface will perform time domain manipulation and transport control on QuickTime materials whilst driving filters and processors in the audio domain. The scene will also make use of physical objects sounding on stage, driven by the wearable interface. The interface data will be used to control signals flowing through physical objects (in this case cymbals and a snare drum), and audio/spatial relationships will unfold between the performer’s gestures, proximity to objects and sound behaviours. Tim is taking care of the actuator setup.

Wearable interface. Pressure sensors on the shoulder blades.

Ultimately we are aiming for a fully responsive media environment, in which the stage performer(s) drive both audio and video elements and feel immersed in the resulting mediascape.

Wearable interface. Pressure sensors on shoulder blades

Legs on the Wall – Open Source Residency Blog 2

We’re now well into our Macrophonics Open Source creative development project at Legs on the Wall in Sydney. There’s lots of experimentation going on at this stage, and we’ll start to resolve things down during the week, ahead of our stint in the theatre next week. We’ve opened the exploration out to sounding objects – driving audio signals into snare drums, cymbals and other resonant objects to set them off acoustically. The idea is that certain parts of the stage area contain assemblages of resonant objects and, through video and hardware sensing, performers will be able to activate the array of objects within an auditory ‘scene’ that we create. These auditory scenes will contain flexible sonic ‘modules’ – collections of sounds and musical motifs that can be recombined freely.

15″ loudspeaker driver coupled to a 15″ snare

At the moment, we’re experimenting to find the resonant frequency of each of the trial objects, and to get a sense of how they behave with different input signals. We’ve set up the input to the ‘actuators’ to run from discrete sends so that they can be blended/balanced with the signals from the main loudspeakers. From a control perspective, we will compare wearable sensors (using the LilyPad Arduino platform) located on the performer with video tracking from above the stage. The wearable sensors have the advantage of being robust with respect to different lighting states, but they need to be protected from damage by the physical theatre performer, which could prove challenging. The video tracking approach is robust from a physical point of view but highly light dependent – changing light conditions can shift the threshold settings of the tracking patches, making tracking less reliable as lighting states change.

Donna working with Lilypad Arduino. Seen here with small accelerometer sensor

The plan is to use the accelerometers on the wrists of the performers so that arm movements and rotations will output X, Y, Z co-ordinates. We’ll also be trying out light sensors, flex sensors and some heat sensors on the performers. The LilyPad is a great platform for wearable computing – the main issue is the small number of analog inputs, so we will need to investigate the potential to have multiple LilyPads sending data wirelessly (via the Xbee platform) to a single computer. If this poses an issue for us, we could mount a second LilyPad/Xbee setup and have it transmitting to a second receiver at a second computer. As we have not yet established an approach to data mapping and distribution to the media performers, this second model may actually be more ergonomic. We shall see.
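
If we do end up running more than one LilyPad/XBee pair into a single receiver, one simple approach (sketched below as an assumption, not something we’ve built yet) is to have each board prefix its readings with a board ID so the receiving patch can tell the streams apart.

// Hypothetical sketch for several LilyPads sharing one receiver:
// each board tags its sensor frame with a unique ID.
const int BOARD_ID = 1;                   // set to 2, 3, ... on the other boards
const int SENSOR_PINS[] = {A0, A1, A2};   // whatever sensors this board carries
const int NUM_SENSORS = 3;

void setup() {
  Serial.begin(9600);              // the XBee forwards this serial stream wirelessly
}

void loop() {
  Serial.print(BOARD_ID);          // frame starts with the board ID
  for (int i = 0; i < NUM_SENSORS; i++) {
    Serial.print(",");
    Serial.print(analogRead(SENSOR_PINS[i]));
  }
  Serial.println();                // newline terminates the frame
  delay(20);
}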

Lilypad Arduino with Lilypad Xbee (top left) at Donna’s workstation

In the shot above you can see the Lilypad Arduino connected to a Lilypad Xbee board which takes care of the wireless communication of sensor data to an Xbee receiver at the computer. The Lilypad Xbee is currently getting power from a USB cable, but will soon be powered by an AAA battery. Once all the sensors have been connected up and prototyped in this way, the whole thing will be sewn into a garment and the sensors will be connected to the board via conductive thread.
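
On the computer side, the Xbee receiver just delivers the same serial stream. In our case the parsing and mapping happens inside MaxMSP, but as a plain C++ illustration (reading hypothetical comma-separated frames from standard input, in an assumed x, y, z, flex order), the logic is roughly:

// Illustration only: parse comma-separated sensor frames such as "512,498,601,340".
// The real setup unpacks the serial stream inside a MaxMSP patch.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
  std::string line;
  while (std::getline(std::cin, line)) {        // one frame per line
    std::stringstream ss(line);
    std::string field;
    std::vector<int> values;
    while (std::getline(ss, field, ',')) {      // split on commas
      values.push_back(std::stoi(field));       // malformed frames would need guarding
    }
    if (values.size() == 4) {                   // assumed order: x, y, z, flex
      std::cout << "x=" << values[0] << " y=" << values[1]
                << " z=" << values[2] << " flex=" << values[3] << "\n";
    }
  }
}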

Julian Knowles and Tim Bruniges (positioning sonic actuators). Photo: Lucy Parakhina

 

Legs on the Wall – Open Source Residency Blog 1

The Macrophonics artist collective has just commenced an Open Source media technologies residency with Sydney-based physical theatre company Legs on the Wall at their Red Box development space. Background information on the project can be found here.

Macrophonics live – Brisbane Festival 2011

The residency sees the group examine a range of interface technologies and approaches to explore the nexus between theatrical performers and a four-member live media ensemble. The objective is to build a responsive performance environment that will allow the theatre performers to have direct gestural input into the media control system, defining new relationships between the performance ensemble, the media design elements and the media ensemble.

We’ve broadly structured our investigation across two approaches. The first is video tracking/computer vision techniques (using the cv.jit suite of objects for Jitter), in which the stage area is analysed and moving objects are tracked.

Jitter patch – cv.jit suite of ‘computer vision’ (video analysis) objects
Moving masses in the video frame are identified and then tracked

The second area of investigation is wearable garment interfaces that sense movement directly at each performer (accelerometers, pressure and flex sensors, light sensors etc.) and send data wirelessly to the media ensemble. We are using the LilyPad Arduino platform for this work. The LilyPad is a small, flat implementation of the Arduino platform that can be sewn into garments. Conductive ‘thread’ can then be sewn into the garment in ‘tracks’ to form the links between the device and the attached sensors.

LilyPad Arduino

The LilyPad can be run from a battery source and, with the addition of an XBee radio board, can transmit data wirelessly over an ad hoc network.

LilyPad Arduino sewn into a garment with conductive thread. Image credit: Rain Rabbit (flickr)
Donna’s LilyPad – ready for sewing