My interest in creating Journeys began when Carol Flax approached the Institute for Studies in the Arts to create a "video book". This book would be physical in that its pages could be turned and looked at, but also virtual in that it presented video. When the ISA funded the project, I volunteered to collaborate on it.
Carol's first visit involved discussions about the form the piece should take. We batted around ideas about how the book should be presented and where the video should play. Several ideas came out of that conversation: a lounge with a comfortable couch and chairs, the book on a table, or the book on a stand, with the video presented on the wall. We both wanted the video not to look like video in the way it was presented, and we mulled over some alternative presentation styles. I had some experience with video projected onto a flat horizontal surface, so I offered that idea to Carol, and we immediately adopted it as the best way to present the video.
Over the next few visits we refined the installation to have the "feel" of a library table with a book on it. This was chosen because the piece was going to be presented in a museum-type setting, and it seemed appropriate to the space. Video would play on and around the book so that the book and the video became closely integrated.
In subsequent visits I outlined a system to Carol that I felt could accomplish the artistic goals we had laid out. The system would be similar to technology used in the Intelligent Stage: an I-Cube to convert sensor signals into digital information, two laser disc players connected to small video crossfade devices, and audio played directly from the computer. All devices would be controlled directly from MAX through serial ports on an expansion card in a G3, or through a MIDI interface. A camera system, using software I wrote for the Intelligent Stage called "Eyes", would be used to detect the presence or absence of a person in front of the table.
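As a rough illustration of that layout, the sketch below uses Python as a stand-in for the MAX patch that actually drove the installation. The class names (LaserDiscPlayer, Crossfader, ICube, EyesCamera) and the single control loop are hypothetical illustrations of the device roles described above, not the real interfaces.

```python
# Minimal sketch of the device layout, assuming stubbed devices that
# simply report the commands they would receive from the control patch.

class LaserDiscPlayer:
    """One of the two laser disc players, driven over a serial port."""
    def __init__(self, name):
        self.name = name

    def search(self, frame):
        print(f"{self.name}: search to frame {frame}")

    def play(self):
        print(f"{self.name}: play")


class Crossfader:
    """Small video crossfade device sitting between the two players."""
    def fade_to(self, source, seconds):
        print(f"crossfader: fade to {source} over {seconds}s")


class ICube:
    """Digitizes the book's bend and touch sensors."""
    def read(self):
        # In the installation these values arrived over serial/MIDI.
        return {"bend": 0.0, "touch": False}


class EyesCamera:
    """Camera-based presence detection (the 'Eyes' software)."""
    def person_present(self):
        return True  # stubbed: pretend someone has stepped up to the table


if __name__ == "__main__":
    players = [LaserDiscPlayer("disc A"), LaserDiscPlayer("disc B")]
    fader = Crossfader()
    sensors = ICube()
    eyes = EyesCamera()

    # One control loop stands in for MAX routing between all the devices.
    if eyes.person_present() or sensors.read()["touch"]:
        players[0].search(1200)
        players[0].play()
        fader.fade_to("disc A", 2.0)
```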
Over the next six months, I created an integrated control system that allowed all the devices to coordinate visual and audio responses with sensor information from book manipulations and camera input. At the center of the system is a "global" cue list where audio and visuals are coordinated. Since Carol was unfamiliar with MAX, the system had to be written so that no MAX experience was necessary, and the "global" cue list mechanism provided that capability: Carol could write cues and hook them to sensors without doing any MAX programming.
Three controllers and two sensor systems are called from the "global" cue list: a laser disc controller (two were used), an audio sampler/mixer, and a video crossfade device, plus the Eyes system and the book's bend and touch sensors. Originally, lighting was to be controlled as well, but a low-cost solution was not found.
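The sketch below gives a flavor of how such a cue list might look if written out as data rather than as a MAX patch: each cue binds a sensor event to a set of controller actions, so cues can be added without touching the underlying program. The event names, controller names, and file names are invented for illustration.

```python
# Hedged sketch of a "global" cue list dispatching sensor events to controllers.

CUES = [
    # sensor event               actions: (controller, command, argument)
    {"when": "page_3_opened",  "do": [("disc_a", "search", 4500),
                                      ("crossfader", "fade_to", "disc_a"),
                                      ("sampler", "play", "ocean.aif")]},
    {"when": "person_arrives", "do": [("sampler", "play", "welcome.aif")]},
    {"when": "person_leaves",  "do": [("crossfader", "fade_to", "black")]},
]


def fire(event, controllers):
    """Look up cues matching the incoming sensor event and dispatch their actions."""
    for cue in CUES:
        if cue["when"] == event:
            for device, command, arg in cue["do"]:
                controllers[device](command, arg)


if __name__ == "__main__":
    # Stub controllers that just report what they would do.
    controllers = {
        name: (lambda n: lambda cmd, arg: print(f"{n}: {cmd} {arg}"))(name)
        for name in ("disc_a", "disc_b", "crossfader", "sampler")
    }
    fire("page_3_opened", controllers)
```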
While the system remains complex, it is simple relative to what it accomplishes. Cuts between video clips and audio are easy to accomplish and simple to program, yet providing smooth transitions at low cost remains a difficult problem. I feel this solution is a good compromise: low cost with beautiful transitions.
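For a sense of what a smooth transition means in control terms, here is a minimal, hypothetical sketch: instead of a hard cut, the controller steps an external crossfade device through intermediate mix levels. The step count, timing, and device interface are assumptions for illustration only.

```python
# Sketch of ramping an external A/B crossfade device instead of cutting.
import time


def crossfade(set_mix, seconds=2.0, steps=20):
    """Ramp the A/B mix from 0.0 (all A) to 1.0 (all B) over `seconds`."""
    for i in range(steps + 1):
        set_mix(i / steps)            # intermediate mix level, 0.0 .. 1.0
        time.sleep(seconds / steps)   # spread the ramp across the fade time


if __name__ == "__main__":
    # Stubbed device: just print the levels the crossfader would be sent.
    crossfade(lambda level: print(f"mix -> {level:.2f}"), seconds=0.2, steps=4)
```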
Overall, collaborating on this piece has allowed me to refine techniques I have been working on over the past few years, and it has provided me with experiences to carry into the next piece. I believe Journeys is a wonderful example of how art and technology can come together so that all of us can experience something new.
- - - - - - - -
Robb E. Lovell is an integrated media researcher and dancer. Robb is an independent software architect in the fields of astronomy, network applications, performance systems, and computer imaging. Robb was with the Institute for Studies in the Arts (ISA) at Arizona State University for eight years, where he collaborated with many artists on performance works and installations. While at the ISA he created the Intelligent Stage, an interactive performance space that allows performers to control various media of image, graphics, light, and sound through movement or speech. He has a Master of Science in Computer Science and is an accomplished systems engineer. He is currently working on his Ph.D. at the University of Surrey, U.K., exploring intelligent spaces that use sensing systems to manipulate electronic media. Robb is also a dancer/choreographer and has performed with Semaphor DanceWorks Inc., Dance Arizona Repertory Theater, RSVP Dance, and as a guest artist with Desert Dance Theater and a ludwig co.
http://www.intelligentstage.com