electroacoustic & algorithmic
delayed to rest: for guitar and computer
"Delayed to Rest" is a work for solo electric guitar and a computer running custom software written for this piece. As the guitarist performs, the computer takes the live sound of the guitar, heard through the center speaker, delays its output by a few beats, then plays it through the two side speakers. This produces an echoing effect. The performer uses a footswitch to tell the computer how many beats to delay the sound, and to control other delay-based processes. Every sound heard originates in real time from what the guitarist performs live; nothing is prerecorded, sampled, or synthesized. Additionally, nothing that the computer does is random; it uses specific delay times, rhythms, and signal levels throughout the piece according to the score.
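The beat-based delay described above can be sketched in a few lines. This is a minimal illustration, not the piece's actual software: it assumes the delay time is computed from a tempo in BPM and a beat count, then mixed back with the dry signal.

```python
# Minimal sketch of a beat-synced delay line (illustrative only; the
# actual piece runs custom real-time software). The delay length is
# derived from the tempo and the number of beats the footswitch selects.

def beat_delay(signal, beats, bpm, sample_rate=44100, mix=0.5):
    """Delay `signal` by `beats` at `bpm`, mixed with the dry signal."""
    delay_samples = int(round(beats * 60.0 / bpm * sample_rate))
    out = []
    for i, dry in enumerate(signal):
        # Samples earlier than the delay length have no echo yet.
        wet = signal[i - delay_samples] if i >= delay_samples else 0.0
        out.append(dry + mix * wet)
    return out
```

At 120 BPM and a two-beat setting, for example, the echo arrives one second after the dry sound.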
DOWNLOAD Standalone App
disappearing.god.trick: for solo voice
"This system is designed to take real-time user input through a microphone and process it in novel ways. The user needs only to open the program and begin speaking. The spoken voice is recorded and analyzed using an FFT, and the spectral data is converted into a 3D matrix using a technique by my former teacher, Luke Dubois. This software is based heavily on his model. The 3D matrix is then scanned to resynthesize the original spoken sample. The novelty of resynthesizing from an image is that visual effects can be applied which change its sonic properties. Over time, the original sample will change substantially--there are no other sound sources or effects introduced in this system: everything heard is derived from the original sample. In this piece, the user says "God" one time. It is then processed according to a deterministic processing score that the software follows; nothing is random. "
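The analyze-as-image, process, resynthesize round trip described above can be illustrated with a toy transform. This is a hedged sketch only: the real system uses an FFT and a 3D matrix, while this pure-Python DFT shows how an "image-style" effect on the spectrum (here, scaling brightness) changes the resynthesized sound.

```python
# Toy illustration of spectral analysis, a "visual" effect applied to
# the spectral data, and resynthesis. Not the piece's actual software;
# the darken() gain is an illustrative stand-in for an image effect.

import cmath

def dft(x):
    """Discrete Fourier transform of a real-valued list."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning real samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def darken(spectrum, gain):
    # Dimming the spectral "image" scales every partial's amplitude.
    return [bin_ * gain for bin_ in spectrum]

sample = [0.0, 1.0, 0.0, -1.0]            # a toy "spoken" fragment
resynth = idft(darken(dft(sample), 0.5))  # resynthesized at half level
```

Everything in the output is derived from the input sample, mirroring the constraint the program note describes.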
discourse: for Eb clarinet & computer
"Discourse is a piece for Clarinet in Eb and interactive music system. The original software, which was also written by the composer, takes the clarinet signal as input and processes the sound in real-time. All of the processing happens live, so there are no prerecorded sounds in the piece. In other words, the computer's performance, including its timbre, is data-driven by the clarinet's performance. The clarinet begins a conversation with the computer which responds by imitating the clarinet and expanding upon its ideas. These two voices continue their dialogue: formulating responses to the other's last statement, pausing to listen to each other, and even stuttering at times while trying to answer appropriately."
requires a microphone (and a clarinetist, of course)
nil: for guitar & computer
"nil is a composition for solo classical guitar and interactive music system. In this piece, the opening notes of the guitar performance are sampled by the computer and placed into a small buffer. From within the buffer, the audio sample is trimmed to remove any silence around the file. The audio sample is then manipulated to generate all of the computer sounds in the piece. As the performer continues to perform from a notated score, the computer begins manipulating qualities of the audio sample according to a score that it follows. The computer also processes the live input of the performer in a number of ways. All of the computer processing and sound generation for nil occurs in real-time and is driven by the performance of the guitarist; nothing is prerecorded."
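The buffer-trimming step described above, removing silence around the sampled opening notes, can be sketched as follows. This is an assumption-laden illustration, not the piece's code; the amplitude threshold is a made-up value.

```python
# Sketch of trimming leading and trailing silence from a sampled
# buffer (illustrative only; the threshold is a hypothetical value).

def trim_silence(buffer, threshold=0.01):
    """Return the buffer with near-silent samples stripped from both ends."""
    start = 0
    while start < len(buffer) and abs(buffer[start]) < threshold:
        start += 1
    end = len(buffer)
    while end > start and abs(buffer[end - 1]) < threshold:
        end -= 1
    return buffer[start:end]
```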
requires a microphone (and a guitarist, of course)
Performance Optimized Versions
Note: this version of the software has some tweaks/features primarily suitable only for live performance such as auto-recording.
squares: for control surface
"squares is an electro-acoustic work composed by V.J. Manzo. It was originally performed with the squares interactive music and live image processing system, also created by Manzo. The system uses the Korg padKontrol's 16 touch-sensitive pads to begin generating algorithmic composition processes when touched. The velocity of each pad sets the initial velocity of the process.

The system takes live video input from two sources. In the original performance, two cameras were used: one on the performer's hands, and one on his face. The number of pads currently held down determines the number of rows and columns used to display the matrix input from the first camera. Each of these squares contains a scaled-down image of the overall matrix. Each square is tinted to resemble the second input matrix, the second camera. Each pad is also assigned a color that, when pressed, causes the matrix color to change. The performance begins (fades in) with the touch of a single pad. If no pads are being touched, the video fades to black (this is subsequently how the performance ends).

The performer specifies an initial mode. The first knob at the top of the padKontrol controls the chord root and quality, which drive the algorithmic composition processes. The second knob controls the change to another mode related to the initial mode, sharing six of its seven pitch classes. The padKontrol's XY pad is used to play a monophonic melody in which the X value controls pitch classes of the specified mode, across a two-octave range. The Y value controls velocity; a position held closer to the top will yield a higher velocity value. When there is no position held on the XY pad, the velocity will equal zero. This, therefore, yields a note-off state when the XY pad is not in use. A sustain mode can be enabled by pressing the HOLD button on the controller. This will cause notes played on the XY pad to sustain until another XY coordinate replaces it. The velocity (Y) will only change when a new note is triggered.

This video was generated in real-time from within the software system and was not edited in any way."
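The XY-pad melody mapping described above can be sketched as a small function. This is a hypothetical illustration, not the system's actual code: the mode, root note, and normalized (x, y) coordinates are assumptions for the example.

```python
# Hypothetical sketch of the XY-pad mapping: X selects a scale degree
# of the current mode across two octaves, Y sets velocity, and no
# touch yields velocity 0 (a note-off). Names and ranges are
# illustrative, not taken from the squares system itself.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of an example mode

def xy_to_note(xy, mode=C_MAJOR, root=60):
    """Map a normalized (x, y) touch in [0, 1] to (MIDI note, velocity)."""
    if xy is None:                        # pad not touched: note off
        return (None, 0)
    x, y = xy
    degrees = len(mode) * 2               # two-octave range
    step = min(int(x * degrees), degrees - 1)
    octave, degree = divmod(step, len(mode))
    note = root + 12 * octave + mode[degree]
    velocity = int(round(y * 127))        # nearer the top = louder
    return (note, velocity)
```

Releasing the pad (here, passing `None`) returns velocity zero, matching the note-off behavior the program note describes.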