
Advanced Real-Time Sound and Data Processing with FTM&Co

Diemo Schwarz, Hans Leeuw, Matthew Ostrowski

Saturday, May 26, noon to 6pm
Cost: $150 regular; $135 members and students (with ID)
Pay with PayPal or Credit Card

Location:
Harvestworks – www.harvestworks.org
596 Broadway, #602 | New York, NY 10012 | Phone: 212-431-1130
Subway: F/M/D/B Broadway/Lafayette, R Prince, 6 Bleecker

The FTM&Co extensions for Max/MSP make it possible to prototype applications that work with sound, music, or gesture data in both the time and frequency domains, with more complexity and flexibility than other tools allow. This workshop provides an introduction to the free Ircam libraries FTM, MnM, Gabor, and CataRT for interactive real-time musical and artistic applications.

The basic idea of FTM is to extend the data types exchanged between the objects in a Max/MSP patch with rich data structures such as matrices, sequences, dictionaries, breakpoint functions, and others useful for processing music, sound, and motion-capture data. It also comprises visualization and editor components, operators (expressions and externals) on these data structures, and file import/export for SDIF, audio, MIDI, and text formats.
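FTM itself is used graphically inside Max/MSP patches, so there is no Python API to show; purely as a conceptual illustration, the following sketch mimics what one of these structures, a breakpoint function, does: piecewise-linear interpolation over (time, value) pairs. All names here are hypothetical, not FTM's.

    # Conceptual sketch only: FTM's data structures live inside Max/MSP
    # patches and are not a Python API. This hypothetical class mimics
    # what an FTM breakpoint function does.
    from bisect import bisect_right

    class BreakpointFunction:
        def __init__(self, points):
            # points: iterable of (time, value) pairs
            self.points = sorted(points)
            self.times = [t for t, _ in self.points]

        def __call__(self, t):
            # Clamp to the first/last value outside the defined range
            if t <= self.times[0]:
                return self.points[0][1]
            if t >= self.times[-1]:
                return self.points[-1][1]
            # Linear interpolation between the surrounding breakpoints
            i = bisect_right(self.times, t)
            (t0, v0), (t1, v1) = self.points[i - 1], self.points[i]
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    # A simple amplitude envelope: rise to 1.0 at 0.1 s, fall to 0.0 at 2.0 s
    env = BreakpointFunction([(0.0, 0.0), (0.1, 1.0), (2.0, 0.0)])
    print(env(0.05))   # 0.5, halfway up the attack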

Through example applications in sound analysis, transformation, and synthesis; gesture processing, recognition, and following; and manipulation of musical scores, we will look at the parts and packages of FTM that provide arbitrary-rate signal processing (Gabor), matrix operations, statistics, and machine learning (MnM), corpus-based concatenative synthesis (CataRT), sound description data exchange (SDIF), and Jitter support. The presented concepts will be tried out and consolidated through programming exercises on real-time musical applications and free experimentation.
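To give a flavor of the unit-selection idea behind corpus-based concatenative synthesis, here is a deliberately simplified sketch. It is not CataRT's implementation; the descriptors, weights, and brute-force search below are assumptions chosen for illustration (real systems use many more descriptors and index far larger corpora much more efficiently).

    # Conceptual sketch of unit selection, the idea behind corpus-based
    # concatenative synthesis: pick the sound unit whose descriptors are
    # closest to a target in a weighted descriptor space.
    import math

    # A toy corpus: sound units described by pitch (MIDI) and loudness (dB)
    corpus = [
        {"file": "flute.aif",  "onset": 0.00, "pitch": 60.0, "loudness": -12.0},
        {"file": "flute.aif",  "onset": 0.25, "pitch": 62.1, "loudness": -15.0},
        {"file": "violin.aif", "onset": 1.10, "pitch": 59.8, "loudness": -9.0},
    ]

    def select_unit(target, weights):
        """Return the unit closest to the target in weighted descriptor space."""
        def distance(unit):
            return math.sqrt(sum(w * (unit[d] - target[d]) ** 2
                                 for d, w in weights.items()))
        return min(corpus, key=distance)

    # Request a unit near MIDI pitch 60 at -10 dB, weighting pitch most heavily
    best = select_unit({"pitch": 60.0, "loudness": -10.0},
                       {"pitch": 1.0, "loudness": 0.1})
    print(best["file"], best["onset"])   # the closest matching unit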

FTM&Co is developed by Norbert Schnell and the Real-Time Music Interaction Team (IMTR, http://imtr.ircam.fr/) at Ircam and is freely available at http://ftm.ircam.fr.

Prerequisites: A working knowledge of Max/MSP is required. Knowledge of a programming or scripting language is a big plus for getting the most out of FTM&Co, and familiarity with object-oriented programming is even better. Users of Matlab will feel right at home with MnM.

Bios

Diemo Schwarz

Diemo Schwarz is a researcher–developer in real-time applications of computers to music, with the aim of improving musical interaction, notably through sound analysis–synthesis and interactive corpus-based concatenative synthesis. At Ircam (Institut de Recherche et Coordination Acoustique–Musique) in Paris since 1997, he has combined his studies of computer science and computational linguistics at the University of Stuttgart, Germany, with his interest in music as an active performer and musician. He holds a PhD in computer science applied to music from the University of Paris, awarded in 2004 for the development of a new method of concatenative musical sound synthesis by unit selection from a large database. This work continues in the CataRT application for real-time interactive corpus-based concatenative synthesis within Ircam’s Real-Time Music Interaction (IMTR) team.

http://diemo.concatenative.net/
http://imtr.ircam.fr/imtr/Diemo_Schwarz

Hans Leeuw

The Electrumpet is a hyperinstrument designed by Hans Leeuw in 2008. Expression in electronic music has always been an issue, and the subject of both artistic and scientific debate. Hans Leeuw approaches it through a blend of acoustic and digital sound handling in which the two are truly integrated: not only does all the sound originate from the trumpet, but the handling and mapping of the sensors on the instrument are designed for genuine integration rather than the mere addition of buttons and sliders.

The instrument’s original and most important purpose is to play in cooperative contexts. It is used in Hans Leeuw’s own big band Tetzepi, in looser settings within the Amsterdam jazz scene, and in his duo with Diemo Schwarz of IRCAM. Many appearances in combined talk-and-demonstration contexts (e.g. radio interviews, the Electric Spring festival, the IRCAM forum, diverse academic invitations) made the step to a solo program a logical one. This solo program is now being developed on a grant from the Dutch cultural ministry.

In his compositions Hans Leeuw draws on a very diverse musical background: his pieces and patches show influences from jazz, world music, microtonality, and psychoacoustics. The review from the Dutch cultural committee states: “The jury is especially positive about the professional quality of the works.” Although Hans works very meticulously, his pieces are not rigid; improvisation is an important aspect of his work. The ability to tell a story on the instrument always matters more than instrumental or compositional craftsmanship, though the latter should not be ignored.

Matthew Ostrowski

A New York City native, Matthew Ostrowski has worked as a composer, performer, and installation artist, exploring work with music, multimedia, video, and theater. Using digital tools and formalist techniques to engage with quotidian materials (sonic, physical, and cultural), Ostrowski’s work explores the liminal space between the virtual and phenomenological worlds. His work ranges from live electronic performance to installations incorporating video, multichannel sound, and computer-controlled objects. He is a freelance developer of interactive technology for artists, and teaches at New York University and Ramapo College of New Jersey.

Educated at Oberlin College and the Institute of Sonology in The Hague, Ostrowski has collaborated with a large number of artists in the US and abroad, including David Behrman, John Butcher, Diamanda Galás, Nicolas Collins, Anne LaBerge, the Flying Karamazov Brothers, and many others. He was composer-in-residence for the MacArthur Award-winning choreographer Elizabeth Streb, and has designed interactive technologies for performing and fine artists ranging from Laurie Anderson to Martha Rosler. He regularly performs in the duo KRK with Prague-based contrabassist George Cremaschi, and with Andrea Parkins in the duo Elective Affinities.

Ostrowski’s productions have been seen or performed on six continents, at venues and festivals including the Wien Modern Festival, Transmediale and MaerzMusik in Berlin, the Kraków Audio Art Festival, Sonic Acts in Amsterdam, PS1 and The Kitchen in New York, the Rencontres Internationales video festival in Madrid, and Yokohama’s dis_locate Festival. He has received numerous awards, including a NYFA Fellowship for Computer Arts.