iSGTW - International Science Grid This Week
17 November 2010

Feature - The 1970s in the 21st century: synthesized music returns (via parallel processing)


This Arp 2500 analog modular synthesizer from 1971 had hundreds of inputs and outputs for control of the synthesis processes.
Image courtesy of discretesynthesizers.com.

Curtis Roads is a professor, vice chair, and graduate advisor in media arts and technology, with a joint appointment in music, at the University of California, Santa Barbara. He was also the editor of the Computer Music Journal (published by MIT Press) and co-founded the International Computer Music Association. He is often a featured speaker at conferences such as Supercomputing.

 

Music is an interactive, concurrent process. A note or a chord sounds, then is replaced, gradually or sharply, softly or powerfully, by the next one. For electronically produced or enhanced music, real-time technical advances are critical to continued progress and exploration.

I fondly remember learning my first parallel programming language, Burroughs Extended Algol, in the 1970s. As a researcher in media arts and technology with a focus on music, I wrote programs that spawned thousands of parallel processes for computerized musical composition.

After this, the sequential computing paradigm began to dominate. This seemed like a step backwards: we had to write sequential loops for algorithms in which the exact order of events was irrelevant. Even so, as microprocessors became faster, we were able to overcome many real-time hurdles, including sound synthesis, concert hall reverberation, analysis of sound waves, pitch estimation, and granulation, which divides sound into many short snippets so that playback can be shortened or prolonged with no change in pitch or quality.
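Granulation is easy to illustrate. Here is a minimal sketch in pure Python (the function name and parameters are my own invention, not drawn from any of the systems mentioned): cut the signal into short, Hann-windowed grains at one spacing, then overlap-add the grains at a wider or narrower spacing. Each grain still plays at the original rate, so duration changes while pitch does not.

```python
import math

def granulate(signal, grain_size, stretch):
    """Time-stretch a signal by granulation: slice it into short
    overlapping grains, then overlap-add the grains with scaled
    spacing. Pitch is unchanged because each grain is replayed
    at its original sample rate."""
    hop_in = grain_size // 2                 # analysis hop: 50% grain overlap
    hop_out = int(hop_in * stretch)          # synthesis hop: wider = longer output
    # Hann window tapers grain edges to avoid clicks at grain boundaries.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_size - 1))
              for n in range(grain_size)]
    out = [0.0] * (int(len(signal) * stretch) + grain_size)
    pos_in, pos_out = 0, 0
    while pos_in + grain_size <= len(signal):
        for n in range(grain_size):          # overlap-add one windowed grain
            out[pos_out + n] += signal[pos_in + n] * window[n]
        pos_in += hop_in
        pos_out += hop_out
    return out
```

With `stretch=2.0` the output lasts roughly twice as long as the input; with `stretch=0.5` it is compressed. Note that every grain is computed independently of every other, a point that matters later.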

More recently, my colleagues, graduate students, and I have set our sights on solving problems in areas where parallel machines could have a major impact. One of these is matrix modulation for the control of sound synthesis. In modulation, a control signal changes the output of a sound-generating component so as to give it a more lifelike quality.

Screenshot of EmissionControl, written in SuperCollider and C++. The left side of the interface lets a user control the source and amount of modulation of the parameters on the right side.

Image courtesy of Curtis Roads.  

The massively parallel modular analog synthesizers of the 1970s combined signals from multiple components into a common audio output. The Arp (see image above) and other analog synthesizers implemented a matrix modulation control scheme.

The idea of matrix modulation is that any component that emits an output signal can be programmed to control, or modulate, any other component that accepts an input signal. This provides a flexible framework for control of synthesis processes, and enables automatic control of a variety of parameters while a human musician controls other parameters manually.
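A minimal sketch of the idea, with invented names: a routing matrix holds one weight per (parameter, source) pair, and on every tick each synthesis parameter is offset by the weighted sum of all source signals.

```python
def apply_modulation(matrix, sources, base_params):
    """One tick of matrix modulation. Any source signal can be routed,
    with a weight, to any synthesis parameter:
      matrix[p][s] = amount by which source s modulates parameter p.
    Returns the modulated value of every parameter."""
    return [base + sum(w * src for w, src in zip(row, sources))
            for base, row in zip(base_params, matrix)]
```

For example, routing source 0 to parameter 0 with weight 2 and source 1 to parameter 1 with weight 4 leaves the other pairings at zero. The sketch also hints at the cost: each tick requires on the order of parameters × sources multiply-adds, so a dense matrix evaluated at audio rate can easily dominate the processor budget.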

Inspired by these specialized analog computers, my student David Thall and I developed a software synthesizer called EmissionControl that implements matrix modulation to synthesize granulated sounds. While granular synthesis can be a computationally intensive task on its own, we found that the matrix modulation subsystem was actually consuming about 80% of the processor cycles.

EmissionControl has only 17 parameters to control, and it is easy to imagine a more complicated synthesis process on the scale of the Arp 2500 with many more parameters. As much as this calls out for a multiprocessor solution, it would require partitioning the algorithm on the fly into pieces that can run independently in order to avoid data dependencies — an interesting challenge.

The vast majority of today's musical software does not take advantage of multi-core processors; this type of partitioning, also called multithreading, therefore requires additional manual labor and is prone to human error. Systems that could automatically multithread would be a boon.
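The grain-level independence described above can be sketched with a thread pool (names and structure invented for illustration; a real audio engine faces much stricter real-time constraints, and CPython threads do not truly parallelize pure-Python arithmetic, so this shows the partitioning pattern rather than a production design):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def render_grain(freq, dur, sample_rate=44100):
    """Synthesize one Hann-windowed sine grain. Grains share no state,
    so each is an independent unit of work with no data dependencies."""
    n = int(dur * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) *
            (0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def render_cloud(grain_specs, workers=4):
    """Partition a cloud of grains across a pool of workers.
    Because no grain depends on another, the map divides cleanly,
    and results come back in the order of grain_specs."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda spec: render_grain(*spec), grain_specs))
```

The hard part in practice is not this embarrassingly parallel case but the modulated one, where a grain's parameters may depend on signals computed elsewhere in the patch, which is exactly the on-the-fly partitioning problem noted above.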

 

Curtis Roads. This article is adapted from his presentation "The Hungry Music Monster." He recently gave performances in Zurich, Switzerland, and Boston, Massachusetts.
