When synthesizers like the Yamaha DX7 became consumer products, the possibilities of music changed forever, making available a wealth of new, often totally unfamiliar sounds even to musicians who’d never before had a reason to think past the electric guitar. But if the people at Project Magenta keep doing what they’re doing, they could soon bring about a wave of even more revolutionary music-making devices. That team of Google researchers, writes the New York Times’ Cade Metz, is teaching machines to create not only their own music but many other forms of art, working toward not just the day “when a machine can instantly build a new Beatles song” but the development of tools that allow artists “to create in entirely new ways.”
Using neural networks, “complex mathematical systems that allow machines to learn specific behavior by analyzing vast amounts of data” (the kind that generated all those disturbing “DeepDream” images a while back), Magenta’s researchers “are crossbreeding sounds from very different instruments — say, a bassoon and a clavichord — creating instruments capable of producing sounds no one has ever heard.”
You can give one of the results of these experiments a test drive yourself with NSynth, described by its creators as “a research project that trained a neural network on over 300,000 instrument sounds.” Think of NSynth as a synthesizer powered by AI.
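For the technically curious, the “crossbreeding” Metz describes boils down to interpolation in a learned embedding space: each instrument sound is compressed by an encoder network into a compact vector, two such vectors are blended, and a decoder network synthesizes audio from the blend. Here is a minimal Python sketch of that idea; the encode and decode functions are hypothetical stand-ins for NSynth’s trained WaveNet autoencoder, not its actual API.

```python
import numpy as np

# Hypothetical stand-ins for the trained encoder/decoder networks.
# In the real NSynth model these are the two halves of a WaveNet
# autoencoder; here they just mark where the model would plug in.
def encode(audio: np.ndarray) -> np.ndarray:
    """Map raw audio samples to a compact latent embedding."""
    raise NotImplementedError("stand-in for the trained encoder network")

def decode(embedding: np.ndarray) -> np.ndarray:
    """Reconstruct audio samples from a latent embedding."""
    raise NotImplementedError("stand-in for the trained decoder network")

def crossbreed(audio_a: np.ndarray, audio_b: np.ndarray,
               mix: float = 0.5) -> np.ndarray:
    """Blend two instrument sounds in latent space rather than in audio.

    A plain 50/50 mix of waveforms just layers the two sounds on top
    of each other; mixing their learned embeddings and decoding the
    result is what yields a genuinely new, in-between timbre.
    """
    z_a = encode(audio_a)                  # e.g. a bassoon note
    z_b = encode(audio_b)                  # e.g. a clavichord note
    z_mix = (1.0 - mix) * z_a + mix * z_b  # linear interpolation
    return decode(z_mix)                   # synthesize the hybrid

# Usage, once a real trained model stands in for the stubs:
# hybrid_note = crossbreed(bassoon_note, clavichord_note, mix=0.5)
```

Sliding the mix parameter from 0 to 1 would sweep smoothly from one instrument to the other, which is exactly the kind of control the browser demo exposes.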
Fire it up, and you can mash up and play your own sonic hybrids of guitar and sitar, piccolo and pan flute, hammer dulcimer and dog. In the video at the top of the post, you can hear “the first tangible product of Google’s Magenta program,” a short melody created by an artificial-intelligence system designed to make music based on inferences drawn from all the music it has “heard.” Below that, we have another piece of AI-generated music, this one a polyphonic piece trained on Bach chorales and performed with the sounds of NSynth.
If you’d like to see in a bit more depth how the creation of never-before-heard instruments works, have a look at the demonstration just above of the NSynth interface for Ableton Live, one of the most DJ-beloved pieces of audio-performance software around. Hearing all this in action brings to mind the moral of a story Brian Eno has often told about the DX7, from which only he and a few other producers got innovative results, having actually learned how to program it: as much as the prospect of AI-powered music technology may astound, the music created with it will only sound as good as the skills and adventurousness of the musicians at the controls, at least for now.
Related Content:
Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on the book The Stateless City: a Walk through 21st-Century Los Angeles, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.