What would pop music sound like now if the musicians of the 27 club had lived into maturity? Can we know where Amy Winehouse would have gone, musically, if she had taken another path? What if Hendrix’s influence over guitar heroics (and less obvious styles) came not only from his sixties playing but from an unimaginable late-career cosmic blues? Whether artificial intelligence can ever give questions like these real flesh and blood, so to speak, remains very much undecided.
Of course, it may not be for us to decide. “The charts of 2046,” Mark Beaumont predicts at NME, “will be full of 12G code-pop songs, baffling to the human brain, written by banks of composerbots purely for the Spotify algorithm to recommend to its colonies of ÆPhone listening farms.” Seems as likely as any other future music scenario at this point. In the meantime, we still get to judge the successes, such as they are, of AI songwriters on human merits.
The Beatles-esque “Daddy’s Car,” the most notable computer-generated tribute song to date, was “composed by AI… capable of learning to mimic a band’s style from its entire database of songs.” The program produced a competent pastiche that nonetheless sounds like “cold computer psychedelia — eerie stuff.” What do we, as humans, make of Lost Tapes of the 27 Club, a compilation of songs composed in the style of musicians who infamously perished by suicide or overdose at the tender age of 27?
The “tapes” include four tracks designed to sound like lost songs from Hendrix, Winehouse, Nirvana, and the Doors. Highlighting a handful of artists who left us too soon in order to address “music’s mental health crisis,” the project used Magenta, the same Google AI as “Daddy’s Car,” to analyze the artists’ repertoires, as Rolling Stone explains:
For the Lost Tapes project, Magenta analyzed the artists’ songs as MIDI files, which works similarly to a player-piano scroll by translating pitch and rhythm into a digital code that can be fed through a synthesizer to recreate a song. After examining each artist’s note choices, rhythmic quirks, and preferences for harmony in the MIDI file, the computer creates new music that the staff could pore over to pick the best moments.
There is significant human input, such as the curation of the 20 or 30 songs fed to the computer, each broken down into the separate parts of its arrangement. Things did not always go smoothly. Kurt Cobain’s “loose and aggressive guitar playing gave Magenta some trouble,” writes Engadget, “with the AI mostly outputting a wall of distortion instead of something akin to his signature melodies.”
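For readers curious what note-level modeling of MIDI data can look like, here is a minimal, purely illustrative sketch in Python. It is not the Magenta pipeline the project used; it only stands in for the general idea of reading pitches out of MIDI files and learning which notes tend to follow which. The mido library, the filenames, and the simple transition-table approach are all assumptions made for the example.

```python
import random
from collections import defaultdict

import mido  # third-party MIDI parsing library (pip install mido); an assumption for this sketch


def extract_pitches(path):
    """Pull the note-on pitches from a MIDI file, in the order they appear."""
    pitches = []
    for track in mido.MidiFile(path).tracks:
        for msg in track:
            # A note_on message with velocity > 0 marks the start of a sounded note.
            if msg.type == "note_on" and msg.velocity > 0:
                pitches.append(msg.note)
    return pitches


def build_transitions(paths):
    """Count which pitch tends to follow which across a small corpus of songs."""
    transitions = defaultdict(list)
    for path in paths:
        pitches = extract_pitches(path)
        for current, nxt in zip(pitches, pitches[1:]):
            transitions[current].append(nxt)
    return transitions


def generate_melody(transitions, start_pitch, length=32):
    """Random-walk through the learned transitions to sketch a new melody."""
    melody = [start_pitch]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break  # dead end: no observed follow-up for this pitch
        melody.append(random.choice(options))
    return melody


if __name__ == "__main__":
    # Hypothetical filenames standing in for the curated songs.
    corpus = ["song_01.mid", "song_02.mid", "song_03.mid"]
    transitions = build_transitions(corpus)
    print(generate_melody(transitions, start_pitch=60))  # 60 = middle C
```

Magenta’s actual models are neural networks trained on far richer representations of rhythm and harmony; a transition table like this only hints at the approach of learning an artist’s note choices from data.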
Judge the end results for yourself in “Drowned by the Sun,” above. The music for all four songs was synthesized from MIDI files. “An artificial neural network was then used to generate the lyrics,” Eddie Fu writes at Consequence of Sound, “while the vocals were recorded by Eric Hogan, frontman of an Atlanta Nirvana tribute band.” Other songs feature different sound-alike vocalists (more or less). In no way does the project claim that MIDI-generated computer files can replace actual musicians.
They’re affectionate tributes, made by players without hearts, but they don’t really tell us anything about what, say, Jim Morrison would have done if he hadn’t died at 27. Yet the cause is a noble one: a rejection of the romantic idea at the heart of the “27 Club” narrative — that mental illness, substance abuse, etc. should be glamorized in any way. “Lost Tapes of the 27 Club is the work of Over the Bridge,” notes Fu, “a Toronto organization that helps members of the music industry struggling with mental illness.” Learn more about the project here and about Over the Bridge’s programs here.
Related Content:
Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness