A Brief History Of is our series digging into the backstory behind something in the news right now.
A new wave of AI mimicry has artists and record labels worried about what the latest advances in music-generating technology could mean for copyrights and their businesses.
But it’s not the first time that experts have wondered about the implications of computer capabilities for human musicians.
An AI-generated song in the style of Drake and The Weeknd went viral earlier this year, raising questions about ownership in music and what the technology could mean for the future of the art form. More recently, Recording Academy CEO Harvey Mason Jr. asserted that “only human creators are eligible to be submitted for consideration for, nominated for, or win a Grammy Award.”
While AI’s ability to imitate human art has reached a new inflection point, questions about technology’s role in music production are as old as the field of computer science itself. We took a look back at some previous moments in AI music history.
Turing’s tunes: The history of computers and music traces all the way back to computers’ earliest days. In one of the first instances of computer-generated music, a BBC recording from 1951 captured simple songs, including “Baa Baa Black Sheep,” played by computer pioneer Alan Turing’s Mark II machine, which sprawled across an entire ground floor of a building, according to the British Library.
Across the pond, Bell Labs was becoming a hotspot for early computer music research, eventually drawing prominent composers, including John Cage and James Tenney, to experiment with the technology.
Computer music on TV: In 1965, computer scientist Ray Kurzweil, then 17, appeared on a CBS game show called I’ve Got a Secret to play a piano piece he said was composed by a computer he built and programmed. As demonstrated in the clip, the computer was a desk-sized piece of equipment, created from a hodgepodge of parts and attached to a clacking typewriter with string.
Byte-ing Bach’s style: In the spirit of IBM’s man-versus-machine challenges, researchers tasked a computer with writing music in the style of Johann Sebastian Bach in the mid-1990s. The results were competent enough that an audience thought the performed piece was the real deal, according to a 1997 New York Times article.
David Cope, the composer behind the computer program, dubbed Experiments in Musical Intelligence, or EMI, even released an album called Classical Music Composed By Computer: Experiments in Musical Intelligence, which can be streamed on Spotify.
“I find myself baffled and troubled by EMI,” cognitive scientist Douglas Hofstadter told the New York Times in 1997. “The only comfort I could take at this point comes from realizing that EMI doesn’t generate style on its own. It depends on mimicking prior composers. But that is still not all that much comfort. To what extent is music composed of ‘riffs,’ as jazz people say?”
“If that’s mostly the case, then it would mean that, to my absolute devastation, music is much less than I ever thought it was,” he added.
The deep learning revolution: The AI boom around deep learning began to yield a new generation of algorithmic models for music production in the mid-2010s. Google’s WaveNet could mimic human voices from an audio clip. Grimes tapped a Google neural network tool called NSynth for a song on her 2020 album Miss_Anthropocene, and YACHT experimented with another Google AI tool that “blends musical loops and scores,” according to Consequence of Sound.
In 2020, OpenAI released the music-generation model Jukebox, which can produce entirely synthetic music clips in the style of different artists or genres. And in an early conflict between an established artist and an AI pretender, Jay-Z-founded Roc Nation filed copyright strikes against YouTube videos that featured a Tacotron-generated likeness of Jay-Z’s voice without his consent in 2020.
Tools like these set the scene for the current flood of AI-generated music online, which is usually created with voice cloning, large language models, and neural networks trained on the melodies and styles of existing music. It’s now up to artists, record labels, streaming platforms, and industry groups to determine how to handle these newfound capabilities.