Pop Star Algorithms: Why AI Will Soon Make Better Music Than Humans
In February 2020, the digital media agency Space150 used machine-learning programs to create the excellently titled “Jack Park Canny Dope Man,” a banger of a hip-hop song in the vein of one of the genre’s biggest names, Travis Scott.
The neural networks used to craft the tune were trained on Scott’s entire catalog, and the resulting beat and melody do nothing to betray their artificial origins. The song lacks the awkward, clunking quality of a machine approximating human output, something we so often see in language translation software, for example. No—it sounds surprisingly good, a song a Travis Scott fan wouldn’t think twice about if it randomly showed up on their Spotify or SoundCloud weekly playlist.
Not until they put a close ear to the lyrics, that is, which you can find over at the song’s Genius lyric page. Intentionally or not, the AI-generated words function as a piece of satirical commentary on Scott’s lyrical aesthetic, and include such masterful lines as,
“We gettin' brothers, I ain't talkin' 'bout my place
You got the scream, mamacita, I can space (Straight up)
I'm the best park stance special space.”
Space150 fed Scott’s lyrics to a machine-learning text generator for two weeks, and the results surprised the company’s executive creative director, Ned Lampert. “It came up with things we would never come up with,” said Lampert in an interview with Adweek after the song was released. “The bot kept talking about food.”
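Space150 hasn’t published the details of its text generator, and modern systems use neural networks far more capable than anything sketched here. Still, the basic idea of learning which words tend to follow which in a lyric corpus, then sampling new lines, can be illustrated with a toy word-level Markov chain (everything below is a hypothetical sketch, not Space150’s method):

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each run of `order` words to the words seen following it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Walk the chain, sampling an observed continuation at each step."""
    rng = random.Random(seed)
    order = len(next(iter(model)))
    out = list(rng.choice(list(model.keys())))
    while len(out) < length:
        followers = model.get(tuple(out[-order:]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy "corpus"; a real system would train on thousands of lines
corpus = "I'm the best park stance special space you got the scream"
print(generate(build_model(corpus), length=8, seed=42))
```

A chain this small can only remix its input, which is partly why toy generators produce exactly the kind of surreal, almost-coherent lines quoted above.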
While the song’s couplets are fantastically mad at times, many of the lyrical flourishes are definitely on-brand. Just like the rest of the song’s elements, that stylistic accuracy is an impressive example of what AI can do, especially in an artistic medium that many feel is a sacred bastion of humanity.
Why AI in music makes us uncomfortable
Space150 maintains the whole thing was something of an experiment, but it’s worth noting just how much of an understatement that is. Far from being a weightless, cultural one-off, the song is yet another example of AI forcing us to look into the uncanny mirror of our own programming.
It’s a view we’d better get used to. As AI based on deep learning algorithms begins to encroach on cultural strongholds like the world of music, it brings with it a sense of unease.
Sure, we’re happy to let AI silently work its magic in the background of the search engines we use and the websites we browse, but we generally prefer that such technology keep away from the sensibilities of what we feel makes us unique. We witness programs like Deep Blue defeat Garry Kasparov in a high-stakes chess match and start to wonder, ‘Can human genius really be reduced to some fast-functioning lines of code?’ That unease is only heightened when it comes to the realm of art and music.
In her book, Artificial Intelligence: A Guide for Thinking Humans, Melanie Mitchell writes about how her mentor, the well-known cognitive scientist and AI researcher Douglas Hofstadter, once expressed this fear to her.
“If such minds of infinite subtlety and complexity and emotional depth [as Bach and Chopin] could be trivialized by a small chip,” he lamented, “it would destroy my sense of what humanity is about.”
Hofstadter was referring to Experiments in Musical Intelligence, a program developed by the composer David Cope in the 1990s to act as a kind of assistant to his own compositional process. EMI was built to capture the overall syntax of a composer’s style and generate new pieces in that style.
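EMI’s actual machinery (recombinant analysis of a composer’s works) is far richer than any toy, but one of its core ideas, finding recurring melodic patterns (“signatures”) shared across a composer’s pieces, can be sketched in a few lines. The phrases below are hypothetical pitch sequences, not real Chopin:

```python
def find_signatures(piece_a, piece_b, n=3):
    """Return melodic n-grams (as tuples of MIDI pitches) common to both
    pieces: a crude stand-in for EMI's 'signatures', the recurring
    patterns that mark a composer's personal style."""
    ngrams = lambda p: {tuple(p[i:i + n]) for i in range(len(p) - n + 1)}
    return ngrams(piece_a) & ngrams(piece_b)

# Two hypothetical phrases as MIDI pitch numbers (60 = middle C)
phrase1 = [60, 62, 64, 65, 64, 62, 60]
phrase2 = [67, 65, 64, 62, 60, 62, 64]
print(sorted(find_signatures(phrase1, phrase2)))
```

Once such shared patterns are identified, a system like EMI can recombine them into new pieces that still sound recognizably like their source composer.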
Mitchell also tells the tale of how Hofstadter once had the opportunity to play one of EMI’s piano compositions to an audience of music theoreticians alongside an obscure, genuine composition of Chopin’s and had the crowd guess which was which. Most of them mistook EMI’s piece for the real Chopin.
How AI is shaking up the music industry
We would all do well to face such a moment of unease, and the sooner, the better. AI is getting good at making music—so good, in fact, that the technology is starting to sign with major music labels, a trend that’s unlikely to do anything but become more popular in the near future.
"Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence."
In 2018, Endel, a company that developed an app to create unique ambient soundscapes for its users based on their heart rate, circadian rhythm, and even the surrounding atmospheric conditions, announced in a press release that it was signing with Warner Music Group to release 20 algorithm-driven albums revolving around the themes of sleep, relaxation, and focus.
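Endel’s algorithm is proprietary, but the general approach it describes, mapping listener state to parameters of a generative system, is easy to illustrate. Every threshold and value in this sketch is invented for illustration and has nothing to do with Endel’s actual mappings:

```python
def ambient_params(heart_rate_bpm, hour_of_day, weather="clear"):
    """Map listener state to generative-music parameters.
    Purely illustrative: every value here is invented, not Endel's."""
    # Keep the tempo gentle: track the pulse, but clamp it to a calm range
    tempo = max(50.0, min(90.0, heart_rate_bpm * 0.8))
    # Late-night hours get a lower, darker register
    base_pitch = 36 if (hour_of_day >= 22 or hour_of_day < 6) else 48
    # Sparser textures in snow, denser ones in rain
    density = {"clear": 0.5, "rain": 0.8, "snow": 0.3}.get(weather, 0.5)
    return {"tempo": tempo, "base_pitch": base_pitch, "layer_density": density}

print(ambient_params(heart_rate_bpm=72, hour_of_day=23, weather="rain"))
```

The point of such a design is that the same small generative engine can yield an effectively endless, personalized stream, which is exactly what makes it attractive to a label selling sleep and focus albums.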
If you’re an aspiring musician bothered by the fact that a machine has beaten you to the record label punch, don’t feel too hard done by. Music streaming services have found a large and profitable listener base in genres that lie somewhere between ambient and active listening. The YouTube phenomenon “lo-fi hip hop radio - beats to study/relax to” is a famous example of this (funnily enough, in the middle of writing this article, YouTube suggested I listen to an eight-hour album of “Calm Ambient Music for Stress Relief”—I’m not creeped out, you’re creeped out).
This brings up a potentially worrying issue. Under the reasonable assumption that AI will continue to get better at making music, will labels begin to favor such low-cost, low-labor algorithms over people trying to make a living off their art? Streaming services like Spotify have long been at the receiving end of plenty of criticism over how they treat artists on their platform, and this criticism has ramped up in recent years.
Last July, Music Ally released a controversial interview with Spotify CEO Daniel Ek in which he claimed that it’s simply not good enough anymore for artists to release albums every three to four years. “The artists today that are making it realize that it’s about creating a continuous engagement with their fans,” Ek said.
It’s possible that the development of AI-backed programs that can churn out songs quickly and cheaply could further incentivize such treatment.
As The Verge pointed out when it ran a story on the signing last year, Endel asserts it’s not competing with artists, as it doesn’t create music in the traditional sense. However, the company’s collaboration with Glaceau Smartwater on a project called Smartbeats pairs its algorithm with artists like Toro y Moi, Washed Out, Nosaj Thing, Madeline Kenney, and Empress Of, well-known musicians who might beg to differ with the suggestion that their collaborations don’t constitute music.
Who’s your favorite algorithm?
Beyond such ambient projects and collaborations are what we can classify as outright AI artists, machine learning programs that are taking the concept behind Space150’s Travis Bott project to another level.
In 2019, Ash Koosha (Ashkan Kooshanejad), a London-based electronic musician, alongside Negar Shaghaghi and Isabella Winthrop, founded Auxuman, a company that creates AI-based entertainment personas and licenses them for the performance and music industries. The company's most well-known creations are five digital pop musicians named Yona, Hexe, Zoya, Mony, and Gemini, each of which has its own genre style as well as an avatar that looks like it would be perfectly at home in a horror film version of The Sims.
As Bloomberg reported last year, Koosha’s main motivation for the project was curiosity about just how complete a piece of music could be created using only a computer. The result is music that is as bizarre as it is captivating.
Under the Auxuman alias, the AI pop personas make up a musical cadre that has already released two albums, both of which can be heard over at the group’s Bandcamp page, and they're nothing if not compelling. The songs and their production alternate between sounding like an even-more-eerie-than-usual Björk on a bad LSD trip and a pitch-shifted, chopped-up version of The Weeknd who forgot he was allowed to have fun.
"Is our creativity in fact more algorithmic and rule-based than we might want to acknowledge?"
In short, they’re amazing. The hypothermic “Strange Times,” for example, sprinkles bits of piano over ominous but beautiful ambient chords, augmented both by Yona’s icy, digital vocals and disturbingly persistent sounds of a gun cocking and loading. Compared to Travis Bott’s hall of lyrical mirrors, Yona’s words feel like a semi-coherent rumination on the modern musical age:
“I pretend I never left
You repeat every word I say
Would you be the one to stay
In such a strange, strange time?”
In an interview with Digital Trends in October 2019, Koosha explained that Auxuman’s lyrics come from machine-learning models that are trained on poems, articles, and online conversations connected to human-chosen song themes.
The future of music and the evolution of creativity
Koosha also believes that it’s computers that will help people find new and previously unheard-of musical sounds and styles, a prospect that many find both exciting and contentious.
Whether or not AI might make “better” or “worse” music than humans is a tough question to answer in a field that is by definition subjective. Still, because music is seen as a quintessentially creative enterprise, the discussion is already underway. Will AI ever be able to comfortably inhabit or even surpass human creativity?
Some, like the philosopher Sean Dorrance Kelly, firmly reject the notion. Writing in the MIT Technology Review in 2019, Kelly says, “Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.” You can imagine Douglas Hofstadter hugging the man for so unequivocally standing up for human artistic exceptionalism.
"Take an algorithm that plays the blues and combine it with the music of Boulez and you will end up with a strange hybrid composition that might just create a new sound world."
Others similarly believe that there’s something in the artistic process that makes human art distinct and therefore irreplaceable.
The pioneering neurologist Geoffrey Jefferson, whose thoughts on AI influenced even the great Alan Turing, carved out a space for such human originality during a speech at the Royal College of Surgeons of England all the way back in 1949: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know it had written it.”
Regardless of whether or not the technology mirrors how humans make art and music, AI could still leave humanity in the creative dust, as it seems poised to do in just about every field in which it’s being developed and employed.
Marcus du Sautoy, an Oxford mathematician and author of The Creativity Code: Art and Innovation in the Age of AI, offers a grounded perspective on the matter. “At some level, all these expressions of creativity are the products of neuronal and chemical activity [...] So, is our creativity in fact more algorithmic and rule-based than we might want to acknowledge?”
As in all things AI, the discussion boils down to whether or not the traits we associate with being human can be explained via such materialistic mechanisms, and on that front, the jury is still out. That being said, even AI’s biggest skeptics must now admit that at least parts of those traits can be.
Du Sautoy’s writing strikes a helpful balance between the technology’s potential for both creative virtue and experimental nonsense. To him, machines are neither a hopelessly lost cause incapable of creating art of value nor guaranteed to erase that inimitable essence that leads people to sing a song, pick up an instrument, or step into a recording booth.
“Take an algorithm that plays the blues and combine it with the music of Boulez and you will end up with a strange hybrid composition that might just create a new sound world,” he writes. “Of course, it could also be a dismal cacophony.”
As the light from AI’s rising sun starts to warm up the human horizon, it looks like we won’t have to wait long to find out which of those potentials is likely to stick around.