AI and music, part one

How AI is changing the way we consume music

2019 Aug 01     

In this first part of a two-part investigation into music and artificial intelligence, Harold Heath looks at consumer apps for playing and making music

In this first of a two-part investigation into artificial intelligence (AI) and music technology, we take a brief look at the history of AI in music, review the current AI consumer landscape and take a tentative guess as to what the future may hold. 

A brief history…
The very first piece of music composed by a computer was the somewhat disquieting Illiac Suite, created at the University of Illinois in the US in 1957. Around the same time, composer Iannis Xenakis was pioneering algorithmic composition, and his work was also influential in the later development of electronic music, and of granular synthesis in particular. But the best-known artist in the development of AI in music is Brian Eno.

Eno’s pioneering 1975 album Discreet Music consisted of two melodic cycles of different durations overlapping arbitrarily – what he called an ‘automatic system’ of music generation, in which the composer gives up some of the compositional decision-making. This theme continued through much of his ‘ambient’ productions of the 70s and then his self-generating music in the 90s. Inspired by the way screensavers constantly generated fresh variations of the same visual motif, Eno developed the idea of compositional seeds that could create music in the same way.
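
The principle is easy to see in miniature. The sketch below is a loose illustration in Python – the notes are invented, not Eno's actual material or tape set-up – layering two melodic cycles of co-prime lengths, so the combined pattern only repeats once both loops realign.

```python
from math import lcm

# Two melodic cycles of different, co-prime lengths (in beats).
# The notes here are invented for illustration.
loop_a = ["C", "E", "G", "B", "D"]   # five-beat cycle
loop_b = ["F", "A", "C"]             # three-beat cycle

# The layered pattern only repeats when both cycles realign.
period = lcm(len(loop_a), len(loop_b))   # 15 beats in this case

for beat in range(period):
    note_a = loop_a[beat % len(loop_a)]
    note_b = loop_b[beat % len(loop_b)]
    print(f"beat {beat:2d}: {note_a} + {note_b}")
```

With real tape loops the cycle lengths aren't neat integer beats, which pushes the realignment point out indefinitely – exactly the always-changing quality Eno was after.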

Eno was interested in the fact that until the end of the 19th century, when recording technology arrived, every musical performance was entirely unique, whereas recording froze the music in time. His generative music would always be new and changing – always unique.

Today, we live in an era when machines can easily generate unique music or produce viable original compositions based on a few human instructions, and when smart devices, machine learning, predictive analysis and music are all coming together in ways that are changing how consumers interact with music. 

The future is already here
Mau5trap artist, producer and sound designer Dom Kane is currently working on an AI mastering system, and points out that AI is already a part of consumer life.

"Technically we use AI every day in the curation of our Netflix and Spotify suggestions," he says, "which if we're totally honest has given the music industry a new breathe of life, in more ways than one."

Algorithms on streaming platforms are already subtly shaping the tastes of many listeners by suggesting songs and artists the platform ‘thinks’ they may also like. Even if you don’t pay attention to what your preferred platform suggests, it will still be paying extremely close attention to your listening choices.

AIs can now compose instrumental music that the average listener cannot tell apart from human-composed music, and plenty of game soundtracks and themes, as well as cinema, TV, radio and online adverts, are either AI-composed or AI-assisted. Indeed, in 2017 an AI named Aiva, trained by learning music theory and analysing thousands of recordings, became the first "virtual artist" to be registered with an author's collecting society.

Robot music
In the consumer market, Ampermusic.com offers collaborative cloud music creation for non-musicians. Ampermusic's system uses AI and a library of live-recorded instrument samples to create royalty-free music – a market that is likely to grow as platforms like YouTube are required to enforce music copyright on the clips uploaded by users. Jukedeck, an artificially intelligent music composer, is another source of royalty-free music online, and one highly rated by producer and music tutor Danny Lewis (Ministry of Sound, Ruff Trx, Plastik People):

"A friend who works on 3D rendering of architecture had created a model of a spa that was going to be constructed, and needed some music to go with a promotional brochure," says Lewis. "We put some parameters into Jukedeck on my phone, something along the lines of relaxed, ambient and what came back was perfect – he listened to it on my phone and purchased the license. Incredible."

Humtap is an app that creates a song from a melody you hum, in the style of your choice. It allows people with no knowledge of chords, melody, programming or production to produce a demo in seconds. Similarly, Alysia is an AI app that helps songwriters write lyrics and melodies. You choose a genre, topics and a mood, which guides the AI to generate lyrics that you can then use as they are or change. It also offers melodies to choose from, which you can alter to your own taste, before you record your song through your phone – or get the onboard auto-singer to sing it for you.

Amazing or a bit spooky? 
Aside from resulting in lots of amateur singing videos that only friends and family will watch, where else might this lead? Where things get really interesting (or indeed scary, depending on your point of view) is when different AIs start to interact.

For example, your music listening platform constantly records your every music-listening decision – which tracks you skip, when you listen, for how long and so on – and ‘learns’ your musical preferences. Combine this with an AI that can compose music based on a set of preferences, and you have the potential for unique, personalised music to suit your mood, created on request in seconds by an app on your phone.
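
As a toy illustration of the first half of that loop – emphatically not any platform's real algorithm – a listener profile can be built from nothing more than which tracks get finished and which get skipped, then handed to a composer AI as a set of target attributes:

```python
from collections import defaultdict

# Toy preference model: tag -> learned weight. The tags and the
# learning rate are invented for illustration.
profile = defaultdict(float)

def update_profile(track_tags, listened_fraction):
    """Nudge each tag up when a track is finished, down when skipped."""
    reward = listened_fraction - 0.5      # playing past halfway counts as positive
    for tag in track_tags:
        profile[tag] += 0.1 * reward      # small learning rate

# Simulated listening history: (track tags, fraction of track actually played)
history = [
    (["ambient", "slow"], 1.0),    # played to the end
    (["techno", "fast"], 0.1),     # skipped almost immediately
    (["ambient", "piano"], 0.9),
]
for tags, fraction in history:
    update_profile(tags, fraction)

# The strongest preferences become the brief for a generative composer.
top_tags = sorted(profile, key=profile.get, reverse=True)[:3]
print("request to composer AI:", top_tags)   # ['ambient', 'slow', 'piano']
```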

The fact that this scenario excites some and repels others is a clear sign that AI technology is still very much finding its place within music, and that this remains a highly contested area.

Crystal ball time
Although it’s notoriously difficult to predict the future, technologies like Humtap, Amper, Alysia and Jukedeck all open up music creation to those with little or no musical knowledge. Consumer digital music technology has continually widened access, and AI is simply continuing that trend; its proponents see it as a potentially democratising technology. Over the last few hundred years in the West, much of the composition and creation process became the preserve of experts and specialists. By taking care of certain aspects of composition and production, AI could enable a more democratic model in which non-musicians can participate and produce their own music.

Creating music electronically has generally meant using software based on the old recording-studio model, with its associated skills of EQ, compression, de-essing and so on. AI could automate many aspects of the recording and composition process. Instead of needing to know how a compressor works, the user simply asks the AI to tidy up the vocal and is presented with high-quality compression/EQ/limiting/finalising templates, based on the analysis of millions of pieces of music, with more traditional tweakability available to those who want it.
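
To make that concrete, here is a deliberately basic static compressor in Python. The signal maths is standard, but the preset dictionary merely stands in for the hypothetical AI analysis, which is the genuinely hard part:

```python
import numpy as np

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Very basic static compressor: attenuate anything above the threshold.

    A real compressor also has attack and release times; they are
    omitted here to keep the example short.
    """
    eps = 1e-10                                 # avoid log(0) on silent samples
    level_db = 20 * np.log10(np.abs(samples) + eps)
    overshoot = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -overshoot * (1.0 - 1.0 / ratio)  # reduce overshoot by the ratio
    return samples * 10 ** (gain_db / 20)

# Hypothetical: settings the assistant might choose after analysing the vocal.
ai_preset = {"threshold_db": -20.0, "ratio": 3.0}

vocal = np.random.uniform(-1.0, 1.0, 44100)     # stand-in for a recorded vocal
tidied = compress(vocal, **ai_preset)
```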

While it's hard to argue against the positives of increased access to music, there is always a possible price – cultural homogenisation and the erosion of expertise. That question needs far more space than we have here, but the utopian view of AI is certainly only part of the current debate, and there’s plenty of concern about machines putting songwriters, producers and musicians out of work. We’ll look into this in more depth in part two.

Words: Harold Heath. Main pic: Gerd Altmann/Pixabay

Tags: Harold Heath, AI, artificial intelligence, music AI, AI composer, composition