
Music and Artificial Intelligence

How do computers and artificial intelligence challenge artists' and musicians' methods of composing new music? And what are the consequences?

By Eugenia Borgonovo

October 8, 2021

Humans feed thousands of songs to artificial intelligence systems. Using neural networks (mathematical models that mimic biological neural networks) and machine learning, specifically deep learning, these songs are fragmented and analysed. Deep learning is a subfield of machine learning that can infer meaning without human supervision. The machine extracts the essential information and recognises patterns that can be used to create original works, similar to those any artist might compose.
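The pattern-extraction idea can be sketched in a few lines. Real systems rely on deep neural networks, but a toy Markov chain (the note names and "songs" below are invented for illustration) shows the core principle: learn which notes tend to follow which in existing music, then generate a new melody from those learned transitions.

```python
import random
from collections import defaultdict

# Toy illustration of pattern learning: count which note follows which
# in a set of training "songs", then sample a new melody from those
# statistics. Deep learning systems do something far richer, but the
# principle of extracting patterns from existing music is the same.

def learn_transitions(songs):
    """Map each note to the list of notes observed to follow it."""
    transitions = defaultdict(list)
    for song in songs:
        for current, nxt in zip(song, song[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Generate a melody by repeatedly sampling a learned successor."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # note never seen mid-song: stop early
            break
        melody.append(rng.choice(options))
    return melody

songs = [["C", "E", "G", "E", "C"], ["C", "E", "G", "C"]]
model = learn_transitions(songs)
print(generate(model, "C", 6))
```

Every step in the generated melody is a note-to-note move that actually occurred in the training songs, which is why the output sounds "in the style of" the input.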

How do music and artificial intelligence work?


The first fear among artists and composers is that their work might be replaced by a machine. The second concerns music itself and the music industry, challenged by the fact that machines can now create music that is increasingly similar to music made by humans. The first to move in this direction was Alan Turing, who in 1951 built a room-sized machine that could play three simple melodies. In the 1990s, several musicians experimented with computers: Brian Eno and David Bowie in pop music, and David Cope in classical music. The latter created EMI (Experiments in Musical Intelligence), a program that writes new music in the style of Bach. Then there is AIVA, created by two engineers from Luxembourg, which composes original pieces along the lines of the great classical composers.



What are the instruments available today?


There have been several advances in artificial-intelligence devices and software applied to music. One example is LyricJam, a real-time system that uses artificial intelligence to generate lyric lines for live instrumental music, created by members of the Natural Language Processing Lab at the University of Waterloo, Canada. To date, the work has led to a system that learns artists' musical expressions and generates lyrics in their style. Other experiments in this field include Amper, a programme that allows anyone, even non-professionals, to create music by indicating parameters such as genre, mood, and tempo.


Similarly, Sony created the Flow Machines system, which in 2016 led to the release of Daddy's Car, a track openly modelled on Beatles songs. The system analyses the composer's initial idea and assists them by generating suggestions during composition. The aim is not to automate the creative process of the human mind but to pair it with an algorithm that can suggest melodies and chords appropriate to the genre.


Another notable invention is Jukedeck, now owned by TikTok. The platform is simple to use: to create a new piece, the user selects the desired genre (folk, rock, electronic, ambient), the mood (upbeat or melancholic), and the tempo (beats per minute), and sets the desired duration. After a few seconds, the artificial intelligence processes the request and provides a preview to download. Jukedeck also sidesteps the copyright problem: each user can create new, personalised music, paying as little as $0.99 for a royalty-free licence, a very democratic price compared to what the market currently offers.
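A parameter-driven service of this kind has to validate the handful of choices the user makes before generating anything. The sketch below shows what such a request might look like; the class, field names, and allowed values are invented for illustration and are not Jukedeck's real interface, which was never public.

```python
from dataclasses import dataclass

# Hypothetical request for a parameter-driven music generator in the
# style of Jukedeck. All names and value ranges are invented.

GENRES = {"folk", "rock", "electronic", "ambient"}
MOODS = {"upbeat", "melancholic"}

@dataclass
class TrackRequest:
    genre: str
    mood: str
    bpm: int
    duration_seconds: int

    def validate(self):
        """Reject requests outside the supported parameter space."""
        if self.genre not in GENRES:
            raise ValueError(f"unknown genre: {self.genre}")
        if self.mood not in MOODS:
            raise ValueError(f"unknown mood: {self.mood}")
        if not 40 <= self.bpm <= 240:
            raise ValueError("bpm out of range")
        if self.duration_seconds <= 0:
            raise ValueError("duration must be positive")
        return self

request = TrackRequest("ambient", "melancholic", 80, 120).validate()
print(request)
```

The point of the sketch is how small the creative input is: four parameters fully describe the track, and everything else is left to the model.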


One last AI instrument worth mentioning is Lo-Fi Player, a tool created by Google's Magenta team. Lo-Fi Player is a virtual room where fans can create and mix lo-fi hip-hop tracks for free. It works by changing the objects in the room: changing the view out of the window changes the background sound (rain, a beach, or the chaos of the city), while changing the animal changes the beats per minute (BPM). There are also instruments such as bass, piano, synthesizer, and guitar to shape the track. Despite these advances, no one has yet created a machine capable of writing chart-topping songs.
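The interaction model described above amounts to a simple lookup: each object in the room controls one musical parameter. The mappings below are invented for illustration; the real tool is an interactive web app built on Magenta.

```python
# Hypothetical sketch of Lo-Fi Player's interaction model: each object
# in the virtual room maps to one musical parameter. All values here
# are invented for illustration.

WINDOW_AMBIENCE = {"rain": "rainfall", "beach": "waves", "city": "traffic"}
ANIMAL_BPM = {"cat": 70, "dog": 85, "bird": 95}

def room_state(window: str, animal: str) -> dict:
    """Return the sound settings implied by the current room objects."""
    return {
        "ambience": WINDOW_AMBIENCE[window],
        "bpm": ANIMAL_BPM[animal],
    }

print(room_state("rain", "cat"))  # {'ambience': 'rainfall', 'bpm': 70}
```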


Most musicians use artificial intelligence to achieve what they could not do on their own. An example is YACHT, an American pop group that used AI to break out of their comfort zone and create something new from their previous discography. The result is the album “Chain Tripping”, released in 2019, which contains 10 songs, of which (Downtown) Dancing is the most popular.

 

AI is used not only to produce music but also to create virtual artists. Famous digital artists such as the pop star Hatsune Miku and the band Gorillaz are very popular today, but they have humans behind them who are responsible for their music. A few months ago, the artist Ash Koosha designed a 3D band of characters with different personalities who produce their own music. However, he maintains that ‘human creativity will always be substantially different from that of machines because we have a different biological nature, needs and intentions.’





So what is the challenge now?


The challenge will be to evolve in two parallel directions: on the one hand, human artists will need to be stylistically unique and exceptionally competent; on the other, they should learn to use these new tools to enhance their creativity and explore new frontiers. Perhaps the future role of the human artist will be to focus on emotion, leaving the execution to machines. But the answer to these questions is far from obvious.










Eugenia Borgonovo holds a bachelor’s degree in Economics and Management for Arts, Culture and Communication from Bocconi University and is currently pursuing a Master in Management at ESCP Business School. Eugenia is a researcher for the Kittiwake Institute.
