AI-generated music ‘inferior’ to human-composed work, finds study

Researchers recruited 50 participants with a strong understanding of music, particularly musical notes and other essential components.
Mrigakshi Dixit
Composing music.

puhimec/iStock 

Artificial intelligence has become the world's latest buzzword, and experts have been busy demonstrating its capabilities in virtually every field, including music. It appears, however, that AI did not fare well in music generation.

According to a University of York study, AI-generated music is "inferior to human-composed music."

The researchers recruited 50 participants who had a strong understanding of music, particularly musical notes and other essential components.

The participants listened to a series of music excerpts, some composed by humans and others generated using AI-based deep learning methods.

Following this, the music experts were asked to rate the excerpts on six musical criteria: “stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm.” Throughout this process, they were unaware of whether each excerpt was human-composed or computer-generated.

“On analysis, the ratings for human-composed excerpts are significantly higher and stylistically more successful than those for any of the systems responsible for computer-generated excerpts,” said Dr. Tom Collins, from the School of Arts and Creative Technologies at the University of York, in a statement.

The copyright issue

One major concern the authors identified was copyright. While training the models, the team discovered flaws in the algorithms used for AI music generation, which could create problems for anyone using AI-generated music.

“It is a concerning finding and perhaps suggests that organizations who develop the algorithms should be being policed in some way or should be policing themselves. They know there are issues with these algorithms, so the focus should be on rectifying this so that AI-generated content can continue to be produced, but in an ethical and legal way,” said Collins. 

To address this issue, the authors have also proposed seven key guidelines for evaluating machine learning systems. This research could help improve the development of AI-generated music and prevent ethical issues.

The study is published in the journal Machine Learning.

Study abstract:

Deep learning methods are recognised as state-of-the-art for many applications of machine learning. Recently, deep learning methods have emerged as a solution to the task of automatic music generation (AMG) using symbolic tokens in a target style, but their superiority over non-deep learning methods has not been demonstrated. Here, we conduct a listening study to comparatively evaluate several music generation systems along six musical dimensions: stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm. A range of models, both deep learning algorithms and other methods, are used to generate 30-s excerpts in the style of Classical string quartets and classical piano improvisations. Fifty participants with relatively high musical knowledge rate unlabelled samples of computer-generated and human-composed excerpts for the six musical dimensions. We use non-parametric Bayesian hypothesis testing to interpret the results, allowing the possibility of finding meaningful non-differences between systems’ performance. We find that the strongest deep learning method, a reimplemented version of Music Transformer, has equivalent performance to a non-deep learning method, MAIA Markov, demonstrating that to date, deep learning does not outperform other methods for AMG. We also find there still remains a significant gap between any algorithmic method and human-composed excerpts.
