How Predatory AI Is Getting Ready to Steal Musicians from Spotify

Art in general, and music in particular, has long been presented as exclusively human territory, a form of spiritual exploration of reality. Creating music takes inspiration, talent, emotion and, often, many sleepless nights, so the very idea of neurocomposers could be called pure blasphemy. Or could it? Over the past couple of years it has become clear that machines can paint pictures, write texts, and now even compose music. Just try to distinguish the creativity of soulless algorithms from the product of the torments of the human soul.

At the end of 2022, generative AI began a new climb up the Gartner Hype Cycle, impressing the masses with the quality of its output after the release of ChatGPT and Midjourney. But while AI-generated texts still drew controversy and skepticism, with visual art things turned out much more clear-cut. Midjourney and Stable Diffusion quickly showed their power, especially with the arrival of numerous user extensions (LoRA, checkpoints, etc.). Such services may well leave artists and illustrators without work, or at least those "artisans" who drew plain-vanilla art to order for $20-50.

Naturally, artists began to sound the alarm and boycott AI. Authors went on strike on ArtStation, for example, because the platform was flooded with works generated by neural networks and the administration imposed no restrictions on posting AI content. The artists' claims rest not only on jealousy of a soulless machine, but also on the suspicion that the AI was trained on their works, including pieces from their ArtStation portfolios.

Creators' fears also come down to a simple question: if AI can do their job faster and cheaper, where is the room for humans in this system? Of course, there will always be those who can offer something unique, something an algorithm cannot do. But how long will this balance last, and what awaits those who are not ready to adapt?

The Battle for Spotify

While visual art is already cracking under the pressure of AI, music has been slow to submit. Until recently, neural networks’ attempts to create something musically meaningful caused bewilderment rather than admiration. The tracks were too mechanical, monotonous, and hallucinatory.

The first successful experiments with generative music appeared in Google's MusicLM algorithm. But the product stayed in closed testing; you could only admire the demos. Everything really changed in generative AI music with the arrival of Suno and Udio. These services managed to take a step forward, offering something that can easily compete with human work.

And now generative music is slowly making its way onto streaming platforms, with Spotify as the main battlefield. The platform seems to care mostly about the growing volume of music and listening hours; how that music was created is of secondary importance, as long as it does not violate copyright. Accordingly, the service has officially stated that it has nothing against artificial intelligence algorithms.

Labels and musicians complain that Spotify does not label AI music in any way, and the recommendation system pushes soulless synthetic generations alongside "live" performers. How do they identify such music? So far, the strategy is this: if a performer has not appeared anywhere outside of Spotify, there is reason to believe it is a neural network. The act's profile, its logo and biography, is also closely examined.
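The rule-of-thumb approach described above can be sketched as a simple scoring function. This is only an illustration of how such heuristics might be combined; the signals, weights, and field names are hypothetical and do not reflect any platform's actual detection logic.

```python
# Hypothetical sketch of heuristic AI-artist flagging. Every signal and
# weight here is an illustrative assumption, not Spotify's real method.

def suspicion_score(artist: dict) -> float:
    """Score an artist profile; higher means more likely synthetic."""
    score = 0.0
    if not artist.get("external_links"):        # no presence outside the platform
        score += 0.4
    if len(artist.get("bio", "")) < 50:         # thin or missing biography
        score += 0.2
    if artist.get("tracks_per_month", 0) > 30:  # implausible release rate
        score += 0.3
    if not artist.get("live_shows"):            # never performed live
        score += 0.1
    return min(score, 1.0)

ghost = {"external_links": [], "bio": "", "tracks_per_month": 60, "live_shows": []}
print(suspicion_score(ghost))  # 1.0: every red flag at once
```

A real system would weigh many more signals and tune the thresholds on labeled data; the point is that even crude profile features already separate obvious cases.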

Recently The Guardian published the story of Swedish composer Johan Röhr. He has amassed 15 billion streams (more than Elton John, Metallica, and ABBA, for example), going by 650 pseudonyms and earning an estimated $3 million from streaming.

History is silent on whether the enterprising Swede used generative AI, but there is reason to believe it is quite difficult to create more than 2,000 compositions without neural-network doping, even in the genre of relaxing "background" music. The style is characterized by simple harmonies and constant rhythmic patterns; it has no complex transitions, sudden dynamic changes, or a pronounced melodic line, which makes it an ideal target for algorithmization.
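To see just how algorithmizable the style is, consider a toy generator: a fixed four-chord loop with one sustained chord per bar already covers most of what background music demands. The chord choices and MIDI note spellings below are illustrative, not taken from any real composition.

```python
# Toy illustration of how mechanical "background" music can be:
# a looped I-V-vi-IV progression in C major with a constant one-chord-per-bar
# rhythm. Pitches are MIDI note numbers (60 = middle C).

CHORDS = {
    "C":  [60, 64, 67],
    "G":  [55, 59, 62],
    "Am": [57, 60, 64],
    "F":  [53, 57, 60],
}
PROGRESSION = ["C", "G", "Am", "F"]

def generate(bars: int, beats_per_bar: int = 4):
    """Emit (start_beat, chord_notes) events: one sustained chord per bar."""
    events = []
    for bar in range(bars):
        chord = PROGRESSION[bar % len(PROGRESSION)]
        events.append((bar * beats_per_bar, CHORDS[chord]))
    return events

print(generate(8))  # eight bars: the four-chord loop played twice
```

Feed the events to any MIDI library and add a pad sound, and you have a passable ambient bed. No emotional torment required, which is exactly the point.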

Condemn Cannot Resolve

Industry Luddites see generative music AI as nothing less than a villainous creation taking the bread from hard-working musicians. Worse, neural networks have encroached on the sacred act of sound creation, and the operating companies violate copyright when they send their algorithms to learn from music catalogs. In short, it is all very similar to the story of the image-generation networks.

Suno and Udio have recently been sued by major labels who claim the services trained their AI models on copyrighted songs without proper licenses and permissions.

The lawsuit from the Recording Industry Association of America (RIAA) is seeking an injunction to stop further use of their music and compensation of up to $150,000 for each copyright infringement.

The plaintiffs claim that the AI generators create songs nearly indistinguishable from the original artists, and even reproduce the voices of, for example, Michael Jackson or the members of ABBA.

Suno's position in this lawsuit is notable for its candor: the company acknowledged that it did train its AI model on copyrighted tracks, but argues that this falls under the fair-use doctrine.

Suno CEO Mikey Shulman, in his article "The Future of Music," admits that training took place on "medium to high quality music" available on the Internet, and that much of this material is indeed copyrighted, with some tracks owned by major record labels.

However, Shulman argues that the learning process of neural networks resembles the cognitive process humans have always used: studying styles, patterns, and forms (essentially the "grammar" of music) and then inventing new music based on them.

“Just like a teenager composing his own rock songs after listening to the genre, or like a teacher and a journalist studying someone else's material to get new ideas, learning cannot be considered a violation. It never was and it still is not,” writes the Suno CEO.

In a statement, Suno's CEO noted that the company's technology is designed to "create entirely new content, not to remember and play existing material." Shulman also stressed that Suno does not allow prompts referencing specific artists.

"We would have been happy to explain this to the major record companies who filed the lawsuit (and indeed we tried to do so), but instead of constructive dialogue, they decided to follow their tried-and-tested legal playbook. Suno is built for new music, new ideas, and new musicians. We value originality," the statement said.

Labels and artists are understandably concerned that AI could take away their earnings and creative opportunities. The RIAA responded as follows:

"Large-scale copyright infringement is in no way 'fair use'. There is nothing fair about stealing an artist's life's work, extracting the core value from it, and repackaging it to compete with the originals. Clearly, their vision of the 'future of music' is that fans will no longer enjoy the music of their favorite artists because those artists can no longer make a living."

Suno reminds us that training neural networks on open Internet sources is standard practice for all the big companies: Google trains its Gemini models this way, Microsoft its Copilot, Anthropic its Claude, OpenAI its ChatGPT, and Apple its new Apple Intelligence system.

The court has not yet ruled, but it is clear that the decision will likely be momentous for generative AI music.

Google is probably watching the Suno and Udio lawsuits and trying to avoid a similar scenario. It recently became known that YouTube is negotiating with Universal Music Group (UMG), Sony Music Entertainment, and Warner Records for licenses to train Google's AI models.

It is not known exactly how much YouTube is offering the labels, though the Financial Times hints that the sum is quite significant. Nor is it specified which product the models are being trained for. The company has Dream Track, which generates music in the style of famous artists, but the publication notes that this tool will not be developed further. The secret AI project is planned for launch at the end of this year.

In a comment to the publication, representatives of the video platform said they want to move onto more or less solid legal ground (training neural networks is currently a legal gray area) and are ready to pay a fixed one-time fee for a license to use music in training their neural network algorithms.

“This is different!”

But enough alarmism. On the whole, the influence of neural networks on music is often seen as a positive trend. Which areas are affected?

Recommendation algorithms. When services serve you content you like and guess your tastes, that's a good thing. AI can understand and classify music in great detail, both by analyzing the tracks themselves and by studying user habits at the individual and aggregate levels. AI generates metadata; analyzes mood, lyrics, tempo, and instrumentation; and finds unexpected connections between genres and artists, making recommendations more accurate and interesting.
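The core of such feature-based recommendation can be sketched in a few lines: represent each track as a vector of analyzed attributes and recommend nearest neighbors by cosine similarity. The feature names, scaling, and values below are illustrative assumptions, not any service's real schema.

```python
# Minimal content-based recommendation sketch. Each track is a vector of
# hypothetical normalized features: [tempo/200, energy, valence (mood)].
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

tracks = {
    "ambient_loop": [0.35, 0.20, 0.60],
    "metal_anthem": [0.90, 0.95, 0.40],
    "lofi_study":   [0.375, 0.25, 0.55],
}

def recommend(seed: str, top_n: int = 1):
    """Return the top_n tracks most similar to the seed track."""
    sims = [(name, cosine(tracks[seed], vec))
            for name, vec in tracks.items() if name != seed]
    return sorted(sims, key=lambda t: -t[1])[:top_n]

print(recommend("ambient_loop"))  # the lo-fi track, not the metal anthem
```

Production systems add collaborative filtering and learned embeddings on top, but the geometric idea, similar tracks live close together in feature space, is the same.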

Music generation. Yes, some musicians want to raise the developers of neuro-music services on pitchforks, crucify them, and burn them. But a split is already visible: some are categorically against synthetic music, while others actively use AI tools and become more productive musicians. Most often, AI is used to "boost inspiration" and generate references to overcome the blank-page problem. Others re-record a synthetic track with "live" instruments, adding their own creativity. This is how hybrid human-machine creativity is born.

Production. While music generation draws criticism, musicians love and eagerly await AI tools for audio processing: analyzers and "enhancers," smart equalization and compression algorithms, AI mastering chains, one-click mixing to a reference. Of course, the purists of music production will say that neural-network processing is not warm and tube-like enough, and that turning a compressor knob by hand in ideal studio acoustics is another matter entirely.

In contrast, techies find the new technology extremely interesting and promising. AI can significantly speed up production and make professional mastering and mixing available to a wider audience. Sample libraries benefit too: AI helps make them sound more realistic. In a typical pipeline (speaking from experience), writing realistic parts takes a ton of time.

The emergence of generative AI music is a logical result of decades of technological development. From experiments with autotune to fully synthetic works built from sample libraries, modern music has become more mechanical and predictable, and listeners have grown accustomed to certain patterns. Artificial intelligence has only reinforced this trend, creating compositions that fully meet the standards and expectations of modern audiences.
