Neural networks that can sing and perform death metal

Let’s talk about intelligent tools that can generate tracks and even lyrics. We’ll cover solutions from corporations and research labs, as well as projects built by enthusiasts.


Photo: Joe Green / Unsplash

Neural networks write music …

One example is the NSynth Super synthesizer. It is built on an AI system that creates new sounds that do not exist in nature from pre-recorded samples: the algorithm can, for instance, blend the sound of a flute with that of a drum. NSynth works with 16 musical instruments and generates more than 100 thousand sounds from them. It analyzes the acoustic characteristics of the input samples and then linearly interpolates between them, forming a mathematical representation of the new sound.
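To make the interpolation idea concrete, here is a minimal sketch, assuming we already have latent embeddings for two source sounds; the array shapes and the `interpolate_latents` helper are illustrative, not NSynth’s actual API:

```python
import numpy as np

def interpolate_latents(z_flute: np.ndarray, z_drum: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly blend two latent sound embeddings.

    alpha = 0.0 returns the flute embedding, 1.0 the drum embedding,
    values in between produce a hybrid timbre.
    """
    return (1.0 - alpha) * z_flute + alpha * z_drum

# Hypothetical embeddings produced by some encoder (shapes chosen arbitrarily).
z_flute = np.random.randn(125, 16)
z_drum = np.random.randn(125, 16)

# A 50/50 blend that a decoder could turn back into audio.
z_hybrid = interpolate_latents(z_flute, z_drum, alpha=0.5)
print(z_hybrid.shape)
```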

NSynth Super is an open-source project: the source code and hardware assembly schematics are available to anyone in the repositories on GitHub.

Another example is Dadabots, an artificial intelligence system developed by musicians CJ Carr and Zack Zukowski. The neural network composes death metal: it was trained on the work of the Canadian band Archspire. The AI-based solution generates fairly coherent, though not always easy-on-the-ears, compositions, with sharp acoustic effects periodically layered on top of them. For the chosen genre, though, this sounds quite organic. You can listen to Dadabots on YouTube, where a round-the-clock live stream is running.

AI music systems are also being developed at Jukedeck. This startup builds a tool for generating tracks with a given mood and tempo. A year ago it was acquired by the company that owns TikTok; Jukedeck’s technology helps the social network save on royalties.

… and know how to sing

At the end of April, OpenAI presented such a tool, called Jukebox. It generates compositions with meaningful lyrics and vocals. Here is an example:

Engineers trained the neural network on a dataset of 1.2 million songs (600 thousand of them in English); the lyrics and metadata were taken from the LyricWiki library. To generate new tracks, the AI system uses the VQ-VAE (Vector Quantized Variational AutoEncoder) approach: it compresses the tracks, extracts the essential acoustic information from them, and then forms a new composition on that basis. Jukebox spends about nine hours to render one minute of a song and so far cannot produce familiar song structures with a repeating chorus. The system also requires large computing resources, so it is not yet possible to try it at home or in a studio. The developers plan to address these shortcomings in the future.
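To illustrate the compression step, here is a minimal sketch of the vector-quantization operation at the core of a VQ-VAE; the codebook size and feature dimensions are arbitrary toy values, not Jukebox’s real configuration:

```python
import numpy as np

def vector_quantize(z: np.ndarray, codebook: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Replace each encoder output vector with its nearest codebook entry.

    z        -- encoder outputs, shape (num_frames, dim)
    codebook -- learned embedding table, shape (num_codes, dim)
    Returns the chosen code indices and the quantized vectors.
    """
    # Squared distances between every frame and every codebook vector.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete codes: the compressed representation
    quantized = codebook[indices]    # what the decoder sees
    return indices, quantized

# Toy example: 8 frames of 4-dimensional acoustic features, a 16-entry codebook.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
codebook = rng.normal(size=(16, 4))
codes, z_q = vector_quantize(z, codebook)
print(codes)  # compact discrete representation of the audio
```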

But will musicians be replaced?

The authors of these intelligent music-generation tools say that machine algorithms are not intended to replace composers but to expand their artistic capabilities.

American singer Taryn Southern recorded an album using an AI-based solution: a neural network wrote the music for the track “Break Free” and generated its video clip. YACHT lead singer Claire Evans also used machine algorithms when writing the album “Chain Tripping”: a computer generated new melodies based on the band’s previous work, and the performer stitched the most interesting fragments together.

Machine learning algorithms also help musicians with technical tasks. For example, developers at LANDR offer an artificial intelligence system that masters tracks automatically; more than 2 million performers already use it.
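As a rough illustration of one small piece of what automated mastering touches, here is a minimal sketch that normalizes a track to a target RMS level; this is a generic technique shown for context, not LANDR’s actual pipeline:

```python
import numpy as np

def normalize_rms(audio: np.ndarray, target_dbfs: float = -14.0) -> np.ndarray:
    """Scale a mono audio signal so its RMS level hits target_dbfs.

    audio        -- samples in the range [-1.0, 1.0]
    target_dbfs  -- desired RMS level in dB relative to full scale
    """
    rms = np.sqrt(np.mean(audio ** 2))
    target_linear = 10 ** (target_dbfs / 20.0)
    gain = target_linear / max(rms, 1e-9)
    # Clip to avoid overshooting full scale after the gain is applied.
    return np.clip(audio * gain, -1.0, 1.0)

# Toy "track": one second of quiet noise at a 44.1 kHz sample rate.
track = 0.05 * np.random.randn(44100)
mastered = normalize_rms(track, target_dbfs=-14.0)
print(np.sqrt(np.mean(mastered ** 2)))  # roughly 0.2, i.e. about -14 dBFS
```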


Additional reading:

  • “Machine Sound”: synthesizers based on neural networks
  • The history of speech synthesizers: the first mechanical installations
  • The history of speech synthesizers: the computer age


What to read on our Habr blog:

  • “Up to 30 thousand”: more than 20 reviews of audio systems
  • Voice assistants at the wheel – why they are not always needed and not for everyone
  • The history of audio formats – the era of cassettes and the development of speech synthesis technologies

