Neurofuturism: what else will AI master in the near future?

Almost every year a new topic appears on the Internet that captures the public imagination and, as the Russian saying goes, blares from every iron. First came the cryptocurrency boom, then the Boston Dynamics robots took over the agenda, NFTs replaced them, and now neural networks occupy the vacant spot. Everyone has already heard about ChatGPT, DALL-E 2 and Midjourney, so there is no point in discussing them again. But what other capabilities does modern artificial intelligence have, and which of them will be in demand in the near future? Here are the most interesting and promising projects and directions.

Neurogame development

There are many development environments and frameworks that make life easier for creators of computer and mobile games, such as Unity and Unreal Engine. But why not go further and try to generate individual levels or scenes with neural networks?

That is the path the blogger madebyoll.in decided to take: as training data, he fed a neural network many hours of Pokémon Let's Play gameplay footage, from which it learned to generate something similar on its own. In this peculiar way he created a rough analogue of a Pokémon game, whose virtual world you can even wander through right in the browser. The result looks rather primitive and clumsy, but this is just the beginning: the first images from DALL-E didn't look great either, to put it mildly.

In fact, the author did quite a lot of work: he resized the original video, annotated the footage according to the events taking place in it, and wrote both a neural network with 300 thousand parameters (a tiny figure by modern standards, as he himself admits) and the training algorithms for it. The result is a demo that is not actually a full-fledged video game but only imitates one. For example, the scene-generation algorithm cannot remember the maps and objects it has created: if the character moves one screen over and then returns, he ends up in a completely different location (this happens to me very often in dreams). The logic is also lame: the network may "understand" what to do when a character enters a room, but he can enter it not only through the door but also through a wall. And if he tries to "enter a tree", he will likewise end up inside a room.
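To make that limitation concrete, here is a minimal sketch (the names and structure are my own illustration, not the author's code): because such a model maps only (current frame, player input) to the next frame, nothing forces a location to persist once it scrolls off screen.

```python
import random

class StatelessFrameModel:
    """Toy stand-in for the demo's neural network: it predicts the next
    frame purely from the current frame and the player's input, with no
    persistent map of the world."""

    def predict(self, frame, action):
        # A real model would evaluate a learned function here; we fake one
        # with a seeded random draw just to show the interface.
        random.seed(hash((frame, action)))
        return f"frame_{random.randrange(10_000)}"

model = StatelessFrameModel()
start = "town_square"

# Walk one screen to the right, then step back left.
right = model.predict(start, "right")
back = model.predict(right, "left")

# Nothing guarantees we return to where we started: the model keeps
# no memory of the map it already "generated".
print(back == start)
```

This is exactly why the demo's character finds a different location on his way back: persistence would require state outside the frame-to-frame mapping.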

In general, this project is still "stuck in the textures". Its author named three obstacles on the way to full-fledged neural-network game creation: insufficient AI capacity; a lack of input data that would let a network build a complete and exhaustive description of the game world's characteristics and parameters; and the uncertainty caused by the network's ignorance of the rules of the game itself.

The last two problems could be solved by combining the traditional trained-network model with some kind of metalanguage for describing the game world to it: setting boundary conditions and building a model that specifies the key parameters of the gameplay and the characters. The neural network would do everything else itself. This seems to be a matter of the not-too-distant future. The technology is still in its infancy, and if it keeps developing, we will soon see many interesting and exciting games whose creators can finally focus on creativity rather than programming.
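One can imagine such a metalanguage as a declarative world description whose hard rules the generative model is not allowed to break. The schema below is purely my own illustration of the idea, not any existing format:

```python
import json

# Hypothetical declarative "metalanguage" for a game world: it pins down
# the hard rules (map persistence, passable tiles, character stats) and
# leaves everything else to the generative model.
world_spec = {
    "world": {
        "persistent_map": True,            # locations must survive revisits
        "tiles": {
            "grass": {"passable": True},
            "wall":  {"passable": False},  # no more walking through walls
            "door":  {"passable": True, "leads_to": "interior"},
        },
    },
    "characters": {
        "hero": {"speed": 2, "hp": 100, "start": [4, 7]},
    },
    "boundary_conditions": {
        "map_size": [64, 64],
        "max_npcs": 20,
    },
}

def can_enter(spec, tile):
    """Check a hard rule the generator must respect."""
    return spec["world"]["tiles"][tile]["passable"]

print(can_enter(world_spec, "wall"))   # False
print(json.dumps(world_spec["boundary_conditions"]))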

Speech recognition from brain signals

Remember the scene from the legendary sci-fi movie Back to the Future where Doc Brown tries to read Marty's mind? They say this is no longer quite so fantastic, though it is still firmly in the realm of science. Meta, banned and recognized as extremist in Russia (sorry, applicable law obliges me to write this), has developed a prototype speech recognition technology based on non-invasive recordings of brain activity, that is, on electroencephalograms. The thing is built, of course, on neural networks; who would have doubted it.

To further its scientific-extremist ambitions, Meta assembled a group of 169 volunteers who were "tortured" for more than 150 hours with audiobooks while their electroencephalograms and magnetoencephalograms were recorded. The MEG and EEG data were then run through a neural network paired with the wav2vec 2.0 speech recognition model. As a result, the AI successfully recognized up to 73% of the English words "thought" by the subjects, from a vocabulary of 793 words, which roughly corresponds to the active vocabulary an average person uses in everyday conversation. Details of the experiment can be found in the corresponding scientific article published on arXiv, hosted by Cornell University.
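The core idea can be sketched in a few lines: embed the brain recording and a set of candidate speech segments into a shared space (the real system trains a brain encoder against wav2vec 2.0 embeddings contrastively), then pick the candidate whose embedding lies closest. The vectors below are toy values of my own, a drastic simplification of the actual setup:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings: in the real system a brain module embeds MEG/EEG
# windows and wav2vec 2.0 embeds the audio; training pulls matching
# pairs close together in the shared space.
speech_embeddings = {
    "hello": [0.9, 0.1, 0.0],
    "world": [0.1, 0.8, 0.2],
    "cat":   [0.0, 0.2, 0.9],
}

brain_embedding = [0.85, 0.15, 0.05]  # recorded while the subject heard a word

def decode(brain_vec, candidates):
    """Pick the candidate whose speech embedding is closest to the brain one."""
    return max(candidates, key=lambda w: cosine(brain_vec, candidates[w]))

print(decode(brain_embedding, speech_embeddings))  # "hello"
```

The reported 73% figure is exactly this kind of retrieval accuracy: how often the correct segment wins the similarity contest among the candidates.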

Of course, for now this is just a scientific study, but the key word here is "for now". The day is not far off when we will finally be able to dictate messages to our favorite messenger, Telegram, by the power of thought alone.

Neuroanimation

We have already covered game developers, but animators have undeservedly remained offstage. Not the "animators" in Jack Sparrow costumes who breathe yesterday's fumes on kids at school parties, but those engaged in a more adult and responsible business: animation. If neural networks can already generate quite professional pictures from a text description, couldn't they produce 24 such pictures per second and thereby make a cartoon? It turns out they can! True, for now with a number of significant reservations.

At present there is at least one neural network capable of turning a modern 3D cartoon into a fairly simple 2D one. The output is something in the style of South Park, but, as they say, it's a start.

It remains to wait until the AI can be fed not a finished animation but, say, a science-fiction story or a screenplay: then world animation will step up to a fundamentally new level. The main thing is that this does not become a step into the abyss.

NeuroBeethoven

These days a neural network that generates music in a given genre will surprise no one: the most famous is, of course, Jukebox from OpenAI. And here, for example, is a network that generates audio in real time at the click of a button. This AI knows only two styles, techno and death metal, but its source code is available on GitHub, so anyone can train the model on their own dataset.

And here is Riffusion, a neural network built on top of Stable Diffusion, which generates music from a text description. Type in the name of a style, band, or artist, and you'll get an audio stream that faintly resembles what you asked for. On the request "the beatles", for example, Riffusion gave birth to a rather mournful track whose voice does somehow remotely resemble Paul McCartney's vocals. With a strong enough hangover it could pass for some bootleg outtake from the early Wings era.
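Riffusion's trick is that Stable Diffusion generates an image of a spectrogram, which is then converted back into sound. A deliberately crude sketch of that last step might look like the following; the function and its parameters are my own toy construction, whereas Riffusion itself uses proper Griffin-Lim phase reconstruction:

```python
import math

def spectrogram_to_audio(spec, sample_rate=8000, hop=64, max_freq=4000.0):
    """Crudely invert a magnitude spectrogram: treat each row as a
    frequency bin and each column as a time step, then sum sinusoids
    weighted by pixel intensity. (A real system recovers phase with
    Griffin-Lim instead of assuming zero phase.)"""
    n_bins = len(spec)
    n_frames = len(spec[0])
    audio = [0.0] * (n_frames * hop)
    for b, row in enumerate(spec):
        freq = max_freq * (b + 1) / n_bins
        for t, mag in enumerate(row):
            if not mag:
                continue
            for i in range(hop):
                n = t * hop + i
                audio[n] += mag * math.sin(2 * math.pi * freq * n / sample_rate)
    return audio

# A 4-bin, 8-frame "image": one steady tone in the second bin.
spec = [[0.0] * 8, [1.0] * 8, [0.0] * 8, [0.0] * 8]
audio = spectrogram_to_audio(spec)
print(len(audio))  # 8 frames * 64 samples = 512
```

The image-as-audio representation is exactly what lets an off-the-shelf image diffusion model compose music at all.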

But you can go further: generate lyrics with ChatGPT, compose the music with Jukebox, and perform the resulting song with a voice synthesizer like Uberduck. NeuroBeethoven or, if you like, Neuro-Valery Leontiev is ready. It only remains to draw an animated 3D character in leggings with the help of a neural network, and you can put on a concert at Luzhniki. And something good may well come of this idea: for example, a neural network was fed Nirvana's entire music library and created a new song by the legendary band. It sounds, frankly, impressive.

***

Of course, modern neural networks are still a "black box" that sometimes behaves unpredictably and occasionally pulls completely unexpected tricks, like the picture of a salmon swimming upstream that has already become a meme. Neural networks are already great at a number of applied problems, but they still cannot do absolutely everything. The most promising vector for their development seems to be combining the advanced capabilities of artificial intelligence with traditional programming: taking the best of what exists and joining that foundation with the new. This is the way.
