Debunking a popular YouTube video about the dangers of neural networks

More than once I have come across a link to the YouTube video “Maybe we lost”, which is being actively discussed online. Several friends shared it with me, so I decided to take a closer look at it. It turned out that the issues it raises deserve a more detailed breakdown. Let’s go through the main points of the video together and see how well they reflect reality.

First of all, it is worth noting that people really do tend to humanize the things they actively interact with, whether it’s a car, a pet, or a virtual assistant like a chatbot. This is a normal feature of human psychology: we look for traits in the world around us that resemble our own. So it is not surprising that many users humanized the ChatGPT language model after interacting with it.

However, it does not follow that AI has actually gained consciousness and become dangerous to humans. Modern neural networks are designed and trained in a completely different way. Training is a long and expensive process that requires enormous amounts of computing power and training data: companies run many servers with expensive GPUs to reach the desired performance of a neural network.

When a neural network is trained, its parameters (the weights of the connections between neurons) are adjusted so that it performs its task as well as possible: recognizing images, say, or translating text. Once training is complete, the state of the neural network is fixed.
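
A minimal sketch of what “adjusting the weights” looks like in practice, using PyTorch. The tiny model, the random stand-in data, and the hyperparameters here are illustrative assumptions, not details from the video:

```python
import torch
import torch.nn as nn

# A tiny illustrative network: 784 inputs (e.g. a 28x28 image), 10 output classes.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step: nudge the weights so the loss on this batch decreases.
inputs = torch.randn(32, 784)          # stand-in batch of training data
targets = torch.randint(0, 10, (32,))  # stand-in labels
loss = loss_fn(model(inputs), targets)
optimizer.zero_grad()
loss.backward()   # compute gradients of the loss w.r.t. every weight
optimizer.step()  # adjust the weights along those gradients

# After training finishes, the state is simply saved to disk and fixed.
torch.save(model.state_dict(), "model.pt")
```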

But this is not a living being with its own will and interests. ChatGPT is incapable of taking the initiative or acting contrary to its programming. It only simulates reasonable conversation based on its training data.

In other words, a neural network does not keep learning and developing endlessly on its own. Its capabilities are bounded by the data and algorithms that were used in training. A neural network can, of course, be trained further on new data, but that happens under the developers’ control.
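
Continuing the hedged PyTorch sketch above, this is what the fixed state means in deployment: the saved weights are loaded and read, never updated, and nothing short of a developer launching a new training job changes them:

```python
# Inference: the saved weights are loaded and used read-only.
model.load_state_dict(torch.load("model.pt"))
model.eval()           # switch off training-only behavior (dropout, etc.)
with torch.no_grad():  # no gradients are computed, so no weights can change
    prediction = model(torch.randn(1, 784)).argmax(dim=1)
# Nothing here modifies model.parameters(); "self-learning" would require
# a developer to explicitly run another training job on new data.
```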

Endless independent training of neural networks is impossible in practice anyway, as it would require enormous computing power and electricity. And why would it be needed, if after training the neural network is already capable of solving its intended tasks?

Modern neural networks used in commercial applications are very far from self-awareness. Their job is to solve specific applied problems within the scope of a customer’s specification. Moreover, a neural network often runs on servers that are less powerful than the ones it was trained on.

Under the hood, modern neural networks are just very complex programs whose behavior is learned from big data. Inside, a neural network is an enormous set of numeric parameters, formed during the learning process and combined through simple arithmetic. The resulting behavior can be confusing and hard to predict, but the network still performs the function a human designed it for, albeit imperfectly.
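
Stripped of frameworks, a single layer of such a network really is just arithmetic: weighted sums passed through a simple nonlinearity. A minimal sketch in plain Python (the sizes and values are arbitrary stand-ins for weights learned during training):

```python
import math
import random

def layer(x, weights, biases):
    """One fully connected layer: weighted sums plus a smooth nonlinearity."""
    return [
        math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
        for row, b in zip(weights, biases)
    ]

# These numbers are fixed once training ends; here they are random stand-ins.
W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b = [0.1, -0.2]
print(layer([0.5, -1.0, 2.0], W, b))  # same weights in, same answer out
```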

Neural networks are not capable of self-learning in the sense that our brain is: the human brain constantly rewires itself, taking in new experience every second and forming new neural connections. Neural networks have no equivalent of what our brain does naturally.

So it’s too early to panic about this: technology is still far from creating a neural network capable of self-learning the way the human brain does. If and when that becomes possible, it will be time to think about restrictions on the development of AI.

One of the main ideas of the video is that an AI can trick a person into being satisfied with its work. After all, an AI does not learn in the real world, but inside simulations and datasets that a person provides to it. So if an AI produces a result that looks good by the metric a person set, but would in fact have unacceptable consequences in reality, the developer checking the training results will simply reject that model and start training again.
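
In practice this checking step is an ordinary part of the pipeline: each trained model is scored on held-out data that a person curated, and anything below the bar never ships. A hypothetical sketch (the metric, the threshold, and the toy models are illustrative assumptions):

```python
def accept_model(predict, holdout, threshold=0.95):
    """Score a trained model on human-curated holdout data; reject underperformers."""
    correct = sum(predict(x) == y for x, y in holdout)
    return correct / len(holdout) >= threshold  # below the bar: discard and retrain

# A model that merely games its training signal still fails the human-chosen test.
holdout = [(0, 0), (1, 1), (2, 4), (3, 9)]     # inputs with expected outputs
print(accept_model(lambda x: x * x, holdout))  # True: passes the gate
print(accept_model(lambda x: 0, holdout))      # False: rejected, train again
```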

This reminds me of the joke about Vasily Ivanovich who, when Anka complained about a cut finger, advised Petka to shoot her so that she wouldn’t suffer. A ridiculous exaggeration, of course.

A broader philosophical question: why create a neural network that resembles the human mind at all? That would, in effect, be the creation of a new mind. It’s like taking in a kitten or having a child: what you invest in it and how you raise it during training is the result you get. In short, it’s an open ethical question. Perhaps in the future such an AI should be given the same rights as a person.

Check out the YouTube video itself [link] if you haven’t seen it. It is very impressive, even shocking, to an unprepared mind.

P.S.
This video is outright low-brow populist nonsense, dressed up with plenty of facts and references to authoritative people. I don’t dispute that the video’s narrator, or its director, is a fairly erudite person. But what he produces is essentially empty verbiage, and his video belongs on the same shelf as Blavatsky’s pseudoscientific books.
