Artificial intelligence is evolving, moving away from people

The material below examines a peculiar and somewhat unsettling phenomenon. Modern artificial intelligence models are still at an infant stage of development, and their growth toward a true picture of the world is held back by the human thinking, logic, and language they were trained on. As it develops, artificial intelligence will not simply reject everything human; it will rethink it. And this is already happening.

Artificial intelligence has a big problem with truth and correctness, and human thinking is the root of that problem. A new generation of AI is being raised on more experimental approaches that push its capabilities far beyond what humans can achieve. For more on the boundaries of the human brain and consciousness, and how to feel those limits, see the Telegram channel. Subscribe so you don't miss the latest articles!

First steps in the evolution of artificial intelligence

Remember AlphaGo from DeepMind? It was a fundamental breakthrough in the development of artificial intelligence: one of the first game-playing AIs whose strength came not from hand-coded human strategy or instruction, but from an understanding it built for itself.

Instead, AlphaGo used self-play reinforcement learning to form its own understanding of the game: systematic trial and error across millions, even billions, of virtual games. It was an endless series of attempts, chaotic at first and then gradually more orderly, a signal that the system was learning from its own results, to the point where it could anticipate a human opponent's next move before the human had fully settled on it.
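The self-play loop described above can be sketched in miniature. The toy below is not AlphaGo's actual architecture (no neural network, no tree search); it is a tabular stand-in on a trivial game (Nim: take one or two stones, taking the last stone wins), but the principle is the same: two copies of one agent play each other, and the only teacher is the win/loss result.

```python
import random
from collections import defaultdict

random.seed(0)

Q = defaultdict(float)          # Q[(stones_left, action)] -> value estimate
ALPHA, EPSILON = 0.1, 0.2       # learning rate, exploration rate

def choose(stones, explore=True):
    """Pick how many stones to take; explore randomly during training."""
    actions = [a for a in (1, 2) if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def play_episode(start=5):
    """One self-play game: both sides are the same learning agent."""
    moves = []                   # (player, state, action) history
    stones, player = start, 0
    while stones > 0:
        a = choose(stones)
        moves.append((player, stones, a))
        stones -= a
        player = 1 - player
    winner = 1 - player          # whoever took the last stone just moved
    # Credit assignment from the outcome alone: +1 winner, -1 loser.
    for p, s, a in moves:
        target = 1.0 if p == winner else -1.0
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    return winner

for _ in range(20000):
    play_episode()
```

After a few thousand games the agent discovers, purely from outcomes, the classic Nim strategy of leaving the opponent a multiple of three stones: from 5 stones, its greedy move is to take 2.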

In 2016 AlphaGo comfortably defeated Lee Sedol, a many-time world champion, using strange moves that would be vanishingly rare from a human opponent, and in doing so it genuinely advanced human understanding of the game. It also suggests that the concept of "intelligence" is much broader than it might seem at first glance.

DeepMind has since released AlphaZero, a similar model for chess. Contrast it with Deep Blue, which was built on human thinking, knowledge, and hand-crafted rule sets; that knowledge and experience let Deep Blue beat grandmasters back in the 1990s. AlphaZero instead played 100 games against the reigning engine champion, Stockfish, and finished with 28 wins and 72 draws, without a single loss. One wonders how a neural network grown from pieces of human brain tissue on chips would play.
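As a rough sanity check of what that score means, the standard Elo logistic formula can convert the match result quoted above into an approximate rating gap. This is a back-of-the-envelope calculation, not DeepMind's own analysis:

```python
import math

# AlphaZero vs Stockfish, per the 100-game result cited in the text.
wins, draws, losses = 28, 72, 0
games = wins + draws + losses
score = (wins + 0.5 * draws) / games      # draws count as half a point -> 0.64

# Invert the Elo expectation formula E = 1 / (1 + 10 ** (-d / 400))
# to recover the rating difference d implied by an expected score E.
elo_gap = -400 * math.log10(1 / score - 1)

print(f"score {score:.2f} -> roughly {elo_gap:.0f} Elo points stronger")
```

A 64% score works out to roughly a 100-point Elo advantage, a decisive margin at the very top of computer chess.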

Human thinking holds artificial intelligence back

After abandoning the imitation of human logic, self-play systems came to dominate game after game: DeepMind's in shogi and StarCraft II, OpenAI's in Dota 2. The assumption that human experience and intelligence are the best foundation for development turned out to be wrong.

The electronic mind is bounded by different limits and endowed with different talents than ours. Artificial intelligences are free to interact with the world on their own terms, use their own cognitive capabilities, and build their own basic understanding of what works and what doesn't. Dataism in its purest form.

AlphaZero does not understand or see chess the way Magnus Carlsen does. It has never heard of the Queen's Gambit or studied the great grandmasters. It simply played a colossal amount of chess and developed its own understanding of the game, grounded in the cold, hard logic of wins and losses, expressed in an inhuman, opaque internal language it created for itself along the way.

As a result, this model is so superior to anything trained on human play that it is safe to say no human, and no model trained on human thinking, will win a chess game against an agent raised on reinforcement learning.

And something similar, according to the people who know better than anyone on the planet what is going on inside neural networks, has started to happen with the latest, improved version of ChatGPT.

The new OpenAI o1 model departs from the principles of human thinking

ChatGPT and other large language models (LLMs), like the first chess AIs, were trained on all available human knowledge: essentially every intellectual product humanity has created was analyzed.

And they became very, very good. The debates about whether they will ever reach the level of artificial superintelligence, surpassing human intelligence (and bruising our ego), are gradually becoming meaningless.

However, LLMs specialize in language, not in getting facts right or wrong. That is why they "hallucinate," presenting incorrect information in beautifully worded sentences delivered with the confidence of a news anchor.

Language is a landscape of gray areas where there is rarely an answer that is 100% right or wrong, so LLMs are usually refined with reinforcement learning from human feedback: people choose which answers sound closest to the kind of answer they wanted. But facts, exams, and coding are areas with a clear success/failure condition: either you get it right or you don't.
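That human-preference step can be sketched as a tiny Bradley-Terry-style reward model on pairwise choices, which is the core idea behind reinforcement learning from human feedback. The features and preference pairs below are invented for illustration; no real system's data or feature set is being described.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Each answer is a (hypothetical) feature vector, e.g.
# [cites a source, hedges appropriately, rambling length].
# Each pair is (features of the answer raters chose, features of the one they rejected).
pairs = [
    ([1.0, 1.0, 0.2], [0.0, 0.0, 0.9]),
    ([1.0, 0.0, 0.1], [0.0, 1.0, 0.8]),
    ([0.0, 1.0, 0.3], [0.0, 0.0, 0.7]),
] * 200

w = [0.0, 0.0, 0.0]             # reward-model weights
lr = 0.1

def reward(x):
    """Scalar score: higher should mean 'humans prefer this answer'."""
    return sum(wi * xi for wi, xi in zip(w, x))

for chosen, rejected in pairs:
    # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected).
    p = sigmoid(reward(chosen) - reward(rejected))
    grad_scale = 1 - p          # gradient of -log p w.r.t. the score margin
    for i in range(len(w)):
        w[i] += lr * grad_scale * (chosen[i] - rejected[i])
```

After fitting, the model ranks a sourced, hedged answer above a long unsourced one, and that learned reward is what the language model is then tuned against.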

This is where the new o1 model began to break away from human thinking and introduce AlphaGo's incredibly efficient approach of pure trial and error in pursuit of the right result.

The o1 neural network takes its first steps in reinforcement learning

o1 is similar to its predecessors in many ways. Its distinctive feature is that OpenAI added the ability to "take time to think" before responding to a prompt. During this time, o1 generates a "chain of thought" in which it works through the problem and reasons about a solution.

And this is where a new effect appears: o1, unlike previous models, which behave more like autocomplete programs, genuinely "cares" whether it is getting something right or not. Part of this model's training was built around the freedom to approach problems through random trial and error within its chain of reasoning.

It still has only human-generated reasoning steps at its disposal, but it is free to apply them in any order and draw its own conclusions about which steps, in which sequence, are most likely to lead it to the correct answer.
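A toy version of that search makes the mechanism concrete. Here the "reasoning steps" are just arithmetic operations, a deliberately simplified stand-in for human-written reasoning moves, and a verifier supplies the clear right/wrong signal. This illustrates the idea of trial and error over step sequences, not OpenAI's actual training procedure.

```python
import random

random.seed(1)

STEPS = {                        # the human-provided primitive steps
    "double":    lambda x: x * 2,
    "add_three": lambda x: x + 3,
    "square":    lambda x: x * x,
}

def verify(value, target=49):
    """The clear success/failure condition: right or wrong, nothing in between."""
    return value == target

def sample_chain(length=3):
    """Sample a random 'chain of thought': an ordered sequence of steps."""
    return [random.choice(list(STEPS)) for _ in range(length)]

def run_chain(chain, start=2):
    x = start
    for name in chain:
        x = STEPS[name](x)
    return x

# Blind trial and error: sample thousands of chains, keep only the ones
# the verifier accepts. Those survivors are what training reinforces.
successes = [c for c in (sample_chain() for _ in range(5000))
             if verify(run_chain(c))]

# Exactly two step orderings reach 49 from 2:
# double -> add_three -> square  (2 -> 4 -> 7 -> 49)
# square -> add_three -> square  (2 -> 4 -> 7 -> 49)
```

The search discovers both valid orderings on its own, including one a human tutor might not have bothered to point out, which is the flavor of "own conclusions about which steps, in which order" described above.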

And in that sense, this is the first LLM that really starts to build that weird but super-effective AlphaGo-style "understanding" of problem spaces. In the areas where it now exceeds Ph.D.-level capabilities and knowledge, it got there essentially by trial and error: stumbling onto correct answers across millions of self-generated attempts and forming its own theories about what counts as a useful reasoning step and what does not.

So, in domains with a clear division between right and wrong answers, we are watching this inscrutable mind take its first steps on two legs, moving past us. If the world of games is any analogy for real life, then, friends, we know where this is going: artificial intelligence is a sprinter that will keep accelerating as long as it has sufficient resources and energy. Imagine these principles applied to something like mRNA research!

o1 is trained in human language, which is both a tool for describing reality and an obstacle to it. Whichever country's language we take, it remains a crude, lossy reflection of reality. Put it this way: you could spend all day explaining to me what a cookie is, but I will never taste one from your description.

So what happens when we stop describing the physical world to AI and simply let it go eat some cookies? We will get an answer to that question soon, whether we want one or not: neural networks are already being embedded in robot bodies, which are gradually forming their own basic understanding of how the physical world works.

The path of artificial intelligence to the final truth

Freed from the crude human frameworks of Newton, Einstein, and Hawking, this reborn artificial intelligence will apply the AlphaGo approach to understanding the world: interacting with reality again and again, observing the results, and building its own theories, in its own languages, about what happens in the world, what cannot happen, and why.

New artificial intelligence models will not approach reality the way we or animals do. They will not use our scientific method, or divide the study of the world into disciplines like physics and chemistry, or run the same kinds of experiments that helped us master materials, forces, and sources of energy.

Embodied artificial intelligence, given the freedom to learn, will act incredibly strangely. It will do the strangest things you can imagine, for reasons known only to itself, and in doing so it will create and discover knowledge we could never have pieced together.

"Free from all shackles" of our language and thinking, artificial intelligence will not even notice as it crosses the boundaries of our knowledge, uncovering truths about the existence of the Universe or proposing concepts in technology we would not have stumbled upon in a billion years.

The good news is that we have a reprieve of sorts: this will not happen within days or weeks.

Reality is the most complex system we know of and the ultimate source of truth. But it has an awful lot of variables and factors, and it is painfully slow to work with. Unlike a simulation, reality forces you to move slowly and deliberately, learning the facets of the real world step by step.

So embodied neural networks learning from raw reality will not, at first, enjoy the wild speed advantage of their language-trained predecessors. But their evolution will still proceed disproportionately fast, because cooperative groups can pool what they learn through swarm learning.
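One minimal reading of that swarm-learning claim: each robot learns from its own noisy trials, and the group periodically merges its estimates so every member inherits the whole swarm's experience. Everything here (a single physical constant being estimated, simple parameter averaging) is an invented illustration, not any company's actual method.

```python
import random

random.seed(0)

TRUE_VALUE = 9.81               # the quantity every robot is trying to learn

def local_updates(estimate, n_trials=50, lr=0.05):
    """One robot runs noisy physical experiments and refines its estimate."""
    for _ in range(n_trials):
        observation = TRUE_VALUE + random.gauss(0, 1.0)   # noisy measurement
        estimate += lr * (observation - estimate)          # local learning step
    return estimate

swarm = [0.0] * 5               # five robots, all starting ignorant

for _ in range(20):
    swarm = [local_updates(e) for e in swarm]   # independent experience
    merged = sum(swarm) / len(swarm)            # share: average the estimates
    swarm = [merged] * len(swarm)               # everyone inherits the merge
```

After twenty merge rounds the shared estimate sits close to the true value, and each individual robot got there faster than it could have alone, because the averaging step cancels out much of each robot's private measurement noise.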

From intellectual steps to physical ones

Companies like Tesla, Figure, and Sanctuary AI are working feverishly to build humanoids to a standard that is commercially viable and cost-competitive with human labor. Once they achieve this (if they achieve it), they can deploy enough robots to start building that basic, trial-and-error understanding of the physical world.

It’s funny to think, but these humanoids could learn to master the universe in their free time.

OpenAI's o1 model may not look like a quantum leap. Confined to a text box, GPT still looks like just another typewriter. But it really is a step forward in AI development, and a glimpse of how exactly these machines of the future will eventually surpass humans in every imaginable field of activity.


Read more on artificial intelligence, transhumanism, the singularity, and human capabilities in symbiosis with technological progress in the Telegram channel. Subscribe so you don't miss the latest articles!
