What is left for humans

While ChatGPT sets records for its number of users, and Microsoft and Opera build technologies based on it into their products, Yura Chainikov, managing partner of rdl by red_mad_robot, talks about artificial intelligence, neural networks, and our rapidly changing understanding of thinking.

About the limits of thinking

The science fiction writer Arthur C. Clarke argued that any sufficiently advanced technology is indistinguishable from magic. I see Clarke's thesis playing out ever more clearly across more and more areas of human intellectual activity.

Understand, realize, think, reason, reflect, draw conclusions, hold context, invent the unprecedented, convey meaning, retell, communicate, draw from a description, learn new things, lie deliberately: we distinguish many terms that capture thinking, reasoning, and reflection in very different ways. For us, these are all distinct words. But for the vast majority of people they are simply facets of a generalized intelligence, a familiar tool for living life, and the boundaries between these words are quite blurred.

About Searle’s room

In 1980, the American philosopher John Searle published an article titled "Minds, Brains, and Programs" in the journal Behavioral and Brain Sciences, in which he described the thought experiment known as the "Chinese room".

Imagine an isolated room containing a person who does not know a single Chinese character. But he has precise instructions for manipulating them, such as "take such-and-such a character from the first box and place it next to such-and-such a character from the second box." The instructions contain no information about the meaning of these characters: the person simply follows them, like a computer.

An observer who can read Chinese passes characters forming questions through a slot into the room, expecting a meaningful, reasonable answer in return. The instructions are designed so that, after all the steps are applied to the characters of the question, they are transformed into the characters of the answer. In effect, the instructions are a kind of computer algorithm, and the person executes that algorithm exactly as a computer would.

In such a situation, the observer can send any meaningful question into the room and receive a meaningful answer, just as in a conversation with someone fluent in written Chinese. Meanwhile, the person inside the room knows nothing about the characters and cannot learn to use them, since he cannot discover the meaning of even a single one. He understands neither the original question nor his own answer. Yet the observer may well conclude that there is someone in the room who knows and understands Chinese.
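The mechanics of the room can be sketched as a purely syntactic lookup. This is a toy illustration: the rulebook entries and characters are invented, and a real rulebook would be vastly larger, but the point stands that nothing in the procedure requires understanding the symbols.

```python
# A toy "Chinese room": the operator applies purely syntactic rules,
# mapping input symbols to output symbols. The rulebook entries here
# are invented for illustration; their meanings are opaque to the
# operator, who only matches shapes.
RULEBOOK = {
    "你好吗": "我很好",      # a question/answer pair the operator cannot read
    "你是谁": "我是一个房间",  # another pair, equally meaningless to him
}

def operator(symbols: str) -> str:
    """Follow the instructions mechanically; no understanding involved."""
    # Unknown input maps to a placeholder symbol, per the (invented) rules.
    return RULEBOOK.get(symbols, "？")

print(operator("你好吗"))  # the observer sees a fluent reply
```

From the outside, the room passes for a fluent speaker; from the inside, it is nothing but `dict.get`.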

If we remove the person from this reasoning and leave only the algorithm, a deeply philosophical question arises: does the algorithm understand, does it think, does it reason, does it draw conclusions?

About ChatGPT and other neural networks

In attempts to define intelligence, people have repeatedly declared as properties of the mind whatever skills had not yet been technologized. Much of what was once considered a sign of unconditional human intelligence and thinking is now done by programs better than humans, or on the borderline of "better/worse":

  • converting speech to text and back,

  • recognizing faces and objects,

  • playing chess, Go, and poker,

  • playing any game without being told the rules,

  • voice communication (Siri, Alice),

  • generating music, video, and images from a text description while preserving style,

  • changing style, body parts, or hair color on a text request,

  • code deobfuscation,

  • solving Unified State Exam problems at the 50-point level,

  • passing a final MBA exam with the equivalent of a B.

Some of these things now seem self-evident to us: "computers have always been able to do this, what's so intelligent about that?" And some argue that "pattern recognition was never a sign of intelligence."

The ChatGPT neural network holds dialogues in natural language. On a number of questions it comes across as a chatty fifth-grader with university-level knowledge, and it can sustain human conversation for quite some time. I tested how it behaves and found that it can be made to solve a noticeable share of the standard algebra and statistics problems from the Unified State Exam, simply by phrasing the request as "imagine that you are a mathematician, solve the problem."
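The role-priming trick described above can be sketched as a small prompt-building helper. The function name and the exact preamble wording are illustrative assumptions, not a fixed API; the point is only that the problem text gets wrapped in a role-setting instruction before being sent to the model.

```python
# Hypothetical sketch of the role-priming technique from the text:
# wrap a problem statement in a "you are a mathematician" preamble.
# The preamble wording and function name are illustrative, not canonical.
def role_prompt(problem: str) -> str:
    """Build a role-primed request for a chat model."""
    preamble = "Imagine that you are a mathematician. Solve the problem:"
    return preamble + "\n" + problem

prompt = role_prompt("Solve for x: 2x + 6 = 14")
```

The resulting string would then be sent to the model as an ordinary chat message; the role framing alone is often enough to change how the model approaches the task.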

And this is the amazing thing happening to us right now. We already perceive the first eight items on the list above as commonplace, and we stop counting them as signs of mind or parts of thinking. It turns out there is less and less that a person does better than a machine.

About what a machine can do better than a person

What can a machine do better than a human? Or well enough to replace a human in some area?

Let's take medicine. The machine may not analyze X-rays and CT scans better than first-class specialists, but it already does so better than the average doctor in the average hospital. In checking drug compatibility in complex, multi-drug treatments, it definitely outperforms a person: it misses dramatically fewer incompatibilities that lead to complications or death. For an ordinary person this task is genuinely hard, or, more precisely, very expensive, since a great deal has to be double-checked manually. It was within the power of a Dr. House, but it is beyond an ordinary person. And the machine does it better than we can.

AI makes decisions, observes reality, and draws conclusions about it. As soon as we narrow a task down to something very specific, it often turns out that modern deep neural network technologies are already mature enough to do that piece of the task better than a person.

About risks

What can we, as actors, do about this situation? Whatever role we play, we should strive to watch how currently emerging AI technologies work in our own subject areas.

A designer needs to know about Stable Diffusion. A writer, about ChatGPT. An effective manager, about brain-computer interfaces with speech decoding. A doctor, about AI-based tools for preventing drug incompatibilities. A biochemist, about AI-driven approaches to designing substances. An urban designer, about traffic-light network management techniques, AI recommendations for siting a shopping center, and so on.

Some of these examples have emerged only in the last few months. Some of them scare me. As these things get smarter, better at holding context and solving specific problems, and do it in natural human language, they become a tool stronger than an atomic bomb.

If a person holds a technology that, for ten cents per message, can individually conduct dialogues with a hundred million people, convincing them, for example, to vote for a certain candidate, the temptation to use it that way will be enormous. And if there are many such actors, we risk quickly finding ourselves in a situation where the average person goes mad because several competing systems are intensively washing his brain, since a shattered psyche is easier to nudge toward this or that action.

Yes, people are weak, greedy, and stupid. But to think that artificial things will be more perfect than we are in this respect strikes me as unjustified optimism. First, when we train them, they learn our own sins. Second, once we start using them at will, they become a huge amplifier of what used to be much harder to do, such as manipulating public opinion and consciousness. Nothing of this scale has ever happened before.

About where it’s all going

Thinking is no longer a purely human skill, but a complex "centaur" skill of joint human-machine systems.

A great deal of activity in the modern economy involves handling physical objects and is tied to primitive logistics. A person takes an object, moves it into a box, packs the box, puts it on a conveyor, the conveyor moves on, the box is loaded into a truck, taken out of the truck, and carried to a warehouse. By various estimates, such human operations make up from 20 to 80% of the total, depending on the industry and the final product. And these things cannot yet be entrusted to machines, because the machines are not smart enough. The key word is "yet".

The rate of progress of deep neural networks, large language models, and image-processing methods in the realm of "understanding" is impressive. Just two years ago, I could not have imagined that I would talk to ChatGPT in human language and it would answer questions while grasping the context.

Text models and neural networks "look" at our physical reality and "understand" the circumstances they are in. They are progressing so rapidly that every day another piece of activity can be entrusted to them. And when they begin to understand phrases like "make me coffee and bring my slippers" or "put the dairy products in aisle 7," a huge number of fields will die out as spheres of human activity.

We thought robots would replace us in simple human activities, but the fastest progress is now in the creative fields. For example, Amazon has successfully sold about 200 books co-written by ChatGPT and a human. I passed the English exam for graduate school at MAI using this thing. It works with me more effectively than I work alone.

In general, the world will change beyond recognition because of this. And what thinking is, and where the boundaries of our terms for it lie, is a big open question. We can digitize a large corpus of a person's texts, videos, and speeches, and within that digitized entity an increasingly authentic personality, a character model, will emerge. And yes, it will be possible to talk to it almost as with a living person.

We are not merely on the threshold of a new industrial revolution in this direction. We are already inside it.

By the way, we have an open Data Scientist position.

Who worked on this material:

  • text – Yura Chainikov,

  • editing – Vitalik Balashov,

  • illustrations — Marina Chernikova.

We share hands-on expertise from practitioners in our Telegram channel red_mad_dev and post useful videos on the YouTube channel of the same name. Join us!
