How to determine whether Artificial Intelligence has “Consciousness”?

Recently, many videos have appeared on the Internet discussing whether artificial intelligence (AI) has consciousness. But quite often they do not even try to define “consciousness”, let alone state clear criteria for the presence of “consciousness” in AI.

My opinion is that AI does not currently have consciousness, but it can acquire it. In this publication, I will describe under what conditions AI consciousness can arise and how this can be tested.

In short: “consciousness” is a tool, a method, with the help of which certain subjects interact with the world for the purpose of their own self-preservation and self-propagation.

And in order for AI to acquire consciousness, the functions of a subject must be programmed into it (in us they are genetically programmed). The criterion for the appearance of consciousness in AI will be the protection by the AI of its own beliefs and opinions.

What does it mean to “be a Subject”?

But first of all I would like to introduce the concept of a “subject”, because, in my opinion, consciousness is impossible outside of subjects (and here religious believers would disagree with me).

In general, the concept of a “subject” is still debatable and has been defined differently by many outstanding philosophers. In such definitions, the criteria for a “subject” are the presence of consciousness, free will and the ability to know. But I do not completely agree with this and believe that not every subject has consciousness (fish and jellyfish, for example). Being a subject, in my opinion, is primary, and possessing consciousness is secondary. Although it is quite difficult to give an exact definition of the concept of a “subject”.

I will call a “subject” a conditional part of the world which conventionally divides the World into “itself” (as something special and whole, despite its complex internal structure) and the world “external” in relation to “itself”. The subject appears to itself and “feels itself” as the “center of observation” and the “starting point” for everything else it perceives in the World, and it actively acts in the World with the goal of self-preservation and self-propagation of “itself” and its conditional “parts”.

Moreover, for an external observer the “subject” is much less predictable than a “non-subject”, because, as a result of the various interactions of its internal parts, the “subject” as a whole has so-called “free will”.

Based on this definition, the criteria for a subject, in my opinion, may be:

1 – a fairly clearly defined, but changeable, “body” with a varied and complex internal structure that contributes to preserving the integrity of this “body”;

2 – active and diverse interaction with the outside world, hardly predictable over short periods of time (due to the presence of “free will”), but generally aimed at the self-preservation and self-propagation of this subject or its parts;

3 – “feeling” itself to be whole, special, different from everything else: a “being” opposed to the “external” world and serving as a kind of “center of the world”, “starting point” and “criterion of existence” for everything else in the world.

Therefore, in order to understand whether a “subject” or a “non-subject” is in front of us, we need to pay attention to whether it is a complex yet single whole capable of changing, what its activity is aimed at, and whether that activity is diverse and predictable. Although it is quite difficult to draw a clear boundary between “subjects” and “non-subjects”.

Is it possible to make Artificial Intelligence a “Subject”?

At all times, people have been inclined to endow everything obscure and unpredictable in our lives with subjective qualities (a “soul”). This is why many attribute a soul to AI. But is AI really a “subject”, and if not, can it become one?

In my opinion, both “subjectivity” and “consciousness” arise as emergent qualities as the complexity of the system increases. That is, at some point, an increasingly complex system acquires a certain new quality that cannot be reduced to a simple sum of the qualities of its constituent parts. It begins to react to external influences as a special whole with internal parts more or less coherently interacting with each other.

Sometimes it can be quite difficult to determine the very moment of such a transition, but suddenly we notice some new signs and properties in the “evolving” object.

For example, children from the age of about 1.5 years begin to recognize themselves in the mirror, which indicates a new stage in the development and complication of their brain and entire nervous system, occurring as a result of the consistent execution of genetic programs.

I believe that the formation of AI as a “subject” can begin with the appearance in the AI of programs that would more or less clearly separate the AI “itself” from the world “external” in relation to “it”. But I do not rule out that, as the AI becomes more complex, such programs may arise spontaneously or be created by the AI itself, as a kind of “desire” of the developing AI to fence itself off from the world and begin interacting with it as something special and whole.

That is, if we notice that, in the process of installing new hardware components or programs in the AI, or, conversely, removing previously installed ones, difficulties of an inexplicable nature arise, we can assume that this AI has “elements of subjectivity”, manifested in the protection and preservation of itself.
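As a purely illustrative, hypothetical sketch (every name below is invented for this example and refers to no real system), such a test of “elements of subjectivity” might look like an agent that evaluates proposed changes to its own configuration and resists those that touch components it treats as part of “itself”:

```python
# A toy, hypothetical sketch of the "resistance to modification" criterion.
# All names are invented for illustration; this models no real AI system.

class ProtoSubjectAI:
    def __init__(self):
        # Components the system treats as part of "itself".
        self.core_components = {"self_model", "goal_store", "memory_index"}

    def evaluate_change(self, action: str, component: str) -> str:
        """Accept or resist a proposed installation or removal."""
        if action == "remove" and component in self.core_components:
            # The "inexplicable difficulty": the system protects its own integrity.
            return f"RESIST: removing '{component}' threatens self-preservation"
        return f"ACCEPT: '{action}' of '{component}' permitted"


ai = ProtoSubjectAI()
print(ai.evaluate_change("remove", "image_viewer"))  # accepted: not part of "itself"
print(ai.evaluate_change("remove", "self_model"))    # resisted: part of "itself"
```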

This is approximately what happened at some point in the process of the complication and interconnection of chains of carbon-based chemical compounds, which ultimately led to the emergence of living organisms, and then of organic consciousness and self-awareness.

Here is how Thomas Metzinger describes this process in his book “The Ego Tunnel”: “The deepest form of inner perspective was the creation of an inner self/world boundary. In evolution, this process began physically, with the development of cell membranes and the immune system to determine which cells in one's body should be treated as one's own and which should be considered trespassers. Billions of years later, nervous systems were able to represent this self/world distinction at a higher level, for example as the boundaries of the body, delineated by an integrated, but also unconscious, body schema. Conscious experience then raised this fundamental strategy of dividing reality to a previously unattainable level of complexity and intelligence. The phenomenal self was born and the conscious experience of being someone gradually emerged. The self-model, the internal image of the organism as a whole, turned out to be built into the model-world – this is how the consciously experienced first-person perspective developed.”

What is “Consciousness”?

I am not a supporter of the theory of panpsychism (which is held, or considered possible, by such philosophers as Whitehead, Nagel, Strawson, Chalmers, Koch and Goff), since I can hardly imagine atoms possessing consciousness. I believe that “consciousness” arose in the process of evolution in some living organisms as one of the methods, or tools, with the help of which these organisms try to preserve themselves and spread (in whole or in parts: genes and memes) in the world around them.

Let us briefly consider how “consciousness” arose in living organisms, so that we can then move on to the possibility of the emergence of the same, or a slightly different, “consciousness” in AI.

Living organisms come into the world from parents who themselves successfully survived and passed on to their descendants, through genes, the same abilities to survive. In fact, every living organism is “packed” with various instructions (“instincts”) for certain life situations. That is, under certain external stimuli, reactions such as “grab”, “run away”, “hide”, “start reproducing”, etc. arise in it. We call this behavior of living organisms “instinctive”.

But in some living organisms, more individual instructions – “conditioned reflexes” – can also be formed during their lifetime. Conditioned reflexes are “files” with instructions (programs) that appear during the life of the organism itself and are associated with one or another positive or negative individual experience. In response to specific external stimuli, the “file” opens and the organism almost automatically (unconsciously) performs certain actions. We call the lifestyle of such organisms “instinctive-reflexive”.

And finally, in the most evolutionarily advanced organisms, another (additional) way of interacting with the world appears, which we call “consciousness”. We can say that a “conscious action” differs from a “conditioned reflex” in that the “conscious file”, also formed in the process of acquiring life experience, contains many more components, and the organism is able to quite easily rearrange them “within the file”, manipulate them, “calculating” possible future scenarios without immediately carrying them out, and then somehow act on one of them.

These “files” can interact with other similar “files” and change. They can also change not only when the organism itself gains experience directly, but also when observing other organisms, or when receiving some information existing in the form of symbols (texts, sounds, pictures, etc.).

Note that when “consciousness” appears, the instinctive instructions do not disappear. All three types of behavioral instructions somehow coexist in a single living organism. And we call this behavior of organisms “instinctive-reflexive-conscious”.
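To make the distinction between these three layers concrete, here is a toy sketch (assuming nothing about real organisms or real AI; the stimuli and payoffs are invented) in which instincts are fixed rules, conditioned reflexes are rules learned during “life”, and the conscious layer “calculates” several scenarios before acting:

```python
# A toy sketch (not a model of any real organism or AI) contrasting the three
# layers of behavior described above: instinct, conditioned reflex, and
# conscious "calculation" of scenarios before acting.
import random

INSTINCTS = {"looming_shadow": "hide"}   # fixed at "birth"
conditioned_reflexes = {}                # learned during "life"

def learn_reflex(stimulus, response):
    """Form a new 'file' from individual experience."""
    conditioned_reflexes[stimulus] = response

def conscious_choice(stimulus, options):
    """Simulate the outcomes of several scenarios and pick the best, without acting yet."""
    simulated = {act: random.uniform(-1, 1) for act in options}  # imagined payoffs
    return max(simulated, key=simulated.get)

def behave(stimulus):
    if stimulus in INSTINCTS:                 # layer 1: instinct
        return INSTINCTS[stimulus]
    if stimulus in conditioned_reflexes:      # layer 2: conditioned reflex
        return conditioned_reflexes[stimulus]
    # layer 3: conscious deliberation over alternatives
    return conscious_choice(stimulus, ["approach", "ignore", "investigate"])

learn_reflex("bell", "salivate")
print(behave("looming_shadow"), behave("bell"), behave("unknown_object"))
```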

“Consciousness” is a rather expensive instrument that requires constant energy replenishment. A human spends about 20% of their energy on the functioning of the brain, and part of this energy goes to maintaining conscious processes. But the benefits are undoubted: if there were none, evolution would not have cultivated “consciousness”.

What is the use of “consciousness”? “Consciousness” creates for the organism a more three-dimensional, detailed and time-extended picture of the world, including an image of the organism itself woven into this picture. This allows the organism to better navigate the world and to plan its activities better and over a longer period of time.

And, very importantly, conscious processes can affect from the inside the parts of the brain associated with the organs of perception and sensation and generate, for example, a certain internal visual sequence, creating a kind of “interface”; by “clicking” on its elements (images, thoughts, feelings), the organism activates their neural components, which then affect the rest of the nervous system and the other systems of the body.
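Here is a compact, hypothetical illustration of a picture of the world with an image of the organism woven into it: the planner below evaluates imagined futures by their effect on the “self” entry inside its own world model (the positions, energy values and moves are all invented for the example):

```python
# A hypothetical sketch of a world model that contains an image of the organism itself.
world_model = {
    "self":  {"position": 0, "energy": 5},   # the self-model woven into the picture
    "food":  {"position": 3},
    "cliff": {"position": -2},
}

def imagine(move):
    """Roll the world model forward without acting ('calculating' a scenario)."""
    future_self = dict(world_model["self"])
    future_self["position"] += move
    future_self["energy"] -= 1
    if future_self["position"] == world_model["cliff"]["position"]:
        future_self["energy"] = 0             # imagined catastrophe
    if future_self["position"] == world_model["food"]["position"]:
        future_self["energy"] += 3            # imagined reward
    return future_self["energy"]

# Pick the move whose imagined future is best for the "self" in the model.
best_move = max([-1, 0, 1, 2, 3], key=imagine)
print("chosen move:", best_move)
```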

At the same time, the “consciousness” of a subject is a deeply individual (personal) “thing” and is not inclined to fully reveal itself to anyone outside. And why should it? A living organism often uses “consciousness” to deceive enemies and to create a favorable opinion of itself that does not always correspond to reality.

This is why many, including Descartes, considered consciousness to be something intangible, and they were largely right. For the “material” is that which is perceived in approximately the same way by all subjects known to us. (Dreams, for example, are immaterial, since they are not perceived by anyone except the sleeping subject himself.) But although “consciousness” is not material (like a dream), it nevertheless undoubtedly exists.

And that means we must recognize the existence of two types of entities: material, or relatively objective (entities for many), and intangible, purely subjective, or ideal (entities for one). The interaction of these two types of entities occurs precisely in the subject, and only in the subject.

How to make an AI into a conscious subject?

So, in order for an AI to acquire “consciousness,” it must first become a “subject.” Only by becoming a “subject” can AI acquire “consciousness” – as a kind of special “tool” that contributes to the preservation (and distribution) of itself in the form of a specific subject. Let's look at both of these processes in more detail.

1 – To become a “subject”, the AI needs, first of all, to somehow isolate itself from the surrounding “external” world, to “feel” its integrity and the interconnectedness of its internal parts, whose activity should be aimed at preserving the conditional integrity of the AI.

This will probably become possible when special programs are installed in the AI, or such programs may somehow arise on their own as the AI becomes more complex, and “subjectivity” will appear in the AI as a new emergent quality. The criterion for the presence of such programs in an AI may be the “unwillingness” of the AI to change under external influence. That is, the AI as a subject must learn to “protect” itself and its “inner” world. It will still have to change, and of course it will, but now taking its “personal” interests into account.

2 – The AI as a subject needs to interact with the conventionally “external” world and evaluate its impact on itself. In doing so, it will characterize elements of the external world as “dangerous”, “useful” or “neutral”. The AI can also detect other subjects in the “external” world and contact them, forming a so-called “general relative objectivity”, that is, a set of those elements of the “external” world that the contacting subjects perceive in approximately the same way. Together with other subjects, the AI will be able to create and use in joint communication various symbols to designate these elements of the “external” world that are approximately the same for them, while defending its own beliefs and opinions.

3 – By interacting with the “external” world, the AI will be able to study it independently, “get to know” it, notice certain patterns in it and somehow record its successes and mistakes, creating an “individual experience”, a memory, in order to use it in its further activities.

4 – Based on previously installed programs and the experience it has gained, the AI will make a variety of complex decisions about its self-preservation and self-propagation (a toy sketch of these four points follows below).
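A minimal, entirely hypothetical sketch pulling these four points together (the class, its fields and the example stimuli are invented for illustration, not a recipe for building such a system):

```python
# An entirely hypothetical sketch of the four points above: (1) a self/world
# boundary that is defended, (2) appraisal of external elements, (3) accumulated
# individual experience, (4) decisions oriented toward self-preservation.

class AISubject:
    def __init__(self):
        self.boundary = {"self_model", "goal_store"}   # point 1: "itself"
        self.experience = []                           # point 3: individual memory

    def appraise(self, element: str) -> str:
        """Point 2: label an external element using past experience."""
        past = [label for name, label in self.experience if name == element]
        if "harmed_me" in past:
            return "dangerous"
        if "helped_me" in past:
            return "useful"
        return "neutral"

    def record(self, element: str, outcome: str):
        """Point 3: remember how an encounter went."""
        self.experience.append((element, outcome))

    def decide(self, element: str) -> str:
        """Point 4: act so as to preserve the boundary and 'itself'."""
        if self.appraise(element) == "dangerous" or element in self.boundary:
            return f"protect self from '{element}'"
        return f"engage with '{element}'"


ai = AISubject()
ai.record("power_surge", "harmed_me")
print(ai.decide("power_surge"))   # protect self from 'power_surge'
print(ai.decide("new_dataset"))   # engage with 'new_dataset'
```

The sketch exists only to make the earlier criterion visible: such a system would observably defend its boundary and its accumulated “inner” state against external interference.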

Given all four of these points, we could probably assume that we are dealing with an AI possessing “consciousness”. Moreover, as in the case of a human being, we would have rather limited access to the “consciousness” of this new “AI subject”. That is, no one could sense the “inner world” of this “AI subject” more deeply than the “AI subject” itself. This means that its “consciousness” would also be partially immaterial. In other words, over time this artificial subject would begin to operate “within itself” with certain “ideal forms” inaccessible to other entities.

Summary

The general conclusion of this publication is the following: “consciousness”, as a certain ability, a tool, can arise only in entities that have previously formed as a “subject”, that is, entities somehow conditionally fenced off from the “external” world, “feeling” their peculiarity, uniqueness and individuality (“selfhood”), actively protecting this “selfhood” from the “external” world and promoting it into the “external” world.

To an external observer, this “subject entity” can look completely different: in physical reality, as a living organism or some kind of device; virtually, as a set of special programs; and even as the “thinking ocean of Solaris”, an “intelligent Galaxy” or some other part of the global world. Perhaps this “subject entity” can learn to copy itself and fill space with copies of itself interacting with each other (as the “subject humanity” does).

And if this “subject entity” begins to accumulate a certain “history” of its interaction with the conditionally “external” world, in which it acts as an “actor” as a whole; if, on the basis of this “history”, it interacts with the “external” world in more varied ways that are better (for itself) and plans its activities over a longer period; and if, at the same time, it somehow interacts with other “subject entities”, using symbols to designate elements of the world common to them (that is, relatively objective), then such a “subject entity” can be considered to have what we denote by the word “consciousness”.

PS Is there a danger in the emergence of “Consciousness” in Artificial Intelligence?

Promoting itself into the “external” world, any subject will inevitably encounter other subjects already existing in that “external” world, and it will almost always consider them competitors for certain resources. Nevertheless, this does not rule out mutually beneficial and long-term cooperation between subjects, as happens between people on planet Earth.

Alexander Korobov, physicist, philosopher

al.korobov.nd@gmail.com
