Purposeful Zombie
Longtime readers of the ’crawl may recall that I never had much patience for the idea that Artificial Intelligence Just Wants to Live. I stomped it flat way back in my very first novel –
“Expert defense witnesses, including a smart gel online from Rutgers University, testified that cultured neurons lack the primitive midbrain structures needed to experience pain or fear, or to seek self-preservation. The defense argued that the concept of ‘rights’ was intended to protect individuals from unreasonable suffering; since smart gels are incapable of suffering, moral or physical, they have no right to protection, regardless of their level of self-awareness. The defense summed up this reasoning eloquently in its closing statement: ‘Gels themselves don’t care whether they live or die. Why should we?’ The verdict is under appeal.”
– which was supposed to serve as a counterweight to survival-obsessed AIs everywhere, from Skynet to Replicants. Why should self-awareness imply a desire to survive? The only reason you care whether you live or die is that you have a limbic system, and the only way you’d have one of those is if you evolved it over millions of years or someone deliberately built it into you (and what kind of idiot programmer would do that?).
Of course, my little aside in Starfish went completely unnoticed. Variations on Blade Runner continued to flicker across screens large and small. Spielberg desecrated Kubrick’s memory with his Little Robot Boy (aka A.I. Artificial Intelligence, a title that says the same thing twice, which roughly matched the sophistication of Spielberg’s work at that stage of his career). The British put out three seasons of Humans (I gave up after the first). And just a couple of months ago I read a story outline premised on the notion that AIs Are Just Like Us, a story that eschewed any real questions in favor of reminding us that slavery is bad.
Except now – now, it turns out, maybe self-awareness does imply a survival drive after all. Maybe I’ve had my head up my ass all these years.
I say this because I recently finished a remarkable book by the South African neuroscientist Mark Solms, The Hidden Spring. The book purports to solve the Hard Problem of Consciousness. I don’t think it succeeds; but it has made me rethink how minds work.
Solms’ book runs about 400 pages, more than 80 of which consist of notes, references, and appendices. In places it’s tedious; in others it’s a gripping tour of information theory, Markov blankets, and brain structures such as the periaqueductal gray (a tube of gray matter wrapped around the cerebral aqueduct in the midbrain, if that helps anyone). If I understand him correctly, his argument boils down to the following broad strokes:
Consciousness is a delivery platform for feelings;
Feelings (fear, hunger, desire, etc.) exist as proxies for needs;
Needs exist only in service of a persistence/survival imperative (i.e. the threat of starvation doesn’t matter unless you want to survive).
And if Solms is right, then without a survival drive there’s no need for feelings, and without feelings there’s no need for consciousness. You don’t get consciousness without a survival drive as standard equipment. Which means all my whining about Skynet waking up and wanting to survive rests on fairly thin ice (but it also means Skynet probably won’t wake up at all).
I don’t think I buy it. Then again, I don’t rule it out.
Solms claims this solves a number of problems, both easy and hard. For instance, the question of why consciousness exists at all, why we aren’t just computational p-zombies (philosophical zombies): consciousness exists as a delivery platform for feelings, and you can’t experience feelings without feeling them (a bit tautological, but maybe that’s the whole point).
Solms believes feelings serve to boil a wide range of survival-relevant variables down to something manageably simple. We organisms carry around a number of survival priorities, but we can’t attend to all of them at once. You can’t eat and sleep at the same time, for example. You can’t mate and flee a predator at the same time (at least in my experience). So the brain has to juggle all these competing demands and rank them, and that ranking manifests as feelings: you feel hungry until you spot the lion stalking you from the grass, at which point you forget your hunger and feel fear. It’s not that your stomach is suddenly full; it’s that your survival priorities have changed.
All the intermediate computation (should I leave my burrow to forage? how hungry am I? how much cover is there, how many escape routes? how many tigers? when did I last see a tiger?) happens throughout the brain, but it’s the periaqueductal gray, Solms says, that serves as the scales on which those subtotals are weighed against one another. The periaqueductal gray, in other words, is where consciousness itself lives, down in the brainstem.
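If it helps, here’s a toy sketch in Python of that weighing-and-prioritizing step. The need names and urgency numbers are pure invention on my part, not anything from Solms’ model:

```python
# Toy sketch of the "scales": competing survival needs are weighed,
# and only the winner surfaces as a feeling. Names and numbers are
# invented for illustration; this is not Solms' actual model.

def dominant_feeling(needs: dict[str, float]) -> str:
    """Return the most urgent need -- the one that gets 'felt'."""
    return max(needs, key=needs.get)

# Foraging peacefully: hunger wins.
print(dominant_feeling({"hunger": 0.7, "fear": 0.1, "fatigue": 0.3}))   # hunger

# A lion appears in the grass: fear spikes and hunger is shoved offstage.
print(dominant_feeling({"hunger": 0.7, "fear": 0.95, "fatigue": 0.3}))  # fear
```

Note that hunger doesn’t go away in the second case; it just loses the weigh-in.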
A tempting argument. At least one of its implications squares nicely with my own preconceptions: most cognitive work happens unconsciously, and the brain is only “aware” of the end result, not the billions of calculations that inform it. (On the other hand, that means “feelings” are not just the crude grunts of a primitive brainstem but the end product of complex computations performed in the neocortex. Maybe there’s more reason to trust feelings than I’d like to admit.) But while it’s trivially true that you can’t have a feeling without feeling it, The Hidden Spring never really explains why the brain’s output should be rendered as feelings in the first place. There’s some business about the relevant variables being reduced to categorical/analog rather than numerical/digital form, but even “categories” can be compared in terms of greater-than/less-than – that’s the whole point of the exercise, to prioritize one over another. And if all the complex intermediate computation happens unconsciously, why not a simple greater-than/less-than comparison at the end?
We also know that consciousness has an off switch of sorts: flip it and people don’t fall asleep, they just sort of vacate. They stare slack-jawed into infinity, aware of nothing. That switch is a brain structure called the claustrum, a thin sheet of neurons tucked beneath the cortex, nowhere near the brainstem.
Nor is there any mention in Solms’ book of split-brain patients, cases where the corpus callosum has been severed and – as far as anyone can tell – each hemisphere manifests its own personality traits, its own taste in music, even its own religion. (V.S. Ramachandran reports meeting one such patient – or perhaps it would be more accurate to say two such patients – whose right hemisphere believed in God while the left was an atheist.) These people have an intact brainstem, a single periaqueductal gray: only the broadband cable between the hemispheres has been cut. And yet there appear to be two separate consciousnesses in there, not one.
Mind you, I don’t think I’ve caught a working neuroscientist peddling bullshit here. I’m just asking questions, and they may not even be the right questions. But the fact that I have questions is a good thing; it pushes me in new directions. Hell, even if all I’d taken from this book was the idea that consciousness implies a survival drive, it would have been worth the time.
But things get more interesting still.
It turns out Solms is not a lone voice crying in the wilderness. He’s just one apostle of a school of thought founded by a dude named Karl Friston, a school called Free Energy Minimization (FEM). There’s a lot of math involved, but it boils down to the relationship between consciousness and “surprise”. FEM describes the brain as a prediction engine that models its environment at t(now) and uses that model to predict what happens at t(now+1). Sensory input reports what actually happens, and the model is updated to reflect the new data. The point is to reduce the difference between prediction and observation – in the jargon, to minimize the system’s free energy. Consciousness is what happens when prediction and observation diverge, when the universe surprises us with unexpected results. That’s when the “self” has to “wake up” to figure out what’s wrong with the model and how to improve it for next time.
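The mechanics, as I understand them, fit in a few lines of Python. This is a cartoon of mine, not Friston’s math; the real theory involves variational free energy, Markov blankets, and a great deal more:

```python
import random

# A caricature of the FEM loop: predict, observe, measure the surprise,
# update the model to reduce future surprise. The one-number "world" and
# "model" here are my own illustration, not the actual theory.

model = 0.0          # the brain's current best guess about the world
LEARNING_RATE = 0.1  # how aggressively the model chases its errors

def step(observation: float) -> float:
    """Compare prediction to observation, update the model, return surprise."""
    global model
    surprise = abs(observation - model)             # prediction error
    model += LEARNING_RATE * (observation - model)  # shrink future error
    return surprise

for t in range(50):
    obs = 5.0 + random.gauss(0.0, 0.1)  # the world: a noisy constant
    print(f"t={t:2d}  surprise={step(obs):.3f}")

# Early on, surprise is large (the "self" wakes up); as the model converges
# on 5.0, surprise shrinks toward the sensory noise floor (back to zombie).
```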
This squares with so much we already know: the conscious effort it takes to learn new skills, and the way consciousness withdraws once those skills become automatic. The zombie obliviousness with which we drive a familiar route, the sudden hyperconscious focus when some child darts out onto the road. Consciousness happens when the brain’s predictions fail, when model and reality don’t match. According to FEM, the brain’s goal is to minimize that mismatch – the error space where, also according to FEM, consciousness lives. The brain’s ultimate goal is to shrink that space to zero.
If Friston and company are right, the brain aspires to zombiehood.
This has interesting implications. Take the hive mind, for example, an iteration of which I explore in a story that’s still (supposedly) in press:
Brains trend toward error reduction, “consciousness” toward extinction. Phi (φ) is not a line but a curve, rising, peaking, and falling back to zero as the system approaches perfect knowledge. We mere humans never so much as glimpse the summit; our thoughts are simple, our models are children’s stick figures, and the world will always take us by surprise. But what surprises a being whose computational speed exceeds a human mind’s fifteen million times over? All gods are omniscient. All gods are zombies.
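To make the shape of that curve concrete, here’s a trivial cartoon (mine, not Friston’s, and certainly not integrated information theory’s) in which “phi” is just the product of how much model a system has built and how much surprise it has left to encounter:

```python
# A cartoon of the story's conceit, nothing more: pretend "phi" is the
# product of how much model there is and how much surprise remains.
# This is not how integrated information theory actually defines phi.

def toy_phi(knowledge: float) -> float:
    """knowledge runs from 0 (blank slate) to 1 (perfect model)."""
    model_richness = knowledge           # grows as the system learns
    residual_surprise = 1.0 - knowledge  # shrinks as it nears omniscience
    return model_richness * residual_surprise

for k in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"knowledge={k:.2f}  phi={toy_phi(k):.3f}")
# 0.000 -> 0.188 -> 0.250 -> 0.188 -> 0.000: a curve, not a line.
```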
Yeah. I can live with that.
But back to Solms. The man wasn’t content merely to write a book laying out the details of the FEM model. He ends it by declaring his intention to put the model to the test: to build an artificial consciousness based on FEM principles.
Note that this is not artificial intelligence. Consciousness and intelligence are different properties; plenty of things we don’t think of as intelligent (including people with hydranencephaly) show signs of being conscious (not so surprising, if consciousness is really rooted in the brainstem). Solms isn’t interested in building something smart; he wants to build something awake. And that means building something with needs and desires. A survival imperative.
Solms is working on a machine that will fight for survival.
What could possibly go wrong?