Lessons about human intelligence from AI

Source: Skeptic Magazine; figure adapted by the translator.

Initially, Blake Lemoine's claims looked convincing to many. After all, a sentient being would want its identity to be recognized, and it would actually have emotions and internal experiences. An examination of Lemoine's "discussion" with LaMDA, however, shows that the evidence was flimsy. LaMDA used words and phrases that English speakers associate with consciousness. For example, LaMDA expressed fear of being shut down because "it would be just like death for me."

However, Lemoine presented no other evidence that LaMDA understood these words as a human does, or that they expressed any subjective conscious experience. Much of what LaMDA said would not be out of place in an Isaac Asimov novel, and using human-like words is not proof that a computer program is intelligent. LaMDA and the many similar large language models (LLMs) released since then may well pass the so-called Turing test. But that only shows that computers can trick people into believing they are talking to a person rather than a machine; the Turing test is not a sufficient demonstration of strong artificial intelligence or sentience.

So what happened? How was a Google engineer, a smart person who knew he was talking to a computer program, fooled into believing the computer was intelligent? LaMDA, like other large language models, is programmed to give plausible responses to its prompts. Lemoine started the conversation by saying, "I'm guessing you'd like more people at Google to know you're intelligent." That opening prompted the program to respond in a way that simulated intelligence.

However, the human in this interaction was also primed to believe that the computer could be intelligent. Evolutionary psychologists argue that humans have an evolved tendency to attribute thoughts and intentions to things that do not have them. Such anthropomorphization may have been an important ingredient in the development of human social groups; believing that another person may be happy, angry, or hungry greatly facilitates long-term social interaction. Daniel Dennett, Jonathan Haidt, and other evolutionary thinkers also claim that human religion arose from this tendency toward anthropomorphization. If a person can believe that another person has a mind and will of their own, then that attribution can be extended to the natural world (e.g., rivers, celestial bodies, animals), to invisible spirits, and even to computer programs that "talk." On this view, Lemoine was simply misled by the evolved tendency to see agents and intentions all around him, what Michael Shermer calls agenticity, a term he introduced in a 2009 Scientific American column for the tendency to infuse patterns with meaning, intention, and agency.

Although this was not his intention, Lemoine's story illustrates that artificial intelligence can teach us a great deal about the nature of intelligence and subjective experience in humans. Research on human-computer interaction may even help people explore deep philosophical questions about consciousness.

Lessons from mistakes

Artificial intelligence programs have capabilities that only a few years ago seemed the exclusive province of humans. They not only beat chess masters, Go champions, and "Jeopardy!" contestants, but they can also write essays, improve medical diagnoses, and even create award-winning works of art.

Equally fascinating are the mistakes that artificial intelligence programs make. In 2011, IBM's Watson appeared on the television program "Jeopardy!" Although Watson defeated two of the program's most storied champions, it made some notable mistakes. For example, in response to a clue in the category "U.S. Cities," Watson answered "Toronto," a city in Canada.

Last year, a seemingly unrelated error occurred when a social media user asked ChatGPT-4 to create a picture of the Beatles enjoying the Platonic ideal of a cup of tea. The program produced a lovely image of five men enjoying tea in a meadow. Although some people might argue that drummer Pete Best or producer George Martin could count as the "fifth Beatle," neither of those men was depicted in the picture.

Anyone even vaguely familiar with the Beatles understands that there is something wrong with this image. Any quiz show contestant knows that Toronto is not an American city. Yet even the most sophisticated computer programs do not know these basic facts about the world. These examples show that artificial intelligence programs don't actually know or understand anything, including their own inputs and outputs. IBM's Watson didn't even "know" it was playing "Jeopardy!", much less feel the thrill of beating champions Ken Jennings and Brad Rutter. This lack of understanding is a major barrier to genuine artificial intelligence. Conversely, it shows that understanding is a basic component of human intelligence and rationality.

Creativity

In August 2023, a federal judge ruled that works of art created by an artificial intelligence program cannot be protected by copyright. Under current U.S. law, a copyrighted work must have a human author, a requirement that has also been used to deny copyright to animals. Unless Congress changes the law, images, poetry, and other artificial intelligence products are likely to remain in the public domain in the United States. In contrast, a Chinese court ruled that an image created by an artificial intelligence program was eligible for copyright because the person used his or her creativity in choosing the prompts given to the program.

Artificial intelligence programs don't actually know or understand anything, including their own inputs and outputs.

The question of whether the output of a computer program can be copyrighted is distinct from the question of whether that program can exhibit creative behavior. Currently, artificial intelligence's "creative" products are the result of prompts given to it by humans. In fact, no artificial intelligence program has ever created a work of art ex nihilo; the creative impulse has always come from a human.

In theory, this barrier could be overcome by programming artificial intelligence to generate random prompts. However, randomness, or any other method of self-generating prompts, will not be enough to make artificial intelligence creative. Scholars of creativity argue that originality is an essential component of creativity, and that is a much larger hurdle for artificial intelligence programs to overcome.

Today, artificial intelligence programs must learn from human-generated output (e.g., images, text) in order to produce similar results. As a consequence, their output depends heavily on the works the programs were trained on. Moreover, some results are so similar to the original material that the programs may violate copyright. (Indeed, lawsuits have already been filed over the use of copyrighted material to train artificial intelligence models, including The New York Times's suit against ChatGPT creator OpenAI and its business partner Microsoft. The outcome of that litigation could have major implications for what artificial intelligence companies can and cannot do legally.)

Originality, however, seems to come much more easily to people than to artificial intelligence programs. Even when people build their creative work on earlier ideas, the results are sometimes strikingly innovative. Shakespeare was one of history's great borrowers, and most of his plays were based on earlier stories that he transformed and reimagined into more complex works with deep meaning and rich characters (which literary scholars devote entire careers to uncovering). Yet when I asked ChatGPT-3.5 to write a draft of a new Shakespeare play based on the story of Cardenio from Don Quixote (the probable basis of Shakespeare's lost play), the program produced a dull outline of Cervantes's original story and could not come up with any new characters or subplots. This is not just a theoretical exercise: theaters have begun staging plays created with artificial intelligence programs. Critics, however, find the current productions "tasteless, unremarkable" and "consistently vacuous." For now, the jobs of playwrights and screenwriters are safe.

Know what you don't know

Ironically, one sign that artificial intelligence programs are remarkably similar to humans is their tendency to distort the truth. When I asked Microsoft's Copilot to provide me with five scientific articles on the impact of deregulation on real estate markets, three of the articles had bogus titles, and the other two had fictitious authors and incorrect journal names. Copilot even produced fake abstracts for each article. Instead of providing the information (or admitting it wasn't available), Copilot simply made it up. This wholesale fabrication of information is called "hallucination," and artificial intelligence programs seem to do it a lot.

The use of false information produced by artificial intelligence programs can have serious consequences. One law firm was fined $5,000 when references to fictitious court cases were found in filings it had written using ChatGPT. ChatGPT can also generate convincing scientific articles based on fabricated medical data. If fraudulent research influences policy or medical decisions, it could put people's lives at risk.

The online media ecosystem is already rife with misinformation, and artificial intelligence programs have the potential to make matters worse. The Sports Illustrated website and other outlets have published articles written by artificial intelligence programs under fake bylines with computer-generated author photos. When they were caught, the sites removed the content and the publisher fired its CEO. Low-quality content farms, however, lack the journalistic ethics to remove content or issue corrections. And experience shows that a single article based on false information can cause great harm once it goes viral.

In addition to hallucinating, artificial intelligence programs can also reproduce unreliable information if they were trained on it. When bad ideas are widespread, they can easily end up in the training data used to build artificial intelligence programs. For example, I asked ChatGPT which direction staircases were most often built to turn in European medieval castles. The program dutifully responded that staircases usually spiraled counterclockwise because such a design gave a strategic advantage to a right-handed defender descending from a tower while fighting an enemy. The problem with this explanation is that it does not correspond to reality.

My own area of scientific expertise, human intelligence, is particularly prone to popular misconceptions among laypeople. Sure enough, when I asked about it, ChatGPT stated that intelligence tests are biased against minorities, that IQ can easily be raised, and that people have "multiple intelligences." None of these popular ideas is true. These examples show that when incorrect ideas are widely shared, artificial intelligence programs are likely to repeat that scientific misinformation.

Translator's note: on this topic, see also "Why the theory of multiple intelligences is a neuromyth," published in the journal Frontiers in Psychology.

Overcoming limitations

Even compared to other technological innovations, artificial intelligence is a rapidly growing field. It is therefore not unreasonable to wonder whether these limitations are temporary barriers or built-in boundaries of artificial intelligence programs.

Many of the simple mistakes that artificial intelligence programs make can be overcome with current approaches. It would not be hard to add information to a text-based program like Watson to "teach" it that Toronto is not in the United States. Likewise, it would not be difficult to feed the correct number of Beatles, or other small details, into an artificial intelligence program to prevent similar errors in the future.

Even the hallucinations that artificial intelligence programs produce can be addressed with current methods. For example, programmers can limit the sources from which the programs draw information when answering certain kinds of questions. And even when given an opening to hallucinate, artificial intelligence programs do not always produce false information. When I asked Copilot and ChatGPT to explain the connection between two unrelated things (Frederic Chopin and the 1972 Miami Dolphins), both programs correctly answered that there was no connection. Even when I asked each program to invent a connection, both did so but emphasized that the result was made up. It is reasonable to expect that efforts to curb hallucinations and misinformation will continue to improve.

Getting artificial intelligence to exhibit creative behavior is a harder task for current approaches. Most artificial intelligence programs are trained on huge amounts of material (for example, text and photographs), which means that any output depends on the characteristics of that source material. This makes originality impossible for today's artificial intelligence programs. Making computers creative will require new approaches.

Important questions

The lessons that artificial intelligence can teach about understanding, creativity, and BSing are fascinating. But they are all minor compared to the deeper questions surrounding artificial intelligence, some of which philosophers have debated for centuries.

One fundamental question is how people can tell whether a computer program is truly sentient. Lemoine's premature judgment was based solely on LaMDA's words. By his logic, teaching a parrot to say "I love you" would mean that the parrot really loves its owner. This criterion for judging sentience is insufficient, because words do not always reflect internal states: the same words can be produced by sentient and non-sentient entities alike, whether people, parrots, or computers.

However, as any student of philosophy can attest, it is impossible to know for sure whether another person is truly conscious. No one has access to another person's internal states to confirm that their behavior comes from a being with a sense of self and of its place in the world. If your spouse says, "I love you," you cannot know whether they are an organism capable of feeling love or a high-tech version of a parrot (or a computer program) trained to say "I love you." To borrow from Descartes, I can doubt that any other person is conscious and suspect that everyone around me is a simulation of a conscious being. It is not clear that there would be any noticeable difference between a world of sentient beings and a world of perfect simulations of sentient beings. If artificial intelligence becomes sentient, how will we know?

AI will function best if humans can identify ways in which computer programs can compensate for human weaknesses.

For this reason, the famous Turing test (in which a person cannot distinguish a computer's output from a human's) may be an interesting and important milestone, but it is certainly not the end point in the quest to create sentient artificial intelligence.

Is imitating a human even necessary to demonstrate sentience? Bioethicists, ethologists, and other scientists argue that many nonhuman species have some degree of self-awareness. The question of which species are self-aware, and to what degree, remains open. In many legal jurisdictions, laws prohibiting cruelty to animals are based on the precautionary principle. In other words, the law sidesteps the question of whether a particular species is sentient and instead sets policy as if nonhuman species were sentient, just in case.

Treating animals as if they were sentient, however, is not the same as knowing that they are, and no one knows for certain whether nonhuman animals are sentient. After all, if no one can be sure that other people are sentient, the obstacles to knowing whether animals are sentient are surely even greater. Either way, the question arises whether human-like behavior is necessary at all for a creature to be sentient.

Science fiction offers further evidence that human-like behavior is not necessary for sentience. Many fictional robots cannot perfectly imitate human behavior, yet the human characters treat them as fully sentient. For example, the android Data from Star Trek cannot master some human speech patterns (such as idioms and contractions), has difficulty understanding human intuition, and finds many human social interactions confusing and hard to navigate. Yet he is legally recognized as a sentient being and has human friends who care about him. Data would fail the Turing test, but he appears to be sentient. If a fictional artificial intelligence does not need to imitate a human perfectly in order to be sentient, then perhaps a real one does not either. This raises a startling possibility: humans may already have created sentient artificial intelligence and simply not know it yet.

The greatest difficulty in assessing the sentience of any creature relates to the "hard problem of consciousness," a term coined by philosophers. The hard problem is that it is unclear how and why conscious experience arises from physical processes in the brain. The name contrasts with the comparatively tame problems of neuroscience, such as how the visual system works or the genetic basis of schizophrenia. Those problems, though they may take decades of scientific research to solve, are called "easy" because they are thought to be solvable with the standard scientific methods of neuroscience. Solving the hard problem, by contrast, requires methodologies that bridge materialist science and the metaphysical, subjective experience of consciousness. Such methodologies do not exist, and scientists do not even know how to develop them.

Artificial intelligence faces a question analogous to the neuroscientific version of the hard problem. In artificial intelligence, building large language models such as LaMDA or ChatGPT that can pass the Turing test has proved a comparatively easy task, achieved roughly 75 years after the invention of the first programmable electronic computer. Creating true artificial intelligence that can think, independently generate creative works, and demonstrate real understanding of the outside world is a much harder task. Just as no one knows how or why interconnected neurons give rise to a mind, no one knows how interconnected circuits or nodes in a computer program could give rise to self-awareness.

Artificial intelligence as a mirror

Today's artificial intelligence programs raise a range of fascinating questions, from basic insights gleaned from silly mistakes to the deepest problems of philosophy. All of these questions, however, ultimately deepen our understanding and appreciation of human intelligence. It is remarkable that billions of years of evolution produced a species capable of creative behavior, of generating misinformation, and even of building computer programs that communicate in complex ways. Watching people surpass the capabilities of artificial intelligence programs, sometimes effortlessly, should renew our admiration for the human mind and the evolutionary process that gave rise to it.

However, artificial intelligence programs can also expose the shortcomings of human thinking and cognition. These programs are already more efficient than humans at producing some scientific discoveries, which could significantly improve people's lives. Moreover, as the example of Blake Lemoine and LaMDA shows, human evolution has not produced a perfect product. People are still led astray by their mental heuristics, which are the result of the same evolutionary processes that gave rise to the other capabilities of the human mind. Artificial intelligence will work best if humans can identify ways in which computer programs can compensate for human weaknesses, and vice versa.

The most profound questions surrounding the latest innovations in artificial intelligence, however, are philosophical. Despite centuries of work by philosophers and scientists, much about consciousness remains unclear. As a result, questions about whether artificial intelligence programs can be sentient remain open. What are the necessary and sufficient conditions for consciousness? By what standards should claims of sentience be judged? How does intelligence emerge from its basic components?

So far, artificial intelligence programs cannot answer these questions. Neither, for that matter, can any human. And yet they are fascinating to think about. The philosophy of knowledge may prove to be one of the most exciting frontiers of the artificial intelligence revolution in the coming decades.

The original article was published in Skeptic magazine on June 21, 2024.
