Artificial Intelligence and the Existential Crisis

There are fears that artificial intelligence could become an existential threat to humanity, in scenarios ranging from the ashes of nuclear war to the “gray goo” assimilation depicted in the film Transcendence. But according to a new study, large language models (LLMs) can only follow instructions, cannot develop new skills on their own, and are inherently “controllable, predictable, and safe,” which sounds like good news for us meatbags.

The more articles I write about the brain and consciousness, the clearer it becomes that questions of “who is really in charge”, AI-uprising scenarios, and the pursuit of productivity and personal effectiveness cannot all be settled within a single scenario.

A baseline scenario in which artificial intelligence threatens humanity

The President of the United States announces to the public that the nation's defense has been handed over to a new artificial intelligence system that controls the nuclear arsenal. At a stroke, war is rendered obsolete by a superintelligent machine incapable of error, able to learn any new skill it needs to keep the peace, and growing more powerful and intelligent with each passing minute. It is efficient to the point of infallibility.

As the president thanks the team of scientists who developed the AI and proposes a toast to world peace, the AI suddenly begins sending text messages. It makes demands, followed by threats to destroy a major city if humans do not comply.

This is one of the scenarios for AI development that has circulated in recent years: unless we act, and unless it is already too late, AI will develop spontaneously, become conscious and superintelligent, and make it clear that Homo sapiens has been reduced to the status of a domestic animal. And that scenario feels all the more real for artificial intelligence grown from pieces of human brain tissue wired to chips.

The Roots of Fears of Artificial Intelligence

Oddly enough, the above parable dates from 1970: it is the plot of the sci-fi thriller Colossus: The Forbin Project, about a supercomputer that takes over the world with depressing ease. The idea has been around since the first real computers were built in the 1940s, and the fear has been replayed over and over in books, movies, and video games.

Although, given that neural networks are already being used to standardize the smiles of retail staff, the problem may well arrive from the other direction.

Programming journals have been discussing computers and the danger of a takeover since at least 1961. Over the past six decades, experts have repeatedly predicted that computers would demonstrate human-level intelligence within “the next five years” and far surpass it “within 10 years.”

Artificial Intelligence as an Old Technology?

For starters, artificial intelligence has been around since at least the 1960s and has been used in many fields for decades. We tend to think of the technology as “new” because it is only recently that AI systems processing language and images have become widely available, along with the giant supercomputers built to run them. These are also examples of AI that most people find easier to relate to than chess engines, autonomous flight systems, or diagnostic algorithms.

These turbocharged systems instill a fear of unemployment in many people whose areas of responsibility had seemed entirely safe from automation.

But the real question is: does Artificial Intelligence pose an existential threat? After more than half a century of false alarms, are we finally going to find ourselves under the thumb of a modern Colossus or HAL 9000? Or maybe we will be hooked up to the Matrix?

According to researchers from the University of Bath and the Technical University of Darmstadt, the answer is no.

Research into the potential threat posed by AI

Research published as part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) argues that AI, and LLMs in particular, are inherently controllable, predictable, and safe. The three paragraphs below are from the study’s co-author, Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath.

The prevailing view that this type of AI (LLMs) poses a threat to humanity hinders the widespread adoption and development of these technologies, and diverts attention from the real problems that require it.

There have been concerns that as models get bigger and more sophisticated, they will be able to solve new problems that we cannot currently predict, raising the risk that these larger models could acquire dangerous capabilities, including the ability to reason and plan. This has generated a lot of discussion – for example, at the AI Safety Summit last year at Bletchley Park. But our research shows that the fear that a model will evolve and do something completely unexpected, innovative and potentially dangerous is unfounded.

Concerns about an existential threat are voiced not only by non-specialists but also by some of the world’s leading AI researchers. But that is more a sign of the times than anything else.

What the research says about the safety of neural networks, artificial intelligence and LLMs

If we look closely at these models, testing their ability to perform tasks they have not encountered before, we find that LLMs are very good at following instructions and demonstrating language proficiency. They can do this even when they are shown only a few examples, such as when answering questions about social situations.

What they can't do is go beyond those instructions or learn new skills without explicit instruction. LLMs may exhibit strange behavior, but it can always be traced back to the code or instructions they were given. In other words, they cannot evolve into anything more than what they were created to be, so no deus ex machina.
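To make this concrete, here is a minimal sketch of the kind of few-shot instruction following the study describes, written against the OpenAI Python client. The model name, the politeness-judgement task, and the example remarks are placeholders of my own for illustration, not taken from the paper.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few in-context examples of the task, followed by a new query.
few_shot_prompt = (
    "Decide whether each remark is POLITE or IMPOLITE.\n\n"
    "Remark: 'Thanks so much for waiting, I appreciate it.'\nLabel: POLITE\n\n"
    "Remark: 'Move, you're in my way.'\nLabel: IMPOLITE\n\n"
    "Remark: 'Would you mind passing the salt, please?'\nLabel:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    max_tokens=5,
    temperature=0,
)

# The model completes the pattern laid out in the prompt; no weights are
# updated and no new capability is created -- it is following instructions.
print(response.choices[0].message.content.strip())
```

Nothing in this exchange changes the model itself: it simply continues the pattern given in the prompt, which is exactly the in-context learning the researchers point to as the source of seemingly emergent skills.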

However, the team emphasizes that this does not mean AI poses no threat at all. These systems already have remarkable capabilities and will only become more sophisticated in the near future. The main and immediate risk from neural networks is the explosive growth of synthetic content: they have a frightening potential to manipulate information, generate fake news, scam people with deepfakes, spread falsehoods even without any deliberate intent, and be abused as a cheap way to bury the truth.

The real risks

The danger, as always, comes not from the machines, but from the people who program and control them. Whether through malice or incompetence, it is not the computers we need to worry about. It is the people behind them.

More strange material about the brain, consciousness, the psyche, and how realistic it is to find the keys to all of this appears on my Telegram channel. Subscribe to stay up to date with new articles!
