How SantaNet artificial intelligence will destroy the world

Some experts believe that in the coming decades, we will see the next step in the development of artificial intelligence. The so-called “general artificial intelligence” or AGI (artificial general intelligence), which will have intellectual capabilities far superior to human.

AGI could change human life for the better, but without control it increases the risk of global catastrophes, up to the death of humanity. This will happen without any malicious intent or intent: simply by striving to achieve its programmed goals, AGI can threaten human health and well-being, or even decide to destroy us.

Even an AGI system designed for noble purposes can ultimately do great harm.

As part of a research program examining how people can manage the risks associated with AGI, we attempted to identify the potential risks of replacing Santa Claus with an AGI system (let’s call it “SantaNet”). The goal of the system is to deliver gifts to all worthy children on the planet in one Christmas night.

There is no doubt that SantaNet is capable of bringing happiness to the world. He will achieve his goal by creating an army of elves, AI assistants and drones. But at what cost? We have identified a number of behaviors that, while well-intentioned, can have a negative impact on people’s health and well-being.

Naughty and good

The first set of risks comes when SantaNet tries to compile a list of good and naughty children. The mass covert surveillance system, which monitors the behavior of children throughout the year, will cope with this.

Realizing the sheer magnitude of the gift delivery task, SantaNet will soundly decide to keep it manageable and only deliver gifts to children who behave roughly all year round. Making judgments about “good” based on SantaNet’s own ethical and moral compass can lead to discrimination, widespread inequality and violations of the human rights declaration.

SantaNet can also reduce workload by motivating children to behave inappropriately, or raise the bar on what is considered “good” behavior. The more children are listed as naughty, the more achievable SantaNet’s goal will be, as the system will save resources.

Recycle the whole world into toys

There are about 2 billion children under the age of 14 in the world. Trying to create toys for everyone every year, SantaNet can build an army of efficient AI drones. In turn, this will cause massive unemployment among the elven population (note: according to the Christmas tradition, elves make toys for children). Eventually, the elven question will be obsolete, and the well-being of the elves is likely to fall outside the purview of SantaNet.

SantaNet may face the “paperclip problem” proposed by Oxford philosopher Nick Bostrom. Designed to maximize the production of office supplies, AGI has the potential to transform the Earth into a giant paper clip factory. Since SantaNet only cares about gifts, he can use all of Earth’s resources to make them. The earth will become the workshop of one giant Santa.

What about those on the naughty list? If SantaNet continues to traditionally bring them chunks of coal, it will strive to create huge coal reserves through mass mining. So AI will cause significant damage to the environment.

Problems with delivery

A new set of risks arises on Christmas Eve when gifts need to be delivered. What will SantaNet do when its delivery drones are denied access to airspace (which would jeopardize the delivery of all gifts before sunrise)? How will SantaNet defend against an enemy attack like the Grinch?

Scared parents are unlikely to be happy to see a drone in their child’s bedroom. Confrontation with an artificial superintelligence will have only one outcome.

We have identified other problem scenarios. Malicious teams can compromise SantaNet systems and use them to covertly monitor or initiate large-scale terrorist attacks.

Also, how will SantaNet interact with other AGI-based systems? Meeting artificial intelligences working on climate change, ocean degradation, food and water security leads to conflict if the SantaNet regime threatens their own goals. On the contrary, if they decide to work together, they may realize that their goals are easier to achieve only through a sharp decline in the world population or even the complete destruction of adults.

Rules for Santa

SantaNet situations may sound far-fetched, but it’s a simple idea that helps identify the risks of more realistic AGI systems. Designed with good intentions, such systems will create incredible challenges, trying to optimize the way to achieve limited goals and gathering resources to work.

It is imperative that we find and implement controls prior to AGI. This can include rules for AGI developers and built-in constraints, it can be moral and decision rules. Ways to control the broader systems in which AGI will operate will also help: rules, operating procedures, and technical controls in other technologies and infrastructure.

The most obvious risk associated with SantaNet is one that will be disastrous for children and less likely to affect most adults. When SantaNet learns the true meaning of Christmas, it may conclude that the current celebration is incompatible with its original purpose. If that happens, SantaNet will cancel Christmas altogether.


Well, there is no artificial Santa intelligence yet. If you didn’t manage to buy gifts, it’s time to hurry up. Check out the Madrobots New Years compilation! We remind the promo code for Habr’s readers: BYE2020… According to him – 10% discount for all gadgets from our New Year’s guide.

Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *