TESCREAL is the new ideology of Silicon Valley. What is it, and why is it already annoying everyone?

In 2023, the term TESCREAL began to appear more and more often in the media; it has been called the ideology of modern technological capitalism. Its central idea is that the well-being of today's world is not a goal to strive for but a means of securing the benefit of future generations. And the TESCREALists' main tool is, quite literally, a Deus Ex Machina: artificial general intelligence.

Its main apologists are generally held to be top managers of leading companies in the AI industry, who are said to be pushing Silicon Valley to the right and who, according to critics on the left, could destroy humanity. Let's figure out what and who stands behind the acronym, and what's wrong with the criticism of it.

How did TESCREAL come about?

The term TESCREAL is not a self-designation adopted by the ideology's supporters; on the contrary, it is an umbrella concept invented in order to criticize them.

The term was coined by the philosopher Émile P. Torres and the computer scientist Dr. Timnit Gebru, who is of Eritrean descent and was fired from Google in 2020 after sharply criticizing the corporation's ethical standards with regard to racial minorities. Since 2021, both researchers have been publishing opinion pieces and giving interviews devoted to criticism of TESCREALism.

Émile P. Torres and Timnit Gebru. Source

The acronym TESCREAL bundles together a whole group of related ideas – transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism – one for each letter.

Despite the critical intent of the acronym, some of those considered proponents of these ideas use it with pride. Marc Andreessen, a major American investor in the AI sector, for example, has called himself a TESCREAList in his profile description on X.

In his articles, Torres says that the TESCREAL craze in Silicon Valley companies is leading humanity to an existential catastrophe.

In 2023, the concept was criticized by the sociologist James Hughes, former executive director of the World Transhumanist Association and co-founder of the Institute for Ethics and Emerging Technologies.

In his article “Conspiracy Theories, Left Futurism and Attacks on TESCREAL,” Hughes argues that the ideas of Torres and Gebru are conspiracy theories devoid of substance. Torres responded with the article “The Conspiracy Theory about the TESCREAL Conspiracy Theory,” in which he in turn called his opponent's arguments unsubstantiated.

Torres has stated that after he began publicly criticizing TESCREALists, he received a flood of hostile tweets and a threatening letter. However, the scholar has himself been accused of harassing the philosopher Peter Boghossian and the British cultural theorist Helen Pluckrose.

Torres calls these accusations part of a coordinated campaign against him, emphasizing that both of the figures named hold “radical far-right views.”

In his Guardian piece on TESCREAL and Torres, Andrew Anthony writes that Boghossian and Pluckrose would hardly agree with that characterization. At the same time, the journalist does not dispute the merits of Torres's philosophy.

On the whole, the public debate on the topic very quickly descended into insults and threats. Torres announced that he and Gebru would produce a detailed academic paper on TESCREAL, which was supposed to lend weight to their arguments, but it has not been published yet.

What is TESCREAL?

To understand what frightens Gebru and Torres about the ideas circulating in Silicon Valley, you need to know what lies behind each letter of the acronym and how it relates to AI.

T – transhumanism

Transhumanism is the core philosophical concept in the TESCREAL bundle. It postulates that we are moving toward a post-human future in which technology will help us build a perfect utopia.

One of the theorists of modern transhumanism, the philosopher Nick Bostrom (a colleague of the aforementioned Hughes at the Institute for Ethics and Emerging Technologies), describes a state in which immortality, enhanced intelligence, and complete control over emotions become available to humanity.

Among the most transhumanist developments are the brain implants from Elon Musk's Neuralink, which are designed to connect the human brain to AI and thereby launch a new stage in human evolution.

Torres calls transhumanism a new religion embraced by the biggest players in Silicon Valley, complete with the corresponding religious trappings. For example, the Californian cryonics company Alcor, of which Bostrom is a client, freezes people after death in the hope of one day “reviving” them.

E – extropianism

Extropianism was the first organized movement within transhumanism, emerging in the 1980s. The word “extropy” itself first appeared in scientific discussions back in the late 1960s and was used metaphorically as an antonym to entropy – a measure of uncertainty, the unknown, and disorder in the universe.
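For reference – this is a standard information-theory formulation, not something from the article itself – the entropy of an outcome with probabilities p_1, …, p_n can be written as

H = -\sum_{i=1}^{n} p_i \log p_i

The quantity is largest when every outcome is equally likely, that is, when uncertainty and disorder are at their maximum; “extropy” was coined as the poetic opposite of that measure.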

In 1988 the concept was given a formal definition by Max More, a British philosopher and future chief executive of Alcor. In contrast to entropy, More proclaimed that extropy is “the extent of a living or organizational system's intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth.”

In the same year, More, the author of the term extropianism, founded a journal of the same name, Extropy. A community formed around it that began holding conferences, round tables, and discussions devoted to robotics, genetic engineering, space exploration, and other futuristic areas of science.

The journal Extropy, 1990. Source

S – singularitarianism

Singularitarianism posits that the main route to a transhumanist future is the development of intelligence. “The dull matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence,” writes Google research scientist Ray Kurzweil, one of the movement's main theorists, about the technological singularity.

Singularitarians believe that once humanity creates AI smarter than itself, we will reach the point of technological singularity – a threshold after which humans will no longer control the further course of technological progress.

Some of those considered TESCREALists stress the need to prepare for the onset of the singularity; among them is the Swedish philosopher Nick Bostrom. But most singularitarians look forward to the approach of that moment with optimism.

Importantly, singularitarianism has not remained an ideology on paper. In Silicon Valley there is an educational center created with the support of NASA and Google – Singularity University. It was founded in 2009 by that same singularitarian Ray Kurzweil together with the space tourism pioneer Peter Diamandis. Its goal is to prepare the next generation of leaders who will work to accelerate change.

However, the university mostly makes headlines for reasons that have little to do with the singularity: accusations of harassment and gender discrimination, and large-scale parties held during the COVID pandemic.

C – cosmism

Cosmism covers a wide range of religious and philosophical movements united by the idea of humanity as part of a single system with the cosmos, developing according to common laws. One of the most famous representatives of the doctrine was the Russian pioneer of astronautics Konstantin Tsiolkovsky.

Modern cosmism is also largely about AI. One of its main ideologues is the American futurist Ben Goertzel, who was among the first to talk about artificial general intelligence (more on this below). In his “Cosmist Manifesto” he writes that humanity is moving toward the development of intelligent AI, and that technologies for uploading the human mind “will open up unlimited lifespan to all who choose to leave their biology behind.”

R – rationalism

Rationalism is a worldview that puts human reason front and center. Its origins go back to philosophy since Socrates, but it received its fullest development in the seventeenth and early eighteenth centuries in the works of Spinoza, Descartes, and Leibniz.

Rationalism entered the discourse around AI largely through the extropian Eliezer Yudkowsky, who published a half-joking manifesto of rationalism on his LessWrong portal. It proclaims principles for improving human rationality in order to reach progress faster.

The field of the rational according to Yudkowsky. Source

EA – effective altruism

Peter Singer, one of the founders of the effective altruism movement, describes it as a philosophy that helps improve the world in the most effective way possible. What distinguishes its proponents from classical altruists is the desire to calculate the best methods of “doing good,” which Torres sarcastically calls “optimizing our morality.”

This is why many supporters of effective altruism work in the AI industry: the whole idea behind developing neural networks is to make human activity more efficient.

How effective altruism works: take a step back and evaluate how to make doing good as rational as possible. Source
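A minimal sketch of the kind of calculation effective altruists have in mind (the numbers are purely illustrative, not taken from Singer or Torres): interventions are compared by expected benefit per dollar, for example

\frac{1\ \text{life saved}}{\$5{,}000} \;>\; \frac{1\ \text{life saved}}{\$50{,}000}

On this logic, a donor directs the same budget to the first intervention and, under these assumptions, does roughly ten times as much good.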

L – longtermism

The idea of longtermism, or long-term planning, was introduced by another effective altruist, the Oxford University philosophy professor William MacAskill.

In his works, the scholar proposes focusing on the future: on the survival of civilization and on care for the billions of people not yet born. MacAskill's reasoning rests on the premise that the potential size of humanity over millions of years is almost infinitely larger than the current world population, and that we should prioritize these trillions of unborn people over the short-term needs of the few billion alive today.
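The scale behind this reasoning is easy to sketch with back-of-the-envelope numbers (illustrative assumptions, not figures from MacAskill's book): if humanity survives another 500 million years with about 10^{10} people alive per century, the number of future people is on the order of

10^{10}\ \text{people per century} \times \frac{5 \times 10^{8}\ \text{years}}{10^{2}\ \text{years per century}} = 5 \times 10^{16}

That is several million times the roughly 8 \times 10^{9} people alive today. On strict utilitarian accounting, even a tiny reduction in the risk of extinction then outweighs very large benefits to the present generation.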

Among other things, the scholar calls for attention to the development of AI and for preparing for the moment when it becomes more capable than human intelligence. MacAskill discusses this in detail in his book “What We Owe the Future,” published in 2022. Elon Musk reposted the news of the book's release, calling the philosopher's work close to his own.

Concluding his overview of the concept, Torres calls for loud criticism of the ideology behind the techno-capitalists and warns: “The TESCREAL bundle is already deeply shaping our world and the world of our children.”

Who is considered a TESCREAL supporter?

Under the umbrella term TESCREAL, Torres brings together many visionaries, venture capitalists, computer scientists, and philosophers associated with AI development.

However, there is an important philosophical rift among them – and it concerns AI. Some see the rapid development of artificial intelligence as the solution to many global problems, while others see in it the threat of humanity's extinction (the authors of the TESCREAL concept criticize both camps). Critics of TESCREAL call the former accelerationists and the latter AI doomers.

AI accelerationists

Among the TESCREAL optimists fired up by the idea of a bright AI future, the author of the concept includes both transhumanist philosophers, such as Professor Nick Bostrom and the effective altruist William MacAskill, and practitioners, for example OpenAI CEO Sam Altman.

As Torres concludes, for TESCREALists united by a belief in achieving progress, artificial general intelligence is the shortest path to that breakthrough: precisely such a tool, they believe, will help calculate all the variables needed to create a utopian future.

His words are borne out by the self-described TESCREAList Marc Andreessen, a board member at Mark Zuckerberg's Meta, eBay, and HP. Andreessen writes that the development of general AI could be the answer to a wide range of challenges, from curing all diseases to interstellar travel, and calls developing this technology “our moral obligation to ourselves, our children and our future.”

AI doomers

In the opposite camp of TESCREAL are the alarmists or, as Torres and Gebru call them, AI doomers (from the English “doom”). Their most famous representative is the extropian Yudkowsky, who calls for halting all development in the field of neural networks, since, in his view, it is leading humanity toward an existential catastrophe.

In his TIME magazine article, Yudkowsky writes about the need to allow even for the possibility of striking AI server farms with nuclear weapons – because, unlike the emergence of artificial general intelligence, that would destroy only part of humanity rather than the entire species.

Deus Ex Machina – why general AI matters for TESCREAL

According to Torres and Gebru, it is no accident that the TESCREAL complex of ideas is embodied in Silicon Valley's passion for artificial intelligence.

At the intersection of all these ideologies lies a belief in the coming of a technological utopia in which technologies develop themselves, beyond human control. In his book “Superintelligence: Paths, Dangers, Strategies,” Nick Bostrom describes this moment of “intelligence explosion,” which will arrive when machines become much smarter than us and start designing their own machines.

Bostrom and other modern transhumanists identify that very point of singularity with the creation of artificial general intelligence (AGI).

There is no single definition of AGI, but in general terms it can be described as follows. Even the most advanced neural networks in existence today can only solve the narrow tasks for which they were designed: generating text or images, spotting macroeconomic trends, or forecasting the weather.

General AI is a fundamentally new generation of models that, it is believed, will come as close as possible to the human mind and be capable of self-learning. A general (or strong) AI, in the words of Apple co-founder Steve Wozniak, would be able to walk into an unfamiliar house and make itself a cup of coffee: that is, set itself the goal of mastering a new skill and then accomplish it.

The vagueness in the definition of general AI is due to the fact that no one can say with certainty when artificial intelligence will reach this level, what it will look like and what it will lead to. The potential and dangers of such a tool are causing heated debate in the industry.

Why is TESCREAL criticized?

Judging by Torres's articles, it may seem that he blames TESCREALists for every sin in the world: for example, he points to transhumanist ideas' roots in eugenics and to the racist scandals in which their supporters have been involved. Nevertheless, one main theme runs through the scholar's critique – a critique of utilitarianism.

According to Torres, the disastrous starting point for both optimists and skeptics is a belief in a distant utopian future for which the well-being of individual people will have to be sacrificed.

It is this “pragmatic” approach that unites the most diverse supporters of TESCREAL ideologies. As an example, the scholar cites Altman's joke that catastrophes which will not affect our post-human future in paradise are “mere ripples on the surface of the great sea of life.”

According to Torres, the same vicious logic underlies both Yudkowsky's statements and MacAskill's book “What We Owe the Future,” where the latter argues, among other things, that the deliberate destruction of wild nature could be useful and rational, since it would put an end to the suffering of wild animals.

In Torres's opinion, the happiness of future generations, which all TESCREALists invoke, is an abstraction, because we are talking about the well-being of people who do not yet exist.

Criticizing the ideology of radical long-term planning, Torres asserts that its result will always be a view of the human being not as an end but as a means of achieving value. Both faith in a happy technological future and the gloomiest forecasts about it put utilitarianism on the agenda, in which the person of the present is merely a way of reaching something yet to come.

Recalling the Kantian imperative, according to which a human being is an end and not a means, Torres calls on humanity to abandon not this or that view of technological progress, but the very paradigm in which the end prevails over the means – and the means inevitably turn out to be living people.

Utopia vs. extinction: what's wrong with the criticism of TESCREAL

It may seem that Torres's criticism lacks substance: for the most part, his arguments rest on provocative quotes from TESCREALists.

The author focuses on the rhetoric and discourse of TESCREALists, which, in his opinion, creates an aura of cult around their ideas.

The main problem with Torres's theory is that his own ideas are no less extravagant than TESCREAL, and radically opposed to it in their vision of an ideal future for humanity. If TESCREALists propose sacrificing everything so that future generations will be happy, the leftist philosopher sees humanity's humane future in its extinction – hardly a cheerful prospect.

Torres sets out his own view of global problems in his book Human Extinction: A History of the Science and Ethics of Annihilation, published in 2023. The author recounts how Christianity took the problem of our extinction as a species off the agenda, replacing it with the idea of the inevitable salvation of souls.

Five periods of “existential sentiment” (attitudes toward extinction) in Western history according to Torres. Source.

The first shift in this area came with the advent of nuclear weapons, though at the time it seemed that only humans themselves could destroy humanity. Later, other fears of possible extinction emerged: supervolcanoes, asteroid impacts (with the corresponding theory of the dinosaurs' extinction), and then the looming climate crisis.

All of this gave rise to the idea that human extinction must be prevented by any means possible. But is that idea really so self-evident?

Useful from Online Patent:

  1. How to get government support for an IT company?

  2. What benefits can you get from registering a computer program?

  3. How to protect your customer database?

  4. Not only IT specialists: which companies can add their programs to the Register of Domestic Software?

  5. Trademark Guide in 2024.

More content about the field of intellectual property in our Telegram channel

Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *