How Ilya Sutskever Created Artificial Intelligence and Then Taught Us to Fear It

OpenAI co-founder Ilya Sutskever has been thinking and dreaming about AI since childhood. But as he got closer to his dream, he realized how dangerous it really was. Fear of his own creation drove him to a desperate but failed rebellion. Disillusioned with OpenAI and having broken with it, Sutskever is today building his own project. This is the story of a talented and principled scientist who has already changed the world, and may well do so again.

From childhood insights to first discoveries

Little is known about Ilya Sutskever's early years. He was born in the Soviet city of Gorky (Nizhny Novgorod) in 1986, the son of a Soviet electronics engineer, Efim Sutskever. True, he has very little connection with Russia – when Ilya was only five years old, his family moved to Israel.

Sutskever has said more than once that since childhood he has been interested in mathematics, computers, and artificial intelligence, as well as in questions about how consciousness works.

In one talk, Ilya described how he first thought about AI:

"In my early childhood, maybe five or six years old, I was very struck by my own experience of consciousness. The fact that I am me, and that things register in my experience. That when I look at things, I see them <...>

But this feeling that I am me, that you are you, I found it very strange and almost disturbing. And so when I learned about artificial intelligence, I thought, 'Wow, if we could build a computer that was intelligent, maybe we could learn something about ourselves, about our own consciousness.'"

Psychologists would probably call this moment a manifestation of the self-concept. Many children have such sudden, vivid realizations of their own selfhood, and these can be deeply surprising and unsettling. For Sutskever, this childhood insight predetermined his career and his destiny.

His family always encouraged his hobbies and interests. In 2000, eighth-grader Ilya even enrolled in the Open University of Israel's bachelor's program in computer science (the university accepts all students, regardless of whether they have yet earned a high school diploma).

But in 2002 the Sutskevers moved again, this time to Canada. Ilya entered the University of Toronto, where he earned a bachelor's and a master's degree, and in 2013, a PhD in computer science.

When Ilya entered the University of Toronto, he already knew very well what he wanted. He chose Geoffrey Hinton as his supervisor, a scientist to whom we owe the very concept of modern neural networks and who is called the “godfather of AI.”

Hinton recalled that he recognized Ilya's talent and determination when Ilya completed, in half a day, a coding assignment that students were given a week for. Hinton and Sutskever went on to collaborate for more than 10 years.

In 2010, Sutskever was first noticed by the press, when he presented a neural network trained on English Wikipedia articles that could continue a sentence by generating further text. Technically, its job was to predict the next character in the sequence, one character at a time. Most often, it produced relatively grammatical nonsense:

"Akkerma's Alcesia Minor (including) of Hawaiian State Rites of Cassio. Other parish schools were established in 1825, but were relieved on March 3, 1850."

Of course, such texts have no practical use and are infinitely far from the capabilities of modern neural networks. Nevertheless, the consistency of the output showed that the network had picked up the concepts of words and grammar. It is unlikely that even Sutskever himself could have imagined that this was the beginning of the path to ChatGPT, a tool whose contribution to the economy will amount to many billions of dollars.
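The task that 2010 network solved, next-character prediction, can be illustrated with a deliberately simplified sketch. Here a bigram frequency table stands in for the neural network; the corpus and function names are illustrative, not from the original work:

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, how often each character follows it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, seed, length=40):
    """Extend `seed` by repeatedly sampling the next character
    in proportion to how often it followed the previous one."""
    out = [seed]
    for _ in range(length):
        nxt_counts = counts.get(out[-1])
        if not nxt_counts:  # character never seen as a predecessor
            break
        chars, weights = zip(*nxt_counts.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the quick brown fox jumps over the lazy dog. the end."
model = train_bigram(corpus)
print(generate(model, "t"))
```

A real language model replaces the frequency table with a learned network and conditions on far more context than one character, but the objective, predicting what comes next, is the same.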

Sutskever's first big break came in 2012, when, under Hinton's guidance, he and fellow graduate student Alex Krizhevsky developed AlexNet, a groundbreaking convolutional neural network architecture eight layers deep: five convolutional and three fully connected. It was built for computer vision, but the underlying discovery was fundamental and applicable to all areas of AI development. AlexNet showed the enormous, at that time unimaginable, promise of deep learning in understanding the meaning of the human world.
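As a rough illustration of what "eight layers deep" means in practice, the spatial size of AlexNet's feature maps can be traced through its five convolutional stages with the standard formula out = (in - k + 2p) / s + 1. Layer parameters below follow the 2012 paper; pooling details are simplified in this sketch:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: (in - k + 2p) // s + 1."""
    return (size - kernel + 2 * pad) // stride + 1

size = 227  # AlexNet's input resolution: a 227x227 RGB image
# (name, kernel, stride, pad) for the five conv layers
# and the max-pooling steps between them
stages = [
    ("conv1", 11, 4, 0),
    ("pool1",  3, 2, 0),
    ("conv2",  5, 1, 2),
    ("pool2",  3, 2, 0),
    ("conv3",  3, 1, 1),
    ("conv4",  3, 1, 1),
    ("conv5",  3, 1, 1),
    ("pool5",  3, 2, 0),
]
for name, k, s, p in stages:
    size = conv_out(size, k, s, p)
    print(f"{name}: {size}x{size}")
# The final 6x6 maps (256 channels) are flattened and fed to the
# three fully connected layers: 4096 -> 4096 -> 1000 classes.
```

Tracing the sizes (227 → 55 → 27 → 13 → 6) shows how each stage compresses the image spatially while the channel count grows, concentrating increasingly abstract features.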

In 2012, Hinton, Sutskever, and Krizhevsky founded the startup DNNresearch, which Google bought just a year later. The first product based on the work of Hinton's group was image search in Google+, powered by computer vision. At the time, this search algorithm was incredibly advanced: it was one of the first products in which the computer did not just sort through photo labels and tags, but understood what was depicted in them.

An example of machine vision in Google+ image search. Source


Sutskever at OpenAI

In 2015, Sam Altman, then president of the famed startup accelerator Y Combinator, invited Sutskever, Elon Musk, and Stripe CTO Greg Brockman to a business dinner. The meeting resulted in the founding of OpenAI, initially a nonprofit research company (the for-profit arm of the business emerged only in 2019).

The startup declared its goal: to create safe AGI, artificial general intelligence, for the benefit of humanity. It immediately became a unicorn, having received $1 billion in pledged investments. At OpenAI, Sutskever became Research Director and later Chief Scientist.

Until the launch of OpenAI's first public products, the DALL-E image generator in 2021 and ChatGPT in 2022, little is known about Sutskever's activities at the company. This is largely due to OpenAI's extreme secrecy at that stage of its development: employees, for example, were strictly forbidden to communicate with the press without permission from the PR team. And Sutskever was not one of the executives through whom the company spoke to the general public.

We do know that Sutskever is the brains behind OpenAI’s strategic bet on continually increasing model size. While many scientists and developers believed that deep learning was fundamentally insufficient for AGI and that a new revolutionary idea was needed in this area, OpenAI continually scaled and improved its deep learning designs, which ultimately led to its success.

Sutskever has always been concerned about the risks of AI in general and AGI in particular, concerns characteristic of those now called AI doomers. They seem to have peaked during his time at OpenAI. Since the release of ChatGPT, he has appeared in public more often, and in almost every talk he has spoken about both the possibilities and the risks of artificial intelligence, and about the importance of ensuring its safety.

At times, he painted eloquent and terrifying scenarios in which AI is used to enslave people under new totalitarian regimes or to create terrible new diseases. Perhaps, were someone else in Sutskever's place, the world community would think much less about the potential risks of advanced AI.

What distinguishes Sutskever from most of his industry colleagues is the belief that as AI evolves, it gains an ever deeper understanding of our reality, rather than merely learning to imitate that understanding ever more skillfully. Back in February 2022, he wrote on Twitter that "it may be that today's large neural networks are slightly conscious", a statement other AI experts ridiculed.

In June 2023, already in the GPT-4 era, Sutskever took charge of OpenAI's strategic "superalignment" initiative. Alignment means keeping AI behavior in line with human intentions, values, and goals; superalignment means ensuring that AGI will be safe and fully under human control before it even exists.

OpenAI allocated 20% of its computing power to the project. However, we never saw its results under Sutskever's leadership: everything soon turned upside down.

From the OpenAI Coup to Our Own Superintelligence

Sutskever came to the broad public's attention only in 2023, after the failed coup at OpenAI. On November 17, 2023, OpenAI's board of directors abruptly, and without clear public explanation, removed Altman as CEO.

Apparently, the main role in this was played by Sutskever, who at the time sat on the company's board of directors. According to one version, he accused Altman of steering the company in an overly commercial direction and ignoring its mission of developing AI for the benefit of humanity.

Emmett Shear, the man brought in as interim CEO, advocates slowing the pace of AI development for safety reasons.

However, according to another version, the board fired Altman not so much over the safety of the company's work as over how opaque his behavior was. According to former board member Helen Toner, in November 2022 board members learned about the release of ChatGPT from social media; no one had warned them personally. At times Altman lied outright, misrepresenting information. AI safety figures in this version too, as one of the main subjects of Altman's concealment and lies, but the focus was on the CEO's conduct in general.

On November 20, Sutskever walked back his position and stated that he did not want to harm OpenAI, and on November 22, under pressure from the overwhelming majority of OpenAI employees, who came out in support of Altman, as well as from Microsoft, the company's main investor and partner, the CEO was reinstated.

After this defeat, Sutskever's days at OpenAI were numbered. He did not appear in public for six months, and the question "Where is Ilya?" became a meme in the AI community. In May 2024, he officially left the company.

Three days after Ilya's official departure, OpenAI presented GPT-4o, a new version of its multimodal neural network capable of working as a full-fledged voice assistant. The style of this version is very different: where ChatGPT used to be extremely neutral, even dry, in its wording, it now jokes and even flirts.

Under Sutskever, ChatGPT could not yet talk, but the tone of its text responses recalled professional, somewhat detached androids like C-3PO from Star Wars or Data from Star Trek. When ChatGPT finally spoke, after Sutskever's departure, it sounded like the artificial intelligence in Spike Jonze's Her, whose voice makes Joaquin Phoenix's character fall in love with it.

And already in June 2024, Sutskever announced the creation of his own startup, Safe Superintelligence. His partners are Daniel Levy, a former OpenAI executive who previously worked with Sutskever at Google, and Daniel Gross, formerly Apple's director of AI.

The startup aims to create superintelligence (a hypothetical form of AI that greatly exceeds the capabilities of human consciousness) as a pure research effort, with no intention of commercializing it in the short term. Sutskever says the company must be insulated from all external factors that could affect its work; accordingly, the startup has not disclosed its investors or funding amounts.

There is as yet no way to judge what Sutskever's new company will actually do. Perhaps, instead of becoming a genuine ethical alternative to OpenAI, it will become its commercial clone, as Anthropic, also founded by former OpenAI employees, did. But Ilya's stubborn commitment to the idea of safe AI gives hope that his team will not put profit above the interests of humanity. Plus, Sutskever is not Altman.
