Mr. Altman's Imaginarium: A Beautiful Faraway Place?

Vegetarian pasta on the table, wine to go with it, loud music playing: all of it to take his mind off the eventful Thanksgiving of 2023. OpenAI CEO Sam Altman, feeling drained and out of options, retreated to his ranch in Napa after nearly losing control of the company that “holds the future of humanity in its hands.”

In the end, everything worked out: Altman was fired, returned to the company five days later, and landed on the cover of Time. Under his leadership, OpenAI went back to shaking up the information space with new neural-network wonders.

Futurologists and forecasters began vying with one another to predict what was coming. Interestingly, it was once writers like Jules Verne and H. G. Wells who painted the future, describing incredible technologies that have since become reality. Today, however, the role of technological Vanga is increasingly played by the top managers of IT companies: the people directly involved in creating that future.

Sure, there's a hint of PR in their predictions, since everyone sees the future through the lens of their own products. But when someone like Sam Altman speaks, it's worth listening: the legions of visionaries and tech evangelists rolling out neural-network forecasts don't have the insider view of a key figure at the most important AI company on the planet.

Sam Altman recently published an essay on his website, and it contains a number of interesting points I want to think through, armed with the “hints” in the text by the head of OpenAI.

Judging by the reviews, the techies are not happy: a lot of “fluff” and rapturous odes to the wonderful future AI will bring us, but no specifications, no parameter counts, nothing precise or concrete. Pure literary fiction! Indeed, this AI manifesto largely follows the traditions of the science-fiction writers, with one difference: it may contain hints about where OpenAI is heading.

The first thesis is the emergence of an AI team of virtual experts in different fields working together. This is probably about the development of the GPT agent system: extensions that let the chatbot perform specific tasks. There is already an extensive catalogue of chat plugins for various tasks, and chatbots can be connected to external services and to one another. OpenAI clearly has Apple-scale ambitions; Apple's App Store revolutionized the mobile device industry. As the industry leader, OpenAI is building an ecosystem for a new market and has already invited many companies to create their own chatbots inside ChatGPT as agents.
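To make the “agent plus plugins” idea concrete, here is a minimal sketch of the pattern: specialized tools are registered under names, and an agent step routes each task to the right one. This is an illustration of the concept only; the function and tool names are hypothetical and this is not OpenAI's actual API.

```python
# Minimal sketch of the agent/plugin pattern: register tools, route tasks.
# All names here are hypothetical, not OpenAI's real interface.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as a callable 'plugin' under the given name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("weather")
def weather(query: str) -> str:
    # A real plugin would call an external weather service here.
    return f"Forecast for {query}: sunny"

@tool("calculator")
def calculator(query: str) -> str:
    # Toy arithmetic evaluator; builtins disabled for safety.
    return str(eval(query, {"__builtins__": {}}))

def route(task: str, query: str) -> str:
    """The 'agent' step: pick a registered tool by name and delegate."""
    if task not in TOOLS:
        return f"No tool for task '{task}'"
    return TOOLS[task](query)
```

In a real agent system the routing decision itself is made by the model, which picks a tool based on the user's request; here it is reduced to a dictionary lookup to keep the sketch self-contained.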

Altman goes on to write that children will have virtual tutors in any subject, that healthcare will see similar improvements, and that it will be possible to create any kind of software.

There is a nuance here: the concept of virtual tutors looks promising, but the cost of AI compute is still quite high, so a full-fledged educational process through ChatGPT may not yet be economical. Sam does not say how the cost of compute can be reduced, only that it definitely needs to be done: “If we do not create sufficient infrastructure, AI will become a very limited resource.”
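The compute-cost objection is easy to make tangible with a back-of-the-envelope calculation: tokens exchanged per tutoring session times the price per token. All numbers below are hypothetical placeholders for illustration, not real pricing.

```python
# Back-of-the-envelope cost of one AI tutoring session.
# All rates and token counts are made-up placeholders, not real prices.

def session_cost(tokens_in: int, tokens_out: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Dollar cost of one session given token counts and per-1k-token rates."""
    return (tokens_in / 1000 * price_in_per_1k
            + tokens_out / 1000 * price_out_per_1k)

# Example: a long session exchanging ~20k input / 10k output tokens
# at placeholder rates of $0.01 / $0.03 per 1k tokens:
# 20 * 0.01 + 10 * 0.03 = $0.50 per session.
```

Half a dollar per session sounds cheap until you multiply it by millions of pupils and hundreds of sessions a year, which is exactly why Altman keeps returning to the infrastructure question.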

As for tutoring, AI has one important feature: you can ask the neural network as many stupid questions as you like! It sounds funny, but I'm sure many have done it, writing requests to explain something it would be embarrassing not to know in polite society =) Of course, you can google it, but it's much more convenient to type your question, however clumsily: the neural network will understand, won't judge, and will explain everything. This ability to answer stupid (but actually important) questions gives ChatGPT an undeniable advantage over flesh-and-blood mentors.

Regarding healthcare: here the Overton window, it seems, has not yet opened to the point where a person will trust a neural network in serious cases. Yes, as the media reports, the diagnostic capabilities of neural networks are improving, and it is believed that people may well turn to a chatbot for a first-pass diagnosis. But it is also known that ChatGPT can still invent and confabulate, so there is reason to believe that even the human error rate of flesh-and-blood doctors will not discourage people from visiting clinics. It's another matter that the doctors themselves, especially the less qualified ones, can consult that same ChatGPT. Well, maybe in that case it's for the best?

I readily believe in “the ability to create any software” via prompts, although my programmer friends are shaking their heads in disbelief. At the very least, making edits to already generated code can be problematic. Although I could be wrong; I can neither confirm nor deny this. Programmers, what do you think?

By the way, Altman admits that AI “could significantly change the labor market (both positively and negatively) in the coming years,” but “most jobs will change more slowly.” So far I can see clearly that AI poses no threat to plumbers and plasterers. The head of OpenAI also writes that new professions will appear that we don't even consider work today. Here one can't help recalling the “new black” of the info-hucksters: prompting. It is incredibly amusing to watch the ability to put thoughts into words and describe a request in detail being presented as know-how that must be learned (and a course bought!), because this is supposedly the profession of the future. However, with the arrival of the o1-preview model a trend toward simpler prompts has emerged, and neural networks themselves can be used to write prompts, so the very notion of prompting as a profession of the future is under threat.
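The “neural networks writing prompts” point is simple to illustrate: instead of crafting a detailed prompt by hand, you wrap a rough request in a meta-prompt that asks the model to do the rewriting. The template below is a sketch under my own assumptions; the function name and field structure are hypothetical, and a real setup would send this text to a model.

```python
# Sketch of meta-prompting: ask the model to turn a rough request
# into a well-structured prompt. Template and names are illustrative.

def meta_prompt(rough_request: str) -> str:
    """Build a prompt that asks a model to rewrite a rough request
    into a clear, detailed prompt."""
    return (
        "Rewrite the following rough request as a clear, detailed prompt.\n"
        "Include: the role the assistant should take, the task, any\n"
        "constraints, and the desired output format.\n\n"
        f"Rough request: {rough_request.strip()}"
    )

# Usage: feed the result to the model as a message, e.g.
# meta_prompt("explain quantum stuff simply")
```

If a one-liner like this can replace a paid prompt-engineering course, the “profession of the future” pitch does start to look shaky.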

“With these new capabilities we can achieve universal prosperity,” Sam writes, adding that with the development of AI all physical laws will be discoverable and virtually unlimited energy sources will become available. Alas, few people seem to believe this populist rhetoric. AI is already replacing a number of low-level professions, forcing many to “retrain as building managers.” It is obvious that AI is a platform for a new technological arms race, and in the literal sense of “arms”: artificial intelligence is plainly a “dual-use” technology. What weapons will soon be controlled via the ChatGPT API, or a “military” analogue of ChatGPT, one can only guess.

“We may get superintelligence in a few thousand days; it may take longer, but I'm confident we'll get there.” This phrase may hold the most important insight: artificial general intelligence (AGI) is coming. Two thousand days is about 5.5 years. If there is more to this forecast than idle musing, then a technology able to match human intelligence on many tasks without special training could arrive relatively soon. Sam Altman has long spoken of the inevitability of AGI. Rumor has it that OpenAI's board was frightened by what the new AI could do and hastened to fire Altman, then changed its mind. And here it is unclear: either things really are so revolutionary that Altman's colleagues fear a machine uprising, or the new AI models, instead of discovering all the laws of physics, can be “jailbroken” into issuing recipes for bioweapons…

So, for now, the essay suggests the following near-term directions:

  • Development of the GPT agent ecosystem and an App Store-style marketplace, with chatbots as the new applications.

  • Development of models capable of “self-reflection”, such as o1.

  • These reflective models as the basis for a mass-market proto-AGI.

  • Meanwhile, building out infrastructure, because without plenty of chips and data centers all these wonders will remain very far away.
