AI Bubble? You're Just Using It Wrong

Articles claiming that AI is a badly overheated topic appear on Habr regularly, and not just once or twice. So, are we in for a crash like the dot-com bust of the early 2000s?

TLDR – No.

However, a wave of disappointment is indeed coming, because expectations are inflated. Why? In my opinion, the hype here resembles the hype around "get into IT" courses, and much of it is created by the creators (pardon the tautology) of those same courses.

I'll give two examples that illustrate what is wrong with how AI is being used right now.

The first example happened to a friend who works at a large European financial company. On the wave of ChatGPT hype, the company created an AI department. Well, not really a department: two people, one the boss, the other his deputy. The important detail is that this department (as in most companies that don't specialize in AI) is not R&D but a perfectly ordinary department, subject to the same management requirements as everyone else. That means it must report on what it has developed and implemented, even if that comes at the expense of the company as a whole. The other teams have to fend them off constantly. Here, for example, is one of their bright ideas.

The company has a scoring application that, based on user data, makes the initial decision on whether to approve a financial transaction for the user. The proposal: deploy a network inside the company, train it on the behavior of this program (or better yet, feed it the program's source code), and let the network issue these decisions through a chatbot.

[Image: the reaction of the developers of the original program]

Firstly, the program already exists and has been working for quite a long time. Secondly, there are no ambiguous inputs that an AI assistant would need to clarify with the user. Thirdly, there is the PCI compliance question: wouldn't the chatbot end up retaining this data inside the network as a side effect of retraining? Fourthly, the whole thing is supposed to run on Azure servers, albeit with guarantees from Microsoft that nothing will leak anywhere.

In other words, they proposed solving a problem that had already been solved.

The second case happened to me personally. A friend of mine, an entrepreneur, reached out. He provides services: people write to his website describing their problem, what type of equipment they need and in what volume, and he quotes them a price. He has a perfectly clear algorithm for how he calculates it. His idea: what if we train a Telegram bot to talk to users about all this, quote the price, and even close the deal? He had already found several builders of such AI bots, including ones based in Russia (I won't name them, so this doesn't look like advertising). After a long discussion I talked him out of it. Here are the arguments:

  • Even banks usually trust bots only with technical-support questions, handing the conversation off to a human operator when necessary. You cannot quickly take out a loan through a bot, and there are reasons for that.

  • The task "ask the user leading questions and give an approximate estimate" is a good one, and it really is best solved with the help of AI, just not in the way managers picture it today. In real dialogues with users you can identify a number of distinct phases (questions about authorization, about data storage and logging, about data distribution, about the availability of a warehouse accounting system, and so on). This can and should be designed as a decision tree: on a page in the system the user picks one of the possible options for the first phase, then for the second, and so on, and based on the answers we output the cost. Here it makes sense to hand the AI all of these cases as a document and ask it to formulate a decision tree, in any suitable programming language, that walks through the phases, and then deploy a web app with that tree somewhere in the cloud (a sketch follows this list). Why is this better?

  • Legal issues. Remember that GPT-style models select the next token with an element of randomness. Two users who ask the bot identical questions (down to the last comma) can therefore receive slightly different answers and not-so-slightly different prices (a toy illustration follows this list). Russian legislation, for example, directly prohibits this, just as it prohibits reading the User-Agent header and quoting a price 10 percent higher when the request comes from an iPhone, on the assumption that the owner can pay more. You could argue that with GPT there is no direct intent, but how the regulators will treat this is hard to predict.

  • Problems with math. Counterintuitively, because of how the LLMs under GPT's hood work, it is much easier for them to write a program that performs a calculation than to perform the calculation themselves. Tasks like "given the length and width in feet, give the area in square meters" are genuinely hard for them, yet they can easily produce a recipe for solving the task, both in plain language and in code (see the sketch after this list). So I am deeply skeptical of a bot that asks a series of leading questions and then quotes a price by multiplying something internally. It is easier to make the AI help write the program.

  • Energy and ecology. If we use AI once to write the program, we get a very simple decision tree that could run on a first-generation Pentium if need be. If we invoke AI for every single client, the energy costs are of a completely different order. Tellingly, the site of the bot builder my friend sent me quotes prices starting at 990 rubles per client (I hope they don't have tricky billing that, Amazon-style, quietly gobbles up CPU and then demands 100 thousand from you). A decision-tree program is far more predictable: for 900 rubles you can rent a server of your own and host a dozen such applications on it. With preemptible quotas, and given how rarely people visit, you could even fit into a couple of hundred.
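
As promised above, here is a minimal sketch of what such a generated decision tree might look like. The phases, options, and price modifiers are invented for illustration; in practice you would have the LLM generate the tree once from your pricing document, review it, and deploy it as an ordinary web app:

```python
# A hypothetical decision tree for quoting a price. No LLM is involved
# at request time: the tree is generated once and then runs as plain code.

BASE_PRICE = 10_000  # rubles; an invented base rate

# Each phase is a question with a fixed set of answers, and each answer
# carries a price multiplier (all values here are made up).
PHASES = [
    ("Do you need user authorization?", {"yes": 1.3, "no": 1.0}),
    ("Do you need data storage and logging?", {"yes": 1.5, "no": 1.0}),
    ("Do you need a warehouse accounting system?", {"yes": 2.0, "no": 1.0}),
]

def quote(answers: list[str]) -> int:
    """Walk the phases with the user's answers and return a price."""
    price = BASE_PRICE
    for (_question, options), answer in zip(PHASES, answers, strict=True):
        price *= options[answer]
    return round(price)

# Identical answers always produce an identical price, which is exactly
# the determinism a generative chatbot cannot guarantee.
print(quote(["yes", "no", "yes"]))  # 26000
print(quote(["yes", "no", "yes"]))  # 26000, every time
```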
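
And a toy illustration of the token randomness mentioned in the legal point above. The "vocabulary" and the probabilities are made up; the point is only that a GPT-style model draws the next token from a distribution rather than picking it deterministically:

```python
# Toy next-token sampling: identical prompts can yield different answers.
import random

vocab = ["12 000 rubles", "12 500 rubles", "13 000 rubles"]
probs = [0.5, 0.3, 0.2]  # an invented distribution over next tokens

def sample_answer() -> str:
    # random.choices draws according to the weights, so two identical
    # requests may be answered with different prices.
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_answer())  # may print "12 000 rubles"
print(sample_answer())  # may print "13 000 rubles" for the same prompt
```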
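
Finally, the feet-to-meters task from the math point: the kind of "recipe" an LLM writes easily, even though doing the same arithmetic inside a dialogue is unreliable. A minimal sketch:

```python
# Rectangle area: length and width in feet, result in square meters.

FT_TO_M = 0.3048  # one foot in meters, exact by definition

def area_sq_meters(length_ft: float, width_ft: float) -> float:
    """Convert each side to meters, then multiply."""
    return (length_ft * FT_TO_M) * (width_ft * FT_TO_M)

print(round(area_sq_meters(10, 12), 2))  # 11.15
```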

These arguments convinced my friend that he had been hasty. So what can we conclude from all this?

  • The various AI bot builders and their advertising campaigns exploit essentially the same hidden desires as "get into IT" courses. The courses promise: "study here for six months and land a programming job paying 300 thousand a month." The bot builders promise: "get rid of the programmer and save 300 thousand a month." LLMs are genuinely useful and can dramatically increase a programmer's efficiency, but they cannot replace him.

  • In general, the whole LLM saga reminds me of the hype phase around Domain-Specific Languages (DSLs). They too were once sold with the pitch "with these languages, ordinary employees can do all the work themselves, and programmers will no longer be needed." That turned out to be wrong. But it does not mean DSLs disappeared: they are still around and used intensively. They simply never worked as a replacement for programmers or as a way to save on programmers' labor.

  • Which is why LLMs are in for a significant rollback in interest in the technology. Once people become convinced that an LLM does not let them solve most problems on their own, without turning to specialists, many startups will burst, because their prospective client base will evaporate. That does not mean LLMs themselves will disappear; their use will only become more widespread.

What do you think about this?
