Outsourcing programming to a country called AI

A meeting from the near future?

In this note, I want to share my own experience of using AI tools in my software projects, as well as my thoughts on the reality of the complete transfer of programming into the hands of AI and, thus, the disappearance of the programmer profession.

At first glance, such expectations are not groundless. Indeed: the quality of responses to general queries rose from roughly 20% to roughly 80% between GPT-2 and GPT-3.5, purely through more training data and more powerful servers. So maybe we should feed the system all open-source software projects, buy some more graphics cards, and replace all or most programmers with AI? In other words, outsource programming to a country called AI?

First, about outsourcing…

Some are no longer there, and those are far away…

A.S. Pushkin

I have been involved in programming and IT architecture my entire professional life. For the last thirty years I have been doing this in Germany.

I won't rub salt into the wounds and recall that three decades ago the German industry in general and IT in particular were on the rise, but today…

I will focus on outsourcing in the IT industry.

Starting around the mid-2000s, the managers of the then German IT giants began trying intensively to outsource programming: first to India, then to China, until finally, after many failures and losses, they settled on Russia and the countries of Eastern Europe. Although attempts to find a happy partnership in India continue to this day.

The concrete financial benefits of outsourcing were never obvious, but successes were proclaimed from above, consultants collected their fees, and managers collected their bonuses and moved on to ruin other companies.

And programmers, testers, and administrators were offered severance pay, or to try to retrain as managers or architects.

But here's the problem: the outsourcing teams started to acquire their own architects, who, being close to the developers, understood the problem domain much better than those who remained in the customer companies. So project managers removed their own architects as an unnecessary link.

Then many project managers had to leave for the same reason: senior managers and the direct customers of projects also began to see that the employees of the outsourcing firms were far more competent.

This is a picture painted with a broad brush; it reflects the general trend and may differ in its details from case to case. What matters is that over these decades Western European IT has raised a crop of managers for whom the question is not whether to outsource. They have been taught that outsourcing is necessary; the only question is to which country.

And then ChatGPT appeared, and the consulting sirens sang sweet songs about how AI would soon completely replace programmers. So maybe it really is possible to "outsource" a software project not to some distant or nearby country, but to AI?

Outsourcing has not passed me by either. After many years of conscientious work, two companies (first one, and a few years later the other) offered me severance pay to quit, their managers having believed the consultants' promises of outsourcing bliss. But like a surfer who changes tack, finds a suitable wave, and rides it to shore, I worked on successfully until retirement age. And, looking back, I will note that I left those ships, which had set course for outsourcing, just in time.

As a retiree, I work on my personal software projects and… I am intensively trying to apply AI in my work.

Now about AI…

Just a few years ago, most people on Earth, with the exception of a small group of enthusiasts, regarded the achievements of AI with irritation. People were not pleased to learn that yet another machine had been invented that beat everyone at this or that game.

And while this was the case, AI had little practical benefit.

But with the advent of ChatGPT 3.5, the situation changed dramatically. This version of AI was not focused on a single narrow task; it began to solve many tasks well enough that one could expect a real practical effect. It would have been a sin not to try to achieve it.

So, when ChatGPT 3.5 demonstrated its capabilities, I devoted a lot of my time to it.

To start, I made a bot in Telegram that forwarded user requests to ChatGPT and its responses back to the user.

At that time, ChatGPT could not yet perceive or generate audio, so this basic functionality had to be wrapped with Speech-to-Text and Text-to-Speech converters, an audio format converter, and minimal conversation logic.
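The pipeline just described can be reduced to a rough sketch. The function names below are my own illustration, not a real library API; an actual bot would use something like python-telegram-bot for the transport, Whisper for transcription, and the OpenAI API for the answer in place of the stubs.

```python
# Hypothetical sketch of the bot's voice pipeline: STT -> LLM -> TTS.
# All three converters are stubs standing in for real services.

def speech_to_text(audio: bytes) -> str:
    """Stub for the Speech-to-Text converter (e.g. Whisper)."""
    return audio.decode("utf-8")  # pretend the audio is already text

def ask_llm(prompt: str) -> str:
    """Stub for the request forwarded to ChatGPT."""
    return f"Answer to: {prompt}"

def text_to_speech(text: str) -> bytes:
    """Stub for the Text-to-Speech converter."""
    return text.encode("utf-8")

def handle_voice_message(audio: bytes) -> bytes:
    """Minimal conversation logic: transcribe, ask, synthesize."""
    question = speech_to_text(audio)
    answer = ask_llm(question)
    return text_to_speech(answer)
```

The point is how thin this wrapper is: the bot itself contains almost no intelligence, only format conversion around the model.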

But the whole thing worked pretty quickly, which inspired me to do a relatively deep study of the mathematical foundations of GPT.

Almost immediately, I started trying to solve my current programming problems using OpenAI.

Doing this via the input field on the OpenAI page was quite awkward, so I was very happy when GitHub Copilot appeared as a plugin for VS Code and IntelliJ, and then Gemini in Android Studio. And of course, I installed Microsoft Copilot as soon as I heard about it. (By the way, the title image was drawn by it under my artistic direction.)

Why am I telling this? No, not to brag, as some people thought :-). I want to emphasize that I have been using AI (mainly GitHub Copilot) quite intensively in my practical work for about two years now.

At the same time, I am not under the pressure of project deadlines, where the interests of delivery prevail over the interest in doing the work well while properly mastering the available technologies. On the other hand, I am not obliged to declare success in using AI, since no one instructed me to use it in my work.

Thus, I had the opportunity to calmly and thoughtfully apply AI in my daily project activities and even experiment with it a little. From time to time I also watched YouTube videos and read articles about best practices for these tools, and became convinced that they show and describe nothing beyond what I had already arrived at myself.

The most amazing thing I've noticed for myself after two years of using AI in the form of GitHub Copilot is that I feel a psychological attachment to it, like to a colleague.

To explain this in more detail, I need to talk a little about my attitude towards my colleagues.

About colleagues in general and GPT in particular

My professional life has developed in such a way that I had to work a lot on projects where a quality product had to be created quickly. This is only possible if the team has the required number of high-quality professionals who are also ready to creatively cooperate with each other. I will note that this is a general rule for any creative collective projects. It is true for rock bands and football teams.

Over the years, I've developed a knack for quickly evaluating colleagues (whom I rarely had the choice of, and they rarely had the choice of me). And when AI appeared on the horizon in the form of the GitHub Copilot plugin, I instinctively began to apply my criteria to it, too.

I began to analyze what he could do better than me and what worse, how easy and pleasant it was to communicate with him, and what new, more difficult tasks I could try to assign to him.

Here are my assessments of his abilities based on almost two years of cooperation.

What AI (GitHub Copilot) does well, often better than me:

1. Translation from one human language to another.

2. Generation of simple program texts from a precisely specified specification, assuming calls to two to five functions.

3. Documenting functions and small classes.

AI is especially good at the second type of task. In such cases it is faster to ask the AI than to search through documentation, especially for a technology I haven't touched for a couple of months. For example, most of us use Gradle and GitHub Actions in our projects, but the need to change something in those scripts arises about once a quarter, and by then the intricate details of these technologies have evaporated from my head.
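To make the second point concrete, here is the kind of "precisely specified" task I would happily hand to the AI: "write a function that reads a text file, lowercases it, splits it into words, and returns the number of distinct words." The spec and the function name are my own illustrative example; the expected result, a handful of library calls, is something like:

```python
def count_distinct_words(path: str) -> int:
    """Return the number of distinct words in a text file.

    Illustrative example of a small, precisely specified task
    (two to five function calls) that Copilot handles well.
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # lowercase, split on whitespace, deduplicate, count
    return len(set(text.lower().split()))
```

For tasks of this shape there is essentially one correct answer, which is exactly why the AI rarely gets them wrong.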

The AI's responses in the plugin come wrapped in very polite explanations and are often accompanied by friendly, practical advice, which is not so often the case when communicating with flesh-and-blood colleagues.

This creates a reciprocal sympathy and affection. It is more than the pleasure of using some wonderful tool and the desire to use it.

I have had to tell myself many times: “Stop! It is not the person sitting on the other end of the line who wrote this code or answered your question. It is a soulless system, an automaton!”

For all its friendliness, AI constantly gave reasons for irritation. Here are the most important of them (on my subjective scale):

  1. AI solutions often look like they work, but… they don't. Unfortunately, the AI still does not check the code it proposes.

  2. When you push an AI to improve its solution, the result usually only gets worse.

  3. AI often “does not understand” the logic of the problem and tries to solve it too literally.

  4. AI does not learn from its mistakes. When searching for a solution together, it “without remorse” re-proposes solutions rejected a couple of iterations ago.

  5. AI still has big problems with planning large tasks (more than 10 lines of code), not to mention larger solutions that require architectural thinking.

  6. AI is “lagging behind” in many fast-growing areas, such as Angular with its six-month release cycles or Kotlin Multiplatform, for which there is not much material available online yet.

The mathematics professor Alexandre Borovik, having experimented with OpenAI's models by giving them student problems from a university course in higher mathematics, compared the AI to an impudent C-student who knows only the highlights, never admits his own ignorance, and bravely takes on any mathematical problem.

For my part, I would compare AI in the guise of GitHub Copilot to a strange colleague with very fragmentary but surprisingly deep knowledge of individual areas of programming and an enormous stock of details, who is nevertheless poor at applying that knowledge and still has a lot to learn.

Outsourcing to the land of AI?

And here we return to the question posed at the beginning of this article, but in two specific versions:

  1. What percentage of my activities in my specific projects can be delegated to AI today?

  2. Can we expect this share to approach 100% in the coming years?

What is the current efficiency of AI in programming?

My personal assessments of the usefulness of AI in my specific programming activity coincide with many other similar assessments. Yes, in some special types of activity, for example, writing routine tests for the functions of some well-known interface, AI is several times more effective than me. But I don’t have to do this very often. And if we consider everything that I do for the project during the week, I estimate its help at the level of 10-12%. For programmers in companies, this figure will be even lower, since they need to participate in various meetings and do other activities not related to programming.

Has AI become more efficient in recent years?

My answer to this question: if its efficiency has grown at all, it has grown only slightly.

On the one hand, I take my hat off to the developers of the GitHub Copilot plugin. It was updated almost daily, became more and more attractive, and worked ever more stably.

On the other hand, I agree with the opinion that the system suffers from an effect called "AI dementia": it seems to me that the number of inadequate answers to my questions grows over time.

I find two possible explanations for this phenomenon plausible. First, the system is being "fed" with ever less valuable information sources, so the quality of the parameters stored in the system is falling. Second, in recent years AI systems have themselves become intensive sources of information garbage on the Internet, specifically of software solutions never tested in practice, thus closing the vicious circle.

Can we expect a sharp increase in AI programming efficiency in the coming months or years?

So far my personal forecasts in this regard are negative. My main arguments are as follows:

1. Modern technologies, when they develop at all, develop at a monstrous speed. (Perhaps we really are moving towards the singularity, but that is a separate topic.) In the application of AI to programming we have seen no such development over the last couple of years. The technology has become more convenient, has reached the masses, and has acquired additional services, but its core is stuck in place.

2. Hopes for progress in this area are associated primarily with the GPT approach. It is far from the only approach in AI, but it is not yet clear how successes in image or speech recognition could be applied to automating programming. And in the GPT area itself, after the stunning success of ChatGPT 3.5, progress has ceased to be exponential.

3. We have not yet heard of new AI approaches that could actually replace GPT in programming automation (at least I haven't).

4. Attempts to "strengthen" the GPT approach with RAG (Retrieval Augmented Generation) [1], the LLM-Modulo framework [2], or Multi-Agent AI Systems [3] have not yet produced any breakthrough results either.
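For readers unfamiliar with the first of these, the core RAG idea can be reduced to a toy sketch: retrieve the document most relevant to the query and prepend it to the prompt before calling the model. The scoring below is plain word overlap purely for illustration; real systems use vector embeddings, and all names here are my own.

```python
# Toy illustration of Retrieval Augmented Generation (RAG):
# augment the user's query with retrieved context before the LLM call.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query.

    Word overlap stands in for the embedding similarity that
    production RAG systems actually use.
    """
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}"
```

The hope is that fresh, project-specific context compensates for what the frozen model does not know; in my experience, so far it helps at the margins rather than changing the picture.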

So… outsourcing to the country of AI, as it seems to me, is postponed for now.

Literature

  1. Full Fine-Tuning, PEFT, Prompt Engineering, and RAG: Which One Is Right for You? https://deci.ai/blog/fine-tuning-peft-prompt-engineering-and-rag-which-one-is-right-for-you/

  2. Kambhampati, S., Valmeekam, K., Guan, L., Verma, M., Stechly, K., Bhambri, S., Saldyt, L., & Murthy, A. (2024). LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks. arXiv:2402.01817.

  3. Combining LLMs with Other AI Tools: One Desirable Future of Intelligent Systems. https://medium.com/codex/combining-llms-with-other-ai-tools-one-desirable-future-of-intelligent-systems-a7c747a99c04


Illustration made by Microsoft Copilot at the request of the author.


We discussed this topic in my Telegram group “Materialization of Ideas” (@rpseru). But in general, the group is dedicated to mental models of programming and our lives. Come in, take a look. If you find it interesting, stay.


I'm also writing a book, Memoirs of a Nomadic Programmer: Stories, Happenings, Thoughts. It is available for reading at https://proza.ru/avtor/vsirotin


One of the projects in which I tried to use AI was KotUniL.

Why it is needed and how it can make humanity happy, I described in a series of articles beginning with this one: The Magic of Dimensions and the Magic of Kotlin. Part One: Introduction to KotUniL.
