We should have figured it out sooner

Cargo cults in development are becoming the norm; only the rituals change.

It's 2023. We have access to a volume of information that even the thickest encyclopedia would envy. And yet techies still tend to look for a silver bullet that will solve all their problems in one fell swoop. There are plenty of both kinds: those who are still searching, and those who think they have already found it.

Agile, microservices, DevOps, blockchain or artificial intelligence – we are forever trying to invent the Philosopher's Stone, the "Do it right" button. It's as if we're putting together a puzzle with one piece missing. And once we find it, the solution to life, death, and everything in between will suddenly become so simple and obvious that we will just look back and laugh: "Ha, incredible, it took us so long to understand such a simple thing! And all this time it was right in front of us!"

And now to the point: I believe that many of the hyped methods and technologies can bring real benefits, and in some cases adopting them is justified and even necessary. But here's the problem: as soon as hype and excitement build up around something, people dive in headfirst because, they say, "this is the future." And they don't even stop to think about what problems await them in that bright future.

What is the purpose of this article

I am writing this article because I am tired of news from the world of technology in which someone suddenly "returns to the good old methods and tools, because the new ones don't work." A recent example is the article about how the Prime Video team cut its costs by 90% by moving from microservices to a monolith. Today I'll take a look at these trends, including how they play out in TDD culture, and end with my thoughts on how to avoid the trap.

But first, I will clarify the meaning of the subtitle.

What is a cargo cult and what is its connection with technology?

For those who are not familiar with the term "cargo cult", I'll explain. During World War II, huge amounts of cargo with supplies for the troops were dropped onto US military bases on islands in the Pacific Ocean. The war ended, and the cargo stopped coming. The locals, however, were not happy with this turn of events. They began to imitate the soldiers: copying their style of dress and their habits, and building life-size straw replicas of airplanes. In this way they hoped to resume the supply of wonderful gifts from the sky.

They believed that the outsiders had some kind of magical connection with the spirits of their ancestors, thanks to which they received all these riches. The naive islanders hoped that by performing the same "rituals" as the soldiers, they too would be able to get containers full of goods.

Surely you can already see where this is going. With only fragments of knowledge, the islanders did not really understand what was happening, and they tried to achieve the desired result by blindly imitating what they saw.

Alas, I often observe similar trends in the tech world. People learn about a new technology because everyone is trumpeting it left and right, and they assume that the results someone achieved are a direct consequence of using that technology. At the same time, they pay no attention to the context and conditions in which that result was obtained. This is how misleading claims are born, like "with Agile your team will work faster", "the cloud is cheaper than on-premise", "unit tests make code safer and easier to maintain" or even "NeoVim is better than VS Code". There is some truth in each of them, no doubt. But rarely does that truth come without trade-offs.

Let’s try to demystify a couple of popular beliefs. Let’s start with the Prime Video example.

The Amazing Story of Prime Video and Distribution Fever Syndrome

I'll be honest: this article surprised me. Surprised me a lot. And not because the authors took a bunch of serverless components and consolidated them into a single application, but because they had decided to make the system fully distributed in the first place. This passage in particular puzzled me:

Some of the decisions we made are not obvious, but they were the ones that made it possible to achieve significant improvements. For example, we replicated the computationally "expensive" media conversion process and moved it closer to the detectors.

I understand that replicating computationally "expensive" operations should always be a conscious decision made after weighing the other costs, but the way it is presented here implies that the only "obvious" cost worth considering is the cost of computation. Everyone knows that CPU is expensive, but isn't it just as obvious that memory and network aren't free either?

I do not blame the team: they realized that the initially chosen approach did not work, corrected it, and shared their experience. I think it’s great. We all make mistakes from time to time, and learning from them is what makes us good engineers.

So what surprised me so much? The fact that the "default" approach was to make the system as distributed as possible right from the start. Alas, this is a fairly common mistake: I know quite a few stories of people who went too far with the idea of distributed systems. One particular case struck me as rather funny: a person, practically foaming at the mouth, advocated the use of a monorepository because it allegedly "makes it easier to work with microservices, especially in situations where the only thing a microservice does is store data from another microservice."

I smirk every time I remember this story. And the monorepository is not even the funny part: the very idea of keeping an entire system around just to store data for another system is absurd. I mean, can a microservice even be considered a microservice if it is meaningless in isolation?

It seems to me that such situations are a symptom of an illness that I call distribution fever syndrome.

The truth is that building distributed systems tends to come with additional overhead. At a minimum, you have to serialize messages, send them over the network, and deserialize them on the other side. On top of that, many other issues arise, such as the baseline footprint of each component: depending on the technology used, a module inside a monolith may need less than 10 MB of memory, while launching a separate process just to run one new function can take tens or hundreds of MB.
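
To make that overhead a bit more tangible, here is a minimal Python sketch. It is my own illustration, not code from the Prime Video article, and the convert_media function and simulated latency are purely hypothetical; but the shape of the extra work (serialize, cross the network, deserialize) is exactly what a distributed call adds on top of an in-process one:

    import json
    import time

    def convert_media(segment: dict) -> dict:
        # Stand-in for a computationally "expensive" step (hypothetical).
        return {"id": segment["id"], "frames": len(segment["data"])}

    def monolith_call(segment: dict) -> dict:
        # In-process call: the data never leaves memory.
        return convert_media(segment)

    def distributed_call(segment: dict) -> dict:
        # The same work, but now paying for serialization, a (simulated)
        # network hop, and deserialization on both sides of the wire.
        payload = json.dumps(segment).encode()      # client serializes
        time.sleep(0.001)                           # pretend network latency
        request = json.loads(payload.decode())      # server deserializes
        result = convert_media(request)
        response = json.dumps(result).encode()      # server serializes
        time.sleep(0.001)                           # pretend network latency
        return json.loads(response.decode())        # client deserializes

    if __name__ == "__main__":
        segment = {"id": 1, "data": list(range(100_000))}
        for name, call in (("in-process", monolith_call), ("distributed", distributed_call)):
            start = time.perf_counter()
            call(segment)
            print(f"{name}: {time.perf_counter() - start:.4f}s")

Multiply that difference by every chatty interaction between components, and the "hidden" costs start to rival the compute you were trying to optimize in the first place.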

However, depending on the size of the application and the complexity of maintaining it as a whole (especially if different teams work on different components of the solution), the costs of building a distributed system can easily pay off. So the real question is not which is better in the abstract, a monolith or microservices, but which approach better suits your specific maintenance needs.

Harmful practices

I have witnessed many similar situations. As a rule, the root cause is the same everywhere: a generally good concept or idea suddenly starts being extolled as the ultimate truth, to be followed strictly and blindly.

Let's take testing as an example. I've seen quite a few people zealously defend the idea that automated tests are the only legitimate way to verify code. But I have also met those who vehemently resist the idea of spending time on tests because, in their opinion, tests only slow programmers down. Often both camps can back up their opinion with personal experience. In a situation like that, how do you decide who is right and who is wrong?

In my opinion, the point is not whether testing is necessary in principle, but when and how it should be done. Quality testing, especially regression testing, can do wonders for system maintenance. Bad tests, on the other hand, can hamper maintenance and create a false sense of stability (while in reality the application is riddled with undetected bugs). Sometimes the difference between a good test and a bad one is simply what is being tested.
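
Here is a hypothetical Python sketch (the discount function is mine, not taken from any project mentioned here) of two tests that execute exactly the same code; only the first one actually verifies behavior, while the second merely produces the false sense of stability described above:

    def apply_discount(price: float, percent: float) -> float:
        # Return the price after a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_discount_behaviour():
        # A good test: pins down observable behavior, including the edge case
        # that would otherwise slip into production (a real regression test).
        assert apply_discount(100.0, 25) == 75.0
        assert apply_discount(19.99, 0) == 19.99
        try:
            apply_discount(10.0, 150)
            assert False, "expected ValueError for an invalid percentage"
        except ValueError:
            pass

    def test_discount_runs():
        # A bad test: the same lines are executed (coverage goes up!), but a
        # broken rounding rule or an inverted formula would still pass.
        assert apply_discount(100.0, 25) is not None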

Sometimes testing goes too far. Take, for example, the pursuit of maximum code coverage, or the belief in the superiority of unit testing and the conviction that absolutely everything outside the system under test (SUT) must be mocked. The irony is that for many TDD practitioners this approach is the norm, while Kent Beck himself (the author of Test-Driven Development) seems to hold a much more pragmatic position, judging by his answer to a Stack Overflow question about how deep unit tests should go:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Note that even the pioneer of TDD understands that only meaningful tests are useful. In my opinion, by focusing on code coverage rather than test quality, you de facto give up the benefits of testing altogether. And at that point, testing will slow your work down rather than speed it up.
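
And here is what the "mock everything that is not the SUT" style mentioned above can degenerate into. Again, this is a hypothetical Python sketch, with unittest.mock standing in for whatever mocking library a team might use:

    from unittest.mock import MagicMock

    class PriceService:
        def __init__(self, tax_calculator):
            self.tax_calculator = tax_calculator

        def total(self, net: float) -> float:
            # Bug: tax should be added to the net price, not subtracted.
            return net - self.tax_calculator.tax_for(net)

    def test_total_with_everything_mocked():
        calculator = MagicMock()
        calculator.tax_for.return_value = 0.0
        service = PriceService(calculator)

        service.total(100.0)

        # The only assertion is that the collaborator was called. Line
        # coverage for total() is 100%, the test is green, and the sign
        # error above ships to production anyway.
        calculator.tax_for.assert_called_once_with(100.0)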

How to break the vicious circle

If you've read this far, you've probably already recalled a whole bunch of similar troubles caused by Cargo Cult Driven Development. Hell, you were probably responsible for some of them yourself. But don't worry. Let's discuss how to break this vicious circle. Is it really enough to simply resist the temptation and the hype around the latest newfangled technology?

Frankly, no, that alone is not a solution. Hyped tools and approaches come to the fore for a reason. What matters is first understanding how the tool you are interested in works and what makes it special, and only then deciding whether it is worth investing in or better left alone.

Technology is always a story about trade-offs.

You are unlikely to find a "gem" that is perfect from every angle, in every situation, with any input. It is equally impossible to find something so awful that it cannot be used anywhere. That is why, when I'm introduced to something new, I try to learn not only about its advantages but also about its disadvantages and weaknesses. I find it incredibly useful to be aware of the shortcomings of a technology or practice before adopting it. At the very least, it helps me avoid the pitfalls others have already stumbled into: you work far more effectively when you know the limitations up front.

It is much easier to reduce cloud costs if you know which mistakes lead to huge compute bills. It is much easier (and faster) to develop on a codebase with well-thought-out, high-quality tests than on one where everything that moves gets tested. NeoVim lets you work faster and more efficiently than other code editors. And so on, and so forth.

Okay, okay, that last statement is perhaps a little biased. But the point, I think, is clear. Let's treat hyped things as interesting novelties worth exploring, and perhaps applying for their intended purpose. But never, and I mean never, take a technology out of context!
