Technoradar Lamoda 2020: what has changed in two years

A technology radar is a diagram showing the IT technologies and tools we use at Lamoda, grouped by field of application and adoption status. In 2018 we published a detailed article here on Habré explaining the radar as it stood at the time. Read on to learn what has changed in two years, and why we keep updating the radar regularly.


In fact, the technology radar is more than just a picture to show at a conference, although we use it that way too: it is great for giving a quick impression of the IT side of the company. In 2018 we were the only company using this kind of visualization for presentations; now at every conference we happily spot two or three new radars from other companies. What can I say – it really is clear and convenient!

But, firstly, the tech radar exists not as a single page of infographics but as a journal: we update it regularly, and the most interesting part is comparing radars from different months – you can try it yourself here.

Secondly, this picture is not a snapshot of a situation that evolved on its own. Changes on the radar reflect the work of the Architectural Committee. It includes leaders from all major IT areas, and every new technology that engineers want to use must first be tested and approved by the committee. We do this to keep the technology stack from ballooning: we run many complex and very different systems, so the stack is already quite impressive.

No single engineer can know everything already in use somewhere in the company. It often turns out that the tool they need has already been adopted by another team, in which case it is better to ask colleagues for expert advice than to reinvent the wheel. At this size of IT department, information does not spread well on its own, which is why we have introduced knowledge-sharing practices. The tech radar is one of them.

I will cover this in more detail below, but I want to stress one thing right away: the Architectural Committee's policy is not aimed at banning experiments with new technologies, but at making those experiments deliberate and manageable. Our engineers are not squeezed into a narrow set of languages and tools for their entire time at the company. On the contrary, thanks to a clear process for introducing new (or well-forgotten old) technologies, if an engineer grows bored with their current tools, we can almost always invite them to join an experimental project. They will not have to change direction completely, and they can keep applying the expertise accumulated on the system they worked on.

So, what has this policy led to over two years? The previous article covers the technologies in detail; here I will focus on the main changes and a few interesting points.

Let’s go through the four sectors of the radar one by one, corresponding to the main IT areas: Languages, Data Management, Platforms and Infrastructure, and Frameworks and Tools.

But first, a reminder of what the four rings (technology adoption statuses) mean on our radar:

  • ADOPT – technologies and tools that are implemented and actively used;
  • TRIAL – technologies and tools that have already passed testing and are being prepared for production (or are even running there already);
  • ASSESS – tools currently under evaluation that have no impact on production yet; only test projects use them;
  • HOLD – tools we have expertise in, but which are used only to support existing systems – no new projects are started on them.

Languages


As you can see, this sector has fewer blips than in 2018. Right now we have no languages in “Assess” or “Trial” status. Why? Because for every task we already have a language that suits us. Of course, we follow the field and know of interesting, promising languages that Lamoda does not currently use – Rust, for example – but in their capabilities they essentially duplicate the languages our systems are already written in. One goal of the radar is precisely to ensure that new technologies are introduced deliberately, with a clear understanding of the advantages they will bring – not just because a language is fashionable and everyone is talking about it.

If a problem arises that can only be solved with a new language, then of course we adopt it without question. And even when part of our systems is already written in something, if at some point we see that a new language would greatly improve the performance of those systems or simplify development in some way, we also allocate resources for experiments and potential adoption.

This is exactly what happened with Go. When we began actively developing our microservice architecture, we realized that Go suits many of these problems better than PHP, which we were used to writing in. Yes, it took effort for the whole team to switch (we wrote more about that here). But as a result, application speed increased dramatically, and the language proved convenient in other respects as well. Over the past two years we have written much more of it; in particular, Go has almost completely displaced Python from web development (though Python of course remains in Data Science and other places that work with large amounts of data – for such tasks it is the clear leader).

The 2018 radar shows us “trying” Kotlin; in 2020 we confidently give it “Adopt” status. More than two years ago we decided to move part of our Android development to Kotlin – the language looked very promising, and our mobile app was still young enough that we could afford the experiment. Now we are definitely glad we made that decision. Not only are all our Android apps written in this language, but so is part of the backend – that is still an experiment, but we have high hopes for it. Besides its other obvious advantages, Kotlin is a very versatile language, which means different parts of a system can be written in it and transferring expertise between teams becomes much simpler. It also becomes easier to find new specialists on the market.

TypeScript has also moved from “Trial” to “Adopt” since 2018. It greatly strengthens JavaScript; our entire, huge delivery service is written in it and works great, and we have no plans to change that yet.

Data management


Almost all our databases still run on PostgreSQL (we also use MongoDB, while MySQL has finally been moved to “Hold”).

But new technologies have appeared over the past two years as well. We are currently piloting the Greenplum DBMS for building our data platform. We have Oracle and Vertica in our arsenal – excellent databases – but we are looking for ways to reduce the cost of infrastructure ownership, so we are considering open-source solutions. Perhaps it will end up being an addition rather than a replacement – time will tell.

We decided to replace Tableau with Microsoft Power BI as our tool for building BI dashboards, and have already completed the migration. We wanted to give everyone access to dashboards, and Power BI comes out cheaper because you do not need to buy licenses just to view reports.

We also adopted Oracle APEX for building web interfaces to maintain manual reference data in the data warehouse. Previously this was done with XLS files loaded from a network drive. Oracle APEX let us build a convenient interface for business users, who can now update the data themselves in a friendly web application.

There are no changes on the radar here, but it is worth noting the ever deeper penetration of Apache Airflow into our data-platform processes. It is now the main orchestrator of data-processing tasks, having replaced Luigi.

An interesting thing happened with RabbitMQ. We adopted it at one point, but it did not fully satisfy us, and we even considered dropping it entirely. Then new specialists joined the administration team, and it turned out RabbitMQ is fine – we simply had not been using it right. After switching from FreeBSD to Linux, RabbitMQ sits confidently in “Adopt”.

Platforms and infrastructure


This is where the biggest changes have taken place. When we first chose deployment tools, we settled on the Nomad + Consul bundle. We had a lot of trust in HashiCorp, which makes them, and on the whole the solution suited us – until several critical outages occurred during hardware upgrades. Troubleshooting each one took a lot of time and resources, and the company suffered real losses. So we switched to the more popular Kubernetes.
Perhaps newer versions of Nomad are more stable, but after that story the scar remains, and we have no desire to find out. Besides, Kubernetes together with Docker suits us completely.
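For reference, the deployment model we switched to boils down to short declarative manifests. The sketch below is a generic, hypothetical example of a Kubernetes Deployment – the service name and image are placeholders, not real Lamoda artifacts:

```yaml
# Hypothetical example: a minimal Deployment for one Go microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # illustrative name
spec:
  replicas: 3                      # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: registry.example.com/example-service:1.0  # placeholder image
          ports:
            - containerPort: 8080
```

Kubernetes reconciles the cluster toward this declared state, which is precisely what makes hardware upgrades and node failures less dramatic than with our previous setup.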

One of the tools we are currently trying out is Zabbix. If it meets our expectations, it may partially or completely replace the Grafana we are using now.

Frameworks and tools


Perhaps it is in this segment that new “fashionable” technologies appear most often. And it is here that we see the tech radar and the Architectural Committee at work, restraining unjustified spending of resources on experiments in production (which would otherwise leave us either managing a motley system written in many frameworks or unifying it – both very laborious).


For example, we tried various JavaScript frameworks, but we experimented on tasks that were not business-critical. Most importantly, we remembered it was an experiment, and that in the end we wanted to choose a minimal set of tools that fit us. This policy meant that React, for example, was never used in production. Angular and other frameworks have moved to “Hold”, and on the frontend we mainly use Vue.js, which proved itself best in this “competition”.

Naturally, some frameworks leave along with the languages they were used with. That is what happened with Django when Go almost completely displaced Python from web development.

It suits us!

At the end of the 2018 article we said, “in a nutshell… we write in Go, PHP, Java, and JavaScript, keep our databases on PostgreSQL, and deploy on Docker and Kubernetes.”
That generalization still holds – the only definite newcomer to the list of major languages is Kotlin.

It turns out that the main tools and technologies we selected two years ago really do suit our tasks and our approach to development.

And thanks to the work of the Architectural Committee and, in particular, the upkeep of the tech radar, we know these technologies did not land in our stack by accident. They honestly beat their competitors on experimental projects, and they are used in production for good reason.

Of course, new tools will keep appearing on the market, some better suited to us than what we use today.
We expect that in our system of “natural selection” for technologies, such tools will travel from “Assess” to “Adopt” and, where necessary, replace the current ones.
