How to Configure a Web Application for High Loads

Hello, my name is Alexander Adadurov. I am a project manager at the Federal State Budgetary Institution “Center for Information and Technical Support”. In this article, I will describe our experience of preparing a website with educational content for peak loads of up to 15,000 requests per second, or up to several million users per day.

The educational content of the site consisted of illustrated HTML pages, video tutorials, and various interactive tasks, mostly written in JavaScript, which verified the user's answers by making requests to the backend. The site lived a quiet life and developed sluggishly until lockdowns were introduced due to the spread of COVID-19. The first months of quarantine significantly changed the application code, its architecture, and even the server infrastructure it ran on.
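
To give a sense of the request pattern, here is a minimal sketch of what such an answer-checking endpoint could look like in Symfony 3. The route, controller, entity, and field names are assumptions for illustration, not the project's actual code.

<?php
// Hypothetical answer-checking endpoint called by the interactive JS tasks.
// The AppBundle:Task entity and its getCorrectAnswer() method are assumptions.
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

class TaskCheckController extends Controller
{
    // Handles e.g. POST /task/{taskId}/check with an "answer" form field.
    public function checkAction(Request $request, $taskId)
    {
        $task = $this->getDoctrine()
            ->getRepository('AppBundle:Task')
            ->find($taskId);

        $isCorrect = $task !== null
            && $request->request->get('answer') === $task->getCorrectAnswer();

        // The frontend JS only needs a small JSON verdict to render the result.
        return new JsonResponse(['correct' => $isCorrect]);
    }
}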

Original architecture

The development team consisted of three to five people at different times, and the project was written over several years, during which views on the architecture, and on the concept as a whole, changed. Individual parts were rewritten, and the team itself changed. As a result, by the beginning of the pandemic, the project code was rather loose and had not always been checked for optimality. When the load increased, the code still contained classes, methods, and even entire bundles whose purpose the team did not fully understand.

The site was written in PHP on the Symfony 3 framework, without a clear separation between frontend and backend. Web interfaces were rendered with the Twig template engine, and jQuery was used primarily for interactivity. PostgreSQL 9.6 served as the DBMS, and some of the data was cached in Redis, a NoSQL key-value store, at the developers' initiative. The site had an API for uploading and multi-stage processing of new content, for which a queue system was built on two RabbitMQ brokers.
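
A typical use of Redis in such a setup is cache-aside for expensive page fragments. Below is a minimal sketch using the phpredis extension; the key scheme and the renderLessonPage() helper are assumptions, not the project's actual code.

<?php
// Cache-aside sketch with the phpredis extension. On a hit we skip the
// expensive Twig rendering and DB queries; on a miss we render the page
// and store the result with a short TTL so content stays reasonably fresh.
function getLessonHtml(\Redis $redis, $lessonId)
{
    $key = 'lesson:html:' . $lessonId;

    $cached = $redis->get($key);
    if ($cached !== false) {
        return $cached; // cache hit
    }

    $html = renderLessonPage($lessonId); // hypothetical expensive renderer

    $redis->setex($key, 300, $html); // cache for 5 minutes

    return $html;
}

Under high read load, a pattern like this moves most page traffic off PostgreSQL; the main design choice is the TTL, which trades content freshness against database load.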

The project was located on 16 physical servers: the frontend and backend nodes had 24 cores and 128 GB of RAM each, while the DBMS nodes had 56 cores and 512 GB of RAM. Each server had four 10-gigabit network interfaces, which provided an aggregated channel of 40 Gbit/s. The nodes had 2 TB hard drives with the OS installed, and the backend nodes additionally hosted the PHP/Symfony code. Shared resources such as images, videos, and downloadable files that were needed on all nodes were kept in a storage system and mounted on each node as NFS network shares.
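
For reference, mounting such a share on a node looks roughly like this; the storage host and export paths are made up for the example.

# Hypothetical NFS mount of the shared storage export on an application node
mount -t nfs -o vers=4,noatime storage01:/export/shared /mnt/shared

An equivalent entry in /etc/fstab would make the mount survive reboots.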

Initial application architecture

The original architecture already included some ideas for handling high loads.

For example, the project was divided into two segments based on the type of content processed: a “video service” and an “engine”.

The video service was located on a separate video subdomain. All video materials were uploaded to it, processed separately, and embedded into the content via