Let's build a large application in Flask (Python)

Example of displaying search results on a website using Elasticsearch

STEP 12. Offload blocking operations (sending email)

What is one of the worst things from a UX perspective? I would answer this way: a site that takes a long time to process requests. The user waits, then closes the tab and never visits your site again, convinced that the site is “buggy”.

Our code may contain long-running operations that force the user to wait. One such operation is sending an email. We need to offload this blocking operation. How? Here it's worth referring to the official Flask documentation, which in such cases recommends the Celery task queue together with Redis, a NoSQL in-memory store that acts as the message broker. We will use both in our project.
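
A minimal sketch of such offloading, assuming Flask-Mail handles delivery; the task name send_async_email and the broker URL are illustrative, not the project's actual code:

from celery import Celery
from flask import Flask
from flask_mail import Mail, Message

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
mail = Mail(app)

celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])


@celery.task
def send_async_email(subject, recipient, body):
    # Runs in a Celery worker process; the web request returns immediately.
    with app.app_context():
        mail.send(Message(subject=subject, recipients=[recipient], body=body))

In a view, the call becomes send_async_email.delay(...) instead of a direct mail.send(...), so the request no longer blocks on the SMTP server.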

STEP 13. Test the web application

I don't remember exactly which book I came across this in, but I remember one phrase: “If the code is not tested, it is not working.” Python has a standard testing library called unittest. In practice, you will also often see the third-party library pytest.

Tip 13: unittest is useful for simple cases and for those who prefer a more traditional approach to testing. For more complex projects, pytest is more flexible and powerful.

I used the pytest library because of its fixture capabilities.

First, we need to create a folder called “tests” in the root of the project and make it a package, i.e. place an __init__.py file there. Next, we create the conftest.py file. This is a special pytest file used to configure tests and their environment: it lets you define fixtures and configuration available to all tests in the directory and its subdirectories.

Well, then we create the test files themselves, with names starting with “test_” (e.g. test_what_we_test.py).
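
Here is a minimal sketch of a conftest.py, assuming the project has an application factory create_app (the import path and the factory name are illustrative):

import pytest

from app import create_app


@pytest.fixture()
def app():
    # A fresh application instance in testing mode for every test.
    app = create_app()
    app.config.update(TESTING=True)
    yield app


@pytest.fixture()
def client(app):
    # Flask's built-in test client makes requests without running a server.
    return app.test_client()

A test can then simply accept client as an argument and call client.get('/') without any setup code of its own.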

The project's autotests are located here

STEP 14. Implement page caching

A user may visit the same resource several times. The first time, the data for this resource is fetched from the database; on all subsequent visits it comes from a special storage where this data is kept. Such stored data is called a cache, and writing this data to local storage is called caching.

Flask has extensions for all occasions. So we will use the Flask-Caching library.

There are several tools for caching. For example, in-memory caching is a strategy where data is temporarily stored in the application process's RAM for fast access. The problem is that this memory is limited and not shared between processes, and as the amount of data grows, such caching becomes ineffective. Using Redis as a dedicated cache server solves this problem.

Unlike a purely in-memory cache, Redis can persist data to disk, which allows the cache state to be restored even after an application restart or crash. Moreover, Redis is designed for high performance and can handle millions of requests per second, ensuring fast access to cached data.

Our project will use Redis for caching. The Flask-Caching extension provides a very convenient interface for managing Redis.
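
A minimal sketch of wiring Flask-Caching to Redis; the Redis URL, the timeout, and the render_page_with_heavy_query helper are illustrative:

from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
app.config['CACHE_TYPE'] = 'RedisCache'
app.config['CACHE_REDIS_URL'] = 'redis://localhost:6379/1'
cache = Cache(app)


@app.route('/popular')
@cache.cached(timeout=300)  # the view body runs only on a cache miss
def popular():
    # An expensive database query would live here; for the next 5 minutes
    # the rendered response is served straight from Redis.
    return render_page_with_heavy_query()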

STEP 15. Implement logging

In production, debugging is disabled. But applications still sometimes crash, and we need to know why. That is why we create a special log file where all events occurring in the application are recorded. The process of recording events in this file is called logging. By the way, logging can also be enabled during development, with logs output to the console.

Tip 14: When implementing logging, always limit the log file size (e.g. 100 MB).

If you do not limit the log file, it can grow to enormous proportions. It is also necessary to define the logging level, i.e. the minimum priority of messages that will be written to the file.
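
A minimal sketch using the standard library's rotating handler, assuming a Flask app object already exists; the file name, size limit, and format are illustrative:

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('app.log', maxBytes=100 * 1024 * 1024,
                              backupCount=5)  # rotate at 100 MB, keep 5 old files
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'))
handler.setLevel(logging.INFO)  # only INFO and above reach the file

app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)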

The logging organization can be viewed here

STEP 16. Connect a synchronous WSGI server running on several logical CPU cores

WSGI (Web Server Gateway Interface) is a standard interface between web servers and Python applications or frameworks. It allows developers to create web applications that can work with any WSGI-compliant server, providing flexibility and compatibility.

The most popular WSGI server is Gunicorn. In the modern world, almost all processors have several physical cores, and thanks to hyper-threading each physical core typically provides two logical cores. We can use these cores to process requests in parallel, and we are talking about true parallelism: each worker process handles its own requests independently of the others. This significantly increases the throughput of the application, especially under a large number of simultaneous requests.

We can look at the hosting site to see how many cores our virtual machine has. But it is better to automate this process so that the number of cores is determined automatically.

To do this, you need to implement the following gunicorn.conf.py script:

import multiprocessing
import socket
import fcntl
import struct


def get_ip_address(ifname):
    """Return the IPv4 address of a network interface (Linux only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # 0x8915 is SIOCGIFADDR: ask the kernel for the interface address.
        return socket.inet_ntoa(fcntl.ioctl(
            s.fileno(),
            0x8915,
            struct.pack('256s', bytes(ifname[:15], 'utf-8'))
        )[20:24])
    finally:
        s.close()


ip_address = get_ip_address('eth0')

bind = f'{ip_address}:5000'
# The commonly recommended formula: two workers per core, plus one.
workers = multiprocessing.cpu_count() * 2 + 1

timeout = 2  # seconds a worker may stay silent before being restarted
preload_app = True  # the config-file setting is preload_app (--preload on the CLI)
loglevel = 'info'

And launch Gunicorn with it:

gunicorn -c ./gunicorn.conf.py manage:app

STEP 17. Connect the HTTP server and reverse proxy server

We are at the finish line. Now we need to make our Python application accessible at a specific URL over HTTPS. To do this, we will set up our own nginx server with our own configuration.

Tip 15: Don't use the default nginx configuration. There are plenty of guides on the Internet on how to improve the configuration so that nginx works much faster and more securely.
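
As a starting point, here is a minimal sketch of a reverse-proxy server block; the domain name, certificate paths, and the upstream address app:5000 are illustrative:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Pass requests to Gunicorn and preserve the original client details.
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}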

STEP 18. Let's create a docker compose file to combine all components into a single whole

Now we need to combine all the components of our application into a single whole, connected by a single network. Docker is a system for running applications in containers. The main advantage of this approach is that whether you work on Windows, macOS, or Ubuntu, the result of running Docker containers will be the same everywhere.

Particular attention should be paid to mounted volumes. Think of them as persistent disks attached to your application: you can change the DBMS or recreate images and containers, but the information in the volumes remains untouched. That's why we create volumes for storing images and the database, and additionally for Elasticsearch and Redis.

Tip 16: Keep your data safe by using volumes mounted into containers and by making backups.
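
A minimal sketch of a docker-compose.yml for this stack; the image tags and service names are illustrative, and the database service is omitted for brevity:

services:
  app:
    build: .
    command: gunicorn -c ./gunicorn.conf.py manage:app
    depends_on: [redis, elasticsearch]
  redis:
    image: redis:7
    volumes:
      - redis_data:/data
  elasticsearch:
    image: elasticsearch:8.12.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - es_data:/usr/share/elasticsearch/data
  nginx:
    image: nginx:latest
    ports:
      - "443:443"
    depends_on: [app]

volumes:
  redis_data:
  es_data: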

STEP 19. Deploy the web application to a remote server

It's time to rent a virtual machine. When you register with any hosting provider, in addition to the virtual machine you are offered an SSL certificate and a domain name. And, as a rule, you get a bonus: the certificate and the name are free for six months.

An SSL (Secure Sockets Layer) certificate is a digital certificate that is used to ensure a secure connection between a web server and a browser. It encrypts data transmitted between the client and the server, protecting it from interception and unauthorized access.

Tip 17: If your site has authentication, always obtain and connect an SSL certificate.

An SSL certificate allows you to transfer data via the secure HTTPS protocol, which is a necessary requirement for almost any modern website.

Next, you need to install Git and Docker on the server.

The domain name specified in the certificate must be bound to the IP address of the virtual machine. This can also be done on the hosting.

Tip 18: If the hosting offers a virtual machine with Git and Docker already installed, take it. Everything will already be set up there, and you won't have to configure anything additionally.

Next, you need to set up login to the remote server via SSH; how to do this should be described in detail in the hosting documentation. After that, we generate an SSH key pair on the remote server, public and private, and add the public key to GitHub so that we can pull repositories from it.

Next, we clone our repository from GitHub to the remote server and run docker compose. From this point on, we can enter our domain name and open the site from any browser.
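
On the server, this boils down to a few commands (the repository URL is a placeholder):

git clone git@github.com:your-name/your-project.git
cd your-project
docker compose up -d --build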

STEP 20. Let's connect our project repository to the remote server via CI/CD

Everything works, so what else do we need? What if we want to add new features to the project and do NOT want to manually re-upload everything to the remote server? This problem is solved by GitHub Actions, which implements CI/CD.

CI (Continuous Integration) is a practice where developers regularly merge their changes into a central repository. Each commit triggers automated testing, allowing for quick bug detection and fixing.

CD (Continuous Deployment / Continuous Delivery) – full automation, where every change that passes tests is automatically deployed to the production environment without manual intervention.
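
A minimal sketch of such a workflow in .github/workflows/deploy.yml, using a third-party SSH action; the secret names and the project path on the server are illustrative:

name: deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/your-project
            git pull
            docker compose up -d --build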

Tip 19: At later stages of development, you can create a stub project for trying out nginx and CI/CD, and work out all the integrations and deployments on it first.

That's it: now every change pushed to GitHub will automatically be reflected on the remote server.

Conclusion

Finally, I will give you Tip 20: in my opinion, deep study of backend technologies has a higher priority than frontend. If you understand how the backend works and deeply learn a backend programming language, then mastering the frontend will be fairly easy.

The project is open source and fully available here.

If you have any questions, I will be glad to answer them. Thank you for your attention.
