Cloud Service Design Patterns

When designing applications on top of cloud services, it is important to choose the right architecture for the solution. You can, of course, reinvent the wheel and come up with an architecture of your own, but you are unlikely to invent anything genuinely new, and you risk spending a great deal of time and resources. Most likely, someone has already solved the problem you are facing, and you can safely reuse that solution.

There are several dozen design patterns in total, and this article certainly will not cover all of them. We will look at three of the most common patterns you will need when working with cloud environments and microservices: what they are, and what advantages and disadvantages each one has.

What is the problem?

Interactions between the different components of a cloud application can be quite complex. It is also important to understand that building resilient cloud applications requires functionality that goes beyond the application's core business logic: collecting metrics, monitoring, and much more. Integrating this functionality into the code base is often difficult. A typical case is when a language, library, or framework is, for one reason or another, incompatible with the main code base, or the development team simply lacks the necessary expertise.

Network interactions deserve a separate mention: they require significant effort to set up connections, authentication, and authorization. If network calls are made from multiple applications built in different languages and on different platforms, the calls have to be configured separately for each instance. Sometimes it is better to delegate management of networking and security across the organization to a centralized team. And when the code base of a cloud application is large enough that developers are forced to change unfamiliar areas of code, the risk of errors increases.

To avoid these problems, you can use the design patterns described below. Let's start with the Ambassador pattern.

Ambassador pattern

One possible solution to the problems described above is the Ambassador pattern, which moves client frameworks and libraries into an external process that acts as a proxy between the application and the external services it interacts with. The pattern suggests deploying this proxy (or a group of them) in the same environment as the main application, which gives full control over routing, resiliency, and security features and avoids access-restriction issues. At the same time, this pattern helps standardize and extend the set of available tools.

It is also worth noting that such a proxy lets you observe performance metrics, such as latency and resource usage, directly in the environment where the application is hosted.
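To make this more concrete, here is a minimal sketch of an ambassador in Python, built only on the standard library: a small local HTTP proxy that forwards requests to an upstream service while adding a timeout, a single retry, and latency logging. The application calls http://127.0.0.1:9000/... instead of the remote host; the upstream address, port, and retry policy are assumptions made purely for illustration.

import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REMOTE_BASE = "http://orders.internal:8080"  # assumed upstream service, illustrative only
TIMEOUT_S = 2.0
RETRIES = 1

class AmbassadorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = REMOTE_BASE + self.path
        started = time.monotonic()
        for attempt in range(RETRIES + 1):
            try:
                with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.end_headers()
                    self.wfile.write(body)
                    break
            except urllib.error.HTTPError as e:
                self.send_response(e.code)   # pass upstream HTTP errors through unchanged
                self.end_headers()
                break
            except urllib.error.URLError:
                if attempt == RETRIES:       # connection-level failure after all retries
                    self.send_response(502)
                    self.end_headers()
        # Latency is measured in the ambassador, not in the application code.
        print(f"GET {self.path} took {time.monotonic() - started:.3f}s")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9000), AmbassadorHandler).serve_forever()

The important point is that the timeout, retry, and metrics logic lives outside the application process, so it can be changed and redeployed without touching the application code.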

Below is a general diagram of how an application works according to the Ambassador pattern.

As the diagram shows, the code we place in the ambassador is much easier to modify without interfering with existing application components. Functions in the ambassador can also be managed independently of the application.

It is worth noting that if the ambassador is needed by several separate services on the same host, it can be deployed as a Windows service or, if the cloud environment uses containerization, as part of a pod in k8s.

But, as everywhere, this pattern comes with a number of difficulties. Introducing an intermediate node adds extra latency, and in some cases it may be more rational to use a client library that the application calls directly.

Separately, you need to consider how the set of responsibilities placed in the ambassador affects the overall behavior of the application. For example, retries issued by the ambassador are not always safe to perform. When working over HTTP, the server can use a special response header to forbid immediate retries or limit how soon they may happen: in this header the server returns either a date and time after which the request can be reissued, or an integer number of seconds that must pass before the request is retried:

Retry-After: <http-date>

Retry-After: <delay-seconds>
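For illustration, a caller that honors this header might look like the following Python sketch; the retryable status codes (429 and 503) and the fallback delay of one second are assumptions made for the example, not requirements of the pattern.

import time
import urllib.error
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def fetch_with_retry(url: str, max_attempts: int = 3) -> bytes:
    """GET the URL, waiting out Retry-After when the server throttles us."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code not in (429, 503) or attempt == max_attempts - 1:
                raise
            header = e.headers.get("Retry-After", "1")
            if header.isdigit():
                delay = float(header)                     # delay-seconds form
            else:
                retry_at = parsedate_to_datetime(header)  # http-date form
                delay = max(0.0, (retry_at - datetime.now(timezone.utc)).total_seconds())
            time.sleep(delay)
    raise RuntimeError("unreachable")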

You also need to think through the process of deploying the ambassador. Since this node sits directly in the request path, you must make sure it does not create availability problems for the application components as a whole.

Additionally, when planning the architecture, you need to choose the number of ambassador instances: one shared by all clients, or one per client.

Thus, the Ambassador pattern is best suited when you need a common set of client connectivity features for services written in different languages or on different platforms, or when you need to provide cloud or cluster connectivity for an application that is legacy or for other reasons cannot be modified.

But if the throughput of the network channel is critical for you and extra latency is unacceptable, it is better not to use this pattern: it adds a delay, albeit a small one, to every call the application makes.

In addition, the Ambassador is unlikely to be appropriate if all client connectivity features are implemented in a single language. In that case it makes more sense to create a client library and distribute it to the development teams as a package.

As an alternative to the Ambassador, you can consider the Sidecar pattern, which is discussed next.

Sidecar pattern

Where the Ambassador places components between the application code and the services it calls, the Sidecar pattern proposes moving part of the application's own functionality into a separate process. When these functions are built into the application, they run in the same process as the application, which makes efficient use of shared resources. However, it also means they are not well isolated from the rest of the application: a failure in one of these components can affect other components or the whole application. In addition, they usually have to be implemented in the same language as the parent application. As a result, the component and the application are tightly coupled.

If an application is split into services, each service can be built with different languages and technologies. This certainly gives additional flexibility, but it brings a number of other problems: each component has its own dependencies and needs language-specific libraries to access the underlying platform and the resources of the parent application.

A solution to this problem is to isolate such a set of tasks in its own process or container deployed alongside the main application, and to use a uniform interface for interaction with the other application components. This is the Sidecar pattern.

This approach has become widespread with containerization, where auxiliary containers are used for monitoring tasks. In a cloud environment we can use the same approach by deploying a separate sidecar that implements logging and monitoring functions.

The main advantage of a sidecar is that it does not depend on the host application's runtime and programming language, so a separate sidecar does not need to be developed for each language. We can write the sidecar in a different language, using other frameworks and runtimes, and it will interact with the main application through language-neutral interfaces, for example HTTP APIs exchanging JSON.
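As a small illustration, a sidecar could be a separate Python process that periodically polls the main application over HTTP on localhost and emits the result as structured log lines. The port, the /health path, and the polling interval below are assumptions made up for the example.

import json
import time
import urllib.error
import urllib.request

APP_URL = "http://127.0.0.1:8080/health"  # main application on the same host or pod (assumed endpoint)
INTERVAL_S = 10

def poll_once() -> dict:
    # Ask the main application how it is doing over a plain HTTP call.
    try:
        with urllib.request.urlopen(APP_URL, timeout=2) as resp:
            return {"status": resp.status, "body": resp.read().decode("utf-8", "replace")}
    except urllib.error.URLError as exc:
        return {"status": None, "error": str(exc)}

if __name__ == "__main__":
    while True:
        record = {"ts": time.time(), "check": poll_once()}
        # In a real deployment this line would be shipped to a log or metrics backend.
        print(json.dumps(record), flush=True)
        time.sleep(INTERVAL_S)

Because the sidecar is a separate process, the same code could be reused next to an application written in Java or Go without any changes.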

A sidecar can also monitor the system resources used both by the main application and by the sidecar itself. Unlike the Ambassador, it does not sit in the path of the application's outbound calls: it stands beside the application rather than in front of it, and communication between the two stays on the same host, so the latency it adds to the main application is minimal.

Even for applications that provide no extension mechanism, you can use a sidecar to extend their functionality by attaching it as a separate process on the same node, or in the same container group, as the main application.

Sidecar is well suited for applications that use multiple languages and platforms. It is also convenient when the component is maintained by a remote team or a different organization.

A sidecar also makes sense when you need a component that shares a node and a lifecycle with the main application but must be updated and deployed independently of it.

Sidecar is also useful when we need to control the resources consumed by an individual component: the component can be deployed as a sidecar and manage its memory usage independently of the main application.

But the Sidecar pattern does not suit every application. If latency in data exchange between application components is critical, then a sidecar, like the Ambassador, introduces extra inter-process communication, and even when that communication is optimized, the added overhead may hurt the performance of the system as a whole.

Also, if the application is small, the cost of deploying a separate sidecar may outweigh the benefits the pattern provides.

Competing Consumers pattern

The drawbacks of the previous patterns, to one degree or another, come down to possible performance problems when processing a large volume of messages. The Competing Consumers pattern allows multiple concurrent consumers to process messages received on the same messaging channel. With this pattern, we can build a system that processes many messages at the same time, which optimizes throughput, improves scalability and availability, and balances load.

Processing every request synchronously invariably leads to slow message processing, especially when the volume of messages grows significantly. A common approach is for the application to hand requests, as messages, over a messaging system to another service (the consumer), which processes them asynchronously. This ensures that the application's business logic does not block while requests are being processed.

While the application is running, there may be a sudden surge in user activity, or aggregated requests from other components may produce a large number of requests. During peak hours the system may receive a great many requests, while at other times there may be very few. For the application to keep working correctly, we may therefore need to scale the consumer services.

In other words, the system must be able to run several consumer instances. These consumers, however, must be coordinated so that each message is delivered to only one of them.

We can use a message queue as an intermediary between the application and the consumer service instances. The queue decouples producers from consumers and absorbs bursts of load. The application sends requests to the queue as messages, and consumer instances take messages from the queue and process them. This approach allows the same pool of consumer instances to process messages from any application instance.

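As a minimal sketch of this scheme, the following Python example uses a standard-library queue, with several consumer threads standing in for separate consumer instances; a real deployment would put a message broker or a cloud queue service (for example RabbitMQ or SQS) between the producers and the consumers.

import queue
import threading
import time

task_queue: "queue.Queue[str]" = queue.Queue()
NUM_CONSUMERS = 3

def consumer(worker_id: int) -> None:
    while True:
        message = task_queue.get()   # each message is delivered to exactly one consumer
        try:
            time.sleep(0.1)          # simulate the actual processing work
            print(f"consumer {worker_id} processed {message}")
        finally:
            task_queue.task_done()

for i in range(NUM_CONSUMERS):
    threading.Thread(target=consumer, args=(i,), daemon=True).start()

# Producer side: the application only enqueues messages and moves on.
for n in range(10):
    task_queue.put(f"order-{n}")

task_queue.join()                    # block until every message has been handled

Scaling the system then amounts to changing the number of consumers, without touching the producer code.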

If the application's workload divides into tasks that can be executed asynchronously, the Competing Consumers pattern is a great fit: we do not need to wait for some tasks to complete before starting others, which saves significant time. And in situations where the amount of work varies greatly with the flow of messages, the pattern lets you scale the system's processing capacity.

However, if the application architecture does not allow the workload to be divided into separate subtasks, or the tasks are so interdependent that splitting them is extremely difficult, this pattern will not suit us. It is also hard to apply when tasks must be executed synchronously and the application logic has to wait for a task to complete before continuing, which is exactly the synchronous execution discussed earlier. And if processing messages in an arbitrary order is unacceptable, this pattern is also not a fit.

Conclusion

Today we looked at three common design patterns. Each has advantages that make it suitable in particular situations, so we can choose the right pattern depending on our requirements for resource consumption, latency, and so on.

You can learn how to apply design patterns and SOLID in practice in an online course taught by industry experts. The new cohort starts on June 26.
