What You Need to Know About Red Hat OpenShift Service Mesh

As organizations move to Kubernetes and Linux infrastructures in the course of digital transformation, applications are increasingly built on a microservice architecture and, as a result, often involve complex request-routing schemes between services.

With Red Hat OpenShift Service Mesh, we go beyond traditional routing and offer components for tracing and visualizing these requests, making service interaction easier and more reliable. Introducing a dedicated logical management layer, the so-called service mesh, helps simplify the connection, control, and day-to-day management of each individual application deployed on Red Hat OpenShift, the leading enterprise Kubernetes platform.

Red Hat OpenShift Service Mesh is delivered as a dedicated Kubernetes operator and can be tried out on Red Hat OpenShift 4.

Improved tracing, routing, and optimization of communications at the application and service level

With only hardware load balancers, specialized network appliances, and other solutions of that kind, which have become the norm in modern IT environments, it is very difficult, and sometimes impossible, to regulate and manage the service-to-service communications that arise between applications and their services. With the addition of a service mesh management layer, containerized applications can better monitor, route, and optimize their communications, with Kubernetes at the core of the platform. A service mesh helps simplify the management of hybrid workloads spanning multiple locations and gives more granular control over where data resides. With the release of OpenShift Service Mesh, we hope this important component of the microservice technology stack will expand organizations' ability to implement multi-cloud and hybrid strategies.
OpenShift Service Mesh is built on several open source projects, such as Istio, Kiali, and Jaeger, and makes it possible to program the communication logic of a microservice application architecture. As a result, development teams can concentrate fully on building the applications and services that solve business problems.
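As an illustration of what "programming the communication logic" looks like in practice, the sketch below uses Istio's standard VirtualService resource to split traffic between two versions of a service. The service name `reviews` and the subsets `v1`/`v2` are illustrative placeholders, not part of any OpenShift default:

```yaml
# Sketch: weighted traffic split declared with an Istio VirtualService.
# Service name and subsets are illustrative; subsets would be defined
# in a matching DestinationRule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90          # 90% of requests stay on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10          # 10% canary traffic to the new version
```

Because this logic lives in the mesh layer rather than in application code, the split can be adjusted or rolled back without redeploying either service.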

Making life easier for developers

As we have written before, prior to the service mesh, a large part of the work of managing complex interactions between services fell on the shoulders of application developers. They therefore need a whole range of tools for managing the application life cycle, from monitoring the results of code deployments to managing application traffic in production. For an application to work successfully, all of its services must interact correctly with each other. Tracing gives the developer the ability to track how each service interacts with the others and helps identify bottlenecks that introduce unnecessary latency in production.
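One practical detail worth noting: the Istio sidecar proxies can generate trace spans automatically, but to stitch spans from different services into one end-to-end trace, each service must forward the trace-context headers it received on to its outbound calls. The header list below follows Istio's distributed-tracing documentation (B3 propagation); the helper function itself is just a minimal sketch:

```python
# Sketch: forwarding the trace-context headers that Istio's sidecar
# proxies use to correlate spans across services (B3 propagation).
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
]

def extract_trace_headers(incoming_headers):
    """Copy the tracing headers from an incoming request so they can be
    attached to any outbound HTTP calls this service makes."""
    return {
        name: incoming_headers[name]
        for name in TRACE_HEADERS
        if name in incoming_headers
    }
```

A service would call `extract_trace_headers(request.headers)` and pass the result as the headers of every downstream request, so Jaeger can join the spans into a single trace.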

The ability to visualize the connections between all services and to see the interaction topology also helps make sense of the complex picture of inter-service communication. By combining these capabilities in OpenShift Service Mesh, Red Hat gives developers an expanded toolset for successfully building and deploying cloud-native microservices.

To simplify the creation of a service mesh, our solution makes it easy to add this management layer to an existing OpenShift instance using the corresponding Kubernetes operator. The operator takes on installation, network integration, and day-to-day management of all the necessary components, so you can immediately start using the newly created service mesh to deploy real applications.
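Concretely, driving the operator comes down to creating a couple of custom resources. The sketch below is based on the resource kinds used by the Maistra-based operator that underlies OpenShift Service Mesh; exact field names and API versions vary between operator releases, and the member namespace is a placeholder:

```yaml
# Minimal sketch of the custom resources the OpenShift Service Mesh
# operator consumes. Field names may differ by operator version;
# "my-app-namespace" is illustrative.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system      # control-plane components land here
spec:
  istio:
    tracing:
      enabled: true            # deploy Jaeger
    kiali:
      enabled: true            # deploy Kiali
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - my-app-namespace         # namespaces whose workloads join the mesh
```

Once these resources are applied, the operator installs and wires up the control plane, and the listed namespaces become part of the mesh.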

The reduced labor cost of implementing and managing the service mesh lets you quickly create and roll out application concepts without losing control of the situation as they grow. Why wait until managing inter-service communications becomes a real problem? OpenShift Service Mesh provides the necessary scalability even before you really need it.

The benefits that OpenShift Service Mesh brings to OpenShift users include:

  • Tracing and monitoring (Jaeger). Enabling a service mesh to improve manageability can come with some performance overhead, so OpenShift Service Mesh can measure a baseline performance level and then use that data for further optimization.
  • Visualization (Kiali). A visual representation of the service mesh helps you understand its topology and the overall picture of how services interact.
  • Service Mesh Kubernetes operator. Minimizes the administration required to manage applications by automating standard tasks such as installation, maintenance, and service lifecycle management. By adding business logic, you can simplify management further and speed up the delivery of new features to production. The OpenShift Service Mesh operator deploys Istio, Kiali, and Jaeger together, complete with configuration logic that enables all the required functionality at once.
  • Support for multiple network interfaces (Multus). OpenShift Service Mesh eliminates manual operations and lets developers run code in an enhanced security mode using SCCs (Security Context Constraints). In particular, it provides additional isolation of workloads in the cluster: for a given namespace, for example, you can specify which workloads may run as root and which may not. As a result, the Istio capabilities that developers demand can be combined with the well-defined security controls that cluster administrators need.
  • Integration with Red Hat 3scale API Management. For developers and IT operators who need tighter control over access to service APIs, OpenShift Service Mesh ships with the standard Red Hat 3scale Istio Mixer Adapter component, which, unlike the service mesh itself, controls inter-service communications at the API level.
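To bring a workload into the mesh once the operator is running, a developer opts the workload in to sidecar injection. In OpenShift Service Mesh, injection is enabled per workload via an annotation on the pod template (rather than globally per namespace, as in upstream Istio); the Deployment below is an illustrative sketch with placeholder names and image:

```yaml
# Sketch: opting a Deployment in to sidecar injection.
# Name, labels, and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
      annotations:
        sidecar.istio.io/inject: "true"   # request the Envoy sidecar
    spec:
      containers:
        - name: my-service
          image: quay.io/example/my-service:latest
          ports:
            - containerPort: 8080
```

With the annotation in place, the mesh injects an Envoy proxy alongside the application container, and the workload's traffic becomes visible to Jaeger and Kiali without any change to the application code.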

As for the further development of service mesh technologies: earlier this year, Red Hat announced its participation in the Service Mesh Interface (SMI) industry project, which aims to improve interoperability between service mesh technologies from different vendors. Collaboration in this project will help us give Red Hat OpenShift users a wider and more flexible choice, and bring closer the era when we can offer developers NoOps-class environments.

Try OpenShift

Service mesh technologies can greatly simplify running microservice stacks in a hybrid cloud, so we encourage anyone actively using Kubernetes and containers to try Red Hat OpenShift Service Mesh.
