Kubernetes best practices. Mapping external services

Kubernetes best practices. Creating small containers
Kubernetes best practices. Organizing Kubernetes with Namespaces
Kubernetes best practices. Health checks with Readiness and Liveness probes
Kubernetes best practices. Setting resource requests and limits
Kubernetes best practices. Terminating with grace

If you are like most people, chances are you use services that live outside your cluster. For example, you might use the Twilio API to send text messages, or the Google Cloud Vision API to analyze images.

If you use the same endpoint (the address where the service receives requests) in all your environments and do not plan to migrate those servers to Kubernetes, it is perfectly fine to put the endpoint directly in your code. However, there are many other scenarios. In this installment of the Kubernetes Best Practices series, you'll learn how to use Kubernetes built-in mechanisms to discover services both inside and outside the cluster.

A common example of an external service is a database that runs outside the Kubernetes cluster. Unlike cloud databases such as Google Cloud Datastore or Google Cloud Spanner, which use a single endpoint for all access, most databases have separate endpoints for different environments.
Best practices for traditional databases such as MySQL and MongoDB usually have you connect to different hosts in different environments. You might have a large machine for production data and a smaller one for a test environment. Each has its own IP address or domain name, yet you probably don't want to change your code when moving from one environment to another. So, instead of hard-coding these addresses, you can use Kubernetes' built-in DNS-based service discovery for external services, just as you would for native Kubernetes services.

Suppose you run a MongoDB database on Google Compute Engine. Until you manage to move it into the cluster, you are stuck in this hybrid world.

Fortunately, you can use static Kubernetes services to make your life a little easier. In this example, I created a MongoDB server using Google Cloud Launcher. Since it was created in the same network (VPC) as the Kubernetes cluster, it can be reached via a high-performance internal IP address.

This is the default setup on Google Cloud, so there is nothing extra to configure. Now that we have an IP address, the first step is to create a service. Notice that this service has no pod selectors. In other words, we have created a service that does not know where to send its traffic. This lets us manually create an Endpoints object that will receive the traffic from this service.
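A minimal sketch of such a selector-less service might look like this (the mongo name and the standard MongoDB port 27017 follow this example; adjust them to your setup):

kind: Service
apiVersion: v1
metadata:
  name: mongo        # the DNS name apps will use inside the cluster
spec:
  ports:
    - port: 27017    # standard MongoDB port

Because there is no selector field, Kubernetes will not create an Endpoints object for this service automatically.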

The following example shows an Endpoints object that defines the database IP address, using the same mongo name as the service.
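A sketch of that Endpoints object (the IP address below is illustrative; substitute the internal IP of your database VM):

kind: Endpoints
apiVersion: v1
metadata:
  name: mongo          # must match the name of the service above
subsets:
  - addresses:
      - ip: 10.240.0.4 # illustrative internal IP of the MongoDB instance
    ports:
      - port: 27017    # port the database listens on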

Kubernetes will use the IP addresses listed in the Endpoints object as if they were regular Kubernetes pods, so you can now reach the database with the simple connection string mongodb://mongo. There is no need to use IP addresses in your code at all.

If the IP address changes in the future, you can simply update the Endpoints object with the new address, and your applications will not need any changes.
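For example, assuming the manifest is saved in a file named mongo-endpoints.yaml (a hypothetical name), editing the IP and re-applying it is enough:

kubectl apply -f mongo-endpoints.yaml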

If you use a database hosted by a third party, the provider most likely gave you a uniform resource identifier (URI) to connect to. If they gave you an IP address instead, you can simply use the method described above. In this example, I have two MongoDB databases hosted on mLab.

One is a development database and the other is a production database. mLab gives you a dynamic URI and a dynamic port for each, and as you can see below, they differ between the two.
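For illustration, mLab-style connection strings look roughly like this (the hostnames, ports, and credentials below are made up):

mongodb://<dbuser>:<dbpassword>@ds111111.mlab.com:11111/dev
mongodb://<dbuser>:<dbpassword>@ds222222.mlab.com:22222/prod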

To abstract this away with Kubernetes, let's connect to the dev database. You can create an ExternalName Kubernetes service, which gives you a static service that redirects traffic to the external service.
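A minimal sketch of such an ExternalName service, pointing at the hypothetical dev hostname from above:

kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ExternalName
  externalName: ds111111.mlab.com  # hypothetical external hostname of the dev database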

This service performs a simple CNAME redirect at the DNS level, which has minimal performance impact. Thanks to this, you can use a simpler connection string.
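With the ExternalName service in place, the connection string only needs the internal service name plus the (still hard-coded) external port, for example:

mongodb://<dbuser>:<dbpassword>@mongo:11111/dev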

However, because ExternalName uses CNAME redirection, it cannot remap ports. So this solution works only for services with static ports and cannot be used with dynamic ports. The mLab Free Tier assigns you a dynamic port number by default, and you cannot change it. This means you need different connection strings for dev and prod, and the bad news is that you have to hard-code the port number. So how do you get port remapping to work?

The first step is to get the IP address behind the URI. Running nslookup, dig, or ping against the hostname gives you the IP address of the database. If the service returns several IP addresses, all of them can be listed in the Endpoints object.
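For example, with the hypothetical hostname from above:

nslookup ds111111.mlab.com

The Address lines in the output are the IP addresses you can put into the Endpoints object.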

Keep in mind that the IP addresses behind a URI can change without notice, so using them in production is risky. With this IP address in hand, you can connect to the remote database without specifying the port: the Kubernetes service remaps the port transparently.
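A sketch of this pair of objects, again with a made-up IP and port: the service exposes the standard MongoDB port inside the cluster, while the Endpoints object points at the dynamic external port, so clients can omit the port entirely.

kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  ports:
    - port: 27017        # stable in-cluster port (MongoDB default, so clients can omit it)
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
  - addresses:
      - ip: 203.0.113.10 # illustrative IP resolved from the external hostname
    ports:
      - port: 11111      # hypothetical dynamic port the external database uses

Applications then connect with mongodb://<dbuser>:<dbpassword>@mongo/dev and never see the external IP or port.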

Mapping external services to internal ones gives you the flexibility to bring these services into the cluster in the future with minimal refactoring. It also makes them easier to manage and gives you insight into which external services your company actually uses.

To be continued very soon…

