Set a resource usage limit. With simple arithmetic you can work out how many copies of the application you can run: if one copy needs 1 GB of RAM and you have 10 GB of memory, you can run 10 copies. You do not even need to monitor this, because the core of the system will simply enforce the stipulated contract. This contract, or agreement between you and the system, is very important: once it is in place, all the tooling works much better. In this way we introduce discipline of execution into the system.
So the scheduler will launch the workload only if each of the replicas can get 1 GB of free memory; if there is not enough memory, the process will not start. I enter the kubectl create command, and once it completes the mysql container is created.
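That contract can be sketched as a resource request in the pod spec. This is a minimal illustration, not the exact manifest from the demo; the image tag and names are assumptions:

```yaml
# Sketch of a pod spec with a memory contract.
# Names and the image tag are illustrative, not from the talk.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    resources:
      requests:
        memory: "1Gi"   # scheduler only places the pod on a node with 1 GiB free
      limits:
        memory: "1Gi"   # the kernel enforces the contract beyond this point
```

With the request in place, the scheduler refuses to place the pod at all rather than letting it start on a node that cannot honor the agreement.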
It can be NFS, iSCSI or any other protocol that provides network block-level access to storage devices. I do this to decouple my storage from the machines: if one machine fails, I can recreate the process on another machine using the same data store. If you instead mount storage from the host that fails, you simply lose your data and have to restore everything from backup.
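Decoupling storage from the machine can be sketched as a PersistentVolume backed by a network share. The server address, path and sizes below are placeholders, not values from the demo:

```yaml
# Sketch: a PersistentVolume backed by NFS, so the data lives off the node.
# Server and path are hypothetical placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com   # any network storage backend works here
    path: /exports/mysql
```

Because the volume is defined independently of any node, a pod rescheduled onto another machine can mount the same data again.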
Therefore, our goal is for storage to get faster as our networks get faster. This is not about migration, but about the ability to quickly mount and unmount storage outside the machine, and that is quite possible. Let's see how our pod is doing – it is still being created, and now I want to create a service for it so that other applications can find ours.
As soon as I create this service, Kubernetes generates a DNS record, so you can just call mysql and automatically discover that this container is running. Let's continue and enter the command $ kubectl create -f services/mysql.yaml. As you can see, the container is still being created. By the way, you can watch this demo video on my website. You can see what the service for the mysql application looks like – it contains the cluster IP address, external IP addresses, port numbers and network protocols.
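The service being created might look roughly like this – a hedged sketch, since the talk does not show the file contents; the selector label and port are typical defaults, not quoted values:

```yaml
# Sketch of the mysql Service. Creating it gives other apps a stable
# DNS name ("mysql") instead of a pod IP. Label and port are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql        # routes to any pod carrying this label
  ports:
  - protocol: TCP
    port: 3306        # standard MySQL port
    targetPort: 3306
```

Any application in the cluster can now reach the database at the name mysql, regardless of which node the pod lands on.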
Let’s see what happens with this particular container. As you can see, it works.
So, at the moment I believe the mysql application really works. The next thing we need is a web application, so let's deploy one called "lobsters". I took it from GitHub; it is a clone of Hacker News, a Ruby on Rails project, and I simply built a container from the code and the basic configuration given there.
If you're not up to date: Hacker News will make you super popular at any hacker conference. Just read what is written there and you can discuss all the popular topics from the world of computing. So if you want to impress others, read the news on this portal.
So, I want to create a clone of this thing and put it on the Internet to make money. Of course, this is not a real business project, but just a demonstration of opportunities.
I am currently deploying an application called Lobsters. I get the database URL from my secret, using the $ kubectl get secrets command. The secret also holds a username and password.
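A secret like this could be defined as follows. This is only a sketch: the key names are assumptions, and real values must be base64-encoded (the URL below is left as a placeholder rather than invented):

```yaml
# Sketch of a Secret holding the database credentials.
# Key names are hypothetical; values are base64-encoded
# ("lobsters" and "password" shown, the URL left as a placeholder).
apiVersion: v1
kind: Secret
metadata:
  name: lobsters
type: Opaque
data:
  database-url: <base64-encoded database URL>
  username: bG9ic3RlcnM=     # "lobsters"
  password: cGFzc3dvcmQ=     # "password" (placeholder only)
```

Keeping credentials in a Secret means the application manifest never contains them in plain text.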
Next, I want to create the container that will run my application, using the command $ kubectl create -f deployments/lobsters.yaml. As you can see, the application is running.
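The deployment might be sketched like this; the env var name, label, port and image tag are assumptions (the talk only quotes the image name later, with a different tag):

```yaml
# Sketch of the lobsters Deployment, wiring in the database URL
# from the secret. Names, tag and port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lobsters
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lobsters
  template:
    metadata:
      labels:
        app: lobsters
    spec:
      containers:
      - name: lobsters
        image: kelseyhightower/lobsters:1.0.0
        env:
        - name: DATABASE_URL          # Rails reads this at boot
          valueFrom:
            secretKeyRef:
              name: lobsters
              key: database-url
        ports:
        - containerPort: 3000
```

The container never sees the secret file on disk; the credential is injected at start time, so the same image works in any environment.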
In addition, we have an IP address. I enter the $ kubectl get svc command and use the global load balancer, which points to the page with the external IP address 126.96.36.199.
Let's go to the browser and try this address over HTTP. Aha, a pending-migration error! This is Ruby on Rails, so I was expecting something like that.
So, we need a database migration. We need to run this process once, and that's it. But we want to do it the same way as everything else – no logging in to servers, no special machines such as a jump box. We want to go to the scheduler and say: "Hey, run this task once, and when it finishes, just kill the process!" That is, I want to run a single command and exit. So, to handle this batch work, I look at the manifest with $ cat jobs/lobsters-db-schema-load.yaml and create a Job object that implements this scheme.
The rake command with the db:schema:load argument points at the code from GitHub and says: "take the image kelseyhightower/lobsters:2.0.0 and run this command once". The restartPolicy: Never line at the end of the manifest tells Kubernetes that it should run this exactly once and never repeat it. I also set CPU and memory limits, that is, I describe the kind of machine on which this can be scheduled and executed, after which the database schema load will be complete. So I "put on the rails" every Job object that should run on the system with the command $ kubectl create -f jobs/lobsters-db-schema-load.yaml.
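Putting those pieces together, the Job might look like this – a sketch consistent with the image tag and restart policy quoted above; the resource values and the exact command form are assumptions:

```yaml
# Sketch of the schema-load Job: run `rake db:schema:load` once and stop.
# Resource values are illustrative, not quoted from the talk.
apiVersion: batch/v1
kind: Job
metadata:
  name: lobsters-db-schema-load
spec:
  template:
    spec:
      containers:
      - name: lobsters
        image: kelseyhightower/lobsters:2.0.0
        command: ["rake", "db:schema:load"]
        resources:
          limits:
            cpu: "500m"       # describes a machine this can run on
            memory: "512Mi"
      restartPolicy: Never    # run exactly once; never restart on completion
```

The scheduler finds a machine that satisfies the limits, runs the task to completion, and the pod is never restarted – no SSH, no jump box.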
You can see that the corresponding job is created, after which I type the command $ watch kubectl get jobs.
So the container was pulled to a machine, the scheduler did its work, and the rake task ran. Let's go back and refresh the page with the database error. As you can see, our schema has now been loaded successfully.
Next I need seed data so I can log in. I use the command $ kubectl create -f jobs/lobsters-db-seed.yaml. You can see the scheduler is still pulling the container, and after a few seconds the job completes – it uses the same pattern as the migration we just ran. I log in to the page, and all that remains is to get some content. Content is necessary if we want to raise some money. This is what growth hacking looks like: you go to someone else's site, grab the content from there, and post it on your own site, which looks similar to the original.
But we need not just content, but good content. It would be a shame to let things drift, so I borrow some news manually. You could copy the content automatically, but that would not be legal. So I pick a story, copy the link address, set the "test" tag, check the box "I am the author of the story located at this URL" and press the Submit button. See how great the stolen news looks!
Now it's time to scale the application. To do this, you just need to change the definition of what we are running – instead of 1 replica, go to, say, 10 – and then run the same command again.
Kubernetes accepts this information, acts on it, and now we have 10 copies of the Lobsters application running in our pods. Moreover, each of them is automatically added to the load balancer thanks to Services.
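The change itself is a single line in the deployment spec – a sketch against a hypothetical lobsters Deployment, shown as a fragment rather than a complete manifest:

```yaml
# Sketch: bump the replica count and re-apply the manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lobsters
spec:
  replicas: 10   # was 1; the scheduler places nine more copies,
                 # and the Service adds each one to the load balancer
```

Equivalently, a command like kubectl scale can change the count without editing the file; either way, you declare the desired number and the system converges on it.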
Let's see what happens on the backend. I use the $ kubectl get svc command to get a short status, then ask for details with $ kubectl describe svc. Kubernetes automatically discovers all of our endpoints and places them behind the load balancer.
At the same time, everything that is useless is deleted, and everything that is needed is automatically added. We do not need to create this thing again and again, everything is fully integrated into the platform.
The next important question is how to update and how to get logs. If you remember, I took away your SSH access, so you need to centralize logs with something like Logstash or the built-in Google Cloud logging. But what if you just want to look at logs ad hoc, given that you have no access to the machines? You can use the API to fetch logs by container name. To do this, enter the command $ kubectl logs lobsters-240734871-03rmn -f, where 03rmn identifies a specific replica of the lobsters-240734871 deployment. This way you can view the log of each replica's container and troubleshoot when necessary.
Let's check our pods with the $ kubectl get pods command – as you can see, everything works. The next important thing to do is hire a marketer. He looks at the page and says: "Do what you want, but remove these white spots from the site!"
All you need to do is add a new container alongside the existing one and adjust the CSS for whatever we want to promote. Let me remind you that we are not talking about nodes – nodes do not matter to us, because the system itself will provide exactly what we want.
To be continued very soon …