Launching k8s v0.1 from 2014 and announcing the challenge
Hi! I'm Alexander Khrennikov, the head of the DevOps unit at KTS. The first commit to the kubernetes repository was made 10 years ago, on June 6, 2014. During this time, kubernetes has come a long way and has become the most popular container orchestration tool.
I suggest you take a look at what it was like at that time and try to run the application in it yourself.
We also invite you to take part in a challenge: launching Kubernetes from the very first commit. This is a continuation of our joint challenge with Yandex Cloud at KuberConf/24, where we launched an application without errors on cloud infrastructure.
If you don't want to assemble the components from scratch but want to launch them right away, take part in the Kube01 Challenge and run k8s v0.1 on Yandex Cloud infrastructure. Participate and win Kotzilla merch via the link.
The code of the first commit is still available in the kubernetes repository.
How to build kubernetes v0.1
To run it, we will need a t̶i̶m̶e̶ machine running Ubuntu 14.04. This OS version is needed for all the dependencies required to build kubernetes.
We will install the necessary packages on the machine:
apt update
apt install git iptables curl apache2-utils
Let's get the Go compiler of the required version:
wget https://dl.google.com/go/go1.2.2.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.2.2.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
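The exports above only live in the current shell. To make the Go environment survive a re-login, they can be persisted (a small convenience step, not part of the original instructions):

```shell
# Persist the Go environment for future shells (same paths as above)
cat >> ~/.profile <<'EOF'
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
EOF
```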
Now clone the kubernetes repository and check out the first commit:
cd /root
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git log --reverse    # the very first commit in the history
git checkout 2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56
And finally build kubernetes:
cd /root/kubernetes/src/scripts
./build-go.sh
cd /root/kubernetes/target   # the built binaries land here
etcd
etcd has been used to power kubernetes since the beginning, so it's worth installing:
wget https://github.com/etcd-io/etcd/releases/download/v2.3.8/etcd-v2.3.8-linux-amd64.tar.gz
tar xzvf etcd-v2.3.8-linux-amd64.tar.gz
cd etcd-v2.3.8-linux-amd64
cp etcd /usr/local/bin/
cp etcdctl /usr/local/bin/
Docker build
Naturally, to run kubernetes, you need to use the same version of docker that was available at that time. Let's get it from the archive.
git clone https://github.com/docker-archive/engine.git
cd engine
git checkout v1.0.0
Now you will have to suffer a little and install all the dependencies required for the build.
You can find them in the Dockerfile: everything up to line 62 needs to be installed. Of the ones that may cause difficulties, lvm can be obtained from this repository.
For Go, version 1.2.2 will do; the minor version does not affect the build.
Once the dependencies are installed, you can proceed with the build:
AUTO_GOPATH=1 ./hack/make.sh binary
The compiled binaries can be found in ./bundles/1.0.0/binary/
Note: instead of the archived docker repository, you can use the Moby repository; the code at the v1.0.0 tag is identical in both, since it was written before the split.
Moby is a “framework” that replaced the monolithic docker repository and allows you to build your own docker-based containerization solutions. In fact, all current docker products are built on top of it. More about this here.
Components
The first version of kubernetes had significantly fewer components, but all of them have survived to this day. Note that there is no auth component: the API of the first version was not protected by anything, so if you want to expose it to the public Internet, you should put nginx with basic auth in front of it.
apiserver – was responsible for cluster management, including polling kubelets over the REST interface about running containers;
cloudcfg (later kubectl) – provided access to the API from the console;
controller-manager – brought the number of running tasks up to the required level;
kubelet – just like now, launched containers according to the described configuration and reported container status over http;
proxy – a simple proxy server written in Go that distributes requests across endpoints using RoundRobin logic.
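The proxy's balancing can be illustrated with a toy round-robin picker (a sketch in shell rather than the actual Go code; the endpoint addresses are made up):

```shell
# Toy illustration of RoundRobin endpoint selection, as the proxy did it.
ENDPOINTS="10.0.0.1:80 10.0.0.2:80 10.0.0.3:80"
i=0
pick_endpoint() {
  set -- $ENDPOINTS        # load endpoints into positional parameters
  idx=$((i % $#))          # wrap around with modulo
  shift "$idx"
  echo "$1"
  i=$((i + 1))             # advance the shared counter
}
pick_endpoint   # 10.0.0.1:80
pick_endpoint   # 10.0.0.2:80
pick_endpoint   # 10.0.0.3:80
pick_endpoint   # wraps back to 10.0.0.1:80
```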
In the first versions of kubernetes, etcd was used not only as configuration storage but also as the main means of communication between components. All components registered themselves in etcd, after which apiserver polled the kubelets directly on its own. For this reason, the number of cluster nodes was severely limited in the first versions: apiserver simply could not get around to polling all the nodes in time.
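Since the API has no auth at all, a minimal nginx front with basic auth (using htpasswd from the apache2-utils package installed earlier) might look like this; the port numbers, file paths and credentials here are assumptions:

```shell
# Create a password file and an nginx vhost that shields the apiserver.
# htpasswd comes from apache2-utils; -c creates the file, -b takes the
# password from the command line:
#   htpasswd -cb /etc/nginx/.htpasswd admin 'secret'
cat > kube-auth.conf <<'EOF'
server {
    listen 80;
    location / {
        auth_basic           "kubernetes";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:8080;  # apiserver
    }
}
EOF
```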
Abstractions
Let's look at the main abstractions that were present in the system at that time. The list is quite limited and very different from what we are used to now. However, this set was quite sufficient for launching containers and minimally managing their work.
Task
At the very beginning, kubernetes did not have the pods we are used to, and the main unit of a running application was a task. Here is an example of its description:
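What follows is a reconstruction modeled on the examples shipped in the first commit; the field values are illustrative, and the exact field set may differ slightly:

```shell
# A task description: one container plus labels (values are made up,
# field names follow the first commit's examples).
cat > task.json <<'EOF'
{
  "id": "nginxTask",
  "desiredState": {
    "manifest": {
      "containers": [{
        "image": "dockerfile/nginx",
        "ports": [{ "containerPort": 80, "hostPort": 8080 }]
      }]
    }
  },
  "labels": { "name": "nginx" }
}
EOF
cat task.json
```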
In fact, a task contained the container launch parameters and a list of labels to assign to it. The full description is available in the schema.
ReplicationController
The prototype of the modern ReplicaSet operated with task templates and described the required set of instances to run in the cloud:
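A reconstruction of such a description, modeled on the first commit's examples (the replica count, names and field values are illustrative):

```shell
# A replicationController: desired replica count, a label selector
# (replicasInSet) and the task template to stamp out.
cat > controller.json <<'EOF'
{
  "id": "nginxController",
  "desiredState": {
    "replicas": 2,
    "replicasInSet": { "name": "nginx" },
    "taskTemplate": {
      "desiredState": {
        "manifest": {
          "containers": [{
            "image": "dockerfile/nginx",
            "ports": [{ "containerPort": 80, "hostPort": 8080 }]
          }]
        }
      },
      "labels": { "name": "nginx" }
    }
  },
  "labels": { "name": "nginx" }
}
EOF
```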
The full description is available in the schema.
Service
The only entity that has survived in kubernetes to this day under its original name, although it has since acquired a large number of options and parameters:
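A reconstruction of a service of that era, little more than a port and a label selector (values are illustrative, modeled on the first commit's examples):

```shell
# A service: expose the tasks labeled name=nginx on port 8000.
cat > service.json <<'EOF'
{
  "id": "nginxService",
  "port": 8000,
  "labels": { "name": "nginx" }
}
EOF
```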
Control
api
As now, management was performed via the apiserver, although the API only contained methods for the entities mentioned above: /tasks, /replicationControllers and /services. You can read more at the link.
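The whole API surface thus boiled down to three collections; the address below is an assumption, and remember there is nothing protecting it:

```shell
# Enumerate the three resource collections of the first apiserver.
# With a running cluster, each could be queried directly, e.g.:
#   curl http://localhost:8080/tasks
for resource in tasks replicationControllers services; do
  echo "GET http://localhost:8080/$resource"
done
```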
cloudcfg
The kubectl we are used to didn't exist; instead, cloudcfg was used, which forwarded requests to the api. For convenience, it was wrapped in cloudcfg.sh, which added the KUBE_MASTER address to each request.
Example of using cloudcfg directly:
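A sketch of direct usage, following the first commit's README (the flag syntax is taken from that README and may differ slightly; the master address is an assumption):

```shell
# cloudcfg.sh essentially prepended "-h $KUBE_MASTER" to every call:
KUBE_MASTER=http://localhost:8080
# Start 2 nginx replicas, mapping host port 8080 to container port 80:
#   cloudcfg -h $KUBE_MASTER -p 8080:80 run dockerfile/nginx 2 nginxController
# See what is running:
#   cloudcfg -h $KUBE_MASTER list /tasks
# Tear the controller down again:
#   cloudcfg -h $KUBE_MASTER stop nginxController
echo "talking to $KUBE_MASTER"
```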
Challenge Announcement
We went back in time and built the first version of Kubernetes to see for ourselves how it all began. Now you have everything you need to run kubernetes and win a T-shirt with Kotzilla in our challenge. To participate, go to our bot.