In our new translated article, we give a quick overview of the new Kubernetes distribution. We hope the article will be interesting for the readers of Habr.
A couple of days ago, a friend told me about a new Kubernetes distribution from Mirantis called k0s. We all know and love K8s, right? We were also captivated by K3s, a lightweight Kubernetes developed by Rancher Labs and handed over to CNCF some time ago. It's time to discover the new k0s distribution!
After a brief introduction to k0s, we will create a cluster of three nodes by following these steps:
- Preparing three virtual machines (Multipass in action)
- Installing k0s on each of them
- Setting up a simple k0s cluster configuration file
- Cluster initialization
- Gaining access to the cluster
- Adding worker nodes
- Adding a user
What is k0s?
k0s is the newest Kubernetes distribution. The current release is 0.8.0. It was published in December 2020, and the first commit of the entire project happened in June 2020.
k0s is shipped as a single binary without any OS dependencies. Thus, it is defined as a Kubernetes distribution with zero-friction / zero-deps / zero-cost characteristics (ease of configuration / no dependencies / free).
Latest k0s release:
- Delivers CNCF-certified Kubernetes 1.19
- Uses containerd as the default container runtime
- Supports Intel (x86-64) and ARM (ARM64) architectures
- Uses an in-cluster etcd
- Uses the Calico network plugin by default (thereby enabling network policies)
- Includes the Pod Security Policy admission controller
- Provides DNS with CoreDNS
- Provides cluster metrics via Metrics Server
- Enables horizontal pod autoscaling (HPA)
A lot of cool features will come in future releases, including:
- Compact VM runtime (I look forward to testing this feature)
- Zero Downtime Cluster Upgrade
- Cluster backup and recovery
Impressive, isn't it? Next, we'll look at how to use k0s to deploy a 3-node cluster.
Preparing virtual machines
First, we will create three virtual machines, each of which will be a node in our cluster. In this article, I'll take a quick and easy route and use the excellent Multipass tool (I love it) to prepare local virtual machines on macOS.
The following commands create three instances of Ubuntu on xhyve. Each virtual machine has 5 GB of disk, 2 GB of RAM and 2 virtual processors (vCPU):
for i in 1 2 3; do multipass launch -n node$i -c 2 -m 2G -d 5G; done
We can then display a list of virtual machines to make sure they are all working fine:
$ multipass list
Name     State     IPv4            Image
node1    Running   192.168.64.11   Ubuntu 20.04 LTS
node2    Running   192.168.64.12   Ubuntu 20.04 LTS
node3    Running   192.168.64.13   Ubuntu 20.04 LTS
Next, we will install k0s on each of these nodes.
Installing the latest k0s release
The latest release of k0s can be downloaded from the GitHub repository.
It has a convenient installation script:
curl -sSLf get.k0s.sh | sudo sh
We use this script to install k0s on all of our nodes:
for i in 1 2 3; do multipass exec node$i -- bash -c "curl -sSLf get.k0s.sh | sudo sh"; done
The above script installs k0s to /usr/bin/k0s. To list all the available commands, run the binary with no arguments.
Available k0s commands
We can check the current version:
$ k0s version
v0.8.0
We will use some of the commands in the next steps.
Creating a configuration file
First, we need to define a configuration file containing the information k0s needs to create a cluster. On node1, we can run the default-config command to print the full default configuration. Among other things, it defines:
ubuntu@node1:~$ k0s default-config
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
    - 192.168.64.11
    extraArgs: {}
  controllerManager:
    extraArgs: {}
  scheduler:
    extraArgs: {}
  storage:
    type: etcd
    kine: null
    etcd:
      peerAddress: 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
    provider: calico
    calico:
      mode: vxlan
      vxlanPort: 4789
      vxlanVNI: 4096
      mtu: 1450
      wireguard: false
  podSecurityPolicy:
    defaultPolicy: 00-k0s-privileged
  workerProfiles: []
  extensions: null
  images:
    konnectivity:
      image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
      version: v0.0.13
    metricsserver:
      image: gcr.io/k8s-staging-metrics-server/metrics-server
      version: v0.3.7
    kubeproxy:
      image: k8s.gcr.io/kube-proxy
      version: v1.19.4
    coredns:
      image: docker.io/coredns/coredns
      version: 1.7.0
    calico:
      cni:
        image: calico/cni
        version: v3.16.2
      flexvolume:
        image: calico/pod2daemon-flexvol
        version: v3.16.2
      node:
        image: calico/node
        version: v3.16.2
      kubecontrollers:
        image: calico/kube-controllers
        version: v3.16.2
    repository: ""
  telemetry:
    interval: 10m0s
    enabled: true
- Launch options for the API server, controller manager, and scheduler
- The storage used to keep cluster state (etcd)
- The network plugin and its configuration (Calico)
- The container image versions of the control-plane components
- Extra manifests to deploy when the cluster starts
We could save this configuration to a file and adapt it to our needs. In this article, however, we will use a very simple configuration and save it in /etc/k0s/k0s.yaml.
Note: since we are initializing the cluster on node1, this node will run the API server. Its IP address is used in api.address and api.sans (Subject Alternative Names) in the configuration file below. If we had additional master nodes behind a load balancer, we would also list them in api.sans.
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
In that case, api.sans would need to contain the IP address (or the domain name) of each master node and of the load balancer.
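One way to put this configuration in place, assuming we draft it locally first (the heredoc and the copy step are my choice, not from the article; any editor works just as well):

```shell
# Write the minimal cluster configuration to a local file
cat > k0s.yaml <<'EOF'
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.64.11
    sans:
    - 192.168.64.11
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
EOF
# On node1: sudo mkdir -p /etc/k0s && sudo cp k0s.yaml /etc/k0s/k0s.yaml
```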
Cluster initialization
First, we create a systemd unit on node1 to manage k0s.
[Unit]
Description="k0s server"
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/k0s server -c /etc/k0s/k0s.yaml --enable-worker
Restart=always
The main command is listed here in ExecStart ; it starts the k0s server with the configuration we saved to our file in the previous step. We also specify the --enable-worker parameter so that this first master node also functions as a worker.
Then we copy this file to /lib/systemd/system/k0s.service, reload systemd, and start the newly created service.
ubuntu@node1:~$ sudo systemctl daemon-reload
ubuntu@node1:~$ sudo systemctl start k0s.service
For the sake of curiosity, you can check the processes started by the k0s server:
ubuntu@node1:~$ sudo ps aux | awk '{print $11}' | grep k0s
/usr/bin/k0s
/var/lib/k0s/bin/etcd
/var/lib/k0s/bin/konnectivity-server
/var/lib/k0s/bin/kube-controller-manager
/var/lib/k0s/bin/kube-scheduler
/var/lib/k0s/bin/kube-apiserver
/var/lib/k0s/bin/containerd
/var/lib/k0s/bin/kubelet
From the output above, we can see that all the control-plane components are running (kube-apiserver, kube-controller-manager, kube-scheduler, etc.), as well as the worker-side components (containerd, kubelet). k0s is responsible for managing all of these processes.
Now we have a single-node cluster. In the next step, we will see how to access it.
Gaining access to the cluster
First, we need to get the kubeconfig file generated during the creation of the cluster; it was created on node1 at /var/lib/k0s/pki/admin.conf . This file should be used to configure kubectl on the local machine.
First, we get the kubeconfig of the cluster from node1 :
# Get kubeconfig file
$ multipass exec node1 cat /var/lib/k0s/pki/admin.conf > k0s.cfg
Next, we replace the internal IP address with the external IP address of node1 :
# Replace IP address
$ NODE1_IP=$(multipass info node1 | grep IP | awk '{print $2}')
$ sed -i '' "s/localhost/$NODE1_IP/" k0s.cfg
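The substitution can be sanity-checked locally on a fabricated kubeconfig fragment (the file name, its contents, and the hard-coded IP below are made up for illustration):

```shell
# Fake kubeconfig fragment containing the default server address
printf 'server: https://localhost:6443\n' > demo-k0s.cfg
NODE1_IP=192.168.64.11
# -i.bak keeps a backup file and works with both GNU and BSD (macOS) sed
sed -i.bak "s/localhost/$NODE1_IP/" demo-k0s.cfg
cat demo-k0s.cfg
# → server: https://192.168.64.11:6443
```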
Then we configure our local kubectl client to communicate with the k0s API server:
export KUBECONFIG=$PWD/k0s.cfg
Surely one of the first commands we run when we enter a new cluster is the one that displays a list of all available nodes - let's try:
$ kubectl get no
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   78s   v1.19.4
There is nothing surprising here: thanks to the --enable-worker flag we specified in the start command, node1 is not only a master but also a worker node of our first cluster. Without this flag, node1 would act as a master only and would not appear in this list of nodes.
Adding worker nodes
To add node2 and node3 to the cluster, we first need to create a join token on node1 (this is a fairly common step, also used in Docker Swarm and in Kubernetes clusters created with kubeadm).
$ TOKEN=$(k0s token create --role=worker)
The above command generates a long (very long) token. Using it, we can join node2 and node3 to the cluster :
ubuntu@node2:~$ k0s worker $TOKEN
ubuntu@node3:~$ k0s worker $TOKEN
Note: In a real cluster, we would use systemd (or another supervisor) to manage the k0s processes for the worker nodes, as we did for the master node.
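As a sketch of that note, a worker unit file could mirror the one we used on node1; the file name is my own invention, and PASTE_TOKEN_HERE stands for the actual join token generated above (it is not a real k0s placeholder):

```shell
# Draft the unit locally; replace PASTE_TOKEN_HERE with the output of
# `k0s token create --role=worker` run on node1
cat > k0s-worker.service <<'EOF'
[Unit]
Description="k0s worker"
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/k0s worker PASTE_TOKEN_HERE
Restart=always
EOF
# On each worker node, install and start it:
#   sudo cp k0s-worker.service /lib/systemd/system/
#   sudo systemctl daemon-reload && sudo systemctl start k0s-worker.service
```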
Our three-node cluster is up and running, as we can verify by listing the nodes again:
$ kubectl get no
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   30m   v1.19.4
node2   Ready    <none>   35s   v1.19.4
node3   Ready    <none>   32s   v1.19.4
We can also check the pods running in all namespaces:
List of pods running in the cluster in all namespaces
There are a few things to note here:
- As usual, we see the kube-proxy pods, the network plugin pods (based on Calico), and the CoreDNS pods.
- The api-server, scheduler, and controller-manager do not appear as pods: they run as regular processes managed by k0s.
Adding a user
k0s version 0.8.0 introduced the user subcommand, which allows you to create a kubeconfig for an additional user/group. For example, the following command creates a kubeconfig file for a new user named demo belonging to an imaginary group named development.
Note: in Kubernetes, users and groups are managed by an administrator outside the cluster, i.e., there is no User or Group resource in K8s.
$ sudo k0s user create demo --groups development > demo.kubeconfig
For a better understanding, we will extract the client certificate from this kubeconfig file and decode it from the base64 representation:
$ cat demo.kubeconfig | grep client-certificate-data | awk '{print $2}' | base64 --decode > demo.crt
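As a side note, the grep/awk/base64 pipeline above can be sanity-checked on a fabricated kubeconfig line (the file and the "certificate" data below are placeholders, not a real certificate):

```shell
# Build a fake kubeconfig line shaped like client-certificate-data
printf '    client-certificate-data: %s\n' "$(printf 'FAKE-CERT' | base64)" > fake.kubeconfig
# The same pipeline as above recovers the original payload
cat fake.kubeconfig | grep client-certificate-data | awk '{print $2}' | base64 --decode
# → FAKE-CERT
```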
Then we use the openssl command to get the contents of the certificate:
ubuntu@node1:~$ openssl x509 -in demo.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            71:8b:a4:4d:be:76:70:8a:...:07:60:67:c1:2d:51:94
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes-ca
        Validity
            Not Before: Dec  2 13:50:00 2020 GMT
            Not After : Dec  2 13:50:00 2021 GMT
        Subject: O = development, CN = demo
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:be:87:dd:15:46:91:98:eb:b8:38:34:77:a4:99:
                    da:4b:d6:ca:09:92:f3:29:28:2d:db:7a:0b:9f:91:
                    65:f3:11:bb:6c:88:b1:8f:46:6e:38:71:97:b7:b5:
                    9b:8d:32:86:1f:0b:f8:4e:57:4f:1c:5f:9f:c5:ee:
                    40:23:80:99:a1:77:30:a3:46:c1:5b:3e:1c:fa:5c:
                    ...
- The Issuer is kubernetes-ca, the certificate authority of our k0s cluster.
- The Subject is O = development, CN = demo; this part is important, as this is where the user's name and group come from. Since the certificate is signed by the cluster CA, the authentication plugin on the api-server can authenticate the user/group from the common name (CN) and organization (O) in the certificate's subject.
First, we instruct kubectl to use the context defined in this new kubeconfig file:
$ export KUBECONFIG=$PWD/demo.kubeconfig
Then we once again try to list the cluster nodes:
$ kubectl get no
Error from server (Forbidden): nodes is forbidden: User "demo" cannot list resource "nodes" in API group "" at the cluster scope
This error message was expected: even though the api-server identified the user (the certificate sent with the request was signed by the cluster CA), the user is not allowed to perform any actions on the cluster.
Additional permissions can easily be granted by creating a Role/ClusterRole and assigning it to the user with a RoleBinding/ClusterRoleBinding, but I leave this task as an exercise for the reader.
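As a starting point for that exercise, a minimal ClusterRole/ClusterRoleBinding that would let demo list nodes might look like this (the resource names here are illustrative, not from the article):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer        # illustrative name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-node-viewer   # illustrative name
subjects:
- kind: User
  name: demo               # matches the CN in the certificate subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-viewer
  apiGroup: rbac.authorization.k8s.io
```

Applying this manifest with admin credentials (kubectl apply -f) should make the kubectl get no call above succeed for demo.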
Conclusion
k0s is definitely worth considering. Its approach, in which a single binary manages all the processes, is very interesting.
This article provides only a brief overview of k0s, but I will definitely track its development and devote future articles to this new and promising Kubernetes distribution. Some of the future features seem to be really promising, and I look forward to testing them out.