Complete Kubernetes from scratch on Raspberry Pi





Not long ago, a well-known company announced that it is moving its laptop line to the ARM architecture. Hearing this news, I remembered that, while once again browsing EC2 prices in AWS, I had noticed Gravitons at a very tasty price. The catch, of course, was that they are ARM. Back then it didn't even occur to me that ARM was anything serious...



For me, this architecture had always been the domain of mobile and other IoT things. "Real" servers on ARM somehow felt unusual, in some ways even wild... However, the new thought stuck in my head, so one weekend I decided to check what could be run on ARM today. And I decided to start with something near and dear: a Kubernetes cluster. Not just some nominal "cluster", but everything done properly, so that it would be as close as possible to what I am used to seeing in production.



The idea was that the cluster should be accessible from the Internet, a web application should run in it, and there should be at least monitoring. To implement it, you will need a pair (or more) of Raspberry Pi boards, model 3B+ or higher. AWS could also have served as a platform for the experiments, but it was the "raspberries" (which were still sitting idle) that interested me. So, we will deploy a Kubernetes cluster with Ingress, Prometheus, and Grafana on them.



Preparing the "raspberries"



Installing the OS and SSH



I didn't bother much with the choice of OS: I simply took the latest Raspberry Pi OS Lite from the official website. The installation documentation is also available there; all steps from it must be performed on all nodes of the future cluster. Next, you need to perform the following manipulations (also on all nodes).



After connecting the monitor and keyboard, you must first configure the network and SSH:



  1. For the cluster to work, the master must have a fixed IP address, and it is convenient for the worker nodes to have one as well. I preferred static addresses everywhere for ease of setup.
  2. A static address can be configured in the OS (there is a suitable example in the /etc/dhcpcd.conf file; a sketch is shown right after this list) or by fixing the lease in the DHCP server of your router (in my case, a home one).
  3. The SSH server is simply enabled in raspi-config (Interfacing Options -> SSH).
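
For reference, a static configuration in /etc/dhcpcd.conf looks roughly like this (a sketch based on the commented-out example shipped in that file; the interface name and the router/DNS addresses are assumptions for my home network, so adjust them to yours):

# /etc/dhcpcd.conf — static address for the future master node
interface eth0
static ip_address=192.168.88.30/24
static routers=192.168.88.1
static domain_name_servers=192.168.88.1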


After that, you can log in via SSH (the default login is pi and the password is raspberry, or whatever you changed it to) and continue with the setup.
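
For example, connecting to the future master node at the static address configured above and switching to root (the rest of the commands in this article are run as root):

ssh pi@192.168.88.30
sudo -i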



Other settings



  1. Set the hostname. In my example, pi-control and pi-worker will be used (one way to do this is shown in the example after this list).
  2. Check that the file system has been expanded to the entire disk (df -h /). If needed, it can be extended using raspi-config.
  3. Change the default user password in raspi-config.
  4. Turn off the swap file (this is a Kubernetes requirement; if you are interested in the details, see issue #53533):



    dphys-swapfile swapoff
    systemctl disable dphys-swapfile
  5. Let's update the packages to the latest versions:



    apt-get update && apt-get dist-upgrade -y
  6. Install Docker and additional packages:



    apt-get install -y docker docker.io apt-transport-https curl bridge-utils iptables-persistent


    During the installation of iptables-persistent, you will need to save the iptables settings for IPv4, and then add rules to the FORWARD chain in the /etc/iptables/rules.v4 file, like this:



    # Generated by xtables-save v1.8.2 on Sun Jul 19 00:27:43 2020
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A FORWARD -s 10.1.0.0/16  -j ACCEPT
    -A FORWARD -d 10.1.0.0/16  -j ACCEPT
    COMMIT
  7. All that remains is to reboot.
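
As promised in item 1, here is one possible way to set the hostname on the future master (a sketch assuming the default Raspberry Pi OS hostname raspberrypi; the same is done on the worker with pi-worker):

hostnamectl set-hostname pi-control
# keep /etc/hosts in sync with the new hostname
sed -i 's/raspberrypi/pi-control/' /etc/hosts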


You are now ready to install your Kubernetes cluster.



Installing Kubernetes



At this stage, I deliberately set aside all of my own and our company's tooling for automating the installation and configuration of a K8s cluster. Instead, we will use the official documentation from kubernetes.io (slightly augmented with comments and abbreviated).



Add the Kubernetes repository:



curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update


Next, the documentation suggests installing a CRI (container runtime interface). Since Docker is already installed, let's move on and install the main components:



sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni


At the step of installing the main components, I immediately added kubernetes-cni, which is required for the cluster to work. And here there is an important point: for some reason, the kubernetes-cni package does not create the default directory for the CNI settings, so I had to create it manually:



mkdir -p /etc/cni/net.d


For the network backend that will be discussed below to work, you need to install a CNI plugin. I chose the portmap plugin, which is familiar and clear to me (see the documentation for the full list):



curl -sL https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-arm-v0.7.5.tgz | tar zxvf - -C /opt/cni/bin/ ./portmap


Configuring Kubernetes



Control plane node



Setting up the cluster itself is fairly straightforward. To speed up the process and verify that the Kubernetes images are available, you can first run:



kubeadm config images pull


Now we perform the installation itself by initializing the cluster's control plane:



kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.2.0.0/16 --upload-certs


Please note that subnets for services and pods should not overlap with each other or with existing networks.



At the end, we will see a message saying that everything is fine, along with the commands for attaching worker nodes to the control plane:



Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
 kubeadm join 192.168.88.30:6443 --token a485vl.xjgvzzr2g0xbtbs4 \
   --discovery-token-ca-cert-hash sha256:9da6b05aaa5364a9ec59adcc67b3988b9c1b94c15e81300560220acb1779b050 \
    --control-plane --certificate-key 72a3c0a14c627d6d7fdade1f4c8d7a41b0fac31b1faf0d8fdf9678d74d7d2403
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.88.30:6443 --token a485vl.xjgvzzr2g0xbtbs4 \
   --discovery-token-ca-cert-hash sha256:9da6b05aaa5364a9ec59adcc67b3988b9c1b94c15e81300560220acb1779b050


Let's follow the recommendations and add the config for our user. At the same time, I recommend enabling kubectl auto-completion right away:



 kubectl completion bash > ~/.kube/completion.bash.inc
 printf "
 # Kubectl shell completion
 source '$HOME/.kube/completion.bash.inc'
 " >> $HOME/.bash_profile
 source $HOME/.bash_profile


At this stage, you can already see the first node in the cluster (although it is not ready yet):



root@pi-control:~# kubectl get no
NAME         STATUS     ROLES    AGE   VERSION
pi-control   NotReady   master   29s   v1.18.6


Network configuration



Next, as the post-installation message said, you need to install a network into the cluster. The documentation offers a choice of Calico, Cilium, contiv-vpp, Kube-router and Weave Net... Here I deviated from the official instructions and chose an option that is more familiar and understandable to me: flannel in host-gw mode (for more information about the available backends, see the project documentation).



Installing it into a cluster is pretty simple. First, download the manifests:



wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Then, in its settings, change the backend type from vxlan to host-gw:



sed -i 's/vxlan/host-gw/' kube-flannel.yml


... and change the pod subnet from the default value to the one specified during cluster initialization:



sed -i 's#10.244.0.0/16#10.1.0.0/16#' kube-flannel.yml


After that, we create resources:



kubectl create -f kube-flannel.yml


Done! After a while, the first K8s node will switch to the Ready status:



NAME         STATUS   ROLES    AGE   VERSION
pi-control   Ready    master   2m    v1.18.6


Adding a worker node



Now you can add a worker node. To do this, after installing Kubernetes on it following the steps described above, simply execute the command received earlier:



kubeadm join 192.168.88.30:6443 --token a485vl.xjgvzzr2g0xbtbs4 \
    --discovery-token-ca-cert-hash sha256:9da6b05aaa5364a9ec59adcc67b3988b9c1b94c15e81300560220acb1779b050


At this point, the cluster can be considered ready:



root@pi-control:~# kubectl get no
NAME         STATUS   ROLES    AGE    VERSION
pi-control   Ready    master   28m    v1.18.6
pi-worker    Ready    <none>   2m8s   v1.18.6


I had only two Raspberry Pis at hand, and I didn't want to dedicate one of them solely to the control plane. So I removed the automatically applied taint from the pi-control node by running:



root@pi-control:~# kubectl edit node pi-control


... and removing the lines:



 - effect: NoSchedule
   key: node-role.kubernetes.io/master
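
By the way, the same can be done without an interactive editor, with a single command (the taint key is the one kubeadm puts on the master in this Kubernetes version):

kubectl taint nodes pi-control node-role.kubernetes.io/master-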


Filling the cluster with the required minimum



First of all, we need Helm. Of course, you can do everything without it, but Helm allows you to customize some components literally without editing any files. And, in fact, it is just a single binary that costs nothing to keep around.



So, go to helm.sh, open the docs/installation section, and execute the command from there:



curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash


After that add the chart repository:



helm repo add stable https://kubernetes-charts.storage.googleapis.com/


Now let's install the infrastructure components in accordance with the idea:



  • Ingress controller;
  • Prometheus;
  • Grafana;
  • cert-manager.


Ingress controller



The first component, the Ingress controller, is easy to install and ready to use out of the box. To do this, just go to the bare-metal section on the project's website and execute the installation command from there:



kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml


However, at this point the "raspberry" began to strain and hit its disk IOPS limit. The thing is, a large number of resources are installed together with the Ingress controller, many API requests are made, and, accordingly, a lot of data is written to etcd. Either a class 10 memory card is not very fast, or an SD card is fundamentally insufficient for such a load. Nevertheless, after about 5 minutes everything started.



A namespace was created, and the controller appeared in it along with everything it needs:



root@pi-control:~# kubectl -n ingress-nginx get pod
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-2hwdx        0/1     Completed   0          31s
ingress-nginx-admission-patch-cp55c         0/1     Completed   0          31s
ingress-nginx-controller-7fd7d8df56-68qp5   1/1     Running     0          48s


Prometheus



The next two components are fairly easy to install via Helm from the chart repo.



Find Prometheus, create a namespace, and install the chart into it:



helm search repo stable | grep prometheus
kubectl create ns monitoring
helm install prometheus --namespace monitoring stable/prometheus --set server.ingress.enabled=True --set server.ingress.hosts={"prometheus.home.pi"}


By default, Prometheus requests two disks: one for Prometheus data and one for AlertManager data. Since no storage class has been created in the cluster, the disks will not be provisioned and the pods will not start. For bare-metal Kubernetes installations we usually use Ceph RBD, but in the case of the Raspberry Pi this is overkill.



So let's create simple local storage based on hostPath. The PV (PersistentVolume) manifests for prometheus-server and prometheus-alertmanager are combined in the prometheus-pv.yaml file in the Git repository with examples for this article. The directories for the PVs must be created in advance on the disk of the node to which we want to bind Prometheus: in the example, nodeAffinity specifies the hostname pi-worker, and the directories /data/localstorage/prometheus-server and /data/localstorage/prometheus-alertmanager are created on it.
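
To give an idea of what is inside, here is a rough sketch of one of the two PVs (the actual manifest is in the repository; the PV name and size here are my assumptions, and the alertmanager PV is analogous with its own path):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    # this directory must already exist on the pi-worker node
    path: /data/localstorage/prometheus-server
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - pi-worker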



Download (clone) the manifest and add it to Kubernetes:



kubectl create -f prometheus-pv.yaml


At this stage I first encountered the ARM architecture problem. Kube-state-metrics, which is installed by default by the Prometheus chart, refused to start and was throwing an error:



root@pi-control:~# kubectl -n monitoring logs prometheus-kube-state-metrics-c65b87574-l66d8
standard_init_linux.go:207: exec user process caused "exec format error"


The thing is, kube-state-metrics uses an image from the CoreOS project, which is not built for ARM:



kubectl -n monitoring get deployments.apps prometheus-kube-state-metrics -o=jsonpath={.spec.template.spec.containers[].image}
quay.io/coreos/kube-state-metrics:v1.9.7


I had to google a little and find, for example, this image. To take advantage of it, let's update the release, specifying which image to use for kube-state-metrics:



helm upgrade prometheus --namespace monitoring stable/prometheus --set server.ingress.enabled=True --set server.ingress.hosts={"prometheus.home.pi"} --set kube-state-metrics.image.repository=carlosedp/kube-state-metrics --set kube-state-metrics.image.tag=v1.9.6


We check that everything has started:



root@pi-control:~# kubectl -n monitoring get po
NAME                                             READY   STATUS              RESTARTS   AGE
prometheus-alertmanager-df65d99d4-6d27g          2/2     Running             0          5m56s
prometheus-kube-state-metrics-5dc5fd89c6-ztmqr   1/1     Running             0          5m56s
prometheus-node-exporter-49zll                   1/1     Running             0          5m51s
prometheus-node-exporter-vwl44                   1/1     Running             0          4m20s
prometheus-pushgateway-c547cfc87-k28qx           1/1     Running             0          5m56s
prometheus-server-85666fd794-z9qnc               2/2     Running             0          4m52s


Grafana and cert-manager



For charts and dashboards, install Grafana:



helm install grafana --namespace monitoring stable/grafana  --set ingress.enabled=true --set ingress.hosts={"grafana.home.pi"}


At the end of the output, we will be shown how to get the password for access:



kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo


To issue certificates, install cert-manager. To install it, refer to the documentation, which offers the appropriate commands for Helm:



helm repo add jetstack https://charts.jetstack.io

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.16.0 \
  --set installCRDs=true


For self-signed certificates for home use, this is sufficient. If you want to obtain certificates from, say, Let's Encrypt, you will also need to configure a cluster issuer. More details can be found in our article "SSL certificates from Let's Encrypt with cert-manager on Kubernetes".



I myself settled on the variant from the example in the documentation, deciding that the staging variant of Let's Encrypt would be enough. Change the e-mail in the example and save it to a file, cert-manager-cluster-issuer.yaml.
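
For reference, the staging issuer from that example looks approximately like this (the API version matches the Certificate shown below, and the e-mail is a placeholder to replace with your own):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint: certificates are not trusted, but fine for testing
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx

Now add it to the cluster: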



kubectl create -f cert-manager-cluster-issuer.yaml


Now you can request a certificate, for example, for Grafana. This will require a domain and external access to the cluster. I have a domain, and I routed traffic by forwarding ports 80 and 443 on my home router in accordance with the created ingress-controller service:



kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.2.206.61    <none>        80:31303/TCP,443:30498/TCP   23d


In this case, port 80 is translated to 31303, and 443 to 30498. (The ports are randomly generated, so you will have different ones.)



Here is an example certificate ( cert-manager-grafana-certificate.yaml ):



apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: grafana
  namespace: monitoring
spec:
  dnsNames:
    - grafana.home.pi
  secretName: grafana-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-staging


Add it to the cluster:



kubectl create -f cert-manager-grafana-certificate.yaml


After that, an Ingress resource will appear, through which the Let's Encrypt validation will take place:



root@pi-control:~# kubectl -n monitoring get ing
NAME                        CLASS    HOSTS                ADDRESS         PORTS   AGE
cm-acme-http-solver-rkf8l   <none>   grafana.home.pi      192.168.88.31   80      72s
grafana                     <none>   grafana.home.pi      192.168.88.31   80      6d17h
prometheus-server           <none>   prometheus.home.pi   192.168.88.31   80      8d


After the validation passes, we will see that the certificate resource is ready, and the grafana-tls secret specified above contains the certificate and key. You can immediately check who issued the certificate:



root@pi-control:~# kubectl -n monitoring get certificate
NAME      READY   SECRET        AGE
grafana   True    grafana-tls   13m

root@pi-control:~# kubectl -n monitoring get secrets grafana-tls -ojsonpath="{.data['tls\.crt']}" | base64 -d | openssl x509 -issuer -noout
issuer=CN = Fake LE Intermediate X1


Let's go back to Grafana. We need to tweak its Helm release a little, changing the TLS settings in accordance with the issued certificate.



To do this, download the chart, edit it, and upgrade the release from the local directory:



helm pull --untar stable/grafana


Edit the TLS parameters in the grafana/values.yaml file:



  tls:
    - secretName: grafana-tls
      hosts:
        - grafana.home.pi


Here you can also immediately configure the installed Prometheus as a datasource:



datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server:80
      access: proxy
      isDefault: true


Now update the Grafana chart from the local directory:



helm upgrade grafana --namespace monitoring ./grafana  --set ingress.enabled=true --set ingress.hosts={"grafana.home.pi"}


We check that port 443 has been added to the grafana Ingress and that there is access via HTTPS:



root@pi-control:~# kubectl -n monitoring get ing grafana
NAME      CLASS    HOSTS                     ADDRESS         PORTS     AGE
grafana   <none>   grafana.home.pi           192.168.88.31   80, 443   63m

root@pi-control:~# curl -kI https://grafana.home.pi
HTTP/2 302
server: nginx/1.19.1
date: Tue, 28 Jul 2020 19:01:31 GMT
content-type: text/html; charset=utf-8
cache-control: no-cache
expires: -1
location: /login
pragma: no-cache
set-cookie: redirect_to=%2F; Path=/; HttpOnly; SameSite=Lax
x-frame-options: deny
strict-transport-security: max-age=15724800; includeSubDomains


To demonstrate Grafana in action, you can download and add a dashboard for kube-state-metrics. Here's how it looks:







I also recommend adding a dashboard for the node exporter: it will show in detail what is happening with the "raspberries" (CPU load, memory, network, disk usage, etc.).



After that, I consider the cluster ready to receive and run applications!



A note on building images



There are at least two options for building applications for the ARM architecture. First, you can build directly on an ARM device. However, after looking at the current resource utilization of the two Raspberry Pis, I realized that they would not survive builds either. That's why I ordered a new Raspberry Pi 4 (it is more powerful and has 4 GB of memory): I plan to run builds on it.



The second option is to build a multi-architecture Docker image on a more powerful machine. There is a docker buildx extension for that. If the application is written in a compiled language, cross-compilation for ARM will be required. I will not describe all the settings for this path, as that would take a separate article. With this approach, you can get "universal" images: Docker running on an ARM machine will automatically pull the image matching its architecture.
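
In the simplest case (on an x86 machine with a recent Docker and QEMU binfmt emulation enabled), it boils down to something like this; the builder name and the image tag are placeholders:

docker buildx create --name multiarch --use
docker buildx build --platform linux/arm/v7,linux/arm64,linux/amd64 -t registry.example.com/myapp:latest --push .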



Conclusion



The experiment exceeded all my expectations: [at least] "vanilla" Kubernetes with the necessary basics feels good on ARM, and only a couple of nuances came up while configuring it.



The Raspberry Pi 3B+ boards themselves handle the CPU load, but their SD cards are a clear bottleneck. Colleagues suggested that some revisions can boot from USB, where an SSD can be attached: then the situation will most likely improve.



Here is an example of CPU load when installing Grafana:







For experiments and just trying things out, in my opinion, a Kubernetes cluster on "raspberries" conveys the feel of real operation much better than something like Minikube, because all of the cluster components are installed and work the way they do in a real setup.



Looking ahead, there is an idea to add a full CI/CD cycle to the cluster, implemented entirely on the Raspberry Pi. I will also be glad if someone shares their experience of setting up K8s on AWS Gravitons.



PS Yes, "production" may be closer than I thought:






