Lucky me: I had to renew the certificates of a k8s v1.12.3 cluster

A week ago I was handed a task: renew the certificates of a k8s cluster. On the one hand the task sounded trivial, BUT my lack of hands-on experience with k8s made it anything but: until then I had only used kuber as a service, and apart from looking at pods, deleting them and writing a deployment from a template, I had not had to do much. Some confidence came from having an instruction at hand, but it turned out to be written for version v1.13, while the cluster that needed the work was v1.12.3. And then it began...





On the third day I got the renewal done and decided to write it up. I have heard that in newer versions this problem is solved with practically a single command, but for those whose cluster is as vintage as mine, I share my experience.





Given a k8s cluster:





  • 3 master nodes





  • 3 etcd nodes





  • 5 worker nodes





kubectl get nodes

NAME                    STATUS   ROLES    AGE    VERSION
product1-mvp-k8s-0001   Ready    master   464d   v1.12.3
product1-mvp-k8s-0002   Ready    master   464d   v1.12.3
product1-mvp-k8s-0003   Ready    master   464d   v1.12.3
product1-mvp-k8s-0007   Ready    node     464d   v1.12.3
product1-mvp-k8s-0008   Ready    node     464d   v1.12.3
product1-mvp-k8s-0009   Ready    node     464d   v1.12.3
product1-mvp-k8s-0010   Ready    node     464d   v1.12.3
product1-mvp-k8s-0011   Ready    node     464d   v1.12.3

      
      



First, check the current certificate validity period:





echo | openssl s_client -showcerts -connect product1-mvp-k8s-0001:6443 -servername api 2>/dev/null | openssl x509 -noout -enddate

notAfter=Mar  4 00:39:56 2021 GMT
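
The same check can be run against every master in one go; a minimal sketch, assuming the node names from the cluster above:

for host in product1-mvp-k8s-0001 product1-mvp-k8s-0002 product1-mvp-k8s-0003; do
  echo -n "$host: "
  echo | openssl s_client -showcerts -connect "$host":6443 -servername api 2>/dev/null | openssl x509 -noout -enddate
done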

      
      



Let's go:





  • on all MASTER nodes we back up /etc/kubernetes





sudo mkdir backup; sudo cp -R /etc/kubernetes backup/ ; sudo tar -cvzf backup/pki_backup_`hostname`-`date +%Y%m%d`.tar.gz backup/kubernetes/
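
Before touching anything it is worth making sure the archive really contains the keys; a quick sanity check on the file created above:

sudo tar -tzf backup/pki_backup_`hostname`-`date +%Y%m%d`.tar.gz | head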
      
      



  • We look at the /etc/kubernetes structure; it should look something like this





ls -l

total 80
-rw------- 1 root root 5440 Mar  3 13:21 admin.conf
drwxr-xr-x 2 root root 4096 Aug 17  2020 audit-policy
-rw-r--r-- 1 root root  368 Mar  4  2020 calico-config.yml
-rw-r--r-- 1 root root  270 Mar  4  2020 calico-crb.yml
-rw-r--r-- 1 root root  341 Mar  4  2020 calico-cr.yml
-rw-r--r-- 1 root root  147 Mar  4  2020 calico-node-sa.yml
-rw-r--r-- 1 root root 6363 Mar  4  2020 calico-node.yml
-rw------- 1 root root 5472 Mar  3 13:21 controller-manager.conf
-rw-r--r-- 1 root root 3041 Aug 14  2020 kubeadm-config.v1alpha3.yaml
-rw------- 1 root root 5548 Mar  3 13:21 kubelet.conf
-rw-r--r-- 1 root root 1751 Mar  4  2020 kubelet.env
drwxr-xr-x 2 kube root 4096 Aug 14  2020 manifests
lrwxrwxrwx 1 root root   28 Mar  4  2020 node-kubeconfig.yaml -> /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5420 Mar  3 13:21 scheduler.conf
drwxr-xr-x 3 kube root 4096 Mar  3 10:20 ssl

      
      



In my setup all the keys live in ssl rather than in pki, which is the directory kubeadm expects, so it has to exist; in my case I simply create a symlink to it





ln -s /etc/kubernetes/ssl /etc/kubernetes/pki
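
A quick check that kubeadm will now find the keys where it expects them:

ls -ld /etc/kubernetes/pki
# should show: /etc/kubernetes/pki -> /etc/kubernetes/ssl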
      
      



  • we locate the cluster configuration file; in my case it was





    kubeadm-config.v1alpha3.yaml

If there is no such file on the node, the cluster configuration can also be pulled from the kubeadm-config ConfigMap:









kubectl get cm kubeadm-config -n kube-system -o yaml > /etc/kubernetes/kubeadm-config.yaml
      
      







We renew the certificates one by one; kubeadm reports whether it generated a new certificate or kept the existing one:

kubeadm alpha phase certs apiserver  --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml

[certificates] Using the existing apiserver certificate and key.

kubeadm alpha phase certs apiserver-kubelet-client

I0303 13:12:24.543254   40613 version.go:236] remote version is much newer: v1.20.4; falling back to: stable-1.12
[certificates] Using the existing apiserver-kubelet-client certificate and key.

kubeadm alpha phase certs front-proxy-client

I0303 13:12:35.660672   40989 version.go:236] remote version is much newer: v1.20.4; falling back to: stable-1.12
[certificates] Using the existing front-proxy-client certificate and key.

kubeadm alpha phase certs  etcd-server --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml

[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [product1-mvp-k8s-0001 localhost] and IPs [127.0.0.1 ::1]

kubeadm alpha phase certs  etcd-server --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml

[certificates] Using the existing etcd/server certificate and key.

kubeadm alpha phase certs  etcd-healthcheck-client --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml

[certificates] Generated etcd/healthcheck-client certificate and key.

kubeadm alpha phase certs  etcd-peer --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml

[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [product1-mvp-k8s-0001 localhost] and IPs [192.168.4.201 127.0.0.1 ::1]

      
      



  • we check the validity of the renewed certificates





find /etc/kubernetes/pki/ -name '*.crt' -exec openssl x509 -text -noout -in {} \; | grep -A2 Validity

        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  2 10:29:44 2030 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  3 10:07:29 2022 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  3 10:07:52 2022 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  3 10:06:48 2022 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  2 10:29:44 2030 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  2 19:39:56 2022 GMT
--
        Validity
            Not Before: Mar  4 10:29:43 2020 GMT
            Not After : Mar  2 10:29:43 2030 GMT
--
        Validity
            Not Before: Mar  4 10:29:43 2020 GMT
            Not After : Mar  2 19:40:13 2022 GMT
--
        Validity
            Not Before: Mar  4 10:29:44 2020 GMT
            Not After : Mar  2 19:36:38 2022 GMT
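
The same information is easier to read with the file name printed next to its expiry date; a small variation on the command above:

find /etc/kubernetes/pki/ -name '*.crt' | while read -r crt; do
  echo -n "$crt: "
  openssl x509 -enddate -noout -in "$crt"
done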

      
      



  • we regenerate admin.conf, controller-manager.conf, kubelet.conf and scheduler.conf (the old ones can be moved to a tmp directory first — see the sketch after the output below)





kubeadm alpha phase kubeconfig all  --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml 

[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
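
kubeadm keeps kubeconfig files it considers up to date, as in the output above. If you want them reissued with the fresh certificates, a minimal sketch (the tmp path is my assumption) is to move the old ones aside and re-run the same command:

mkdir -p /tmp/kubeconf_old
mv /etc/kubernetes/admin.conf /etc/kubernetes/controller-manager.conf \
   /etc/kubernetes/kubelet.conf /etc/kubernetes/scheduler.conf /tmp/kubeconf_old/
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml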

      
      



  • we restart kubelet and all containers on the master node and make sure kubelet comes back up





sudo systemctl stop kubelet; sudo docker stop $(docker ps -aq); sudo docker rm $(docker ps -aq); sudo systemctl start kubelet

systemctl status kubelet -l

● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-03 14:00:22 MSK; 10s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 52998 ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volume-plugins (code=exited, status=0/SUCCESS)
 Main PID: 53001 (kubelet)
   Memory: 51.2M
   CGroup: /system.slice/kubelet.service
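
Since all containers on the node were removed, it does not hurt to confirm that the control-plane pods have been recreated:

kubectl get pods -n kube-system -o wide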

      
      



  • we check that the master sees the nodes and the namespaces





kubectl get nodes

kubectl get ns

NAME                  STATUS   AGE
default               Active   464d
product1-mvp          Active   318d
infra-logging         Active   315d
infra-nginx-ingress   Active   386d
kube-public           Active   464d
kube-system           Active   464d
pg                    Active   318d
      
      







We run the same openssl validity check as at the beginning; the API server certificate now shows the new expiry date:

notAfter=Mar  3 07:40:43 2022 GMT
      
      



We repeat everything done on master 1 for masters 2 and 3.






Now for the worker nodes:





  • we move the old kubelet.conf aside; a new one will be issued via bootstrap-kubelet.conf





cd /etc/kubernetes/

mv kubelet.conf kubelet.conf_old
      
      



  • we create /etc/kubernetes/bootstrap-kubelet.conf with roughly the following content:





apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETX
    server: https://192.168.4.201:6443
  name: product1
contexts:
- context:
    cluster: product1
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@product1
current-context: tls-bootstrap-token-user@product1
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: fgz9qz.lujw0bwsdfhdsfjhgds
      
      



 





- certificate-authority-data – taken from the cluster PKI CA, or simply copied from /etc/kubernetes/kubelet.conf on a master node (see the sketch after this list)





- server: https://192.168.4.201:6443 – the IP of the API server on a master node, or the load-balancer IP if you have one





- token: fgz9qz.lujw0bwsdfhdsfjhgds – a bootstrap token generated on a master node with





 kubeadm token create
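
For reference, a minimal sketch of how both values can be obtained on a master node (the kubelet.conf path is the one from this cluster):

# base64 CA bundle for certificate-authority-data
grep certificate-authority-data /etc/kubernetes/kubelet.conf
# fresh bootstrap token for the token field
kubeadm token create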



 





  • we restart kubelet on the worker node and check on the master that the node goes back to Ready





systemctl restart kubelet

systemctl status kubelet -l

● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-03 14:06:33 MSK; 11s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 54615 ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volume-plugins (code=exited, status=0/SUCCESS)
 Main PID: 54621 (kubelet)
   Memory: 52.1M
   CGroup: /system.slice/kubelet.service
      
      



  • we check that kubelet has received a new client certificate





     





ls -las /var/lib/kubelet/pki/

total 24
4 -rw-------. 1 root root 1135 Mar  3 14:06 kubelet-client-2021-03-03-14-06-34.pem
0 lrwxrwxrwx. 1 root root   59 Mar  3 14:06 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-03-03-14-06-34.pem
4 -rw-r--r--. 1 root root 2267 Mar  2 10:40 kubelet.crt
4 -rw-------. 1 root root 1679 Mar  2 10:40 kubelet.key
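
The freshly issued client certificate can be checked the same way as the others:

openssl x509 -enddate -noout -in /var/lib/kubelet/pki/kubelet-client-current.pem
# notAfter should now show a fresh expiry date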
      
      



We repeat the same procedure on all remaining worker nodes.





And that is it — all certificates on the k8s v1.12.3 cluster have been renewed.







