An Introduction and Comparison of RKE, RKE2 and K3s from an Ops Guy’s Perspective, Part 1: RKE

Rancher supports a lot of different Kubernetes distributions in a lot of environments. Within a Rancher project the question came up: What are the similarities and differences between the three available products RKE, RKE2 and K3s? What are the pros and cons? How do they behave in our own environment? I found it hard to get answers to all the questions I had, so I decided to simply give things a try…

RKE (Rancher Kubernetes Engine)

What is it?

  • RKE is basically a Kubernetes “installer”
  • Supports many different Kubernetes versions (1.18-1.27 as of March 2024, depending on the actual RKE version 1.3-1.5), while the current upstream version is 1.29
  • Uses etcd as backend, standalone etcd nodes are supported
  • Docker is required (despite the dockershim deprecation in upstream k8s), and the supported Docker versions are hardcoded
  • Docker limits apply (e.g. no registry mirror → no airgapped setups)
  • RKE (the binary) is usually placed on a management node (but can also be put on any cluster node)
  • Bundled (addon) components include Nginx ingress controller and one of the following CNIs: Flannel, Calico, Weave, Canal (default), Cisco ACI

Prerequisites and Installation

  • Supported operating systems: several distributions such as SUSE SLES, Red Hat RHEL, Oracle Linux, Rocky Linux, Ubuntu, Amazon Linux and a few others
  • No Windows support; Windows worker nodes were only supported until Kubernetes 1.24
  • Docker installation is required on all nodes
  • Create a user on all cluster nodes which is a member of the docker group (usermod -aG docker <user>)
  • Create an RSA SSH key (only this key type is supported!) on the management node and add the public key to ~/.ssh/authorized_keys of that user on all nodes (e.g. with ssh-copy-id); a combined sketch follows below
  • Find an RKE version and download the rke binary on the management node: curl -LO https://github.com/rancher/rke/releases/download/v1.5.5/rke_linux-amd64
  • $ mv rke_linux-amd64 ~/bin/rke; chmod 755 ~/bin/rke
  • Create a directory for each managed cluster to hold the cluster config files (mkdir example-rke && cd example-rke)
  • kubectl is not part of the RKE distribution; download a matching(!) version manually from https://kubernetes.io/de/docs/tasks/tools/install-kubectl/
  • The supported k8s versions are hardcoded in the binary. Some examples for RKE 1.3, 1.4 and 1.5:
$ rke config --list-version --all    # RKE 1.3
v1.24.6-rancher1-1
v1.23.12-rancher1-1
v1.22.15-rancher1-1
v1.21.14-rancher1-1
v1.20.15-rancher2-2
v1.19.16-rancher2-1
v1.18.20-rancher1-3
$ rke config --list-version --all    # RKE 1.4
v1.27.10-rancher1-1
v1.26.13-rancher1-1
v1.25.16-rancher2-2
v1.24.17-rancher1-1
v1.23.16-rancher2-3
$ rke config --list-version --all    # RKE 1.5
v1.27.10-rancher1-1
v1.26.13-rancher1-1
v1.25.16-rancher2-2

If you can’t find your desired k8s version, you may try to download a different RKE binary from https://github.com/rancher/rke/releases.
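
Putting the node preparation steps together, a minimal sketch (the user and host names are placeholders):

# On each cluster node: add the cluster user to the docker group
$ sudo usermod -aG docker cluster-user

# On the management node: create an RSA key (other key types are not supported)
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

# Distribute the public key to every cluster node
$ ssh-copy-id -i ~/.ssh/id_rsa.pub cluster-user@node1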

Configuration

Before the actual cluster installation, the RKE configuration needs to be created. This can be done either manually or interactively (the items marked with an arrow are required):

$ rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: <----------
[+] Number of Hosts [1]: 
[+] SSH Address of host (1) [none]: nodename            <----------
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host () [none]: 
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host () [none]: 
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa  <---------
[+] SSH User of host () [ubuntu]: cluster-user          <----------
[+] Is host () a Control Plane host (y/n)? [y]: 
[+] Is host () a Worker host (y/n)? [n]: y              <----------
[+] Is host () an etcd host (y/n)? [n]: y               <----------
[+] Override Hostname of host () [none]:
[+] Internal IP of host () [none]: 
[+] Docker socket path on host () [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.26.6-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]: 

This will generate a file cluster.yml in the current directory. You can also write the cluster.yml manually. See https://rancher.com/docs/rke/latest/en/ for a detailed list of configuration options!

A minimum configuration for a single node cluster looks like this:

nodes:
- address: <nodename>
  user: <docker-user>
  role:
  - controlplane
  - etcd
  - worker
# <repeat for more nodes>
kubernetes_version: v1.27.8-rancher2-2

kubernetes_version can be omitted as well; the latest version known to the RKE binary is then installed.
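
For a multi-node cluster the nodes list is simply extended and the roles can be split, e.g. (a sketch; host names and the user are placeholders):

nodes:
- address: master1
  user: <docker-user>
  role:
  - controlplane
  - etcd
- address: worker1
  user: <docker-user>
  role:
  - worker
- address: worker2
  user: <docker-user>
  role:
  - worker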

The RKE components are installed and run as Docker containers. Each container image can be specified separately (take care!). The default list of images is available with:

$ rke config --system-images --all

Additional cluster components are installed as cluster addons. For an addon to work, its image needs to be present in the system-images list. These addons include the CNIs, the Nginx ingress controller, CoreDNS and the Metrics Server. Custom addons are not supported. If configured in cluster.yml, the addons are deployed via Kubernetes jobs.
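
The corresponding cluster.yml sections look roughly like this (a sketch; check the RKE docs for the exact option names and supported values):

network:
  plugin: canal
ingress:
  provider: nginx
dns:
  provider: coredns
monitoring:
  provider: metrics-server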

Managing a cluster

To create, start, change or update a cluster with the configuration from cluster.yml in the current directory, type:

$ rke up

If everything was configured correctly, the cluster is up and running and two files are created in the current directory: cluster.rkestate and kube_config_cluster.yml. cluster.rkestate is the state file (maybe best compared to a Terraform state file) and should be kept in a safe place; it will be needed for later cluster updates and upgrades. kube_config_cluster.yml contains the admin kubeconfig required for further administration of the cluster.

To remove a cluster, type

$ rke remove

Fun fact: There is no stop command!

To interact with the cluster, you need to set the KUBECONFIG variable:

$ export KUBECONFIG=<rkedir>/kube_config_cluster.yml
$ kubectl get node
...

Some configuration items can also be changed in cluster.yml after cluster creation, but there are exceptions, e.g. the CNI provider. Check the RKE docs for details!

There are two ways to back up an RKE cluster configuration (basically the etcd database). A manual backup or restore is done with:

$ rke etcd snapshot-save
$ rke etcd snapshot-restore

An automatic backup to the local filesystem of every master node is active by default. In addition, a remote S3 bucket can be used as a backup target.
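
The recurring snapshot settings live under services.etcd in cluster.yml. A sketch with an S3 target (bucket, region and credentials are placeholders):

services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
      s3backupconfig:
        access_key: <access-key>
        secret_key: <secret-key>
        bucket_name: <bucket>
        region: <region>
        endpoint: s3.amazonaws.com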

What do we get?

At the OS level we see static, auto-started Docker containers for the control plane and for the node components kubelet and kube-proxy. These containers are not visible within Kubernetes and cannot be managed with kubectl.

$ docker ps
0f5ad2330674   rancher/hyperkube:v1.26.6-rancher1    "/opt/rke-tools/entr…"   About a minute ago   Up About a minute   kube-proxy
1558c7580d46   rancher/hyperkube:v1.26.6-rancher1    "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes        kubelet
652ca561b6a4   rancher/hyperkube:v1.26.6-rancher1    "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes        kube-scheduler
5b48bbb7317c   rancher/hyperkube:v1.26.6-rancher1    "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes        kube-controller-manager
c2238e4d2748   rancher/hyperkube:v1.26.6-rancher1    "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes        kube-apiserver
8bfd49e8a75c   rancher/rke-tools:v0.1.87             "/docker-entrypoint.…"   2 minutes ago        Up 2 minutes        etcd-rolling-snapshots
45e514cc4dea   rancher/mirrored-coreos-etcd:v3.5.7   "/usr/local/bin/etcd…"   3 minutes ago        Up 2 minutes        etcd
$ kubectl get pod -A
NAMESPACE       NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-gfc6z       0/1     Completed   0          24s
ingress-nginx   ingress-nginx-admission-patch-b74ds        0/1     Completed   0          23s
ingress-nginx   nginx-ingress-controller-z9wmd             0/1     Running     0          25s
kube-system     calico-kube-controllers-74df54cbb7-ndqn8   1/1     Running     0          81s
kube-system     canal-mddnm                                2/2     Running     0          81s
kube-system     coredns-59499769fb-ktjsr                   1/1     Running     0          63s
kube-system     coredns-autoscaler-67cbd4599c-v99rj        1/1     Running     0          62s
kube-system     metrics-server-585b7cc746-xcf9q            1/1     Running     0          42s
kube-system     rke-coredns-addon-deploy-job-2784r         0/1     Completed   0          72s
kube-system     rke-ingress-controller-deploy-job-98sdg    0/1     Completed   0          32s
kube-system     rke-metrics-addon-deploy-job-6hlvq         0/1     Completed   0          52s
kube-system     rke-network-plugin-deploy-job-qk9pm        0/1     Completed   0          88s

No load balancer and no CSI provisioner are available by default. For single-node and test setups, Rancher provides the Local Path Provisioner at https://github.com/rancher/local-path-provisioner
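
According to the project README it can be installed with a single manifest (URL as documented there; verify the current version before use):

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml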

The cluster is using these directories and filesystems on the nodes:

/var/lib/docker → images, rootfs, pod logs. This location depends on the Docker installation.

/var/lib/kubelet → volumes, ephemeral storage (emptyDir)

/opt/rke/etcd-snapshots → etcd backups on all master nodes

Debugging

The ways to debug your cluster setup include:

$ rke -d up

$ docker logs     # On the master nodes for core containers

$ kubectl logs    # for normal k8s pods on worker nodes
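
For example, to follow the logs of a core component on a master node (container names as shown by docker ps above):

$ docker logs -f --tail 100 kube-apiserver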

Cluster Lifecycle

The cluster version can be maintained from the management node in a rolling fashion. Rolling updates and upgrades are possible, but minor Kubernetes versions cannot be skipped (e.g. it is not possible to upgrade from 1.22 directly to 1.26 in one step).

  • Download a new RKE binary and replace the old one on the management node
  • Check the available versions and update the existing cluster.yml (you kept it somewhere after you installed the cluster, didn’t you?)
  • $ rke up (see the combined sketch below)
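
Put together, an upgrade roughly looks like this (a sketch; version numbers are examples):

# On the management node: replace the RKE binary
$ curl -LO https://github.com/rancher/rke/releases/download/v1.5.5/rke_linux-amd64
$ mv rke_linux-amd64 ~/bin/rke && chmod 755 ~/bin/rke

# Check which k8s versions the new binary supports
$ rke config --list-version --all

# Set kubernetes_version in cluster.yml to the next minor version, then:
$ rke up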

A cluster downgrade is not supported. If you need to go back to an older version, the cluster needs to be removed, re-installed and restored from a backup.

Summary

RKE is Kubernetes. And it’s lots of Kubernetes, because a lot of different versions (even versions no longer maintained upstream) can be managed with it. It runs perfectly fine on a lot of Linux distributions, but it still depends on Docker. Windows support has been discontinued and the current version is already lagging behind upstream. RKE is still supported and no end of support has been announced yet, but the pace of development seems to be slowing down. There is currently an early version of a migration tool to RKE2: https://github.com/rancher/migration-agent/releases/latest

Don’t use RKE version 1 for new setups!

Additional information

To be continued in part 2
