Deploying a Kubernetes cluster using RKE

Using RKE you can deploy a cluster from a simple config file. In this blog post I'll go over how to set it up.

Photo by Muhammed Zafer Yahsi / Unsplash

A while ago I talked about deploying a Kubernetes cluster using MicroK8s. There are of course other ways of deploying such a cluster. The one I will talk about today is RKE.

I find RKE interesting as it is completely configured by a file. You define your hosts and your cluster settings in the file and RKE will deploy your cluster according to that.


The installation of RKE is fairly simple. It's a single binary. You could just execute it from a local folder or add it to your path. The latest release can be grabbed from their GitHub repository.
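As a sketch, grabbing a release and putting it on your PATH could look like this (the version number below is an assumption; check the releases page for the current one):

```shell
# Download a specific RKE release (version is a placeholder - check
# https://github.com/rancher/rke/releases for the latest) and make it executable.
RKE_VERSION="v1.3.18"
curl -fsSL -o rke \
  "https://github.com/rancher/rke/releases/download/${RKE_VERSION}/rke_linux-amd64"
chmod +x rke
sudo mv rke /usr/local/bin/rke   # optional: put it on your PATH
rke --version
```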


Next, we will need some servers to deploy our cluster to. Since the environment where I deployed this uses Ubuntu as its default OS, that's what I opted for.

In this scenario we'll deploy a three node cluster. I deployed three Ubuntu 20.04 images on VMware and installed Ubuntu's default Docker package. You could opt to use the latest Docker package from Docker's own repository instead, but then you'll have to make sure that version is supported by RKE/Kubernetes. Make sure the user you're going to use has the permissions to work with Docker. In this case I had to add my user to the docker group (usermod -aG docker jeroen).

Make sure passwordless SSH access to your user is set up by copying your public key to the authorized_keys file.
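Setting that up might look like this (the hostnames and the jeroen user are placeholders for your own nodes):

```shell
# Create a key pair if you don't already have one, then push the public
# key to each node. node1-3 and jeroen are placeholders for your hosts/user.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
for host in node1 node2 node3; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub "jeroen@${host}"
done
```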


RKE needs a configuration file to deploy the cluster. Luckily the binary will help you in creating one. Just execute the rke config command and it will ask you some questions on how you want to set up your cluster. I recommend keeping most at their defaults. Just make sure that the IPs, the username, and the path to your SSH key have been set up correctly.
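The nodes section that rke config generates ends up looking roughly like this (the IPs, user, and key path below are placeholders, not the values from my environment):

```yaml
nodes:
  - address: 192.168.1.11            # placeholder IP
    user: jeroen
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_ed25519
  - address: 192.168.1.12
    user: jeroen
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_ed25519
  - address: 192.168.1.13
    user: jeroen
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_ed25519
```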

There are just two settings I've adapted in the config file after I ran through the initial config:


I want the cluster DNS to use the internal DNS server, so I've set CoreDNS as the DNS provider and the internal DNS server as the upstream. To do the same, edit your config file and search for the dns: null entry. Replace it with the following configuration (the upstream IP here is a placeholder for your own DNS server):

  dns:
    provider: coredns
    upstreamnameservers:
      - 192.168.1.53

More info on configuring CoreDNS can be found in the RKE docs.


The cluster must be able to handle incoming connections and route them to the correct service. For this we'll need to set up ingress. In RKE it is as simple as setting the ingress provider to Nginx.

In the generated config, the ingress section looks like this:

  ingress:
    provider: ""

Change the provider to nginx:

  ingress:
    provider: nginx

More info on configuring ingress can be found in the RKE docs.


Once your nodes are properly configured and your cluster config is finished, it is time to deploy the cluster. We do this by executing rke up in the same folder as our cluster config.

If you really want to execute it from somewhere else, or you've renamed your cluster config, you can specify the correct path with the --config flag.

During the installation it will show its steps and any warnings or errors that may occur. If everything goes well, this message should appear: Finished building Kubernetes cluster successfully.


After installation, RKE will create a config file kube_config_cluster.yml. This can be used to authenticate and connect to your cluster. Test if everything is working by executing a kubectl command:

$ KUBECONFIG=kube_config_cluster.yml kubectl get node
NAME       STATUS   ROLES                      AGE   VERSION
<node-1>   Ready    controlplane,etcd,worker   41h   v1.21.8
<node-2>   Ready    controlplane,etcd,worker   41h   v1.21.8
<node-3>   Ready    controlplane,etcd,worker   41h   v1.21.8

Et voilà, now your cluster is running and you can start deploying applications to it.


Together with the config file, RKE will also create a state file called cluster.rkestate. It's important to keep a copy of it together with your cluster config. This allows RKE to know what the current state of the cluster is and what actions need to be performed if you change something in the cluster config.

I recommend keeping both the state and config file in a Git repository, but if you want to keep it somewhere else you can.
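A minimal sketch of that (the repo name and commit message are arbitrary; the touch is only here to make the example self-contained, as in practice these files are produced by rke config and rke up):

```shell
# Put the cluster definition and state file under version control.
git init rke-cluster && cd rke-cluster
touch cluster.yml cluster.rkestate    # stand-ins for the real files
git add cluster.yml cluster.rkestate
git -c user.name="me" -c user.email="me@example.com" \
    commit -m "Initial RKE cluster definition"
```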