Deploying a MicroK8s Kubernetes Cluster
The time has come to migrate from docker-compose to Kubernetes. To do this I settled on MicroK8s, although RKE is also a viable option for me. In this post I'll explain how I've set up the cluster.
Cluster
Preparation
As MicroK8s is a Canonical product, I opted for Ubuntu as the operating system. This ensures all the required packages are available by default.
As we want a highly available cluster, we'll need three servers (VMs in my case). Once you've installed those servers, make sure their FQDNs resolve to the correct IP addresses.
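If you don't manage these records in your DNS server, entries in /etc/hosts on each node will do the trick. The hostnames and IPs below are just placeholders for my three nodes:
10.0.0.11 k8s-node1.home.local k8s-node1
10.0.0.12 k8s-node2.home.local k8s-node2
10.0.0.13 k8s-node3.home.local k8s-node3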
Installation
Connect to one of the three servers and run snap info microk8s to get a list of available versions. Decide which version you want to use; you'll need it for the channel parameter when installing.
Now we need to install MicroK8s on all three servers, so run the following command on all of them.
snap install microk8s --classic --channel=1.22/stable
After this is done, we need to add our user (root in my case) to the microk8s group to interact with the cluster. I've also created aliases for kubectl and helm so we don't always have to type the microk8s. prefix.
usermod -aG microk8s root
echo "alias kubectl='microk8s kubectl'" >> .bash_aliases
echo "alias helm='microk8s.helm3'" >> .bash_aliases
Clustering
Once everything is installed, the nodes can be clustered together. SSH to one of the nodes and run microk8s add-node. This will output the command you need to run on a joining node, which looks something like microk8s join <IP>:<PORT>/<TOKEN>. Copy this command and run it on one of the other two nodes. When this is complete, repeat both steps for the last node.
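Once all three nodes have joined, you can verify the cluster from any node. The first command should list all three servers as Ready, and the status output should report that high availability is enabled:
microk8s kubectl get nodes
microk8s status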
Add-Ons
We'll need some basic add-ons to provide core functionality in our cluster. These can be enabled using:
microk8s enable helm3 dns ingress metrics-server prometheus
Helm3 installs Helm, which can be used to quickly deploy applications in our cluster. Dns is needed to resolve hostnames of pods etc. between nodes. Ingress installs the NGINX ingress controller to handle incoming web traffic. Metrics-server and prometheus let us see stats and metrics of the cluster.
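To check which add-ons are now enabled (and to wait until the cluster itself is ready), you can run:
microk8s status --wait-ready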
Tools
Cert-Manager
Cert-manager is a tool that manages TLS certificates in your cluster. It supports Let's Encrypt to automatically generate certificates for your ingress routes, but you can also use it to generate self-signed certificates.
The commands below will create a namespace, install cert-manager and create ClusterIssuers. Those issuers can then be referenced elsewhere in the cluster to request certificates.
# create a new namespace
kubectl create namespace cert-manager
# apply the official yaml file (change the version to the latest release)
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml
# create the ClusterIssuers
kubectl apply -f issuers.yml
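The cert-manager webhook has to be up before the ClusterIssuers can be applied, so you may need to wait a moment between the two apply commands. A quick way to check is to look at the pods in the cert-manager namespace:
kubectl get pods -n cert-manager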
The issuers.yml file contains three ClusterIssuers. One is self-signed, another uses the Let's Encrypt staging API and the last one uses the production Let's Encrypt API. Because we use MicroK8s, we need to specify which ingress class should solve the Let's Encrypt HTTP-01 challenges: the MicroK8s ingress add-on uses the public class instead of the default nginx class. The file looks like this:
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: self-signed
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-stg
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: jeroen@vermeylen.org
    privateKeySecretRef:
      name: letsencrypt-stg
    solvers:
      - http01:
          ingress:
            class: public
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prd
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: jeroen@vermeylen.org
    privateKeySecretRef:
      name: letsencrypt-prd
    solvers:
      - http01:
          ingress:
            class: public
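After applying the file, you can check that cert-manager accepted the issuers and registered the ACME accounts; the READY column should show True for all three:
kubectl get clusterissuers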
To use one of these ClusterIssuers, we reference it in an Ingress object. We also need to set the public ingress class. Both of these are specified as annotations in the metadata section.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-web
  annotations:
    kubernetes.io/ingress.class: public
    cert-manager.io/cluster-issuer: "letsencrypt-stg"
spec:
  tls:
    - hosts:
        - example.test.local
      secretName: app-web-tls
  rules:
    - host: example.test.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc
                port:
                  number: 80
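When this Ingress is applied, cert-manager creates a Certificate resource for the app-web-tls secret and solves the HTTP-01 challenge through the public ingress. You can follow the progress with the commands below (run them in the namespace your application lives in):
kubectl get certificate
kubectl describe certificate app-web-tls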
Nfs-Subdir-External-Provisioner
The nfs-subdir-external-provisioner enables you to use a single NFS share for your cluster. It creates a new directory on that share for every PersistentVolumeClaim, in which the persistent data is stored. To use this, nfs-common needs to be installed on all nodes in the cluster. The provisioner itself is deployed with Helm.
# install nfs tools
sudo apt-get install nfs-common
# create a new namespace
kubectl create namespace storage
# add helm repository
helm repo add \
nfs-subdir-external-provisioner \
https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
# install or upgrade
helm upgrade \
--install \
-n storage \
nfs-subdir-external-provisioner \
-f values.yaml \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
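To verify the deployment, check that the provisioner pods are running in the storage namespace and that the chart created its storage class (named nfs-client by default):
kubectl get pods -n storage
kubectl get storageclass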
The values.yaml file contains the settings for the provisioner and looks something like this. Don't forget to fill in the correct connection settings for your NFS share (the IP below is just a placeholder).
replicaCount: 2
nfs:
  server: 130.68.62.175
  path: /microk8s
Now you can use this storage method by specifying storageClassName in the spec of a PersistentVolumeClaim.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
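To actually use the claim, mount it in a pod. The Deployment below is only a minimal sketch: the name, image and mount path are placeholders, not part of my actual setup.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          # placeholder image and mount path
          image: nginx:1.21
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-pvc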
This sets up a basic cluster with most of the tools you need to get started. If you have any questions, don't hesitate to contact me.