An insight into the essentials: Run Kubernetes yourself

Why would I do that?

I asked Bing Chat: "Is it worth it to run Kubernetes yourself?" It gave me the following answer:

Whether it’s worth running Kubernetes yourself depends on many factors. Here are some pros and cons:

Pros:

  • You have full control over your Kubernetes installation.

  • You can customize your installation to your specific requirements.

  • You can reduce costs by using your own hardware.

Cons:

  • It requires a high level of technical knowledge and experience.

  • It can be time-consuming to run Kubernetes yourself.

  • It can be expensive if you don’t have your own hardware.

I hope that helps.

What I found very interesting is that this information is not entirely in line with what the cloud providers advertise: they make it seem self-evident that running your workload in the cloud is cheaper. Bing Chat's answer above is a rather objective, if simplified, list. I think it is a good starting point for this topic.

Requirements

I would like to show you a Kubernetes setup that fits nicely in a home lab or a low-demand environment. To keep the scope manageable, I worked with the following assumptions:

  • A computer with an amd64 architecture, which means any Intel- or AMD-based CPU. If you do this on an ARM-based system, let me know in the comments!

  • A domain that points to the IP address of your internet router or any network gateway that connects your computer to the internet. We need this so the domain can be secured with Let's Encrypt.

  • Your network router or gateway is set up to forward ports 80 (HTTP) and 443 (HTTPS) to your computer, but does not expose any other unnecessary ports or the whole computer, for security reasons - remember, it will be reachable from the internet.

  • The computer is running Ubuntu with microk8s (tested with v1.27.1) installed, which is a lightweight Kubernetes distribution from the OS vendor - feel free to use your own Kubernetes variant, but things might work differently for you. Below I'll hint at microk8s-specific things. Reference: Install methods for MicroK8s

  • You are familiar with SSH or have an IDE that connects to the Ubuntu computer. I'm using Visual Studio Code with the extensions Remote - SSH and Remote Explorer from Microsoft.

  • You have a deployable Docker image that exposes a web server port - if not, just stick with the nginx image from my examples.

Initial check

Let's see if everything is ready:

microk8s status

Output:

microk8s is running
high-availability: no
  datastore master nodes: [...]
  datastore standby nodes: [...]
addons:
  enabled:
    cert-manager
    dns
    ingress
    [...]
  disabled:
    [...]

The entry high-availability is no, but that's okay for now. If you add the required redundancy, the cluster becomes highly available, as explained in the MicroK8s documentation. But that's not part of this article. What is necessary, though, are the enabled addons. If any of them is missing, run the command:

microk8s enable dns ingress cert-manager

DNS is required for Kubernetes services such as cert-manager to work properly. Ingress is required so your Kubernetes cluster can be reached from the outside, specifically on the HTTP and HTTPS ports.
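
If you want to double-check that the ingress controller actually came up, you can look at its pods. Note that the ingress namespace in the following command is how the MicroK8s addon deploys it; other Kubernetes variants use different namespaces:

microk8s kubectl get pods --namespace ingress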

Now let's check if our tool of choice, kubectl, is installed:

kubectl

This will output:

kubectl controls the Kubernetes cluster manager.

# Here appears a list of possible commands.

So far so good.
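
If the kubectl command was not found: MicroK8s bundles its own client as microk8s kubectl. One way to make it available under the usual name is a snap alias, as suggested in the MicroK8s docs:

sudo snap alias microk8s.kubectl kubectl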

Step 1: The configuration code

Kind: Deployment - this is your app

The Deployment is a Kubernetes object which creates containers from (but not only) Docker images and puts them in so-called pods. In this example, we use the publicly available nginx Docker image. You might want to use your own application; at the end of this article, there will be a bonus section on setting up connections to private Docker registries and using images from there. For now, we work with this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 2 # Number of app instances running in parallel.
  template:
    metadata:
      labels:
        app: demo # Has to match the selector
    spec:
      containers:
      - name: demo-container
        image: nginx
        ports:
          - containerPort: 80
---

Create a tutorial-kubernetes-app.yaml file and paste this snippet in. Watch out for the indentation, as YAML files rely on the exact number of space characters to build the logical hierarchy. Also copy the three dashes --- at the bottom; they separate this code block from the next.
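
If you want to catch indentation mistakes early, you can let kubectl validate the file client-side without changing anything in the cluster:

kubectl apply --dry-run=client -f tutorial-kubernetes-app.yaml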

Kind: Service - Exposing the deployment within Kubernetes

The Service is a Kubernetes object that takes the set of pods from your Deployment object and exposes it within Kubernetes. For our example, it might feel like a bit of overkill, but let's stick to this abstraction layer as it has a fundamental role to play: pods are considered mortal, as their number of replicas is scaled up and down, and individual pods might become unresponsive or fail. The Service finds the current pods in the cluster with a predefined selector:

apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
  - port: 80
---

Note the selector, which points to app: demo from the deployment. If your container app is listening on a different port, you can configure the last section this way:

  ports:
  - port: 80 # the port the Service exposes inside the cluster - our Ingress points here
    targetPort: 8080 # whichever port your container actually listens on

This goes into your tutorial-kubernetes-app.yaml as well. Again: also copy the ---.

Kind: ClusterIssuer - We are using Let's Encrypt

The ClusterIssuer is a Kubernetes Object that we will set up in a separate file. Create a file called issuer.yaml and paste the following snippet in:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: YOUR-EMAIL@EXAMPLE.COM
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: public # microk8s specific. You might use class: nginx

---

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: YOUR-EMAIL@EXAMPLE.COM
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: public # microk8s specific. You might use class: nginx

So there are two ClusterIssuers: one gets the name letsencrypt-staging, the other one letsencrypt-production. This is for your sanity, because we don't want to run an incomplete configuration against their production server - possibly causing you to run into rate limits. The idea is to work through this example and get it running with the Let's Encrypt staging certificate (which the browser flags as insecure, but that is OK), then switch over to production and get a trusted certificate.

In the snippet above, look out for the email: section and make sure to enter your email address. The ClusterIssuer will send that info over to Let's Encrypt. Also note the ingress setting class: public. This becomes clearer in the next section when we set up the Ingress object in our microk8s. The rule of thumb is: pay attention to what kind of ingress class your Kubernetes is using and match that class accordingly.

Regarding tutorial-kubernetes-app.yaml and issuer.yaml: if you are wondering how to name these files, you can name them as you fancy. Just make sure the names are descriptive enough so that the next time you look at them, you know what they are about :)

The same applies to the entries in the metadata: name: sections. Kubernetes just needs the referring strings to match. Name them as you like, or better: find a suitable naming convention.

Kind: Ingress

The Ingress object is the connection between your Service and the outside world. This is where the computer exposes your app. Copy the following snippet at the end of tutorial-kubernetes-app.yaml.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: public
  rules:
    - host: demo.EXAMPLE.COM
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port: 
                  number: 80
  tls:
  - secretName: letsencrypt-staging
    hosts:
      - demo.EXAMPLE.COM
---

Exchange the demo.EXAMPLE.COM entries with your own domain. As stated in the Requirements section, it has to be the exact name of what you have set up at your domain provider. Note that the domain appears twice in this snippet: in the rules: section and in the tls: section. As soon as we apply this configuration, cert-manager will look into this definition and automatically request a certificate for the domain.

Also note the spec entry ingressClassName: public, which is specific to microk8s.
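
Before applying anything, it is worth a quick sanity check that the domain really resolves to your public IP address (replace demo.EXAMPLE.COM with your own domain):

nslookup demo.EXAMPLE.COM

The answer should contain the public IP of your internet router or gateway - if it doesn't, the Let's Encrypt HTTP-01 challenge will not be able to reach your machine later on.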

Step 2: Applying the code config

Cert-Manager

Have a look at the cert-manager documentation. It does not work right out of the box but needs some configuration to be applied to your Kubernetes cluster. Execute the following in the console:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml

This should output quite a lot of feedback: several cert-manager resources are created in your Kubernetes cluster in the namespace cert-manager. Now check if all of them were created by running:

kubectl get all --namespace cert-manager

This will output something like this:

NAME                            READY STATUS  RESTARTS AGE
pod/cert-manager-x              1/1   Running 1        19h
pod/cert-manager-cainjector-x   1/1   Running 1        19h
pod/cert-manager-webhook-x      1/1   Running 1        19h

NAME                           TYPE       CLUSTER-IP EXTERNAL-IP PORT(S)  AGE
service/cert-manager           ClusterIP  removed    <none>      9402/TCP 19h
service/cert-manager-webhook   ClusterIP  removed    <none>      443/TCP  19h

NAME                                      READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cert-manager-cainjector   1/1   1          1         19h
deployment.apps/cert-manager              1/1   1          1         19h
deployment.apps/cert-manager-webhook      1/1   1          1         19h

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-cainjector-x  1         1         1       19h
replicaset.apps/cert-manager-x             1         1         1       19h
replicaset.apps/cert-manager-webhook-x     1         1         1       19h

This looks good.

Create the ClusterIssuer

Then apply the issuer.yaml. This step relies on cert-manager being configured.

kubectl create -f issuer.yaml

Note: kubectl create commands can be executed only once; after that, the resources already exist and a second run will throw an error. If you want to roll back your changes, run kubectl delete -f issuer.yaml, and then you can recreate them. Pay attention to the metadata names of the elements you are creating and deleting.
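
As an aside: if you prefer a command that can be re-run safely, kubectl apply also works for this file and updates the resources in place instead of erroring out:

kubectl apply -f issuer.yaml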

Now check if the ClusterIssuer is there:

kubectl get clusterissuer -o wide

The ideal output looks something like this (shortened):

NAME                     READY   STATUS                                              
letsencrypt-staging      True    The ACME account was registered ...
letsencrypt-production   True    The ACME account was registered ...

If there is an error in the status column, you have to check what exactly fails. Networking issues are highly likely, e.g. the API of letsencrypt.org can't be contacted. In that case, check if the microk8s DNS addon is set up.
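
For more detail than the one-line status, ask for the full resource description - the Events section at the bottom usually names the concrete problem:

kubectl describe clusterissuer letsencrypt-staging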

Apply the app configuration

Now comes the part we have long been working towards. Applying the configuration:

kubectl apply -f tutorial-kubernetes-app.yaml

The output:

deployment.apps/demo-deployment created
service/demo-service created
ingress.networking.k8s.io/demo-ingress created

Perfect! If not, no worries: there is a big chance that kubectl will have some remarks. Let me take you on an exciting journey through the two issues I encountered most often:

Possible issue 1: Invalid config

The request is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"demo-service\",\"namespace\":\"default\"},\"spec\":{\"port\":80,\"ports\":null,\"selector\":{\"app\":\"demo\"}}}\n]] spec:map[port:80 ports:<nil>]]": strict decoding error: unknown field "spec.port"

To understand what's happening: kubectl takes your file, parses the YAML syntax, and applies it to Kubernetes. If the parsing is OK but the config does not match the Kubernetes API, the output can be quite cryptic. Above, it finds an "unknown field 'spec.port'", which is a nonchalant way of saying that port: 80 is wrongly indented and not in the expected map or list format. It most probably does not belong to spec: but to its child element ports:
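
Spelled out, the correction the error message is hinting at looks like this (the wrong variant is what produced the message above):

# Wrong: port sits directly under spec
spec:
  port: 80

# Right: ports is a list under spec, and port belongs to a list entry
spec:
  ports:
  - port: 80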

Possible issue 2: Pods not loading

Sometimes the pods don't get started. You can check that with:

kubectl get pod --namespace default

If one of the pods is in STATUS: Pending, that means it failed to start. The reasons can be as simple as the container image could not be pulled or a dependency is missing (see below, Private Registry). You can find some more evidence of where the pod stopped by looking up its exact name in the list and using it in the following command:

kubectl describe pod PODNAME --namespace default
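
Once a pod has gotten past Pending but the app itself misbehaves, the container logs are the next place to look (PODNAME is again the exact name from the list):

kubectl logs PODNAME --namespace default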

Step 3: It runs (kind of)

(Image: the two possible nginx responses, a 404 error page and the nginx welcome page.)

If you see a 404, something in the config is not correct and should be fixed. With the Docker image used in this example, the default "Welcome to nginx!" page should appear.

Open your browser and enter the domain name you configured. Make sure you are accessing it with HTTPS. As soon as you hit the website, you should be prompted with an insecure certificate warning. Since it is your website, you can proceed to the page and check the certificate.
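
If you prefer the command line, curl can do the same check: -v prints the certificate details during the TLS handshake, and -k lets it proceed despite the not-yet-trusted staging certificate:

curl -vk https://demo.EXAMPLE.COM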

There are two options:

  • The certificate is from the ingress controller itself and has the name "Kubernetes Ingress Controller Fake Certificate". In that case, your Let's Encrypt config is not in place and the Ingress object serves this self-signed certificate.

  • The certificate is from Let's Encrypt and looks something like "(STAGING) Artificial Apricot R3" from the organization "(STAGING) Let's Encrypt". That's what we want - you can also confirm it from the cluster side, as shown below.
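
cert-manager tracks the request as a Certificate resource, which for our Ingress is named after the secretName, i.e. letsencrypt-staging:

kubectl get certificate --namespace default
kubectl describe certificate letsencrypt-staging --namespace default

READY: True on the Certificate means the exchange with Let's Encrypt went through and the referenced secret holds a valid certificate.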

If it's the second option, you can proceed by exchanging the letsencrypt-staging entries in your Ingress definition with letsencrypt-production. It should then look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: public
  rules:
    - host: demo.EXAMPLE.COM
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port: 
                  number: 80
  tls:
  - secretName: letsencrypt-production
    hosts:
      - demo.EXAMPLE.COM
---

Then apply your YAML config again:

kubectl apply -f tutorial-kubernetes-app.yaml

Now, that should do the trick! The output should contain a line that states: demo-ingress configured. Double-check that by running:

kubectl describe ingress

That should output a section which looks similar to this (shortened output):

Annotations: cert-manager.io/cluster-issuer: letsencrypt-production
[...]
CreateCertificate  cert-manager-ingress-shim  Successfully created Certificate "letsencrypt-production"
DeleteCertificate  cert-manager-ingress-shim  Successfully deleted unrequired Certificate "letsencrypt-staging"

The next time you clear the browser cache or open the website in a private browsing window, hitting demo.example.com should give you a website with a valid certificate from the organization "Let's Encrypt".

Step 4: Using a Docker image from a private registry

For this step, you need Docker installed on the Kubernetes host machine, or alternatively Docker installed on another machine and a way to export the .docker/config.json file.

Most probably you will not be satisfied with having an nginx dummy page deployed on your TLS-secured domain at this point. So let's say you already have a dockerized app image ready at your private registry. This is where we have to make a guess: the registry is private-registry.example.com and your image is to be found at private-registry.example.com/demo-containers/demo.

The following section derives from the official documentation: Pull an Image from a Private Registry | Kubernetes

I'll keep it a bit shorter and related to the example. For the full picture, please refer to the docs.

Since the registry is private, Kubernetes will have to authenticate to retrieve the image. That's where Secret objects become relevant.

First, you need to do a Docker login. This depends on how your registry is set up, but likely looks like this:

docker login private-registry.example.com -u USERNAME --password-stdin <<< "$REGISTRY_SECRET_KEY"

When Docker has logged in successfully, it will put an auth token in your Docker config.json. Check it with the following command:

cat ~/.docker/config.json

If you find the name of the registry and an auth token within that file, you will be able to add it as a secret to Kubernetes. So replace the PATH_TO_HOME part and execute the following command:

kubectl create secret generic regcred --from-file=.dockerconfigjson=PATH_TO_HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson

The output:

secret/regcred created
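
To verify what ended up in the cluster, you can inspect the secret - the .dockerconfigjson field is the base64-encoded content of your Docker config:

kubectl get secret regcred --output=yaml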

regcred is the name we gave the secret, and that's what you have to add to the Deployment object. Update your YAML code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1 # tells the Deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo-container
        # image: nginx
        image: private-registry.example.com/demo-containers/demo:latest
        ports:
          - containerPort: 80
      imagePullSecrets:
      - name: regcred

---

Then apply your YAML config again:

kubectl apply -f tutorial-kubernetes-app.yaml

And to make sure your image is being correctly pulled from your private registry, check the pods:

kubectl describe pod --namespace default

Chances are your pod(s) will be stuck in Status: Pending. In that case, have a look at the Events section at the end of the output - there might be an error message waiting for you.

But if your pods are in Status: Running, then check your browser - time to get excited :)

Final remarks

MicroK8s is production-grade Kubernetes, but it is not perfect out of the box. If you want to run an app with it, read more about how to configure it. For that, you might be interested in the MicroK8s how-to guides.

How did you get along? Could you follow along, or did you get stuck? If you made it to the end, you have achieved quite something! I'd be happy to hear from you!