
Kubernetes the hard way part 2

Last update: 10 April 2022

In part 1, I explained how to set up highly available k8s masters. In this post, I'm gonna go through the steps to set up the k8s nodes.


k8s nodes

A k8s node has two components:

  • kubelet: makes sure that the Pods are running.
  • kube-proxy: creates and maintains network rules.
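
Both components read their settings from a config file on each node. As a reference, here is a minimal sketch of what those files can look like (the paths, DNS IP, and pod CIDR below are illustrative assumptions, not the exact files produced by the playbooks):

# kubelet config (sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/kubernetes/k8s-ca.crt
tlsCertFile: /var/lib/kubelet/node.crt
tlsPrivateKeyFile: /var/lib/kubelet/node.key
clusterDomain: cluster.local
clusterDNS:
  - 10.32.0.10                 # assumed ClusterIP of the DNS service

# kube-proxy config (sketch)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables                 # program Service rules with iptables
clusterCIDR: 10.200.0.0/16     # assumed pod network range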


Bootstrap the node VMs

Let's create the VMs

$ vagrant up node1 node2

You should have the node VMs up and running

$ vagrant status
Current machine states:
node1                   running (virtualbox)
node2                   running (virtualbox)


Download k8s node components

Let's download kubelet and kube-proxy binaries

$ ansible-playbook k8s-download.yaml

You should get these files in the bin directory

bin
├── kube-apiserver
├── kube-controller-manager
├── kubelet
├── kube-proxy
└── kube-scheduler
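
You can sanity-check the binaries before deploying them; each one reports its version:

$ bin/kubelet --version
Kubernetes v1.23.5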


PKI certificates

We need to generate certificates for each node using k8s-ca.crt as a root certificate.

Create the PKI certificates

$ ansible-playbook k8s-pki.yaml -t node

The certificates are stored in the pki/k8s directory
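
The playbook handles this for us, but by hand it boils down to a few openssl commands. A sketch for one node (file names are assumptions; the subject matters because the apiserver authorizes kubelets by their system:node:<name> identity):

$ openssl genrsa -out node1.key 2048
$ openssl req -new -key node1.key \
    -subj "/CN=system:node:k8s-node-1/O=system:nodes" \
    -out node1.csr
$ openssl x509 -req -in node1.csr \
    -CA k8s-ca.crt -CAkey k8s-ca.key -CAcreateserial \
    -days 365 -out node1.crt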


Kubeconfig files

As kubelet and kube-proxy need to communicate with the apiserver, we need to generate kubeconfig files for them.

$ ansible-playbook kubeconfig.yaml -t node

The kubeconfig files are stored in the kubeconfig directory
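
Under the hood, each kubeconfig is assembled from the cluster CA, the component's certificate, and the apiserver address. A sketch of the equivalent kubectl commands (the apiserver address and file names are assumptions):

$ kubectl config set-cluster kubernetes \
    --certificate-authority=pki/k8s/k8s-ca.crt \
    --embed-certs=true \
    --server=https://<apiserver-address>:6443 \
    --kubeconfig=kubelet-node1.kubeconfig
$ kubectl config set-credentials system:node:k8s-node-1 \
    --client-certificate=pki/k8s/node1.crt \
    --client-key=pki/k8s/node1.key \
    --embed-certs=true \
    --kubeconfig=kubelet-node1.kubeconfig
$ kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:node:k8s-node-1 \
    --kubeconfig=kubelet-node1.kubeconfig
$ kubectl config use-context default --kubeconfig=kubelet-node1.kubeconfig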


Deploy k8s node

Deploy node components

$ ansible-playbook k8s-node.yaml

Let's check that the nodes are up and running.

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-node-1   NotReady   <none>   12s   v1.23.5
k8s-node-2   NotReady   <none>   12s   v1.23.5

As you can see, the nodes are up, but they are in NotReady status. That's because there is no pod network available in the cluster yet, so let's set one up.
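
You can confirm the cause from the Ready condition on a node; its message will typically say that the CNI network plugin is not ready:

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get node k8s-node-1 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'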


Set up a network provider

An overlay network can be provided by multiple plugins; you can check the list here. For this setup, we're gonna use Canal, which uses Flannel for networking and Calico for network policies.

$ ansible-playbook canal.yaml

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=canal
NAME          READY   STATUS     RESTARTS   AGE
canal-4m27k   2/2     Running    0          2m11s
canal-cslcz   2/2     Running    0          2m11s

Now that we have a cluster network, the nodes' status should turn to Ready. Let's check that.

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node-1   Ready    <none>   91s   v1.23.5
k8s-node-2   Ready    <none>   91s   v1.23.5

Good, the nodes are Ready!
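
A bonus of Canal: since Calico is in place, the cluster can now enforce NetworkPolicy objects. As an illustration (a hypothetical policy, not part of this setup), denying all ingress traffic to pods in the default namespace looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}     # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress         # no ingress rules listed, so all ingress is denied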


Deploy Service Discovery

We're gonna use the CoreDNS plugin as the DNS service provider for our cluster.

$ ansible-playbook coredns.yaml

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7f7bc9686f-fg9kn   1/1     Running   0          30s

The DNS service is now available in the cluster!
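
If you're curious, the DNS Service itself lives in the kube-system namespace; its ClusterIP is the nameserver address that kubelet injects into every pod's /etc/resolv.conf:

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get svc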

So far we have all the components deployed. Let's try to deploy an app to see if everything is working as expected.


Example App

Our app is a simple HTTP server that returns "Hello, world!"

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig apply -f app_example.yaml
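
For reference, app_example.yaml boils down to a Deployment plus a ClusterIP Service along these lines (a sketch: the image and replica count are assumptions, but the names, labels, and ports match the outputs below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-app
  template:
    metadata:
      labels:
        app: hello-world-app
    spec:
      containers:
        - name: hello-world-app
          image: gcr.io/google-samples/hello-app:1.0   # assumed image; it answers with "Hello, world!"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app
spec:
  selector:
    app: hello-world-app
  ports:
    - port: 80          # Service port seen in `get svc`
      targetPort: 8080  # container port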

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get pods -l app=hello-world-app
NAME                               READY   STATUS    RESTARTS   AGE
hello-world-app-65d65c94dd-tcpxj   1/1     Running   0          25s

So far the pod is running.

Let's check the DNS record for the service

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello-world-app   ClusterIP   10.32.0.219   <none>        80/TCP    42s
kubernetes        ClusterIP   10.32.0.1     <none>        443/TCP   15m
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig run -i -t busybox --image=alpine --restart=Never
/ # nslookup hello-world-app
Name:      hello-world-app
Address 1: 10.32.0.219 hello-world-app.default.svc.cluster.local

So far so good!

Let's now make an HTTP request to the app to see if it returns something.

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig port-forward svc/hello-world-app 8000:80

Open another terminal and run the following command (http is the HTTPie CLI; curl works too):

$ http 127.0.0.1:8000
HTTP/1.1 200 OK
Content-Length: 72
Content-Type: text/plain; charset=utf-8
Date: Sun, 10 Apr 2022 10:51:58 GMT

Hello, world!
Version: 1.0.0
Hostname: hello-world-app-78468d8f6d-7qnph

Our example app is working as expected 🎉


What's next

So far we've deployed a highly available Kubernetes cluster. Now it's time for you to customize it so it fits your needs.

These are some ideas of functionalities that you can add:

  • Add an Ingress controller so you can access your apps from the outside (I recommend using Traefik).
  • Provision storage using Rook.
  • Set up monitoring for your cluster using Prometheus.

Be sure to check the CNCF website to discover some interesting tools.

More posts about Kubernetes are coming soon, stay tuned!
