
Kubernetes the hard way part 2

Last update: 20 December 2019

In part 1, I explained how to set up highly available k8s masters. In this post, I'm gonna go through the steps to set up the k8s nodes.

k8s nodes

A k8s node has two components:

- kubelet: the node agent that registers the node with the apiserver and runs the pods scheduled on it
- kube-proxy: maintains the network rules on the node so that Kubernetes Services stay reachable

Download k8s node components

Let's download the kubelet and kube-proxy binaries:

$ ansible-playbook k8s-download.yaml

You should get these files in the bin directory

bin
├── kube-apiserver
├── kube-controller-manager
├── kubelet
├── kube-proxy
└── kube-scheduler
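
Optionally, you can confirm the binaries are the expected release before going further (v1.17.0 in this walkthrough). This assumes your control machine is Linux, since these are Linux binaries:

$ ./bin/kubelet --version
$ ./bin/kube-proxy --version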


PKI certificates

We need to generate a certificate for each node, using k8s-ca.crt as the root certificate.

Create the PKI certificates

$ ansible-playbook k8s-pki.yaml -t node

The certificates are stored in the pki/k8s directory
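
If you want to verify what the playbook generated, you can inspect a node certificate with openssl. With the Node authorizer, the subject is expected to carry the system:node:<nodeName> common name and the system:nodes organization. The filename below is an assumption, adjust it to the files actually present in pki/k8s:

$ openssl x509 -in pki/k8s/k8s-node-1.crt -noout -subject -issuer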

Kubeconfig files

As kubelet and kube-proxy need to communicate with the apiserver, we need to generate a kubeconfig file for each of them.

$ ansible-playbook kubeconfig.yaml -t node

The kubeconfig files are stored in the kubeconfig directory
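
Under the hood, a kubeconfig for a node component is assembled with the standard kubectl config commands. Here is a rough sketch of what the playbook does for the kubelet of the first node; the file names and the apiserver address are placeholders for illustration, not the exact values used by the playbook:

$ kubectl config set-cluster kubernetes \
    --certificate-authority=pki/k8s/k8s-ca.crt \
    --embed-certs=true \
    --server=https://<apiserver-address>:6443 \
    --kubeconfig=kubeconfig/kubelet-k8s-node-1.kubeconfig
$ kubectl config set-credentials system:node:k8s-node-1 \
    --client-certificate=pki/k8s/k8s-node-1.crt \
    --client-key=pki/k8s/k8s-node-1.key \
    --embed-certs=true \
    --kubeconfig=kubeconfig/kubelet-k8s-node-1.kubeconfig
$ kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:node:k8s-node-1 \
    --kubeconfig=kubeconfig/kubelet-k8s-node-1.kubeconfig
$ kubectl config use-context default \
    --kubeconfig=kubeconfig/kubelet-k8s-node-1.kubeconfig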

Deploy k8s node

Deploy node components

$ ansible-playbook k8s-node.yaml
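
Before querying the apiserver, you can check that the services came up on each node. This assumes the playbook installs them as systemd units named kubelet and kube-proxy, and that the node hostnames are reachable over SSH:

$ ssh k8s-node-1 "sudo systemctl is-active kubelet kube-proxy"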

Let's check that the nodes are up and running.

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-node-1   NotReady            64s   v1.17.0
k8s-node-2   NotReady            64s   v1.17.0

As you can see, the nodes are up but in NotReady status. That's because there is no pod network available in the cluster yet, so let's set one up.

Set up a network provider

An overlay network can be provided by multiple plugins; you can check the list of network add-ons in the Kubernetes documentation.
For this setup, we're gonna use Canal, which combines Flannel for networking and Calico for network policies.

$ ansible-playbook canal.yaml

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=canal
NAME          READY   STATUS     RESTARTS   AGE
canal-4m27k   2/2     Running    0          2m11s
canal-cslcz   2/2     Running    0          2m11s

Now that we have a cluster network, the nodes' status should turn to Ready. Let's check that:

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-node-1   Ready               64s   v1.17.0
k8s-node-2   Ready               64s   v1.17.0

Good, the nodes are Ready!
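
You can also confirm that the overlay network is fully rolled out on both nodes. This assumes the manifest names the DaemonSet canal, as the pod names above suggest:

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get ds canal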


Deploy service discovery

We're gonna use the CoreDNS plugin as the DNS service provider for our cluster.

$ ansible-playbook coredns.yaml

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7f7bc9686f-fg9kn   1/1     Running   0          53s
coredns-7f7bc9686f-pct6b   1/1     Running   0          53s

The DNS service is now available in the cluster!
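
As an extra sanity check, the ClusterIP of the DNS service should match the cluster DNS address configured on the kubelets. CoreDNS is usually published under the service name kube-dns for compatibility; adjust the name if your manifest uses a different one:

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get svc kube-dns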

So far we have all the components deployed. Let's try to deploy an app to see if everything is working as expected.

Example App

Our app is a simple HTTP server that returns "Hello, world!"
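
The manifest itself is not reproduced in this post, but judging from the output further down, app_example.yaml could be as simple as a Deployment plus a ClusterIP Service. Here is a hedged sketch; the image and the exact labels are assumptions, not the actual content of the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-app
  template:
    metadata:
      labels:
        app: hello-world-app
    spec:
      containers:
        - name: hello-world-app
          # Assumption: any small HTTP sample image listening on 8080 works here
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app
spec:
  selector:
    app: hello-world-app
  ports:
    - port: 80
      targetPort: 8080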

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig apply -f app_example.yaml

Verification

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get pods -l app=hello-world-app
NAME                               READY   STATUS    RESTARTS   AGE
hello-world-app-65d65c94dd-tcpxj   1/1     Running   0          2m

Good, the pod is running.

Let's check the DNS record for the service

$ kubectl  --kubeconfig kubeconfig/admin.kubeconfig get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello-world-app   ClusterIP   10.32.0.162                 80/TCP    19h
kubernetes        ClusterIP   10.32.0.1                   443/TCP   22h
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig run -i -t busybox --image=alpine --restart=Never
/ # nslookup hello-world-app
Name:      hello-world-app
Address 1: 10.32.0.162 hello-world-app.default.svc.cluster.local

So far so good!

Let's now make an HTTP request to the pod to check that it responds.

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig port-forward hello-world-app-65d65c94dd-tcpxj 8080:8080

Open another terminal and run the following command:

$ curl 127.0.0.1:8080                                                       
Hello, world!
Version: 1.0.0
Hostname: hello-world-app-65d65c94dd-tcpxj

Our example app is working as expected 👍


What's next

So far we've deployed a highly available Kubernetes cluster. Now it's time for you to customize it so that it fits your needs.
Here are some ideas of functionality you can add:

Be sure to check the CNCF website to discover some interesting tools.

More posts about Kubernetes are coming soon, stay tuned!
