Last update: 20 December 2019
In part I, I explained how to set up highly available k8s masters. In this post, I'll go through the steps to set up the k8s nodes.
A k8s node has two components:
- kubelet: it makes sure that the Pods are running.
- kube-proxy: it creates and maintains network rules.
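To give an idea of what the kubelet ends up running with, here is a minimal KubeletConfiguration sketch. The file paths and the cluster DNS IP are assumptions for illustration, not the playbook's actual values (10.32.0.10 is a plausible choice given the 10.32.0.0/24 service range used later in this post):

```yaml
# Hypothetical minimal kubelet configuration -- paths and IPs are assumptions
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /var/lib/kubelet/k8s-ca.crt   # the cluster root CA
authorization:
  mode: Webhook            # let the apiserver authorize kubelet API requests
clusterDomain: cluster.local
clusterDNS:
- 10.32.0.10               # assumed CoreDNS service IP in the service range
```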
Let's download the kubelet and kube-proxy binaries:
$ ansible-playbook k8s-download.yaml
You should get these files in the bin directory:

bin
├── kube-apiserver
├── kube-controller-manager
├── kubelet
├── kube-proxy
└── kube-scheduler
We need to generate certificates for each node using k8s-ca.crt as a root certificate.
Create the PKI certificates
$ ansible-playbook k8s-pki.yaml -t node
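Under the hood, generating a node certificate signed by the root CA can be sketched with openssl. This is a hypothetical sketch, not the playbook's actual implementation: the file names are assumptions, and the CN/O values follow the usual convention for the kubelet (identity system:node:&lt;name&gt; in the system:nodes group):

```shell
# The root CA (in the real setup, k8s-ca.crt/key already exist from part I)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout k8s-ca.key -out k8s-ca.crt -subj "/CN=kubernetes-ca"

# A key and CSR for the node; the kubelet identifies as
# system:node:<name> in the system:nodes group
openssl req -newkey rsa:2048 -nodes \
  -keyout k8s-node-1.key -out k8s-node-1.csr \
  -subj "/CN=system:node:k8s-node-1/O=system:nodes"

# Sign the CSR with the root CA
openssl x509 -req -in k8s-node-1.csr -CA k8s-ca.crt -CAkey k8s-ca.key \
  -CAcreateserial -days 365 -out k8s-node-1.crt

# Check that the node certificate chains back to the CA
openssl verify -CAfile k8s-ca.crt k8s-node-1.crt
```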
The certificates are now generated and stored locally, ready to be pushed to the nodes.
Since kubelet and kube-proxy need to communicate with the apiserver, we need to generate kubeconfig files for them:
$ ansible-playbook kubeconfig.yaml -t node
The kubeconfig files are stored in the kubeconfig directory.
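Each generated kubeconfig ties a node identity (its client certificate) to the cluster endpoint. A sketch of what a kubelet kubeconfig might look like, where the server URL and file paths are assumptions:

```yaml
# Hypothetical kubelet kubeconfig -- server address and paths are assumptions
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /var/lib/kubelet/k8s-ca.crt
    server: https://api.k8s.local:6443    # assumed apiserver endpoint
users:
- name: system:node:k8s-node-1
  user:
    client-certificate: /var/lib/kubelet/k8s-node-1.crt
    client-key: /var/lib/kubelet/k8s-node-1.key
contexts:
- name: default
  context:
    cluster: kubernetes
    user: system:node:k8s-node-1
current-context: default
```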
Deploy node components
$ ansible-playbook k8s-node.yaml
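For reference, a playbook like this typically installs systemd units along these lines; the unit below is a sketch, and the binary path and flags are assumptions:

```ini
# /etc/systemd/system/kubelet.service -- hypothetical sketch
[Unit]
Description=Kubernetes kubelet
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --kubeconfig=/var/lib/kubelet/kubeconfig
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```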
Let's check that the nodes are up and running:
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-node-1   NotReady   <none>   64s   v1.17.0
k8s-node-2   NotReady   <none>   64s   v1.17.0
As you can see, the nodes are up but in NotReady status, because there is no network available in the cluster yet, so let's set one up.
$ ansible-playbook canal.yaml
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=canal
NAME          READY   STATUS    RESTARTS   AGE
canal-4m27k   2/2     Running   0          2m11s
canal-cslcz   2/2     Running   0          2m11s
Now that we have a cluster network, the nodes' status should turn to Ready. Let's check:
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node-1   Ready    <none>   64s   v1.17.0
k8s-node-2   Ready    <none>   64s   v1.17.0
Good, the nodes are Ready!
We'll use the CoreDNS plugin as the DNS service provider for our cluster.
$ ansible-playbook coredns.yaml
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7f7bc9686f-fg9kn   1/1     Running   0          53s
coredns-7f7bc9686f-pct6b   1/1     Running   0          53s
The DNS service is now available in the cluster!
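CoreDNS is configured through a Corefile, usually shipped as a ConfigMap. The exact contents depend on coredns.yaml, but a typical Corefile for cluster DNS looks like this sketch:

```
# Typical CoreDNS Corefile -- a sketch, not the playbook's exact config
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf   # everything non-cluster goes upstream
    cache 30
    loop
    reload
    loadbalance
}
```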
So far we have all the components deployed. Let's deploy an app to check that everything is working as expected.
Our app is a simple HTTP server that returns "Hello, world!".
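The manifest app_example.yaml is not shown here; a sketch of what it might contain follows. The image name is an assumption that happens to match the output we get later; the Service exposes port 80 and forwards to the container's 8080:

```yaml
# Hypothetical app_example.yaml -- image name is an assumption
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-app
  template:
    metadata:
      labels:
        app: hello-world-app
    spec:
      containers:
      - name: hello-world-app
        image: gcr.io/google-samples/hello-app:1.0   # assumed image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app
spec:
  selector:
    app: hello-world-app
  ports:
  - port: 80          # ClusterIP port, matches the svc listing below
    targetPort: 8080  # container port, matches the port-forward below
```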
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig apply -f app_example.yaml
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get pods -l app=hello-world-app
NAME                               READY   STATUS    RESTARTS   AGE
hello-world-app-65d65c94dd-tcpxj   1/1     Running   0          2m
So far the pod is running.
Let's check the DNS record for the service:
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello-world-app   ClusterIP   10.32.0.162   <none>        80/TCP    19h
kubernetes        ClusterIP   10.32.0.1     <none>        443/TCP   22h
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig run -i -t busybox --image=alpine --restart=Never
/ # nslookup hello-world-app
Name:      hello-world-app
Address 1: 10.32.0.162 hello-world-app.default.svc.cluster.local
So far so good!
Let's now make an HTTP request to the pod to see what it returns:
$ kubectl --kubeconfig kubeconfig/admin.kubeconfig port-forward hello-world-app-65d65c94dd-tcpxj 8080:8080
Open another terminal and run the following command:
$ curl 127.0.0.1:8080
Hello, world!
Version: 1.0.0
Hostname: hello-world-app-65d65c94dd-tcpxj
Our example app is working as expected 👍
So far we've deployed a highly available Kubernetes cluster. Now it's time for you to customize it so it fits your needs. Here are some ideas of functionality you can add:
- Add an Ingress controller so you can access your apps from the outside (I recommend using Traefik).
- Provision storage using Rook.
- Set up monitoring for your cluster using Prometheus.
Be sure to check the CNCF website to discover some interesting tools.
More posts about Kubernetes are coming soon, stay tuned!