PythOps

kubernetes the hard way part 1

Last update: 10 April 2022

Introduction

Even though there are tools out there that automate the creation of k8s clusters, they aren't that flexible when it comes to customization. So if you are curious about k8s and want to know how to create your own cluster like a pro, you're in the right place! This is part 1 of a series of posts where I'll explain how to deploy a highly available cluster.

At the end of this post, you'll have a set of highly available k8s masters up and running.

So let's get started!


Prerequisites

Before starting, you need to have these tools installed on your workstation.

$ pip install --user -U ansible kubernetes
$ export PATH=$PATH:$HOME/.local/bin
$ ansible-galaxy collection install kubernetes.core
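
To make sure the tools are correctly installed and available on your PATH, you can run a quick check:

$ ansible --version
$ ansible-galaxy collection list kubernetes.core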

Once you install these tools, the next step is to clone the Ansible playbooks from GitHub

$ git clone https://github.com/pythops/k8s_the_hard_way
$ cd k8s_the_hard_way


Configure VirtualBox

You may need to update the VirtualBox network config file to allow the 10.0.0.0/8 subnet

# File: /etc/vbox/networks.conf
* 10.0.0.0/8
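
If the file doesn't exist yet, you can create it in one go (this assumes you have sudo access on your workstation):

$ sudo mkdir -p /etc/vbox
$ echo "* 10.0.0.0/8" | sudo tee -a /etc/vbox/networks.conf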


Bootstrap masters VMs

Let's create the VMs

$ vagrant up master1 master2 master3 lb

You should now have all the VMs up and running

$ vagrant status
Current machine states:
lb                        running (virtualbox)
master1                   running (virtualbox)
master2                   running (virtualbox)
master3                   running (virtualbox)
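
You can also check that each master got the private IP it will be referenced by later (assuming the Vagrantfile in the repo assigns 10.0.0.101-103 to the masters, which matches the etcd endpoints used below; hostname -I is available on Debian/Ubuntu based boxes):

$ vagrant ssh master1 -c "hostname -I"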


Deploy the Etcd cluster

Etcd is a key-value store that is used by k8s to store all the cluster data. We're gonna deploy a highly available cluster with 3 nodes. With this setup, we can afford to lose one node and still have a working cluster.

PKI certificates

We're gonna create a set of certificates to secure all the communication.

  • k8s-etcd-{1..3}-peer.crt are used to secure the communication between the etcd nodes.
  • k8s-etcd-{1..3}-server.crt are used to secure the communication between the cluster and the outside world.
Create the PKI certificates
$ ansible-playbook k8s-pki.yaml -t etcd

This will create and store all the certificates inside the directory pki/etcd

pki
├── etcd
│   ├── etcd-peer-ca.crt
│   ├── etcd-peer-ca.csr
│   . . .
│   └──  k8s-etcd-3-server.key
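
If you're curious, you can inspect any of the generated certificates with openssl, for example the server certificate of the first etcd node (the file name follows the k8s-etcd-{1..3}-server.crt pattern above):

$ openssl x509 -in pki/etcd/k8s-etcd-1-server.crt -noout -subject -issuer -dates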

Deploy Etcd cluster

$ ansible-playbook etcd.yaml

Verification

$ vagrant ssh master3 -c "etcdctl --cert /etc/etcd/k8s-etcd-3-server.crt --key /etc/etcd/k8s-etcd-3-server.key --cacert /etc/etcd/etcd-server-ca.crt --endpoints=10.0.0.101:2379,10.0.0.102:2379,10.0.0.103:2379 endpoint health"

10.0.0.101:2379 is healthy: successfully committed proposal: took = 52.012358ms
10.0.0.103:2379 is healthy: successfully committed proposal: took = 54.985654ms
10.0.0.102:2379 is healthy: successfully committed proposal: took = 49.513059ms
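
etcdctl can also print a more detailed status table, showing among other things which node is the leader, using the same certificates and endpoints as above:

$ vagrant ssh master3 -c "etcdctl --cert /etc/etcd/k8s-etcd-3-server.crt --key /etc/etcd/k8s-etcd-3-server.key --cacert /etc/etcd/etcd-server-ca.crt --endpoints=10.0.0.101:2379,10.0.0.102:2379,10.0.0.103:2379 endpoint status --write-out=table"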

Now that we have an etcd cluster up and running, let's move to the next step and deploy the k8s masters.


Deploy the k8s masters

A k8s master has 4 components:

  • kube-apiserver: validates and configures data for the API objects (pods, services, etc.)
  • kube-controller-manager: watches the state of the cluster through the apiserver and makes changes to move the current state towards the desired state
  • kube-scheduler: finds the best node for newly created Pods to run on.
  • cloud-controller-manager (optional): embeds cloud-specific control loops.

For this setup, we're gonna deploy only kube-apiserver, kube-controller-manager and kube-scheduler.

PKI Certificates

We're gonna create a set of certificates to secure the communications between the master components, as well as with the outside world.

  • kube-master-{1..3}-apiserver.crt are the apiserver certificates.
  • kube-scheduler.crt is used to secure the communication between the scheduler and the apiserver.
  • kube-controller-manager.crt is used to secure the communications between the controller-manager and the apiserver.
  • service-account.crt is used, together with its key, to sign and verify the service account tokens.
  • kube-apiserver-etcd-client.crt is used to secure the communication between the apiserver and the etcd cluster.
  • apiserver-kubelet-client is used to secure the communication between the apiserver and kubelet.

For more information about the k8s PKI, check the official k8s doc here 👉 https://kubernetes.io/docs/setup/best-practices/certificates/

Create the PKI certificates
$ ansible-playbook k8s-pki.yaml -t master

This will create all the certificates in the pki/k8s directory

pki
├── k8s
│   ├── k8s-ca.crt
│   ├── k8s-ca.key
│   ....
│   └── service-account.key
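
A useful sanity check is to look at the Subject Alternative Names of one of the apiserver certificates (the file name follows the kube-master-{1..3}-apiserver.crt pattern above): they should include the load balancer address (10.0.0.100 here) that clients will use to reach the cluster.

$ openssl x509 -in pki/k8s/kube-master-1-apiserver.crt -noout -text | grep -A 1 "Subject Alternative Name"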

Kubeconfig files

Kubeconfig files are used to access the cluster: any component that needs to talk to the apiserver uses one.

Create kubeconfig files
$ ansible-playbook kubeconfig.yaml -t master

This will create all the kubeconfig files in the kubeconfig directory

kubeconfig
├── kube-controller-manager.kubeconfig
└── kube-scheduler.kubeconfig
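
You can inspect any of them with kubectl to see which cluster, user and apiserver endpoint they reference:

$ kubectl config view --kubeconfig kubeconfig/kube-scheduler.kubeconfig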

Download the k8s binaries

To save time and bandwidth, we're gonna download the binaries to the host machine and then copy them to the VMs

$ ansible-playbook k8s-download.yaml

This will download the k8s master components to a bin directory

bin
├── kube-apiserver
├── kube-controller-manager
└── kube-scheduler
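
Assuming your workstation can execute the downloaded Linux binaries, you can quickly confirm which Kubernetes version was fetched:

$ ./bin/kube-apiserver --version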

Deploy Master components

Deploy master components

$ ansible-playbook k8s-master.yaml

ℹ️ One important apiserver option is --encryption-provider-config, which encrypts the cluster data before it is stored in etcd. The encryption key is defined in roles/k8s-master/defaults/main.yaml
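
To confirm that the control plane came up on each master, you can check the services directly on a node (this assumes the playbook installs them as systemd units named after the components):

$ vagrant ssh master1 -c "systemctl is-active kube-apiserver kube-controller-manager kube-scheduler"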


Cluster admin user

PKI certificates

$ ansible-playbook k8s-pki.yaml -t admin

This will create the admin certificate and private key in the pki/user-accounts directory

pki
└── user-accounts
    ├── admin.crt
    ├── admin.csr
    └── admin.key

kubeconfig

We need to generate the kubeconfig file as well

$ ansible-playbook kubeconfig.yaml -t admin
kubeconfig
├── admin.kubeconfig


Deploy the Load Balancer

We're gonna deploy a TCP load balancer using Nginx

$ ansible-playbook loadbalancer.yaml
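
Before moving on, you can do a quick low-level check that the load balancer is accepting TCP connections on the apiserver port (it is expected to listen on 10.0.0.100:6443, as confirmed by the cluster-info output below):

$ nc -vz 10.0.0.100 6443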


Verification

Let's verify now that our setup is working

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig cluster-info
Kubernetes control plane is running at https://10.0.0.100:6443
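
As a bonus, you can verify that the --encryption-provider-config option mentioned earlier is doing its job: create a secret, then read it straight from etcd, reusing the certificates from the etcd verification above. The value should not appear in plaintext (with the aescbc provider, for example, it is prefixed with k8s:enc:aescbc:v1:).

$ kubectl --kubeconfig kubeconfig/admin.kubeconfig create secret generic test-secret --from-literal=foo=bar
$ vagrant ssh master3 -c "etcdctl --cert /etc/etcd/k8s-etcd-3-server.crt --key /etc/etcd/k8s-etcd-3-server.key --cacert /etc/etcd/etcd-server-ca.crt --endpoints=10.0.0.103:2379 get /registry/secrets/default/test-secret"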

Congratulations 🎉

So far we have deployed highly available k8s masters. Let's move on to part 2 to deploy the k8s nodes.

Read more ...

Setup your Linux workstation with Ansible

Kubernetes Security Considerations

kubernetes the hard way part 2