Kubernetes is a system designed to manage containerized applications, built as Docker containers, in a clustered environment. It provides basic mechanisms for the deployment, maintenance, and scaling of applications on public, private, or hybrid setups; in other words, it handles the entire life cycle of a containerized application. It also includes self-healing features, whereby containers can be auto-provisioned, restarted, or even replicated.
Kubernetes Components
Kubernetes works on a server-client model: a master provides centralized control for all minions (agents). We will deploy one Kubernetes master with two minions, and we will also have a workspace machine from which we will run all installation scripts.
Kubernetes has several components:
etcd – A highly available key-value store for shared configuration and service discovery.
flannel – An etcd backed network fabric for containers.
kube-apiserver – Provides the API for Kubernetes orchestration.
kube-controller-manager – Runs the controllers that regulate the state of the cluster (replication, endpoints, and so on).
kube-scheduler – Schedules containers on hosts.
kubelet – Processes a container manifest so the containers are launched according to how they are described.
kube-proxy – Provides network proxy services.
Read More: What is Kubernetes, its basics and components
Installing dependencies
The first thing you must do is install the necessary dependencies. This must be done on all machines that will join the Kubernetes cluster. The first piece to be installed is apt-transport-https (a package that allows using https as well as http in apt repository sources). It can be installed with the following command:
# apt-get update && apt-get install -y apt-transport-https
Our next dependency is Docker. Our Kubernetes installation will depend upon this, so install it with:
# apt install docker.io
Once that completes, start and enable the Docker service with the commands:
# systemctl start docker
# systemctl enable docker
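As an optional sanity check (not part of the original steps), you can confirm Docker is installed and running before moving on; the exact version output will vary:
# systemctl status docker ## should report the service as active (running)
# docker --version ## prints the installed Docker version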
Installing Kubernetes
Installing the necessary components for Kubernetes is simple. Again, what we’re going to install below must be installed on all machines that will be joining the cluster.
Our first step is to download and add the key for the Kubernetes install. Back at the terminal, issue the following command:
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
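If you want to confirm the key was imported (an optional check, not required by the procedure), apt-key can list the trusted keys; a Google Cloud packages signing key should appear in the output:
# apt-key list ## look for a Google Cloud packages signing key in the listing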
Next, add the Kubernetes repository by creating the file /etc/apt/sources.list.d/kubernetes.list:
# vim /etc/apt/sources.list.d/kubernetes.list
Add the following content to that file:
deb http://apt.kubernetes.io/ kubernetes-xenial main
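If you prefer to skip the editor, the same repository file can be created in one step; this is simply an equivalent shortcut to the vim step above:
# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list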
Save and close that file. Install Kubernetes with the following commands:
# apt-get update
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
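Optionally, you may want to pin these packages so a routine apt-get upgrade does not pull in mismatched Kubernetes versions later; this step is an addition to the original procedure:
# apt-mark hold kubelet kubeadm kubectl ## prevents unattended upgrades of the Kubernetes packages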
Initialize your master
With everything installed, go to the machine that will serve as the Kubernetes master and issue the command:
# kubeadm init --node-name master
When this completes, you’ll be presented with the exact command you need to join the nodes to the master. This command contains the node-joining details (a token and the master’s address), so make note of it.
Before you join a node, you need to issue the following commands on the master (as a regular user):
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
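As an optional sanity check, kubectl should now be able to reach the API server from the master; note that the master will typically show a NotReady status until the pod network is deployed in the next step:
# kubectl get nodes ## the master should be listed; STATUS becomes Ready once the pod network is up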
Deploying a pod network
You must deploy a pod network before anything will actually function properly. I’ll demonstrate this by installing the Flannel pod network. This can be done with two commands (run on the master):
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
Issue the command
# kubectl get pods --all-namespaces ## to see that the pod network has been deployed
Joining a node
With everything in place, you are ready to join the node to the master. To do this, go to the node’s terminal and issue the command:
# kubeadm join --token TOKEN SERVER_MASTER_IP:6443
Where TOKEN is the token you were presented after initializing the master and SERVER_MASTER_IP is the IP address of the master.
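If you no longer have the output from kubeadm init, the full join command (token included) can be regenerated on the master at any time; this is an optional recovery step, not part of the original walkthrough:
# kubeadm token create --print-join-command ## prints a fresh, ready-to-run kubeadm join command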
Once the node has joined, go back to the master and issue the command
# kubectl get nodes ## to see that the node has successfully joined
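The output looks roughly like the following; the hostnames, ages, and version shown here are illustrative placeholders, not values from this cluster:
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1h        v1.9.3
node1     Ready     <none>    5m        v1.9.3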
Deploying a service
At this point, you are ready to deploy a service on your Kubernetes cluster. To deploy an NGINX service (and expose the service on port 80), run the following commands (from the master):
# kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
# kubectl expose deployment nginx-app --port=80 --name=nginx-http
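To verify the deployment and service from the master (an optional check; the resource names match the commands above, but the cluster IP in your output will differ), you can list them and curl the service's ClusterIP from any machine in the cluster:
# kubectl get deployments ## nginx-app should be listed with its pod available
# kubectl get svc nginx-http ## note the CLUSTER-IP assigned to the service
# curl http://CLUSTER_IP ## should return the default NGINX welcome page
Where CLUSTER_IP is the cluster IP reported by the previous command.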
If you go to your node and issue the command
# docker ps ## you should see the service listed
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef5dc06ea900 gcr.io/google_containers/pause-amd64:3.0 "/pause" 1 second ago Up Less than a second k8s_POD_kube-dns-6f4fd4bdf-w96mz_kube-system_f962af8b-109b-11e8-87a6-0aadd08ef4fc_3331
78e06af38cc8 gcr.io/google_containers/pause-amd64:3.0 "/pause" 59 minutes ago Up 59 minutes k8s_POD_kube-flannel-ds-rc9ns_kube-system_1cb1fb4c-109c-11e8-87a6-0aadd08ef4fc_0
7d5cc73e6d77 gcr.io/google_containers/kube-proxy-amd64@sha256:19277373ca983423c3ff82dbb14f079a2f37b84926a4c569375314fa39a4ee96 "/usr/local/bin/ku..." About an hour ago Up About an hour k8s_kube-proxy_kube-proxy-2s4xk_kube-system_f97abe3a-109b-11e8-87a6-0aadd08ef4fc_0
6476dd427279 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-proxy-2s4xk_kube-system_f97abe3a-109b-11e8-87a6-0aadd08ef4fc_0
237d44afd3b9 gcr.io/google_containers/etcd-amd64@sha256:54889c08665d241e321ca5ce976b2df0f766794b698d53faf6b7dacb95316680 "etcd --listen-cli..." About an hour ago Up About an hour k8s_etcd_etcd-kubetest_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
53cdbeb3eaea gcr.io/google_containers/kube-scheduler-amd64@sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675 "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-kubetest_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
f2d8873460d6 gcr.io/google_containers/kube-apiserver-amd64@sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043 "kube-apiserver --..." About an hour ago Up About an hour k8s_kube-apiserver_kube-apiserver-kubetest_kube-system_eeada3a9cee6a5f9ae6930474adcd2f1_0
a953d5cda2e9 gcr.io/google_containers/kube-controller-manager-amd64@sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-kubetest_kube-system_4244b3d987e87af59b2266bff0744c14_0
818279ca4b6b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-kubetest_kube-system_eeada3a9cee6a5f9ae6930474adcd2f1_0
51d070ac8e1c gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-kubetest_kube-system_6502dddc08d519eb6bbacb5131ad90d0_0
9a10f918db81 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_etcd-kubetest_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
42a46caa7324 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-kubetest_kube-system_4244b3d987e87af59b2266bff0744c14_0
If you’re interested in maintaining a highly available multi-master node setup with 3, 5 or even 7 master nodes (and without keeping your DevOps team busy troubleshooting esoteric issues, especially during Kubernetes version updates), you can do so using either the Google Container Engine (GKE) on the Google Cloud Platform, or Kublr running on AWS.
Kublr is an easy-to-use, well-tested platform for creating and maintaining highly available Kubernetes clusters.
In a matter of minutes, you can:
- Spin up a new cluster in any AWS region
- Set up cluster logging to Elasticsearch/Kibana
- Perform metric collection via InfluxDB/Grafana
- Enable autoscaling for worker nodes
- Initialize the latest Tiller/Helm Kubernetes package manager
- Handle background initialization steps automatically