Diving into Kubernetes: Containers and Components


In essence, Kubernetes is an open-source platform for deploying and managing groups of containers, most commonly Docker containers.
In practice, Kubernetes is generally used alongside Docker for better control and operation of containerized applications.
Side Note: Containerizing an application means bundling it together with all the files, libraries, and packages it needs so that it runs reliably and efficiently across multiple platforms.
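For instance, with Docker that bundling step can look like this (the image name my-own-app is purely illustrative):

    # Build an image that bundles the app with its files, libraries, and packages
    docker build -t my-own-app .
    # The same image then runs on any machine that has Docker installed
    docker run -d my-own-app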

Kubernetes brings several benefits: security, portability, and time savings.

It relies on containers' strong isolation properties, which let multiple applications safely share the same OS.

It works with Docker-based container solutions, helping you manage your infrastructure exactly the way you want it managed and scale it to meet customer demand as usage grows.

It can automatically schedule work onto nodes that have free resources, which saves time and allows the infrastructure to run much more efficiently and effectively.

Kubernetes was initially developed at Google as an internal project. It was released as an open-source platform in 2014 so that people outside Google could take advantage of its container management capabilities. Its architecture was designed to make it convenient to run applications in the cloud. Today it is maintained by the Cloud Native Computing Foundation.

Kubernetes is compatible with several platforms. It is a 100% open-source project and thus provides great flexibility.

Kubernetes is known to be very efficient. New servers can be added or removed easily, and it offers scalability by automatically changing the number of running containers to match demand.

It is designed to address the availability of both containers and infrastructure, and it can be used in any environment.

One of the main benefits of containerization is that it speeds up the testing, management, and deployment phases. Kubernetes is designed for deployment and gives access to many of its features.

Service Discovery and Load Balancing


Kubernetes can expose a container using a DNS name or its IP address. If traffic to a container is high, Kubernetes load balances and distributes the network traffic so that the deployment stays stable.
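As a minimal sketch, exposing an existing Deployment as a Service gives its Pods a stable DNS name and load-balanced traffic (the Deployment name and ports here are hypothetical):

    # Put the Deployment's Pods behind one stable, load-balanced address
    kubectl expose deployment my-web-server --port=80 --target-port=8080
    # Other Pods in the cluster can now reach it by DNS name, e.g. http://my-web-server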

The Significance of a Container Orchestration Solution


With Docker, we can run a single instance of an application with a docker run command. For example, to run a Node.js application, we run docker run nodejs.
But that is just one instance of the application on one Docker host. When the number of users increases and one instance can no longer handle the load, you need to deploy additional instances by running the docker run command multiple times.
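That manual scale-out looks something like this (using the same illustrative image):

    # Each extra instance must be launched by hand, all on the same host
    docker run -d nodejs
    docker run -d nodejs
    docker run -d nodejs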
So that is something you have to do yourself. And not just that: you have to watch those applications closely, and if a container fails you must detect it and run the docker command again to deploy another instance. What about the health of the Docker host itself? If the host crashes and becomes inaccessible, the containers hosted on it become inaccessible too. To handle these issues, you would need a dedicated engineer who can sit and monitor the state and performance of the containers and take the necessary actions to fix problems.
But with large applications deployed across thousands of containers, that is an impractical approach. You can build your own scripts to help tackle these issues to some extent.
This is where container orchestration comes in.
Container orchestration consists of tools and scripts that help host containers in a production environment.
Typically, a container orchestration solution consists of multiple Docker hosts that can run containers, so that even if one host fails, the application is still accessible through the others. A container orchestration solution also lets you deploy hundreds of instances of your application with a single command. These solutions can automatically scale the number of instances up when users increase and down when users decrease, and some can even add hosts automatically to support the user load. Beyond clustering and scaling, they provide support for advanced networking between containers across different hosts, load balancing of user requests across hosts, and configuration management and security within the cluster.
There are multiple container orchestration solutions available today.
Docker has Docker Swarm, Kubernetes comes from Google, and Mesos is from Apache. While Docker Swarm is easy to set up and get started with, it lacks some of the autoscaling features required for complex, production-grade applications. Mesos, on the contrary, is somewhat difficult to configure and get started with, but supports many advanced features.
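To get a feel for the Swarm approach, deploying many instances is a single command once a swarm is initialized (the service name and image are illustrative):

    docker swarm init
    docker service create --name my-web-app --replicas 100 nodejs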
Kubernetes is arguably the most popular of them all. Its configuration is a bit more complex to get started with, but it provides many options to customize deployments and has support for many different vendors. Kubernetes is now supported on all major public cloud providers, such as GCP, Azure, and AWS, and it is one of the top-ranked projects on GitHub.

So, what is the connection between Kubernetes and Docker?


Well, Kubernetes uses Docker to host applications in the form of Docker containers. Kubernetes also supports alternatives to Docker, such as rkt from CoreOS, so it doesn't have to be Docker all the time.

Using the Docker CLI, you can run a single instance of an app.
With Kubernetes, using the Kubernetes CLI kubectl (pronounced "kube control"), you can run thousands of instances of the same app with a single command: kubectl run my-own-server --image=my-own-server --replicas=1000.
You can also scale up to 2,000 with another command, and Kubernetes can be configured to scale the number of instances up and down automatically based on user load.
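Note that the --replicas flag has been removed from kubectl run in newer Kubernetes releases; the present-day equivalent uses a Deployment (all names here are illustrative):

    # Deploy 1,000 instances with a single command
    kubectl create deployment my-own-server --image=my-own-server --replicas=1000
    # Scale up to 2,000 with another
    kubectl scale deployment my-own-server --replicas=2000
    # Or let Kubernetes adjust the count automatically based on CPU load
    kubectl autoscale deployment my-own-server --min=1000 --max=2000 --cpu-percent=80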

Kubernetes can upgrade these 2,000 instances of the app in a rolling fashion, one at a time, with a single command: kubectl rolling-update my-web-server.
If something fails to run, it can roll back the upgrade with a single command.
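On current Kubernetes versions, kubectl rolling-update has likewise been replaced by Deployment rollouts; the equivalent commands would look like this (names and image tag are illustrative):

    # Rolling upgrade: update the image, and Pods are replaced a few at a time
    kubectl set image deployment/my-web-server my-web-server=my-web-server:2.0
    kubectl rollout status deployment/my-web-server
    # Roll back to the previous version with a single command
    kubectl rollout undo deployment/my-web-server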
Kubernetes can help test new features of an app by upgrading only a percentage of the instances, enabling A/B testing methods. Its open architecture provides support for many different network and storage vendors; almost any network or storage brand you can think of has a plugin for Kubernetes. Kubernetes also offers authentication and authorization features.

Let's take a look at the Kubernetes architecture. A Kubernetes cluster consists of a set of nodes. A node is a physical or virtual machine on which the Kubernetes software is installed.
A node is a worker machine where containers are launched by Kubernetes. But what if the node on which the application is running fails?
Obviously, the application goes down. So more than one node is needed. A cluster is a set of nodes grouped together; this way, even if one node fails, the application is still accessible from the other nodes.
After the cluster, the master comes in.
The master is responsible for managing the cluster: it stores and monitors information about the members of the cluster, watches over the nodes, and carries out the actual orchestration of containers on the worker nodes.
When Kubernetes is installed on a system, the following components are installed: the API server, the etcd server, the kubelet, the container runtime (an engine such as Docker), the controllers, and the scheduler.
The first is the API server, which acts as the front end for Kubernetes. Users, management devices, and CLIs all talk to the API server to interact with the Kubernetes cluster.

Next is etcd, a distributed, reliable key-value store that holds all the data used to manage the cluster.
When there are multiple nodes and masters in the cluster, etcd stores all that information in a distributed manner. It also implements locks within the cluster to ensure that there are no conflicts between the masters. The scheduler is responsible for distributing containers across nodes: it looks for newly created containers and assigns them to nodes. The controllers are the brains behind orchestration; they notice and respond when nodes, containers, or endpoints go down, and make decisions to bring up new containers in such cases. The container runtime is the underlying software used to run containers.
Finally, the kubelet is the agent that runs on each node in the cluster. It makes sure that the containers are running on the nodes as expected. The image below describes the Kubernetes components.
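On many clusters (for example, those set up with kubeadm), you can see these control-plane components running as Pods:

    # The API server, etcd, scheduler, and controllers appear in the kube-system namespace
    kubectl get pods -n kube-system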

In addition, the Kubernetes CLI utility kubectl is used to deploy and manage applications on a Kubernetes cluster, to get cluster-related information, and to get the state of the nodes. Here are some kubectl commands:

  • kubectl run hello-world - deploys an app on the cluster.
  • kubectl cluster-info - views information about the cluster.
  • kubectl get nodes - lists all the nodes that are part of the cluster.

And to run thousands of instances of an app, a single Kubernetes command helps: kubectl run my-own-app --image=my-own-app --replicas=2000.