In my previous article, I provided a high-level overview of Containerization and how it compares to Virtualization. In this article, I want to introduce an extremely powerful platform by the name of Kubernetes.
By now, we should have a fair grasp on Containerization and Microservices. We are but one topic away in my Istio blog series from finally talking about Istio itself. But before we get there, we need to understand the basic workings of Kubernetes.
Some quick notes before we continue
- I need to make it clear that this is not a step-by-step instruction manual on how to set up and configure a Kubernetes cluster. As mentioned in my previous blog posts, right now it’s more about the “why” than it is about the “how”
- While multiple container engines exist, I will primarily focus on Docker, as it’s the most well-known and widely used
What is Kubernetes
Kubernetes is an open-source platform for orchestrating container engines like Docker, rkt, etc. It lets you schedule and run anywhere from a single container to many containers, on one machine or across a cluster of machines. Some of the tasks that Kubernetes is responsible for are:
- Starting up and managing the state of containers
- Placing containers within one or many nodes
- Restarting a container if it crashes or gets killed
- Moving containers from one node to another
- Scaling containers when usage becomes high
- Running jobs using parallel processing
- and so on
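To make the list above concrete, here is a minimal sketch of how you describe this desired state to Kubernetes. Everything in it (the name `my-service`, the image, the port) is a hypothetical placeholder; the point is that you declare how many container replicas you want, and Kubernetes does the work of keeping them running:

```yaml
# Hypothetical Deployment manifest; all names and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                 # desired state: keep 3 copies running at all times
  selector:
    matchLabels:
      app: my-service
  template:                   # the pod template each replica is created from
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

If a container crashes or a node disappears, Kubernetes notices the difference between the desired state (3 replicas) and the actual state, and starts replacements automatically.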
If you’ve been following my blog series and understand the relationship between Microservices and Containers, you’ll realize that at some point a real question emerges: how does one manage all of these containerized services?
Example: Imagine 50 containers running on a local machine within your IT Operations:
- What happens when one or more containers crash? How would you know about it and how difficult would it be to sift through all the containers to finally fix the one that went down?
- What happens when one of your services gets over-utilized because of business demand? What effort would it take to manually spawn additional container runtimes to handle the load?
- How do you implement high availability, so that if the machine managing the containers goes down, it’s at least part of a cluster which guarantees the services will still run?
Well, as you may have guessed, this is why Kubernetes exists. Through fairly straightforward configuration, Kubernetes references container images to spawn multiple runtimes and manage them across a cluster of machines, delivering auto-scaling, high availability and continuous delivery.
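As a sketch of the over-utilization scenario above: instead of manually spawning extra runtimes, you can attach a HorizontalPodAutoscaler to a Deployment (here the hypothetical name `my-service`) and let Kubernetes adjust the replica count based on CPU load:

```yaml
# Hypothetical autoscaler; the target Deployment name is a placeholder.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-autoscaler
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2              # never drop below 2 pods (basic availability)
  maxReplicas: 10             # cap growth under heavy load
  targetCPUUtilizationPercentage: 70   # add/remove pods to hover around 70% CPU
```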
When we get to the actual Istio articles, you will see Kubernetes in action. For now, it’s just important you understand why it exists and what purpose it’s fulfilling.
Nodes and Pods. Huh?
A Node (at one point known as a minion) is essentially a worker machine, i.e. a VM or a physical machine. If you were running a Kubernetes cluster across 3 VMs, you would have 3 nodes. If you were running Kubernetes on 1 physical machine, you would have 1 node.
A Pod represents a running process on your cluster. It hosts 1 or more containers that run together as a single instance of an application. A pod exists within a node, and a node can have more than 1 pod. If you had 3 applications running within your cluster (e.g. Timesheets, Purchase Orders and HR), the containers/microservices that make up one of these applications would exist inside 1 pod. You would usually group containers that need to work closely together into 1 pod, but more often than not, containers tend to run individually inside their own pods.
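As a sketch of what that grouping looks like in practice, this hypothetical Pod runs the Timesheets container alongside a log-forwarding sidecar; because they live in the same pod, they share a network namespace and can talk to each other over localhost (all names and images are placeholders):

```yaml
# Hypothetical multi-container Pod; names and images are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: timesheets
spec:
  containers:
  - name: timesheets-app              # the main application container
    image: registry.example.com/timesheets:1.0
  - name: log-forwarder               # helper container that ships the app's logs
    image: registry.example.com/log-forwarder:1.0
```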
Kubernetes On-Prem and in the Cloud
Kubernetes is available on many cloud platforms, including:
- Google Kubernetes Engine (GKE)
- Amazon Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
- IBM Cloud Kubernetes Service (and IBM Cloud Private for on-premises clusters)
What is Minikube
Unlike running Kubernetes on a platform like IBM Cloud Private for production purposes, Minikube is a lot more lightweight and is meant to run a single-node cluster on your local machine. Minikube is intended for development and testing purposes and is not recommended for production. It’s great if you want to spawn a number of containers on your local machine and test out various configurations before deploying your containers to a managed cloud service like AWS, etc.
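For a taste of the workflow, this is roughly what a local Minikube session looks like (assuming Minikube and kubectl are installed; `deployment.yaml` is a placeholder for whatever manifest you want to try out):

```sh
minikube start                     # boot a single-node cluster locally
kubectl get nodes                  # should list exactly one node: minikube
kubectl apply -f deployment.yaml   # deploy a manifest to the local cluster
kubectl get pods                   # watch your containers come up
minikube dashboard                 # open the Kubernetes web UI in a browser
```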
Quick Facts about Kubernetes
- It’s also commonly known as k8s
- Originally created by Google and donated to the Cloud Native Computing Foundation
- It was designed on the same principles that allow Google to run billions of containers a week
- Can be run on-premises, in the cloud or a hybrid of both, allowing for flexible disaster recovery
- Progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time (Upgrades with no downtime)
Platforms similar to Kubernetes
While there are other platforms that provide similar offerings, Kubernetes is indeed the most favored. Examples of these other platforms are:
- Docker Swarm (Ships with Docker but isn’t as extensive as Kubernetes)
- Apache Mesos
- Elastic Container Service (Amazon ECS)
Who Uses Kubernetes
Kubernetes has seen massive adoption across the industry. Google, where it originated, runs it at scale, and companies like Spotify, Box and IBM have publicly shared how they run Kubernetes in production.
Hopefully you now have a basic understanding of Kubernetes and container clustering. At this point, we have covered the prerequisite technologies needed to delve into the world of Istio. Stay tuned! 😎