Just a few months ago, I had the chance to get my hands dirty with Kubernetes for the very first time. Up until that point, all I knew about it was that it is something that ensures containerized applications run smoothly in a cluster, nothing else. After finishing my “hello world” and trying out some examples, I decided to write a series of blog posts in which I will try to summarize all my learning.
A fair warning – I am still a beginner in Kubernetes, so advanced Kubernetes practitioners might find my posts to be too basic. This is, in fact, what motivated me to write these posts in the first place: I wanted to write a beginner-friendly series of posts on Kubernetes for developers which could serve as a good starting point. I have also decided to break my articles down into small chunks so that they are easily digestible by a beginner.
What is Kubernetes?
Containers have become the de facto medium for packaging applications nowadays. More and more applications are being containerized and deployed to machines. Deploying a single container to a virtual and/or physical machine is usually not a big issue. However, things start to get complicated when we have to deploy hundreds of containers to lots of machines while ensuring scalability and availability at the same time.
This is the problem that Kubernetes tries to solve. It helps us automate the deployment, management, and scaling of containerized applications. It does so by creating a cluster of physical and/or virtual machines that are capable of running containerized applications. When we want to deploy an application with Kubernetes, we tell it the location of the images and specify some additional configuration (e.g., CPU, memory, environment configuration). It then takes care of the rest – fetching images, deploying them to machines in the cluster, and making sure the containers get the necessary resources. It also monitors the state of the cluster and takes the necessary steps to ensure both scalability and availability.
I’ve found it quite helpful to think of Kubernetes as the conductor of a container orchestra, with the containers as the instrument players. By the way, this is how the official documentation defines Kubernetes –
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.
Before diving into the details of Kubernetes, let’s talk about the differences between the declarative and imperative deployment models. Learning the difference between these two approaches helped me quite a bit in understanding the benefits Kubernetes has to offer.
Declarative vs Imperative Deployment
When I deploy an application to a machine, the approach that I have seen (and taken) so far looks somewhat like this –
- I allocate a virtual/physical machine
- I configure the machine with all the necessary runtime and dependencies
- I deploy my application to the machine
- I add monitoring to make sure that when my application is down, I get notified
I can of course use tooling to automate these steps. However, I still have to explicitly specify the actions that must be executed to deploy my application. If, after the deployment, one of the application instances becomes unhealthy or gets stuck, I have to replace it with a healthy instance myself, preferably by executing some simple deployment script. This style of deployment is commonly known as the Imperative style.
Kubernetes works a bit differently. Instead of telling it exactly how it should deploy my application, I point it to my container image. I also specify the resources (CPU, memory) the application needs, the desired number of running instances, and the definition of a “healthy” instance. Once I have specified these in the form of “resource” definitions, I send them to the Kubernetes cluster, and it takes care of the rest – it deploys the application to the machines in the cluster, ensuring the desired number of instances are running and that each instance gets all the resources it needs. Not only that, it continues to monitor the state of the deployment. If, for some unfortunate reason, any instance of the application dies or gets stuck, it automatically takes the necessary actions to make sure that the “desired state” of the application is always maintained. This style of deployment is commonly known as the Declarative style, and it offers all the advantages mentioned above over the Imperative style.
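As a concrete illustration, here is a minimal sketch of what such a “resource” definition can look like – a Deployment manifest. The image name (`my-app:1.0`), the port, the health-check path, and the resource figures are all made-up values for illustration, not something from a real application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # the desired number of running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # where to find the container image
          resources:
            requests:
              cpu: "250m"      # the resources each instance needs
              memory: "128Mi"
          livenessProbe:       # what a “healthy” instance means
            httpGet:
              path: /health
              port: 8080
```

We would send this definition to the cluster with `kubectl apply -f deployment.yaml`, and from that point on Kubernetes keeps reconciling the actual state of the cluster with the desired state described here – restarting instances that fail the liveness probe and keeping three replicas running.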
I hope this article provides a very brief but still somewhat useful overview of the problems Kubernetes tries to solve. In the next article, we will talk about Pods – one of the “resources” that I have just mentioned above, which also happens to be the basic unit of deployment in Kubernetes.
Published on Java Code Geeks with permission by Sayem Ahmed, partner at our JCG program. See the original article here: A Beginner-friendly Introduction to Kubernetes for Developers: What is Kubernetes?
Opinions expressed by Java Code Geeks contributors are their own.