Container deployments are an increasingly popular way to package applications and deploy them to various environments. Containers offer several benefits, such as improved scalability, portability, and security. Let's look at Kubernetes — a system that manages containerized applications.
What is Kubernetes?
Kubernetes is an open-source platform that enables developers to deploy, manage, and scale containerized applications. It does all of this through features such as automated deployment, auto-scaling, self-healing, and resource optimization, which make it not only very capable but also an easy platform for deploying applications in almost any environment.
The Kubernetes logo
Kubernetes was initially developed by Google and released in 2014. Google later donated the project to the Cloud Native Computing Foundation, where it is now maintained and supported by an international community of contributors.
In this topic, you will get to know the core concepts and architecture of Kubernetes. You will learn how containers are grouped into pods and exposed through services, how Kubernetes distributes work across nodes, and how you can interact with a cluster.
The Kubernetes cluster
A Kubernetes deployment, or cluster, consists of three main components: the control plane, the worker nodes, and the user interface.
The applications themselves run inside the nodes, of which you always need at least one. These nodes can be physical servers, virtual machines, or cloud instances. Inside the nodes, the containerized application runs in sets called pods, each of which takes on part of the workload. It is the control plane that creates and manages these pods across the nodes, based on the requirements set by the user.
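If you already have access to a running cluster, you can see this structure for yourself with kubectl, the command-line tool introduced later in this topic. Here is a quick sketch, assuming kubectl is installed and configured for your cluster:

# List the machines (nodes) that make up the cluster.
kubectl get nodes

# List the pods in the current namespace, including the node each one landed on.
kubectl get pods -o wide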
The control plane
The control plane is made up of four components, each with its own responsibility. Users interact with the control plane through the Kubernetes API. It is the controller manager that listens to the API and builds a desired state based on the user's input, which is recorded in etcd. To maintain this state, the controllers can deploy pods, create replicas, and configure services by changing the data stored inside etcd.
etcd itself is a key-value store, like a dictionary. But it is not stored in a single location. Partly for redundancy, etcd is distributed across the cluster, making sure that its data is still available even when one machine goes down. Inside Kubernetes, it is used to store critical data such as the current cluster state, information about pods, and other resources.
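As an illustration, Kubernetes keeps its objects in etcd under the /registry key prefix. If you have direct access to a control-plane machine, you could list those keys with the etcdctl client. This is only a sketch: real clusters usually require endpoint and certificate flags that depend on how the cluster was set up.

# List the keys Kubernetes stores in etcd (authentication flags omitted).
etcdctl get /registry/ --prefix --keys-only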
The Kubernetes scheduler watches the cluster state for changes, such as newly created pods that have not yet been placed. It is responsible for deciding which worker node within the cluster each of these pods should run on.
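To see this loop in action, imagine that a deployment named my-app (a hypothetical name) is already running. Scaling it only declares a new desired state; the controller manager and the scheduler do the rest:

# Declare a new desired state: three replicas instead of one.
kubectl scale deployment/my-app --replicas=3

# The controller manager creates the missing pods, and the scheduler
# assigns each new pod to a suitable worker node.
kubectl get pods -o wide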
For small-scale or development purposes, it is possible to run all these components on the same machine. Though this is simpler to set up, this method is not recommended for production environments. A single problem with this machine could result in the whole cluster becoming unavailable. This is why it is possible to split the control plane over multiple machines.
The worker node
Worker nodes are responsible for running the containers that hold the application. Each node consists of several components, such as the container runtime, the kubelet, and the kube-proxy. The container runtime is responsible for launching and managing the containers inside the node, as well as the network and storage resources associated with them. It also makes sure that containers run according to the specifications provided by the control plane.
The kubelet is responsible for handling instructions that come from the control plane: it talks to the API directly and controls the container runtime when necessary, making sure the node runs as expected. The kube-proxy, in turn, is responsible for routing network traffic to and from the node, delivering each request to the right pod.
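The routing rules that kube-proxy enforces are typically described by a Service object. As a sketch (the names are illustrative and match the hello-world example shown later in this topic), a Service that forwards traffic to pods labeled app: hello-world could look like this:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world    # route traffic to pods carrying this label
  ports:
  - port: 80            # port the Service listens on
    targetPort: 80      # port the containers expose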
User interface
Finally, the user interface allows users to interact with Kubernetes, either through a web-based dashboard or through a command-line tool called kubectl. Both allow users to deploy and manage applications, view resource utilization, and take other actions by communicating with the Kubernetes API. Both will be explained in more depth in later topics.
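As a first taste, here are a few common kubectl commands, assuming kubectl is installed and configured to talk to a cluster:

# Show basic information about the cluster kubectl is connected to.
kubectl cluster-info

# List deployments, pods, and services in the current namespace.
kubectl get deployments
kubectl get pods
kubectl get services

# Inspect a single resource in detail (the pod name is illustrative).
kubectl describe pod my-pod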
A simple Kubernetes deployment
Let's take a look at an example of a simple "hello world" deployment for Kubernetes. Such deployments are described in YAML files, called manifests, that you send to the cluster for deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80

The image used here is called "hello-world", a public image available on Docker Hub, created by the user tutum. It returns "Hello, World!" when accessed via a web browser. In another topic, you will learn what all of this means.
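If you save the manifest above as hello-world.yaml, a minimal way to deploy and reach it (again assuming a working cluster and a configured kubectl) looks like this:

# Send the manifest to the cluster; the control plane does the rest.
kubectl apply -f hello-world.yaml

# Check that the deployment and its pod are up.
kubectl get deployments
kubectl get pods

# Forward local port 8080 to port 80 of the deployment's pod,
# then open http://localhost:8080 in a browser.
kubectl port-forward deployment/hello-world 8080:80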
This "hello-world" example above does not follow security and best-practice recommendations. It is only a basic introduction to Kubernetes and should not be used as a production-level solution. How to securely run deployments will be explained to you in future topics.
Where to deploy a Kubernetes cluster?
Kubernetes can be hosted in a variety of ways, including cloud providers, on-premise data centers, and even on your own machine. Each option comes with its own pros and cons.
Cloud hosting is the most popular and widely used option. It offers scalability, flexibility, and reliability, and it is often the least expensive option for production environments. Many cloud providers offer managed Kubernetes services that handle the infrastructure and scaling for you. However, you may be limited by the provider's capabilities.
On-premise data centers offer more control and flexibility compared to cloud hosting. This can be a practical option if you have specific requirements that can't be met by a cloud provider. However, it requires a large upfront investment and is more difficult to manage and maintain than cloud hosting.
Kubernetes can be hosted locally with tools like Minikube or Docker Desktop. This is free and well suited for testing and debugging: developers can quickly set up a local cluster to try out applications, and it is a safe way to learn and experiment with Kubernetes. But it isn't suitable for production, as it lacks the scalability and reliability of the cloud and on-premise options.
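For example, with Minikube installed, spinning up a local single-node cluster takes only a couple of commands:

# Start a local single-node cluster (in a VM or a container, depending on your setup).
minikube start

# Verify that kubectl can reach the new cluster.
kubectl get nodes

# Tear the cluster down when you are done experimenting.
minikube delete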
No matter which option you choose, Kubernetes can help you manage and scale your containerized applications.
Conclusion
In this topic, you learned about the following:
The history of Kubernetes;
The architecture of the Kubernetes cluster;
How you can interact with a Kubernetes cluster;
An example of a Kubernetes deployment;
Options for hosting the cluster.
You now understand the basics of Kubernetes and are ready to dive into the world of containerized applications.