There is a wide range of tools you can use to set up and run a Kubernetes cluster on your computer. They can help you learn and practice using Kubernetes without the cost of subscribing to cloud-hosted options.
You will explore five widely used options for running a local Kubernetes cluster: MicroK8s, KinD, K3s, Minikube, and Docker Desktop. You will discover their advantages and drawbacks, and after learning how to use these tools, you will gain confidence in managing Kubernetes clusters on your own.
Docker Desktop
Docker Desktop is a tool that enables developers to build and run containerized applications while also providing a seamless way to deploy a local Kubernetes cluster. It also includes features like image building, container management, and logging, which complement the Kubernetes functionality. Docker Desktop provides a self-contained environment that closely resembles a production Kubernetes cluster. Such an environment enables you to test your applications locally before deploying them to a cloud or on-premises cluster.
Docker Desktop manages the cluster for you: you can enable it, reset it, and recreate it with a couple of clicks. This makes it easy to experiment with Kubernetes features, test your deployment configurations, and simulate real-world deployment scenarios, all within the comfort of your local development environment.
One of the key advantages of using Docker Desktop for local Kubernetes deployment is its simplicity and ease of use. Docker Desktop provides a user-friendly interface and a straightforward installation process.
Once installed, Docker Desktop lets you turn on Kubernetes mode, which you can use to develop and test applications in a Kubernetes environment on your local machine. To enable it, open the settings, go to the Kubernetes panel, and check Enable Kubernetes.
Once Kubernetes is running, you can send commands to the cluster. For this, you need kubectl, the command-line tool for controlling a Kubernetes cluster. Docker Desktop installs kubectl alongside Kubernetes when it is enabled; you can find it at C:\Program Files\Docker\Docker\Resources\bin\kubectl.exe on Windows and at /usr/local/bin/kubectl on macOS. This is not the case on Linux, where you have to install it yourself by following the Kubernetes documentation.
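For reference, on a Linux x86-64 machine, the steps from the Kubernetes documentation look roughly like this (check the documentation for the current instructions and for other architectures):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

The last command only checks the client binary, so it works even before a cluster is running.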
With both Kubernetes and kubectl installed and running, you can check if a cluster is running using the following command:
kubectl cluster-info
If the cluster is running, the output should look similar to this:
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If this is not the case, repeat the steps and look through the Docker and Kubernetes documentation before moving on.
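If everything checks out, you can optionally run a quick smoke test by starting a throwaway pod and removing it again. The pod name and image below are just examples:

kubectl run hello --image=nginx
kubectl get pods
kubectl delete pod hello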
That's it! You have successfully installed a Kubernetes cluster on Docker Desktop. Now, let's move on to some other tools too.
Minikube
Minikube is a popular tool for running single-node Kubernetes clusters on your local machine, and it is another tool recommended in the Kubernetes documentation. Minikube prides itself on its ease of use and lightweight environment, which allows for experimentation without the need for a full-scale production environment. All of this makes it an excellent choice for developers who want to test their applications locally, learn Kubernetes concepts, or validate their deployment configurations before moving to large-scale clusters.
While it is possible to create multi-node clusters, Minikube creates a single-node cluster by default. This means you can only simulate a Kubernetes environment with a single node acting as both the control plane and the worker, so you won't experience the full scalability and fault-tolerance capabilities of a multi-node cluster. In a production setting, Kubernetes clusters typically consist of multiple worker nodes to distribute the workload and ensure high availability. Minikube is therefore a great tool for local development and learning purposes, but it requires additional configuration for scenarios that demand the full capabilities of a multi-node cluster.
However, Minikube's single-node cluster greatly simplifies the setup process and makes it more accessible for beginners or those who are primarily interested in local development and testing.
Getting started with Minikube is straightforward. To begin, you'll need to install Minikube on your local machine; it supports various operating systems, including Windows, macOS, and Linux, and the official documentation covers installation for each. Once installed, you can start a local cluster with a single command that sets up all the necessary components for you. Just open a terminal with administrator access and execute the following command:
minikube start
This command downloads everything Minikube needs and starts the cluster. To interact with the cluster, you need kubectl again; if you haven't installed it already, follow the instructions in the Docker Desktop section above.
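minikube start also accepts flags that let you tailor the cluster. For example, you can pin a Kubernetes version, choose a driver, or, as mentioned earlier, request more than one node; the exact values below are only illustrative:

minikube start --nodes=2 --kubernetes-version=v1.27.3 --driver=docker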
You can see what pods are running currently using the command below:
kubectl get pods -A
The response will look something like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-787d4945fb-jkfrp 1/1 Running 0 5m57s
kube-system etcd-minikube 1/1 Running 0 6m11s
kube-system kube-apiserver-minikube 1/1 Running 0 6m11s
kube-system kube-controller-manager-minikube 1/1 Running 0 6m10s
kube-system kube-proxy-ckjnf 1/1 Running 0 5m58s
kube-system kube-scheduler-minikube 1/1 Running 0 6m11s
kube-system storage-provisioner 1/1 Running 1 (5m27s ago) 6m10s
You might recognize some of these Kubernetes components. With a single command, you set up a single-node cluster with all the essential components, such as the Kubernetes API server, scheduler, and container runtime. This allows you to test your applications in an environment that closely resembles a production Kubernetes cluster and to identify and address any issues early in the development cycle.
For more information about Minikube, you can take a look at the official handbook or use the minikube -h command to get all the available commands in Minikube. One important command you should remember is the command to turn off the cluster:
minikube stop
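A few other everyday commands worth remembering are shown below: status reports whether the cluster is running, dashboard opens the built-in web UI, and delete removes the cluster entirely.

minikube status
minikube dashboard
minikube delete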
In conclusion, Minikube provides a convenient and efficient way to run a Kubernetes cluster locally, enabling developers to test, deploy, and learn Kubernetes without the need for extensive infrastructure. Its simplicity, ease of use, and integration with popular container runtimes make it an ideal choice for local development and experimentation. Whether you're a beginner getting started with Kubernetes or an experienced developer honing your skills, Minikube is a valuable tool to gain experience with.
KinD
KinD (Kubernetes in Docker) is another tool mentioned in the Kubernetes documentation. It is a lightweight tool that allows you to run a fully functional Kubernetes cluster inside a Docker container on your local machine. Though originally designed for testing Kubernetes itself, many people use it for local development. The Kubernetes topics and exercises on the platform are implemented using KinD, so it comes with an extra recommendation.
To get started with KinD, make sure Docker is installed on your machine, because Docker provides the underlying containerization technology that powers KinD. If you installed Docker Desktop for the first section, you already have Docker on your system; otherwise, take a look at the Docker installation guide.
Once Docker is set up, you can proceed with installing KinD itself. Instructions for Linux, macOS, and Windows can be found on the KinD website. If you use a package manager like Homebrew or Chocolatey, you can follow the corresponding instructions; for everyone else, we recommend installing the release binaries. No matter what system you work with, KinD is always installed from the command line.
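As an illustration, installing the release binary on a Linux x86-64 machine looks roughly like this; the version number is only an example, so check the KinD releases page for the current one:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind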
After installing KinD, you can create a new Kubernetes cluster using a single command:
kind create cluster
KinD allows you to have a self-contained Kubernetes environment because it creates clusters as Docker containers. KinD also provides options to configure networking, the number of worker nodes, and other cluster properties.
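For instance, a small configuration file can turn the default single-node setup into a cluster with one control-plane node and two workers:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Save it under any name, for example kind-multi-node.yaml, and pass it to the create command:

kind create cluster --config kind-multi-node.yaml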
Once you create the cluster, you can interact with it using kubectl. To validate that the cluster is running, use the following kubectl command:
kubectl get nodes
This command shows all the running nodes. The response should look like this:
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 3m38s v1.27.1
Here you can see that the kind control plane is up and running. Now you can send commands to the cluster to configure it as you like.
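When you are done experimenting, you can list and remove KinD clusters just as easily:

kind get clusters
kind delete cluster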
K3s
The previous tools help you run a standard Kubernetes cluster on a local machine. K3s, however, is a Kubernetes distribution, meaning it is based on the original Kubernetes source code with modifications that fit a specialized purpose.
K3s is designed for resource-constrained environments like IoT or ARM devices like Raspberry Pi. It aims to provide a fully functional Kubernetes cluster with a reduced memory footprint and minimal dependencies. So K3s is ideal for situations where resources are limited or where simplicity and ease of deployment are important. Despite having a reduced footprint, K3s maintains full compatibility with the Kubernetes API and allows you to leverage the power and flexibility of Kubernetes in resource-constrained environments.
Another advantage of K3s is its focus on simplicity and security. It ships with sensible defaults, such as SQLite as the default datastore (which eliminates external dependencies) and the built-in containerd container runtime. K3s also supports standard Kubernetes security features like RBAC (Role-Based Access Control) and TLS (Transport Layer Security) encryption to enhance the security of your cluster.
K3s runs only on Linux, including distributions such as Ubuntu, Debian, and Raspberry Pi OS. It provides a convenient installation script that fetches and installs all the necessary components; you can run it with the following command:
curl -sfL https://get.k3s.io | sh -
On most systems, the script also sets K3s up as a service and starts it automatically. You can also start a server manually with the following command:
k3s server
Now that the cluster is running, you can interact with it using standard Kubernetes tools such as kubectl, which K3s bundles as a subcommand. You can use the following command to see what services are currently running on the server:
k3s kubectl get services
The response will list the default kubernetes service together with its cluster IP.
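On a systemd-based distribution, the installation script also registers K3s as a system service and places an uninstall script next to the binary, so you can manage or remove it as shown below (paths and service names assume the default script-based installation):

sudo systemctl status k3s
sudo systemctl stop k3s
/usr/local/bin/k3s-uninstall.sh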
MicroK8s
Like K3s, MicroK8s is also a Kubernetes distribution. MicroK8s provides a complete Kubernetes environment in a single package, making it an excellent choice for developers, enthusiasts, and anyone looking to run Kubernetes on their local machine or in a small-scale deployment.
One of the standout features of MicroK8s is its simplicity. It offers a streamlined installation process that can be completed in just a few minutes, so you can set up a fully functional Kubernetes cluster with minimal effort. MicroK8s also bundles a range of useful add-ons, including popular tools like DNS, storage, ingress, and the Kubernetes Dashboard, which can be enabled with a single command and eliminate the need for manual configuration.
You can download it for your system from the MicroK8s website. It is available for Windows, macOS, and the popular Linux distributions. The installation process itself is straightforward and well-documented.
To interact with MicroK8s, no matter the operating system, you need to use the command line. But don't worry, the commands are the same on all platforms. Just make sure you have administrator access, or use the sudo command on Linux. After completing the installation, you can use the following command to start the cluster (the execution might take some time):
microk8s start
To check if the server is running, you can use the following command:
microk8s status
This shows whether MicroK8s is running, the addresses of the datastore nodes, and all the available add-ons. To interact with your MicroK8s cluster, you can use the standard Kubernetes command-line tool, kubectl, to manage and deploy applications seamlessly. MicroK8s also lets you easily enable the Kubernetes Dashboard, which is another way to interact with a cluster. To achieve this, use the following command:
microk8s enable dashboard
When the dashboard is enabled, you just need to get access, which is done with the following command:
microk8s dashboard-proxy
This will give you the link where the dashboard is available and a token to log in. Copy the token and open the link in a browser. It will bring you to the Kubernetes Dashboard login screen, where you can paste the token and gain access. After that, you will land on the dashboard overview.
Because you haven't deployed an application yet, there won't be a lot there.
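To see something in the dashboard, you can deploy a small test workload through the kubectl that MicroK8s bundles as a subcommand; the deployment name and image below are just examples. When you are finished, stop the cluster to free up resources:

microk8s kubectl create deployment web --image=nginx
microk8s kubectl get pods
microk8s stop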
MicroK8s offers excellent performance. It leverages the lightweight containerd container runtime and supports hardware acceleration. This results in fast start-up times and efficient resource utilization. Whether you are developing applications, testing new features, or experimenting with Kubernetes, MicroK8s provides a responsive and agile environment that helps accelerate your development cycle.
With just a handful of commands, you can create a cluster and enable the dashboard. It is this ease of use that makes MicroK8s a great option for those who want to explore Kubernetes without the overhead of managing a large-scale cluster. By using MicroK8s, you can unlock the potential of Kubernetes in a lightweight and user-friendly package to build and deploy applications with ease.
Conclusion
Among these five options, Docker Desktop stands out for its simplicity and ease of use: it provides seamless deployment of a local Kubernetes cluster along with other container management features. Minikube offers a lightweight environment for local development and testing. KinD lets you run a fully functional Kubernetes cluster inside a Docker container, which makes it ideal for testing Kubernetes itself. K3s and MicroK8s, on the other hand, are Kubernetes distributions designed for resource-constrained environments, offering reduced footprints and simplified deployments.
These are not the only options available, but they have been selected for their ease of use and availability. As previously mentioned, all Kubernetes topics on the platform are written using KinD, so take this recommendation into consideration as well. No matter what option you pick, having a locally running Kubernetes cluster will only benefit you on your path to becoming familiar with its capabilities.