What is Kubernetes?
Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google and donated to the Cloud Native Computing Foundation (CNCF) in 2015.
Kubernetes has become one of the most popular tools for managing containers in production environments because it provides a way to easily manage large numbers of containers across multiple hosts while keeping them up-to-date with each other's state.
History of Kubernetes
Kubernetes was created at Google and donated to the Cloud Native Computing Foundation (CNCF) in 2015. The motivation behind its development was to create an open-source container orchestration system that could work across different cloud providers, making it easier for developers to build and deploy their applications.
Kubernetes has come a long way since Google first announced it in mid-2014. The first stable release, 1.0, arrived on July 21, 2015; since then there have been regular releases, each adding new features:
- 1.0 - July 2015
- 1.1 - November 2015
- 1.2 - March 2016
- 1.3 - July 2016
Kubernetes Architecture
The Kubernetes control plane is built from a few core components:
- API server - The API server is the entry point for all clients interacting with the cluster. It exposes an HTTP-based RESTful API used to create, configure, and manage objects in the cluster, and it authenticates and authorizes every request before acting on it.
- Scheduler - This component decides which node each new pod should run on, based on factors such as resource availability, node labels and taints, and scheduling constraints declared on the pod.
- Controller manager - This component runs the cluster's controllers, such as the replication and deployment controllers, which continuously drive the actual state of the cluster toward the desired state: scaling workloads up or down, replacing failed pods, enforcing resource quotas, and so on.
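To make this flow concrete, here is a minimal Pod manifest (names and labels are illustrative) that a client would submit to the API server; the scheduler then picks a node for it, honoring constraints such as the nodeSelector shown:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
  labels:
    app: demo
spec:
  # The scheduler will only place this pod on nodes carrying this label.
  nodeSelector:
    disktype: ssd
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:           # the scheduler uses requests to find a node with capacity
          cpu: 100m
          memory: 128Mi
```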
Kubernetes Deployment
The deployment of Kubernetes is a three-step process:
- Create the cluster
- Deploy the control plane and worker nodes
- Configure the cluster
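As a sketch of the control-plane step, kubeadm (one common bootstrapping tool) accepts a configuration file like the one below when initializing a cluster; the version and subnet values here are placeholders you would adapt to your environment:

```yaml
# kubeadm-config.yaml -- passed to `kubeadm init --config kubeadm-config.yaml`
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.28.0"   # placeholder version
networking:
  podSubnet: "10.244.0.0/16"  # must match the CNI plugin's expected pod CIDR
```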
Kubernetes Networking
Kubernetes networking is a complex topic, but it's also one of the most important things to understand when you're deploying your containerized applications.
Common approaches to Kubernetes pod networking include:
- Flat Networking - Every pod gets its own IP address on a network that is directly routable, so pods can reach each other without NAT. This is simple to reason about, but it requires enough address space and routing support in the underlying network.
- Overlay Networking - Plugins such as Flannel use tunneling technology (for example, VXLAN) to carry pod traffic between hosts on top of the existing network. This works even when the underlying infrastructure cannot route pod IPs directly, at the cost of some encapsulation overhead.
- Canal Networking - Canal is a CNI configuration that combines Flannel's overlay networking with Calico's network policy enforcement, giving you both pod connectivity and policy in a single install.
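Whichever plugin provides connectivity, isolation between pods is expressed through NetworkPolicy objects (the plugin must support them, as Calico and Canal do). A minimal example, with illustrative label and name values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend    # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend             # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```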
Kubernetes Storage
Kubernetes storage is a critical component of your cluster and should be chosen carefully. Kubernetes supports several different types of storage, each with its own characteristics and trade-offs. The most common types are:
- Ephemeral volumes - Volume types such as emptyDir live and die with the pod that uses them. They are convenient for scratch space and caches, but anything they hold is lost when the pod is deleted, which makes them a poor fit for durable production data.
- Persistent Volumes - A PersistentVolume (PV) is a piece of storage whose lifecycle is independent of any single pod. Pods request storage through a PersistentVolumeClaim (PVC), and a StorageClass can provision matching volumes dynamically. Because persistent volumes exist outside of any one pod instance - and thus outside its lifetime - they are the standard choice for stateful workloads whose data must survive restarts and rescheduling.
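In practice, a pod asks for persistent storage through a PersistentVolumeClaim, and the cluster binds it to (or dynamically provisions) a matching PersistentVolume. A minimal claim, with an assumed StorageClass name that varies by cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # mountable read-write by a single node
  storageClassName: standard  # assumed class; check your cluster's StorageClasses
  resources:
    requests:
      storage: 1Gi
```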
Kubernetes Security
Kubernetes security is a complex topic. There are many different options and best practices to consider, and it's easy to make mistakes when setting up your cluster. To help you navigate this territory, we'll break down Kubernetes security into three main areas:
- Authentication and authorization
- Networking (including load balancing)
- Storage (including persistent storage)
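As a sketch of the authorization piece, Kubernetes RBAC grants permissions through Roles bound to users or service accounts. The names below are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # illustrative name
rules:
  - apiGroups: [""]           # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: app-sa              # assumed service account name
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```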
Kubernetes Monitoring
Kubernetes monitoring is a complex and nuanced topic. There are many different approaches to monitoring Kubernetes, each with its own pros and cons. In this section we'll discuss the most common types of Kubernetes monitoring and give you some best practices for implementing them in your environment.
We'll also discuss some of the challenges associated with monitoring your cluster and how to overcome them!
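One building block worth knowing regardless of your monitoring stack is the probe: the kubelet itself can watch container health, restarting containers that fail a liveness check and removing pods that fail a readiness check from Service endpoints. A sketch with an assumed /healthz endpoint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app            # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:          # kubelet restarts the container if this fails
        httpGet:
          path: /healthz      # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:         # failing pods are removed from Service endpoints
        httpGet:
          path: /healthz
          port: 80
        periodSeconds: 5
```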
Kubernetes Management
Kubernetes management is the process of deploying and maintaining Kubernetes clusters. There are a number of different ways you can manage your cluster, including:
- Using tools like kubeadm or kops to deploy your own cluster on AWS, Google Cloud Platform (GCP), or Azure, plus a package manager like Helm to install applications onto it.
- Using a managed service like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS). These services let you focus on building your applications instead of worrying about the infrastructure underneath them. They also come with built-in monitoring and alerting features that make it easier to stay on top of any issues that may arise in your environment.
While managed services provide great convenience, they do come at a cost: they are typically more expensive over time than running a self-managed cluster, and they tie you to a specific cloud provider's APIs, which limits security customization and makes it harder to migrate workloads to whichever provider best meets your needs at a given moment.
Conclusion
So, what are the key takeaways from this blog post?
- Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications.
- It provides a way to manage your containers in a cluster while providing features like load balancing, service discovery, and replication.
- You can use it with any cloud provider or on-premises infrastructure that supports a standard container runtime such as containerd.