# Chapter 1 - Introducing Kubernetes

```
This chapter covers

👉 Introductory information about Kubernetes and its origins
👉 Why Kubernetes has seen such wide adoption
👉 How Kubernetes transforms your data center
👉 An overview of its architecture and operation
👉 How and if you should integrate Kubernetes into your own organization
```

```
- The word Kubernetes is Greek for pilot or helmsman - the person who steers the ship by standing at the helm (the ship's wheel).
  A captain is responsible for the ship, while the helmsman is the one who steers it.

- Kubernetes steers your applications and reports on their status, while you - the captain - decide where you want the system to go.
```

***Kubernetes in a nutshell***

* Kubernetes is a software system for automating the deployment and management of complex, large-scale application systems composed of computer processes running in containers.
* When software developers or operators decide to deploy an application, they do this through Kubernetes instead of deploying it to individual computers. Kubernetes provides an abstraction layer over the underlying hardware to both users and applications.

<figure><img src="/files/FKALAND7o55gV4EQQQkV" alt=""><figcaption></figcaption></figure>

* Kubernetes uses a declarative model to define an application. You describe the components that make up your application, and Kubernetes turns this description into a running application. It then keeps the application healthy by restarting or recreating parts of it as needed.
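A minimal sketch of such a declarative description - the application name, labels and image below are made-up placeholders, not from the book:

```yaml
# Hypothetical Deployment manifest - 'my-app' and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired state: three running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:1.0   # placeholder container image
```

You only declare the desired state (three replicas of this container); Kubernetes continuously works to make the actual state match it.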

* The development and operations engineers are the ship's officers who make high-level decisions while sitting comfortably in their armchairs, and Kubernetes is the helmsman who takes care of the low-level tasks of steering the system through the rough waters your applications and infrastructure sail through.

<figure><img src="/files/CaHOLxEmQLBOCUSbZmcg" alt=""><figcaption></figcaption></figure>

The Kubernetes journey

<figure><img src="/files/hJGHCTymuwAyfJ2lolk1" alt=""><figcaption></figcaption></figure>

Kubernetes automates the management of hundreds of microservices

<figure><img src="/files/3Ov8mfa6LbSqMz7KN6VL" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/sGq7fbLYYssZVwypxWYD" alt=""><figcaption></figcaption></figure>

* If the application is built on the Kubernetes APIs instead of directly on the proprietary APIs of a specific cloud provider, it can be transferred relatively easily to any other provider.

```
Reasons for Kubernetes' wide adoption
1. Automating the management of microservices
2. Bridging the dev and ops divide
3. Standardizing the cloud
```

***Understanding Kubernetes***

            Kubernetes is like an operating system for computer clusters.

Just as an operating system supports the basic functions of a computer, such as scheduling processes onto its CPUs and acting as an interface between the application and the computer's hardware, Kubernetes schedules the components of a distributed application onto individual computers in the underlying cluster and acts as an interface between the application and the cluster.

* It frees application developers from the need to implement infrastructure-related mechanisms in their applications; instead, they rely on Kubernetes to provide them. This includes things like:

Service discovery - a mechanism that allows applications to find other applications and use the services they provide

Horizontal scaling - replicating your application to adjust to fluctuations in load

Load balancing - distributing load across all the application replicas

Self-healing - keeping the system healthy by automatically restarting failed applications and moving them to healthy nodes after their nodes fail

Leader election - a mechanism that decides which instance of the application should be active while the others remain idle but ready to take over if the active instance fails
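The first three mechanisms can be sketched with a Kubernetes Service object; the names and ports below are assumptions for illustration, not from the book:

```yaml
# Hypothetical Service manifest - 'my-app' and the ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # other applications find this app via the DNS name 'my-app'
spec:
  selector:
    app: my-app           # traffic is load-balanced across all pods with this label
  ports:
  - port: 80              # port the service exposes inside the cluster
    targetPort: 8080      # assumed port the application listens on
```

Clients resolve the service name through the cluster's DNS (service discovery) and Kubernetes spreads their connections across all matching replicas (load balancing); scaling the replicas up or down requires no change to the clients.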

<figure><img src="/files/Y30IHzpDEl6JPUkCqfGG" alt=""><figcaption></figcaption></figure>

**How kubernetes fits into a computer cluster**

* You start with a fleet of machines that you divide into two groups - **the master and the worker nodes**. The master nodes run the Kubernetes control plane, which represents the brain of your system and controls the cluster, while the rest run your applications - your workloads - and therefore represent the workload plane (sometimes called the data plane).
* Non-production clusters can use a single master node, but highly available clusters use at least three physical master nodes to host the control plane. The number of worker nodes depends on the number of applications you'll deploy.

<figure><img src="/files/F9NfumElEFImMt7cuxIu" alt=""><figcaption></figcaption></figure>

How do all the cluster nodes become one large deployment area? 🤔 Through the Kubernetes API.

<figure><img src="/files/5FJ4udN2vZXrNYM9nBbC" alt=""><figcaption></figcaption></figure>

Benefits of Kubernetes:

1. Self-service deployment of applications: K8s chooses the best node on which to run the application, based on the application's resource requirements and the resources available on each node.
2. Reducing costs via better infrastructure utilization: By combining different applications on the same machines, Kubernetes improves the utilization of your hardware infrastructure, so you can run more applications on fewer servers.
3. Automatically adjusting to changing load: It can monitor the resources consumed by each application, along with other metrics, and adjust the number of running instances of each application to cope with increased load or resource usage.
4. Keeping applications running smoothly: Kubernetes is a self-healing system that deals with both software errors and hardware failures.
5. Simplifying application development: Kubernetes offers infrastructure-related services that would otherwise have to be implemented in your applications. This includes the discovery of services and/or peers in a distributed application, leader election, centralized application configuration, and more. Kubernetes provides this while keeping the application Kubernetes-agnostic, but when required, applications can also query the Kubernetes API to obtain detailed information about their environment. They can also use the API to change the environment.
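Benefit 1 relies on each container declaring its resource requirements. A hedged sketch of such a container spec fragment - the names and values are made up for illustration:

```yaml
# Hypothetical container spec fragment - name, image and values are placeholders.
containers:
- name: my-app
  image: example.com/my-app:1.0
  resources:
    requests:
      cpu: "250m"        # the scheduler only considers nodes with this much free CPU
      memory: "128Mi"    # ...and this much free memory
    limits:
      cpu: "500m"        # hard caps enforced on the node at runtime
      memory: "256Mi"
```

The requests drive scheduling decisions, while the limits cap what the running container may actually consume.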

Architecture of a Kubernetes cluster

<figure><img src="/files/LE5sPvgkIyaxRJrnQbos" alt=""><figcaption></figcaption></figure>

Control plane components

<figure><img src="/files/Zyejo1D9o4EwPmzA4qIC" alt=""><figcaption></figcaption></figure>

```
The Kubernetes API Server exposes the RESTful Kubernetes API. Engineers using the cluster and other Kubernetes components create objects via this API.

The etcd distributed datastore persists the objects you create through the API, since the API server itself is stateless. The API server is the only component that talks to etcd.

The Scheduler decides on which worker node each application instance should run.

Controllers bring to life the objects you create through the API. Most of them simply create other objects, but some also communicate with external systems.
```

* The components of the control plane hold and control the state of the cluster, but they don't run your applications. That is the job of the worker nodes.

Worker node components&#x20;

<figure><img src="/files/L9JIy0PHHe0xm7LamxF4" alt=""><figcaption></figcaption></figure>

* In addition to applications, several Kubernetes components also run on these nodes. They perform the task of running, monitoring and providing connectivity between your applications.
* Each node runs the following set of components:
  * The Kubelet, an agent that talks to the API server and manages the applications running on its node. It reports the status of these applications and of the node via the API.
  * The Container Runtime, which can be Docker or any other runtime compatible with Kubernetes. It runs your applications in containers as instructed by the Kubelet.
  * The Kubernetes Service Proxy (kube-proxy), which load-balances network traffic between applications. Its name suggests that traffic flows through it, but that's no longer the case.

Add-on components:

* Most Kubernetes clusters also contain several other components, including a DNS server, network plugins, logging agents and many others. They typically run on the worker nodes, but can also be configured to run on the master nodes.

**How Kubernetes runs an application**

* Everything in Kubernetes is represented by an object. You create and retrieve these objects via the Kubernetes API. Your application consists of several types of these objects, which are usually defined in one or more manifest files in either YAML or JSON format.

<figure><img src="/files/IS1Sgsg38IkE8JQQsQoL" alt=""><figcaption></figcaption></figure>

These actions take place when you deploy the application:

1. You submit the application manifest to the Kubernetes API. The API server writes the objects defined in the manifest to etcd.
2. A controller notices the newly created objects and creates several new objects - one for each application instance.
3. The Scheduler assigns a node to each instance.
4. The Kubelet notices that an instance is assigned to its node and runs the application instance via the Container Runtime.
5. The kube-proxy notices that the application instances are ready to accept connections from clients and configures a load balancer for them.
6. The Kubelets and controllers monitor the system and keep the applications running.

* After you've created your YAML or JSON files, you submit them to the API, usually via the Kubernetes command-line tool called kubectl.
* kubectl splits the file into individual objects and creates each of them by sending an HTTP PUT or POST request to the API, as is usually the case with RESTful APIs. The API server validates the objects and stores them in the etcd datastore. In addition, it notifies all interested components that these objects have been created.

ABOUT THE CONTROLLERS

* Most object types have an associated controller. A controller is interested in a particular object type. It waits for the API server to notify it that a new object has been created, and then performs operations to bring that object to life. Typically, the controller just creates other objects via the same Kubernetes API.
* The number of objects created by the controller depends on the number of replicas specified in the application deployment object.

ABOUT THE SCHEDULER

* The Scheduler is a special type of controller whose only task is to schedule application instances onto worker nodes. It selects the best worker node for each new application instance object and assigns it to the instance by modifying the object via the API.

ABOUT THE KUBELET AND CONTAINER RUNTIME

* The Kubelet that runs on each worker node is also a type of controller. Its task is to wait for application instances to be assigned to the node on which it is located and run the application. This is done by instructing the Container Runtime to start the application’s container.

ABOUT KUBEPROXY

* Because an application deployment can consist of multiple application instances, a load balancer is required to expose them at a single IP address. The Kube Proxy, another controller running alongside the Kubelet, is responsible for setting up the load balancer.

KEEPING THE APPLICATION HEALTHY

* Once the application is up and running, the Kubelet keeps the application healthy by restarting it when it terminates. It also reports the status of the application by updating the object that represents the application instance. The other controllers monitor these objects and ensure that applications are moved to healthy nodes if their nodes fail.

**Introducing Kubernetes into your organization**

1. Running Kubernetes on-premises
2. Running Kubernetes in the cloud
3. Running Kubernetes in a hybrid cloud

Should you use Kubernetes?

1. **Do your workloads require automated management?**
2. **Can you afford to invest your engineers’ time into learning Kubernetes?**
3. **Are you prepared for increased costs in the interim?**
4. **Don't believe the hype**

Now let's move on to understanding containers.

