Kubernetes

Kubernetes (also called K8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications.

When you run many containers across many machines, Kubernetes helps you organize and control them efficiently, much like a traffic controller for your apps.

Kubernetes Core Components

1. Pod 

A Pod is the smallest unit you can deploy in Kubernetes. It wraps one or more containers that need to run together, sharing the same network and storage. Containers inside a Pod can easily communicate and work as a single unit.
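
To make this concrete, here is a minimal Pod manifest sketch; the name, labels, and the nginx image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # placeholder name
  labels:
    app: hello
spec:
  containers:
  - name: web              # a single container in this Pod
    image: nginx:1.25      # placeholder image
    ports:
    - containerPort: 80    # port the container listens on

You would create it with kubectl apply -f pod.yaml, just like the manifests in the deployment walkthrough later in this post.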

2. Node 

A Node is a machine (physical or virtual) in a Kubernetes cluster that runs your applications. Each Node contains the tools needed to run Pods, including the container runtime (like Docker), the Kubelet (agent), and the Kube proxy (networking).

3. Cluster 

A Kubernetes cluster is a group of computers (called nodes) that work together to run your containerized applications. These nodes can be real machines or virtual ones.

There are two types of nodes in a Kubernetes cluster:

  1. Master node (Control Plane):
    • Think of it as the brain of the cluster.
    • It makes decisions, like where to run applications, handles scheduling, and keeps track of everything.
  2. Worker nodes:
    • These are the machines that actually run your apps inside containers.
    • Each worker node has a Kubelet (agent), a container runtime (like Docker or containerd), and tools for networking and monitoring.

4. Deployment 

A Deployment is a Kubernetes object used to manage a set of Pods running your containerized applications. It provides declarative updates, meaning you tell Kubernetes what you want, and it figures out how to get there.
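
As a sketch of this declarative style, the manifest below asks Kubernetes to keep three replicas of a placeholder nginx image running (a complete Deployment for the FastAPI app appears later in this post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment       # placeholder name
spec:
  replicas: 3                  # desired state: three identical Pods
  selector:
    matchLabels:
      app: hello               # manage Pods carrying this label
  template:                    # Pod template the Deployment creates Pods from
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25      # placeholder image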

5. ReplicaSet

A ReplicaSet ensures that a specified number of identical Pods is running at all times. You rarely create a ReplicaSet directly; a Deployment creates and manages one for you.

6. Service 

A Service in Kubernetes is a way to connect applications running inside your cluster. It gives your Pods a stable way to communicate, even if the Pods themselves keep changing.
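
A minimal sketch of a ClusterIP Service that gives Pods labeled app: hello a stable name and port (a complete Service for the FastAPI app appears later in this post):

apiVersion: v1
kind: Service
metadata:
  name: hello-service        # stable in-cluster DNS name
spec:
  selector:
    app: hello               # send traffic to Pods with this label
  ports:
  - protocol: TCP
    port: 80                 # port the Service exposes
    targetPort: 80           # port the Pods listen on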

7. Ingress 

Ingress is a way to manage external access to your services in a Kubernetes cluster. It provides HTTP and HTTPS routing to your services, acting as a reverse proxy.

8. ConfigMap 

A ConfigMap stores configuration settings separately from the application, so changes can be made without modifying the actual code.

Imagine you have an application that needs some settings, like a database URL or a feature flag. Instead of hardcoding these settings into your app, you store them in a ConfigMap. Your application can then read these settings from the ConfigMap at runtime, which makes it easy to update the settings without changing the app code. (Sensitive values such as passwords and API keys belong in a Secret, described next.)
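
A minimal sketch of a ConfigMap and a Pod that reads it as environment variables; the names, keys, and the nginx image are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # placeholder name
data:
  APP_MODE: "production"         # plain-text, non-sensitive settings
  API_URL: "https://api.example.com"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.25            # placeholder image
    envFrom:
    - configMapRef:
        name: app-config         # inject every key as an environment variable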

9. Secret 

A Secret is a way to store sensitive information (like passwords, API keys, or tokens) securely in a Kubernetes cluster.
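
A minimal sketch with placeholder values; stringData lets you write plain text, which Kubernetes stores base64-encoded:

apiVersion: v1
kind: Secret
metadata:
  name: app-secret            # placeholder name
type: Opaque
stringData:                   # plain-text input; stored base64-encoded
  DB_PASSWORD: "changeme"     # placeholder value
  API_KEY: "placeholder-key"  # placeholder value

Keep in mind that base64 is encoding, not encryption, so access to Secrets should still be restricted (for example with RBAC).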

10. Persistent Volume (PV) 

A Persistent Volume (PV) in Kubernetes is a piece of storage in the cluster that you can use to store data — and it doesn’t get deleted when a Pod is removed or restarted.
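
A minimal sketch of a PersistentVolume plus the PersistentVolumeClaim a Pod would reference; the hostPath backing and the manual storage class are assumptions suitable only for local, single-node clusters such as Minikube:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv                  # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual       # lets the claim below bind to this PV
  hostPath:
    path: /data/demo-pv          # node-local directory; for testing only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi               # Kubernetes binds this claim to a matching PV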

11. Namespace 

A Namespace is like a separate environment within your Kubernetes cluster. It helps you organize and isolate your resources like Pods, Services, and Deployments.
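
A short sketch: create a Namespace (here a hypothetical staging) and place a resource in it via metadata.namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: staging                # placeholder namespace name
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  namespace: staging           # this Pod lives in the staging Namespace
spec:
  containers:
  - name: web
    image: nginx:1.25          # placeholder image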

12. Kubelet 

Kubelet runs on each Worker Node and ensures Pods are running as expected.

13. Kube-proxy 

Kube-proxy manages networking inside the cluster, ensuring different Pods can communicate.

Kubernetes Architecture

Kubernetes operates on a robust and distributed architecture that allows it to manage and orchestrate containerized applications at scale. At a high level, a Kubernetes cluster follows a master-worker model, consisting of a Control Plane (formerly known as the Master Node) and one or more Worker Nodes (also known as Minions or Data Plane).

1. Control Plane (Master Node)

The Control Plane is the “brain” of the Kubernetes cluster. It’s responsible for making global decisions about the cluster (e.g., scheduling, detecting and responding to cluster events) and managing the cluster’s desired state.

Components of the Control Plane:

  • kube-apiserver:
    • Function: This is the front end of the Kubernetes control plane. It exposes the Kubernetes API. All communication with the cluster (from kubectl, other control plane components, or worker nodes) goes through the API Server.
    • Role: It validates and configures data for API objects (like Pods, Services, Deployments) and serves as the gateway to the cluster’s shared state.
  • etcd:
    • Function: A highly available, consistent, and distributed key-value store.
    • Role: It stores all cluster data, including configurations, states of objects (e.g., which pods are running where, what resources are available), and metadata. It’s the single source of truth for the cluster.
  • kube-scheduler:
    • Function: Watches for newly created Pods that have no assigned node.
    • Role: It selects an optimal node for each new Pod to run on, considering factors like resource requirements (CPU, memory), hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, and inter-workload interference.
  • kube-controller-manager:
    • Function: Runs controller processes. Each controller continuously monitors the actual state of the cluster and works to move the current state towards the desired state.
    • Role: It bundles several controller functions into a single binary for simplicity. Examples of controllers it manages include:
      • Node Controller: Responsible for noticing and responding when nodes go down.
      • Replication Controller: Maintains the desired number of Pods for a ReplicaSet or Deployment.
      • Endpoints Controller: Populates the Endpoints object (which connects Services to Pods).
      • Service Account & Token Controllers: Create default Service Accounts and API access tokens for new Namespaces.
  • cloud-controller-manager (Optional):
    • Function: This component runs controllers that interact with the underlying cloud provider’s APIs.
    • Role: It’s used for integrating your cluster with cloud-specific features. For example, it can manage load balancers, persistent volumes, and node lifecycle within a cloud environment (e.g., AWS, GCP, Azure). If you’re running Kubernetes on-premises, you typically won’t have this component.

2. Worker Nodes (Data Plane)

Worker Nodes are the machines (physical or virtual) that run your actual application workloads. They execute the instructions received from the Control Plane.

Components on each Worker Node:

  • kubelet:
    • Function: An agent that runs on each node in the cluster.
    • Role: It watches the API Server for PodSpecs assigned to its node. It then ensures that the containers described in those PodSpecs are running and healthy. It handles container lifecycle (creating, stopping, monitoring) and reports the node’s and Pods’ status back to the API Server.
  • kube-proxy:
    • Function: A network proxy that runs on each node.
    • Role: It maintains network rules on nodes, allowing network communication to your Pods from inside or outside of the cluster. It handles Service discovery and load balancing for Pods behind a Service. It essentially acts as a network “bouncer” and traffic director for your applications.
  • Container Runtime:
    • Function: The software responsible for running containers.
    • Role: It pulls container images from a registry, runs the containers, and manages their lifecycle. Popular container runtimes include containerd, CRI-O, and Docker (though Kubernetes now uses the Container Runtime Interface (CRI) to interact with runtimes, making Docker less directly involved at the Kubernetes layer).
  • Pods:
    • Function: The smallest deployable unit in Kubernetes.
    • Role: A Pod encapsulates one or more containers, shared storage (Volumes), network IP, and information about how to run the containers. Containers within a Pod share the same network namespace and can communicate via localhost.

How Kubernetes Components Interact:

  1. User Interaction: A user or an automated system (e.g., CI/CD pipeline) sends a desired state (e.g., “deploy 3 replicas of Nginx”) to the kube-apiserver via kubectl or other API clients.
  2. State Storage: The API Server validates the request and stores this desired state in etcd.
  3. Scheduling: The kube-scheduler constantly watches the API Server for new Pods that haven’t been assigned to a node. When it finds one, it evaluates the available worker nodes and decides the best node for that Pod based on various constraints and resource availability. It then updates the Pod’s status in etcd via the API Server, binding the Pod to a specific node.
  4. Node Action: On the selected Worker Node, the kubelet continuously monitors the API Server for PodSpecs assigned to its node. When it sees a new Pod assigned to it, it instructs the Container Runtime (e.g., containerd) to pull the necessary container images and run the containers within the Pod.
  5. Networking: The kube-proxy on the worker node ensures that the Pods have network connectivity and that Services correctly route traffic to the appropriate Pods, both within the cluster and from external sources (depending on the Service type).
  6. Maintaining Desired State: The kube-controller-manager continuously compares the actual state of the cluster (reported by kubelets and stored in etcd) with the desired state (also in etcd). If there’s a discrepancy (e.g., a Pod crashes, a node becomes unhealthy, or the replica count is not met), the relevant controller takes action (e.g., restarting a Pod, provisioning a new Pod, marking a node unhealthy) to bring the cluster back to its desired state.
  7. Cloud Integration (if applicable): The cloud-controller-manager interacts with the cloud provider’s APIs to provision and manage cloud resources (like load balancers, external IPs, storage volumes) as requested by Kubernetes objects.

This distributed architecture makes Kubernetes highly resilient, scalable, and self-healing, as individual component failures can often be tolerated or recovered from without bringing down the entire cluster or its applications.

FastAPI Implementation on Kubernetes

Deploying a FastAPI application on Kubernetes involves several steps, from containerizing your application to defining Kubernetes resources. Here’s a comprehensive guide.

Project Setup: FastAPI Application

First, let’s create a simple FastAPI application.

Project Structure:

fastapi-app/
├── app/
│   └── main.py
├── requirements.txt
└── Dockerfile

app/main.py:

from fastapi import FastAPI
from typing import Union

app = FastAPI()

@app.get("/")
async def read_root():
    return {"message": "Hello from FastAPI on Kubernetes!"}

@app.get("/items/{item_id}")
async def read_item(item_id: int, q: Union[str, None] = None):
    return {"item_id": item_id, "q": q}

@app.get("/health")
async def health_check():
    return {"status": "ok"}

requirements.txt:

fastapi
uvicorn[standard]

Dockerize Your FastAPI Application

Next, you need to containerize your FastAPI application using Docker.

Dockerfile:

# Use a slim Python base image for smaller size
FROM python:3.9-slim-buster

# Set the working directory inside the container
WORKDIR /app

# Copy only the requirements file first to leverage Docker cache
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of your application code
COPY ./app /app/app

# Expose the port your FastAPI app will listen on
EXPOSE 80

# Command to run the FastAPI application using Uvicorn
# Use the exec form for graceful shutdown and lifespan events
# "--proxy-headers" is important when running behind a reverse proxy like Ingress
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80", "--proxy-headers"]

Build and Test the Docker Image (Local):

  • Build:
    docker build -t fastapi-kubernetes-app:1.0.0 .
  • Run (local test):
    docker run -p 8000:80 fastapi-kubernetes-app:1.0.0
  • Test in browser: Open http://localhost:8000 or http://localhost:8000/docs (for FastAPI’s interactive API documentation).
  • Stop the container: docker ps to find the container ID, then docker stop <container_id>.

    Push Docker Image to a Registry

    For Kubernetes to pull your image, it needs to be accessible from a container registry (e.g., Docker Hub, Google Container Registry, Amazon ECR).

      • Log in to your registry (e.g., Docker Hub):
        docker login
      • Tag your image: Replace your-dockerhub-username with your actual Docker Hub username.
        docker tag fastapi-kubernetes-app:1.0.0 your-dockerhub-username/fastapi-kubernetes-app:1.0.0
      • Push the image:
        docker push your-dockerhub-username/fastapi-kubernetes-app:1.0.0

      Kubernetes Manifests

      Now, let’s define the Kubernetes resources needed to deploy your FastAPI application. You’ll typically need at least a Deployment and a Service. For external access, an Ingress is highly recommended.

      Prerequisites for Kubernetes Deployment:

      • Kubernetes Cluster: A running Kubernetes cluster (Minikube for local development, or a cloud-managed service like GKE, EKS, AKS).
      • kubectl: Configured to interact with your cluster.
      • Ingress Controller (for Ingress): If you plan to use an Ingress, you need an Ingress Controller (e.g., Nginx Ingress Controller) installed in your cluster. For Minikube, enable it with minikube addons enable ingress.

      Setting Up Your Local Kubernetes Environment

      For learning and development, a local Kubernetes cluster is the easiest way to get started.

      • Minikube: A popular tool that runs a single-node Kubernetes cluster inside a virtual machine on your local machine.
        • What you need:
          • A hypervisor (e.g., VirtualBox, Hyper-V, KVM, or Docker Desktop’s built-in engine).
          • Minikube.
          • kubectl (the Kubernetes command-line tool).
        • Installation instructions:
          • Windows (using Chocolatey):
            • choco install minikube
            • choco install kubernetes-cli
          • macOS (using Homebrew):
            • brew install minikube
            • brew install kubectl
          • Linux (Ubuntu/Debian):
            • curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
            • sudo install minikube-linux-amd64 /usr/local/bin/minikube
            • Install kubectl (this requires the Kubernetes apt repository to be configured first):
            • sudo apt-get update
            • sudo apt-get install -y kubectl
        • Starting a cluster: minikube start
        • Verifying the cluster: kubectl get nodes

      a. Deployment (fastapi-deployment.yaml)

      A Deployment manages the desired state of your Pods, ensuring a specified number of replicas are running and handling updates.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: fastapi-app-deployment
        labels:
          app: fastapi-app
      spec:
        replicas: 2 # Number of desired Pod replicas
        selector:
          matchLabels:
            app: fastapi-app
        template:
          metadata:
            labels:
              app: fastapi-app
          spec:
            containers:
            - name: fastapi-app
              image: your-dockerhub-username/fastapi-kubernetes-app:1.0.0 # Replace with your image
              ports:
              - containerPort: 80 # The port your FastAPI app listens on inside the container
              livenessProbe: # Checks if the container is still running and healthy
                httpGet:
                  path: /health
                  port: 80
                initialDelaySeconds: 5
                periodSeconds: 5
              readinessProbe: # Checks if the container is ready to serve traffic
                httpGet:
                  path: /health
                  port: 80
                initialDelaySeconds: 10
                periodSeconds: 5
                failureThreshold: 3
              resources: # Define resource requests and limits for better scheduling and stability
                requests:
                  memory: "128Mi"
                  cpu: "100m"
                limits:
                  memory: "256Mi"
                  cpu: "200m"
            imagePullSecrets: # If your image is in a private registry
            - name: regcred # Name of your image pull secret (see optional step below)

      Optional: Image Pull Secret (for private registries)

      If your Docker image is in a private registry (not Docker Hub public), you’ll need to create an image pull secret:

      kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-username> --docker-password=<your-password> --docker-email=<your-email>

      Then, add imagePullSecrets: - name: regcred to your Deployment’s spec.template.spec as shown above.

      b. Service (fastapi-service.yaml)

      A Service provides a stable network endpoint for your Deployment’s Pods.

      apiVersion: v1
      kind: Service
      metadata:
        name: fastapi-app-service
        labels:
          app: fastapi-app
      spec:
        selector:
          app: fastapi-app # Selects Pods with this label
        ports:
          - protocol: TCP
            port: 80 # The port the Service listens on
            targetPort: 80 # The port on the Pod to forward traffic to (your app's port)
        type: ClusterIP # Default type, makes the service only accessible from within the cluster
        # If you want to expose it directly for testing on Minikube or for a simple app:
        # type: NodePort # Exposes the service on a static port on each Node's IP
        # type: LoadBalancer # For cloud providers (e.g., GKE, EKS) to provision an external load balancer

      c. Ingress (fastapi-ingress.yaml)

      For production deployments, Ingress is the preferred way to expose HTTP/HTTPS routes from outside the cluster to services within the cluster. It allows for advanced routing, SSL termination, and hostname-based routing.

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: fastapi-app-ingress
        annotations:
          # Use appropriate Ingress controller annotations, e.g., for Nginx Ingress Controller
          nginx.ingress.kubernetes.io/rewrite-target: /
          # nginx.ingress.kubernetes.io/ssl-redirect: "false" # Use if you don't have SSL setup yet
          # Add Cert-Manager annotations if you're using it for SSL/TLS
          # cert-manager.io/cluster-issuer: letsencrypt-prod
      spec:
        ingressClassName: nginx # Or the name of your Ingress Controller (e.g., "nginx")
        rules:
        - host: api.yourdomain.com # Replace with your desired hostname
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: fastapi-app-service # Name of your Service
                  port:
                    number: 80 # Port of your Service
        # tls: # Uncomment and configure for HTTPS
        # - hosts:
        #   - api.yourdomain.com
        #   secretName: your-tls-secret # Kubernetes Secret containing your TLS certificate and key

      Deploy to Kubernetes

      • Apply the Deployment:
      kubectl apply -f fastapi-deployment.yaml
      • Verify Pods:
      kubectl get deployments
      kubectl get pods -l app=fastapi-app

      Wait for the Pods to be in the Running state.

      • Apply the Service:
      kubectl apply -f fastapi-service.yaml
      • Verify Service:
      kubectl get services fastapi-app-service

      If you used NodePort, you can find the port here. For Minikube: minikube service fastapi-app-service --url.

      • Apply the Ingress (if using):
      kubectl apply -f fastapi-ingress.yaml
      • Verify Ingress:
      kubectl get ingress fastapi-app-ingress

      Note the ADDRESS provided by the Ingress (it might take a moment to provision on cloud providers).

      Accessing Your FastAPI Application

      • Using ClusterIP (internal only): You can access it from another Pod in the same cluster using the service name (fastapi-app-service). For local testing, use port-forwarding:
      kubectl port-forward service/fastapi-app-service 8080:80

      Then, open http://localhost:8080 in your browser.

      • Using NodePort (for testing, especially with Minikube): If you set type: NodePort in your Service:

      minikube service fastapi-app-service --url

      This will give you the URL to access your application.

      • Using Ingress (recommended for production):

      1. Get the Ingress IP/Hostname:
      kubectl get ingress fastapi-app-ingress

      Look for the ADDRESS column.

      2. Update your hosts file (for local testing with a custom domain): If you used a custom host like api.yourdomain.com in your Ingress, you might need to add an entry to your /etc/hosts file (or C:\Windows\System32\drivers\etc\hosts on Windows) mapping the Ingress ADDRESS to your hostname:

      <Ingress_ADDRESS> api.yourdomain.com

      3. Access in browser: Open http://api.yourdomain.com (or https:// if you configured TLS).

      Scaling and Updates

      Scaling your application:

      kubectl scale deployment/fastapi-app-deployment --replicas=5

      Kubernetes will automatically create 3 more Pods.

      Updating your application (rolling update):

      1. Build a new Docker image with a new tag (e.g., fastapi-kubernetes-app:1.0.1).
      2. Update the image field in fastapi-deployment.yaml to the new tag.
      3. Apply the updated deployment:
      kubectl apply -f fastapi-deployment.yaml

      Kubernetes will perform a rolling update, gradually replacing old Pods with new ones without downtime.

      Clean Up

      When you’re done, you can delete the Kubernetes resources:

      kubectl delete -f fastapi-ingress.yaml
      kubectl delete -f fastapi-service.yaml
      kubectl delete -f fastapi-deployment.yaml
      # If you created one
      kubectl delete secret regcred

      If you’re using Minikube and want to stop the cluster:

      minikube stop
      # Or delete it completely
      minikube delete

      This setup provides a robust and scalable way to deploy your FastAPI applications on Kubernetes, leveraging its powerful orchestration capabilities. Remember to adapt the Docker image name, hostnames, and resource requests/limits to your specific needs.
