Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code, you can significantly reduce the delay between writing code and running it in production.
Imagine you’re baking a cake. You have your detailed cooking instructions (your code), ingredients (libraries, frameworks), and baking tools (runtime environment). If you try to bake this cake in different kitchens, you might encounter issues: different oven temperatures, missing ingredients, or incompatible tools. Docker solves this by providing a standardized “kitchen” (the container) that ensures your cake (application) bakes identically, no matter where it’s run.
Key Docker Components
Let’s break down the essential pieces that make Docker so powerful:
- Containers: Imagine these as sleek, self-contained packages for your software. They include everything your application needs to run: your code, the environment it needs, system tools, libraries, and all its settings. What makes them incredibly efficient is that, unlike bulkier virtual machines (VMs), containers share your computer’s core operating system (its kernel). This means they’re super fast to start up and use far fewer resources.
- Docker Images: Think of an image as a read-only blueprint or a master template for building a container. It holds all the instructions and ingredients needed to set up a specific application environment. You can create your own custom images or grab ready-made ones from public libraries, like Docker Hub.
- Dockerfile: This is a simple text file that acts as your recipe for a Docker image. It lists out every instruction, step-by-step, on how to assemble that image. Using a Dockerfile automates the entire image-creation process, making it consistent and ensuring you (or anyone else) get the exact same result every time.
- Docker Engine: This is the heart of Docker, running as a client-server application. Its job is to build, run, and manage all your Docker elements – that includes images, containers, networks, and storage areas.
- Docker Daemon: This is the background service that does all the heavy lifting, executing commands and managing everything Docker-related.
- Docker Client: This is your command-line interface (CLI) – the tool you use to talk to Docker. You type in commands here, and the client sends them straight to the daemon.
- REST API: This is the communication bridge between the client and the daemon. It’s the interface they use to chat and exchange information.
- Docker Hub: Picture Docker Hub as the main cloud library for all things Docker images. It’s a central registry service where you can store your own images, share them with colleagues, or discover and download a vast array of images created by the global community. It’s truly like GitHub, but specifically designed for Docker images.
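To tie these components together, here is a minimal illustrative Dockerfile (a sketch with made-up file names, not a production recipe). The image built from it can be run as a container locally or pushed to Docker Hub:

```dockerfile
# Start from an official Python base image (pulled from Docker Hub)
FROM python:3.10-slim

# Copy the application code into the image
WORKDIR /app
COPY app.py .

# Default command the container runs on start
CMD ["python", "app.py"]
```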
Architecture of Docker
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing containers. The client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. They communicate using a REST API, over a UNIX socket or a network interface.
Here’s a breakdown of the key components and how they interact:
1. Docker Daemon (dockerd)
- The Server: This is a persistent background process that runs on the host machine. It’s the core component of Docker.
- Responsibilities:
  - Builds Docker images: Based on `Dockerfile` instructions.
  - Runs containers: Creates and manages container instances from images.
  - Manages Docker objects: Handles images, containers, networks, and volumes.
  - Handles Docker API requests: Listens for commands from the Docker client.
- Interaction: It communicates with the host operating system’s kernel to perform low-level operations like resource isolation (using Linux Namespaces and Control Groups – cgroups) and managing filesystems (using technologies like Union File Systems).
2. Docker Client (docker CLI)
- The User Interface: This is the primary way users interact with Docker. It’s a command-line tool (the `docker` command).
- Responsibilities:
  - Sends commands (e.g., `docker run`, `docker build`, `docker pull`) to the Docker Daemon.
  - Parses user input and formats it into API requests.
- Interaction: The client communicates with the Docker Daemon via a REST API. This communication can happen over a UNIX socket (default on Linux/macOS) or a network interface.
3. REST API
- The Communication Bridge: This is the interface that the Docker Client (or any other program) uses to communicate with the Docker Daemon.
- Function: It defines the requests and responses that allow the client to instruct the daemon to perform actions and receive information back.
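As a sketch of what this looks like on the wire: the client ultimately sends plain HTTP requests to the daemon's socket. The snippet below is an illustrative sketch, not the real client code — it builds the kind of minimal request that a `docker version` roughly corresponds to (the real client uses a full HTTP library and API versioning).

```python
# Sketch: the Docker CLI talks to the daemon via HTTP over a UNIX socket
# (by default /var/run/docker.sock on Linux). This builds the raw HTTP
# request for the daemon's /version endpoint, without sending it.

DOCKER_SOCKET = "/var/run/docker.sock"  # default daemon socket on Linux

def build_version_request(host: str = "docker") -> bytes:
    """Construct a minimal HTTP/1.1 GET request for the /version endpoint."""
    lines = [
        "GET /version HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_version_request()
print(request.decode("ascii").splitlines()[0])  # prints: GET /version HTTP/1.1
```

Sending these bytes over the UNIX socket (when a daemon is running) would return a JSON response describing the Engine version — the same exchange the CLI performs behind every command.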
4. Docker Registries
- Image Storage: These are services that store and distribute Docker images.
- Examples:
- Docker Hub: The public registry managed by Docker, where you can find official images and share your own.
- Private Registries: Organizations often run their own private registries for security and control over their images.
- Interaction: The Docker Daemon can pull images from registries (e.g., `docker pull ubuntu`) and push images to them (e.g., `docker push my-repo/my-image`).
5. Docker Objects
These are the entities that the Docker Daemon manages:
- Images: Read-only templates with instructions for creating a Docker container. They contain the application, libraries, dependencies, and configurations.
- Containers: Runnable instances of a Docker image. They are isolated environments with their own file system, network interfaces, and processes.
- Networks: Provide a way for containers to communicate with each other and with the outside world. Docker creates virtual networks to enable this.
- Volumes: Used for persisting data generated by and used by Docker containers. They allow data to outlive the container itself.
Architectural Flow (Simplified)
- A user types a command (e.g., `docker run -p 80:80 my-image`) into the Docker Client.
- The Docker Client sends this command as an API request to the Docker Daemon.
- The Docker Daemon receives the request.
- If the image (`my-image`) isn’t available locally, the daemon will pull it from a Docker Registry (like Docker Hub).
- The daemon then uses the image to create a new container.
- It allocates resources (CPU, memory), sets up networking, and ensures process isolation using underlying OS features (namespaces, cgroups).
- The container starts running the specified application.
- The daemon sends back a response to the Docker Client, indicating the status of the operation.
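The flow above can be sketched as a toy Python model — the dictionaries standing in for the registry and the daemon's local image cache are made up for illustration, not Docker's actual implementation:

```python
# Toy model of the daemon's "docker run" flow: pull the image if it's
# missing locally, then create a container instance from it.

registry = {"my-image": {"layers": ["base", "app"]}}  # stands in for Docker Hub
local_images: dict[str, dict] = {}                    # daemon's local image cache
containers: list[dict] = []                           # created containers

def run(image_name: str, port_mapping: str) -> dict:
    # Step 1: pull from the registry if the image isn't cached locally
    if image_name not in local_images:
        local_images[image_name] = registry[image_name]
    # Step 2: create an isolated container instance from the image
    container = {
        "image": image_name,
        "ports": port_mapping,  # e.g. "80:80" maps host:container
        "status": "running",
    }
    containers.append(container)
    return container

c = run("my-image", "80:80")
print(c["status"])  # prints: running
```

The real daemon does the same pull-if-missing check, then asks the kernel for namespaces and cgroups instead of appending to a list.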
How Docker Works
Here’s a breakdown of how Docker streamlines the process of getting your application from development to deployment:
- Write a Dockerfile: First, you’ll define the steps to build your application’s environment and package it. This file acts as a blueprint, outlining everything Docker needs to set up your app’s world.
- Build an Image: Next, the Docker Engine reads your Dockerfile and constructs an image. Think of an image as a snapshot of your application along with all its necessary components—like libraries, system tools, and code—all bundled into a single, layered file system.
- Run a Container: From that image, you can then create and run one or more containers. Each container is an isolated instance of your application, complete with its own dedicated file system, network, and process space, ensuring it runs independently and securely.
- Distribute and Deploy: Finally, this container can be distributed and run on any system that has Docker installed. This crucial step guarantees that your application will perform consistently, whether it’s on your local machine, a testing server, or in a live production environment.
Quick Docker Installation (All Platforms)
For Windows
- Go to: https://docs.docker.com/desktop/install/windows/
- Download Docker Desktop (ensure WSL 2 is enabled).
- Run the `.exe` installer → follow the steps.
- After installation, launch Docker Desktop.
- Open CMD/PowerShell:

docker --version
docker run hello-world
For macOS (Intel or Apple Silicon)
- Go to: https://docs.docker.com/desktop/install/mac/
- Download the right version for your chip (Intel or M1/M2).
- Install via `.dmg` → open Docker.
- Test it in Terminal:

docker --version
docker run hello-world
For Linux (Ubuntu/Debian)
- Run this in terminal:
sudo apt update
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo docker run hello-world
- (Optional) Add user to docker group:
sudo usermod -aG docker $USER
newgrp docker
After Installation
Check Docker is running:
docker info
🔗 Helpful Links:
- Official Docs: https://docs.docker.com/get-docker/
- Troubleshooting: https://docs.docker.com/troubleshoot/
Docker Implementation using FastAPI
1. Project Setup
Let’s begin by creating a basic FastAPI application.
Project Structure:
.
├── app/
│ └── main.py
├── requirements.txt
├── Dockerfile
└── .dockerignore
File Contents:
1. fastapi-docker-app/app/main.py
from fastapi import FastAPI
from pydantic import BaseModel

# Define a simple data model for demonstration
class Item(BaseModel):
    name: str
    price: float
    is_offer: bool | None = None

app = FastAPI(
    title="My Dockerized FastAPI App",
    description="A simple FastAPI application to demonstrate Dockerization.",
    version="1.0.0"
)

@app.get("/")
async def read_root():
    """
    Root endpoint returning a welcome message.
    """
    return {"message": "Hello from FastAPI in a Docker Container!"}

@app.get("/items/{item_id}", response_model=Item)
async def read_item(item_id: int, q: str | None = None):
    """
    Retrieve an item by its ID.
    """
    return {"item_id": item_id, "name": f"Item {item_id}", "price": 10.5, "is_offer": True if q else False}

@app.post("/items/")
async def create_item(item: Item):
    """
    Create a new item.
    """
    return {"message": "Item created successfully", "item": item}

@app.get("/health")
async def health_check():
    """
    Health check endpoint for container monitoring.
    """
    return {"status": "healthy"}
2. fastapi-docker-app/requirements.txt
fastapi==0.111.0
uvicorn==0.30.1
pydantic==2.7.4
gunicorn==22.0.0
3. fastapi-docker-app/Dockerfile
# Stage 1: Build Stage (for installing dependencies)
# Using a specific Python version on a slim Debian base for smaller images
FROM python:3.10-slim-buster AS builder
# Set environment variables
# PYTHONUNBUFFERED ensures that Python output is sent straight to the terminal
# instead of being buffered. Essential for seeing logs in real-time.
ENV PYTHONUNBUFFERED=1
# Set the working directory inside the container
WORKDIR /app
# Copy only the requirements file first to leverage Docker's build cache.
# If requirements.txt doesn't change, this layer won't be rebuilt.
COPY requirements.txt .
# Install Python dependencies.
# --no-cache-dir reduces image size by not storing downloaded packages.
# --upgrade pip ensures pip is up-to-date.
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Stage 2: Production Stage (final, minimal image)
# Uses the same base image but leverages the installed packages from the builder stage
FROM python:3.10-slim-buster
# Set the working directory
WORKDIR /app
# Copy only the necessary installed packages from the builder stage
# Adjust the python3.10 to your Python version if different
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin/uvicorn /usr/local/bin/uvicorn
COPY --from=builder /usr/local/bin/gunicorn /usr/local/bin/gunicorn
# Copy your application code
COPY ./app /app/app
# Expose the port that the FastAPI application will run on
EXPOSE 8000
# Command to run the application using Gunicorn with Uvicorn workers for production
# Gunicorn manages Uvicorn worker processes, providing robustness and concurrency.
# --workers: Number of worker processes (often 2-4 * CPU cores, or based on benchmarking)
# --worker-class uvicorn.workers.UvicornWorker: Specifies Uvicorn as the ASGI worker.
# --bind 0.0.0.0:8000: Binds the server to all network interfaces on port 8000.
# app.main:app: Refers to the `app` FastAPI instance in `app/main.py`.
CMD ["gunicorn", "app.main:app", "--workers", "4", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]
Dockerfile Essentials: Instructions, Descriptions, and How to Use Them
- FROM
  Specifies the base image you’re building from.
  Example: `FROM ubuntu:20.04`
- RUN
  Runs a command in a new layer of the current image and commits the result.
  Example: `RUN apt-get update && apt-get install -y nginx`
- COPY
  Copies files or directories from the host filesystem into the image.
  Example: `COPY . /app`
- WORKDIR
  Sets the working directory for subsequent instructions.
  Example: `WORKDIR /app`
- ENV
  Sets environment variables.
  Example: `ENV NODE_ENV=production`
- EXPOSE
  Informs Docker that the container will listen on specific network ports at runtime.
  Example: `EXPOSE 3000`
- CMD
  Provides default commands and parameters for the container.
  Example: `CMD ["node", "app.js"]`
- ENTRYPOINT
  Configures the container to run as an executable.
  Example: `ENTRYPOINT ["python3", "-m"]`
- VOLUME
  Creates a mount point and marks it for external volumes.
  Example: `VOLUME ["/data"]`
- USER
  Sets the username or UID to use when running the image.
  Example: `USER appuser`
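CMD and ENTRYPOINT are easy to confuse. A common pattern is to combine them: ENTRYPOINT fixes the executable, while CMD supplies default arguments that can be overridden at `docker run` time. A minimal sketch (the image and script names here are made up for illustration):

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY server.py .

# ENTRYPOINT fixes the executable; CMD provides default arguments that
# "docker run <image> <args>" can override without replacing the entrypoint.
ENTRYPOINT ["python", "server.py"]
CMD ["--port", "8000"]
```

With this setup, `docker run <image>` runs `python server.py --port 8000`, while `docker run <image> --port 9000` overrides only the CMD portion.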
4. fastapi-docker-app/.dockerignore
.git
.venv/
__pycache__/
*.pyc
*.pyo
.pytest_cache/
.vscode/
*.env
venv/
Execution
Steps to Create a Virtual Environment and Install Dependencies:
- Navigate to your Project Directory: Open your terminal or command prompt and use the `cd` (change directory) command to go into your main project folder. This is the folder that contains `requirements.txt`.
cd path/to/your/my-fastapi-project
- Create the Virtual Environment: Once you are in your project’s root directory, run the following command. This will create a new directory (usually named `venv` or `.venv`) inside your project folder, which will house your isolated Python environment.
python3 -m venv venv
- Activate the Virtual Environment: After creating the `venv` directory, you need to “activate” it. Activating modifies your shell’s `PATH` variable so that when you run `python` or `pip`, it uses the executables within your virtual environment, not your system-wide ones.
On macOS/Linux:
source venv/bin/activate
On Windows (Command Prompt):
venv\Scripts\activate.bat
On Windows (PowerShell):
venv\Scripts\Activate.ps1
- Install Dependencies from `requirements.txt`: Now that your virtual environment is active, you can install the packages listed in your `requirements.txt` file. `pip` will install them directly into this isolated environment.
pip3 install -r requirements.txt
Docker image
Docker images are built from a Dockerfile, which contains the set of instructions required to containerize an application. A Docker image is platform-independent: it can be built in a Windows environment, pushed to Docker Hub, and pulled by others running different OS environments such as Linux. A Docker image includes the following to run a piece of software:
- Application code.
- Runtime.
- Libraries.
- Environment tools.
Docker images are very lightweight, so they can be ported to different platforms easily.
Components of Docker Image
The following are the terminologies and components related to Docker Image:
- Layers: Immutable filesystem layers stacked to form a complete image.
- Base Image: The foundational layer, often a minimal OS or runtime environment.
- Dockerfile: A text file containing instructions to build a Docker image.
- Image ID: A unique identifier for each Docker image.
- Tags: Labels used to manage and version Docker images.
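The layer model can be illustrated with a small Python sketch: each layer is a read-only mapping of paths to file contents, and the image filesystem is the stack of layers, with upper layers shadowing lower ones. This uses `collections.ChainMap` as a stand-in for a union filesystem; real images use overlay filesystems, but the lookup rule is the same idea.

```python
from collections import ChainMap

# Each layer is an immutable mapping of path -> content.
base_layer = {"/bin/sh": "shell", "/etc/os-release": "debian"}
runtime_layer = {"/usr/bin/python3": "python 3.10"}
app_layer = {"/app/main.py": "print('hello')", "/etc/os-release": "customized"}

# The image filesystem is the layers stacked top-down: an upper layer
# shadows files with the same path in the layers beneath it.
image_fs = ChainMap(app_layer, runtime_layer, base_layer)

print(image_fs["/etc/os-release"])  # prints: customized (upper layer wins)
print(image_fs["/bin/sh"])          # prints: shell (falls through to base)
```

Because lower layers never change, Docker can share them between images and cache them between builds, which is why layer ordering in a Dockerfile matters for build speed.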
Build docker image
The following command is used to build a Docker image:
docker build -t your_image_name:tag -f path/to/Dockerfile .
- `docker build`: Initiates the build process.
- `-t your_image_name:tag`: Gives the image you’re creating a name and, optionally, a tag.
- `-f path/to/Dockerfile`: Gives the location of the Dockerfile. Provide the right path if it’s not in the current directory. The trailing `.` (dot) sets the build context to the current working directory.
docker build -t dockereg .
Run the Docker container
docker run -p 8000:8000 dockereg
You can check http://localhost:8000
Health check endpoint: http://localhost:8000/health
Docker Cheat Sheet (Simplified)
Image Commands
- `docker build -t <name> .` → Builds a Docker image from a Dockerfile in the current directory and tags it with `<name>`.
- `docker images` → Lists all Docker images available locally.
- `docker rmi <image_id>` → Removes a specific Docker image.
- `docker tag <image_id> <new_name>` → Tags an image with a new name or version.
- `docker pull <image>` → Downloads an image from Docker Hub.
- `docker push <image>` → Uploads an image to Docker Hub.
Container Commands
- `docker run <image>` → Runs a container from the given image.
- `docker run -p 5000:5000 <image>` → Maps port 5000 on the host to port 5000 in the container.
- `docker run -d <image>` → Runs a container in detached (background) mode.
- `docker run --name <name> <image>` → Assigns a specific name to the running container.
- `docker exec -it <container> bash` → Opens an interactive shell session inside a running container.
- `docker ps` → Lists only running containers.
- `docker ps -a` → Lists all containers (including stopped ones).
- `docker stop <container>` → Stops a running container.
- `docker start <container>` → Starts a stopped container.
- `docker rm <container>` → Removes a container.
- `docker logs <container>` → Displays logs from a container.
Volume & Network
- `docker volume ls` → Lists all Docker volumes.
- `docker volume create <name>` → Creates a new volume.
- `docker network ls` → Lists all Docker networks.
- `docker network create <name>` → Creates a new network.
- `docker run -v volume_name:/app <image>` → Mounts a volume into the container’s `/app` directory.
Cleanup Commands
- `docker system prune` → Removes all stopped containers, unused networks, dangling images, and build cache.
- `docker rmi $(docker images -q)` → Removes all Docker images.
- `docker rm $(docker ps -aq)` → Removes all containers (add `-f` to force-remove running ones).
- `docker volume rm $(docker volume ls -q)` → Removes all volumes.