Docker Overview
- What Is Docker?
- Docker is a platform that allows developers to package, ship, and run applications inside containers.
- A container includes everything an application needs:
- source code
- runtime (Python, Node.js, Java, etc.)
- system tools
- libraries & dependencies
- configuration
- Containers let your app run consistently across different machines:
- developer laptops
- CI/CD servers
- development/production servers
- cloud platforms
- Think of Docker as “lightweight virtual machines” that start in milliseconds and add very little overhead.
- Why Use Docker?
- Docker solves the infamous "It works on my machine" problem.
- Docker has become the standard for building and deploying modern applications.
- Understanding Images vs Containers
- Docker Image:
- a read-only template that defines a container
- contains your app + dependencies
- built using a Dockerfile
- Docker Container:
- a running instance of an image
- you can run many containers from one image
- containers can be started, stopped, removed
- Analogy:
Image = class
Container = object created from that class
- What Is a Dockerfile?
- A Dockerfile is a text file with instructions on how to build a Docker image.
- Example:
FROM python:3.12-alpine
COPY app.py /app.py
CMD ["python", "app.py"]
- Every line creates a new layer in the final Docker image.
- Basic Docker Commands
- Pull an image:
docker pull python:3.12
- List images:
docker images
- Run a container:
docker run python:3.12
- List running containers:
docker ps
- Stop a container:
docker stop <container_id>
- Remove a container:
docker rm <container_id>
- Build a Docker image:
docker build -t myapp .
- Layered Architecture of Docker
- Docker images are composed of layers:
- each command in the Dockerfile creates a layer
- layers are cached → faster builds
- shared layers reduce storage usage
- Example:
FROM python
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
- Changing the last line only rebuilds the last layer → faster development.
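To exploit this caching, copy the dependency list before the rest of the source, so the install layer is reused across code changes. A sketch (requirements.txt and app.py are assumed project names):

```
FROM python:3.12-alpine
WORKDIR /app
# This layer is rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Source edits invalidate only the layers from here down
COPY . .
CMD ["python", "app.py"]
```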
- Bind Mounts vs Volumes
- Two ways to persist or share data:
- Bind Mounts:
- directly map a folder on your host into the container
- good for development
docker run -v $(pwd):/app myapp
- Volumes:
- managed by Docker
- best for databases and production
docker volume create mydata
- Docker Compose Overview
- Docker Compose is a tool for defining and running multi-container applications.
- Useful for apps needing:
- web server
- database
- cache
- background workers
- Example docker-compose.yml:
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: postgres:16
- Start everything:
docker compose up
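A slightly fuller compose file showing how the web service reaches the database by its service name (service names, credentials, and the volume below are illustrative assumptions, not part of the example above):

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      # "db" resolves via Compose's internal network
      DATABASE_URL: postgres://app:secret@db:5432/appdb
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```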
- Docker Hub
- Docker Hub is a cloud registry that hosts container images.
- You can:
- pull thousands of official images
- push your own images
- share with teams or make them public
- Push an image:
docker push username/myapp
- Docker in Production
- Docker containers are widely used in production because:
- fast deployment
- easy rollbacks
- isolated environments
- consistent builds
- Often combined with:
- NGINX
- reverse proxies
- Kubernetes orchestration
- cloud services (AWS ECS, Azure ACI, Google Cloud Run)
- Best Practices
- Use small base images (e.g., alpine)
- Set .dockerignore to exclude unnecessary files
- Keep containers single-purpose (“one container = one process”)
- Use multi-stage builds to reduce final image size
- Use health checks for monitoring
- Tag images properly (latest, version numbers)
- Use docker-compose for development; orchestration tools for production
Docker Hub
- What Is Docker Hub?
- Docker Hub is the official cloud-based registry service provided by Docker.
- It is used to:
- store Docker images
- share images publicly or privately
- distribute images for development and production
- automate image builds
- pull ready-to-use software images (databases, languages, tools)
- Docker Hub is the default registry used when you run:
docker pull ubuntu
- Using Docker Hub Without Logging In
- You can pull images from Docker Hub without an account:
docker pull python:3.12
docker pull nginx
docker pull redis
- Creating a Docker Hub Account
- You can create a free account to:
- push your images
- create private repositories
- configure automated builds
- manage collaborators
- Login from CLI:
docker login
- Pushing an Image to Docker Hub
- Step 1: Build your image locally
docker build -t myapp .
- Step 2: Tag the image with your Docker Hub username
docker tag myapp username/myapp:latest
- Step 3: Push to Docker Hub
docker push username/myapp:latest
- Now anyone (or only your team, if private) can pull it:
docker pull username/myapp:latest
- Pulling Images from Docker Hub
docker pull ubuntu
docker pull node:22
docker pull redis:7-alpine
- Run immediately:
docker run -it ubuntu bash
- Repositories and Tags
- A Docker Hub repository contains multiple versions of an image.
- Example: python repository tags:
3.12
3.11-slim
3.12-alpine
latest
- A tag identifies a specific variant of an image:
docker pull python:3.12
docker pull python:3.12-slim
docker pull python:alpine
- If you omit the tag:
docker pull python
- You get the latest tag.
Docker Images
- How Do Docker Images Work?
- Docker images are built in layers.
- Each instruction in a Dockerfile creates a new layer:
FROM ubuntu
RUN apt-get update
RUN apt-get install python3
COPY app.py /app/app.py
- Layers are cached → faster builds.
- If a layer does not change, Docker reuses it.
- Base Images vs Child Images
- Base Image:
- an image that doesn't depend on any other image
- examples: ubuntu, alpine, scratch
- Child Image:
- an image built from another image using FROM
- example:
FROM python:3.12
- Most applications use child images.
- Viewing Docker Images
- List all local images:
docker images
- Remove an image:
docker rmi image_id
- Inspect image metadata:
docker inspect image_id
- Show image history (layers):
docker history image_id
- Building Docker Images
- Images are built from a Dockerfile.
- Basic build:
docker build -t <name> .
-t <name> gives your image a name.
- Example Dockerfile:
FROM python:3.12-alpine
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
- Tagging Docker Images
- An image can have multiple tags.
- Tags identify versions:
docker tag myapp myapp:v1
docker tag myapp username/myapp:latest
- Common tags:
| Docker Image Tag | Description |
| --- | --- |
| latest | Default tag; points to the most recent stable image. |
| 1.0, 2.3.4 | Specific version tags for consistent and reproducible builds. |
| alpine | Lightweight image variant based on Alpine Linux. |
| slim | Minimal-dependency image variant, smaller than the default. |
- Pulling Docker Images
docker pull python:3.12
docker pull node:20-slim
docker pull nginx:alpine
- Pull default (latest):
docker pull redis
- Running Containers from Images
- Every container is created from an image:
docker run -it ubuntu bash
- The container runs as an isolated process based on the image’s filesystem.
- Layered Filesystem Explained
- Images use a layered filesystem (UnionFS):
- Bottom layers = read-only
- Top layer (container) = writable
- This means:
- many containers share the same image layers
- saves disk space
- speeds up builds
- Example layer structure:
Layer 1: FROM python:3.12
Layer 2: WORKDIR /app
Layer 3: COPY . .
Layer 4: RUN pip install -r requirements.txt
Layer 5: CMD ["python", "main.py"]
- Best Practices for Docker Images
- Use small base images like alpine when possible.
- Use .dockerignore to exclude unnecessary files:
node_modules/
.git/
__pycache__/
- Keep layers minimal to reduce image size.
- Pin versions to ensure reproducible builds:
FROM node:20.9-slim
- Use multi-stage builds for production apps.
- Do not store secrets (API keys, passwords) inside images.
- Multi-Stage Builds (Advanced)
- Used to create small, optimized images.
FROM node:20 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
- Benefits:
- final image contains only what is needed to run
- build tools stay in the builder stage
- reduces size dramatically
- Summary
- Docker images are read-only templates used to create containers.
- They are built in layers, making them fast, efficient, and easy to reuse.
- Images are created from Dockerfiles and can be tagged, pushed, or pulled from Docker Hub.
- With best practices (small base images, multi-stage builds), images become more secure and efficient.
Docker Containers
- What Are Docker Containers?
- A Docker container is a running instance of a Docker image.
- It provides:
- an isolated filesystem
- a lightweight runtime environment
- its own processes and network stack
- predictable behavior across machines
- Multiple containers can run from the same image.
- Creating and Running Containers
- Run a container from an image:
docker run ubuntu
- Run interactively:
docker run -it ubuntu bash
- Run in detached mode (background):
docker run -d nginx
- Run and map ports:
docker run -p 8080:80 nginx
- Run with a name:
docker run --name mycontainer ubuntu
- Viewing and Managing Containers
- List running containers:
docker ps
- List all containers (including stopped):
docker ps -a
- Stop a container:
docker stop mycontainer
- Start a container:
docker start mycontainer
- Remove a container:
docker rm mycontainer
- Container File System
- Containers use a layered filesystem:
- Image layers = read-only
- Container layer = writable
- This means:
- modifications exist only inside that container
- destroying the container removes its write layer
- Inspect layers:
docker history myimage
- Executing Commands Inside Containers
- Run a command in a running container:
docker exec mycontainer ls
- Start an interactive session:
docker exec -it mycontainer bash
- Container Networking
- Docker containers can communicate via built-in networks:
| Docker Network Mode | Description |
| --- | --- |
| bridge | Default network mode; containers get a private internal network and can communicate via virtual interfaces. |
| host | Shares the host's network stack; container ports map directly to the host's network without NAT. |
| none | Disables networking entirely; container has no network interfaces except loopback. |
- List networks:
docker network ls
- Connect container to network:
docker network connect mynet mycontainer
- Container Volumes
- Containers lose data if destroyed (ephemeral).
- Use volumes for persistent data:
docker run -v myvolume:/data myimage
- Types of storage:
| Storage Type | Description |
| --- | --- |
| Volumes | Managed entirely by Docker; stored in Docker’s internal storage location. |
| Bind Mounts | Maps a directory or file from the host filesystem directly into the container. |
- List volumes:
docker volume ls
- Container Lifecycle
- A container lifecycle includes:
- create
- start
- pause (optional)
- stop
- restart
- remove
- Lifecycle example:
docker create myimage
docker start container_id
docker stop container_id
docker rm container_id
- Container Logs
- View container logs:
docker logs mycontainer
- Follow logs (like tail -f):
docker logs -f mycontainer
- Cleaning Up Containers
- Remove all stopped containers:
docker container prune
- Remove everything (CAUTION):
docker system prune -a
Dockerfile Anatomy
- What Is a Dockerfile?
- A Dockerfile is a plain-text script containing step-by-step instructions for building a Docker image.
- Each instruction creates a new layer in the final image.
- Build an image from a Dockerfile:
docker build -t myapp .
- The Dockerfile must be named Dockerfile (default) unless you specify:
docker build -f CustomFile -t myapp .
- The Structure of a Dockerfile
- A Dockerfile is a sequence of instructions.
- Common instructions:
| Instruction | Description |
| --- | --- |
| FROM | Specifies the base image. |
| RUN | Executes commands during the build process. |
| COPY | Copies files from the host into the image. |
| ADD | Like COPY but supports remote URLs and automatic extraction (use only when necessary). |
| WORKDIR | Sets the working directory for subsequent instructions. |
| EXPOSE | Documents the network ports the container listens on. |
| CMD | Default command executed when the container starts. |
| ENTRYPOINT | Defines the main entrypoint process of the container. |
| ENV | Sets environment variables inside the image. |
| ARG | Defines build-time variables available only during build. |
| VOLUME | Specifies mount points for data storage. |
| USER | Changes the user for subsequent instructions. |
| LABEL | Adds metadata to the image. |
| HEALTHCHECK | Defines a command that checks if the container is healthy. |
- All Dockerfiles start with FROM.
- FROM — Choosing a Base Image
- The first instruction must specify a base image:
FROM python:3.12-slim
- To start completely from zero:
FROM scratch
- Used for extremely small images.
- WORKDIR — Setting Working Directory
- Sets the directory for the following commands:
WORKDIR /app
- If the directory does not exist, Docker creates it.
- COPY and ADD — Adding Files
- COPY copies files from host → image:
COPY . /app
- ADD should be avoided unless needed:
- It can fetch URLs
- It auto-extracts tar archives
- Prefer COPY for normal usage.
- RUN — Execute Commands During Build
- Used to install dependencies or configure the environment:
RUN apt-get update && apt-get install -y python3
- Each RUN creates a new layer.
- Best practice: combine commands to reduce layers:
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get clean
- ENV — Environment Variables
- Define environment variables accessible inside the container:
ENV PORT=8080
ENV MODE=production
- Useful for configuration.
- ARG — Build-Time Variables
- These exist only during the build stage (not at runtime):
ARG VERSION=1.0
- You can pass them:
docker build --build-arg VERSION=2.0 -t app:2.0 .
- EXPOSE — Documenting Port Usage
- Documents the port your app uses:
EXPOSE 8080
- IMPORTANT: It does not actually publish ports.
- Publish using:
docker run -p 8080:8080 app
- CMD — The Default Container Command
- CMD specifies the command to run when the container starts:
CMD ["python", "main.py"]
- Arguments passed to docker run override CMD:
docker run app python other.py
- ENTRYPOINT — Forced Command
- ENTRYPOINT defines the main binary for the container:
ENTRYPOINT ["python"]
- Combined with CMD:
ENTRYPOINT ["python"]
CMD ["main.py"]
- This runs:
python main.py
- Unlike CMD, ENTRYPOINT is not overridden by docker run arguments; it can only be replaced with the --entrypoint flag.
- VOLUME — Persistent Data Storage
VOLUME ["/data"]
- Useful for databases or logs.
- USER — Change the Running User
- Run the container as a non-root user for security:
USER appuser
- Helps protect your system if the container is compromised.
- LABEL — Metadata
- Labels add metadata to images:
LABEL maintainer="someone@example.com"
LABEL version="1.0"
- Used in CI/CD pipelines, orchestration, and monitoring.
- HEALTHCHECK — Container Health Verification
- Periodically test if your app is healthy:
HEALTHCHECK CMD curl --fail http://localhost:8080 || exit 1
- This helps orchestrators restart unhealthy containers.
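HEALTHCHECK also accepts timing flags; a sketch (the /health endpoint and port 8080 are assumptions about your app):

```
# Probe every 30s, fail a probe after 3s, mark unhealthy after 3 failures
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl --fail http://localhost:8080/health || exit 1
```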
- Full Dockerfile Example
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV PORT=8000
EXPOSE 8000
CMD ["python", "main.py"]
- This Dockerfile:
- uses a Python base image
- installs dependencies
- copies the application
- sets environment variables
- documents exposed port
- runs the app
Introduction to Docker Commands
- What Are Docker Commands?
- Docker provides a powerful command-line interface (CLI) for interacting with images, containers, networks, and volumes.
- The Docker CLI follows the structure:
docker <command> <subcommand> [options]
- Examples:
docker run
docker build
docker pull
docker exec
- Types of Docker Commands (High-Level Overview)
- Container Management
| Command | Description |
| --- | --- |
| docker run | Create and start a new container. |
| docker start | Start an existing stopped container. |
| docker stop | Stop a running container. |
| docker restart | Restart a container. |
| docker exec | Run a command inside a running container. |
| docker logs | View container logs. |
- Image Management
| Command | Description |
| --- | --- |
| docker build | Build a Docker image from a Dockerfile. |
| docker pull | Download an image from a registry. |
| docker push | Upload an image to a registry. |
| docker images | List local Docker images. |
| docker rmi | Remove one or more images. |
- Volume Management
| Command | Description |
| --- | --- |
| docker volume create | Create a volume. |
| docker volume ls | List volumes. |
| docker volume inspect | Inspect volume configuration. |
- Network Management
| Command | Description |
| --- | --- |
| docker network create | Create a network. |
| docker network ls | List networks. |
| docker network inspect | Display detailed network information. |
- System Cleanup
| Command | Description |
| --- | --- |
| docker system prune | Remove unused containers, networks, images, and cache. |
| docker container prune | Remove all stopped containers. |
| docker volume prune | Remove all unused volumes. |
- System Commands
- Show system-wide information:
docker info
- Show disk usage:
docker system df
- Clean unused data:
docker system prune
- Remove everything (images, volumes, containers):
docker system prune -a
- Docker Help Commands
docker help
- See help for a specific command:
docker run --help
- Print Docker version:
docker --version
Understanding the docker run Command
- What Is It?
- The docker run command is used to create and start a new container from an image.
- Basic syntax:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
- Running a Simple Container
docker run ubuntu echo "Hello Docker"
- Explanation:
ubuntu is the image used
echo "Hello Docker" is the command executed inside the container
- This runs the command in a temporary container and exits immediately.
-it — Interactive Mode
docker run -it ubuntu bash
- Explanation:
-i keeps the STDIN open
-t allocates a pseudo-TTY (interactive terminal)
ubuntu is the image
bash runs Bash shell inside container
- You get an interactive shell inside the container.
-d — Run in Detached Mode (Background)
docker run -d nginx
- Explanation:
-d runs container in background
nginx is the image that runs a web server
- The container runs in background without attaching to your terminal.
-p — Publishing Ports
docker run -p 8080:80 nginx
- Explanation:
-p maps host port to container port
8080 is the host port (your machine)
80 is the container port
nginx is the web server which is listening on port 80 internally
- You access the Nginx webpage via:
http://localhost:8080
-v — Mounting Volumes
docker run -v /host/data:/container/data ubuntu
- Explanation:
-v mounts the volume
/host/data is the directory on host machine
/container/data is the directory inside container
- Changes inside the container appear on the host, and vice versa.
--name — Naming a Container
docker run --name mynginx -d nginx
- Explanation:
--name mynginx assigns a custom name
-d to enter the detached mode
nginx is the image
- Named containers are easier to manage.
-e — Setting Environment Variables
docker run -e MODE=production -e PORT=5000 myapp
- Explanation:
-e MODE=production sets environment variable MODE
-e PORT=5000 sets environment variable PORT
myapp is the image
- The app inside the container can read these values.
--rm — Auto-remove After Exit
docker run --rm ubuntu echo "temp"
- Explanation:
--rm deletes the container after it exits
ubuntu is the image
echo "temp" is the temporary command
- Useful for running one-time commands.
-w — Working Directory
docker run -w /app ubuntu pwd
- Explanation:
-w /app sets (temporarily overrides) the working directory inside the container
ubuntu is the image
pwd is the command executed inside /app
--network — Attach Container to a Network
docker run --network mynet nginx
- Explanation:
--network mynet attaches the container to mynet
nginx is the image
- Useful for multi-container applications.
--entrypoint — Override the Default Entry Command
docker run --entrypoint ls ubuntu -l /
- Explanation:
--entrypoint ls replaces the image's default entrypoint with ls
ubuntu is the image
-l / are arguments for ls
- Useful for debugging containers with custom entrypoints.
--cpus and --memory — Resource Limiting
docker run --cpus="1.5" --memory="512m" myapp
- Explanation:
--cpus="1.5" limits container to 1.5 CPU cores
--memory="512m" limits container to 512 MB RAM
myapp is the image
- Important for production deployments.
- Combining Multiple Options
- Example (common real-world):
docker run -d \
-p 8080:80 \
-v /host/www:/usr/share/nginx/html \
-e ENV=production \
--name webserver \
nginx
- Explanation:
-d runs in the background
-p 8080:80 maps host 8080 → container 80
-v /host/www:/usr/share/nginx/html mounts the website files
-e ENV=production sets the environment variable
--name webserver sets the container name
nginx is the image
- This starts a complete Nginx web server with custom settings.
Understanding the docker start Command
- What Is docker start?
- The docker start command is used to start an existing (stopped) container.
- Unlike docker run:
docker run creates a new container from an image
docker start starts an already created container
- It does NOT create new containers.
- Basic syntax:
docker start [OPTIONS] CONTAINER
- Starting an Existing Container
docker start mycontainer
- Explanation:
mycontainer is the name (or ID) of a previously created container
- The container resumes running from its configured command or entrypoint.
- List Containers Before Starting Them
- Show only stopped containers:
docker ps -a
- Example output:
CONTAINER ID IMAGE COMMAND STATUS
abcd1234 nginx "nginx…" Exited (0)
- Now you can start it:
docker start abcd1234
-a — Attach to the Container Output
- The -a flag attaches your terminal to the container’s STDOUT/STDERR.
- Example:
docker start -a mycontainer
- Explanation:
-a shows the container output in your terminal
mycontainer is the container name or ID
- Useful when container prints logs or immediately exits.
-i — Attach Standard Input (STDIN)
- Use this when restarting a container that expects terminal input.
- Example:
docker start -i mycontainer
- Explanation:
-i keeps STDIN open for the container
mycontainer is the name of container
- This is similar to docker run -it, but only works if the container was originally created with interactive capabilities.
- Starting Multiple Containers at Once
docker start container1 container2 container3
- Explanation:
- You can pass multiple names/IDs
- Docker starts all of them in sequence
- Useful for restarting a multi-container setup.
- Check Status After Starting
- Inspect running containers:
docker ps
- Example output:
CONTAINER ID IMAGE STATUS
abcd1234 nginx Up 10 seconds
- This confirms the container is running.
- Difference Between docker start and docker restart
docker start:
- Starts stopped containers
- Does nothing if already running
docker restart:
- Stops the container first
- Then starts it again
- Works even if container is already running
- Example:
docker restart mycontainer
- Example Workflow: Create with run, Start Again with start
- Step 1: Create a container
docker run --name testcontainer ubuntu sleep 1000
- Container sleeps for 1000 seconds
- Step 2: Stop the container
docker stop testcontainer
- Step 3: Start it again
docker start testcontainer
- This starts the original container without recreating it.
Understanding the docker stop Command
- What Is docker stop?
- The docker stop command gracefully stops one or more running containers.
- Stopping a container means:
- Docker sends a SIGTERM signal to the process
- If it doesn’t exit within 10 seconds (default), Docker sends SIGKILL
- Basic syntax:
docker stop [OPTIONS] CONTAINER [CONTAINER...]
- This is the preferred way to stop a running container.
- Stopping a Running Container
docker stop mycontainer
- Explanation:
mycontainer is the container name or ID
- Docker attempts a graceful shutdown via SIGTERM.
- Stopping Multiple Containers
- You can stop multiple containers at once:
docker stop container1 container2 container3
- Docker stops each in sequence.
- Understanding Signal Workflow
- When stopping a container:
- Step 1: Docker sends SIGTERM
- Step 2: Container has time to exit cleanly (default 10 seconds)
- Step 3: If still running → Docker sends SIGKILL
- This prevents abrupt shutdowns unless necessary.
-t — Changing the Timeout Before Force-Kill
- Default timeout: 10 seconds.
- You can change this timeout:
docker stop -t 3 mycontainer
- Explanation:
-t 3 waits only 3 seconds before sending SIGKILL
mycontainer is the container to stop
- What Happens to Processes Inside the Container?
- On docker stop, the main process (PID 1 inside the container) receives:
SIGTERM → graceful shutdown
SIGKILL → forced kill if shutdown fails
- If the main process handles SIGTERM correctly (web servers, applications), shutdown is clean.
- If the process ignores signals → Docker eventually kills it.
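This signal contract can be sketched without Docker using a plain shell trap, which is what a well-behaved entrypoint script running as PID 1 would register (illustrative sketch, not a real entrypoint):

```shell
# Register a handler for SIGTERM, as a well-behaved PID 1 process should
graceful=0
trap 'graceful=1' TERM

# Simulate what `docker stop` does: deliver SIGTERM to this process
kill -TERM $$

# The trap runs at the next command boundary; a real entrypoint would
# now close connections, flush buffers, and exit cleanly
echo "graceful=$graceful"
```

A process that omits the trap would instead be force-killed after the timeout.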
- Stopping All Running Containers
- Example (works via shell expansion):
docker stop $(docker ps -q)
- Explanation:
docker ps -q outputs only container IDs
docker stop ... stops all container IDs given
- This is a commonly used bulk stop command.
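The `$(...)` mechanics can be demonstrated without a running Docker daemon; here printf stands in for docker ps -q and echo for docker stop (purely illustrative):

```shell
# printf stands in for `docker ps -q`, which prints one container ID per line
ids=$(printf "abc123\ndef456")

# Unquoted expansion word-splits the IDs into separate arguments,
# exactly how `docker stop $(docker ps -q)` passes every ID to docker stop
echo stop $ids
```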
- Difference Between docker stop and docker kill
docker stop → graceful
- tries SIGTERM first
- forces shutdown only if needed
docker kill → force kill
- immediately sends SIGKILL
- no graceful shutdown
- Example:
docker kill mycontainer
- Only use kill if stop does not work.
- Example Workflow: Stop → Start → Attach
- Step 1: Run a container
docker run -d --name web nginx
- Step 2: Stop the container
docker stop web
- Step 3: Start again
docker start web
- Step 4: View logs
docker logs web
Understanding the docker restart Command
- What Is docker restart?
- The docker restart command is used to stop and then start one or more containers in a single operation.
- It is essentially the combination of:
docker stop [container]
docker start [container]
- Basic syntax:
docker restart [OPTIONS] CONTAINER [CONTAINER...]
- Useful when you want the container to restart with updated configurations or refreshed network connections.
- Restarting a Container
docker restart mycontainer
- Explanation:
mycontainer is the container name or ID
- Docker performs:
SIGTERM → graceful stop
SIGKILL → after timeout (default 10 sec, configurable just like in docker stop)
- Then starts the container again
- Using Timeout with -t
- You can customize the shutdown timeout before the restart begins.
- Example:
docker restart -t 3 mycontainer
- Explanation:
-t 3 waits 3 seconds before force-killing during stop
mycontainer is the container to restart
- Useful for applications needing faster restart cycles.
- Restarting Multiple Containers
- You can restart multiple containers at once:
docker restart web app db
- Explanation: Restarts web, app, and db containers sequentially
- Useful for multi-container environments.
- Example Workflow: Updating Configuration and Restarting
- Step 1: Modify a config file inside a volume
vim /host/config/nginx.conf
- Step 2: Restart Nginx container
docker restart nginx-server
- Step 3: Confirm status
docker ps
- Example: Fast Restart with Custom Timeout
- Some applications take too long to shut down gracefully.
- Use a shorter timeout:
docker restart -t 1 backend
- Meaning: wait at most 1 second before restarting.
Understanding the docker exec Command
- What Is docker exec?
- The docker exec command is used to run a new command inside an already running container.
- It does NOT start a new container — instead, it:
- executes a process in an existing one
- allows debugging, inspection, and tooling
- can open an interactive shell
- Basic syntax:
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
- Running a Simple Command Inside a Container
docker exec mycontainer ls /
- Explanation:
mycontainer is the name or ID of running container
ls / is the command executed inside the container's filesystem
- Useful for checking files or verifying application structure.
-it — Interactive Shell Inside the Container
- Example (open a Bash shell):
docker exec -it mycontainer bash
- If the container uses Alpine Linux (no bash installed):
docker exec -it mycontainer sh
- Explanation:
-i keeps STDIN open
-t allocates an interactive terminal (TTY)
bash or sh is the shell inside the container
- This is the most common use of docker exec — for debugging and exploration.
-d — Execute in Detached Mode
- Run a command in the background:
docker exec -d mycontainer touch /tmp/file.txt
- Explanation:
-d runs command in background
touch /tmp/file.txt is the command executed without blocking your terminal
- Useful for administrative tasks during runtime.
- Executing Commands as Another User (-u)
- Run as root (default user is often root, but not always):
docker exec -u root mycontainer whoami
- Run as a specific user inside container:
docker exec -u www-data mycontainer ls /var/www
- Explanation:
-u specifies user
www-data is the user inside container
- Useful for permission testing or debugging web servers.
- Passing Environment Variables Using env
- Newer Docker versions also accept -e with docker exec; a portable alternative is to wrap the command with env:
docker exec mycontainer env MODE=dev sh -c 'echo $MODE'
- Explanation:
env MODE=dev creates a temporary environment variable
sh -c runs shell with command inside quotes
- Executing Long-Running Processes
docker exec myapp python script.py
- The process runs alongside the container’s main process.
- This is helpful for maintenance scripts.
- Checking Process Status Inside Container
- You can use ps or similar tools:
docker exec myapp ps aux
- Useful for debugging memory or CPU usage of running processes.
- Example: Inspect an Nginx Web Server Container
docker exec -it webserver cat /etc/nginx/nginx.conf
- Restart Nginx inside container:
docker exec webserver nginx -s reload
- Explanation:
nginx -s reload hot-reloads config
- Exec runs it inside container
- Example: Debugging a Python App
- View logs stored inside container:
docker exec myapp tail -n 50 /app/logs/error.log
- Restart worker process:
docker exec myapp pkill -f worker.py
- Important Limitations of docker exec
- Cannot exec into stopped containers → must use docker start first.
- Does not modify or change the Dockerfile / image.
- Commands executed via exec disappear when the container stops unless data is saved.
Understanding the docker logs Command
- What Is docker logs?
- The docker logs command displays logs generated by a running or stopped container.
- Logs include anything written to:
STDOUT (standard output)
STDERR (standard error)
- Basic syntax:
docker logs [OPTIONS] CONTAINER
- Useful for debugging application output, errors, and behavior.
- Viewing Logs of a Container
docker logs mycontainer
- Explanation:
mycontainer is the container name or ID
- Shows all logs since the container started, printed in chronological order.
-f — Follow Logs in Real Time
docker logs -f mycontainer
- Explanation:
-f follows log output (like tail -f)
- Used to monitor apps in real time (web servers, workers, cron jobs).
--tail — View Only the Last X Lines
- Example: see only the last 50 lines
docker logs --tail 50 mycontainer
- Useful for large log output.
-t — Show Timestamps
docker logs -t mycontainer
- Helps correlate logs with events or external systems.
- Combine Options: Follow + Tail + Timestamp
docker logs -f --tail 20 -t mycontainer
- This means:
--tail 20 shows last 20 lines
-t includes timestamps
-f continues watching logs live
- This is the most common real-time monitoring pattern.
- Viewing Logs from Stopped Containers
- Docker stores logs even after the container stops.
- Example:
docker logs my-stopped-container
- Useful for debugging crashes or unexpected exits.
- Example: Debugging a Crash
- Step 1: Check container status:
docker ps -a
- Step 2: View logs:
docker logs myapp
- Step 3: Look for exceptions, Python stack traces, Nginx errors, etc.
- Log Output from a Web Server (Example)
docker logs nginx-server
- Shows access logs + error logs because both go to STDOUT/STDERR.
- Filtering Output with
grep (Shell Feature)
docker logs myapp | grep "ERROR"
- Explanation:
- This is NOT a Docker feature — just shell piping
- Filters logs for the keyword
ERROR
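Because this is plain shell piping, the same filtering works on any text stream; a Docker-free demo:

```shell
# Simulate three log lines and keep only the ones containing "ERROR"
printf 'INFO start\nERROR db down\nINFO done\n' | grep "ERROR"
# → ERROR db down
```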
- Difference Between
docker logs and docker exec for Log Files
docker logs only shows STDOUT/STDERR.
- If logs are stored in files inside container:
docker exec myapp cat /app/logs/server.log
- Many applications log internally and not to STDOUT — these logs require
docker exec.
- Where Are Docker Logs Stored on Host?
- Under the hood, Docker stores logs in:
/var/lib/docker/containers/[ID]/[ID]-json.log
- But you rarely need to access this manually.
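Each line in that file is one JSON object in Docker's default json-file logging format; a typical entry looks like this (values are illustrative):

```json
{"log":"Server started on port 80\n","stream":"stdout","time":"2024-05-01T10:30:00.000000000Z"}
```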
Understanding the docker build Command
- What Is
docker build?
- The
docker build command is used to create a Docker image from a set of instructions written in a Dockerfile.
- Basic syntax:
docker build [OPTIONS] PATH | URL | -
PATH is usually the directory containing your Dockerfile.
- Building an Image from a Dockerfile
docker build -t myapp:latest .
- Explanation:
-t myapp:latest tags the new image as myapp:latest
. is the current directory (where Dockerfile is located)
- Docker will:
- read the Dockerfile
- execute each instruction (FROM, COPY, RUN…)
- build layers
- produce an image
-t — Tagging an Image
- Tag format:
<name>:<tag>
- Example with custom tag:
docker build -t backend:v1.0 .
- Explanation:
backend is the image name
v1.0 is the version tag
- Tags help version and identify images for deployments.
-f — Build with a Custom Dockerfile Name
docker build -f Dockerfile.prod -t web:prod .
- Explanation:
-f Dockerfile.prod means using this file instead of Dockerfile
-t web:prod tags image with web:prod
. is the build context directory
- Understanding the Build Context
- The path you pass to
docker build (usually .) is the build context.
- Docker sends this entire directory to the Docker daemon.
- Example:
docker build -t app .
- Everything inside
. becomes available to COPY and ADD commands in the Dockerfile.
- Use
.dockerignore to prevent sending unnecessary files.
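A minimal .dockerignore might look like this (entries are examples; tailor them to your project):

```
.git
node_modules
__pycache__
*.log
.env
```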
--no-cache — Build Without Using Cache
docker build --no-cache -t app:clean .
- Explanation:
- Forces Docker to execute all instructions from scratch
- Good for ensuring fresh dependencies
--pull — Always Pull the Newest Base Image
docker build --pull -t api:latest .
- Explanation:
--pull forces Docker to download the newest image used in FROM
--build-arg — Pass Build Arguments
ARG VERSION
RUN echo "Building version $VERSION"
- Build command:
docker build --build-arg VERSION=1.2.3 -t app:v1 .
- Explanation:
ARG VERSION defines a variable usable only during build
--build-arg VERSION=1.2.3 assigns the value
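ARG can also declare a default that applies when no --build-arg is passed; a minimal Dockerfile sketch:

```dockerfile
FROM alpine:3.19
# VERSION defaults to "dev" unless overridden with --build-arg VERSION=...
ARG VERSION=dev
RUN echo "Building version $VERSION" > /build-info.txt
```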
- Building from a Git Repository
docker build -t app https://github.com/user/repo.git
- Docker automatically downloads and builds from the repository root.
- Building from Standard Input (
-)
- Example using inline Dockerfile:
echo -e "FROM alpine\nCMD echo Hello" | docker build -t hello -
- Explanation:
- tells Docker to read the Dockerfile from stdin
- Checking Built Images
docker images
- Example output:
REPOSITORY TAG IMAGE ID SIZE
myapp latest 7f8a1b23b123 125MB
- Example Workflow: Build → Run → Test
- Step 1: Build the image
docker build -t webapp .
- Step 2: Run a container
docker run -d -p 8080:80 --name web webapp
- Step 3: View logs
docker logs web
- Common Mistakes and Tips
- Copying too many files into the context
- Fix: use
.dockerignore
- Large images
- Fix: use multi-stage builds
- Missing explicit versions
- Fix: always pin versions in
FROM and RUN
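As a sketch of the multi-stage fix for large images, the pattern below installs dependencies in the full Python image and copies only the results into a slim final image (file names are illustrative):

```dockerfile
# Stage 1: install dependencies using the full toolchain
FROM python:3.12 AS builder
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: slim runtime image with only the installed packages
FROM python:3.12-slim
COPY --from=builder /install /usr/local
COPY app.py /app.py
CMD ["python", "/app.py"]
```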
Understanding the docker pull Command
- What Is
docker pull?
- The
docker pull command is used to download Docker images from a registry (usually Docker Hub).
- It retrieves:
- the image manifest
- all required layers
- metadata (tags, architecture, digest)
- Basic syntax:
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
- If no tag is provided, Docker pulls the
latest tag by default.
- Pulling an Image from Docker Hub
docker pull nginx
- Explanation:
nginx is the image name
- No tag means Docker pulls
nginx:latest
- Docker will:
- connect to Docker Hub
- download layers
- save the image locally
- Pulling a Specific Tagged Version
docker pull python:3.11
- Explanation:
python is the image name
3.11 is the specific version tag
- Pulling by Digest (
@sha256:...)
docker pull ubuntu@sha256:3af...
- Explanation:
- Pulls an exact, immutable version of an image
- Useful for production-grade reproducibility
--platform — Pull Image for Specific Architecture
- Example (for Apple Silicon pulling x86_64 image):
docker pull --platform linux/amd64 node:18
- Explanation:
--platform linux/amd64 forces specific architecture
node:18 is the image name and tag
- Pulling Images from Private Registries
docker pull myregistry.com/myapp/backend:latest
- Explanation:
myregistry.com is the custom registry URL
myapp/backend is the repository
latest is the tag
- It requires login via:
docker login myregistry.com
- Pulling All Tags of an Image (--all-tags)
- Docker can download every tag of a repository at once:
docker pull --all-tags nginx
- Explanation:
--all-tags (or -a) downloads all tagged images in the repository
- Warning: for popular images this can mean many gigabytes of data.
- When Should You Run
docker pull?
- You need the newest version of an image
- You want to sync a Dockerfile
FROM image
- You are preparing a deployment
- CI pipelines need predictable base images
- Local images are outdated
- Example Workflow: Pull → Run → Inspect
- Step 1: Pull the image
docker pull redis:7
- Step 2: Run
docker run -d --name cache redis:7
- Step 3: Logs
docker logs cache
Understanding the docker push Command
- What Is
docker push?
- The
docker push command is used to upload (publish) Docker images from your local machine to a remote registry such as:
- Docker Hub (default)
- GitHub Container Registry
- GitLab Container Registry
- Private registries
- This allows you to share images with others, deploy images to servers, or store them for CI/CD.
- Basic syntax:
docker push NAME[:TAG]
- Tagging an Image Before Pushing
- You can only push images that are properly tagged with a registry prefix.
- Example: tag an image for Docker Hub:
docker tag myapp:latest myusername/myapp:latest
- Explanation:
myapp:latest is the local image
myusername/myapp:latest is the image in your Docker Hub namespace
- Now the image is ready to be pushed.
- Pushing an Image to Docker Hub
docker push myusername/myapp:latest
- Explanation:
myusername is your Docker Hub username
myapp is the repository name
latest is the tag being uploaded
- Docker uploads all image layers that are not already present in the registry.
- Logging In Before Pushing
- If you are not logged in, Docker will reject the push.
- Login to Docker Hub:
docker login
- For private registries:
docker login myregistry.com
- Pushing to a Private Registry
docker tag api:latest myregistry.com/team/api:v2
docker push myregistry.com/team/api:v2
- Explanation:
myregistry.com is the registry host
team/api is the repository inside registry
v2 is the version tag
- Pushing Multiple Tags
- Each tag must be pushed separately:
docker push myapp:latest
docker push myapp:v1.0
- Pushing by Digest
docker push myapp@sha256:4bb42ad...
- Useful for verifying that a specific immutable image is uploaded.
- Checking Local Images Before Pushing
docker images
- Example output:
REPOSITORY TAG IMAGE ID SIZE
myusername/myapp latest 09be1234abc 155MB
- Example Workflow: Build → Tag → Push
- Step 1: Build the image
docker build -t myapp .
- Step 2: Tag for Docker Hub
docker tag myapp myusername/myapp:latest
- Step 3: Push to registry
docker push myusername/myapp:latest
Understanding the docker volume Command
- What Is
docker volume?
- The
docker volume command manages Docker-managed storage volumes that persist data outside container lifecycles.
- Volumes allow you to:
- persist database files
- share data between containers
- keep data even after containers are removed
- avoid storing data inside container layers
- Basic syntax:
docker volume [COMMAND]
- Listing Volumes with
docker volume ls
- Shows all existing volumes:
docker volume ls
- Example output:
DRIVER VOLUME NAME
local mydata
local db-volume
- DRIVER is usually
local, the default storage backend.
- Creating a New Volume
docker volume create mydata
- Explanation:
mydata is the name of the new volume
- Volumes are created empty and ready to be mounted into containers.
- Inspecting a Volume
docker volume inspect mydata
- Example output:
[
{
"Name": "mydata",
"Mountpoint": "/var/lib/docker/volumes/mydata/_data",
"Driver": "local"
}
]
- Explanation:
Mountpoint is the actual storage location on host
Driver is the storage backend
- Using a Volume with
docker run
- Example: mount a volume to a container:
docker run -d --name db \
-v mydata:/var/lib/mysql \
mysql:8
- Explanation:
-v mydata:/var/lib/mysql mounts the volume mydata to MySQL data folder inside the container
/var/lib/mysql is the path inside container
- All MySQL data becomes persistent
- If the container is removed, the volume persists.
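The same mount can be written with the more explicit --mount flag, which is equivalent to -v here (requires a running Docker daemon; type defaults to volume):

```shell
docker run -d --name db \
  --mount source=mydata,target=/var/lib/mysql \
  mysql:8
```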
- Creating and Mounting Volume Automatically
- If you use a volume that doesn’t exist:
docker run -v newvol:/data alpine
- Docker will automatically create
newvol.
- Viewing Mounted Volumes Inside a Container
docker inspect container-name
- Then look under
.Mounts for volume configuration.
- Removing a Volume (
docker volume rm)
docker volume rm mydata
- Important: You cannot remove a volume that is currently in use by a container.
- Removing All Unused Volumes —
docker volume prune
docker volume prune
- Explanation:
- Deletes all volumes not referenced by any container
- Helpful for cleaning disk space
- Difference Between Volumes, Bind Mounts, and tmpfs
Volumes are managed by Docker, best for persistence.
Bind mounts map a specific host directory:
docker run -v /host/path:/container/path app
tmpfs is memory-only storage:
docker run --tmpfs /cache app
- Example: Persisting PostgreSQL Data
docker volume create pgdata
docker run -d \
--name pg \
-e POSTGRES_PASSWORD=12345 \
-v pgdata:/var/lib/postgresql/data \
postgres:15
- Now database data persists across container restarts or removals.
- Example: Sharing a Volume Between Multiple Containers
docker run -d --name c1 -v sharedvol:/data alpine
docker run -d --name c2 -v sharedvol:/data alpine
- Both containers share the same storage location.
- Where Docker Volumes Are Stored on Host
/var/lib/docker/volumes/[VOLUME_NAME]/_data
- But you should not directly modify those files unless necessary.
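Instead of touching that directory, a common way to back up a volume is to mount it into a throwaway container and archive it (a pattern from Docker's documentation; the archive name is illustrative):

```shell
docker run --rm \
  -v mydata:/data \
  -v "$PWD":/backup \
  alpine tar czf /backup/mydata-backup.tar.gz -C /data .
```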
Understanding the docker network Command
- What Is
docker network?
- The
docker network command manages Docker's virtual networks that containers use to communicate with each other and the outside world.
- Docker networks allow you to:
- connect containers together
- isolate services
- assign custom IP ranges
- control DNS resolution inside Docker
- create multi-container apps
- Basic syntax:
docker network [COMMAND]
- Listing Existing Docker Networks
docker network ls
- Example output:
NETWORK ID NAME DRIVER SCOPE
ad8c32f1c012 bridge bridge local
bfd9a1023cd2 host host local
cd1fc8c98bb1 none null local
- Default networks:
bridge → default virtual network for containers
host → container shares host networking stack
none → no networking
- Creating a Custom Network
docker network create mynet
- Explanation:
mynet is the name of the new network
- Driver defaults to
bridge
- Custom networks allow automatic container name-based DNS.
- Creating a Network with Custom Subnet and Gateway
docker network create \
--subnet=192.168.10.0/24 \
--gateway=192.168.10.1 \
mycustomnet
- Explanation:
--subnet is the CIDR block for IP allocation
--gateway is the network gateway
- Inspecting a Network
docker network inspect mynet
- Example output (truncated):
{
"Name": "mynet",
"Driver": "bridge",
"Containers": {
"abcd1234": {
"Name": "web",
"IPv4Address": "172.18.0.2/16"
}
}
}
- Running a Container Attached to a Network
docker run -d --name webapp --network mynet nginx
- Explanation:
--network mynet → container joins the mynet network
- Containers on the same network can reach each other by name.
- Connecting an Existing Container to a Network
docker network connect mynet backend
- Explanation:
mynet is the network
backend is the existing container
- This allows dynamic network changes without restarting containers.
- Disconnecting a Container from a Network
docker network disconnect mynet backend
- Explanation: it removes the
backend container from mynet
- Removing a Docker Network
docker network rm mynet
- Important: you cannot remove a network while containers are attached
- Removing All Unused Networks —
docker network prune
docker network prune
- Deletes all networks not being used by any container.
- Understanding Docker Network Drivers
- Docker supports multiple network drivers:
| Network Driver | Description |
| bridge | Default driver; best for communication between local containers. |
| host | Container shares the host's network stack directly. |
| none | Disables networking entirely. |
| overlay | Used for multi-host networking in Docker Swarm. |
| macvlan | Assigns each container its own MAC address on the physical LAN. |
- Most users only need
bridge.
- Example: Two Containers Talking to Each Other
docker network create appnet
- Start containers:
docker run -d --name api --network appnet node:18
docker run -d --name web --network appnet nginx
- Now inside
web container:
curl http://api:3000
api is resolved by Docker DNS!
- Example: Static IP Assignment
docker network create --subnet=172.20.0.0/16 appnet
docker run -d --name app1 --network appnet --ip 172.20.0.10 alpine
docker run -d --name app2 --network appnet --ip 172.20.0.11 alpine
- Allows fixed IPs for legacy systems.
Understanding the Docker Daemon (dockerd)
- What Is the Docker Daemon?
- The Docker daemon (the process named
dockerd) is the system-level background service responsible for managing every Docker operation.
- It runs continuously in the background and handles:
- container lifecycle (start, stop, restart)
- building and running images
- networking between containers
- storage (volumes, layers)
- pulling/pushing images from registries
- API communication with Docker CLI
- The daemon exposes a REST API that the
docker CLI communicates with.
- Process name on Linux:
/usr/bin/dockerd
- Architecture: How Docker CLI Talks to the Daemon
- Docker uses a client–server architecture:
+----------------------+ +-----------------+
| Docker CLI (client) | -----> | Docker Daemon |
| "docker run ..." | API | dockerd |
+----------------------+ +-----------------+
|
v
Containers, Images,
Volumes, Networks
- You never interact with containers directly — everything goes through the daemon’s API.
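You can see this API directly: a docker ps is roughly equivalent to an HTTP GET against the daemon's Unix socket (assuming you have socket access and curl installed):

```shell
# Same data that "docker ps" shows, fetched straight from the daemon's REST API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```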
- Why the Docker Daemon Requires Root Privileges
- Historically,
dockerd runs as root because it manages:
- network namespaces
- cgroups (CPU/memory control)
- iptables firewall rules
- mounting filesystems
- kernel features for containers
- These are privileged operations → only root can perform them.
- Therefore:
- Docker daemon runs as
root
- the
docker CLI communicates with it over a privileged Unix socket at /var/run/docker.sock
- This root-level nature is the source of many security concerns.
- The Famous Security Problem: Access to
/var/run/docker.sock
- If a user can access this socket, they effectively have root permissions on the host machine.
- This means:
- They can mount root filesystem
- Launch privileged containers
- Run commands as root
- Modify system files
- For example:
docker run -v /:/host -it ubuntu chroot /host
- This gives full root access to the host.
- This is why giving users Docker access is equivalent to giving them sudo.
- Rootless Docker: Attempt to Solve the Problem
- Docker now supports rootless mode, where
dockerd runs as a normal user.
- Benefits:
- No root privileges required
- Safer in multi-user environments
- User cannot modify critical system structures
- Limitations:
- Cannot use privileged ports (<1024)
- Weaker performance in some operations
- Networking is more limited
- Why Kubernetes Stopped Using Docker as Runtime
- Kubernetes deprecated Docker as a container runtime because:
- It depends on the system-level daemon
- Not compliant with Kubernetes CRI (Container Runtime Interface)
- Kubernetes prefers containerd and CRI-O
- Important: Kubernetes still uses Docker images, but NOT the Docker daemon.
- Alternatives to Docker That Avoid the Daemon
- Tools like Podman avoid the daemon entirely.
- Podman features:
- Daemonless
- Rootless by default
- Docker CLI compatible (
alias docker=podman)
Understanding the docker compose Command
- What Is
docker compose?
- The
docker compose command is used to run multi-container applications using a single configuration file: docker-compose.yml.
- Purpose:
- Start multiple services (containers) together
- Create networks automatically
- Create volumes automatically
- Define environment variables, ports, dependencies
- Manage an entire stack with one command
- Docker Compose is ideal for:
- Web applications
- Microservices
- Local development environments
- Database + application stacks
- Newer versions use:
docker compose
- (not
docker-compose with a hyphen).
- Understanding the
docker-compose.yml Structure
version: "3.9"
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: postgres
environment:
POSTGRES_PASSWORD: 12345
- Explanation:
version → Compose schema version
services → each container definition
web → first service
db → second service
- Compose will:
- create a shared network
- allow
web to connect to db by name
- Starting a Compose Application —
docker compose up
- Run everything defined in
docker-compose.yml:
docker compose up
- Foreground mode: streams logs of all services.
- Run in background:
docker compose up -d
- What happens under the hood?
- Networks created
- Volumes created (if any)
- Containers started in dependency order
- Name format:
project-service-index (the project name defaults to the folder name; older docker-compose v1 used underscores: project_service_index)
- Stopping a Compose Application —
docker compose down
docker compose down
- Removes:
- containers
- networks created by Compose
- Does NOT remove:
- named volumes
- built images
- Remove volumes too:
docker compose down -v
- Viewing Logs —
docker compose logs
docker compose logs
- Real-time logs:
docker compose logs -f
- Logs from specific service:
docker compose logs web
- Listing All Services —
docker compose ps
docker compose ps
- Shows container status and exposed ports.
- Building Images —
docker compose build
- If your service uses a
Dockerfile:
docker compose build
- Force rebuild (ignore cache):
docker compose build --no-cache
- Build a specific service:
docker compose build web
- Running Commands in Services —
docker compose exec
- Example: open a shell in
db service:
docker compose exec db bash
- If the container only has
sh:
docker compose exec db sh
- Viewing Running Processes —
docker compose top
docker compose top
- Shows running processes inside services.
- Stopping and Starting Individual Services
docker compose stop web
- Start one service:
docker compose start web
- Scaling Services —
--scale
docker compose up -d --scale web=3
- Runs 3 instances of the
web service.
- Great for load-balancing demos or worker clusters.
- Using Environment Files
- Inside
docker-compose.yml:
services:
app:
env_file:
- .env
- Your
.env file:
API_KEY=12345
DEBUG=true
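Besides env_file (which injects variables into the container), Compose can also substitute values from .env directly into the YAML via ${VAR} interpolation; a sketch:

```yaml
services:
  app:
    image: myapp:${TAG}    # TAG is read from .env when Compose parses the file
    environment:
      API_KEY: ${API_KEY}  # passed through into the container
```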
- Example: Web + Database Compose Setup
services:
web:
image: nginx
ports:
- "8080:80"
depends_on:
- db
db:
image: postgres
environment:
POSTGRES_PASSWORD: secret
- Explanation:
depends_on ensures DB starts before web
- Both share an automatically created network
- Web can reach DB via hostname
db
How to Write a docker-compose.yml File
- What Is
docker-compose.yml?
docker-compose.yml is a YAML configuration file that describes a multi-container application.
- In this file you define:
- which services (containers) exist
- which images they use or how they are built
- ports to expose
- volumes to persist data
- environment variables
- networks and dependencies
- Once defined, you can start everything with a single command:
docker compose up
- YAML is whitespace-sensitive, so indentation (2 spaces) is extremely important.
- Basic Structure of a Compose File
- A minimal structure looks like this:
services:
servicename:
image: some-image
- Common top-level keys:
version (optional in newer Compose specs)
services → your containers
volumes → named volumes (optional)
networks → named networks (optional)
- We will build up a realistic example step by step: a simple web app + database.
- Step 1: Start with
services
- Every Compose file starts by defining your services:
services:
web:
# web service config will go here
db:
# database service config will go here
- Explanation:
services: → top-level key
web: → name of first service (later becomes container name prefix)
db: → name of second service
- Service names are how containers talk to each other via DNS (e.g.
web can reach db using hostname db).
- Step 2: Choose
image or build for Each Service
- You can either:
- use an existing image from Docker Hub →
image:
- build your own image from a Dockerfile →
build:
- Example:
web uses a local Dockerfile, db uses official Postgres image
services:
web:
build: ./web
db:
image: postgres:15
- Explanation:
build: ./web → Docker will look for a Dockerfile in the ./web directory.
image: postgres:15 → pulls and runs the official postgres image tag 15.
- Step 3: Expose Ports with
ports
- To access your service from the host machine, map host port → container port:
services:
web:
build: ./web
ports:
- "8080:80"
- Explanation of
"8080:80":
8080 is the host port (browser will connect to localhost:8080)
80 is the port inside container (your app listens on 80)
- You can define multiple ports as a YAML list:
ports:
- "8080:80"
- "8443:443"
- Step 4: Add Environment Variables with
environment
- Environment variables configure things like passwords, modes, etc.
- Example for Postgres:
services:
db:
image: postgres:15
environment:
POSTGRES_USER: appuser
POSTGRES_PASSWORD: secret123
POSTGRES_DB: appdb
- Explanation:
environment: is a map of key/value pairs
POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB are variables understood by the Postgres image
- You can also load from a
.env file using env_file, but for learning, inline variables are clearer.
- Step 5: Persist Data with
volumes
- Database data must survive container restarts → use a named volume.
- Example:
services:
db:
image: postgres:15
environment:
POSTGRES_PASSWORD: secret123
volumes:
- dbdata:/var/lib/postgresql/data
volumes:
dbdata:
- Explanation:
- Service-level
volumes:: dbdata:/var/lib/postgresql/data mounts named volume dbdata into container data directory.
- Top-level
volumes:: dbdata: declares the named volume so Docker knows it exists.
- Now deleting the
db container will not delete your database data.
- Step 6: Define Networks (Optional but Helpful)
- By default, Compose creates one network for your project and puts all services inside it.
- You can explicitly define networks to be more clear:
services:
web:
build: ./web
networks:
- appnet
db:
image: postgres:15
networks:
- appnet
networks:
appnet:
- Explanation:
networks: under each service, attaches the service to the appnet network.
- Top-level
networks: defines the named network.
- Within the same network:
web can reach database using db:5432
- Step 7: Control Start Order with
depends_on
- Web service should wait for DB to at least start its container.
- Example:
services:
web:
build: ./web
depends_on:
- db
db:
image: postgres:15
- Explanation:
depends_on: list → tells Compose to start db before web.
- Note: this checks container start order, not full DB readiness. For production, you’d often add retry logic in your app or use healthchecks.
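One way to wait for real readiness is a healthcheck combined with the long form of depends_on (Compose v2 syntax; the pg_isready check and timings are illustrative):

```yaml
services:
  web:
    build: ./web
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret123
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```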
- Step 8: Override Commands with
command and restart Policy
- You can replace the default command of the image using
command:
services:
web:
build: ./web
command: ["python", "app.py"]
- Explanation:
command: accepts either a string or YAML list.
- Here it runs
python app.py inside the container.
- Add a restart policy, e.g. auto-restart on failure:
services:
web:
build: ./web
restart: on-failure
- Common values for
restart:
no (default)
on-failure
always
unless-stopped
- Step 9: Full Example – Simple Web + Postgres Stack
- Here is a complete, consistent
docker-compose.yml that uses everything we discussed:
services:
web:
build: ./web
ports:
- "8080:80"
environment:
DATABASE_HOST: db
DATABASE_NAME: appdb
DATABASE_USER: appuser
DATABASE_PASSWORD: secret123
depends_on:
- db
networks:
- appnet
restart: unless-stopped
db:
image: postgres:15
environment:
POSTGRES_USER: appuser
POSTGRES_PASSWORD: secret123
POSTGRES_DB: appdb
volumes:
- dbdata:/var/lib/postgresql/data
networks:
- appnet
restart: unless-stopped
volumes:
dbdata:
networks:
appnet:
- How the pieces connect:
web:
- built from
./web directory
- accessible on
localhost:8080
- knows DB connection details via environment variables
- shares network
appnet with db
db:
- uses official
postgres:15 image
- stores data in named volume
dbdata
- reachable from web at hostname
db on port 5432
volumes / networks declared at bottom:
dbdata persists data
appnet connects the two services
- Step 10: Using the Compose File
- Save the YAML content into
docker-compose.yml at your project root.
- Then run:
docker compose up
- Run in background:
docker compose up -d
- Stop and clean up containers and network:
docker compose down
- Stop and also remove volumes:
docker compose down -v