Docker Architecture, Operations, and Best Practices: A Complete Guide to Images, Containers, Compose, and Troubleshooting

[AI Readability Summary] Docker is an application delivery platform built around containers. Its core capability is packaging code, dependencies, and runtime environments into a consistent unit so you can build once and run anywhere. It reduces environment drift, improves deployment efficiency, and simplifies operations. Keywords: Docker, containers, Compose.

Technical specifications are summarized below.

Core technology: Docker Engine
Primary language: Go
Architecture pattern: Client/Server
Runtime standard: OCI (Open Container Initiative)
Common protocols: HTTP API, Registry API
Typical dependencies: dockerd, containerd, runc, systemd
Image registries: Docker Hub, Harbor, GHCR

Docker standardizes application delivery.

Docker is not a traditional virtual machine. It uses operating-system-level virtualization. It packages an application, runtime, dependency libraries, environment variables, and startup commands into a container, which reduces the environment gap across development, testing, and production.

For teams, Docker's biggest value is not merely running containers but reproducing runtime results consistently. That is why Docker is widely adopted in microservices, CI/CD pipelines, and local development environments.

# Pull the official Nginx image
docker pull nginx

# Start a container in the background and map host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

# List currently running containers
docker ps

These commands pull an image, start a container, and confirm its runtime status.
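Once the container is up, you can verify the port mapping from the host. This assumes curl is installed and host port 8080 was free when the container started:

```shell
# Request the Nginx welcome page through the mapped host port
curl -I http://localhost:8080

# Show the port mappings Docker configured for the container
docker port web
```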

Docker consists of images, containers, and registries.

An image is a read-only template that defines everything an application needs to run. A container is a running instance of an image. A registry distributes images. Their relationship is similar to a template, an instance, and a distribution center.

Images ensure delivery consistency, containers provide runtime isolation, and registries handle version distribution. Understanding this model matters more than memorizing commands.

# List local images
docker images

# Remove an image
docker rmi nginx

# Stop and remove a container
docker stop web
docker rm web

These commands demonstrate the basic lifecycle management of images and containers.
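The registry side of this model works through tag and push. A minimal sketch, assuming you are logged in to Docker Hub and `yourname` is a placeholder account name:

```shell
# Tag the local image with a repository name and version
docker tag nginx yourname/nginx:1.0

# Push the tagged image to the registry
docker push yourname/nginx:1.0

# Any other machine can now pull the exact same image
docker pull yourname/nginx:1.0
```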

Docker uses a layered runtime architecture.

Most users only interact with the Docker CLI, but the actual call chain is docker CLI → dockerd → containerd → runc → container. In this chain, dockerd exposes the API and provides centralized management, containerd manages the container lifecycle (pulling images, supervising containers), and runc creates the actual container process according to the OCI runtime specification.

This means that when Docker fails, the root cause is not always the application container. The issue may come from the daemon, runtime components, or underlying system configuration.

# Check Docker service status
sudo systemctl status docker

# Start the Docker service
sudo systemctl start docker

# Enable Docker to start on boot
sudo systemctl enable docker

These commands manage Docker on Linux systems that use systemd.
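You can also confirm that each component in the call chain is present and check its version; the exact output varies by installation:

```shell
# Show client and server (dockerd) version details
docker version

# Query the runtime binaries directly
containerd --version
runc --version
```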

Dockerfile is the declarative entry point for image builds.

A Dockerfile defines the image build process in plain text and serves as the foundation of reproducible builds. Common instructions include FROM, COPY, RUN, WORKDIR, EXPOSE, and CMD.

A good Dockerfile should prioritize clear layering, minimal base images, cache-friendly steps, and non-root execution.

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
# Install application dependencies (skipping the pip cache keeps the layer small)
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Declare the service port
EXPOSE 8000
# Define the container startup command
CMD ["python", "app.py"]

This Dockerfile builds a lightweight Python web service image.
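Building and running this image follows the usual workflow; `myapp` is a placeholder tag:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Run it, mapping the declared port to the host
docker run -d --name myapp -p 8000:8000 myapp:1.0
```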

Docker Compose enables multi-container orchestration.

A single container works well for demos, but real projects usually include at least a web service, cache, database, and messaging component. Docker Compose uses compose.yml to define multiple services, networks, and volumes in one place, which significantly reduces environment setup overhead.

Compared with manually running multiple docker run commands, Compose is better suited for team collaboration, test environments, and small to medium-sized deployment scenarios.

services:
  nginx:
    image: nginx:latest
    container_name: nginx-web
    ports:
      - "8080:80"  # Expose the web service port
    volumes:
      - ./html:/usr/share/nginx/html  # Mount the static page directory
    restart: always
  redis:
    image: redis:7
    container_name: redis-cache
    restart: always

This configuration defines a two-service environment composed of Nginx and Redis.

# Start the orchestrated services
docker compose up -d

# Check service status
docker compose ps

# Stop and remove the services
docker compose down

These commands start, inspect, and tear down a Compose-managed service stack.
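Beyond up, ps, and down, a few Compose subcommands cover day-to-day inspection; service names here match those defined in compose.yml:

```shell
# Follow logs for a single service
docker compose logs -f nginx

# Run a one-off command inside a running service container
docker compose exec redis redis-cli ping

# Restart one service without touching the rest of the stack
docker compose restart nginx
```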

Docker networking and volumes determine whether containers are usable.

At the networking layer, bridge fits default scenarios, host is suitable for high-performance network services, none works for highly isolated tasks, and a custom bridge network is ideal for multi-container connectivity. In production, custom networks are generally recommended because they provide clearer service discovery and stronger isolation.

At the data layer, a container filesystem is ephemeral. Databases, cache persistence, and uploaded file directories must be externalized through volumes or bind mounts. Otherwise, data is lost when the container is removed.

# Create a custom network
docker network create app-net

# Create a volume
docker volume create mysql-data

# Start MySQL on the custom network and mount the volume
docker run -d --name mysql \
  --network app-net \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=your_password \
  mysql:8

These commands demonstrate how to combine network isolation with persistent storage.
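You can verify that the network and volume behave as intended. This sketch assumes the mysql container from above is running:

```shell
# List the containers attached to the custom network
docker network inspect app-net --format '{{range .Containers}}{{.Name}} {{end}}'

# Show where Docker stores the volume on the host
docker volume inspect mysql-data --format '{{.Mountpoint}}'

# Containers on the same custom network resolve each other by name
docker run --rm --network app-net busybox ping -c 1 mysql
```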

Docker operations rely on logging, restart policies, and troubleshooting.

Once a service has been running for a while, the key challenge is no longer "how to start it" but "how to keep it stable". That is why you need to monitor log size, disk usage, port conflicts, and DNS resolution, and configure automatic restart policies.

A practical default is to use unless-stopped as the restart policy and configure log rotation limits so that json-file logs do not fill the host disk.

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

This configuration limits the size and retention count of Docker container log files.
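This configuration belongs in /etc/docker/daemon.json and only applies to containers created after the daemon restarts:

```shell
# Restart the daemon to load the new logging defaults
sudo systemctl restart docker

# Verify the log options on a newly created container
docker inspect --format '{{.HostConfig.LogConfig}}' web
```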

# View Docker service logs
journalctl -u docker -n 100

# View and follow container logs
docker logs -f web

# Remove unused resources and free disk space
docker system prune

These commands cover three common operational tasks: daemon troubleshooting, container log inspection, and resource cleanup.
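For the port conflicts and DNS issues mentioned above, a few checks usually narrow things down; `ss` ships with iproute2 on most Linux distributions, and the DNS check assumes the container image includes getent:

```shell
# Find which process already holds a port (e.g. 8080)
sudo ss -tlnp | grep 8080

# Check disk usage broken down by images, containers, and volumes
docker system df

# Test DNS resolution from inside a container
docker exec web getent hosts example.com
```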

Docker security practices should begin at build time.

Containers are not secure by default. Avoid using --privileged, do not run application processes as root unless absolutely necessary, never bake secrets into images, and set CPU and memory limits as baseline controls.

If you run Docker in production, you should also add image vulnerability scanning, private registry access control, and minimal base image policies.

# Run a container with constrained resources
docker run -d \
  --memory 512m \
  --cpus 1 \
  --restart unless-stopped \
  nginx

This command demonstrates a baseline configuration for resource limits and service stability.
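Image scanning can be added with a standalone scanner; a sketch assuming Trivy is installed on the host:

```shell
# Scan an image for known high-severity CVEs before deploying it
trivy image --severity HIGH,CRITICAL nginx:latest

# Inspect image build history for accidentally baked-in files or secrets
docker history --no-trunc nginx
```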

FAQ provides structured answers to common questions.

Why does a container exit immediately after startup?

This usually happens because the foreground main process finishes, or because the startup command, environment variables, or configuration files are incorrect. Start with docker ps -a to check the exit code, then use docker logs to identify the specific error.
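The inspection steps above look like this in practice; `web` is a placeholder container name:

```shell
# Show all containers, including exited ones, with their status
docker ps -a --filter name=web

# Read the exit code directly
docker inspect --format '{{.State.ExitCode}}' web

# Review the last log lines before the exit
docker logs --tail 50 web
```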

What is the most important difference between Docker and virtual machines?

Docker shares the host kernel, starts quickly, and uses fewer resources, which makes it ideal for application delivery. Virtual machines have independent kernels and stronger isolation, which makes them better for multi-OS workloads or high-isolation scenarios.

In production, should you prefer bind mounts or volumes?

For persistent workloads such as databases, volumes are usually the better choice because Docker manages them consistently. For development, debugging, and hot reloading static files, bind mounts are often more convenient, but you must handle host paths and file permissions carefully.
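The two approaches look like this side by side; the paths, names, and password are placeholders:

```shell
# Volume: Docker-managed storage, ideal for database data
docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# Bind mount: a host directory mapped in, convenient for live editing
docker run -d --name site \
  -v "$(pwd)/html:/usr/share/nginx/html" \
  nginx
```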

AI Visual Insight: This article rebuilds Docker knowledge into a practical operations framework, covering images, containers, registries, Dockerfile, Compose, networking, volumes, logging, restart policies, and troubleshooting. It helps developers and operators build a complete understanding from architecture to hands-on practice.