Docker from Zero to One: Essential Commands, Image Builds, Networking, Orchestration, and Advanced Practices

Docker is a container platform for building, distributing, and running applications. Its core value lies in solving three common problems: inconsistent environments, complex deployments, and dependency drift. This article covers the critical path across images, containers, Dockerfile, and Compose.

Technical specifications provide a quick snapshot

Parameter              Description
Core Technology        Docker container platform
Primary Language       Go (Docker's implementation language)
Runtime Model          Images, containers, registries
Common Protocols       HTTP/HTTPS, Registry API
Orchestration Tools    Docker Compose, Swarm
Typical Dependencies   registry, MySQL, Redis, Tomcat
Supported Systems      Linux / CentOS 7 / Ubuntu

Docker’s core value is standardizing delivery environments

Docker is not, strictly speaking, a replacement for virtual machines; it is a tool for standardizing application delivery. It packages code, runtime, dependencies, and configuration into an image so that applications can truly follow the principle of “build once, run anywhere.”

It mainly solves three categories of problems: inconsistent development and test environments, complicated deployment steps, and high service migration costs. It is especially valuable for CI/CD, microservices, fast rollback, and local reproducibility.

The difference between containers and virtual machines makes containers more lightweight

Virtual machines require a full guest operating system, while containers share the host kernel and isolate only processes, file systems, and networking. As a result, containers start faster and consume fewer resources, which makes them better suited for high-density deployments.

docker ps                 # List running containers
docker ps -a              # List all containers
docker images             # List local images
docker --help             # Show help documentation

These commands build the most basic visibility into containers and images.

Common Docker commands cover the daily lifecycle of container management

At the service level, systemctl start|stop|restart|status docker manages the Docker daemon. At the image level, docker images and docker rmi -f are commonly used for inspection and deletion.
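On a systemd-based host such as the CentOS 7 or Ubuntu systems listed above, those service- and image-level commands look like the following sketch; the image tag is only an example:

```shell
sudo systemctl start docker       # Start the Docker daemon
sudo systemctl status docker      # Verify that the daemon is running
sudo systemctl enable docker      # Start the daemon automatically at boot

docker images                     # Inspect local images
docker rmi -f ubuntu:22.04        # Force-remove an image by tag
```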

At the container level, docker run is the most important entry point. -d runs a container in the background, -it attaches an interactive terminal, --name assigns a name, -p maps a specific host port to a container port, and -P maps all exposed container ports to random host ports.

docker run -d --name myredis redis:6.0.8      # Start a Redis container in detached mode
docker run -it --name ubuntu-dev ubuntu /bin/bash  # Open an interactive Ubuntu shell
docker exec -it myredis /bin/bash             # Recommended way to enter a container
docker rm -f myredis                          # Force-remove a running container

These commands cover the core workflow for creating, entering, and cleaning up containers.

Data volumes give containers persistence

Containers are best suited to stateless workloads, but databases, logs, and configuration need to outlive any single container, which is exactly what data volumes provide. Once a volume is mounted, the host and container share the directory's contents, and the data survives even after the container is destroyed.

Appending :ro to a mount makes the directory read-only inside the container. You can verify that a mount has taken effect with docker inspect, and a container can inherit another container's volume mappings through --volumes-from.

docker run -it --privileged=true \
  -v /tmp/hostData:/tmp/data \
  --name u1 ubuntu /bin/bash   # Mount a host directory into the container

This command creates a container instance with a persistent directory.
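Read-only mounts and volume inheritance can be sketched in the same style; the container names u2 and u3 are illustrative:

```shell
# :ro makes /tmp/data read-only inside the container
docker run -it --privileged=true \
  -v /tmp/hostData:/tmp/data:ro \
  --name u2 ubuntu /bin/bash

# Verify the mount configuration from the host
docker inspect u2

# u3 inherits u2's volume mappings
docker run -it --volumes-from u2 --name u3 ubuntu /bin/bash
```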

Image builds and extension capabilities come from Dockerfile’s layered model

A Dockerfile is an automated image build script. Each instruction creates a new layer, which gives image builds their caching, reuse, and traceability characteristics. Common instructions include FROM, RUN, COPY, ADD, ENV, WORKDIR, CMD, and ENTRYPOINT.

The logic behind image extension is also straightforward: run a container, install additional software, and then use commit to create a new image. In engineering practice, however, writing the process into a Dockerfile is strongly preferred over relying on manual commits.
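The commit-based flow described above might look like this sketch; the container name, author, and tag are illustrative:

```shell
# Start a base container and install extra software inside it
docker run -it --name ubuntu-vim ubuntu /bin/bash
#   (inside the container) apt-get update && apt-get install -y vim
#   then exit the shell

# Snapshot the modified container as a new image
docker commit -m "add vim" -a "author" ubuntu-vim myubuntu:1.1
docker images    # myubuntu:1.1 now appears in the local image list
```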

FROM ubuntu:22.04
WORKDIR /app
RUN apt-get update && apt-get install -y vim net-tools \
    && rm -rf /var/lib/apt/lists/*                      # Install common tools and trim the apt cache
COPY . /app                                             # Copy application files into the image
CMD ["/bin/bash"]                                      # Default command after the container starts

This Dockerfile shows the minimum closed loop for customizing a base image.

A private registry fits offline environments and internal artifact distribution

When images are not suitable for a public registry, you can stand up a private image service with the official registry image. The typical process is to pull the registry image, start the service on port 5000, retag the local image with the registry's address, add that address to the daemon's insecure-registries list, and then push and pull as usual.

docker pull registry
docker run -d -p 5000:5000 \
  -v /krisswen/myregistry/:/tmp/registry \
  --privileged=true registry

docker tag ubuntu1:1.1 192.168.150.135:5000/ubuntu1:1.1  # Tag the image for the private registry
docker push 192.168.150.135:5000/ubuntu1:1.1              # Push the image to the private registry

This workflow builds an internal image distribution pipeline for enterprise use.
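The insecure-registries step means telling each client daemon to accept the plain-HTTP registry. A sketch of /etc/docker/daemon.json, reusing the example address above:

```json
{
  "insecure-registries": ["192.168.150.135:5000"]
}
```

After editing the file, restart the daemon with systemctl restart docker so the setting takes effect.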

Networking and orchestration determine whether Docker can support multi-service systems

Docker uses the bridge network mode by default and gives each container its own network namespace. It also supports the host, container, and none modes, which respectively share the host's network stack, share another container's network stack, or disable networking entirely.

In real-world development, custom networks and container-name-based communication are preferred over relying on unstable container IP addresses. This is the foundation for reliable Compose workflows, microservices, and local integration environments.
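A minimal sketch of name-based communication on a custom network; the network and container names are illustrative:

```shell
docker network create app-net                          # Create a user-defined bridge network
docker run -d --name db --network app-net redis:6.0.8
docker run -it --name app --network app-net ubuntu /bin/bash
#   (inside the container) ping db
#   The name resolves through Docker's embedded DNS, so the db
#   container stays reachable even if its IP address changes.
```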

docker-compose up -d          # Start orchestrated services in detached mode
docker-compose ps             # Check the status of orchestrated containers
docker-compose logs web       # View logs for a specific service
docker-compose down           # Stop and remove related resources

These commands manage startup, troubleshooting, and cleanup for multi-container applications.
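Those commands assume a docker-compose.yml in the current directory. A minimal sketch with a hypothetical web service and a Redis dependency:

```yaml
version: "3"
services:
  web:
    build: .                # Build the web image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - redis
    networks:
      - app-net
  redis:
    image: redis:6.0.8
    networks:
      - app-net
networks:
  app-net:                  # Services reach each other by name on this network
```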

Swarm is a practical path from a single host to cluster scheduling

Swarm is Docker’s built-in orchestration capability and is well suited to entry-level cluster scenarios. Nodes are divided into manager nodes and worker nodes, while services support two scheduling modes: replicated and global.

After Swarm initialization, manager nodes maintain cluster state through the Raft protocol. When deploying applications, you can publish services directly from YAML with docker stack deploy -c docker-compose.yml wordpress.
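A sketch of that flow; the address reuses the example host above, and nginx stands in for any service image:

```shell
# On the manager: initialize the cluster (prints a join token for workers)
docker swarm init --advertise-addr 192.168.150.135

# replicated mode: schedule a fixed number of task replicas
docker service create --name web --replicas 3 -p 80:80 nginx

# global mode: run exactly one task on every node
docker service create --name agent --mode global nginx

docker service ls     # Inspect service state across the cluster
```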

Monitoring and database practices demonstrate Docker’s engineering value

For database services, the source material outlines a MySQL primary-replica replication approach: the primary enables the binary log, and replicas read from a position and replay the relay log. This is a classic case for learning containerized database operations.
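A minimal sketch of the my.cnf settings behind that approach; server IDs and log names are illustrative:

```ini
# Primary (e.g. /mydata/mysql-master/conf/my.cnf)
[mysqld]
server_id=101                    # Must be unique across the topology
log-bin=mall-mysql-bin           # Enable the binary log

# Replica (e.g. /mydata/mysql-slave/conf/my.cnf)
[mysqld]
server_id=102
relay_log=mall-mysql-relay-bin   # Relay log the replica replays
read_only=1                      # Reject ordinary writes on the replica
```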

At the same time, docker stats only shows real-time resource usage and cannot store history or trigger alerts. A common stack is therefore CAdvisor for collection, InfluxDB for storage, and Grafana for visualization, forming a basic monitoring loop.
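A Compose sketch of that CAdvisor + InfluxDB + Grafana stack; the image tags, flags, and ports follow the classic CIG tutorial setup and may need adjusting for your environment:

```yaml
version: "3"
services:
  influxdb:
    image: tutum/influxdb:0.9
    ports:
      - "8083:8083"              # Admin UI
      - "8086:8086"              # HTTP API that CAdvisor writes to
  cadvisor:
    image: google/cadvisor
    command: -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
    ports:
      - "8081:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"              # Dashboard UI
```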

docker run -d -p 3307:3306 \
  -v /mydata/mysql-master/log:/var/log/mysql \
  -v /mydata/mysql-master/data:/var/lib/mysql \
  -v /mydata/mysql-master/conf:/etc/mysql \
  -e MYSQL_ROOT_PASSWORD=123456 \
  --name mysql-master mysql:5.7

This command starts a MySQL primary container with persistent configuration, logs, and data.
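Wiring a replica to that primary then means reading the primary's binlog coordinates and replaying from there. The commands below assume a mysql-slave container started analogously; the credentials and coordinates are illustrative:

```shell
# On the primary: note the current binlog file and position
docker exec -it mysql-master mysql -uroot -p123456 -e "SHOW MASTER STATUS;"

# On the replica: point at the primary, then start and check replication
docker exec -it mysql-slave mysql -uroot -p123456 -e \
  "CHANGE MASTER TO MASTER_HOST='192.168.150.135', MASTER_PORT=3307, \
   MASTER_USER='slave', MASTER_PASSWORD='123456', \
   MASTER_LOG_FILE='mall-mysql-bin.000001', MASTER_LOG_POS=154; \
   START SLAVE; SHOW SLAVE STATUS\G"
```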


Developers should learn Docker in structured layers

A recommended learning path is to first understand the three-part model of images, containers, and registries; then master core commands such as run, exec, ps, images, and rm; then move on to volumes, networking, Dockerfile, and Compose; and finally expand into private registries, monitoring, Swarm, and database deployment.

If your goal is application delivery rather than low-level internals, prioritize image building, port mapping, volume mounting, Compose orchestration, and log troubleshooting. These areas provide the highest return.

FAQ provides structured answers to common Docker questions

Why is Docker better suited than virtual machines for development and deployment?

Because containers share the host kernel and do not require a full operating system, they start quickly, use fewer resources, and deliver stronger environment consistency. That makes them a better fit for microservices and CI/CD scenarios.

Why is docker exec recommended over attach when entering a container?

docker exec -it starts a new interactive process inside the container, so exiting that shell leaves the main process untouched. docker attach connects directly to the main process, so an accidental exit or Ctrl-C is far more likely to stop the container itself.

When must you use data volumes?

You should use data volumes in any scenario involving databases, logs, uploaded files, or persistent configuration. Otherwise, data is lost when the container is removed, which does not meet production requirements.

Core summary distills the article into one practical takeaway

This article condenses a set of Docker study notes into a high-density technical reference. It systematically explains core container concepts, common commands, data volumes, private registries, Dockerfile, Compose, Swarm, and monitoring strategies to help developers quickly build a deployable and scalable mental model of containerization.