Built for Docker operations on Linux servers, this guide explains how to back up, compress, restore, and migrate MySQL and PostgreSQL databases from the command line. It addresses data protection in environments without a graphical interface. Keywords: Docker backup, MySQL restore, PostgreSQL dump.
Technical Specifications at a Glance
| Parameter | Description |
|---|---|
| Runtime Environment | Linux / Docker |
| Databases | MySQL, PostgreSQL |
| Backup Method | In-container command execution + host-side redirection |
| Common Tools | docker exec, mysqldump, mysql, pg_dump, pg_restore, psql, gzip |
| Article Type | Command-line operations guide |
| Core Dependencies | Docker Engine, gzip, official database client tools |
This backup baseline works well for server environments
On Linux servers, many environments do not include graphical database tools. In containerized deployments, entering the container and running the official backup utilities directly is often more reliable. Compared with connecting from the host over the network, the docker exec approach is less affected by ports, permissions, and network policies.
In production, the goal of a backup is not just to export an SQL file. You also need to verify that data, schema, triggers, stored procedures, and the restore path are all recoverable. MySQL and PostgreSQL use different toolchains, and their restore workflows are not interchangeable.
Running backup commands inside the container is usually the best choice
# Run official database tools inside the container to avoid missing clients or network issues on the host
docker exec <container_name> <db_dump_command> > /backup/output.sql
The main benefit of this pattern is version alignment: the database tools match the container image, which reduces export compatibility issues.
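A quick way to see this alignment in practice is to print the tool versions that ship with the image; a minimal sketch, assuming the container name mysql used in the next section:
# Confirm the dump and client tools bundled with the image match the server version
docker exec mysql mysqldump --version
docker exec mysql mysql --version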
MySQL in Docker supports reliable logical backups with mysqldump
Assume the container name is mysql and the database name is biz_db. A logical backup is usually the best starting point because it is easy to restore across machines and containers, and it works well in scheduled tasks.
A standard backup works well for routine full exports
# Export a single database and use transaction-consistent options to reduce table locking impact
docker exec mysql \
mysqldump -u root -p'db_password' \
--single-transaction --quick \
biz_db > /home/soft/mysql/backup/biz_db_$(date +%F).sql
This command creates a date-stamped SQL file on the host, which makes it suitable for daily archiving.
Compressed backups are better for long-term storage and transfer
# Compress the exported output directly to significantly reduce backup size
docker exec mysql \
mysqldump -u root -p'db_password' biz_db \
| gzip > /home/soft/mysql/backup/biz_db_bak.sql.gz
This method reduces disk usage and works well for object storage uploads or offsite synchronization.
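Before shipping a compressed backup offsite, a quick integrity check is cheap insurance; this sketch assumes the path from the example above:
# Verify the gzip stream is readable end to end without extracting it
gzip -t /home/soft/mysql/backup/biz_db_bak.sql.gz && echo "archive OK"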
Production backups should explicitly include procedural objects
# Back up stored procedures, triggers, and events together to preserve business logic after restore
docker exec mysql \
mysqldump -u root -p'db_password' \
--routines --triggers --events \
--single-transaction biz_db \
| gzip > /home/soft/mysql/backup/biz_db_full.sql.gz
This is a more production-ready logical backup approach.
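To spot-check that procedural objects actually made it into the dump, you can grep the compressed file; the pattern below is a rough heuristic, not an exhaustive verification:
# Count CREATE statements for procedural objects inside the compressed dump
zgrep -cE 'CREATE.*(PROCEDURE|FUNCTION|TRIGGER|EVENT)' /home/soft/mysql/backup/biz_db_full.sql.gz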
MySQL restore workflows should match the backup format
At its core, a restore operation feeds the SQL stream back into the mysql client. If the target database does not exist yet, create it first and then run the import.
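If the target database is missing, a minimal sketch for creating it first (the character set is an assumption; match it to the source instance):
# Create the target database before importing, if it does not exist yet
docker exec mysql \
mysql -u root -p'db_password' \
-e "CREATE DATABASE IF NOT EXISTS biz_db CHARACTER SET utf8mb4;"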
Restoring from a plain SQL file is the most direct option
# Import SQL content into the target database through standard input
cat /home/soft/mysql/backup/biz_db.sql | docker exec -i mysql \
mysql -u root -p'db_password' biz_db
This command works well for test environments or quick restores of smaller databases.
Restoring from a compressed file avoids manual decompression
# Decompress and restore in a single stream to avoid intermediate files
gunzip < /home/soft/mysql/backup/biz_db.sql.gz | docker exec -i mysql \
mysql -u root -p'db_password' biz_db
This is a good fit for servers with limited disk space.
Restoring directly into a new container is useful for migration and drills
# Start a new container first, then import the compressed backup into the new instance
docker run -d --name mysql-new \
-e MYSQL_ROOT_PASSWORD=db_password \
mysql:8.0
# Wait until the database has started, create the target database, then run the restore
docker exec mysql-new mysql -uroot -p'db_password' -e "CREATE DATABASE biz_db;"
gunzip < biz_db.sql.gz | docker exec -i mysql-new \
mysql -uroot -p'db_password' biz_db
This workflow is common in version migrations, disaster recovery drills, and new environment rebuilds.
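The "wait until the database has started" step hides a real failure mode: an import launched too early simply fails to connect. A small polling loop, as a sketch, makes the wait explicit:
# Poll until the new instance accepts connections before running the import
until docker exec mysql-new mysqladmin ping -uroot -p'db_password' --silent; do
  sleep 2
done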
PostgreSQL in Docker benefits from separating SQL backups and custom dump backups
Assume the container name is postgres and the database name is biz_db. PostgreSQL commonly uses two backup formats: plain SQL and custom-format dumps. The custom format is more flexible for granular restores with pg_restore.
A standard backup should prefer the custom format
# Generate a custom-format backup file inside the container for later restore with pg_restore
docker exec postgres \
pg_dump -U postgres -d biz_db \
-F c -f /tmp/biz_db.dump
docker cp postgres:/tmp/biz_db.dump /backup/
This format balances compression and restore flexibility, which makes it a strong choice for production.
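A side benefit of the custom format is that it can be inspected before any restore; a minimal sketch, assuming the in-container path from the example above:
# Print the dump's table of contents without restoring anything
docker exec postgres pg_restore -l /tmp/biz_db.dump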
You can also write a plain SQL file directly on the host
# Export standard SQL to a file on the host
docker exec postgres \
pg_dump -U postgres biz_db \
> /backup/biz_db.sql
This approach is simple and transparent, and it works well for reviewing SQL or handling small migrations.
Compressed backups are well suited for archival storage
# Compress the exported output directly for long-term retention
docker exec postgres \
pg_dump -U postgres biz_db \
| gzip > /backup/biz_db.sql.gz
This reduces backup size, but you still restore it as an SQL stream.
PostgreSQL restore methods must strictly match the backup format
Plain SQL backups should be imported with psql, while custom-format dumps should be restored with pg_restore. This is the most important distinction in PostgreSQL restore workflows.
Use psql to restore SQL files
# Import an SQL file into the target database
cat /backup/biz_db.sql | docker exec -i postgres \
psql -U postgres -d biz_db
This method is appropriate for backups generated by pg_dump in plain-text mode.
Custom dump restores should use pg_restore
# Restore a custom-format dump file
cat /backup/biz_db.dump | docker exec -i postgres \
pg_restore -U postgres -d biz_db
This is the more common restore method in PostgreSQL production environments.
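Note that pg_restore cannot parallelize when reading from standard input; it needs a seekable file. For large databases, a sketch that copies the dump into the container and restores with parallel jobs (the --clean --if-exists flags and the job count are assumptions; tune them to your environment):
# Copy the dump into the container so pg_restore can seek in it, then restore with 4 parallel workers
docker cp /backup/biz_db.dump postgres:/tmp/biz_db.dump
docker exec postgres \
pg_restore -U postgres -d biz_db --clean --if-exists -j 4 /tmp/biz_db.dump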
Compressed SQL restores also support streaming imports
# Decompress and restore a PostgreSQL backup in a single stream
gunzip < /backup/biz_db.sql.gz | docker exec -i postgres \
psql -U postgres -d biz_db
This approach avoids creating extra temporary files and is useful for bulk restore tasks.
A production backup strategy must cover both recoverability and security
First, never write the MySQL password as -p password with a space: the client will prompt interactively and treat the next argument as a database name. Attach it directly, as in -p'password', and prefer an option file for automation, since command-line passwords also leak into shell history. Second, create a dedicated backup account instead of using root or a superuser for long-running automated tasks, as sketched below.
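A sketch of that dedicated account for MySQL. The privilege list is an assumption and varies by server version and dump options; MySQL 8 typically also needs PROCESS unless you pass --no-tablespaces:
# Create a least-privilege backup account (privilege list is an assumption; adjust to your setup)
docker exec mysql mysql -u root -p'db_password' -e \
"CREATE USER 'backup'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER, PROCESS ON *.* TO 'backup'@'%';"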
Also, a completed backup job does not mean the work is done. You should regularly validate restores in a new container or test environment to confirm that backup files are not corrupted, privilege-related objects are intact, and business tables can be queried normally. This matters more than simply having a backup.
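A minimal restore drill might look like the following sketch; the container name, image tag, and the sanity-check table are assumptions:
# Spin up a throwaway instance, import the latest backup, run a sanity query, then clean up
docker run -d --name mysql-verify -e MYSQL_ROOT_PASSWORD=db_password mysql:8.0
until docker exec mysql-verify mysqladmin ping -uroot -p'db_password' --silent; do sleep 2; done
docker exec mysql-verify mysql -uroot -p'db_password' -e "CREATE DATABASE biz_db;"
gunzip < /home/soft/mysql/backup/biz_db_full.sql.gz \
| docker exec -i mysql-verify mysql -uroot -p'db_password' biz_db
docker exec mysql-verify mysql -uroot -p'db_password' \
-e "SELECT COUNT(*) FROM biz_db.orders;"  # 'orders' is a placeholder; query a real business table
docker rm -f mysql-verify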
Example script for scheduled backup jobs
#!/bin/bash
# Run a compressed MySQL backup every day and name it by date
BACKUP_DIR=/home/soft/mysql/backup
DATE=$(date +%F)
mkdir -p ${BACKUP_DIR} # Ensure the backup directory exists
docker exec mysql \
mysqldump -u backup -p'your_password' --single-transaction biz_db \
| gzip > ${BACKUP_DIR}/biz_db_${DATE}.sql.gz
You can plug this script directly into crontab to create a minimal viable automated backup pipeline.
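For example, a crontab entry along these lines (the script path and schedule are assumptions) runs the job nightly at 02:00 and appends output to a log:
# m h dom mon dow  command
0 2 * * * /home/soft/mysql/backup/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1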
FAQ
1. Why is docker exec recommended for Docker database backups instead of connecting directly from the host?
Because the tools inside the container match the database instance version, and the process does not depend on host networking, port mappings, or locally installed clients. In most cases, this makes the workflow more stable.
2. Why should MySQL backups include --routines --triggers --events?
Because a default export may not fully include stored procedures, triggers, and event objects. If these objects are missing during a production restore, your application logic may be incomplete.
3. How should pg_dump and pg_restore work together in PostgreSQL?
If you generate a custom-format file with pg_dump -F c, restore it with pg_restore. If you export plain SQL, import it with psql. The backup format and restore tool must match.
Core summary: This guide reconstructs practical command-line backup and restore workflows for MySQL and PostgreSQL in Docker. It covers standard exports, compressed backups, complete backups of schema and procedural objects, SQL and dump restores, and migration into new containers. It is well suited for Linux server environments without a graphical interface.