MySQL Parallel Dump
Multi-threaded mysqldump is no longer a utopia. mysqlpdump can dump all your tables and databases in parallel, so it can be much faster on systems with multiple CPUs.
By default it stores each table in a separate file. It can also write the dump to stdout, although this is not recommended: if your tables are big, buffering the output can use all the memory in your system.
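The idea can be sketched in a few lines of Python: build one mysqldump command per table and run them concurrently, writing each table to its own file. This is only an illustration of the approach, not mysqlpdump's actual code; the function names and the `dest_dir` parameter are mine, and it assumes `mysqldump` is on the PATH.

```python
# Illustrative sketch of a parallel per-table dump (not mysqlpdump's real code).
import subprocess
from concurrent.futures import ThreadPoolExecutor

def dump_command(database, table, user, password, extra=()):
    """Build the mysqldump argv for a single table."""
    return ["mysqldump", "-u", user, f"-p{password}", *extra, database, table]

def dump_table(database, table, user, password, dest_dir="."):
    """Run one mysqldump process, writing the table to its own .sql file."""
    path = f"{dest_dir}/{database}.{table}.sql"
    with open(path, "wb") as out:
        subprocess.run(dump_command(database, table, user, password),
                       stdout=out, check=True)
    return path

def parallel_dump(tables, user, password, dest_dir=".", threads=4):
    """Dump (database, table) pairs concurrently, one worker per table."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(dump_table, db, tbl, user, password, dest_dir)
                   for db, tbl in tables]
        return [f.result() for f in futures]
```

Because each table goes to its own process and file, the work parallelizes naturally and no single dump has to be buffered in memory.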
Here is my effort to implement some of those suggestions.
Simplest usage (will save a file for each table):
mysqlpdump.py -u root -p password
Save compressed (gzip) files to /tmp/dumps and pass "--skip-opt" through to mysqldump:
mysqlpdump.py -u root -p password -d /tmp/dumps/ -g -P "--skip-opt"
Output to stdout and use 20 threads:
mysqlpdump.py -u root -p password -stdout -t 20
Be more verbose:
mysqlpdump.py -u root -p password -v
Exclude the "mysql" and "test" databases from the dump:
mysqlpdump.py -u root -p password -e mysql -e test
Only dump the "mysql" database:
mysqlpdump.py -u root -p password -i mysql
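One plausible reading of the -i/-e flags above is a simple filter over the database list: if any -i values are given, keep only those, then drop anything named by -e. The function below is my own sketch of that behavior, not mysqlpdump's actual implementation.

```python
# Sketch of -i (include) / -e (exclude) database filtering (my assumption
# about the semantics, not the script's real code).
def select_databases(all_dbs, include=(), exclude=()):
    """Keep only the included databases (if any were named), minus exclusions."""
    if include:
        picked = [db for db in all_dbs if db in include]
    else:
        picked = list(all_dbs)
    return [db for db in picked if db not in exclude]
```

For example, `select_databases(["mysql", "test", "shop"], exclude=["mysql", "test"])` keeps only `shop`, matching the exclusion example above.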
Links:
- mysqlpdump at Freshmeat
- Original article at the MySQL Performance Blog
- mysql-paralel-dump (a similar script from the author of MySQL Toolkit)
Changelog:
- Compress the 00_master_data.sql file if specified
- Bugfix: when run without a terminal or a logged-in user, it uses "nobody"
- Bugfix: the destination option now works with 00_master_data.sql
- Made it compatible with Python 2.4
- Can include and exclude specified databases
- Fixed a bug that prevented tables from being dumped because of a lock
- Added --master-data option to write the "CHANGE MASTER TO" statement
- Store dumps directly to files instead of to stdout
- Can compress files
- Dump each table in its own file
- Can pass parameters directly to mysqldump
- First version
mysqlpdump is released under the GNU GPL license.