I'm trying to dump all tables of a specific MySQL database on a remote server into one file per table.
I tried to use mysqldump with the option --tab=dir_name to set the output directory, but it seems to work only locally. When I use --host=remote_db_ip to connect to the remote DB server, it produces only .sql files (containing just the table structure) on my server, and throws the error below, because it uses SELECT ... INTO OUTFILE, which looks for the output path on the remote server machine.
mysqldump: Got error: 1: Can't create/write to file '/output/path/table_name.txt' (Errcode: 2 - No such file or directory) when executing 'SELECT INTO OUTFILE'
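For reference, the command I run is roughly of this form (user, database name, and output path are placeholders):

mysqldump --host=remote_db_ip --user=me -p --tab=/output/path my_database

The .sql structure files are written by the mysqldump client on my machine, while the .txt data files come from SELECT ... INTO OUTFILE, which the server executes against its own filesystem.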
P.S. I know one workaround: dump on the remote DB server first, then transfer the output back to my server. But the target database is huge, and the remote DB server does not have enough spare disk space to hold the dump.
MySQL version: 5.6
2 Answers
You could temporarily set up an sshfs mount of the target directory and use --tab=dir_name on that mounted directory.
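A minimal sketch, assuming the DB host can reach your machine over SSH and has sshfs/FUSE available (hosts, users, and paths are placeholders):

# On the remote DB host: back its output directory with a directory on your machine
sshfs me@my_host:/data/dump /output/path

# On your machine: the server-side SELECT ... INTO OUTFILE now writes onto the mount
# (the same /output/path must also exist locally, for the .sql structure files)
mysqldump --host=remote_db_ip --user=me -p --tab=/output/path my_database

# On the remote DB host, when finished:
fusermount -u /output/path

Because the mount is backed by your machine's disk, the data files never consume space on the DB server itself.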
Use information_schema.TABLES to generate one of these for each table:

mysqldump ... -h source_host | mysql ... -h target_host

Then copy the generated commands and execute them.
This uses a "pipe" instead of a disk file between the two commands, thereby avoiding what seems to be a permission problem.
I'm dumping them for archival purposes, so there is no need to execute them right away. Also, the issue is the wrong machine being used for the output path, not a permission problem. Thanks anyway. – Myles, Feb 12, 2018 at 2:15