
I am creating MySQL dumps with mysqldump in order to version a database. The current problem is a table with BLOBs.

My first approach is to dump each table to its own file. But for this specific table, I additionally want to dump each row to its own file.

How can I accomplish this with mysqldump?

asked Dec 30, 2017 at 19:29

2 Answers


There are three things you need to do:

  1. Provide a list of the tables you want to dump row by row
  2. Use --skip-extended-insert, which writes one INSERT statement per row
  3. Use --hex-blob, so BLOB values are dumped as hex literals

Here is such a script:

MYSQL_USER=root
MYSQL_PASS=rootpass
MYSQL_CONN="-h127.0.0.1 -P3306 --protocol=tcp -u${MYSQL_USER} -p${MYSQL_PASS}"
DB_TO_DUMP=mydb
TBLISTFILE=/tmp/tables_in_${DB_TO_DUMP}.txt
TABLES_TO_SPLIT_BY_ROW="tb1 tb2 tb3"
SQL="SELECT table_name FROM information_schema.tables"
SQL="${SQL} WHERE table_schema='${DB_TO_DUMP}'"
mysql ${MYSQL_CONN} -ANe"${SQL}" > "${TBLISTFILE}"
TBLIST=$(cat "${TBLISTFILE}")
DT=$(date +"%Y%m%d%H%M%S")
for TB1 in ${TBLIST}
do
    # Use row-by-row options only for tables listed in TABLES_TO_SPLIT_BY_ROW
    ROW_BY_ROW=0
    for TB2 in ${TABLES_TO_SPLIT_BY_ROW}
    do
        if [ "${TB1}" = "${TB2}" ] ; then ROW_BY_ROW=1 ; fi
    done
    MYSQLDUMP_OPTIONS=""
    if [ ${ROW_BY_ROW} -eq 1 ]
    then
        MYSQLDUMP_OPTIONS="--hex-blob --skip-extended-insert"
    fi
    DUMPFILE=${TB1}_${DT}.sql
    mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB_TO_DUMP} "${TB1}" > "${DUMPFILE}"
done

Just supply the database name in DB_TO_DUMP, and list the tables whose rows should be dumped one by one in the TABLES_TO_SPLIT_BY_ROW variable.

Note that the dump file name has a datetime appended to the table name, so you can do versioning based on the datetime. You can change that to version the dumps any way you wish.

You did ask that each row end up in its own file; I will leave that part for you to script.
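For the last step, a minimal sketch is to read the one-INSERT-per-row dump and write each INSERT to a numbered file (the sample dump contents and the row_NNNNNN.sql naming here are my own illustration, not part of the answer above):

```shell
# Fake two-row dump, standing in for one produced by the loop above
# with --skip-extended-insert (one INSERT statement per row):
printf 'INSERT INTO tb1 VALUES (1,0x41);\nINSERT INTO tb1 VALUES (2,0x42);\n' > tb1_dump.sql

OUTDIR=tb1_dump.d
mkdir -p "$OUTDIR"
n=0
# Write each INSERT line to its own sequentially numbered file
grep '^INSERT' tb1_dump.sql | while IFS= read -r line ; do
    n=$((n+1))
    printf '%s\n' "$line" > "$OUTDIR/$(printf 'row_%06d' "$n").sql"
done
ls "$OUTDIR"
```

Numbering by position is the simplest scheme; naming by primary key, as the comment below asks about, needs parsing of the value list (see the other answer).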

answered Dec 30, 2017 at 23:28
  • Okay, I am using this to split the file: split -l1 -a10 -d ../belege.sql. One small problem remains: I should really name each file after the primary key in the dump. Commented Jan 15, 2018 at 17:22

This is what I ended up doing to split the single-table dump into one file per row, named by ID:

awk -F',' '/^INSERT/ { print > gensub(".*\\(", "", "G", $15) ".sql" }' ../big.sql

My table has around 15 fields, so this takes the 15th comma-separated field and strips everything up to the opening parenthesis, leaving just the ID as the file name. There are more elegant ways to do this; it works only because my identifiers are numeric.

This is the full script. I happen to have only one such table, so it omits the per-table logic from the other answer:

#!/bin/bash
DBNAME=example
BIGTABLE=big
# Dump everything except the big table, stripping DEFINER clauses
mysqldump $DBNAME --single-transaction --quick --complete-insert --skip-extended-insert --ignore-table=$DBNAME.$BIGTABLE | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > sql/master.sql
rm -rf sql/master.d
mkdir sql/master.d
# Dump the big table on its own: one INSERT per row, BLOBs as hex
mysqldump $DBNAME --hex-blob --single-transaction --quick --complete-insert --skip-extended-insert $BIGTABLE | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > sql/master.d/$BIGTABLE.sql
cd sql/master.d/
mkdir $BIGTABLE.d
cd $BIGTABLE.d
# Split into one file per row, named after the 15th comma-separated field (the ID)
awk -F',' '/^INSERT/ { print > gensub(".*\\(", "", "G", $15) ".sql" }' ../$BIGTABLE.sql
rm ../$BIGTABLE.sql
answered Jan 15, 2018 at 17:35
