I am in a bit of an awkward situation right now. I have a database with a few large InnoDB tables, one of which is about 145 GiB. I have been running a lot of maintenance on this table in particular, not realizing that a lot of that activity compounded in the ibdata1 file. Now the ibdata1 file has grown so much that it has used up all of the disk space on my dedicated SQL SSD.
I have read up on the matter and I know that I have to reload the database into the SQL server for it to shrink the ibdata1 file. However, the mysqldump I took of my database is suspiciously small (about two-thirds of my database size, uncompressed), so what I want to do is load the dump file into a test database to check whether there are any issues with it. The problem is that I have no disk space left on the drive to do so (a vicious cycle).
So, what I would like to do is move the ibdata1 file to my main HDD, which has plenty of space, and then add a line to the MySQL configuration file pointing at the new location of the ibdata1 file, so I can free up enough space to load a dummy database. Can I do this? And if so, how?
MySQL's location: /mysql
ibdata1 location: /tmpsql/ibdata1 (I can move this to, say, /var/lib/mysql if that would be better).
I already set innodb_file_per_table in the configuration file before it used up all my disk space. I have moved the ib_logfile* files as well to the same directory as ibdata1. The ibdata1 file was still used and extended regardless of this setting.
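For reference, relocating the system tablespace and redo logs is normally done by pointing MySQL at their new home directories and then restarting. A minimal my.cnf sketch, assuming the paths from the question (stop mysqld before moving the files, and make sure the mysql user owns the new directory):

```ini
[mysqld]
# Hypothetical paths -- adjust to your own layout.
innodb_data_home_dir     = /tmpsql
innodb_data_file_path    = ibdata1:10M:autoextend
innodb_log_group_home_dir = /tmpsql
```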
Please let me know if you need any additional information. Thanks in advance!
1 Answer
You should be very happy the mysqldump is two-thirds the size of the data. Why?
You want to move the data and indexes without copying the unused space along with them. What aspects come into play when you simply import a mysqldump?
Aspect #1
InnoDB indexes tend to get very bloated because every secondary index entry carries a copy of the primary key, pointing back into the Clustered Index. Every secondary-index lookup is therefore a double lookup (secondary index first, then clustered index), and in large, write-heavy datasets the extra key data and page splits make the indexes swell; this bloat is easy to overlook.
Aspect #2
InnoDB tablespaces tend to get very bloated because MVCC creates and discards old row versions without automatically reclaiming the disk space:
- With innodb_file_per_table disabled, the system tablespace file ibdata1 steadily grows with zero chance of reclaiming disk space. You would have to enable innodb_file_per_table and rebuild each table to move its data out of ibdata1.
- With innodb_file_per_table enabled, you can run either of the following against an individual InnoDB table, e.g. mydb.mytable, to physically shrink it:
OPTIMIZE TABLE mydb.mytable;
ALTER TABLE mydb.mytable ENGINE=InnoDB;
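To decide which tables are worth rebuilding first, you can estimate the per-table footprint from information_schema. A sketch (the byte counts are estimates, and data_free only meaningfully reflects reclaimable space per table once innodb_file_per_table is in use):

```sql
-- Approximate on-disk footprint of InnoDB tables, largest first.
SELECT table_schema, table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free    / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE engine = 'InnoDB'
ORDER BY (data_length + index_length) DESC;
```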
In light of this, keep in mind that a mysqldump is a logical representation of the data and indexes. There is no wasted space to be concerned with. You should not have to scp or rsync the bloated InnoDB data or logs. Importing a mysqldump into a fresh installation of MySQL is exactly what you want.
I have written about this many times:
- MySql - Clean ibdata1 (April 19, 2012)
- How do you remove fragmentation from InnoDB tables? (April 11, 2012)
- Is innodb_file_per_table advisable? (April 01, 2012)
- How to Clean a mysql innodb storage engine (October 29, 2010)
Once you have loaded the test database from the mysqldump, run CHECKSUM TABLE on a random sample of tables of various sizes. If each table's checksum on the test server matches its checksum on the production server, you can certify the dump as good. Make a disk-level backup of the data on the prod server as well.
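For example, run the same statement on both servers and compare the Checksum column row by row (table names here are placeholders):

```sql
-- Run on both prod and test; the values must match pairwise.
CHECKSUM TABLE mydb.mytable1, mydb.mytable2, mydb.mytable3;
```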
You can then do this in production
- shutdown mysql
- hose everything (including ibdata1, ib_logfile0, ib_logfile1) in /var/lib/mysql except for /var/lib/mysql/mysql
- set innodb_data_file_path=ibdata1:10M:autoextend in /etc/my.cnf
- restart mysql (this recreates ibdata1, ib_logfile0, and ib_logfile1)
- reload mysqldump
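The steps above can be sketched as a shell session. This is only an illustration, assuming /var/lib/mysql as the datadir and dump.sql as the mysqldump output; adjust paths and service names, and double-check the find command before running anything destructive:

```shell
service mysql stop
# Remove everything in the datadir except the mysql system schema.
find /var/lib/mysql -mindepth 1 -maxdepth 1 ! -name mysql -exec rm -rf {} +
# ...edit /etc/my.cnf to set innodb_data_file_path=ibdata1:10M:autoextend...
service mysql start              # recreates ibdata1, ib_logfile0, ib_logfile1
mysql -u root -p < dump.sql      # reload the mysqldump
```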
Your InnoDB architecture and data will then be lean and mean.
- Thanks for your post. I resolved the situation. Things run a lot smoother now that I set it to use a per-table tablespace. – Battle_707, Jul 5, 2012 at 23:10
- @Battle_707 if you'll log in to this site using the same credentials you used on Stack Overflow, you will associate both accounts and start on this site with a +100 rep bonus for your contributions on SO. – jcolebrand, Jul 6, 2012 at 0:10
- @jcolebrand: I see. Thank you for the information! – Battle_707, Jul 6, 2012 at 17:24