I have a MySQL InnoDB database with two tables related by a foreign key. Each time a row is inserted into one table, an INSERT ... SELECT statement reads the other table to obtain the corresponding foreign key value.
I have an application that is continuously executing a large number of these INSERT ... SELECT statements to populate this database. I have read that a multiple-row insert, i.e. INSERT INTO table (col1, col2, col3) VALUES (1,2,3), (4,5,6), ..., can be significantly faster than individual inserts. Unfortunately, it appears that splitting the data into separate tables prevents me from taking advantage of this. Is there a workaround, or another way to improve the speed of these inserts?
EDIT:
Would any of the following be likely to help?
- Combining the INSERT ... SELECT statements with a UNION.
- A multiple-row insert into a temporary table followed by a JOIN and a single INSERT ... SELECT.
- Splitting the inserts among multiple threads.
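The second option above can be sketched as follows. This is a minimal illustration, not a tested solution: the table and column names (parent, child, staging, name, value) are hypothetical stand-ins for your actual schema.

```sql
-- Hypothetical schema: parent(id, name) and child(parent_id, value),
-- where child.parent_id is a foreign key referencing parent.id.

-- 1. Multi-row insert into a temporary staging table:
CREATE TEMPORARY TABLE staging (name VARCHAR(64), value INT);
INSERT INTO staging (name, value) VALUES
    ('a', 1), ('b', 2), ('c', 3);  -- ... up to a few hundred rows per statement

-- 2. Resolve all foreign keys with one JOIN and a single INSERT ... SELECT:
INSERT INTO child (parent_id, value)
SELECT p.id, s.value
FROM staging AS s
JOIN parent AS p ON p.name = s.name;

DROP TEMPORARY TABLE staging;
```

This replaces N single-row INSERT ... SELECT statements with one multi-row insert plus one set-based insert, which is usually where the speedup comes from.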
1 Answer
SUGGESTIONS
- Increase the size of your Log Buffer. The default value for innodb_log_buffer_size is 8M. You should make it 256M or 512M. (Restart Required)
- Turn off the Double Write Buffer. Set innodb_doublewrite to 0. (Restart Required)
- OPTIONAL : Temporarily disable change buffering, which throttles secondary index updates.
- OPTIONAL : In your application, limit the number of rows per insert. Try inserting 500 or 1000 rows at a time. If the rows have long VARCHAR columns, then insert 100 or 200 rows at a time.
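The last suggestion might look like this in practice. A sketch only: table and column names are placeholders, and wrapping each batch in an explicit transaction avoids a separate commit per statement when autocommit is on.

```sql
-- Batch of ~500-1000 rows per statement (values shown are placeholders):
START TRANSACTION;
INSERT INTO child (parent_id, value) VALUES
    (1, 10),
    (1, 11),
    (2, 12)
    /* ... remaining rows of this batch ... */ ;
COMMIT;
```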
SUMMARY
Add these lines to my.cnf
[mysqld]
innodb_log_buffer_size = 256M
innodb_doublewrite = 0
then restart mysqld.
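After the restart, you can confirm the new settings took effect from any MySQL client session:

```sql
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
SHOW VARIABLES LIKE 'innodb_doublewrite';
```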
If these do not improve things, then try disabling the change buffering with
SET GLOBAL innodb_change_buffering = 'none';
You could always change it back to 'all' after a number of large bulk inserts.
GIVE IT A TRY !!!
- Thank you for the suggestions, I will experiment with these. Do you know if there is any way I can better restructure the queries themselves? – Patrick, May 3, 2014 at 0:57