Is there a way to make MySQL slow logs start a new log file every day? At the moment it's just a single large file, and I have to grep lines for every day. It would be much more convenient to have a separate file for each day's slow log.
Do I have to configure my.cnf or some Linux feature?
Everyone is used to this one, the good old text file.
Just run the following every day to rotate the slow log:
STEP 01) Turn off the slow query log
SET GLOBAL slow_query_log = 'OFF';
STEP 02) Copy the text file
cat slow-query.log | gzip > /logs/slow-query-`date +"%Y%m%d-%H%M"`.log.gz
STEP 03) Truncate the file to zero bytes
echo -n > slow-query.log
STEP 04) Turn on the slow query log
SET GLOBAL slow_query_log = 'ON';
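The four steps above can be combined into one script. The sketch below exercises only the file-handling half (steps 02 and 03) on a stand-in file so it can run anywhere; on a real server you would point LOG at your actual slow-query.log and uncomment the mysql calls for steps 01 and 04 (paths here are assumptions, not the server's defaults).

```shell
#!/bin/sh
# Rotation sketch on a stand-in file; adjust LOG for a real server.
LOG=slow-query.log
echo "demo slow-log entry" > "$LOG"                        # stand-in content

# mysql -e "SET GLOBAL slow_query_log = 'OFF';"            # STEP 01
gzip < "$LOG" > "slow-query-$(date +%Y%m%d-%H%M).log.gz"   # STEP 02: archive
: > "$LOG"                                                 # STEP 03: truncate to zero bytes
# mysql -e "SET GLOBAL slow_query_log = 'ON';"             # STEP 04
```

`: > file` is a portable equivalent of the `echo -n > file` truncation used above.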
You could switch to log-output=TABLE and deal with the slow log as a table you can query.
STEP 01) Convert mysql.slow_log from CSV to MyISAM
ALTER TABLE mysql.slow_log ENGINE = MyISAM;
STEP 02) Index the table
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
STEP 03) Activate log format to be TABLE
[mysqld]
log-output=TABLE
STEP 04) service mysql restart
Once mysqld starts up, slow log entries are recorded in the MyISAM table mysql.slow_log.
To rotate out the entries before midnight, you could run something like this:
SET GLOBAL slow_query_log = 'OFF';
SET @dt = NOW();
SET @dtstamp = DATE_FORMAT(@dt,'%Y%m%d_%H%i%S');
SET @midnight = DATE(@dt) + INTERVAL 0 SECOND;
ALTER TABLE mysql.slow_log RENAME mysql.slow_log_old;
CREATE TABLE mysql.slow_log LIKE mysql.slow_log_old;
INSERT INTO mysql.slow_log SELECT * FROM mysql.slow_log_old WHERE start_time >= @midnight;
DELETE FROM mysql.slow_log_old WHERE start_time >= @midnight;
SET @sql = CONCAT('ALTER TABLE mysql.slow_log_old RENAME mysql.slow_log_',@dtstamp);
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
SET GLOBAL slow_query_log = 'ON';
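To run that rotation nightly from cron, the statements can be written to a file and piped into the mysql client. This is only a sketch: the script name, file path, and the assumption that credentials live in ~/.my.cnf are mine, and the final mysql invocation is left commented so the sketch can be inspected without a server.

```shell
#!/bin/sh
# Write the table-rotation SQL to a file, then feed it to mysql nightly.
cat <<'SQL' > rotate_slow_log.sql
SET GLOBAL slow_query_log = 'OFF';
SET @dt = NOW();
SET @dtstamp = DATE_FORMAT(@dt,'%Y%m%d_%H%i%S');
SET @midnight = DATE(@dt) + INTERVAL 0 SECOND;
ALTER TABLE mysql.slow_log RENAME mysql.slow_log_old;
CREATE TABLE mysql.slow_log LIKE mysql.slow_log_old;
INSERT INTO mysql.slow_log SELECT * FROM mysql.slow_log_old WHERE start_time >= @midnight;
DELETE FROM mysql.slow_log_old WHERE start_time >= @midnight;
SET @sql = CONCAT('ALTER TABLE mysql.slow_log_old RENAME mysql.slow_log_',@dtstamp);
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
SET GLOBAL slow_query_log = 'ON';
SQL
# mysql < rotate_slow_log.sql    # e.g. from cron: 55 23 * * * /path/to/this.sh
```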
and that's all for slow logs...
- Cheers for the useful and quick reply, as always. My only concern is about FLUSH LOGS on servers with a replication slave or master. Will that influence bin-logs or relay-logs? – Katafalkas, Apr 3, 2012 at 8:54
- FLUSH LOGS will only close the current binary log and open a new one. If you have replication, the slave will keep pace with the master's log rotation. – RolandoMySQLDBA, Apr 3, 2012 at 21:44
- I tried both this script and logrotate, and neither of them seemed to work. It turned out that FLUSH LOGS did not work. I started a new question on that: dba.stackexchange.com/questions/16339/… – Katafalkas, Apr 11, 2012 at 10:38
Update
As Aaron points out, there is a chance the copy-and-truncate approach can miss some entries, so the safer method is to move the file and then FLUSH.
Original
This article covers the basic principle of the rotation I use. Basically you copy the slow log to a new file, then truncate the contents of slow.log:
cp log/slow.log log/slow.log.`date +%F`; > log/slow.log
If you just move the slow log to a new file and create a new 'slow.log', it won't work, because the moved file still has the same inode, and mysql still has it open. I suppose moving the file and then issuing a FLUSH SLOW LOGS command would work, as that closes and reopens the file, but I find the copy-and-truncate to be just as effective, and it doesn't require logging into mysql.
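The inode point is easy to see for yourself with a scratch file (the file names below are made up for the demo):

```shell
#!/bin/sh
# mv preserves the inode, so an open file handle (e.g. mysqld's) follows
# the renamed file; cp/truncate keeps writing to the original inode.
echo x > slow.log
before=$(ls -i slow.log | awk '{print $1}')
mv slow.log slow.log.old
after=$(ls -i slow.log.old | awk '{print $1}')
echo "$before $after"     # same inode number twice
```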
His article mentions using logrotate in Linux, but I just made a cronjob to run once a day at midnight to do this for me.
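That midnight cronjob can be a single crontab line. This is a sketch with assumed paths (adjust to your datadir), and note that % must be escaped as \% inside a crontab entry:

```
# crontab -e: copy-and-truncate the slow log at midnight every day
0 0 * * * cp /var/lib/mysql/slow.log /var/lib/mysql/slow.log.$(date +\%F) && : > /var/lib/mysql/slow.log
```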
Also, to address the issue of replication on FLUSH LOGS:
FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK (with or without a table list) are not written to the binary log in any case because they would cause problems if replicated to a slave. [src]
So no, since those statements are not written to the binary log, they will not interfere with replication. For your purposes I would specify FLUSH SLOW LOGS to close and reopen only the slow query log.
- Copy and truncate can miss entries. mv + FLUSH LOGS will not. – Aaron Brown, Mar 30, 2012 at 15:06
- I agree with Aaron. Moving it is just some file-system shuffling, so it's quicker as well (assuming it's on the same FS). mysqld keeps the file pointer to the one you just mv'd until you flush logs. – atxdba, Mar 30, 2012 at 15:37
- Good points, both of you. Obviously in my environment I have not been bothered by the possibility of missing <1 second of slow logs, but to avoid a potential 'bite-me-in-the-***' I'll probably switch it myself. – Derek Downey, Mar 30, 2012 at 17:58
- But if you have replication happening, will the FLUSH LOGS influence it in some way? – Katafalkas, Apr 3, 2012 at 7:35
- @Katafalkas added a bit about replication. – Derek Downey, Apr 3, 2012 at 14:32
Use logrotate.d to rotate the files daily and keep as many days as you want, or move them off. Then issue a flush-logs from the same script to get MySQL to start a new file. Having that in logrotate, set to daily, should get you what you want.
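A minimal /etc/logrotate.d sketch along those lines (the log path is an assumption for a typical datadir, and the mysql client is assumed to read credentials from a .my.cnf; FLUSH SLOW LOGS in the postrotate makes mysqld reopen the renamed file):

```
/var/lib/mysql/slow.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
    postrotate
        /usr/bin/mysql -e "FLUSH SLOW LOGS;"
    endscript
}
```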
I am hoping someday they implement something similar to 'expire_logs_days' for debugging logs like the general log and slow log.