I have the MySQL slow log feature enabled: http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
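
For reference, enabling it amounts to a couple of lines in my.cnf like these (the path and threshold here are illustrative, not my exact values):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log   # illustrative path
long_query_time     = 1                               # seconds; log anything slower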

But sometimes the logged Query_time values are high simply because overall CPU load is high.

How can I append the current CPU load to each entry in the MySQL slow log (it writes to a file)?

asked Jan 13, 2012 at 16:10

2 Answers


While I don't know of a tool that provides this information specifically, Percona Server does have a setting (log_slow_verbosity=full) that enables extended statistics in the slow log, along with microsecond granularity. The data does not explicitly include CPU utilization, but it does provide a number of additional statistics about the internals of MySQL and InnoDB that may help you determine whether the problem is CPU-related or the result of poor optimization.
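
If switching to Percona Server is an option, this is a single setting in my.cnf, alongside the usual slow-log options (a sketch):

[mysqld]
log_slow_verbosity = full   # Percona Server only: extended per-query statistics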

This is an example of the additional information in the slow log:

# Time: 120114 6:34:33
# User@Host: user[user] @ [10.10.10.10]
# Thread_id: 28313080 Schema: mydb Last_errno: 0 Killed: 0
# Query_time: 0.588882 Lock_time: 0.000068 Rows_sent: 3 Rows_examined: 183839 Rows_affected: 0 Rows_read: 100
# Bytes_sent: 121 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 9903E4DB1
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 11359

Also, pt-query-digest from the Percona Toolkit understands this additional information; it will summarize the slow query log for you, producing per-query output that looks like this:

# Query 1: 0.09 QPS, 0.07x concurrency, ID 0x20112A1CCDBDBA4E at byte 4337645
# Scores: Apdex = 1.00 [1.0], V/M = 0.01
# Query_time sparkline: | ^ |
# Time range: 2012-01-14 06:34:33 to 08:51:57
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         63     775
# Exec time     59    539s   522ms   855ms   695ms   816ms    87ms   640ms
# Lock time     18    60ms    43us   498us    76us   103us    36us    66us
# Rows sent      1   3.15k       0      38    4.17   12.54    4.90    2.90
# Rows examine  22 135.93M 179.38k 179.90k 179.60k 174.27k       0 174.27k
# Rows affecte   0       0       0       0       0       0       0       0
# Rows read      0   3.43k       0     118    4.53   20.43   11.93    0.99
# Bytes sent     1 103.07k      82     576  136.18  246.02   63.85  118.34
# Merge passes   0       0       0       0       0       0       0       0
# Tmp tables     0       0       0       0       0       0       0       0
# Tmp disk tbl   0       0       0       0       0       0       0       0
# Tmp tbl size   0       0       0       0       0       0       0       0
# Query size     5 222.51k     294     294     294     294       0     294
# InnoDB:
# IO r bytes     0       0       0       0       0       0       0       0
# IO r ops       0       0       0       0       0       0       0       0
# IO r wait      0       0       0       0       0       0       0       0
# pages distin  63   8.44M  10.66k  12.72k  11.15k  11.34k  357.67  10.80k
# queue wait     0       0       0       0       0       0       0       0
# rec lock wai   0       0       0       0       0       0       0       0
# String:
# Databases mydb
# Hosts
# InnoDB trxID 9903E4DB1 (1/0%), 9903E4E09 (1/0%)... 773 more
# Last errno 0
# Users user
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms ################################################################
# 1s
# 10s+
# Tables
# SHOW TABLE STATUS FROM `mydb` LIKE 'table'\G
# SHOW CREATE TABLE `mydb`.`table`\G
# EXPLAIN /*!50100 PARTITIONS*/

...followed by the query itself.
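
Running the tool is as simple as pointing it at the slow log; the paths here are illustrative:

pt-query-digest /var/log/mysql/mysql-slow.log > slow-digest.txt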

I have blogged a bit about using pt-query-digest and other tools for finding performance bottlenecks (and promised more blogging, but haven't gotten around to it yet).

Two other tools that may help you are pt-stalk and pt-collect. They let you set a threshold on a running server variable, such as when the number of concurrently running threads (Threads_running) jumps up, and then collect a wealth of data about the running system, including CPU, I/O, MySQL statistics, and the running processlist, so you can perform a thorough post-mortem. A sketch of an invocation follows.
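
The threshold and destination below are illustrative; check pt-stalk --help for your version, since older releases paired pt-stalk with a separate pt-collect script:

pt-stalk --variable Threads_running --threshold 25 --dest /var/lib/pt-stalk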

The additional statistics are only available in Percona Server, but these tools will all work with MySQL GA.

answered Jan 14, 2012 at 14:02
  • OK, I removed it, even though I can't think of a use case where Percona Server wouldn't be an improvement over MySQL GA. I agree, though, that without justification it doesn't belong in that post. Commented Feb 13, 2012 at 16:41
  • I agree that pt-stalk/pt-collect will help you identify times of higher-than-normal load. Even if you can't inject that into the slow log, you can use it to cross-reference. Commented Feb 13, 2012 at 17:12

AFAIK you cannot configure MySQL to inject this information for you. What you could do is set up some other system monitoring, such as Nagios, to independently sample system values such as CPU utilization and load averages. Then you can correlate those graphs with the timestamps of your slow queries.
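
Even a minimal sampler would let you line load data up against slow-log timestamps. A sketch using cron (file names are illustrative; note that % must be escaped in cron entries):

# /etc/cron.d/loadavg-sample (hypothetical): record a timestamped
# load-average sample every minute for later cross-referencing.
* * * * * root echo "$(date '+\%Y-\%m-\%d \%H:\%M:\%S') $(cat /proc/loadavg)" >> /var/log/loadavg.log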

answered Jan 13, 2012 at 20:50
  • I do that already, but slow-log entries don't appear at constant intervals, which makes correlating them awkward. It would be far easier and more convenient if I could somehow append the CPU usage to the slow log file itself. Commented Jan 13, 2012 at 21:27
