
I'm trying to resolve an issue where one particular user hammers our MySQL 5.7 database every 2 hours, for a few minutes at a time. Not evil, but I suspect it may be inefficient (or it may not be, and it's just the way it is).

The mysql.general_log table gives me everything I want, but I prefer the ease of a file. Output to the file specified with general_log_file works fine, but it doesn't include the user_host data that the database table has.

I've read 5.4.3 The General Query Log in the manual and googled, but can't seem to find any hint of this.

Is there a format I can set that will include user_host in the file system log file? I must have missed something obvious; why wouldn't the output be exactly the same, file or table?
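For context, the destination(s) are controlled by log_output, and my setup is roughly the following (a sketch with example values; the path is not my server's actual one):

```sql
-- The general log can go to FILE, TABLE, or both:
SET GLOBAL log_output = 'FILE,TABLE';
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';  -- example path
```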

example of what I see in the file:

2018-03-01T15:41:45.450491Z 137300 Query SHOW COLLATION
2018-03-01T15:41:45.451280Z 137299 Query SET NAMES latin1
2018-03-01T15:41:45.451425Z 137299 Query SET character_set_results = NULL
2018-03-01T15:41:45.451554Z 137299 Query SET autocommit=1

PS

My workaround is

mysql --user etc... -e "select user_host,argument from general_log where user_host like 'billybob%'" > junk

which isn't as nice (IMO) as

grep billybob log_file

edit:

2018-03-01T15:41:45.450317Z 137300 Query /* mysql-connector-java-5.1.16 ( Revision: ${bzr.revision-id} ) */SELECT @@session.auto_increment_increment
asked Mar 1, 2018 at 16:01
  • What does the first line with 137300 have in it? Commented Mar 7, 2018 at 3:03
  • @RickJames I'm not sure I understand. That is the complete cut of those lines from the file. Commented Mar 9, 2018 at 21:32
  • I would expect something about connection #137300 being connected to show up before SHOW.... (But maybe I am wrong.) Commented Mar 9, 2018 at 22:33
  • I understand, I have added the line above the SHOW. I see the connector is out of rev, would that have anything to do with log output? Commented Mar 10, 2018 at 15:39

1 Answer


IDEA #1 : USE THE SLOW LOG !!!

The Slow Log ??? Yea, I know what I said: USE THE SLOW LOG !!!

How can you ???

With the slow log written to a text file, the header of each slow log entry has the user@host right on the first line of the entry. (Ignore the # Time: line, since one or more slow log entries can be saved within a single time window.)

For example, the post Mysql slow query log always include "# Time:" has this example slow log:

# Time: 150419 6:00:43
# User@Host: web[web] @ localhost []
# Query_time: 7.730519 Lock_time: 0.000070 Rows_sent: 167620 Rows_examined: 167620
SET timestamp=1429416043;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `messages`;
# Time: 150419 6:00:45
# User@Host: web[web] @ localhost []
# Query_time: 3.480173 Lock_time: 0.000101 Rows_sent: 0 Rows_examined: 0
SET timestamp=1429416045;
INSERT INTO user (lastupdated, fk_id, user_id) ..
# User@Host: web[web] @ localhost []
# Query_time: 3.388204 Lock_time: 0.000133 Rows_sent: 0 Rows_examined: 0
SET timestamp=1429416045;
INSERT INTO user (lastupdated, fk_id, user_id) ..

In your case, you could probably run

grep -A 3 billybob log_file

which shows, for each occurrence of billybob:

  • The line matching billybob
  • The Query Time
  • Actual timestamp
  • Line 1 of the actual SQL

In order to make the slow log behave like a general log you must run

mysql> SET GLOBAL long_query_time = 0.01;

Then, everything and its grandmother will be written to the slow log.

When you are done with your analysis, set the long_query_time back to its old value:

mysql> SET GLOBAL long_query_time = 10;

Don't forget to truncate the slow log after the analysis by doing

echo -n > log_file
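Putting IDEA #1 together, the whole temporary setup might look like this (a sketch; assumes you have the SUPER privilege, and the file path is an example, so adjust it to your server):

```sql
-- Make sure the slow log is on and writing to a file (not the TABLE destination)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL log_output = 'FILE';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path
SET GLOBAL long_query_time = 0.01;  -- temporarily catch nearly everything
-- ... wait for the 2-hour window, grep the file ...
SET GLOBAL long_query_time = 10;    -- restore the old value
```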

IDEA #2 : Poll the Processlist

The problem with using either the slow log or the general log is that an entry does not land in the log until the statement finishes. In other words, if a query takes 10 minutes to run, its entry will not appear in the log file until 10 minutes after it started.

Wouldn't it be nice to catch the bad queries in the act ???

You should use pt-query-digest. Even for queries that have not finished, it generates a histogram of the query patterns that have executed or are still in progress.
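If pt-query-digest isn't handy, a very rough shell approximation of the same idea is to poll the processlist and collapse queries that differ only in their literals. This is a sketch under assumptions: credentials in ~/.my.cnf, GNU sed, and fingerprint_histogram is a made-up helper name; pt-query-digest's real fingerprinting is far more thorough.

```shell
# Crude query-pattern histogram, in the spirit of pt-query-digest's fingerprinting
fingerprint_histogram() {
  # Mask quoted strings, then numbers, so queries that differ only in
  # literal values collapse to one pattern; then count and rank the patterns.
  sed -E "s/'[^']*'/?/g; s/[0-9]+/?/g" | sort | uniq -c | sort -rn
}

# Hypothetical polling loop: grab in-flight statements once a second.
# while sleep 1; do
#   mysql -N -e "SELECT info FROM information_schema.processlist WHERE info IS NOT NULL"
# done | fingerprint_histogram
```

Run it for a few minutes around the suspect 2-hour window and the most frequent patterns bubble to the top of the output.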

Rather than write up an example, please see my posts where I discuss using pt-query-digest

GIVE IT A TRY !!!

answered Mar 2, 2018 at 21:02
  • I've been waiting for other answers to come in, but it's amazingly quiet. Even on the MySQL dev forum there isn't any activity. I still don't understand why a logger would change its output data/format depending on the destination. But this is pretty inventive, and I wouldn't have thought of using the slow log in this fashion. I love the out-of-the-box thinking! Thank you! Commented Mar 7, 2018 at 15:26
