
I have an ASP.NET application that uses MySQL. On the testing server, which runs MySQL, the application consumes about 40% of the CPU while running. When I run the same code on the production server, which also runs MySQL, it consumes 99% of the CPU. Which settings in the MySQL config affect CPU consumption? I am running the same code, yet the results are completely different.

Here is the config of the high-CPU-usage MySQL server:

[mysqld]
interactive_timeout=28800
wait_timeout=28800
port=3306
datadir=C:/ProgramData/MySQL/MySQL Server 5.7/Data
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=1
general_log_file="EC2AMAZ-SSAVRCC.log"
slow-query-log=1
slow_query_log_file="EC2AMAZ-SSAVRCC-slow.log"
long_query_time=10
log-error="EC2AMAZ-SSAVRCC.err"
server-id=1
secure-file-priv="C:/ProgramData/MySQL/MySQL Server 5.7/Uploads"
max_connections=151
query_cache_size=0
table_open_cache=2000
tmp_table_size=2G
thread_cache_size=10
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=4G
key_buffer_size = 10M
read_buffer_size=64K
read_rnd_buffer_size=256K
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size = 8M
innodb_buffer_pool_size = 6144M
innodb_log_file_size=48M
innodb_thread_concurrency=33
innodb_autoextend_increment=64
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=300
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=80
flush_time=0
join_buffer_size=256K
max_allowed_packet=10M
max_connect_errors=100
open_files_limit=4161
query_cache_type=0
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_master_info=10000
sync_relay_log=10000
sync_relay_log_info=10000

Here is the config of the low-CPU-usage MySQL server:

[mysqld]
port=3306
datadir=C:/ProgramData/MySQL/MySQL Server 5.7/Data
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=0
general_log_file="c:\MySQL_LOG\WIN-9BPM7TSIO9C.log"
slow-query-log=0
slow_query_log_file="WIN-9BPM7TSIO9C-slow.log"
long_query_time=1
log-error="WIN-9BPM7TSIO9C.err"
server-id=1
secure-file-priv="C:/ProgramData/MySQL/MySQL Server 5.7/Uploads"
max_connections=151
table_open_cache=2000
tmp_table_size=16M
thread_cache_size=10
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=8M
key_buffer_size=8M
read_buffer_size=0
read_rnd_buffer_size=0
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=1M
innodb_buffer_pool_size=8M
innodb_log_file_size=48M
innodb_thread_concurrency=17
innodb_autoextend_increment=64
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=300
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=80
flush_time=0
join_buffer_size=256K
max_allowed_packet=100M
max_connect_errors=100
open_files_limit=4161
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_master_info=10000
sync_relay_log=10000
sync_relay_log_info=10000
net_read_timeout=999999
net_write_timeout=999999

asked Sep 27, 2019 at 15:43

2 Answers


There may be various causes for this difference, but the first thing that comes to mind is that the second configuration is tuned to use far less memory (e.g. innodb_buffer_pool_size is 8M versus 6144M, and tmp_table_size is 16M versus 2G). It may therefore need more disk I/O to process the same amount of data. Since the CPU has to wait for each disk operation to complete before it can proceed to the next chunk of data, it ends up underutilized when there are lots of them.
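One rough way to check whether the smaller buffer pool is forcing reads from disk is to compare InnoDB's buffer pool counters on each server (a sketch; these status variable names are standard in MySQL 5.7):

```sql
-- Innodb_buffer_pool_reads counts logical reads that missed the buffer
-- pool and had to hit disk; Innodb_buffer_pool_read_requests counts all
-- logical reads. A high reads/read_requests ratio suggests the buffer
-- pool is too small for the working set.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
```

Run the same workload on both servers first, then compare the ratios; the low-memory server should show a markedly higher miss ratio if this conjecture is right.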

You may try to confirm this conjecture by setting up disk performance monitoring, roughly as described in https://blogs.technet.microsoft.com/askcore/2012/03/16/windows-performance-monitor-disk-counters-explained/ .

I would also say that 100% utilization of one CPU core is normal for bulk operations in MySQL. The engine does little intra-query parallelization, so it generally cannot use more than one core per session. Lower utilization during bulk operations (e.g. a mass insert) means there is a non-CPU bottleneck that needs addressing, because it makes the same operation take longer. You can check whether that is the case by comparing how long the same test script takes to complete in the two environments.
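As a simple way to run that comparison, you could time an identical bulk operation on both servers; the table name and row count below are just placeholders, and the row source (information_schema.columns) is merely a convenient way to generate rows without external data:

```sql
-- Hypothetical benchmark: run these same statements on both servers and
-- compare the execution times reported by the mysql client.
CREATE TABLE cpu_bench (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB;

-- Bulk-insert some filler rows (the actual count depends on how many
-- rows information_schema.columns has on your server).
INSERT INTO cpu_bench (payload)
SELECT REPEAT('x', 200)
FROM information_schema.columns
LIMIT 5000;

DROP TABLE cpu_bench;
```

If the low-CPU server takes noticeably longer for the same insert, that points to a non-CPU bottleneck such as disk I/O rather than a CPU problem.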

answered Sep 28, 2019 at 15:20

For starters, the general log is turned on on the 'slow' production machine, and logging every statement adds noticeable overhead. Compare your test and production ini files and you should spot many 'capacity' variables that differ significantly and will cause production to consume more resources.
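If the general log turns out to be the culprit, it can be switched off at runtime without restarting the server:

```sql
-- Turn the general query log off for the running instance. This change
-- does not persist across restarts, so also set general-log=0 in my.ini.
SET GLOBAL general_log = 'OFF';
```

Then re-test the application's CPU usage before and after to see how much of the difference the log accounts for.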

answered Sep 28, 2019 at 21:48
