
What to do if a query is logged in the MySQL slow query log



On a MySQL server there are usually many queries worth optimizing, and a lot of the server load comes from them. I’ll present the most common optimization issues and how to identify them.

First you’ll have to check the MySQL server settings. You can check them manually with the following commands in the mysql command line:

mysql> show variables;

or

mysql> show variables like '%cache%';

and

mysql> show status;

You can check the counters and increase or decrease the corresponding settings according to their usage and limits.
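For example, the effectiveness of the query cache can be read from two status counters. A minimal sketch, assuming a MySQL version that still ships the query cache:

mysql> show status like 'Qcache_hits';

mysql> show status like 'Com_select';

# Rough hit ratio: Qcache_hits / (Qcache_hits + Com_select).

# A low ratio suggests the cache is too small or the workload is not cacheable.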

An easier way is to use a script that checks MySQL settings for you, such as the MySQL performance tuning primer script (tuning-primer.sh).
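Assuming the script is still available at its historical download location (adjust the URL if it has moved):

wget http://day32.com/MySQL/tuning-primer.sh
chmod +x tuning-primer.sh
./tuning-primer.sh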

The first and most important optimization is to enable the query cache and set query_cache_size, which lowers disk I/O.
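A minimal my.cnf sketch, assuming a MySQL version that still includes the query cache (it was removed in MySQL 8.0); the 64M size is only illustrative and should be tuned to your workload:

[mysqld]
query_cache_type = 1
query_cache_size = 64M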

After that you will have to check the running queries. For that, first enable the slow query log by adding:

log-slow-queries= /var/lib/mysql/mysql-slow.log

long-query-time=3

in my.cnf, then touch /var/lib/mysql/mysql-slow.log and set its owner to the mysql user and group. You can optionally add "log-queries-not-using-indexes" to also log queries that do full scans.
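As shell commands, the whole setup looks like this (the restart command varies by distribution):

touch /var/lib/mysql/mysql-slow.log
chown mysql:mysql /var/lib/mysql/mysql-slow.log
service mysql restart   # or: /etc/init.d/mysqld restart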

Then open /var/lib/mysql/mysql-slow.log and check the slow queries one by one.

Not every query in the slow query log is necessarily a bad one. Look for queries where some of the following criteria are met:

A) “Rows_examined” is more than 2000

B) “Rows_examined” is less than 2000 but the query is being run 20 times a second (see the mysqldumpslow sketch below).

C) “Rows_examined” is more than three times “Rows_sent”

(Note that these are rough criteria. Your mileage may vary depending upon your situation.)
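To see how often each statement recurs (criterion B), the mysqldumpslow utility that ships with MySQL can aggregate the log. For example, to list the ten most frequently logged queries:

mysqldumpslow -s c -t 10 /var/lib/mysql/mysql-slow.log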

Focus on the two or three worst queries first. Once you’ve found a few, use the EXPLAIN statement to find a better way to restructure each query; the MySQL documentation has a full explanation and walkthrough of EXPLAIN.

Now let’s grep the log file:

grep Rows_examined /var/lib/mysql/mysql-slow.log.old | sort -g -k9 -r | head -5

a sample result would be:

# Query_time: 19 Lock_time: 0 Rows_sent: 31 Rows_examined: 6424367

# Query_time: 58 Lock_time: 0 Rows_sent: 3886219 Rows_examined: 3886219

# Query_time: 47 Lock_time: 0 Rows_sent: 3886219 Rows_examined: 3886219

# Query_time: 40 Lock_time: 0 Rows_sent: 3886219 Rows_examined: 3886219

# Query_time: 39 Lock_time: 0 Rows_sent: 3886219 Rows_examined: 3886219

# Search for the worst offender; here’s one way to do that:

grep -A 2 -B 2 3886219 /var/lib/mysql/mysql-slow.log

# Time: 030611 18:49:05

# User@Host: dbusername[dbusername] @ composer.com [166.233.115.222]

# Query_time: 1 Lock_time: 0 Rows_sent: 3886219 Rows_examined: 3886219

SELECT msgs.*, username AS sender_username FROM msgs INNER JOIN users ON (users.id = msgs.sender_id) WHERE user_id=939 AND msgs.status != 1 AND del != 2 ORDER BY date DESC;

# Open up a mysql shell to fix the problem:

[localhost]$ mysql -h mysql.exampledomain.com -u dbusername -pYOURPASSWORDHERENOSPACES dbname

# Use the EXPLAIN statement to show how bad the query is.

# Notice that all 3886219 rows of the msgs table are examined (a full table scan).

mysql> EXPLAIN SELECT msgs.*, username AS sender_username FROM msgs INNER JOIN users ON (users.id = msgs.sender_id) WHERE user_id=939 AND msgs.status != 1 AND del != 2 ORDER BY date DESC;

+-------+--------+---------------+---------+---------+----------------+---------+----------------------------+
| table | type   | possible_keys | key     | key_len | ref            | rows    | Extra                      |
+-------+--------+---------------+---------+---------+----------------+---------+----------------------------+
| msgs  | ALL    | NULL          | NULL    | NULL    | NULL           | 3886219 | where used; Using filesort |
| users | eq_ref | PRIMARY       | PRIMARY | 8       | msgs.sender_id | 1       |                            |
+-------+--------+---------------+---------+---------+----------------+---------+----------------------------+

2 rows in set (0.00 sec)

# How long does the query take before the fix? About 1 second.

# (Slightly modified for demonstration purposes, but with the same result.)

mysql> SELECT count(*) FROM msgs INNER JOIN users ON (users.id = msgs.sender_id) WHERE user_id=939 AND msgs.status != 1 AND del != 2 ORDER BY date DESC;

+----------+
| count(*) |
+----------+
|      631 |
+----------+

1 row in set (1.03 sec)

# You want to index the columns used in your WHERE and JOIN clauses,

# specifically those with lots of uniqueness, or “cardinality”.

# user_id above is a really good candidate, because there are lots of unique

# values for user_id. The same goes for users.id and msgs.sender_id.

# msgs.status won’t help much (but won’t hurt) because its values are mostly

# 0 and 1. The same goes for “del”.
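# A quick way to gauge a column’s cardinality before indexing it (illustrative

# query, using the same msgs table):

mysql> select count(distinct user_id) as distinct_values, count(*) as total_rows from msgs;

# The closer distinct_values is to total_rows, the more selective an index on

# that column will be.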

# Add an index on the msgs.user_id and msgs.sender_id columns.

# users.id is already indexed (it is the primary key).

# Note: for string columns, try a short prefix key (e.g. length 10) first to keep

# the index small; as the errors below show, prefix keys don’t work on numeric columns.

mysql> create index user_id_index on msgs(user_id(10));

ERROR 1089: Incorrect sub part key. The used key part isn't a string, the used length is longer than the key part or the table handler doesn't support unique sub keys

mysql> create index user_id_index on msgs(user_id);

Query OK, 42857 rows affected (1.59 sec)

Records: 42857 Duplicates: 0 Warnings: 0

mysql> create index sender_id_index on msgs(sender_id(10));

ERROR 1089: Incorrect sub part key. The used key part isn't a string, the used length is longer than the key part or the table handler doesn't support unique sub keys

mysql> create index sender_id_index on msgs(sender_id);

Query OK, 42858 rows affected (1.16 sec)

Records: 42858 Duplicates: 0 Warnings: 0
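If the optimizer statistics look stale after adding indexes, you can refresh them before re-checking the query plan; a minimal sketch:

mysql> analyze table msgs;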

# Check the indices, see if they look good.

# They do look good. Notice the high cardinality (uniqueness) of all three keys.

mysql> show index from msgs;

+-------+------------+-----------------+--------------+-------------+-----------+-------------+----------+--------+---------+
| Table | Non_unique | Key_name        | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
+-------+------------+-----------------+--------------+-------------+-----------+-------------+----------+--------+---------+
| msgs  |          0 | PRIMARY         |            1 | id          | A         |       42855 |     NULL | NULL   |         |
| msgs  |          1 | user_id_index   |            1 | user_id     | A         |        1224 |     NULL | NULL   |         |
| msgs  |          1 | sender_id_index |            1 | sender_id   | A         |        1071 |     NULL | NULL   |         |
+-------+------------+-----------------+--------------+-------------+-----------+-------------+----------+--------+---------+

3 rows in set (0.00 sec)

# Very good.

# Now, check to see if your index actually improved anything.

# First, check with the EXPLAIN statement. Much better!

mysql> EXPLAIN SELECT msgs.*, username AS sender_username FROM msgs INNER JOIN users ON (users.id = msgs.sender_id) WHERE user_id=939 AND msgs.status != 1 AND del != 2 ORDER BY date DESC;

+-------+--------+-------------------------------+---------------+---------+----------------+------+----------------------------+
| table | type   | possible_keys                 | key           | key_len | ref            | rows | Extra                      |
+-------+--------+-------------------------------+---------------+---------+----------------+------+----------------------------+
| msgs  | ref    | user_id_index,sender_id_index | user_id_index | 8       | const          |  635 | where used; Using filesort |
| users | eq_ref | PRIMARY                       | PRIMARY       | 8       | msgs.sender_id |    1 |                            |
+-------+--------+-------------------------------+---------------+---------+----------------+------+----------------------------+

2 rows in set (0.00 sec)


# Now check the time it takes the query to complete.

# Only 0.01 seconds to complete. Much faster.

mysql> SELECT count(*) FROM msgs INNER JOIN users ON (users.id = msgs.sender_id) WHERE user_id=939 AND msgs.status != 1 AND del != 2 ORDER BY date DESC;

+----------+
| count(*) |
+----------+
|      631 |
+----------+

1 row in set (0.01 sec)

# Now keep watching the slow query log to find more tables that should be indexed:

tail -f /var/lib/mysql/mysql-slow.log
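If you rotate the slow log (the mysql-slow.log.old file used in the grep above suggests this), recreate it with the right owner and tell MySQL to reopen its log files; a minimal sketch:

mv /var/lib/mysql/mysql-slow.log /var/lib/mysql/mysql-slow.log.old
touch /var/lib/mysql/mysql-slow.log
chown mysql:mysql /var/lib/mysql/mysql-slow.log
mysqladmin -u root -p flush-logs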
