
MySQL Master-Master Replication & Load Balancing using HAProxy

Requirements
##############

Two servers running Ubuntu 16.04.
Static IP address 10.0.1.12 (dn1) configured on the first master server.
Static IP address 10.0.1.14 (dn2) configured on the second master server.
A non-root user with sudo privileges configured on both servers.

First, you will need to install the MySQL server and client packages on both dn1 and dn2. You can install them with the following command (run it on each server):

# sudo apt-get install mysql-server mysql-client

Next, you will need to make the following changes in the my.cnf file of the first server (dn1):
----------------------------------------------------------------------------------

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size   = 100M
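While editing this file, note that on Ubuntu 16.04 the stock mysqld.cnf binds MySQL to 127.0.0.1, which would block the replication traffic between the two servers (and the HAProxy health checks later). A minimal adjustment, assuming the 10.0.1.x private network is trusted, is to set bind-address on both dn1 and dn2:

bind-address = 0.0.0.0    # or the server's own private IP (10.0.1.12 on dn1, 10.0.1.14 on dn2)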

# service mysql restart

Next, you will need to make the following changes in the my.cnf file of the second server (dn2):
----------------------------------------------------------------------------------

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size   = 100M

# service mysql restart
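
After restarting MySQL on both servers, a quick sanity check confirms the settings took effect:

# mysql -u root -p -e "SHOW VARIABLES LIKE 'server_id'; SHOW MASTER STATUS;"

dn1 should report server_id 1 and dn2 server_id 2, and each should list a mysql-bin.* file, which confirms binary logging is enabled.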



On DN1 Server:
################

mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'replication1'@'10.0.1.14' IDENTIFIED BY 'password';

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      912 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Note down DN1's File (mysql-bin.000001) and Position (912) from the output above; DN2 will need them in the next section. To set DN2 as the master of this server, we instead need DN2's File and Position (mysql-bin.000001 and 605, taken from the 'show master status' output in the DN2 section below), along with the replication2 user that is created on DN2.

Run the following commands on DN1 to tell it that dn2 is its master:

mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_HOST = 'dn2.hostname.com', MASTER_USER = 'replication2', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 605;
mysql> start slave;



On DN2 Server:
#################


mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'replication2'@'10.0.1.12' IDENTIFIED BY 'password';

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      605 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Here we need DN1's File (mysql-bin.000001) and Position (912), noted from the 'show master status' output in the DN1 section above, to set DN1 as the master of this server, along with the replication1 user and password created on DN1. The output above (position 605) is what DN1 already used in the previous section.

Run the following commands on DN2 to tell it that dn1 is its master:

mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_HOST = 'dn1.hostname.com', MASTER_USER = 'replication1', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 912;
mysql> start slave;

Testing:
#######

mysql> SHOW SLAVE STATUS\G

Now if you create a database on DN1 it will be created on DN2 as well, and vice versa.
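
A minimal end-to-end check (replication_test is just a throwaway database name used for illustration):

On DN1:
mysql> CREATE DATABASE replication_test;

On DN2:
mysql> SHOW DATABASES LIKE 'replication_test';

The database should appear on DN2 almost immediately, and creating one on DN2 should likewise show up on DN1. In the SHOW SLAVE STATUS output on each server, Slave_IO_Running and Slave_SQL_Running should both be Yes.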

How to Exclude a query from replication
#################################

SET sql_log_bin = 0;
drop index index_1 on recipes;
SET sql_log_bin = 1;

The DROP statement above is executed only on the server where it is run; with sql_log_bin disabled it is not written to the binary log, so it is not replicated to the other server. Note that sql_log_bin is a session variable, so only statements run in the same session while it is set to 0 are excluded.

https://dev.mysql.com/doc/refman/5.7/en/set-sql-log-bin.html


Errors & Fixes:
##############

[ERROR] Slave SQL for channel 'group_replication_applier': Slave failed to initialize relay log info structure from the repository, Error_code: 1872

Fix:
------
USE mysql;
TRUNCATE TABLE slave_relay_log_info;

Then restart MySQL:

service mysql restart

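If the error comes back after the restart, a commonly suggested alternative is RESET SLAVE, which discards the broken relay log info; the slave then has to be re-pointed with the same CHANGE MASTER TO statement used during setup:

mysql> STOP SLAVE;
mysql> RESET SLAVE;
mysql> CHANGE MASTER TO ... ;   -- re-use the CHANGE MASTER TO statement from the setup above
mysql> START SLAVE;
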
Error:
-------
MAR 19 09:56:08 http2 mysqld[1380]: 080529 9:56:08 [ERROR] Slave: Error 'Table 'mydb.taggregate_temp_1212047760' doesn't exist' on query. Default database: 'mydb'. Query: 'UPDATE thr

Fix:
----

mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;

mysql> START SLAVE;
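
After skipping the statement, it is worth confirming that replication resumed cleanly:

mysql> SHOW SLAVE STATUS\G

Slave_SQL_Running should be Yes again and Last_SQL_Error should be empty. Keep in mind that SQL_SLAVE_SKIP_COUNTER = 1 only skips a single event, so a recurring error needs a proper fix on the affected table.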


HAProxy load balancing MySQL connections:
##########################################

Droplet 1 - Load Balancer
Hostname: haproxy.hostname.com
OS: Ubuntu
Private IP: 10.0.1.9

Droplet 2 - Dn1
Hostname: mysqldn1
OS: Ubuntu 16.04
Private IP: 10.0.1.12

Droplet 3 - Dn2
Hostname: mysqldn2
OS: Ubuntu 16.04
Private IP: 10.0.1.14

First, log in to Dn1 or Dn2 and grant the permissions below:
------------------------------------------------------
Since replication is already running between the two servers, you only have to do this on one of them; the new users will replicate to the other.

Make sure the HAProxy machine has a resolvable FQDN as its hostname, or add matching entries to the /etc/hosts files of Dn1 and Dn2, so that the host 'haproxy.hostname.com' used in the grants below resolves correctly.
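
If the HAProxy machine does not already resolve by name, a hosts-file sketch for Dn1 and Dn2 could look like this (haproxy.hostname.com and 10.0.1.9 are the names used elsewhere in this post):

# /etc/hosts on dn1 and dn2
10.0.1.9    haproxy.hostname.com    haproxy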

mysql> use mysql;
mysql> INSERT INTO user (Host,User,ssl_cipher,x509_issuer,x509_subject) VALUES ('haproxy.hostname.com', 'haproxy_check', '', '', '');
mysql> FLUSH PRIVILEGES;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'haproxy.hostname.com' IDENTIFIED BY 'yourstrongpassword' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
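
On MySQL 5.7 (the version shipped with Ubuntu 16.04), inserting rows directly into mysql.user is discouraged and may be rejected; if that happens, a rough equivalent using the supported CREATE USER syntax, with the same account names as above, would be:

mysql> CREATE USER 'haproxy_check'@'haproxy.hostname.com';
mysql> CREATE USER 'haproxy_root'@'haproxy.hostname.com' IDENTIFIED BY 'yourstrongpassword';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'haproxy.hostname.com' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;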



On the haproxy server (10.0.1.9):
--------------------------

[root@haproxy-lb ~]# vi /etc/haproxy/haproxy.cfg

listen mysql-cluster
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server mysqldn1 10.0.1.12:3306 check
    server mysqldn2 10.0.1.14:3306 check
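
Before reloading, the configuration file can be syntax-checked with HAProxy's -c flag:

[root@haproxy-lb ~]# haproxy -c -f /etc/haproxy/haproxy.cfg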
    
[root@haproxy-lb ~]# sudo service haproxy reload
    
# Installing the MySQL client on the haproxy server

[root@haproxy-lb ~]# apt-get install mysql-client

# Testing the connection through HAProxy from the haproxy server itself:

[root@haproxy-lb ~]# mysql -h 10.0.1.9 -u haproxy_root -p -e "SHOW DATABASES"

Testing Load Balancing and Failover
##################################

ubuntu@namenode:~$ mysql -h 10.0.1.9 -u haproxy_root -p -e "show variables like 'server_id'";
Enter password: 
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
ubuntu@namenode:~$ mysql -h 10.0.1.9 -u haproxy_root -p -e "show variables like 'server_id'";
Enter password: 
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
ubuntu@namenode:~$ mysql -h 10.0.1.9 -u haproxy_root -p -e "show variables like 'server_id'";
Enter password: 
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
ubuntu@namenode:~$ mysql -h 10.0.1.9 -u haproxy_root -p -e "show variables like 'server_id'";
Enter password: 
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
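
To test failover as well, stop MySQL on one of the backends and repeat the query; HAProxy's health check should take that server out of rotation, so every connection lands on the remaining one (a sketch, assuming the same hosts as above):

# On dn1:
sudo service mysql stop

# From the client, run a few times; server_id should now always be 2:
mysql -h 10.0.1.9 -u haproxy_root -p -e "show variables like 'server_id'"

# Bring dn1 back when done:
sudo service mysql start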

Ref:
####


https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-to-set-up-mysql-load-balancing--3
