【DB宝42】MySQL High Availability with MHA + ProxySQL: Read/Write Splitting and Load Balancing
1. The MHA + ProxySQL Architecture
I previously published an article on MHA that covered the relevant background and functional testing; the link is 【DB宝19】在Docker中使用MySQL高可用之MHA (MySQL high availability with MHA in Docker). Today's article shares how to combine MHA with the ProxySQL middleware to implement read/write splitting and load balancing.
As we know, MHA (Master High Availability Manager and tools for MySQL) is currently a relatively mature solution for MySQL high availability: a suite of software that handles failover and master promotion in a MySQL HA environment. Its architecture requires a replication cluster of at least three database servers, one master and two slaves, i.e., one server acts as the master, one as a standby master, and one as a slave. However, without any database middleware in front, all application traffic flows to the master, putting it under heavy load, while the two slaves carry no workload beyond their own IO and SQL threads, which is a serious waste of resources. We can therefore combine MHA with ProxySQL to achieve read/write splitting and load balancing: all application traffic passes through ProxySQL and is routed to different MySQL servers, so writes go to the master while reads are load-balanced across the two slaves.
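As a preview of how ProxySQL performs this routing: backend servers are grouped into hostgroups, and query rules send matching statements to a chosen hostgroup. The following is only a minimal sketch run against ProxySQL's admin interface; the hostgroup IDs (10 for the writer, 20 for the readers), the admin credentials admin/admin, and the admin port 6032 are illustrative assumptions and not necessarily the values used in the actual setup later in this series.

```
# Minimal sketch of ProxySQL read/write splitting (illustrative values only)
mysql -uadmin -padmin -h127.0.0.1 -P6032 <<'EOF'
-- hostgroup 10 = writer (current master), hostgroup 20 = readers (the two slaves)
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (10, '192.168.68.131', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '192.168.68.132', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '192.168.68.133', 3306);

-- route SELECT ... FOR UPDATE to the writer, all other SELECTs to the readers;
-- statements matching no rule go to the ProxySQL user's default hostgroup (the writer)
INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
EOF
```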
The MHA + ProxySQL architecture is shown in the figure below:
2. Quickly Setting Up the MHA Environment
2.1 Download the MHA Images
- The author 小麦苗's Docker Hub page: https://hub.docker.com/u/lhrbest
```
-- Download the images
docker pull registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-master1-ip131
docker pull registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave1-ip132
docker pull registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave2-ip133
docker pull registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-monitor-ip134

-- Re-tag the images with shorter names
docker tag registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-master1-ip131 lhrbest/mha-lhr-master1-ip131
docker tag registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave1-ip132 lhrbest/mha-lhr-slave1-ip132
docker tag registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave2-ip133 lhrbest/mha-lhr-slave2-ip133
docker tag registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-monitor-ip134 lhrbest/mha-lhr-monitor-ip134
```
There are four images in total: three MHA Nodes and one MHA Manager. The compressed images add up to roughly 3 GB. After the download completes:
```
[root@lhrdocker ~]# docker images | grep mha
registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-monitor-ip134   latest   7d29597dc997   14 hours ago   1.53GB
registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave2-ip133    latest   d3717794e93a   40 hours ago   4.56GB
registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-slave1-ip132    latest   f62ee813e487   40 hours ago   4.56GB
registry.cn-hangzhou.aliyuncs.com/lhrbest/mha-lhr-master1-ip131   latest   ae7be48d83dc   40 hours ago   4.56GB
```
2.2 Edit the yml File and Create the MHA Containers
Edit the yml file and use docker-compose to create the MHA containers. Note that the docker-compose.yml format is strict about spaces, indentation, and alignment:
```
# Create the directory that will hold the yml file
mkdir -p /root/mha

# Edit the file /root/mha/docker-compose.yml
cat > /root/mha/docker-compose.yml <<"EOF"
version: '3.8'

services:
  MHA-LHR-Master1-ip131:
    container_name: "MHA-LHR-Master1-ip131"
    restart: "always"
    hostname: MHA-LHR-Master1-ip131
    privileged: true
    image: lhrbest/mha-lhr-master1-ip131
    ports:
      - "33131:3306"
      - "2201:22"
    networks:
      mhalhr:
        ipv4_address: 192.168.68.131

  MHA-LHR-Slave1-ip132:
    container_name: "MHA-LHR-Slave1-ip132"
    restart: "always"
    hostname: MHA-LHR-Slave1-ip132
    privileged: true
    image: lhrbest/mha-lhr-slave1-ip132
    ports:
      - "33132:3306"
      - "2202:22"
    networks:
      mhalhr:
        ipv4_address: 192.168.68.132

  MHA-LHR-Slave2-ip133:
    container_name: "MHA-LHR-Slave2-ip133"
    restart: "always"
    hostname: MHA-LHR-Slave2-ip133
    privileged: true
    image: lhrbest/mha-lhr-slave2-ip133
    ports:
      - "33133:3306"
      - "2203:22"
    networks:
      mhalhr:
        ipv4_address: 192.168.68.133

  MHA-LHR-Monitor-ip134:
    container_name: "MHA-LHR-Monitor-ip134"
    restart: "always"
    hostname: MHA-LHR-Monitor-ip134
    privileged: true
    image: lhrbest/mha-lhr-monitor-ip134
    ports:
      - "33134:3306"
      - "2204:22"
    networks:
      mhalhr:
        ipv4_address: 192.168.68.134

networks:
  mhalhr:
    name: mhalhr
    ipam:
      config:
        - subnet: "192.168.68.0/16"
EOF
```
2.3 Install docker-compose (skip if already installed)
Official Docker Compose installation documentation: https://docs.docker.com/compose/
Official docker-compose.yml file reference: https://docs.docker.com/compose/compose-file/
```
[root@lhrdocker ~]# curl --insecure -L https://github.com/docker/compose/releases/download/1.28.4/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   638  100   638    0     0    530      0  0:00:01  0:00:01 --:--:--   531
100 11.6M  100 11.6M    0     0  1994k      0  0:00:06  0:00:06 --:--:-- 2943k
[root@lhrdocker ~]# chmod +x /usr/local/bin/docker-compose
[root@lhrdocker ~]# docker-compose -v
docker-compose version 1.28.4, build cabd5cfb
```
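Before creating the containers, you can optionally sanity-check the compose file with `docker-compose config`, which parses the YAML, validates it, and prints the resolved configuration. A minimal check, not part of the original steps:

```
# Validate /root/mha/docker-compose.yml; prints the resolved configuration
# on success, or an error message if the YAML is malformed
cd /root/mha/
docker-compose config
```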
2.4 Create the MHA Containers
```
# Start the containers for the MHA environment; be sure to cd into /root/mha/ first
# Optional cleanup of any existing containers:
# docker rm -f MHA-LHR-Master1-ip131 MHA-LHR-Slave1-ip132 MHA-LHR-Slave2-ip133 MHA-LHR-Monitor-ip134
[root@lhrdocker ~]# cd /root/mha/
[root@lhrdocker mha]#
[root@lhrdocker mha]# docker-compose up -d
Creating network "mhalhr" with the default driver
Creating MHA-LHR-Monitor-ip134  ... done
Creating MHA-LHR-Slave2-ip133   ... done
Creating MHA-LHR-Master1-ip131  ... done
Creating MHA-LHR-Slave1-ip132   ... done

[root@docker35 ~]# docker ps | grep "mha\|COMMAND"
CONTAINER ID   IMAGE                           COMMAND            CREATED         STATUS         PORTS                                                             NAMES
2978361198b7   lhrbest/mha-lhr-master1-ip131   "/usr/sbin/init"   2 minutes ago   Up 2 minutes   16500-16599/tcp, 0.0.0.0:2201->22/tcp, 0.0.0.0:33131->3306/tcp    MHA-LHR-Master1-ip131
a64e2e86589c   lhrbest/mha-lhr-slave1-ip132    "/usr/sbin/init"   2 minutes ago   Up 2 minutes   16500-16599/tcp, 0.0.0.0:2202->22/tcp, 0.0.0.0:33132->3306/tcp    MHA-LHR-Slave1-ip132
d7d6ce34800b   lhrbest/mha-lhr-monitor-ip134   "/usr/sbin/init"   2 minutes ago   Up 2 minutes   0.0.0.0:2204->22/tcp, 0.0.0.0:33134->3306/tcp                     MHA-LHR-Monitor-ip134
dacd22edb2f8   lhrbest/mha-lhr-slave2-ip133    "/usr/sbin/init"   2 minutes ago   Up 2 minutes   16500-16599/tcp, 0.0.0.0:2203->22/tcp, 0.0.0.0:33133->3306/tcp    MHA-LHR-Slave2-ip133
```
2.5 Add a VIP to the Master (131)
```
# Enter the master container (131)
docker exec -it MHA-LHR-Master1-ip131 bash

# Add the VIP 192.168.68.135
/sbin/ifconfig eth0:1 192.168.68.135/24
ifconfig

# To remove the VIP later
ip addr del 192.168.68.135/24 dev eth0
```
After the VIP has been added:
```
[root@MHA-LHR-Master1-ip131 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.131  netmask 255.255.0.0  broadcast 192.168.255.255
        ether 02:42:c0:a8:44:83  txqueuelen 0  (Ethernet)
        RX packets 220  bytes 15883 (15.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 189  bytes 17524 (17.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.135  netmask 255.255.255.0  broadcast 192.168.68.255
        ether 02:42:c0:a8:44:83  txqueuelen 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 5  bytes 400 (400.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5  bytes 400 (400.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# The manager node can now ping the VIP
[root@MHA-LHR-Monitor-ip134 /]# ping 192.168.68.135
PING 192.168.68.135 (192.168.68.135) 56(84) bytes of data.
64 bytes from 192.168.68.135: icmp_seq=1 ttl=64 time=0.172 ms
64 bytes from 192.168.68.135: icmp_seq=2 ttl=64 time=0.076 ms
^C
--- 192.168.68.135 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.076/0.124/0.172/0.048 ms
```
At this point you can verify that master-slave replication is working correctly; if it is, you can go straight on to testing MHA (see the sketch after the output below).
```
mysql -uroot -plhr -h192.168.68.131 -P3306
show slave hosts;

mysql> show slave hosts;
+-----------+----------------+------+-----------+--------------------------------------+
| Server_id | Host           | Port | Master_id | Slave_UUID                           |
+-----------+----------------+------+-----------+--------------------------------------+
| 573306133 | 192.168.68.133 | 3306 | 573306131 | d391ce7e-aec3-11ea-94cd-0242c0a84485 |
| 573306132 | 192.168.68.132 | 3306 | 573306131 | d24a77d1-aec3-11ea-9399-0242c0a84484 |
+-----------+----------------+------+-----------+--------------------------------------+
2 rows in set (0.00 sec)
```
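To check replication on each slave and run MHA's own consistency checks, something like the following can be used; the MHA configuration file path /etc/mha/mha.cnf is a placeholder for illustration, so substitute the config file actually shipped inside the monitor image.

```
# Check the replication threads on each slave
mysql -uroot -plhr -h192.168.68.132 -P3306 -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"
mysql -uroot -plhr -h192.168.68.133 -P3306 -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"

# On the MHA manager node (134): verify SSH connectivity and replication health
# /etc/mha/mha.cnf is a placeholder -- use the path configured in the monitor image
masterha_check_ssh  --conf=/etc/mha/mha.cnf
masterha_check_repl --conf=/etc/mha/mha.cnf
```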