Installing a TiDB Cluster Across Multiple Hosts
Related posts:
- [DB Bao 54] Introduction to TiDB, a NewSQL Database: https://www.xmmup.com/dbbao54newsqlshujukuzhitidbjianjie.html
- [DB Bao 57] Quickly Deploying a TiDB Cluster Environment with Docker Compose: https://www.xmmup.com/dbbao57shiyongdocker-composekuaisubushutidbjiqunhuanjing.html
- Quickly Deploying a TiDB Sandbox Environment with TiUP (whole cluster on a single host, TiUP Playground mode): https://www.xmmup.com/shiyongtiupkuaisubushutidbshangshouhuanjing.html
- Installing a Cluster on a Single Host (without TiUP Playground): https://www.xmmup.com/zaitongyigezhujishanganzhuangjiqunfeitiup-playgroundfangshi.html
Reference: https://docs.pingcap.com/zh/tidb/stable/hardware-and-software-requirements
Environment Planning
The detailed layout is as follows:
Hostname | IP | Ports | Host-mapped ports | Role |
---|---|---|---|---|
lhrpd1 | 172.16.6.11 | 2379/2380 | 22311 | PD1 |
lhrpd2 | 172.16.6.12 | 2379/2380 | 22312 | PD2 |
lhrpd3 | 172.16.6.13 | 2379/2380 | 22313 | PD3 |
lhrtikv1 | 172.16.6.14 | 20160/20180 | | TiKV1 |
lhrtikv2 | 172.16.6.15 | 20160/20180 | | TiKV2 |
lhrtikv3 | 172.16.6.16 | 20160/20180 | | TiKV3 |
lhrtidb1 | 172.16.6.17 | 4000/10080 | 24000 | TiDB1 |
lhrtidb2 | 172.16.6.18 | 4000/10080 | 24001 | TiDB2 |
lhrtiflash1 | 172.16.6.21 | 9000/8123/3930/20170/20292/8234 | | TiFlash1 |
lhrtiflash2 | 172.16.6.22 | 9000/8123/3930/20170/20292/8234 | | TiFlash2 |
lhrticdc1 | 172.16.6.23 | 8300 | | TiCDC1 |
lhrticdc2 | 172.16.6.24 | 8300 | | TiCDC2 |
lhrtidbmonitor | 172.16.6.25 | 9090,3000,9093/9094 | 39090,33000,3399,3400 | Prometheus + Grafana + Alertmanager + control machine (TiUP) + HAProxy |
Provisioning the Containers
```
docker network create --subnet=172.16.6.0/16 lhrtidb-network
docker network inspect lhrtidb-network

docker rm -f lhrpd1 lhrpd2 lhrpd3 lhrtidb1 lhrtidb2 lhrtikv1 lhrtikv2 lhrtikv3 lhrtiflash1 lhrtiflash2 lhrticdc1 lhrticdc2 lhrtidbmonitor

# PD nodes
docker run -d --name lhrpd1 -h lhrpd1 \
--net=lhrtidb-network --ip 172.16.6.11 \
-p 22311:2379 -p 23389:3389 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrpd2 -h lhrpd2 \
--net=lhrtidb-network --ip 172.16.6.12 \
-p 22312:2379 -p 23390:3389 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrpd3 -h lhrpd3 \
--net=lhrtidb-network --ip 172.16.6.13 \
-p 22313:2379 -p 23391:3389 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# TiKV nodes
docker run -d --name lhrtikv1 -h lhrtikv1 \
--net=lhrtidb-network --ip 172.16.6.14 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrtikv2 -h lhrtikv2 \
--net=lhrtidb-network --ip 172.16.6.15 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrtikv3 -h lhrtikv3 \
--net=lhrtidb-network --ip 172.16.6.16 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# TiDB nodes
docker run -d --name lhrtidb1 -h lhrtidb1 \
--net=lhrtidb-network --ip 172.16.6.17 \
-p 24000:4000 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrtidb2 -h lhrtidb2 \
--net=lhrtidb-network --ip 172.16.6.18 \
-p 24001:4000 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# TiFlash nodes
docker run -d --name lhrtiflash1 -h lhrtiflash1 \
--net=lhrtidb-network --ip 172.16.6.21 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrtiflash2 -h lhrtiflash2 \
--net=lhrtidb-network --ip 172.16.6.22 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# TiCDC nodes
docker run -d --name lhrticdc1 -h lhrticdc1 \
--net=lhrtidb-network --ip 172.16.6.23 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

docker run -d --name lhrticdc2 -h lhrticdc2 \
--net=lhrtidb-network --ip 172.16.6.24 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# Control machine (TiUP + monitoring + HAProxy)
docker run -d --name lhrtidbmonitor -h lhrtidbmonitor \
--net=lhrtidb-network --ip 172.16.6.25 \
-p 3399-3400:3399-3400 -p 39090:9090 -p 33000:3000 -p 33389:3389 \
-v /sys/fs/cgroup:/sys/fs/cgroup \
--privileged=true lhrbest/lhrcentos76:8.5 \
/usr/sbin/init

# docker network connect bridge lhrtidbmonitor

docker exec -it lhrtidbmonitor bash
```
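Once the containers are created, a quick sanity check (a sketch using standard docker CLI options) confirms that all thirteen nodes are up before any TiDB software is installed:

```
docker ps --format '{{.Names}}\t{{.Status}}' \
  | grep -E '^lhr(pd|tikv|tidb|tiflash|ticdc)' \
  | sort
```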
Environment Configuration
Configuring NTP
TiDB is a distributed database system, and its nodes must keep their clocks synchronized to guarantee linearly consistent transactions under the ACID model. The common solution is NTP: nodes can synchronize against the public pool.ntp.org service, or against an NTP server built inside an offline environment. Here, 172.16.6.25 acts as the time server and the other hosts synchronize their clocks from it:
```
yum install ntp ntpdate -y
ntpq -4p
ntpstat
timedatectl
```
On the time server 172.16.6.25, edit /etc/ntp.conf:
```
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Added: log file location
logfile /var/log/ntpd.log

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1

# Added: allow all machines on the 172.16.6.0/24 network to query and synchronize time from this server.
restrict 172.16.6.0 mask 255.255.255.0 nomodify notrap

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Added: upstream time servers.
server 0.cn.pool.ntp.org iburst
server 1.cn.pool.ntp.org iburst
server 2.cn.pool.ntp.org iburst
server 3.cn.pool.ntp.org iburst

# Added: fall back to the local clock when the upstream servers are unreachable.
server 127.0.0.1 iburst
fudge 127.0.0.1 stratum 10

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
```
Enable the service at boot and verify synchronization:
```
systemctl enable ntpd
systemctl is-enabled ntpd
ntpdate -u 1.cn.pool.ntp.org
systemctl restart ntpd

[root@lhrobproxy /]# ntpstat
synchronised to NTP server (84.16.73.33) at stratum 2
   time correct to within 98 ms
   polling server every 64 s
```
On the remaining hosts (NTP clients), edit /etc/ntp.conf, comment out the existing server lines, and add the following:
```
server 172.16.6.25
restrict 172.16.6.25 nomodify notrap noquery

server 127.0.0.1
fudge 127.0.0.1 stratum 10
```
Enable the service at boot:
```
systemctl enable ntpd
systemctl restart ntpd
```
Configure periodic synchronization on the clients via cron:
```
crontab -e
* * * * * /usr/sbin/ntpdate -u 172.16.6.25 > /dev/null 2>&1
```
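To verify the result, each client can be checked against the time server. A minimal sketch to run on every node (`ntpdate -q` only queries and does not adjust the clock):

```
ntpdate -q 172.16.6.25   # offset should be close to 0
ntpq -4p                 # 172.16.6.25 should appear, marked with '*' once selected
ntpstat                  # should report "synchronised"
```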
Checking and Configuring OS Optimization Parameters
For TiDB in production, the following operating-system optimizations are recommended:
- Disable Transparent Huge Pages (THP). Database memory access patterns tend to be sparse rather than contiguous, and when high-order memory is heavily fragmented, THP page allocation can incur noticeable latency.
- Set the I/O scheduler of the storage devices to noop. For high-speed SSDs, the kernel's I/O scheduling only costs performance; with noop, the kernel passes I/O requests straight to the hardware, which performs better and works well in most setups.
- Select the performance mode for the cpufreq governor. Pinning the CPU at its highest supported frequency, instead of scaling it dynamically, yields the best performance.
```
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/block/sd[bc]/queue/scheduler
cpupower frequency-info --policy
```
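If any of the three checks shows a non-recommended value, the settings can be applied manually along the following lines. This is only a sketch: sdb is a placeholder for the actual data disk, some of these knobs are read-only inside containers, and `tiup cluster check --apply` (used later in this post) can fix most of them automatically:

```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo noop  > /sys/block/sdb/queue/scheduler     # repeat for each data disk
cpupower frequency-set --governor performance
```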
Kernel Parameter Changes
Configure all target hosts:
```
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p

cat << EOF >>/etc/security/limits.conf
tidb           soft    nofile          1000000
tidb           hard    nofile          1000000
tidb           soft    stack           32768
tidb           hard    stack           32768
EOF
```
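A quick way to confirm the values took effect (a sketch; the limits.conf entries apply only to new login sessions of the tidb user, which is created in the next step):

```
sysctl -n fs.file-max net.core.somaxconn vm.overcommit_memory
su - tidb -c 'ulimit -n -s'     # run after the tidb user exists
```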
Creating the User and Passwordless sudo
Configure all target hosts:
```
useradd tidb
echo "tidb:lhr" | chpasswd

echo "tidb ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
```
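A one-line check that passwordless sudo really works for the new user (`sudo -n` fails instead of prompting if a password would still be required):

```
su - tidb -c 'sudo -n whoami'    # expected output: root
```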
Setting Up Passwordless SSH Login
The sshUserSetup.sh script that ships with Oracle RAC can be used for a quick setup; run it on the control machine lhrtidbmonitor only:
```
sh sshUserSetup.sh -user tidb -hosts "lhrpd1 lhrpd2 lhrpd3 lhrtikv1 lhrtikv2 lhrtikv3 lhrtidb1 lhrtidb2 lhrtiflash1 lhrtiflash2 lhrticdc1 lhrticdc2 lhrtidbmonitor" -advanced -exverify -confirm
```
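If sshUserSetup.sh is not at hand, plain OpenSSH tooling achieves the same result. A sketch, run as the tidb user on lhrtidbmonitor (it prompts once for each host's tidb password):

```
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in lhrpd1 lhrpd2 lhrpd3 lhrtikv1 lhrtikv2 lhrtikv3 lhrtidb1 lhrtidb2 \
         lhrtiflash1 lhrtiflash2 lhrticdc1 lhrticdc2 lhrtidbmonitor; do
  ssh-copy-id "tidb@$h"
done
```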
Installing the Cluster
Reference: https://docs.pingcap.com/zh/tidb/stable/production-deployment-using-tiup
Perform the installation as the tidb user.
Install the TiUP component
```
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source /home/tidb/.bash_profile
which tiup
tiup cluster
tiup update --self && tiup update cluster
tiup --binary cluster
```
Initialize the cluster topology file
```
tiup cluster template --full > topology.yaml
```
Edit the topology file:
```
cat > topology.yaml <<"EOF"
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

server_configs:
  tidb: {}
  tikv:
    log-level: warning
    storage.reserve-space: 0
  pd: {}
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}

pd_servers:
  - host: 172.16.6.11
  - host: 172.16.6.12
  - host: 172.16.6.13

tidb_servers:
  - host: 172.16.6.17
  - host: 172.16.6.18

tikv_servers:
  - host: 172.16.6.14
  - host: 172.16.6.15
  - host: 172.16.6.16

tiflash_servers:
  - host: 172.16.6.21
  - host: 172.16.6.22

cdc_servers:
  - host: 172.16.6.23
  - host: 172.16.6.24

monitoring_servers:
  - host: 172.16.6.25

grafana_servers:
  - host: 172.16.6.25

alertmanager_servers:
  - host: 172.16.6.25
EOF
```
Run the Deploy Command
Before running `deploy`, use the `check` and `check --apply` commands to inspect the target hosts for potential risks and fix them automatically:
```
tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]

# For example
tiup cluster check ./topology.yaml --user tidb
tiup cluster check ./topology.yaml --apply --user tidb
```
Then run the `deploy` command to deploy the TiDB cluster:
```
tiup cluster deploy lhrtidb v5.2.2 ./topology.yaml --user tidb
```
In the deploy command above:
- The cluster deployed through TiUP cluster is named `lhrtidb`.
- Run `tiup list tidb` to see the latest versions supported by TiUP; the rest of this post uses `v5.2.2`.
- The initialization configuration file is `topology.yaml`.
- `--user root`: log in to the target machines as root to complete the deployment. That user must be able to SSH to the targets and have sudo privileges there; any other user with SSH and sudo privileges can be used instead.
- `[-i]` and `[-p]`: optional. If passwordless login to the targets is already configured, neither is needed; otherwise choose one of the two. `-i` points to the private key of root (or the user given by `--user`) that can log in to the targets, and `-p` prompts interactively for that user's password.
- If you need to specify the group created for the deploy user on the target machines, set it in the topology file; see the sketch after this list.
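For instance, a topology whose global section also specifies the group could look like the sketch below (the `group` field follows the full template produced by `tiup cluster template --full`; double-check the field name against your TiUP version):

```
cat > topology.yaml <<"EOF"
global:
  user: "tidb"
  group: "tidb"          # group created for the deploy user on the targets
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
# ... remaining sections as in the topology above ...
EOF
```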
The log is expected to end with the keywords `` Cluster `lhrtidb` deployed successfully ``, which indicates that the deployment succeeded.
Deployment process:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 | [tidb@lhrtidbmonitor ~]$ tiup cluster deploy lhrtidb v5.2.2 ./topology.yaml --user tidb Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.6.1/tiup-cluster deploy lhrtidb v5.2.2 ./topology.yaml --user tidb + Detect CPU Arch + Detect CPU Arch + Detect CPU Arch - Detecting node 172.16.6.11 ... ⠧ Shell: host=172.16.6.11, sudo=false, command=`uname -m` - Detecting node 172.16.6.12 ... ⠧ Shell: host=172.16.6.12, sudo=false, command=`uname -m` + Detect CPU Arch + Detect CPU Arch + Detect CPU Arch + Detect CPU Arch - Detecting node 172.16.6.11 ... Done - Detecting node 172.16.6.12 ... Done - Detecting node 172.16.6.13 ... Done - Detecting node 172.16.6.14 ... Done - Detecting node 172.16.6.15 ... Done - Detecting node 172.16.6.16 ... Done - Detecting node 172.16.6.17 ... Done - Detecting node 172.16.6.18 ... Done - Detecting node 172.16.6.21 ... Done - Detecting node 172.16.6.22 ... Done - Detecting node 172.16.6.23 ... Done - Detecting node 172.16.6.24 ... Done - Detecting node 172.16.6.25 ... Done Please confirm your topology: Cluster type: tidb Cluster name: lhrtidb Cluster version: v5.2.2 Role Host Ports OS/Arch Directories ---- ---- ----- ------- ----------- pd 172.16.6.11 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379 pd 172.16.6.12 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379 pd 172.16.6.13 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379 tikv 172.16.6.14 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160 tikv 172.16.6.15 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160 tikv 172.16.6.16 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160 tidb 172.16.6.17 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000 tidb 172.16.6.18 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000 tiflash 172.16.6.21 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000 tiflash 172.16.6.22 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000 cdc 172.16.6.23 8300 linux/x86_64 /tidb-deploy/cdc-8300,/tidb-data/cdc-8300 cdc 172.16.6.24 8300 linux/x86_64 /tidb-deploy/cdc-8300,/tidb-data/cdc-8300 prometheus 172.16.6.25 9090 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090 grafana 172.16.6.25 3000 linux/x86_64 /tidb-deploy/grafana-3000 alertmanager 172.16.6.25 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093 Attention: 1. If the topology is not what you expected, check your yaml file. 2. Please confirm there is no port/directory conflicts in same host. Do you want to continue? [y/N]: (default=N) Y + Generate SSH keys ... Done + Download TiDB components - Download pd:v5.2.2 (linux/amd64) ... Done - Download tikv:v5.2.2 (linux/amd64) ... Done - Download tidb:v5.2.2 (linux/amd64) ... 
Done - Download tiflash:v5.2.2 (linux/amd64) ... Done - Download cdc:v5.2.2 (linux/amd64) ... Done - Download prometheus:v5.2.2 (linux/amd64) ... Done - Download grafana:v5.2.2 (linux/amd64) ... Done - Download alertmanager: (linux/amd64) ... Done - Download node_exporter: (linux/amd64) ... Done - Download blackbox_exporter: (linux/amd64) ... Done + Initialize target host environments - Prepare 172.16.6.11:22 ... Done - Prepare 172.16.6.12:22 ... Done - Prepare 172.16.6.13:22 ... Done - Prepare 172.16.6.14:22 ... Done - Prepare 172.16.6.15:22 ... Done - Prepare 172.16.6.16:22 ... Done - Prepare 172.16.6.17:22 ... Done - Prepare 172.16.6.18:22 ... Done - Prepare 172.16.6.21:22 ... Done - Prepare 172.16.6.22:22 ... Done - Prepare 172.16.6.23:22 ... Done - Prepare 172.16.6.24:22 ... Done - Prepare 172.16.6.25:22 ... Done - Copy node_exporter -> 172.16.6.22 ... Done - Copy node_exporter -> 172.16.6.24 ... Done - Copy node_exporter -> 172.16.6.25 ... Done - Copy node_exporter -> 172.16.6.14 ... Done - Copy node_exporter -> 172.16.6.15 ... Done - Copy node_exporter -> 172.16.6.17 ... Done - Copy node_exporter -> 172.16.6.16 ... Done - Copy node_exporter -> 172.16.6.23 ... Done - Copy node_exporter -> 172.16.6.11 ... Done - Copy node_exporter -> 172.16.6.12 ... Done - Copy node_exporter -> 172.16.6.13 ... Done - Copy blackbox_exporter -> 172.16.6.13 ... Done - Copy blackbox_exporter -> 172.16.6.16 ... Done - Copy blackbox_exporter -> 172.16.6.23 ... Done - Copy blackbox_exporter -> 172.16.6.11 ... Done - Copy blackbox_exporter -> 172.16.6.12 ... Done - Copy blackbox_exporter -> 172.16.6.17 ... Done - Copy blackbox_exporter -> 172.16.6.18 ... Done - Copy blackbox_exporter -> 172.16.6.21 ... Done - Copy blackbox_exporter -> 172.16.6.22 ... Done - Copy blackbox_exporter -> 172.16.6.24 ... Done - Copy blackbox_exporter -> 172.16.6.25 ... Done - Copy blackbox_exporter -> 172.16.6.14 ... Done - Copy blackbox_exporter -> 172.16.6.15 ... 
Done + Check status Enabling component pd Enabling instance 172.16.6.13:2379 Enabling instance 172.16.6.11:2379 Enabling instance 172.16.6.12:2379 Enable instance 172.16.6.12:2379 success Enable instance 172.16.6.13:2379 success Enable instance 172.16.6.11:2379 success Enabling component tikv Enabling instance 172.16.6.16:20160 Enabling instance 172.16.6.14:20160 Enabling instance 172.16.6.15:20160 Enable instance 172.16.6.16:20160 success Enable instance 172.16.6.15:20160 success Enable instance 172.16.6.14:20160 success Enabling component tidb Enabling instance 172.16.6.18:4000 Enabling instance 172.16.6.17:4000 Enable instance 172.16.6.17:4000 success Enable instance 172.16.6.18:4000 success Enabling component tiflash Enabling instance 172.16.6.22:9000 Enabling instance 172.16.6.21:9000 Enable instance 172.16.6.22:9000 success Enable instance 172.16.6.21:9000 success Enabling component cdc Enabling instance 172.16.6.24:8300 Enabling instance 172.16.6.23:8300 Enable instance 172.16.6.23:8300 success Enable instance 172.16.6.24:8300 success Enabling component prometheus Enabling instance 172.16.6.25:9090 Enable instance 172.16.6.25:9090 success Enabling component grafana Enabling instance 172.16.6.25:3000 Enable instance 172.16.6.25:3000 success Enabling component alertmanager Enabling instance 172.16.6.25:9093 Enable instance 172.16.6.25:9093 success Enabling component node_exporter Enabling instance 172.16.6.17 Enabling instance 172.16.6.25 Enabling instance 172.16.6.11 Enabling instance 172.16.6.23 Enabling instance 172.16.6.13 Enabling instance 172.16.6.21 Enabling instance 172.16.6.15 Enabling instance 172.16.6.24 Enabling instance 172.16.6.22 Enabling instance 172.16.6.12 Enabling instance 172.16.6.18 Enabling instance 172.16.6.16 Enabling instance 172.16.6.14 Enable 172.16.6.24 success Enable 172.16.6.17 success Enable 172.16.6.23 success Enable 172.16.6.12 success Enable 172.16.6.25 success Enable 172.16.6.13 success Enable 172.16.6.15 success Enable 172.16.6.21 success Enable 172.16.6.16 success Enable 172.16.6.18 success Enable 172.16.6.22 success Enable 172.16.6.11 success Enable 172.16.6.14 success Enabling component blackbox_exporter Enabling instance 172.16.6.17 Enabling instance 172.16.6.21 Enabling instance 172.16.6.22 Enabling instance 172.16.6.24 Enabling instance 172.16.6.13 Enabling instance 172.16.6.11 Enabling instance 172.16.6.15 Enabling instance 172.16.6.14 Enabling instance 172.16.6.23 Enabling instance 172.16.6.25 Enabling instance 172.16.6.16 Enabling instance 172.16.6.18 Enabling instance 172.16.6.12 Enable 172.16.6.21 success Enable 172.16.6.23 success Enable 172.16.6.24 success Enable 172.16.6.25 success Enable 172.16.6.13 success Enable 172.16.6.18 success Enable 172.16.6.15 success Enable 172.16.6.17 success Enable 172.16.6.16 success Enable 172.16.6.12 success Enable 172.16.6.22 success Enable 172.16.6.14 success Enable 172.16.6.11 success Cluster `lhrtidb` deployed successfully, you can start it with command: `tiup cluster start lhrtidb` |
Verifying the Cluster
```
[tidb@lhrtidbmonitor ~]$ tiup cluster list
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.6.1/tiup-cluster list
Name     User  Version  Path                                               PrivateKey
----     ----  -------  ----                                               ----------
lhrtidb  tidb  v5.2.2   /home/tidb/.tiup/storage/cluster/clusters/lhrtidb  /home/tidb/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa
```
TiUP can manage multiple TiDB clusters. The command above lists every cluster currently managed by TiUP cluster, including its name, deploy user, version, and key location. Next, check the status of each component with `tiup cluster display lhrtidb`:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | [tidb@lhrtidbmonitor ~]$ tiup cluster display lhrtidb Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.6.1/tiup-cluster display lhrtidb Cluster type: tidb Cluster name: lhrtidb Cluster version: v5.2.2 Deploy user: tidb SSH type: builtin ID Role Host Ports OS/Arch Status Data Dir Deploy Dir -- ---- ---- ----- ------- ------ -------- ---------- 172.16.6.25:9093 alertmanager 172.16.6.25 9093/9094 linux/x86_64 Down /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093 172.16.6.23:8300 cdc 172.16.6.23 8300 linux/x86_64 Down /tidb-data/cdc-8300 /tidb-deploy/cdc-8300 172.16.6.24:8300 cdc 172.16.6.24 8300 linux/x86_64 Down /tidb-data/cdc-8300 /tidb-deploy/cdc-8300 172.16.6.25:3000 grafana 172.16.6.25 3000 linux/x86_64 Down - /tidb-deploy/grafana-3000 172.16.6.11:2379 pd 172.16.6.11 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.12:2379 pd 172.16.6.12 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.13:2379 pd 172.16.6.13 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.25:9090 prometheus 172.16.6.25 9090 linux/x86_64 Down /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090 172.16.6.17:4000 tidb 172.16.6.17 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000 172.16.6.18:4000 tidb 172.16.6.18 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000 172.16.6.21:9000 tiflash 172.16.6.21 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000 172.16.6.22:9000 tiflash 172.16.6.22 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000 172.16.6.14:20160 tikv 172.16.6.14 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 172.16.6.15:20160 tikv 172.16.6.15 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 172.16.6.16:20160 tikv 172.16.6.16 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 Total nodes: 15 |
The expected output shows, for every instance of the `lhrtidb` cluster, its ID, role, host, listening ports, status (Down/inactive, because the cluster has not been started yet), and directories. Then start the cluster with `tiup cluster start lhrtidb` and run `tiup cluster display lhrtidb` again; every component should now report Up:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 | [tidb@lhrtidbmonitor ~]$ tiup cluster start lhrtidb Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.6.1/tiup-cluster start lhrtidb Starting cluster lhrtidb... + [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa.pub + [Parallel] - UserSSH: user=tidb, host=172.16.6.15 + [Parallel] - UserSSH: user=tidb, host=172.16.6.12 + [Parallel] - UserSSH: user=tidb, host=172.16.6.16 + [Parallel] - UserSSH: user=tidb, host=172.16.6.17 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [Parallel] - UserSSH: user=tidb, host=172.16.6.13 + [Parallel] - UserSSH: user=tidb, host=172.16.6.22 + [Parallel] - UserSSH: user=tidb, host=172.16.6.14 + [Parallel] - UserSSH: user=tidb, host=172.16.6.21 + [Parallel] - UserSSH: user=tidb, host=172.16.6.25 + [Parallel] - UserSSH: user=tidb, host=172.16.6.18 + [Parallel] - UserSSH: user=tidb, host=172.16.6.23 + [Parallel] - UserSSH: user=tidb, host=172.16.6.25 + [Parallel] - UserSSH: user=tidb, host=172.16.6.25 + [Parallel] - UserSSH: user=tidb, host=172.16.6.24 + [ Serial ] - StartCluster Starting component pd Starting instance 172.16.6.13:2379 Starting instance 172.16.6.12:2379 Starting instance 172.16.6.11:2379 Start instance 172.16.6.12:2379 success Start instance 172.16.6.13:2379 success Start instance 172.16.6.11:2379 success Starting component tikv Starting instance 172.16.6.16:20160 Starting instance 172.16.6.15:20160 Starting instance 172.16.6.14:20160 Start instance 172.16.6.16:20160 success Start instance 172.16.6.15:20160 success Start instance 172.16.6.14:20160 success Starting component tidb Starting instance 172.16.6.18:4000 Starting instance 172.16.6.17:4000 Start instance 172.16.6.18:4000 success Start instance 172.16.6.17:4000 success Starting component tiflash Starting instance 172.16.6.21:9000 Starting instance 172.16.6.22:9000 Start instance 172.16.6.21:9000 success Start instance 172.16.6.22:9000 success Starting component cdc Starting instance 172.16.6.24:8300 Starting instance 172.16.6.23:8300 Start instance 172.16.6.23:8300 success Start instance 172.16.6.24:8300 success Starting component prometheus Starting instance 172.16.6.25:9090 Start instance 172.16.6.25:9090 success Starting component grafana Starting instance 172.16.6.25:3000 Start instance 172.16.6.25:3000 success Starting component alertmanager Starting instance 172.16.6.25:9093 Start instance 172.16.6.25:9093 success Starting component node_exporter Starting instance 172.16.6.14 Starting instance 172.16.6.11 Starting instance 172.16.6.23 Starting instance 172.16.6.16 Starting instance 172.16.6.21 Starting instance 172.16.6.13 Starting instance 172.16.6.24 Starting instance 172.16.6.22 Starting instance 172.16.6.12 Starting instance 172.16.6.15 Starting instance 172.16.6.25 Starting instance 172.16.6.17 Starting instance 172.16.6.18 Start 172.16.6.17 success Start 172.16.6.14 success Start 172.16.6.16 success Start 172.16.6.21 success Start 172.16.6.13 success Start 172.16.6.12 success Start 
172.16.6.18 success Start 172.16.6.23 success Start 172.16.6.24 success Start 172.16.6.15 success Start 172.16.6.11 success Start 172.16.6.25 success Start 172.16.6.22 success Starting component blackbox_exporter Starting instance 172.16.6.17 Starting instance 172.16.6.14 Starting instance 172.16.6.15 Starting instance 172.16.6.16 Starting instance 172.16.6.18 Starting instance 172.16.6.22 Starting instance 172.16.6.23 Starting instance 172.16.6.11 Starting instance 172.16.6.12 Starting instance 172.16.6.25 Starting instance 172.16.6.21 Starting instance 172.16.6.24 Starting instance 172.16.6.13 Start 172.16.6.22 success Start 172.16.6.23 success Start 172.16.6.17 success Start 172.16.6.15 success Start 172.16.6.14 success Start 172.16.6.12 success Start 172.16.6.18 success Start 172.16.6.24 success Start 172.16.6.11 success Start 172.16.6.13 success Start 172.16.6.25 success Start 172.16.6.21 success Start 172.16.6.16 success + [ Serial ] - UpdateTopology: cluster=lhrtidb Started cluster `lhrtidb` successfully [tidb@lhrtidbmonitor ~]$ tiup cluster display lhrtidb Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.6.1/tiup-cluster display lhrtidb Cluster type: tidb Cluster name: lhrtidb Cluster version: v5.2.2 Deploy user: tidb SSH type: builtin Dashboard URL: http://172.16.6.11:2379/dashboard ID Role Host Ports OS/Arch Status Data Dir Deploy Dir -- ---- ---- ----- ------- ------ -------- ---------- 172.16.6.25:9093 alertmanager 172.16.6.25 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093 172.16.6.23:8300 cdc 172.16.6.23 8300 linux/x86_64 Up /tidb-data/cdc-8300 /tidb-deploy/cdc-8300 172.16.6.24:8300 cdc 172.16.6.24 8300 linux/x86_64 Up /tidb-data/cdc-8300 /tidb-deploy/cdc-8300 172.16.6.25:3000 grafana 172.16.6.25 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000 172.16.6.11:2379 pd 172.16.6.11 2379/2380 linux/x86_64 Up|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.12:2379 pd 172.16.6.12 2379/2380 linux/x86_64 Up|L /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.13:2379 pd 172.16.6.13 2379/2380 linux/x86_64 Up /tidb-data/pd-2379 /tidb-deploy/pd-2379 172.16.6.25:9090 prometheus 172.16.6.25 9090 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090 172.16.6.17:4000 tidb 172.16.6.17 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000 172.16.6.18:4000 tidb 172.16.6.18 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000 172.16.6.21:9000 tiflash 172.16.6.21 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000 172.16.6.22:9000 tiflash 172.16.6.22 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000 172.16.6.14:20160 tikv 172.16.6.14 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 172.16.6.15:20160 tikv 172.16.6.15 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 172.16.6.16:20160 tikv 172.16.6.16 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160 Total nodes: 15 [tidb@lhrtidbmonitor ~]$ |
MySQL Client
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | yum install -y mariadb mariadb-libs mariadb-devel mysql --host 172.16.6.17 --port 4000 -u root mysql -uroot -P 24000 -h192.168.66.35 select tidb_version(); select version(); select * from INFORMATION_SCHEMA.cluster_info order by type,instance; select STORE_ID,ADDRESS,STORE_STATE,STORE_STATE_NAME,CAPACITY,AVAILABLE,UPTIME from INFORMATION_SCHEMA.TIKV_STORE_STATUS; select * from mysql.tidb; C:\Users\lhrxxt>mysql -uroot -P 24000 -h192.168.66.35 Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 9 Server version: 5.7.25-TiDB-v5.2.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MySQL [(none)]> select tidb_version(); +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | tidb_version() | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Release Version: v5.2.2 Edition: Community Git Commit Hash: da1c21fd45a4ea5900ac16d2f4a248143f378d18 Git Branch: heads/refs/tags/v5.2.2 UTC Build Time: 2021-10-20 06:08:33 GoVersion: go1.16.4 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.06 sec) MySQL [(none)]> select version(); +--------------------+ | version() | +--------------------+ | 5.7.25-TiDB-v5.2.2 | +--------------------+ 1 row in set (0.05 sec) MySQL [(none)]> MySQL [(none)]> select * from INFORMATION_SCHEMA.cluster_info order by type,instance; +---------+-------------------+-------------------+---------+------------------------------------------+---------------------------+------------------+-----------+ | TYPE | INSTANCE | STATUS_ADDRESS | VERSION | GIT_HASH | START_TIME | UPTIME | SERVER_ID | +---------+-------------------+-------------------+---------+------------------------------------------+---------------------------+------------------+-----------+ | pd | 172.16.6.11:2379 | 172.16.6.11:2379 | 5.2.2 | 02139dc2a160e24215f634a82b943b2157a2e8ed | 2021-11-05T16:26:35+08:00 | 24m35.805938351s | 0 | | pd | 172.16.6.12:2379 | 172.16.6.12:2379 | 5.2.2 | 02139dc2a160e24215f634a82b943b2157a2e8ed | 2021-11-05T16:26:35+08:00 | 24m35.805934607s | 0 | | pd | 
172.16.6.13:2379 | 172.16.6.13:2379 | 5.2.2 | 02139dc2a160e24215f634a82b943b2157a2e8ed | 2021-11-05T16:26:35+08:00 | 24m35.805941322s | 0 | | tidb | 172.16.6.17:4000 | 172.16.6.17:10080 | 5.2.2 | da1c21fd45a4ea5900ac16d2f4a248143f378d18 | 2021-11-05T16:27:01+08:00 | 24m9.805930429s | 0 | | tidb | 172.16.6.18:4000 | 172.16.6.18:10080 | 5.2.2 | da1c21fd45a4ea5900ac16d2f4a248143f378d18 | 2021-11-05T16:27:01+08:00 | 24m9.805899506s | 0 | | tiflash | 172.16.6.21:3930 | 172.16.6.21:20292 | 5.2.2 | 8242cf25b1e3f99c2fe3c7dd8909c05f7338c455 | 2021-11-05T16:27:16+08:00 | 23m54.805965297s | 0 | | tiflash | 172.16.6.22:3930 | 172.16.6.22:20292 | 5.2.2 | 8242cf25b1e3f99c2fe3c7dd8909c05f7338c455 | 2021-11-05T16:27:16+08:00 | 23m54.805960033s | 0 | | tikv | 172.16.6.14:20160 | 172.16.6.14:20180 | 5.2.2 | 7acaec5d9c809439b9b0017711f114b44ffd9a49 | 2021-11-05T16:26:41+08:00 | 24m29.805944685s | 0 | | tikv | 172.16.6.15:20160 | 172.16.6.15:20180 | 5.2.2 | 7acaec5d9c809439b9b0017711f114b44ffd9a49 | 2021-11-05T16:26:41+08:00 | 24m29.8059514s | 0 | | tikv | 172.16.6.16:20160 | 172.16.6.16:20180 | 5.2.2 | 7acaec5d9c809439b9b0017711f114b44ffd9a49 | 2021-11-05T16:26:41+08:00 | 24m29.805947759s | 0 | +---------+-------------------+-------------------+---------+------------------------------------------+---------------------------+------------------+-----------+ 10 rows in set (0.06 sec) MySQL [(none)]> select STORE_ID,ADDRESS,STORE_STATE,STORE_STATE_NAME,CAPACITY,AVAILABLE,UPTIME from INFORMATION_SCHEMA.TIKV_STORE_STATUS; +----------+-------------------+-------------+------------------+----------+-----------+------------------+ | STORE_ID | ADDRESS | STORE_STATE | STORE_STATE_NAME | CAPACITY | AVAILABLE | UPTIME | +----------+-------------------+-------------+------------------+----------+-----------+------------------+ | 3 | 172.16.6.14:20160 | 0 | Up | 1008GiB | 101.9GiB | 24m55.924664094s | | 1 | 172.16.6.16:20160 | 0 | Up | 1008GiB | 101.9GiB | 24m55.92438747s | | 2 | 172.16.6.15:20160 | 0 | Up | 1008GiB | 101.9GiB | 24m55.924509316s | | 108 | 172.16.6.22:3930 | 0 | Up | 1008GiB | 1008GiB | 24m21.416082819s | | 109 | 172.16.6.21:3930 | 0 | Up | 1008GiB | 1008GiB | 24m21.416672469s | +----------+-------------------+-------------+------------------+----------+-----------+------------------+ 5 rows in set (0.06 sec) MySQL [(none)]> MySQL [(none)]> select * from mysql.tidb d ; +--------------------------+--------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+ | VARIABLE_NAME | VARIABLE_VALUE | COMMENT | +--------------------------+--------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+ | bootstrapped | True | Bootstrap flag. Do not delete. | | tidb_server_version | 72 | Bootstrap version. Do not delete. | | system_tz | Asia/Shanghai | TiDB Global System Timezone. | | new_collation_enabled | False | If the new collations are enabled. Do not edit it. | | tikv_gc_leader_uuid | 5f3bcd59bac0001 | Current GC worker leader UUID. (DO NOT EDIT) | | tikv_gc_leader_desc | host:lhrtidb2, pid:14637, start at 2021-11-05 16:27:06.906556281 +0800 CST m=+23.122156824 | Host name and pid of current GC leader. (DO NOT EDIT) | | tikv_gc_leader_lease | 20211105-16:53:08 +0800 | Current GC worker leader lease. 
(DO NOT EDIT) | | tikv_gc_enable | true | Current GC enable status | | tikv_gc_run_interval | 10m0s | GC run interval, at least 10m, in Go format. | | tikv_gc_life_time | 10m0s | All versions within life time will not be collected by GC, at least 10m, in Go format. | | tikv_gc_last_run_time | 20211105-16:47:36 +0800 | The time when last GC starts. (DO NOT EDIT) | | tikv_gc_safe_point | 20211105-16:37:36 +0800 | All versions after safe point can be accessed. (DO NOT EDIT) | | tikv_gc_auto_concurrency | true | Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used | | tikv_gc_scan_lock_mode | legacy | Mode of scanning locks, "physical" or "legacy" | | tikv_gc_mode | distributed | Mode of GC, "central" or "distributed" | +--------------------------+--------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+ 15 rows in set (0.06 sec) |
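After connecting, a short smoke test confirms that DDL, writes, and reads all go through the cluster. A sketch; the database and table names below are made up for illustration:

```
mysql -uroot -P 24000 -h192.168.66.35 -e "
  CREATE DATABASE IF NOT EXISTS lhrtest;
  CREATE TABLE IF NOT EXISTS lhrtest.t1 (id INT PRIMARY KEY, v VARCHAR(20));
  REPLACE INTO lhrtest.t1 VALUES (1, 'hello tidb');
  SELECT * FROM lhrtest.t1;
"
```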
Graphical Monitoring
Several monitoring endpoints are available:
Dashboard: http://172.16.6.11:2379/dashboard (user root, empty password)
Prometheus: http://172.16.6.25:9090
Grafana: http://172.16.6.25:3000 (admin/admin)
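A quick reachability check of these endpoints from inside the docker network could look like this sketch (`/-/healthy`, `/api/health`, and `/pd/api/v1/members` are the standard Prometheus, Grafana, and PD health/member APIs):

```
curl -s http://172.16.6.25:9090/-/healthy                   # Prometheus
curl -s http://172.16.6.25:3000/api/health                  # Grafana
curl -s http://172.16.6.11:2379/pd/api/v1/members | head    # PD (behind the Dashboard)
```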
Configuring HAProxy Load Balancing
```
yum -y install haproxy

cat > /etc/haproxy/haproxy.cfg <<"EOF"
global                                     # Global configuration.
   log         127.0.0.1 local2            # Global syslog server (at most two can be defined).
   chroot      /var/lib/haproxy            # Change the working directory and drop privileges for better security.
   pidfile     /var/run/haproxy.pid        # Write the PID of the HAProxy process to the pidfile.
   maxconn     4000                        # Maximum concurrent connections per HAProxy process.
   user        haproxy                     # Same as the UID parameter.
   group       haproxy                     # Same as the GID parameter; a dedicated group is recommended.
   nbproc      40                          # Number of processes created when running in the background. Make it large enough that HAProxy does not become the bottleneck when forwarding requests.
   daemon                                  # Run HAProxy as a daemon, equivalent to the "-D" command-line flag; it can be disabled with "-db".
   stats socket /var/lib/haproxy/stats     # Where statistics are saved.

defaults                                   # Default configuration.
   log global                              # Inherit the logging settings of the global section.
   retries 2                               # Maximum connection attempts to an upstream server before it is considered unavailable.
   timeout connect  2s                     # Timeout for HAProxy connecting to a backend server; can be short within one LAN.
   timeout client 30000s                   # Timeout for inactive client connections after data transfer completes.
   timeout server 30000s                   # Timeout for inactive server-side connections.

listen admin_stats                         # Combined frontend+backend for the stats page; name it as you like.
   bind 0.0.0.0:3400                       # Listening port.
   mode http                               # Run this listener in http mode.
   option httplog                          # Enable HTTP request logging.
   maxconn 10                              # Maximum concurrent connections.
   stats refresh 30s                       # Auto-refresh the stats page every 30 seconds.
   stats uri /haproxy                      # URL of the stats page.
   stats realm HAProxy                     # Prompt text of the stats page.
   stats auth admin:lhr                    # User and password for the stats page; multiple users can be configured.
   stats hide-version                      # Hide the HAProxy version on the stats page.
   stats admin if TRUE                     # Allow enabling/disabling backend servers by hand (HAProxy 1.4.9 and later).

listen tidb-cluster                        # Database load balancing.
   bind 0.0.0.0:3399                       # Floating IP and listening port.
   mode tcp                                # HAProxy works at layer 4 (TCP) here.
   balance leastconn                       # The server with the fewest connections receives the next one. "leastconn" is recommended for long sessions (LDAP, SQL, TSE, ...) rather than short protocols such as HTTP; the algorithm is dynamic, so weights of slowly starting servers are adjusted at runtime.
   server tidb-1 172.16.6.17:4000 check inter 2000 rise 2 fall 3   # Check port 4000 every 2000 ms; 2 successful checks mark the server available, 3 failures mark it unavailable.
   server tidb-2 172.16.6.18:4000 check inter 2000 rise 2 fall 3
EOF

# Start
#haproxy -f /etc/haproxy/haproxy.cfg
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy

# Web UI
# http://192.168.66.35:3400/haproxy   admin/lhr

# Connect through the load balancer
mysql -uroot -P 3399 -h192.168.66.35
```
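To confirm that connections are really being balanced, one rough check (a sketch; it assumes TiDB exposes the `hostname`/`port` session variables the way MySQL does) is to open several connections through port 3399 and see which backend answered, or simply watch the per-server session counters on the HAProxy stats page:

```
for i in 1 2 3 4; do
  mysql -uroot -P 3399 -h192.168.66.35 -N -e "select @@hostname, @@port;"
done
```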
Inspection
DM Cluster Deployment
```
tiup install dm
tiup update --self && tiup update dm

tiup dm template > topology.yaml

tiup list dm-master

tiup dm deploy lhrdm v5.4.0-nightly-20211113 ./topology.yaml
tiup dm list
tiup dm display lhrdm
tiup dm start lhrdm
tiup dm display lhrdm

tiup dmctl:v5.4.0-nightly-20211113
```
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 | [tidb@lhrtidbmonitor ~]$ cat topology.yaml # The topology template is used deploy a minimal DM cluster, which suitable # for scenarios with only three machinescontains. The minimal cluster contains # - 3 master nodes # - 3 worker nodes # You can change the hosts according your environment --- global: user: "tidb" ssh_port: 22 deploy_dir: "/home/tidb/dm/deploy" data_dir: "/home/tidb/dm/data" # arch: "amd64" master_servers: - host: 172.16.6.11 - host: 172.16.6.12 - host: 172.16.6.13 worker_servers: - host: 172.16.6.11 - host: 172.16.6.12 - host: 172.16.6.13 monitoring_servers: - host: 172.16.6.11 grafana_servers: - host: 172.16.6.11 alertmanager_servers: - host: 172.16.6.11 [tidb@lhrtidbmonitor ~]$ tiup list dm-master Available versions for dm-master: Version Installed Release Platforms ------- --------- ------- --------- nightly -> v5.4.0-nightly-20211113 2021-11-13T22:35:54+08:00 linux/amd64,linux/arm64 v2.0.0-rc 2020-08-21T17:49:08+08:00 linux/amd64,linux/arm64 v2.0.0-rc.2 2020-09-01T20:51:29+08:00 linux/amd64,linux/arm64 v2.0.0 2020-10-30T16:10:58+08:00 linux/amd64,linux/arm64 v2.0.1 2020-12-25T13:22:29+08:00 linux/amd64,linux/arm64 v2.0.3 2021-05-11T22:14:31+08:00 linux/amd64,linux/arm64 v2.0.4 2021-06-18T16:34:30+08:00 linux/amd64,linux/arm64 v2.0.5 2021-07-30T18:46:27+08:00 linux/amd64,linux/arm64 v2.0.6 2021-08-13T17:36:06+08:00 linux/amd64,linux/arm64 v2.0.7 2021-09-29T16:34:31+08:00 linux/amd64,linux/arm64 v5.4.0-nightly-20211113 2021-11-13T22:35:54+08:00 linux/amd64,linux/arm64 [tidb@lhrtidbmonitor ~]$ tiup dm deploy lhrdm v5.4.0-nightly-20211113 ./topology.yaml Starting component `dm`: /home/tidb/.tiup/components/dm/v1.6.1/tiup-dm deploy lhrdm v5.4.0-nightly-20211113 ./topology.yaml + Detect CPU Arch - Detecting node 172.16.6.11 ... Done - Detecting node 172.16.6.12 ... Done - Detecting node 172.16.6.13 ... 
Done Please confirm your topology: Cluster type: dm Cluster name: lhrdm Cluster version: v5.4.0-nightly-20211113 Role Host Ports OS/Arch Directories ---- ---- ----- ------- ----------- dm-master 172.16.6.11 8261/8291 linux/x86_64 /home/tidb/dm/deploy/dm-master-8261,/home/tidb/dm/data/dm-master-8261 dm-master 172.16.6.12 8261/8291 linux/x86_64 /home/tidb/dm/deploy/dm-master-8261,/home/tidb/dm/data/dm-master-8261 dm-master 172.16.6.13 8261/8291 linux/x86_64 /home/tidb/dm/deploy/dm-master-8261,/home/tidb/dm/data/dm-master-8261 dm-worker 172.16.6.11 8262 linux/x86_64 /home/tidb/dm/deploy/dm-worker-8262,/home/tidb/dm/data/dm-worker-8262 dm-worker 172.16.6.12 8262 linux/x86_64 /home/tidb/dm/deploy/dm-worker-8262,/home/tidb/dm/data/dm-worker-8262 dm-worker 172.16.6.13 8262 linux/x86_64 /home/tidb/dm/deploy/dm-worker-8262,/home/tidb/dm/data/dm-worker-8262 prometheus 172.16.6.11 9090 linux/x86_64 /home/tidb/dm/deploy/prometheus-9090,/home/tidb/dm/data/prometheus-9090 grafana 172.16.6.11 3000 linux/x86_64 /home/tidb/dm/deploy/grafana-3000 alertmanager 172.16.6.11 9093/9094 linux/x86_64 /home/tidb/dm/deploy/alertmanager-9093,/home/tidb/dm/data/alertmanager-9093 Attention: 1. If the topology is not what you expected, check your yaml file. 2. Please confirm there is no port/directory conflicts in same host. Do you want to continue? [y/N]: (default=N) y + Generate SSH keys ... Done + Download TiDB components - Download dm-master:v5.4.0-nightly-20211113 (linux/amd64) ... Done - Download dm-worker:v5.4.0-nightly-20211113 (linux/amd64) ... Done - Download prometheus: (linux/amd64) ... Done - Download grafana: (linux/amd64) ... Done - Download alertmanager: (linux/amd64) ... Done + Initialize target host environments - Prepare 172.16.6.11:22 ... Done - Prepare 172.16.6.12:22 ... Done - Prepare 172.16.6.13:22 ... Done + Copy files - Copy dm-master -> 172.16.6.11 ... Done - Copy dm-master -> 172.16.6.12 ... Done - Copy dm-master -> 172.16.6.13 ... Done - Copy dm-worker -> 172.16.6.11 ... Done - Copy dm-worker -> 172.16.6.12 ... Done - Copy dm-worker -> 172.16.6.13 ... Done - Copy prometheus -> 172.16.6.11 ... Done - Copy grafana -> 172.16.6.11 ... Done - Copy alertmanager -> 172.16.6.11 ... Done Enabling component dm-master Enabling instance 172.16.6.13:8261 Enabling instance 172.16.6.11:8261 Enabling instance 172.16.6.12:8261 Enable instance 172.16.6.13:8261 success Enable instance 172.16.6.12:8261 success Enable instance 172.16.6.11:8261 success Enabling component dm-worker Enabling instance 172.16.6.13:8262 Enabling instance 172.16.6.12:8262 Enabling instance 172.16.6.11:8262 Enable instance 172.16.6.11:8262 success Enable instance 172.16.6.13:8262 success Enable instance 172.16.6.12:8262 success Enabling component prometheus Enabling instance 172.16.6.11:9090 Enable instance 172.16.6.11:9090 success Enabling component grafana Enabling instance 172.16.6.11:3000 Enable instance 172.16.6.11:3000 success Enabling component alertmanager Enabling instance 172.16.6.11:9093 Enable instance 172.16.6.11:9093 success Cluster `lhrdm` deployed successfully, you can start it with command: `tiup dm start lhrdm` [tidb@lhrtidbmonitor ~]$ tiup dm start lhrdm Starting component `dm`: /home/tidb/.tiup/components/dm/v1.6.1/tiup-dm start lhrdm Starting cluster lhrdm... 
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/dm/clusters/lhrdm/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/dm/clusters/lhrdm/ssh/id_rsa.pub + [Parallel] - UserSSH: user=tidb, host=172.16.6.12 + [Parallel] - UserSSH: user=tidb, host=172.16.6.12 + [Parallel] - UserSSH: user=tidb, host=172.16.6.13 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [Parallel] - UserSSH: user=tidb, host=172.16.6.13 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [Parallel] - UserSSH: user=tidb, host=172.16.6.11 + [ Serial ] - StartCluster Starting component dm-master Starting instance 172.16.6.13:8261 Starting instance 172.16.6.11:8261 Starting instance 172.16.6.12:8261 Start instance 172.16.6.11:8261 success Start instance 172.16.6.13:8261 success Start instance 172.16.6.12:8261 success Starting component dm-worker Starting instance 172.16.6.11:8262 Starting instance 172.16.6.12:8262 Starting instance 172.16.6.13:8262 Start instance 172.16.6.11:8262 success Start instance 172.16.6.13:8262 success Start instance 172.16.6.12:8262 success Starting component prometheus Starting instance 172.16.6.11:9090 Start instance 172.16.6.11:9090 success Starting component grafana Starting instance 172.16.6.11:3000 Start instance 172.16.6.11:3000 success Starting component alertmanager Starting instance 172.16.6.11:9093 Start instance 172.16.6.11:9093 success Started cluster `lhrdm` successfully [tidb@lhrtidbmonitor ~]$ [tidb@lhrtidbmonitor ~]$ tiup dm display lhrdm Starting component `dm`: /home/tidb/.tiup/components/dm/v1.6.1/tiup-dm display lhrdm Cluster type: dm Cluster name: lhrdm Cluster version: v5.4.0-nightly-20211113 Deploy user: tidb SSH type: builtin ID Role Host Ports OS/Arch Status Data Dir Deploy Dir -- ---- ---- ----- ------- ------ -------- ---------- 172.16.6.11:9093 alertmanager 172.16.6.11 9093/9094 linux/x86_64 Up /home/tidb/dm/data/alertmanager-9093 /home/tidb/dm/deploy/alertmanager-9093 172.16.6.11:8261 dm-master 172.16.6.11 8261/8291 linux/x86_64 Healthy /home/tidb/dm/data/dm-master-8261 /home/tidb/dm/deploy/dm-master-8261 172.16.6.12:8261 dm-master 172.16.6.12 8261/8291 linux/x86_64 Healthy /home/tidb/dm/data/dm-master-8261 /home/tidb/dm/deploy/dm-master-8261 172.16.6.13:8261 dm-master 172.16.6.13 8261/8291 linux/x86_64 Healthy|L /home/tidb/dm/data/dm-master-8261 /home/tidb/dm/deploy/dm-master-8261 172.16.6.11:8262 dm-worker 172.16.6.11 8262 linux/x86_64 Free /home/tidb/dm/data/dm-worker-8262 /home/tidb/dm/deploy/dm-worker-8262 172.16.6.12:8262 dm-worker 172.16.6.12 8262 linux/x86_64 Free /home/tidb/dm/data/dm-worker-8262 /home/tidb/dm/deploy/dm-worker-8262 172.16.6.13:8262 dm-worker 172.16.6.13 8262 linux/x86_64 Free /home/tidb/dm/data/dm-worker-8262 /home/tidb/dm/deploy/dm-worker-8262 172.16.6.11:3000 grafana 172.16.6.11 3000 linux/x86_64 Up - /home/tidb/dm/deploy/grafana-3000 172.16.6.11:9090 prometheus 172.16.6.11 9090 linux/x86_64 Up /home/tidb/dm/data/prometheus-9090 /home/tidb/dm/deploy/prometheus-9090 Total nodes: 9 [tidb@lhrtidbmonitor ~]$ [tidb@lhrtidbmonitor ~]$ tiup dmctl:v5.4.0-nightly-20211113 The component `dmctl` version v5.4.0-nightly-20211113 is not installed; downloading from repository. 
download https://tiup-mirrors.pingcap.com/dmctl-v5.4.0-nightly-20211113-linux-amd64.tar.gz 29.12 MiB / 29.12 MiB 100.00% 2.63 MiB/s Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v5.4.0-nightly-20211113/dmctl/dmctl Error: --master-addr not provided, this parameter is required when interacting with the dm-master, you can also use environment variable 'DM_MASTER_ADDR' to specify the value. Use `dmctl --help` to see more help messages Error: run `/home/tidb/.tiup/components/dmctl/v5.4.0-nightly-20211113/dmctl/dmctl` (wd:/home/tidb/.tiup/data/Sok7mlU) failed: exit status 1 [tidb@lhrtidbmonitor ~]$ [tidb@lhrtidbmonitor dmctl]$ pwd /home/tidb/.tiup/components/dmctl/v5.4.0-nightly-20211113/dmctl [tidb@lhrtidbmonitor dmctl]$ ll total 77456 drwxr-xr-x 2 tidb tidb 4096 Nov 14 21:00 conf -rwxr-xr-x 1 tidb tidb 79299946 Nov 14 21:00 dmctl drwxr-xr-x 2 tidb tidb 4096 Nov 14 21:00 scripts |
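As the error message shows, dmctl needs the address of a dm-master. A follow-up invocation along these lines should work (a sketch; `list-member` is one of the standard dmctl subcommands):

```
tiup dmctl:v5.4.0-nightly-20211113 --master-addr 172.16.6.11:8261 list-member
```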