Installing a TiDB cluster on a single host (non-TiUP Playground approach)
Reference: https://docs.pingcap.com/zh/tidb/stable/quick-start-with-tidb#Linux
Other related posts:
- 【DB宝54】An introduction to the NewSQL database TiDB: https://www.xmmup.com/dbbao54newsqlshujukuzhitidbjianjie.html
- 【DB宝57】Quickly deploying a TiDB cluster environment with Docker Compose: https://www.xmmup.com/dbbao57shiyongdocker-composekuaisubushutidbjiqunhuanjing.html
- Quickly deploying a TiDB hands-on environment with TiUP (installing a TiDB cluster on a single host using TiUP Playground): https://www.xmmup.com/shiyongtiupkuaisubushutidbshangshouhuanjing.html
Applicable scenario: you have a single Linux server and want to experience a TiDB cluster with the smallest complete topology, while simulating the deployment steps used in a production environment.
```bash
docker rm -f lhrtidb53
docker run -d --name lhrtidb53 -h lhrtidb53 \
  -p 54000-54001:4000-4001 -p 52379-52385:2379-2385 -p 59090:9090 -p 53000:3000 -p 53389:3389 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true lhrbest/lhrcentos76:8.5 \
  /usr/sbin/init

docker exec -it lhrtidb53 bash

# Install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source /root/.bash_profile
echo "export PATH=/root/.tiup/bin:$PATH" >> /root/.bashrc

# Install the cluster component and upgrade it to the latest version
tiup cluster
tiup update --self && tiup update cluster

# Because multi-machine deployment is simulated on a single host, raise the sshd session limit as root
echo 'MaxSessions=40' >> /etc/ssh/sshd_config
systemctl restart sshd

# Create the deployment user
useradd tidb
echo "tidb:lhr" | chpasswd
```
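Before moving on, it can help to confirm that TiUP landed on the PATH inside the container and that the cluster component is available. A minimal check, using only standard TiUP commands (exact output depends on the installed version):

```bash
# Reload the profile written by install.sh so that tiup is on the PATH
source /root/.bash_profile

# Confirm the tiup binary and its version
which tiup
tiup --version

# Confirm which components are installed locally (the cluster component is pulled on first use)
tiup list --installed
```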
Configuration (1 TiDB, 1 PD, 1 TiFlash, 1 monitoring node, 3 TiKV):
```bash
cat > /tmp/topo.yaml <<"EOF"
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
    oom-action: log
  tikv:
    log-level: warning
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 127.0.0.1

tidb_servers:
  - host: 127.0.0.1

tikv_servers:
  - host: 127.0.0.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 127.0.0.1
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 127.0.0.1
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 127.0.0.1

monitoring_servers:
  - host: 127.0.0.1

grafana_servers:
  - host: 127.0.0.1
EOF
```
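Before deploying, TiUP can run its environment checks against the topology file. A sketch, reusing the same /tmp/topo.yaml and root SSH password prompt as the deploy command below; on a single-host lab like this, some checks (swap, CPU governor, disk type) are expected to fail and can usually be ignored or auto-repaired:

```bash
# Run pre-deployment checks against the topology (prompts for the root SSH password)
tiup cluster check /tmp/topo.yaml --user root -p

# Optionally let TiUP try to repair the failed items it knows how to fix
tiup cluster check /tmp/topo.yaml --user root -p --apply
```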
Configuration (2 TiDB, 3 PD, 1 TiFlash, 1 monitoring node, 3 TiKV):
```bash
cat > /tmp/topo.yaml <<"EOF"
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
    oom-action: log
  tikv:
    log-level: warning
    storage.reserve-space: 0
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 127.0.0.1
    client_port: 2379
    peer_port: 2380
  - host: 127.0.0.1
    client_port: 2381
    peer_port: 2382
  - host: 127.0.0.1
    client_port: 2383
    peer_port: 2384

tidb_servers:
  - host: 127.0.0.1
    port: 4000
    status_port: 10080
  - host: 127.0.0.1
    port: 4001
    status_port: 10081

tikv_servers:
  - host: 127.0.0.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 127.0.0.1
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 127.0.0.1
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 127.0.0.1

monitoring_servers:
  - host: 127.0.0.1

grafana_servers:
  - host: 127.0.0.1
EOF
```
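The two topologies only differ in the number of PD and TiDB instances; because every instance binds to 127.0.0.1, each one must be given unique ports and directories. If you want a fully commented topology to start from, recent tiup-cluster versions can print a template (a sketch; availability of this subcommand depends on your TiUP version):

```bash
# Print a commented topology template and save it for editing
tiup cluster template > /tmp/full-topo.yaml

# Review and adjust it, then deploy it the same way as /tmp/topo.yaml
less /tmp/full-topo.yaml
```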
Start the deployment:
```bash
tiup cluster deploy lhrtidb v5.3.0 /tmp/topo.yaml --user root -p
```
Deployment output:
```
[root@lhrtidb53 ~]# tiup cluster deploy lhrtidb v5.3.0 /tmp/topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.8.1/tiup-cluster deploy lhrtidb v5.3.0 /tmp/topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch
  - Detecting node 127.0.0.1 ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    lhrtidb
Cluster version: v5.3.0
Role        Host       Ports                            OS/Arch       Directories
----        ----       -----                            -------       -----------
pd          127.0.0.1  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd          127.0.0.1  2381/2382                        linux/x86_64  /tidb-deploy/pd-2381,/tidb-data/pd-2381
pd          127.0.0.1  2383/2384                        linux/x86_64  /tidb-deploy/pd-2383,/tidb-data/pd-2383
tikv        127.0.0.1  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        127.0.0.1  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        127.0.0.1  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        127.0.0.1  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tidb        127.0.0.1  4001/10081                       linux/x86_64  /tidb-deploy/tidb-4001
tiflash     127.0.0.1  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  127.0.0.1  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     127.0.0.1  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v5.3.0 (linux/amd64) ... Done
  - Download tikv:v5.3.0 (linux/amd64) ... Done
  - Download tidb:v5.3.0 (linux/amd64) ... Done
  - Download tiflash:v5.3.0 (linux/amd64) ... Done
  - Download prometheus:v5.3.0 (linux/amd64) ... Done
  - Download grafana:v5.3.0 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 127.0.0.1:22 ... Done
+ Copy files
  - Copy pd -> 127.0.0.1 ... Done
  - Copy pd -> 127.0.0.1 ... Done
  - Copy pd -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Copy tiflash -> 127.0.0.1 ... Done
  - Copy prometheus -> 127.0.0.1 ... Done
  - Copy grafana -> 127.0.0.1 ... Done
  - Copy node_exporter -> 127.0.0.1 ... Done
  - Copy blackbox_exporter -> 127.0.0.1 ... Done
+ Check status
Enabling component pd
        Enabling instance 127.0.0.1:2383
        Enabling instance 127.0.0.1:2381
        Enabling instance 127.0.0.1:2379
        Enable instance 127.0.0.1:2379 success
        Enable instance 127.0.0.1:2383 success
        Enable instance 127.0.0.1:2381 success
Enabling component tikv
        Enabling instance 127.0.0.1:20162
        Enabling instance 127.0.0.1:20161
        Enabling instance 127.0.0.1:20160
        Enable instance 127.0.0.1:20160 success
        Enable instance 127.0.0.1:20162 success
        Enable instance 127.0.0.1:20161 success
Enabling component tidb
        Enabling instance 127.0.0.1:4001
        Enabling instance 127.0.0.1:4000
        Enable instance 127.0.0.1:4001 success
        Enable instance 127.0.0.1:4000 success
Enabling component tiflash
        Enabling instance 127.0.0.1:9000
        Enable instance 127.0.0.1:9000 success
Enabling component prometheus
        Enabling instance 127.0.0.1:9090
        Enable instance 127.0.0.1:9090 success
Enabling component grafana
        Enabling instance 127.0.0.1:3000
        Enable instance 127.0.0.1:3000 success
Enabling component node_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
Enabling component blackbox_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
Cluster `lhrtidb` deployed successfully, you can start it with command: `tiup cluster start lhrtidb`
```
Start the cluster:
```
[root@lhrtidb53 ~]# tiup cluster start lhrtidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.8.1/tiup-cluster start lhrtidb
Starting cluster lhrtidb...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 127.0.0.1:2379
        Starting instance 127.0.0.1:2383
        Starting instance 127.0.0.1:2381
        Start instance 127.0.0.1:2379 success
        Start instance 127.0.0.1:2383 success
        Start instance 127.0.0.1:2381 success
Starting component tikv
        Starting instance 127.0.0.1:20162
        Starting instance 127.0.0.1:20160
        Starting instance 127.0.0.1:20161
        Start instance 127.0.0.1:20161 success
        Start instance 127.0.0.1:20160 success
        Start instance 127.0.0.1:20162 success
Starting component tidb
        Starting instance 127.0.0.1:4001
        Starting instance 127.0.0.1:4000
        Start instance 127.0.0.1:4001 success
        Start instance 127.0.0.1:4000 success
Starting component tiflash
        Starting instance 127.0.0.1:9000
        Start instance 127.0.0.1:9000 success
Starting component prometheus
        Starting instance 127.0.0.1:9090
        Start instance 127.0.0.1:9090 success
Starting component grafana
        Starting instance 127.0.0.1:3000
        Start instance 127.0.0.1:3000 success
Starting component node_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
Starting component blackbox_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
+ [ Serial ] - UpdateTopology: cluster=lhrtidb
Started cluster `lhrtidb` successfully
[root@lhrtidb53 ~]#
```
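Once the cluster is up, the same `tiup cluster` subcommands manage its lifecycle. A short sketch of commonly used operations against the cluster name used above (all of these are standard tiup-cluster subcommands; flags may vary slightly across versions):

```bash
# Stop / start / restart the whole cluster
tiup cluster stop lhrtidb
tiup cluster start lhrtidb
tiup cluster restart lhrtidb

# Restrict an operation to one role or one node
tiup cluster restart lhrtidb -R tidb
tiup cluster restart lhrtidb -N 127.0.0.1:20160

# Edit the cluster configuration, then push it out with a rolling restart
tiup cluster edit-config lhrtidb
tiup cluster reload lhrtidb
```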
Other operations:
```
# Check cluster status
tiup cluster list
tiup cluster display lhrtidb

# List installed components
tiup list --installed

# Connect to TiDB with TiUP
tiup client

# Or connect with a MySQL client
yum install -y mariadb mariadb-libs mariadb-devel
mysql --host 172.17.0.15 --port 4000 -u root
mysql -uroot -P 44000 -h192.168.66.35

select tidb_version();
select version();
select STORE_ID,ADDRESS,STORE_STATE,STORE_STATE_NAME,CAPACITY,AVAILABLE,UPTIME from INFORMATION_SCHEMA.TIKV_STORE_STATUS;
show config where name like '%oom-action%';
show config where name like '%reserve-space%';
select * from INFORMATION_SCHEMA.cluster_info order by type,instance;

# Clean up the TiDB cluster
tiup cluster clean lhrtidb --all
rm -rf /root/.tiup/storage/cluster/clusters/lhrtidb
```
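Because the container maps 4000/4001 to 54000/54001 and 2379-2385 to 52379-52385 (see the docker run command at the top), the cluster can also be reached from the Docker host. A sketch, assuming the host IP 192.168.66.35 used elsewhere in this post and the default empty root password:

```bash
# Connect to either TiDB server through the mapped ports
mysql -h 192.168.66.35 -P 54000 -u root
mysql -h 192.168.66.35 -P 54001 -u root

# Quick sanity check without an interactive session
mysql -h 192.168.66.35 -P 54000 -u root -e "select tidb_version();"
```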
Several monitoring UIs are provided:
TiDB Dashboard: http://192.168.66.35:52379/dashboard (user root, empty password)
Grafana: http://192.168.66.35:53000 (user admin, password admin)
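These endpoints can be probed from the Docker host with curl before opening them in a browser. A sketch, again assuming the host IP 192.168.66.35 and the port mappings from the docker run command (2379 -> 52379, 3000 -> 53000, 9090 -> 59090); adjust if your mappings differ:

```bash
# PD API through the mapped client port
curl -s http://192.168.66.35:52379/pd/api/v1/members

# TiDB Dashboard (served by PD) and the Grafana login page
curl -sI http://192.168.66.35:52379/dashboard/
curl -sI http://192.168.66.35:53000/login

# Prometheus health check
curl -s http://192.168.66.35:59090/-/healthy
```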
```
[root@lhrtidb53 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.8.1/tiup-cluster list
Name     User  Version  Path                                          PrivateKey
----     ----  -------  ----                                          ----------
lhrtidb  tidb  v5.3.0   /root/.tiup/storage/cluster/clusters/lhrtidb  /root/.tiup/storage/cluster/clusters/lhrtidb/ssh/id_rsa

[root@lhrtidb53 ~]# tiup cluster display lhrtidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.8.1/tiup-cluster display lhrtidb
Cluster type:       tidb
Cluster name:       lhrtidb
Cluster version:    v5.3.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://127.0.0.1:2383/dashboard
ID               Role        Host       Ports                            OS/Arch       Status  Data Dir                    Deploy Dir
--               ----        ----       -----                            -------       ------  --------                    ----------
127.0.0.1:3000   grafana     127.0.0.1  3000                             linux/x86_64  Up      -                           /tidb-deploy/grafana-3000
127.0.0.1:2379   pd          127.0.0.1  2379/2380                        linux/x86_64  Up|L    /tidb-data/pd-2379          /tidb-deploy/pd-2379
127.0.0.1:2381   pd          127.0.0.1  2381/2382                        linux/x86_64  Up      /tidb-data/pd-2381          /tidb-deploy/pd-2381
127.0.0.1:2383   pd          127.0.0.1  2383/2384                        linux/x86_64  Up|UI   /tidb-data/pd-2383          /tidb-deploy/pd-2383
127.0.0.1:9090   prometheus  127.0.0.1  9090                             linux/x86_64  Up      /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
127.0.0.1:4000   tidb        127.0.0.1  4000/10080                       linux/x86_64  Up      -                           /tidb-deploy/tidb-4000
127.0.0.1:4001   tidb        127.0.0.1  4001/10081                       linux/x86_64  Up      -                           /tidb-deploy/tidb-4001
127.0.0.1:9000   tiflash     127.0.0.1  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
127.0.0.1:20160  tikv        127.0.0.1  20160/20180                      linux/x86_64  Up      /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
127.0.0.1:20161  tikv        127.0.0.1  20161/20181                      linux/x86_64  Up      /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
127.0.0.1:20162  tikv        127.0.0.1  20162/20182                      linux/x86_64  Up      /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 11

MySQL [(none)]> show config where name like '%reserve-space%';
No connection. Trying to reconnect...
Connection id:    5
Current database: *** NONE ***

+---------+-----------------+---------------------------------------+-------+
| Type    | Instance        | Name                                  | Value |
+---------+-----------------+---------------------------------------+-------+
| tikv    | 127.0.0.1:20161 | storage.reserve-space                 | 0KiB  |
| tikv    | 127.0.0.1:20162 | storage.reserve-space                 | 0KiB  |
| tikv    | 127.0.0.1:20160 | storage.reserve-space                 | 0KiB  |
| tiflash | 127.0.0.1:3930  | raftstore-proxy.storage.reserve-space | 1GiB  |
+---------+-----------------+---------------------------------------+-------+
4 rows in set (0.08 sec)

MySQL [(none)]> show config where name like '%oom-action%';
+------+----------------+------------+-------+
| Type | Instance       | Name       | Value |
+------+----------------+------------+-------+
| tidb | 127.0.0.1:4001 | oom-action | log   |
| tidb | 127.0.0.1:4000 | oom-action | log   |
+------+----------------+------------+-------+
2 rows in set (0.08 sec)

MySQL [(none)]> select * from INFORMATION_SCHEMA.cluster_info order by type,instance;
+---------+-----------------+-----------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| TYPE    | INSTANCE        | STATUS_ADDRESS  | VERSION | GIT_HASH                                 | START_TIME                | UPTIME          | SERVER_ID |
+---------+-----------------+-----------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| pd      | 127.0.0.1:2379  | 127.0.0.1:2379  | 5.3.0   | fe6fab9268d2d6fd34cd22edd1cf31a302e8dc5c | 2022-01-05T16:55:26+08:00 | 2m52.365322261s | 0         |
| pd      | 127.0.0.1:2381  | 127.0.0.1:2381  | 5.3.0   | fe6fab9268d2d6fd34cd22edd1cf31a302e8dc5c | 2022-01-05T16:55:26+08:00 | 2m52.365328104s | 0         |
| pd      | 127.0.0.1:2383  | 127.0.0.1:2383  | 5.3.0   | fe6fab9268d2d6fd34cd22edd1cf31a302e8dc5c | 2022-01-05T16:55:26+08:00 | 2m52.365325191s | 0         |
| tidb    | 127.0.0.1:4000  | 127.0.0.1:10080 | 5.3.0   | 4a1b2e9fe5b5afb1068c56de47adb07098d768d6 | 2022-01-05T16:56:06+08:00 | 2m12.365318533s | 0         |
| tidb    | 127.0.0.1:4001  | 127.0.0.1:10081 | 5.3.0   | 4a1b2e9fe5b5afb1068c56de47adb07098d768d6 | 2022-01-05T16:56:06+08:00 | 2m12.365307094s | 0         |
| tiflash | 127.0.0.1:3930  | 127.0.0.1:20292 | 5.3.0   | 0c9c1ff26c572134b0c80ff350b3def4a66b94b1 | 2022-01-05T16:56:16+08:00 | 2m2.365345471s  | 0         |
| tikv    | 127.0.0.1:20160 | 127.0.0.1:20180 | 5.3.0   | 6c1424706f3d5885faa668233f34c9f178302f36 | 2022-01-05T16:55:38+08:00 | 2m40.365342307s | 0         |
| tikv    | 127.0.0.1:20161 | 127.0.0.1:20181 | 5.3.0   | 6c1424706f3d5885faa668233f34c9f178302f36 | 2022-01-05T16:55:36+08:00 | 2m42.365331097s | 0         |
| tikv    | 127.0.0.1:20162 | 127.0.0.1:20182 | 5.3.0   | 6c1424706f3d5885faa668233f34c9f178302f36 | 2022-01-05T16:55:37+08:00 | 2m41.365338353s | 0         |
+---------+-----------------+-----------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
9 rows in set (0.06 sec)
```