Quickly Building a GBase 8a V95 Cluster in Docker by Simulating Multiple Hosts
Tags: Docker, Docker environment, GBase, GBase 8a, environment setup, cluster deployment, high availability
Cluster Node Environment

IP | Role | OS | Hostname |
---|---|---|---|
172.72.3.40 | Management node, data node, primary node | CentOS Linux release 7.6.1810 (Core) | gbase8a_1 |
172.72.3.41 | Management node, data node | CentOS Linux release 7.6.1810 (Core) | gbase8a_2 |
172.72.3.42 | Management node, data node | CentOS Linux release 7.6.1810 (Core) | gbase8a_3 |
- OS requirements: Red Hat 7.x (or CentOS 7.x). When installing the OS, it is recommended to check the "Development Tools" option under "Server with GUI" in "Software Selection".
- Hardware requirements: at least 2 GB of memory (4 GB recommended), at least 20 GB of disk, and a fixed IP address.
- Network requirements: all node IPs on the same subnet and reachable from one another; the SSH service enabled; the firewall and SELinux disabled.
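The SELinux requirement above can be scripted. The sketch below demonstrates the persistent config change on a scratch copy of the file, so it is safe to dry-run; on a real node you would point SELINUX_CONF at /etc/selinux/config, and also run `systemctl stop firewalld && systemctl disable firewalld` plus `setenforce 0` for the current boot.

```shell
# Demonstrated on a scratch copy; on a real node set SELINUX_CONF=/etc/selinux/config.
SELINUX_CONF="${SELINUX_CONF:-/tmp/selinux_config_demo}"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$SELINUX_CONF"

# Persist the change across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$SELINUX_CONF"
grep '^SELINUX=' "$SELINUX_CONF"
```

After a real edit of /etc/selinux/config, the change takes effect at the next reboot; `setenforce 0` covers the current session.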
Create the Docker network and the three containers, copy the installation package to the primary node, and install the dependency rpm packages inside each container:

```
# Create the Docker network
docker network create --subnet=172.72.0.0/16 lhrnw

docker rm -f gbase8a_1
docker run -itd --name gbase8a_1 -h gbase8a_1 \
  --net=lhrnw --ip 172.72.3.40 \
  -p 63330:5432 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='gbase8a_1:172.72.3.40' \
  --add-host='gbase8a_2:172.72.3.41' \
  --add-host='gbase8a_3:172.72.3.42' \
  lhrbest/lhrcentos76:9.2 \
  /usr/sbin/init

docker rm -f gbase8a_2
docker run -itd --name gbase8a_2 -h gbase8a_2 \
  --net=lhrnw --ip 172.72.3.41 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='gbase8a_1:172.72.3.40' \
  --add-host='gbase8a_2:172.72.3.41' \
  --add-host='gbase8a_3:172.72.3.42' \
  lhrbest/lhrcentos76:9.2 \
  /usr/sbin/init

docker rm -f gbase8a_3
docker run -itd --name gbase8a_3 -h gbase8a_3 \
  --net=lhrnw --ip 172.72.3.42 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='gbase8a_1:172.72.3.40' \
  --add-host='gbase8a_2:172.72.3.41' \
  --add-host='gbase8a_3:172.72.3.42' \
  lhrbest/lhrcentos76:9.2 \
  /usr/sbin/init

# Copy the installation package to the primary node
docker cp GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2 gbase8a_1:/soft/

# Install the dependency packages (on all three nodes)
yum install -y pcre krb5-libs libdb glibc keyutils-libs libidn libuuid ncurses-libs libgpg-error libgomp libstdc++ libcom_err libgcc python-libs libselinux libgcrypt nss-softokn-freebl
```
Cluster Installation
1. Create the DBA user on all cluster nodes

```
useradd gbase
echo "gbase:lhr" | chpasswd
echo "gbase ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
```
2. Create the installation directories on all cluster nodes and grant ownership

```
mkdir -p /opt/gbase
mkdir -p /opt/gcinstall/
chown gbase:gbase /opt/gbase
chown gbase:gbase /opt/gcinstall
chown gbase:gbase /tmp
```
3. Extract the installation package

```
mv /soft/GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2 /opt/
cd /opt
tar xfj GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2
```

After extraction, the gcinstall installation directory appears under /opt.
4. Set environment variables

1) Copy the environment setup script (SetSysEnv.py) from the primary node to the two other nodes:

```
scp /opt/gcinstall/SetSysEnv.py root@172.72.3.41:/opt/gcinstall/
scp /opt/gcinstall/SetSysEnv.py root@172.72.3.42:/opt/gcinstall/
```

2) Run SetSysEnv.py to configure the installation environment (on all three nodes):

```
cd /opt/gcinstall/
python /opt/gcinstall/SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup
```

If you hit the error "IPV6 protocol not supported, please turn it on…", enable IPv6 manually:

```
echo "net.ipv6.conf.all.disable_ipv6 = 0" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 0" >> /etc/sysctl.conf
sysctl -p
```

The log for this step is /tmp/SetSysEnv.log.
5. Edit the installation configuration file (demo.options) on the primary node

Switch to the gbase user:

```
su - gbase
cat > /opt/gcinstall/demo.options <<"EOF"
installPrefix = /opt/gbase
coordinateHost = 172.72.3.40,172.72.3.41,172.72.3.42
coordinateHostNodeID = 40,41,42
dataHost = 172.72.3.40,172.72.3.41,172.72.3.42
#existCoordinateHost =
#existDataHost =
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'lhr'
rootPwd = 'lhr'
#rootPwdFile = rootPwd.json
EOF
```

- dbaPwd is the password of the gbase OS account.
- rootPwd is the password of the root OS account.
6. Run the installation script as the gbase user

```
su - gbase
cd /opt/gcinstall
./gcinstall.py --silent=/opt/gcinstall/demo.options
```

Installation output:

```
172.72.3.42  install cluster on host 172.72.3.42 successfully.
172.72.3.41  install cluster on host 172.72.3.41 successfully.
172.72.3.40  install cluster on host 172.72.3.40 successfully.
Starting all gcluster nodes...
start service failed on host 172.72.3.41.
start service failed on host 172.72.3.42.
start service failed on host 172.72.3.40.
adding new datanodes to gcware...
InstallCluster Successfully.
```

The log is /opt/gcinstall/gcinstall.log.
Note: the installer runs an environment check first, which may fail and list the names of missing rpm dependency packages. This means the OS lacks required rpm packages; install them by name on each node. For the full list of packages 8a requires, see the dependRpms file in the gcinstall directory:

```
[gbase@gbase8a_1 gcinstall]$ cat dependRpms
pcre
krb5-libs
libdb
glibc
keyutils-libs
libidn
libuuid
ncurses-libs
libgpg-error
libgomp
libstdc++
libcom_err
libgcc
python-libs
libselinux
libgcrypt
nss-softokn-freebl
```
7. Check the cluster state

After installation finishes, check the cluster state. Because no license has been registered yet, it is normal for the gcluster and gnode services to show CLOSE:

```
[gbase@gbase8a_1 ~]$ gcadmin
CLUSTER STATE:         ACTIVE

==============================================================
|           GBASE COORDINATOR CLUSTER INFORMATION            |
==============================================================
|   NodeName   |  IpAddress  | gcware | gcluster | DataState |
--------------------------------------------------------------
| coordinator1 | 172.72.3.40 |  OPEN  |  CLOSE   |     0     |
--------------------------------------------------------------
| coordinator2 | 172.72.3.41 |  OPEN  |  CLOSE   |     0     |
--------------------------------------------------------------
| coordinator3 | 172.72.3.42 |  OPEN  |  CLOSE   |     0     |
--------------------------------------------------------------
============================================================
|       GBASE CLUSTER FREE DATA NODE INFORMATION           |
============================================================
| NodeName  |  IpAddress  | gnode | syncserver | DataState |
------------------------------------------------------------
| FreeNode1 | 172.72.3.40 | CLOSE |    OPEN    |     0     |
------------------------------------------------------------
| FreeNode2 | 172.72.3.41 | CLOSE |    OPEN    |     0     |
------------------------------------------------------------
| FreeNode3 | 172.72.3.42 | CLOSE |    OPEN    |     0     |
------------------------------------------------------------

0 virtual cluster
3 coordinator node
3 free data node
```
8. Apply for a license

Note: if you are using the cloud server we provide, the license file (*.lic) under /opt/ can be used directly; skip this step.

① Export the fingerprint information of all cluster nodes:

```
cd /opt/gcinstall
./gethostsid -n 172.72.3.40,172.72.3.41,172.72.3.42 -u root -p lhr -f /tmp/finger.txt
```

Result:

```
[gbase@gbase8a_1 gcinstall]$ ./gethostsid -n 172.72.3.40,172.72.3.41,172.72.3.42 -u root -p lhr -f /tmp/finger.txt
======================================================================
Successful node nums: 3
======================================================================
[gbase@gbase8a_1 gcinstall]$ more /tmp/finger.txt
{"HWADDR":"03:42:AC:48:03:28","SOCKETS":4,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"85","THREADS":16,"CPUS":16,"NNNODES":2,"CONFUSE DATA":"FrFyhd=By,Q#eYW"}
{"HWADDR":"03:42:AC:48:03:29","SOCKETS":4,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"85","THREADS":16,"CPUS":16,"NNNODES":2,"CONFUSE DATA":"FrFyhd=By,Q#eYW"}
{"HWADDR":"03:42:AC:48:03:2A","SOCKETS":4,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"85","THREADS":16,"CPUS":16,"NNNODES":2,"CONFUSE DATA":"FrFyhd=By,Q#eYW"}
```
② Apply for the license

- Send an email to license@gbase.cn, cc shenliping@gbase.cn, with the fingerprint file finger.txt attached.

Email subject: GBase 8a MPP Cluster v95 license application

Email body:

```
Customer name: your organization's full name
Project name: GBase 8a MPP Cluster GDCA certification training, month X of 2022
Applicant: your name
Reason for application: training practice
Validity: 3 months
OS name and version: CentOS Linux release 7.6.1810 (Core)
8a cluster version: GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2
```

③ License applications are processed on working days at 9:00, 13:30, and 17:30. After receiving the license file (e.g. 20210608.lic), upload it to /tmp on the primary node.
9. Import and check the license

① Import the license:

```
cd /opt/gcinstall
./License -n 172.72.3.40,172.72.3.41,172.72.3.42 -f /tmp/20230416-09.lic -u gbase -p lhr
```

② Check the import result:

```
./chkLicense -n 172.72.3.40,172.72.3.41,172.72.3.42 -u gbase -p lhr
```

Example:

```
[gbase@gbase8a_1 gcinstall]$ ./License -n 172.72.3.40,172.72.3.41,172.72.3.42 -f /tmp/20230416-09.lic -u gbase -p lhr
======================================================================
Successful node nums: 3
======================================================================
[gbase@gbase8a_1 gcinstall]$ ./chkLicense -n 172.72.3.40,172.72.3.41,172.72.3.42 -u gbase -p lhr
======================================================================
172.72.3.42
is_exist:yes
version:trial
expire_time:20230716
is_valid:yes
======================================================================
172.72.3.41
is_exist:yes
version:trial
expire_time:20230716
is_valid:yes
======================================================================
172.72.3.40
is_exist:yes
version:trial
expire_time:20230716
is_valid:yes
```
License field descriptions:

- is_exist: whether the license file exists (yes = exists, no = missing);
- version: license type (trial = trial version, business = commercial version);
- expire_time: expiry date of a trial license, shown only when checking a trial license;
- is_valid: whether the license is valid (yes = valid, no = invalid).

1. A CPU change, a memory change (change in total memory size, etc.), a MAC address change (NIC replaced, number of NICs changed, etc.), or license expiry all invalidate the license. Deleting the license file after a successful import does not invalidate it.
2. If the license becomes invalid (is_valid: no), likely because of a hardware change on a cluster node, regenerate the fingerprint file and apply for a new license by email.
10. Start all cluster services on all nodes

1) Run the following on all three nodes to start the cluster services:

```
su - gbase
gcluster_services all start
```

2) After all three nodes are done, check the cluster state again:

```
[gbase@gbase8a_1 ~]$ gcadmin
CLUSTER STATE:         ACTIVE
VIRTUAL CLUSTER MODE:  NORMAL

==============================================================
|           GBASE COORDINATOR CLUSTER INFORMATION            |
==============================================================
|   NodeName   |  IpAddress  | gcware | gcluster | DataState |
--------------------------------------------------------------
| coordinator1 | 172.72.3.40 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
| coordinator2 | 172.72.3.41 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
| coordinator3 | 172.72.3.42 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
============================================================
|       GBASE CLUSTER FREE DATA NODE INFORMATION           |
============================================================
| NodeName  |  IpAddress  | gnode | syncserver | DataState |
------------------------------------------------------------
| FreeNode1 | 172.72.3.40 | OPEN  |    OPEN    |     0     |
------------------------------------------------------------
| FreeNode2 | 172.72.3.41 | OPEN  |    OPEN    |     0     |
------------------------------------------------------------
| FreeNode3 | 172.72.3.42 | OPEN  |    OPEN    |     0     |
------------------------------------------------------------

0 virtual cluster
3 coordinator node
3 free data node
```
11. Set up sharding (create the distribution)

```
gcadmin distribution gcChangeInfo.xml p 2 d 1 pattern 1
```

The gcChangeInfo.xml file generated under gcinstall:

```
<?xml version="1.0" encoding="utf-8"?>
<servers>
  <rack>
    <node ip="172.72.3.40"/>
    <node ip="172.72.3.41"/>
    <node ip="172.72.3.42"/>
  </rack>
</servers>
```

Check the cluster state again:

```
[gbase@gbase8a_1 gcinstall]$ gcadmin
CLUSTER STATE:         ACTIVE
VIRTUAL CLUSTER MODE:  NORMAL

==============================================================
|           GBASE COORDINATOR CLUSTER INFORMATION            |
==============================================================
|   NodeName   |  IpAddress  | gcware | gcluster | DataState |
--------------------------------------------------------------
| coordinator1 | 172.72.3.40 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
| coordinator2 | 172.72.3.41 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
| coordinator3 | 172.72.3.42 |  OPEN  |   OPEN   |     0     |
--------------------------------------------------------------
===========================================================================
|                     GBASE DATA CLUSTER INFORMATION                      |
===========================================================================
| NodeName |  IpAddress  | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------
|  node1   | 172.72.3.40 |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------
|  node2   | 172.72.3.41 |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------
|  node3   | 172.72.3.42 |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------
```

You can also view the distribution with:

```
[gbase@gbase8a_1 gcinstall]$ gcadmin showdistribution node

Distribution ID: 1 | State: new | Total segment num: 6

============================================================
|  nodes     | 172.72.3.40 | 172.72.3.41 | 172.72.3.42 |
------------------------------------------------------------
| primary    |      1      |      2      |      3      |
| segments   |      4      |      5      |      6      |
------------------------------------------------------------
| duplicate  |      3      |      1      |      2      |
| segments 1 |      5      |      6      |      4      |
============================================================
```
12. Initialize the database

Run the following on a management node (the database root password is empty by default):

```
gccli -u root -p
gbase> initnodedatamap;
```
13. Create a database and table

If initialization succeeds, the whole 8a cluster installation is complete and you can create your first database and table:

```
create database test;
show databases;
use test;
create table t(id int, name varchar(20));
show tables;
```

Example:
```
[gbase@gbase8a_1 gcinstall]$ gccli -u root -p
Enter password:
GBase client 9.5.2.39.126761. Copyright (c) 2004-2023, GBase.  All Rights Reserved.

gbase> initnodedatamap;
Query OK, 0 rows affected (Elapsed: 00:00:00.74)

gbase> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| performance_schema |
| gbase              |
| gctmpdb            |
| gclusterdb         |
+--------------------+
5 rows in set (Elapsed: 00:00:00.00)

gbase> create database test;
Query OK, 1 row affected (Elapsed: 00:00:00.06)

gbase> use test;
Query OK, 0 rows affected (Elapsed: 00:00:00.00)

gbase> create table t(id int ,name varchar(20));
Query OK, 0 rows affected (Elapsed: 00:00:00.13)

gbase> show tables;
+----------------+
| Tables_in_test |
+----------------+
| t              |
+----------------+
1 row in set (Elapsed: 00:00:00.00)

gbase> show variables like 'port';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port          | 5258  |
+---------------+-------+
1 row in set (Elapsed: 00:00:00.00)

gbase> select * from gbase.user;
... (wide output trimmed: two rows, users root and gbase, both with all privileges granted and gbase_native_password as the plugin) ...
2 rows in set (Elapsed: 00:00:00.00)

gbase> SET PASSWORD FOR gbase = PASSWORD('');
```
Remote Login

```
# Set the gbase password to empty
SET PASSWORD FOR gbase = PASSWORD('');

# Command-line login
gbase -u root -p -h172.72.3.40 -P5258
gbase -u gbase -p -h172.72.3.40 -P5258
gccli -u gbase -p -h172.72.3.40 -P5258
gccli -u root -p

# With the MySQL client, use version 5.7; otherwise it fails with:
# ERROR 1043 (08S01): Bad handshake
mysql -uroot -p -h172.72.3.40 -P5258
```
Cluster Uninstallation

1. Run on all nodes:

```
gcluster_services all stop
```

2. Run the uninstall command on the primary node:

```
cd /opt/gcinstall
./unInstall.py --silent=demo.options
```
Common Problems

Q01. After a successful installation, running gcadmin on a management node reports "command not found"

- Cause: the environment variables have not taken effect.
- Fix: log the OS user out and back in:

```
$ exit
$ su - gbase
```
Q02. Running gcadmin on a management node reports:

```
Could not initialize CRM instance error: [122]->[can not connect to any server]
```

- Cause: the cluster services are not started on any node.
- Fix: start them on every node:

```
[40]$ gcluster_services all start
[41]$ gcluster_services all start
[42]$ gcluster_services all start
```
Q03. SSH port 22 is blocked; can the 8a cluster still be installed?

- Fix:
- Edit the SSH daemon configuration file:

```
[40]# cd /etc/ssh
[40]# vi sshd_config
```

For example, change the value of "Port" in the file to 10022.

- Restart the SSH service:

```
[40]# service sshd restart
```

- Check that SSH now listens on 10022:

```
[40]# netstat -tunlp | grep ssh
```

- Stop the cluster services on all management nodes:

```
[40]$ gcluster_services all stop
[41]$ gcluster_services all stop
[42]$ gcluster_services all stop
```

- In the gcware configuration file $GCWARE_BASE/config/gcware.conf on all management nodes, change node_ssh_port: 22 to the new port.
- Restart the cluster services on all management nodes:

```
[40]$ gcluster_services all start
[41]$ gcluster_services all start
[42]$ gcluster_services all start
```
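The gcware.conf edit in the steps above can be scripted with sed. The sketch below works on a scratch copy so it is safe to dry-run; on a real management node the target is $GCWARE_BASE/config/gcware.conf, and the exact `node_ssh_port: 22` line format is an assumption based on the entry quoted above.

```shell
# Scratch copy standing in for $GCWARE_BASE/config/gcware.conf
CONF="${CONF:-/tmp/gcware_conf_demo}"
printf 'node_ssh_port: 22\n' > "$CONF"

# Point gcware at the new SSH port (10022 in this example)
sed -i 's/^node_ssh_port:.*/node_ssh_port: 10022/' "$CONF"
cat "$CONF"
```

Run the same sed on each management node before restarting the cluster services.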
Q04. Viewing the list of rpm packages the 8a cluster depends on

```
$ cat /opt/gcinstall/dependRpms
pcre
krb5-libs
libdb
glibc
keyutils-libs
libidn
libuuid
ncurses-libs
libgpg-error
libgomp
libstdc++
libcom_err
libgcc
python-libs
libselinux
libgcrypt
nss-softokn-freebl
```

If the installation script reports missing rpm dependency packages, install the missing packages from this list on each node.
Q05. Error: gcinstall.py(line 2604) -- SetSysEnv.py must be executed before cluster is installed,not executed nodes are
Running the script:

```
./gcinstall.py --silent=/opt/gcinstall/demo.options
```

fails during cluster installation with:

```
Error: gcinstall.py(line 2604) -- SetSysEnv.py must be executed before cluster is installed,not executed nodes are
172.72.3.42 172.72.3.41 172.72.3.40
```

Cause: SetSysEnv.py must first be run on every node:

```
python /opt/gcinstall/SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup
```

The log for this step is /tmp/SetSysEnv.log.
However, in my Docker container environment the error persisted even after running the script. The cause is that the following kernel parameters do not exist in the container:

```
2023-04-15 17:15:23,861-root-ERROR sysctl: cannot stat /proc/sys/net/core/netdev_max_backlog: No such file or directory
sysctl: cannot stat /proc/sys/net/core/rmem_default: No such file or directory
sysctl: cannot stat /proc/sys/net/core/rmem_max: No such file or directory
sysctl: cannot stat /proc/sys/net/core/wmem_default: No such file or directory
sysctl: cannot stat /proc/sys/net/core/wmem_max: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_max_syn_backlog: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_rmem: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_sack: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/ip_local_reserved_ports: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_syncookies: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_window_scaling: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv4/tcp_wmem: No such file or directory
```

To check:

```
cat /proc/sys/net/core/netdev_max_backlog
cat /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_default
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_sack
cat /proc/sys/net/ipv4/ip_local_reserved_ports
cat /proc/sys/net/ipv4/tcp_syncookies
cat /proc/sys/net/ipv4/tcp_window_scaling
cat /proc/sys/net/ipv4/tcp_wmem
```
Fix: edit /opt/gcinstall/gcinstall.py and comment out all code involving "SetSysEnv.py must be executed before cluster is installed", then rerun the installation.
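One way to comment out those lines is with sed; the sketch below shows the idea on a scratch file whose content is purely illustrative, since the actual line layout inside gcinstall.py differs. Review the matches (and keep a backup) before touching the real /opt/gcinstall/gcinstall.py, because blindly commenting lines can break Python block structure.

```shell
# Scratch file standing in for /opt/gcinstall/gcinstall.py; content is illustrative only.
F="${F:-/tmp/gcinstall_demo.py}"
printf 'do_install()\nraise Exception("SetSysEnv.py must be executed before cluster is installed")\n' > "$F"

# Prefix every line containing the check's message with a Python comment marker
sed -i '/SetSysEnv.py must be executed/s/^/#/' "$F"
grep '^#' "$F"
```

A `grep -n 'SetSysEnv.py must be executed' gcinstall.py` beforehand shows exactly which lines would be affected.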
Default Cluster Ports

Component | Default Port | Protocol | Description |
---|---|---|---|
Gcluster | 5258 | TCP | Service port of coordinator cluster nodes |
Gnode | 5050 | TCP | Service port of data cluster nodes |
Gcware | 5918 | TCP/UDP | Communication port between gcware nodes |
gcware | 5919 | TCP | Port for external connections to gcware nodes |
Recover_monit_port | 6268 | TCP | Monitoring information collection port |
syncServer | 5288 | TCP | syncServer service port |
GcrecoverMonit | 6268 | TCP | Gcrecover service port |
Remote data export | 16066-16166 | TCP | Port range for remote data export |
```
[root@gbase8a_1 gcinstall]# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address        Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:22           0.0.0.0:*        LISTEN  313/sshd
tcp        0      0 0.0.0.0:6268         0.0.0.0:*        LISTEN  25384/gcrecover
tcp        0      0 172.72.3.40:5918     0.0.0.0:*        LISTEN  25157/gcware
tcp        0      0 172.72.3.40:5919     0.0.0.0:*        LISTEN  25157/gcware
tcp        0      0 127.0.0.11:44498     0.0.0.0:*        LISTEN  -
tcp6       0      0 :::22                :::*             LISTEN  313/sshd
tcp6       0      0 127.0.0.1:3350       :::*             LISTEN  162/xrdp-sesman
tcp6       0      0 :::3389              :::*             LISTEN  163/xrdp
tcp6       0      0 :::5288              :::*             LISTEN  25620/gc_sync_serve
udp        0      0 127.0.0.11:37430     0.0.0.0:*                -
udp        0      0 172.72.3.40:5918     0.0.0.0:*                25157/gcware
udp        0      0 0.0.0.0:41260        0.0.0.0:*                25157/gcware
[root@gbase8a_1 gcinstall]#
```