Scaling a Greenplum System Horizontally and Vertically: A Hands-On Walkthrough
Tags: Greenplum, scale out (horizontal expansion), scale up (vertical expansion), adding nodes, expansion
Introduction
For more background and theory, see: https://www.xmmup.com/kuoronggreenplumxitongzengjiasegmentjiedian.html
With grouped mirroring (the default), at least 2 new hosts must be added!
Environment
The lab environment used below is based on: https://www.xmmup.com/mppjiagouzhigreenplumdeanzhuangpeizhigaojiban.html
docker commit mdw1 lhrbest/gpdb_mdw1:6.23.1
docker commit mdw2 lhrbest/gpdb_smdw:6.23.1
docker commit sdw1 lhrbest/gpdb_sdw1:6.23.1
docker commit sdw2 lhrbest/gpdb_sdw2:6.23.1
docker commit sdw3 lhrbest/gpdb_sdw3:6.23.1
docker commit sdw4 lhrbest/gpdb_sdw4:6.23.1

docker rm -f mdw
docker run -itd --name mdw -h mdw \
  --net=lhrnw --ip 172.72.6.50 \
  -p 64350:5432 -p 28280:28080 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_mdw1:6.23.1 \
  /usr/sbin/init

docker rm -f smdw
docker run -itd --name smdw -h smdw \
  --net=lhrnw --ip 172.72.6.51 \
  -p 64351:5432 -p 28081:28080 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_smdw:6.23.1 \
  /usr/sbin/init

docker rm -f sdw1
docker run -itd --name sdw1 -h sdw1 \
  --net=lhrnw --ip 172.72.6.52 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_sdw1:6.23.1 \
  /usr/sbin/init

docker rm -f sdw2
docker run -itd --name sdw2 -h sdw2 \
  --net=lhrnw --ip 172.72.6.53 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_sdw2:6.23.1 \
  /usr/sbin/init

docker rm -f sdw3
docker run -itd --name sdw3 -h sdw3 \
  --net=lhrnw --ip 172.72.6.54 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_sdw3:6.23.1 \
  /usr/sbin/init

docker rm -f sdw4
docker run -itd --name sdw4 -h sdw4 \
  --net=lhrnw --ip 172.72.6.55 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/gpdb_sdw4:6.23.1 \
  /usr/sbin/init

docker rm -f sdw5
docker run -itd --name sdw5 -h sdw5 \
  --net=lhrnw --ip 172.72.6.56 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/lhrcentos76:9.0 \
  /usr/sbin/init

docker rm -f sdw6
docker run -itd --name sdw6 -h sdw6 \
  --net=lhrnw --ip 172.72.6.57 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true \
  --add-host='mdw1 mdw:172.72.6.50' \
  --add-host='mdw2 smdw:172.72.6.51' \
  --add-host='sdw1:172.72.6.52' \
  --add-host='sdw2:172.72.6.53' \
  --add-host='sdw3:172.72.6.54' \
  --add-host='sdw4:172.72.6.55' \
  --add-host='sdw5:172.72.6.56' \
  --add-host='sdw6:172.72.6.57' \
  lhrbest/lhrcentos76:9.0 \
  /usr/sbin/init
Environment after the expansion:
Horizontal expansion (adding hosts)
Horizontal expansion of Greenplum means adding more servers: the existing nodes stay as they are, and additional segment hosts are added to the cluster.
Current environment
postgres=# select * from gp_segment_configuration order by hostname,role desc;
 dbid | content | role | preferred_role | mode | status | port | hostname | address | datadir
------+---------+------+----------------+------+--------+------+----------+---------+-------------------------------------
 1  | -1 | p | p | n | u | 5432 | mdw1 | mdw1 | /opt/greenplum/data/master/gpseg-1
 34 | -1 | m | m | s | u | 5432 | mdw2 | mdw2 | /opt/greenplum/data/master/gpseg-1
 2  | 0  | p | p | s | u | 6000 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg0
 3  | 1  | p | p | s | u | 6001 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg1
 4  | 2  | p | p | s | u | 6002 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg2
 5  | 3  | p | p | s | u | 6003 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg3
 31 | 13 | m | m | s | u | 7001 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg13
 33 | 15 | m | m | s | u | 7003 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg15
 32 | 14 | m | m | s | u | 7002 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg14
 30 | 12 | m | m | s | u | 7000 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg12
 6  | 4  | p | p | s | u | 6000 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg4
 9  | 7  | p | p | s | u | 6003 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg7
 7  | 5  | p | p | s | u | 6001 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg5
 8  | 6  | p | p | s | u | 6002 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg6
 19 | 1  | m | m | s | u | 7001 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg1
 20 | 2  | m | m | s | u | 7002 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg2
 21 | 3  | m | m | s | u | 7003 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg3
 18 | 0  | m | m | s | u | 7000 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg0
 11 | 9  | p | p | s | u | 6001 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg9
 10 | 8  | p | p | s | u | 6000 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg8
 13 | 11 | p | p | s | u | 6003 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg11
 12 | 10 | p | p | s | u | 6002 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg10
 23 | 5  | m | m | s | u | 7001 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg5
 22 | 4  | m | m | s | u | 7000 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg4
 24 | 6  | m | m | s | u | 7002 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg6
 25 | 7  | m | m | s | u | 7003 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg7
 15 | 13 | p | p | s | u | 6001 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg13
 14 | 12 | p | p | s | u | 6000 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg12
 17 | 15 | p | p | s | u | 6003 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg15
 16 | 14 | p | p | s | u | 6002 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg14
 29 | 11 | m | m | s | u | 7003 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg11
 26 | 8  | m | m | s | u | 7000 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg8
 28 | 10 | m | m | s | u | 7002 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg10
 27 | 9  | m | m | s | u | 7001 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg9
(34 rows)
Configure the new nodes
hostnamectl set-hostname sdw5
# (on the second new host, set the hostname to sdw6 instead)

sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
setenforce 0

cat >> /etc/security/limits.conf <<"EOF"
* soft nofile 65535
* hard nofile 65535
EOF
ulimit -HSn 65535

cat >> /etc/sysctl.conf <<"EOF"
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 32000 1024000000 500 32000
vm.overcommit_memory=2
vm.overcommit_ratio=95
EOF
sysctl -p

groupadd -g 530 gpadmin
useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
chown -R gpadmin:gpadmin /home/gpadmin
echo "gpadmin:lhr" | chpasswd
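A quick sanity check on each new host (sdw5 and sdw6) is to read the settings back after applying them. A minimal sketch; the expected values are simply the ones configured above:

# verify kernel parameters, limits, SELinux and the gpadmin account on the new host
sysctl -n kernel.shmmax kernel.shmall vm.overcommit_memory vm.overcommit_ratio
ulimit -Hn; ulimit -Sn        # both should report 65535
getenforce                    # should report Permissive (or Disabled after a reboot)
id gpadmin                    # should report uid=530 and gid=530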
Update all nodes
-- run on ALL nodes
cat >> /etc/hosts <<"EOF"
172.72.6.50   mdw
172.72.6.51   smdw
172.72.6.52   sdw1
172.72.6.53   sdw2
172.72.6.54   sdw3
172.72.6.55   sdw4
172.72.6.56   sdw5
172.72.6.57   sdw6
EOF

su - gpadmin
mkdir -p /home/gpadmin/conf/

cat > /home/gpadmin/conf/all_hosts <<"EOF"
mdw
smdw
sdw1
sdw2
sdw3
sdw4
sdw5
sdw6
EOF

cat > /home/gpadmin/conf/seg_hosts <<"EOF"
sdw1
sdw2
sdw3
sdw4
sdw5
sdw6
EOF
Configure SSH mutual trust
On the master:
./sshUserSetup.sh -user root    -hosts "mdw smdw sdw1 sdw2 sdw3 sdw4 sdw5 sdw6" -advanced -exverify -confirm
./sshUserSetup.sh -user gpadmin -hosts "mdw smdw sdw1 sdw2 sdw3 sdw4 sdw5 sdw6" -advanced -exverify -confirm
chmod 600 /home/gpadmin/.ssh/config
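Before moving on, it is worth confirming that passwordless SSH really works from the master to every host. A minimal check with gpssh, run as gpadmin (assuming greenplum_path.sh is already sourced on the master):

# every host should answer with its hostname and no password prompt
gpssh -f /home/gpadmin/conf/all_hosts -e 'hostname'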
Install the Greenplum software
On sdw5 (repeat on sdw6):
yum install -y apr apr-util bash bzip2 curl krb5 libcurl libevent libxml2 libyaml zlib openldap openssh openssl openssl-libs perl readline rsync R sed tar zip krb5-devel

-- main required packages (download only, into /soft)
yum install -y apr apr-util bzip2 krb5-devel libyaml perl rsync zip libevent --downloadonly --downloaddir=/soft

rpm -ivh /soft/open-source-greenplum-db-6.23.1-rhel7-x86_64.rpm

chown -R gpadmin:gpadmin /usr/local/greenplum-db
chown -R gpadmin:gpadmin /usr/local/greenplum-db-6.23.1

mkdir -p /opt/greenplum/data/
chown -R gpadmin:gpadmin /opt/greenplum

echo ". /usr/local/greenplum-db/greenplum_path.sh" >> /home/gpadmin/.bashrc
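After installing the RPM on the new hosts, a quick check is to confirm that the Greenplum build there matches the existing cluster. A sketch, assuming the same installation path on every host:

# run from the master as gpadmin; the version string should be identical on sdw5 and sdw6
gpssh -h sdw5 -h sdw6 -e 'source /usr/local/greenplum-db/greenplum_path.sh; postgres --gp-version'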
Pre-expansion checks
-- The existing cluster must be up; it is best to stop gpcc and drop the gpperfmon database first
gpstate
gpcc status
gpcc stop

-- This test environment has limited resources, so lower a few parameters
gpconfig -c shared_buffers -v 64MB -m 64MB
gpconfig -c max_connections -v 300 -m 100

-- catalog check
gpcheckcat -A -B 32
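The gpexpand interview later in this walkthrough also reminds you to burn in the new hardware with gpcheckperf. A minimal run against only the new hosts might look like this (the new_hosts file and the test directories are assumptions for this environment):

cat > /home/gpadmin/conf/new_hosts <<"EOF"
sdw5
sdw6
EOF

# disk I/O and memory-bandwidth tests on the data directory, then a parallel network test
gpcheckperf -f /home/gpadmin/conf/new_hosts -r ds -D -d /opt/greenplum/data
gpcheckperf -f /home/gpadmin/conf/new_hosts -r N -d /tmp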
The expansion in 4 steps
su - gpadmin

-- 1. Use gpexpand to generate the initialization (input) file: answer Y, then press Enter to accept the defaults
cd /home/gpadmin/conf
gpexpand -f /home/gpadmin/conf/seg_hosts

-- 2. Use the generated input file to initialize the new segments and create the expansion schema
--    If this step fails, fix the cause and simply rerun the same command
gpexpand -i gpexpand_inputfile_20230313_131204

-- 3. Redistribute the data (here with a maximum duration of 1 hour); progress can be monitored as shown below
gpexpand -d 1:00:00

-- 4. Remove the expansion schema
gpexpand -c
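While step 3 (gpexpand -d) is running, progress can be followed from another session using the views that gpexpand creates in the postgres database; a small sketch:

# overall progress estimate and per-table status counts
psql -d postgres -c "select * from gpexpand.expansion_progress;"
psql -d postgres -c "select status, count(*) from gpexpand.status_detail group by status;"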
Example:
[gpadmin@mdw conf]$ gpstate
20230313:12:00:16:005065 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args:
20230313:12:00:16:005065 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:12:00:16:005065 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:12:00:16:005065 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20230313:12:00:16:005065 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
.
20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Master instance = Active 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Master standby = mdw2 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Standby master state = Standby host passive 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total segment instance count from metadata = 32 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Primary Segment Status 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total primary segments = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total primary segment valid (at master) = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total primary segment failures (at master) = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Mirror Segment Status 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total mirror segments = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total mirror segment valid (at master) = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total mirror segment failures (at master) = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes missing = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 16 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number mirror segments acting as primary segments = 0 20230313:12:00:17:005065 gpstate:mdw:gpadmin-[INFO]:- Total number mirror segments acting as mirror segments = 16 20230313:12:00:17:005065 
gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- [gpadmin@mdw conf]$ cd /home/gpadmin/conf [gpadmin@mdw conf]$ gpexpand -f /home/gpadmin/conf/seg_hosts 20230313:13:11:58:006045 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source' 20230313:13:11:58:006045 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40' 20230313:13:11:58:006045 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state System Expansion is used to add segments to an existing GPDB array. gpexpand did not detect a System Expansion that is in progress. Before initiating a System Expansion, you need to provision and burn-in the new hardware. Please be sure to run gpcheckperf to make sure the new hardware is working properly. Please refer to the Admin Guide for more information. Would you like to initiate a new System Expansion Yy|Nn (default=N): > y You must now specify a mirroring strategy for the new hosts. Spread mirroring places a given hosts mirrored segments each on a separate host. You must be adding more hosts than the number of segments per host to use this. Grouped mirroring places all of a given hosts segments on a single mirrored host. You must be adding at least 2 hosts in order to use this. What type of mirroring strategy would you like? spread|grouped (default=grouped): > By default, new hosts are configured with the same number of primary segments as existing hosts. Optionally, you can increase the number of segments per host. For example, if existing hosts have two primary segments, entering a value of 2 will initialize two additional segments on existing hosts, and four segments on new hosts. In addition, mirror segments will be added for these new primary segments if mirroring is enabled. How many new primary segments per host do you want to add? (default=0): > Generating configuration file... 20230313:13:12:04:006045 gpexpand:mdw:gpadmin-[INFO]:-Generating input file... Input configuration file was written to 'gpexpand_inputfile_20230313_131204'. Please review the file and make sure that it is correct then re-run with: gpexpand -i gpexpand_inputfile_20230313_131204 20230313:13:12:04:006045 gpexpand:mdw:gpadmin-[INFO]:-Exiting... [gpadmin@mdw conf]$ cat gpexpand -i gpexpand_inputfile_20230313_131204 cat: invalid option -- 'i' Try 'cat --help' for more information. 
[gpadmin@mdw conf]$ cat gpexpand_inputfile_20230313_131204 sdw5|sdw5|6000|/opt/greenplum/data/primary/gpseg16|35|16|p sdw6|sdw6|7000|/opt/greenplum/data/mirror/gpseg16|47|16|m sdw5|sdw5|6001|/opt/greenplum/data/primary/gpseg17|36|17|p sdw6|sdw6|7001|/opt/greenplum/data/mirror/gpseg17|48|17|m sdw5|sdw5|6002|/opt/greenplum/data/primary/gpseg18|37|18|p sdw6|sdw6|7002|/opt/greenplum/data/mirror/gpseg18|49|18|m sdw5|sdw5|6003|/opt/greenplum/data/primary/gpseg19|38|19|p sdw6|sdw6|7003|/opt/greenplum/data/mirror/gpseg19|50|19|m sdw6|sdw6|6000|/opt/greenplum/data/primary/gpseg20|39|20|p sdw5|sdw5|7000|/opt/greenplum/data/mirror/gpseg20|43|20|m sdw6|sdw6|6001|/opt/greenplum/data/primary/gpseg21|40|21|p sdw5|sdw5|7001|/opt/greenplum/data/mirror/gpseg21|44|21|m sdw6|sdw6|6002|/opt/greenplum/data/primary/gpseg22|41|22|p sdw5|sdw5|7002|/opt/greenplum/data/mirror/gpseg22|45|22|m sdw6|sdw6|6003|/opt/greenplum/data/primary/gpseg23|42|23|p sdw5|sdw5|7003|/opt/greenplum/data/mirror/gpseg23|46|23|m [gpadmin@mdw conf]$ gpexpand -i gpexpand_inputfile_20230313_131204 20230313:13:12:40:006110 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source' 20230313:13:12:40:006110 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40' 20230313:13:12:40:006110 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state 20230313:13:12:41:006110 gpexpand:mdw:gpadmin-[INFO]:-Heap checksum setting consistent across cluster 20230313:13:12:41:006110 gpexpand:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions 20230313:13:12:42:006110 gpexpand:mdw:gpadmin-[INFO]:-The following packages will be installed on sdw5: MetricsCollector-6.8.4_gp_6.23.0-rhel7-x86_64.gppkg 20230313:13:12:44:006110 gpexpand:mdw:gpadmin-[INFO]:-The following packages will be installed on sdw6: MetricsCollector-6.8.4_gp_6.23.0-rhel7-x86_64.gppkg 20230313:13:12:45:006110 gpexpand:mdw:gpadmin-[INFO]:-Locking catalog 20230313:13:12:45:006110 gpexpand:mdw:gpadmin-[INFO]:-Locked catalog 20230313:13:12:46:006110 gpexpand:mdw:gpadmin-[INFO]:-Creating segment template 20230313:13:12:52:006110 gpexpand:mdw:gpadmin-[INFO]:-Copying postgresql.conf from existing segment into template 20230313:13:12:52:006110 gpexpand:mdw:gpadmin-[INFO]:-Copying pg_hba.conf from existing segment into template 20230313:13:12:53:006110 gpexpand:mdw:gpadmin-[INFO]:-Creating schema tar file 20230313:13:12:54:006110 gpexpand:mdw:gpadmin-[INFO]:-Distributing template tar file to new hosts 20230313:13:13:08:006110 gpexpand:mdw:gpadmin-[INFO]:-Configuring new segments (primary) 20230313:13:13:08:006110 gpexpand:mdw:gpadmin-[INFO]:-{'sdw5': '/opt/greenplum/data/primary/gpseg16:6000:true:false:35:16::-1:,/opt/greenplum/data/primary/gpseg17:6001:true:false:36:17::-1:,/opt/greenplum/data/primary/gpseg18:6002:true:false:37:18::-1:,/opt/greenplum/data/primary/gpseg19:6003:true:false:38:19::-1:', 'sdw6': '/opt/greenplum/data/primary/gpseg20:6000:true:false:39:20::-1:,/opt/greenplum/data/primary/gpseg21:6001:true:false:40:21::-1:,/opt/greenplum/data/primary/gpseg22:6002:true:false:41:22::-1:,/opt/greenplum/data/primary/gpseg23:6003:true:false:42:23::-1:'} 20230313:13:13:45:006110 gpexpand:mdw:gpadmin-[INFO]:-Cleaning up temporary template files 
20230313:13:13:46:006110 gpexpand:mdw:gpadmin-[INFO]:-Cleaning up databases in new segments. 20230313:13:13:54:006110 gpexpand:mdw:gpadmin-[INFO]:-Unlocking catalog 20230313:13:13:54:006110 gpexpand:mdw:gpadmin-[INFO]:-Unlocked catalog 20230313:13:13:54:006110 gpexpand:mdw:gpadmin-[INFO]:-Creating expansion schema 20230313:13:13:55:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database template1 20230313:13:13:56:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database postgres 20230313:13:13:56:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database lhrgpdb 20230313:13:13:56:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database gpperfmon 20230313:13:13:58:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database lhrdb 20230313:13:13:58:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database db1 20230313:13:13:58:006110 gpexpand:mdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database testdb 20230313:13:13:59:006110 gpexpand:mdw:gpadmin-[INFO]:-Starting new mirror segment synchronization 20230313:13:14:46:006110 gpexpand:mdw:gpadmin-[ERROR]:-gpexpand failed: ExecutionError: 'non-zero rc: 1' occurred. Details: '$GPHOME/bin/gprecoverseg -a -F' cmd had rc=1 completed=True halted=False stdout='20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -a -F 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source' 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40' 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master... 
20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Heap checksum setting is consistent between master and the segments that are candidates for recoverseg 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Greenplum instance recovery parameters 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery type = Standard 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 1 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg16 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7000 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg16 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6000 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 2 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg17 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7001 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg17 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6001 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 3 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw6 20230313:13:14:00:006497 
gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg18 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7002 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg18 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6002 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 4 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg19 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7003 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg19 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6003 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 5 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg20 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7000 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg20 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6000 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 6 of 8 
20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg21 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7001 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg21 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6001 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 7 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg22 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7002 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg22 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 6002 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 8 of 8 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw5 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /opt/greenplum/data/mirror/gpseg23 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 7003 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw6 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /opt/greenplum/data/primary/gpseg23 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- 
Recovery Source instance port = 6003 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:---------------------------------------------------------- 20230313:13:14:00:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Starting to create new pg_hba.conf on primary segments 20230313:13:14:03:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Successfully modified pg_hba.conf on primary segments to allow replication connections 20230313:13:14:03:006497 gprecoverseg:mdw:gpadmin-[INFO]:-8 segment(s) to recover 20230313:13:14:03:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Ensuring 8 failed segment(s) are stopped 20230313:13:14:06:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Ensuring that shared memory is cleaned up for stopped segments 20230313:13:14:06:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Setting up the required segments for recovery 20230313:13:14:07:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Updating configuration for mirrors 20230313:13:14:07:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Initiating segment recovery. Upon completion, will start the successfully recovered segments 20230313:13:14:07:006497 gprecoverseg:mdw:gpadmin-[INFO]:-era is ce1530fde02bf9cc_230313130237 sdw5 (dbid 43): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 44): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 45): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 46): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 47): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 48): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 49): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 50): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 43): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 44): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 45): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw5 (dbid 46): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 47): pg_basebackup: initiating base backup, waiting for checkpoint to complete sdw6 (dbid 48): 0/309551 kB (0%), 0/1 tablespace (...data/mirror/gpseg17/backup_label) sdw6 (dbid 49): 0/309583 kB (0%), 0/1 tablespace (...data/mirror/gpseg18/backup_label) sdw6 (dbid 50): 1578/309583 kB (0%), 0/1 tablespace (...data/mirror/gpseg19/base/1/10017) sdw5 (dbid 43): 32188/309583 kB (10%), 0/1 tablespace (...rror/gpseg20/base/12809/12654_vm) sdw5 (dbid 44): 80163/309583 kB (25%), 0/1 tablespace (...ror/gpseg21/base/16384/12575_fsm) sdw5 (dbid 45): 81583/309583 kB (26%), 0/1 tablespace (...ror/gpseg22/base/16384/12593_fsm) sdw5 (dbid 46): 80614/309583 kB (26%), 0/1 tablespace (...rror/gpseg23/base/16384/12579_vm) sdw6 (dbid 47): 86339/309583 kB (27%), 0/1 tablespace (...rror/gpseg16/base/16384/12658_vm) sdw6 (dbid 48): 100847/309551 kB (32%), 0/1 tablespace (.../mirror/gpseg17/base/16386/10016) sdw6 (dbid 49): 109648/309583 kB (35%), 0/1 tablespace (.../mirror/gpseg18/base/16386/10017) sdw6 (dbid 50): 126783/309583 kB (40%), 0/1 tablespace (.../mirror/gpseg19/base/16386/10142) sdw5 (dbid 43): 122351/309583 kB (39%), 0/1 tablespace (.../mirror/gpseg20/base/16386/10093) sdw5 (dbid 44): 122837/309583 kB (39%), 0/1 
tablespace (.../mirror/gpseg21/base/16386/10106) sdw5 (dbid 45): 127301/309583 kB (41%), 0/1 tablespace (.../mirror/gpseg22/base/16386/10157) sdw5 (dbid 46): 125464/309583 kB (40%), 0/1 tablespace (.../mirror/gpseg23/base/16386/10127) sdw6 (dbid 47): 172050/309583 kB (55%), 0/1 tablespace (...ror/gpseg16/base/16386/12566_fsm) sdw6 (dbid 48): 179164/309551 kB (57%), 0/1 tablespace (...rror/gpseg17/base/16386/12686_vm) sdw6 (dbid 49): 121867/309583 kB (39%), 0/1 tablespace (.../mirror/gpseg18/base/16386/10085) sdw6 (dbid 50): 173664/309583 kB (56%), 0/1 tablespace (.../mirror/gpseg19/base/16386/12593) sdw5 (dbid 43): 127106/309583 kB (41%), 0/1 tablespace (.../mirror/gpseg20/base/16386/10149) sdw5 (dbid 44): 130892/309583 kB (42%), 0/1 tablespace (.../mirror/gpseg21/base/16386/12550) sdw5 (dbid 45): 176556/309583 kB (57%), 0/1 tablespace (...ror/gpseg22/base/16386/12624_fsm) sdw5 (dbid 46): 175497/309583 kB (56%), 0/1 tablespace (...ror/gpseg23/base/16386/12618_fsm) sdw6 (dbid 47): 174883/309583 kB (56%), 0/1 tablespace (.../mirror/gpseg16/base/16386/12600) sdw6 (dbid 48): 202292/309551 kB (65%), 0/1 tablespace (.../mirror/gpseg17/base/16386/12795) sdw6 (dbid 49): 127106/309583 kB (41%), 0/1 tablespace (.../mirror/gpseg18/base/16386/10149) sdw6 (dbid 50): 178062/309583 kB (57%), 0/1 tablespace (...rror/gpseg19/base/16386/12626_vm) sdw5 (dbid 43): 177933/309583 kB (57%), 0/1 tablespace (.../mirror/gpseg20/base/16386/12626) sdw5 (dbid 44): 173535/309583 kB (56%), 0/1 tablespace (...ror/gpseg21/base/16386/12589_fsm) sdw5 (dbid 45): 178999/309583 kB (57%), 0/1 tablespace (...ror/gpseg22/base/16386/12662_fsm) sdw5 (dbid 46): 178676/309583 kB (57%), 0/1 tablespace (...ror/gpseg23/base/16386/12658_fsm) sdw6 (dbid 47): 228876/309583 kB (73%), 0/1 tablespace (.../mirror/gpseg16/base/50754/10093) sdw6 (dbid 48): 242574/309551 kB (78%), 0/1 tablespace (.../mirror/gpseg17/base/50754/12656) sdw6 (dbid 49): 220809/309583 kB (71%), 0/1 tablespace (.../mirror/gpseg18/base/50754/10010) sdw6 (dbid 50): 234441/309583 kB (75%), 0/1 tablespace (.../mirror/gpseg19/base/50754/12550) sdw5 (dbid 43): 269816/309583 kB (87%), 0/1 tablespace (.../mirror/gpseg20/base/50755/12767) sdw5 (dbid 44): 295485/309583 kB (95%), 0/1 tablespace (...ror/gpseg21/base/50756/12795_fsm) sdw5 (dbid 45): 303158/309583 kB (97%), 0/1 tablespace (...data/mirror/gpseg22/global/10151) sdw5 (dbid 46): 301963/309583 kB (97%), 0/1 tablespace (...data/mirror/gpseg23/global/10081) sdw6 (dbid 47): 308004/309583 kB (99%), 0/1 tablespace (...a/mirror/gpseg16/global/12692_vm) sdw6 (dbid 48): pg_basebackup: base backup completed sdw6 (dbid 49): 295942/309583 kB (95%), 0/1 tablespace (...data/mirror/gpseg18/global/10003) sdw6 (dbid 50): pg_basebackup: sync the target data direcotory sdw5 (dbid 43): pg_basebackup: sync the target data direcotory sdw5 (dbid 44): 309549/309583 kB (99%), 0/1 tablespace (...a/mirror/gpseg21/postgresql.conf) sdw5 (dbid 45): pg_basebackup: sync the target data direcotory sdw5 (dbid 46): pg_basebackup: waiting for background process to finish streaming ... 
sdw6 (dbid 47): 309549/309583 kB (99%), 0/1 tablespace (...a/mirror/gpseg16/postgresql.conf) sdw6 (dbid 48): pg_basebackup: base backup completed sdw6 (dbid 49): 302028/309583 kB (97%), 0/1 tablespace (...data/mirror/gpseg18/global/10104) sdw6 (dbid 50): pg_basebackup: sync the target data direcotory sdw5 (dbid 43): pg_basebackup: base backup completed sdw5 (dbid 44): pg_basebackup: sync the target data direcotory sdw5 (dbid 45): pg_basebackup: base backup completed sdw5 (dbid 46): pg_basebackup: base backup completed sdw6 (dbid 47): pg_basebackup: base backup completed sdw6 (dbid 48): pg_basebackup: base backup completed sdw6 (dbid 49): pg_basebackup: sync the target data direcotory sdw6 (dbid 50): pg_basebackup: base backup completed sdw5 (dbid 43): pg_basebackup: base backup completed sdw5 (dbid 44): pg_basebackup: base backup completed sdw5 (dbid 45): pg_basebackup: base backup completed sdw5 (dbid 46): pg_basebackup: base backup completed sdw6 (dbid 47): pg_basebackup: base backup completed sdw6 (dbid 48): pg_basebackup: base backup completed sdw6 (dbid 49): pg_basebackup: base backup completed sdw6 (dbid 50): pg_basebackup: base backup completed sdw5 (dbid 43): pg_basebackup: base backup completed sdw5 (dbid 44): pg_basebackup: base backup completed sdw5 (dbid 45): pg_basebackup: base backup completed sdw5 (dbid 46): pg_basebackup: base backup completed sdw6 (dbid 47): pg_basebackup: base backup completed sdw6 (dbid 48): pg_basebackup: base backup completed sdw6 (dbid 49): pg_basebackup: base backup completed sdw6 (dbid 50): pg_basebackup: base backup completed 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------- 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Failed to start the following segments. Please check the latest logs located in segment's data directory 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw5; port: 7001; datadir: /opt/greenplum/data/mirror/gpseg21 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw5; port: 7003; datadir: /opt/greenplum/data/mirror/gpseg23 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw5; port: 7002; datadir: /opt/greenplum/data/mirror/gpseg22 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw5; port: 7000; datadir: /opt/greenplum/data/mirror/gpseg20 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw6; port: 7003; datadir: /opt/greenplum/data/mirror/gpseg19 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:- hostname: sdw6; port: 7000; datadir: /opt/greenplum/data/mirror/gpseg16 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[INFO]:-Triggering FTS probe 20230313:13:14:46:006497 gprecoverseg:mdw:gpadmin-[ERROR]:-gprecoverseg failed. Please check the output for more details. stderr='' Exiting... 20230313:13:14:46:006110 gpexpand:mdw:gpadmin-[ERROR]:-gpexpand is past the point of rollback. Any remaining issues must be addressed outside of gpexpand. 20230313:13:14:46:006110 gpexpand:mdw:gpadmin-[INFO]:-Shutting down gpexpand... 
-- In my case the failure was caused by insufficient memory; after freeing memory I reran the command several times
[gpadmin@mdw conf]$ gpexpand -i gpexpand_inputfile_20230313_131204
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-Expansion has already completed.
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-If you want to expand again, run gpexpand -c to remove
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-the gpexpand schema and begin a new expansion
20230313:13:39:59:010147 gpexpand:mdw:gpadmin-[INFO]:-Exiting...
[gpadmin@mdw conf]$
[gpadmin@mdw conf]$ gpexpand -d 1:00:00
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-Expansion has already completed.
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-If you want to expand again, run gpexpand -c to remove
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-the gpexpand schema and begin a new expansion
20230313:13:41:03:010229 gpexpand:mdw:gpadmin-[INFO]:-Exiting...
[gpadmin@mdw conf]$ gpstate -x
20230313:13:41:12:010262 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -x
20230313:13:41:12:010262 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:41:12:010262 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:41:12:010262 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20230313:13:41:12:010262 gpstate:mdw:gpadmin-[INFO]:-Cluster Expansion State = Expansion Complete
[gpadmin@mdw conf]$ gpexpand -c
20230313:13:41:22:010297 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:41:22:010297 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:41:22:010297 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

Do you want to dump the gpexpand.status_detail table to file?
Yy|Nn (default=Y):
> y
20230313:13:41:24:010297 gpexpand:mdw:gpadmin-[INFO]:-Dumping gpexpand.status_detail to /opt/greenplum/data/master/gpseg-1/gpexpand.status_detail
20230313:13:41:24:010297 gpexpand:mdw:gpadmin-[INFO]:-Removing gpexpand schema
20230313:13:41:24:010297 gpexpand:mdw:gpadmin-[INFO]:-Cleanup Finished. exiting...
Verify
[gpadmin@mdw ~]$ gpstate
20230313:13:52:40:012294 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args:
20230313:13:52:40:012294 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:52:40:012294 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:52:40:012294 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20230313:13:52:40:012294 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Master instance = Active
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Master standby = mdw2
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Standby master state = Standby host passive
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total segment instance count from metadata = 48
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Primary Segment Status
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total primary segments = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total primary segment valid (at master) = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total primary segment failures (at master) = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Mirror Segment Status
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total mirror segments = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total mirror segment valid (at master) = 24
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total mirror segment failures (at master) = 0
20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 24 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 24 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 24 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes missing = 0 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 24 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number mirror segments acting as primary segments = 0 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:- Total number mirror segments acting as mirror segments = 24 20230313:13:52:41:012294 gpstate:mdw:gpadmin-[INFO]:----------------------------------------------------- [gpadmin@mdw ~]$ [gpadmin@mdw conf]$ postgres=# select * from gp_segment_configuration order by hostname,role desc; dbid | content | role | preferred_role | mode | status | port | hostname | address | datadir ------+---------+------+----------------+------+--------+------+----------+---------+------------------------------------- 1 | -1 | p | p | n | u | 5432 | mdw1 | mdw1 | /opt/greenplum/data/master/gpseg-1 34 | -1 | m | m | s | u | 5432 | mdw2 | mdw2 | /opt/greenplum/data/master/gpseg-1 4 | 2 | p | p | s | u | 6002 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg2 3 | 1 | p | p | s | u | 6001 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg1 2 | 0 | p | p | s | u | 6000 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg0 5 | 3 | p | p | s | u | 6003 | sdw1 | sdw1 | /opt/greenplum/data/primary/gpseg3 32 | 14 | m | m | s | u | 7002 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg14 33 | 15 | m | m | s | u | 7003 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg15 30 | 12 | m | m | s | u | 7000 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg12 31 | 13 | m | m | s | u | 7001 | sdw1 | sdw1 | /opt/greenplum/data/mirror/gpseg13 9 | 7 | p | p | s | u | 6003 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg7 6 | 4 | p | p | s | u | 6000 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg4 8 | 6 | p | p | s | u | 6002 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg6 7 | 5 | p | p | s | u | 6001 | sdw2 | sdw2 | /opt/greenplum/data/primary/gpseg5 20 | 2 | m | m | s | u | 7002 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg2 18 | 0 | m | m | s | u | 7000 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg0 21 | 3 | m | m | s | u | 7003 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg3 19 | 1 | m | m | s | u | 7001 | sdw2 | sdw2 | /opt/greenplum/data/mirror/gpseg1 13 | 11 | p | p | s | u | 6003 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg11 10 | 8 | p | p | s | u | 6000 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg8 11 | 9 | p | p | s | u | 6001 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg9 12 | 10 | p | p | s | u | 6002 | sdw3 | sdw3 | /opt/greenplum/data/primary/gpseg10 25 | 7 | m | m | s | u | 7003 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg7 22 | 4 | m | m | s | u | 7000 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg4 23 | 5 | m | m | s | u | 7001 | sdw3 | sdw3 | /opt/greenplum/data/mirror/gpseg5 24 | 6 | m | m | s | u | 7002 | sdw3 | sdw3 | 
/opt/greenplum/data/mirror/gpseg6 17 | 15 | p | p | s | u | 6003 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg15 14 | 12 | p | p | s | u | 6000 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg12 15 | 13 | p | p | s | u | 6001 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg13 16 | 14 | p | p | s | u | 6002 | sdw4 | sdw4 | /opt/greenplum/data/primary/gpseg14 29 | 11 | m | m | s | u | 7003 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg11 26 | 8 | m | m | s | u | 7000 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg8 27 | 9 | m | m | s | u | 7001 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg9 28 | 10 | m | m | s | u | 7002 | sdw4 | sdw4 | /opt/greenplum/data/mirror/gpseg10 38 | 19 | p | p | s | u | 6003 | sdw5 | sdw5 | /opt/greenplum/data/primary/gpseg19 36 | 17 | p | p | s | u | 6001 | sdw5 | sdw5 | /opt/greenplum/data/primary/gpseg17 37 | 18 | p | p | s | u | 6002 | sdw5 | sdw5 | /opt/greenplum/data/primary/gpseg18 35 | 16 | p | p | s | u | 6000 | sdw5 | sdw5 | /opt/greenplum/data/primary/gpseg16 43 | 20 | m | m | s | u | 7000 | sdw5 | sdw5 | /opt/greenplum/data/mirror/gpseg20 45 | 22 | m | m | s | u | 7002 | sdw5 | sdw5 | /opt/greenplum/data/mirror/gpseg22 44 | 21 | m | m | s | u | 7001 | sdw5 | sdw5 | /opt/greenplum/data/mirror/gpseg21 46 | 23 | m | m | s | u | 7003 | sdw5 | sdw5 | /opt/greenplum/data/mirror/gpseg23 41 | 22 | p | p | s | u | 6002 | sdw6 | sdw6 | /opt/greenplum/data/primary/gpseg22 42 | 23 | p | p | s | u | 6003 | sdw6 | sdw6 | /opt/greenplum/data/primary/gpseg23 39 | 20 | p | p | s | u | 6000 | sdw6 | sdw6 | /opt/greenplum/data/primary/gpseg20 40 | 21 | p | p | s | u | 6001 | sdw6 | sdw6 | /opt/greenplum/data/primary/gpseg21 48 | 17 | m | m | s | u | 7001 | sdw6 | sdw6 | /opt/greenplum/data/mirror/gpseg17 49 | 18 | m | m | s | u | 7002 | sdw6 | sdw6 | /opt/greenplum/data/mirror/gpseg18 50 | 19 | m | m | s | u | 7003 | sdw6 | sdw6 | /opt/greenplum/data/mirror/gpseg19 47 | 16 | m | m | s | u | 7000 | sdw6 | sdw6 | /opt/greenplum/data/mirror/gpseg16 (50 rows) postgres=# |
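Besides gpstate and gp_segment_configuration, it can be reassuring to confirm that redistributed rows really landed on the new segments (contents 16-23). A sketch against one of your own tables; my_table is a placeholder for a real distributed table in, say, lhrdb:

# rows should now appear on gp_segment_id values 16 through 23 as well
psql -d lhrdb -c "select gp_segment_id, count(*) from my_table group by 1 order by 1;"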
Vertical expansion (adding segment instances per host)
Here the cluster hosts stay exactly the same; by changing the configuration you add more segment instances per host, making use of spare CPU cores and storage.
The net effect is that each segment host ends up with additional gpseg data directories.
Because no new servers are added, essentially all of the work is done on the Greenplum master.
su - gpadmin

-- 1. Use gpexpand to generate the initialization file; answer Y at the prompt
--    At "How many new primary segments per host do you want to add? (default=0):", enter the number of extra segment instances to add on each host
cd /home/gpadmin/conf
gpexpand -f /home/gpadmin/conf/seg_hosts

-- 2. Use the generated input file to initialize the new segments and create the expansion schema
--    If this step fails, fix the cause and rerun the same command
gpexpand -i gpexpand_inputfile_20230313_135610

-- 3. Redistribute the data (maximum duration 1 hour)
gpexpand -d 1:00:00

-- 4. Remove the expansion schema
gpexpand -c
Example:
[gpadmin@mdw ~]$ cd /home/gpadmin/conf
[gpadmin@mdw conf]$ gpexpand -f /home/gpadmin/conf/seg_hosts
20230313:13:54:25:012452 gpexpand:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source'
20230313:13:54:25:012452 gpexpand:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.23.1 build commit:2731a45ecb364317207c560730cf9e2cbf17d7e4 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Feb 7 2023 22:54:40'
20230313:13:54:25:012452 gpexpand:mdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

System Expansion is used to add segments to an existing GPDB array.
gpexpand did not detect a System Expansion that is in progress.

Before initiating a System Expansion, you need to provision and burn-in
the new hardware. Please be sure to run gpcheckperf to make sure the
new hardware is working properly.

Please refer to the Admin Guide for more information.

Would you like to initiate a new System Expansion Yy|Nn (default=N):
> y

You must now specify a mirroring strategy for the new hosts. Spread mirroring places
a given hosts mirrored segments each on a separate host. You must be
adding more hosts than the number of segments per host to use this.
Grouped mirroring places all of a given hosts segments on a single
mirrored host. You must be adding at least 2 hosts in order to use this.

What type of mirroring strategy would you like?
 spread|grouped (default=grouped):
> grouped

** No hostnames were given that do not already exist in the **
** array. Additional segments will be added existing hosts. **

By default, new hosts are configured with the same number of primary
segments as existing hosts. Optionally, you can increase the number
of segments per host.

For example, if existing hosts have two primary segments, entering a value
of 2 will initialize two additional segments on existing hosts, and four
segments on new hosts. In addition, mirror segments will be added for
these new primary segments if mirroring is enabled.

How many new primary segments per host do you want to add? (default=0):
> 1

Enter new primary data directory 1:
> /opt/greenplum/data/primary

Enter new mirror data directory 1:
> /opt/greenplum/data/mirror

Generating configuration file...

20230313:13:56:10:012452 gpexpand:mdw:gpadmin-[INFO]:-Generating input file...

Input configuration file was written to 'gpexpand_inputfile_20230313_135610'.

Please review the file and make sure that it is correct then re-run
with: gpexpand -i gpexpand_inputfile_20230313_135610

20230313:13:56:10:012452 gpexpand:mdw:gpadmin-[INFO]:-Exiting...
[gpadmin@mdw conf]$
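After this vertical expansion completes (steps 2-4 are identical to the horizontal case), each segment host should report one more primary and one more mirror than before; a quick per-host count:

# with 1 extra instance per host, every sdw host in this environment should go from 4 to 5 primaries (and mirrors)
psql -d postgres -c "select hostname, role, count(*) from gp_segment_configuration where content >= 0 group by hostname, role order by hostname, role;"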
Removing nodes (shrinking the cluster)
The following approach does NOT work on Greenplum 6, so do not run it; the same goes for the earlier trick of deleting mirrors this way, which is also not viable.
gpstop -M fast
gpstart -m
PGOPTIONS="-c gp_session_role=utility" psql -d postgres
set allow_system_table_mods='TRUE';
delete from gp_segment_configuration where dbid=13;
delete from pg_filespace_entry where fsedbid=13;
gpstop -m
gpstart
Summary
1. If gpcc is installed, its metrics collector is automatically deployed to the new nodes. If the gpperfmon database is large, it is better to drop it, perform the expansion, and then reconfigure gpcc afterwards. In my own testing, gpcc caused quite a few expansion failures.
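If you do decide to drop gpperfmon before expanding, one possible sequence is sketched below. Treat it as a sketch rather than the official procedure: changing gp_enable_gpperfmon requires a cluster restart, and gpcc has to be reconfigured afterwards.

gpcc stop
gpconfig -c gp_enable_gpperfmon -v off
gpstop -ar                 # restart so the parameter change takes effect
dropdb gpperfmon           # may also require cleaning up the gpmon role
# ... run the expansion ...
# afterwards, re-enable gpperfmon / reinstall gpcc as needed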
2. If the expansion fails during the initialization step and the database is down, you must first restart it in master-only mode with gpstart -m, and then roll back the failed expansion with the following command:
gpexpand --rollback
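Putting point 2 together, a rollback after a failed initialization would roughly look like the following (a sketch; only needed when the failure left the cluster down):

gpstart -m            # bring the master up in master-only mode
gpexpand --rollback   # undo the partially initialized expansion
gpstop -m             # leave master-only mode
gpstart -a            # start the whole cluster again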
3. Monitoring views (in the postgres database):
select * from gpexpand.status;
select * from gpexpand.status_detail;
select * from gpexpand.expansion_progress;

gpstate -x
4. Keep an eye on memory and CPU usage so the expansion does not fail because resources are exhausted; if it does fail, check the log of the affected segment.
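When a segment fails to start during expansion (as in the gprecoverseg error above), the quickest clue is usually the tail of that segment's newest server log. A sketch for one of the mirrors reported as failed, with the path taken from the gprecoverseg output:

# GPDB 6 segment logs are CSV files under <datadir>/pg_log
gpssh -h sdw5 -e 'ls -t /opt/greenplum/data/mirror/gpseg21/pg_log/*.csv | head -1 | xargs tail -n 50'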
5. Vertical and horizontal expansion follow essentially the same procedure; horizontal expansion just has the extra step of preparing the new machines.
6. There is no officially supported way to remove nodes (shrink the cluster), and currently there is no good workaround. The safest approach is to back up the data, drop and rebuild the cluster at the smaller size, and then reload the data.
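If you ever do have to shrink by rebuilding, the data movement part can be handled with gpbackup/gprestore. One possible sketch, assuming those utilities are installed and /backup is reachable from the rebuilt cluster; the timestamp is a placeholder for the value gpbackup prints:

gpbackup --dbname lhrdb --backup-dir /backup
# ... drop the cluster, re-run gpinitsystem with the smaller host list ...
gprestore --backup-dir /backup --timestamp 20230313140000 --create-db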
References
https://www.xmmup.com/kuoronggreenplumxitongzengjiasegmentjiedian.html