Silent Installation of Oracle 11gR2 RAC on AIX


Installation Preparation

Installation Information

Software Environment

Database:

p10404530_112030_AIX64-5L_1of7.zip

p10404530_112030_AIX64-5L_2of7.zip

Clusterware (Grid Infrastructure in 11g):

p10404530_112030_AIX64-5L_3of7.zip

Operating system:

7100-03-03-1415

Note: when unzipping, p10404530_112030_AIX64-5L_1of7.zip and p10404530_112030_AIX64-5L_2of7.zip

must be extracted into the same directory, while p10404530_112030_AIX64-5L_3of7.zip goes into a different directory.
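The layout above can be sketched as follows; `/tmp/oracle_stage` is a hypothetical staging path (use any filesystem with enough free space), and the unzip commands are shown as comments since the media files are not present here:

```shell
# Hypothetical staging area for the three installation zips.
STAGE=/tmp/oracle_stage
mkdir -p "$STAGE/database" "$STAGE/grid"
# Parts 1of7 and 2of7 (the database software) go into the SAME directory:
#   unzip p10404530_112030_AIX64-5L_1of7.zip -d "$STAGE/database"
#   unzip p10404530_112030_AIX64-5L_2of7.zip -d "$STAGE/database"
# Part 3of7 (the clusterware) goes into a DIFFERENT directory:
#   unzip p10404530_112030_AIX64-5L_3of7.zip -d "$STAGE/grid"
ls "$STAGE"
```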

Network Planning and /etc/hosts

vi /etc/hosts

20.188.187.148 LHRTEST1101

220.188.187.148 LHRTEST1101-priv

20.188.187.149 LHRTEST1101-vip

20.188.187.158 LHRTEST2101

220.188.187.158 LHRTEST2101-priv

20.188.187.150 LHRTEST2101-vip

20.188.187.160 LHRTEST2101-scan

Configure the private network; the IP here is 220.188.187.148:

chdev -l 'en1' -a netaddr=220.188.187.148 -a netmask='255.255.255.0' -a state='up'


Node 1:

[LHRTEST1101:root]/]> smitty tcpip

Minimum Configuration & Startup

To Delete existing configuration data, please use Further Configuration menus

Type or select values in entry fields.

Press Enter AFTER making all desired changes.

[Entry Fields]

* HOSTNAME [LHRTEST1101]

* Internet ADDRESS (dotted decimal) [220.188.187.148]

Network MASK (dotted decimal) [255.255.255.0]

* Network INTERFACE en1

NAMESERVER

Internet ADDRESS (dotted decimal) []

DOMAIN Name []

Default Gateway

Address (dotted decimal or symbolic name) []

Cost [] #

Do Active Dead Gateway Detection? no +

Your CABLE Type N/A +

START Now no +

[LHRTEST1101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

inet 20.188.187.148 netmask 0xffffff00 broadcast 20.188.187.255

tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

inet 220.188.187.148 netmask 0xffffff00 broadcast 220.188.187.255

tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

inet6 ::1%1/0

tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[LHRTEST1101:root]/]>

[LHRTEST1101:root]/]>

Node 2:

[LHRTEST2101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

inet 20.188.187.158 netmask 0xffffff00 broadcast 20.188.187.255

tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

inet 220.188.187.158 netmask 0xffffff00 broadcast 220.188.187.255

tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

inet6 ::1%1/0

tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]>

Note: 11g uses 7 IP addresses on 2 network interfaces. The public, VIP, and SCAN addresses share one subnet, while the private addresses are on another. Hostnames must not contain underscores (e.g. RAC_01 is invalid). Run ifconfig -a to verify that the network interface names are identical on both nodes. Before installation, the 4 public and private IPs should be pingable; the other 3 (the two VIPs and the SCAN) should NOT respond — that is the expected state.
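The expected pre-install reachability can be scripted as a quick check. This is a sketch, not part of the original procedure: the hostnames match the /etc/hosts above, and ping option syntax may differ slightly between AIX and Linux:

```shell
# Pre-install reachability check: the 4 public/private names must answer;
# the 2 VIPs and the SCAN must NOT be up until Grid Infrastructure starts.
check_hosts() {
  for h in LHRTEST1101 LHRTEST2101 LHRTEST1101-priv LHRTEST2101-priv; do
    ping -c 1 "$h" >/dev/null 2>&1 && echo "$h reachable (ok)" \
                                   || echo "$h unreachable (PROBLEM)"
  done
  for h in LHRTEST1101-vip LHRTEST2101-vip LHRTEST2101-scan; do
    ping -c 1 "$h" >/dev/null 2>&1 && echo "$h reachable (PROBLEM)" \
                                   || echo "$h unreachable (ok)"
  done
}
check_hosts
```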

Hardware and Software Environment Checks

For details on the hardware environment and OS parameter changes, refer to:

Taking LHRTEST2101 as an example:

[LHRTEST2101:root]/]> getconf REAL_MEMORY

4194304

[LHRTEST2101:root]/]> /usr/sbin/lsattr -E -l sys0 -a realmem

realmem 4194304 Amount of usable physical memory in Kbytes False

[LHRTEST2101:root]/]> lsps -a

Page Space Physical Volume Volume Group Size %Used Active Auto Type Chksum

hd6 hdisk0 rootvg 8192MB 0 yes yes lv 0

[LHRTEST2101:root]/]> getconf HARDWARE_BITMODE

64

[LHRTEST2101:root]/]> bootinfo -K

64

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> df -g

Filesystem GB blocks Free %Used Iused %Iused Mounted on

/dev/hd4 4.25 4.00 6% 12709 2% /

/dev/hd2 10.00 4.57 55% 118820 11% /usr

/dev/hd9var 4.50 4.24 6% 1178 1% /var

/dev/hd3 4.25 4.23 1% 172 1% /tmp

/dev/hd1 1.00 1.00 1% 77 1% /home

/dev/hd11admin 0.25 0.25 1% 7 1% /admin

/proc - - - - - /proc

/dev/hd10opt 4.50 4.37 3% 2567 1% /opt

/dev/livedump 1.00 1.00 1% 6 1% /var/adm/ras/livedump

/dev/Plv_install 1.00 1.00 1% 4 1% /install

/dev/Plv_mtool 1.00 1.00 1% 4 1% /mtool

/dev/Plv_audit 2.00 1.99 1% 5 1% /audit

/dev/Plv_ftplog 1.00 1.00 1% 5 1% /ftplog

/dev/Tlv_bocnet 50.00 49.99 1% 4 1% /bocnet

/dev/Tlv_WebSphere 10.00 5.71 43% 45590 4% /WebSphere

/dev/TLV_TEST_DATA 100.00 99.98 1% 7 1% /lhr

/dev/tlv_softtmp 30.00 20.30 33% 5639 1% /softtmp

ZTDNETAP3:/nfs 1240.00 14.39 99% 513017 14% /nfs

/dev/tlv_u01 50.00 32.90 35% 51714 1% /u01

[LHRTEST2101:root]/]> cat /etc/.init.state

2

[LHRTEST2101:root]/]> oslevel -s

7100-03-03-1415

[LHRTEST1101:root]/]> lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte

Fileset Level State Description

----------------------------------------------------------------------------

Path: /usr/lib/objrepos

bos.adt.base 7.1.3.15 COMMITTED Base Application Development

Toolkit

bos.adt.lib 7.1.2.15 COMMITTED Base Application Development

Libraries

bos.adt.libm 7.1.3.0 COMMITTED Base Application Development

Math Library

bos.perf.libperfstat 7.1.3.15 COMMITTED Performance Statistics Library

Interface

bos.perf.perfstat 7.1.3.15 COMMITTED Performance Statistics

Interface

bos.perf.proctools 7.1.3.15 COMMITTED Proc Filesystem Tools

rsct.basic.rte 3.1.5.3 COMMITTED RSCT Basic Function

rsct.compat.clients.rte 3.1.5.0 COMMITTED RSCT Event Management Client

Function

xlC.aix61.rte 12.1.0.1 COMMITTED IBM XL C++ Runtime for AIX 6.1

and 7.1

Path: /etc/objrepos

bos.adt.base 7.1.3.15 COMMITTED Base Application Development

Toolkit

bos.perf.libperfstat 7.1.3.15 COMMITTED Performance Statistics Library

Interface

bos.perf.perfstat 7.1.3.15 COMMITTED Performance Statistics

Interface

rsct.basic.rte 3.1.5.3 COMMITTED RSCT Basic Function

[LHRTEST1101:root]/]>

Swap space should be at least 4 GB. If it is below 3 GB, run chps -s 20 hd6 to grow the paging space by 20 PPs, as in this example:

[LHRTEST1101:root]/]> lsps -a

Page Space Physical Volume Volume Group Size %Used Active Auto Type Chksum

hd6 hdisk0 rootvg 8192MB 0 yes yes lv 0

[LHRTEST1101:root]/]> vmstat

System configuration: lcpu=128 mem=49152MB ent=9.00

kthr memory page faults cpu

----- ----------- ------------------------ ------------ -----------------------

r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec

1 1 2678713 4864613 0 0 0 0 0 0 39 23328 9527 0 0 99 0 0.05 0.6

[LHRTEST1101:root]/]> chps -s 20 hd6

[LHRTEST1101:root]/]> lsps -a

Page Space Physical Volume Volume Group Size %Used Active Auto Type Chksum

hd6 hdisk0 rootvg 13312MB 0 yes yes lv 0

===> 8192 + 20*256 = 13312 (MB)
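The arithmetic behind chps is simple: it adds N physical partitions (PPs), and with rootvg's 256 MB PP size (see the lsvg rootvg output below) 20 PPs add 5120 MB:

```shell
# chps -s N grows a paging space by N PPs of the volume group's PP size.
pp_size_mb=256   # rootvg PP SIZE, per lsvg rootvg
old_mb=8192      # hd6 size before chps
new_mb=$((old_mb + 20 * pp_size_mb))
echo "$new_mb"   # 13312, matching the lsps -a output above
```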

[LHRTEST1101:root]/]> lsvg rootvg

VOLUME GROUP: rootvg VG IDENTIFIER: 00c49fb400004c000000015234fcc9de

VG STATE: active PP SIZE: 256 megabyte(s)

VG PERMISSION: read/write TOTAL PPs: 512 (131072 megabytes)

MAX LVs: 256 FREE PPs: 222 (56832 megabytes)

LVs: 17 USED PPs: 290 (74240 megabytes)

OPEN LVs: 16 QUORUM: 2 (Enabled)

TOTAL PVs: 1 VG DESCRIPTORS: 2

STALE PVs: 0 STALE PPs: 0

ACTIVE PVs: 1 AUTO ON: yes

MAX PPs per VG: 32512

MAX PPs per PV: 1016 MAX PVs: 32

LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no

HOT SPARE: no BB POLICY: relocatable

PV RESTRICTION: none INFINITE RETRY: no

DISK BLOCK SIZE: 512 CRITICAL VG: no

[LHRTEST1101:root]/]>

Operating System Parameter Tuning

Shell script:

vi os_pre_lhr.sh

_chlimit(){

[ -f /etc/security/limits.org ] || { cp -p /etc/security/limits /etc/security/limits.org; }

cat /etc/security/limits.org |egrep -vp "root|oracle|grid" > /etc/security/limits

echo "root:

core = -1

cpu = -1

data = -1

fsize = -1

nofiles = -1

rss = -1

stack = -1

core_hard = -1

cpu_hard = -1

data_hard = -1

fsize_hard = -1

nofiles_hard = -1

rss_hard = -1

stack_hard = -1

oracle:

core = -1

cpu = -1

data = -1

fsize = -1

nofiles = -1

rss = -1

stack = -1

cpu_hard = -1

core_hard = -1

data_hard = -1

fsize_hard = -1

nofiles_hard = -1

rss_hard = -1

stack_hard = -1

grid:

core = -1

cpu = -1

data = -1

fsize = -1

nofiles = -1

rss = -1

stack = -1

core_hard = -1

cpu_hard = -1

data_hard = -1

fsize_hard = -1

nofiles_hard = -1

rss_hard = -1

stack_hard = -1" >> /etc/security/limits

}

_chospara(){

vmo -p -o minperm%=3

echo "yes"|vmo -p -o maxperm%=90

echo "yes" |vmo -p -o maxclient%=90

echo "yes"|vmo -p -o lru_file_repage=0

echo "yes"|vmo -p -o strict_maxclient=1

echo "yes" |vmo -p -o strict_maxperm=0

echo "yes\nno" |vmo -r -o page_steal_method=1;

ioo -a|egrep -w "aio_maxreqs|aio_maxservers|aio_minservers"

/usr/sbin/chdev -l sys0 -a maxuproc=16384 -a ncargs=256 -a minpout=4096 -a maxpout=8193 -a fullcore=true

echo "check sys0 16384 256"

lsattr -El sys0 |egrep "maxuproc|ncargs|pout|fullcore" |awk '{print $1,$2}'

/usr/sbin/no -p -o sb_max=41943040

/usr/sbin/no -p -o udp_sendspace=2097152

/usr/sbin/no -p -o udp_recvspace=20971520

/usr/sbin/no -p -o tcp_sendspace=1048576

/usr/sbin/no -p -o tcp_recvspace=1048576

/usr/sbin/no -p -o rfc1323=1

/usr/sbin/no -r -o ipqmaxlen=512

/usr/sbin/no -p -o clean_partial_conns=1

cp -p /etc/environment /etc/environment.`date '+%Y%m%d'`

cat /etc/environment.`date '+%Y%m%d'` |awk '/^TZ=/{print "TZ=BEIST-8"} !/^TZ=/{print}' >/etc/environment

_chlimit

}

_chlimit

_chospara

stopsrc -s xntpd

#startsrc -s xntpd -a "-x"

mv /etc/ntp.conf /etc/ntp.conf.org

sh os_pre_lhr.sh

Note: all parameter changes should follow the official documentation: Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for IBM AIX on POWER Systems (64-Bit), e48294.pdf (the web version works equally well); see the appendix at the end for reference.
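The _chlimit function above relies on AIX's `egrep -vp` (paragraph mode) to drop any existing root/oracle/grid stanzas from /etc/security/limits before appending fresh ones. A portable sketch of the same idea with awk paragraph records (the demo file path is made up for illustration):

```shell
# Build a tiny stanza file, then filter out the root/oracle/grid stanzas
# the way egrep -vp does, using awk's paragraph mode (RS="").
f=/tmp/limits.demo
printf 'default:\n\tfsize = 2097151\n\nroot:\n\tfsize = -1\n' > "$f"
awk 'BEGIN{RS=""; ORS="\n\n"} $0 !~ /^(root|oracle|grid):/' "$f"
```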

Creating File Systems

/usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

for diskname in `lspv|cut -f "1" -d ' '`;do

echo "/dev/r$diskname" `getconf DISK_SIZE /dev/r$diskname`

done

lspv

mkvg -S -y t_u01_vg -s 128 hdisk22

mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

mount /u01

mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

mount /softtmp
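A quick size sanity-check on the commands above: mkvg -s 128 sets a 128 MB PP size, so the PP counts passed to mklv translate directly to filesystem capacity:

```shell
# mklv allocates in physical partitions; with a 128 MB PP size,
# 400 PPs -> /u01 and 240 PPs -> /softtmp work out to:
pp_mb=128
u01_gb=$((400 * pp_mb / 1024))
softtmp_gb=$((240 * pp_mb / 1024))
echo "/u01=${u01_gb}GB /softtmp=${softtmp_gb}GB"   # 50 GB and 30 GB, as in df -g below
```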

Taking LHRTEST2101 as an example:

[LHRTEST2101:root]/]> /usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

Inquiry utility, Version V7.3-1214 (Rev 0.1) (SIL Version V7.3.0.1 (Edit Level 1214)

Copyright (C) by EMC Corporation, all rights reserved.

For help type inq -h.

.........................

------------------------------------------------------------------------------------------------

DEVICE :VEND :PROD :REV :SER NUM :Volume :CAP(kb) :SYMM ID

------------------------------------------------------------------------------------------------

/dev/rhdisk0 :AIX :VDASD :0001 :hdisk5 : 00000: 134246400 :N/A

/dev/rhdisk1 :EMC :SYMMETRIX :5876 :640250a000 : 0250A: 2880 :000492600664

/dev/rhdisk2 :EMC :SYMMETRIX :5876 :640250b000 : 0250B: 2880 :000492600664

/dev/rhdisk3 :EMC :SYMMETRIX :5876 :640250c000 : 0250C: 2880 :000492600664

/dev/rhdisk4 :EMC :SYMMETRIX :5876 :640250d000 : 0250D: 2880 :000492600664

/dev/rhdisk5 :EMC :SYMMETRIX :5876 :64026f6000 : 026F6: 134246400 :000492600664

/dev/rhdisk6 :EMC :SYMMETRIX :5876 :64026fe000 : 026FE: 134246400 :000492600664

/dev/rhdisk7 :EMC :SYMMETRIX :5876 :6402706000 : 02706: 134246400 :000492600664

/dev/rhdisk8 :EMC :SYMMETRIX :5876 :640270e000 : 0270E: 134246400 :000492600664

/dev/rhdisk9 :EMC :SYMMETRIX :5876 :6402716000 : 02716: 134246400 :000492600664

/dev/rhdisk10 :EMC :SYMMETRIX :5876 :640271e000 : 0271E: 134246400 :000492600664

/dev/rhdisk11 :EMC :SYMMETRIX :5876 :6402726000 : 02726: 134246400 :000492600664

/dev/rhdisk12 :EMC :SYMMETRIX :5876 :640272e000 : 0272E: 134246400 :000492600664

/dev/rhdisk13 :EMC :SYMMETRIX :5876 :6402736000 : 02736: 134246400 :000492600664

/dev/rhdisk14 :EMC :SYMMETRIX :5876 :640273e000 : 0273E: 134246400 :000492600664

/dev/rhdisk15 :EMC :SYMMETRIX :5876 :6402746000 : 02746: 134246400 :000492600664

/dev/rhdisk16 :EMC :SYMMETRIX :5876 :640274e000 : 0274E: 134246400 :000492600664

/dev/rhdisk17 :EMC :SYMMETRIX :5876 :6402756000 : 02756: 134246400 :000492600664

/dev/rhdisk18 :EMC :SYMMETRIX :5876 :640275e000 : 0275E: 134246400 :000492600664

/dev/rhdisk19 :EMC :SYMMETRIX :5876 :6402766000 : 02766: 134246400 :000492600664

/dev/rhdisk20 :EMC :SYMMETRIX :5876 :640276e000 : 0276E: 134246400 :000492600664

/dev/rhdisk21 :EMC :SYMMETRIX :5876 :6402776000 : 02776: 134246400 :000492600664

/dev/rhdisk22 :EMC :SYMMETRIX :5876 :640277e000 : 0277E: 134246400 :000492600664

/dev/rhdisk23 :EMC :SYMMETRIX :5876 :6402786000 : 02786: 134246400 :000492600664

/dev/rhdisk24 :EMC :SYMMETRIX :5876 :640278e000 : 0278E: 134246400 :000492600664

[LHRTEST2101:root]/]> lspv

hdisk0 00c49fc434da2434 rootvg active

hdisk1 00c49fc461fc76b2 None

hdisk2 00c49fc461fc76f5 None

hdisk3 00c49fc461fc7739 None

hdisk4 00c49fc461fc777a None

hdisk5 00c49fc461fc77bd None

hdisk6 00c49fc461fc77fe None

hdisk7 00c49fc461fc783f None

hdisk8 00c49fc461fc7880 None

hdisk9 00c49fc461fc78c5 None

hdisk10 00c49fc461fc7908 None

hdisk11 00c49fc461fc7958 None

hdisk12 00c49fc461fc79a0 None

hdisk13 00c49fc461fc79ea None

hdisk14 00c49fc461fc7a2f None

hdisk15 00c49fc461fc7a71 None

hdisk16 00c49fc461fc7ab1 None

hdisk17 00c49fb4e3a8fc12 None

hdisk18 00c49fc461fc7b3b T_NET_APP_vg active

hdisk19 00c49fc461fc7b7d None

hdisk20 00c49fc461fc7bbe None

hdisk21 00c49fc461fc7bff None

hdisk22 00c49fc461fc7c40 None

hdisk23 00c49fc461fc7c88 T_TEST_LHR_VG active

hdisk24 00c49fc461fc7cca T_TEST_LHR_VG active

[LHRTEST2101:root]/]> for diskname in `lspv|cut -f "1" -d ' '`;do

echo "/dev/r$diskname" `getconf DISK_SIZE /dev/r$diskname`

done

/dev/rhdisk0 131100

/dev/rhdisk1 2

/dev/rhdisk2 2

/dev/rhdisk3 2

/dev/rhdisk4 2

/dev/rhdisk5 131100

/dev/rhdisk6 131100

/dev/rhdisk7 131100

/dev/rhdisk8 131100

/dev/rhdisk9 131100

/dev/rhdisk10 131100

/dev/rhdisk11 131100

/dev/rhdisk12 131100

/dev/rhdisk13 131100

/dev/rhdisk14 131100

/dev/rhdisk15 131100

/dev/rhdisk16 131100

/dev/rhdisk17 131100

/dev/rhdisk18 131100

/dev/rhdisk19 131100

/dev/rhdisk20 131100

/dev/rhdisk21 131100

/dev/rhdisk22 131100

/dev/rhdisk23 131100

/dev/rhdisk24 131100

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> df -g

Filesystem GB blocks Free %Used Iused %Iused Mounted on

/dev/hd4 4.25 4.00 6% 12643 2% /

/dev/hd2 10.00 4.58 55% 118785 10% /usr

/dev/hd9var 4.50 4.08 10% 1175 1% /var

/dev/hd3 4.25 3.75 12% 1717 1% /tmp

/dev/hd1 1.00 1.00 1% 17 1% /home

/dev/hd11admin 0.25 0.25 1% 7 1% /admin

/proc - - - - - /proc

/dev/hd10opt 4.50 4.37 3% 2559 1% /opt

/dev/livedump 1.00 1.00 1% 6 1% /var/adm/ras/livedump

/dev/Plv_install 1.00 1.00 1% 4 1% /install

/dev/Plv_mtool 1.00 1.00 1% 4 1% /mtool

/dev/Plv_audit 2.00 1.99 1% 5 1% /audit

/dev/Plv_ftplog 1.00 1.00 1% 5 1% /ftplog

/dev/Tlv_bocnet 50.00 49.99 1% 4 1% /bocnet

/dev/Tlv_WebSphere 10.00 5.71 43% 45590 4% /WebSphere

/dev/TLV_TEST_DATA 100.00 99.98 1% 7 1% /lhr

ZTDNETAP3:/nfs 1240.00 14.39 99% 512924 14% /nfs

ZTINIMSERVER:/sharebkup 5500.00 1258.99 78% 2495764 1% /sharebkup

[LHRTEST2101:root]/]> mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

tlv_u01

[LHRTEST2101:root]/]> crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

File system created successfully.

52426996 kilobytes total disk space.

New File System size is 104857600

[LHRTEST2101:root]/]> mount /u01

[LHRTEST2101:root]/]> df -g

Filesystem GB blocks Free %Used Iused %Iused Mounted on

/dev/hd4 4.25 4.00 6% 12648 2% /

/dev/hd2 10.00 4.58 55% 118785 10% /usr

/dev/hd9var 4.50 4.08 10% 1176 1% /var

/dev/hd3 4.25 3.75 12% 1717 1% /tmp

/dev/hd1 1.00 1.00 1% 17 1% /home

/dev/hd11admin 0.25 0.25 1% 7 1% /admin

/proc - - - - - /proc

/dev/hd10opt 4.50 4.37 3% 2559 1% /opt

/dev/livedump 1.00 1.00 1% 6 1% /var/adm/ras/livedump

/dev/Plv_install 1.00 1.00 1% 4 1% /install

/dev/Plv_mtool 1.00 1.00 1% 4 1% /mtool

/dev/Plv_audit 2.00 1.99 1% 5 1% /audit

/dev/Plv_ftplog 1.00 1.00 1% 5 1% /ftplog

/dev/Tlv_bocnet 50.00 49.99 1% 4 1% /bocnet

/dev/Tlv_WebSphere 10.00 5.71 43% 45590 4% /WebSphere

/dev/TLV_TEST_DATA 100.00 99.98 1% 7 1% /lhr

ZTDNETAP3:/nfs 1240.00 14.39 99% 512924 14% /nfs

ZTINIMSERVER:/sharebkup 5500.00 1258.99 78% 2495764 1% /sharebkup

/dev/tlv_u01 50.00 49.99 1% 4 1% /u01

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

tlv_softtmp

[LHRTEST2101:root]/]> crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

File system created successfully.

31456116 kilobytes total disk space.

New File System size is 62914560

[LHRTEST2101:root]/]> mount /softtmp

[LHRTEST2101:root]/]> df -g

Filesystem GB blocks Free %Used Iused %Iused Mounted on

/dev/hd4 4.25 4.00 6% 12650 2% /

/dev/hd2 10.00 4.58 55% 118785 10% /usr

/dev/hd9var 4.50 4.08 10% 1177 1% /var

/dev/hd3 4.25 3.75 12% 1717 1% /tmp

/dev/hd1 1.00 1.00 1% 17 1% /home

/dev/hd11admin 0.25 0.25 1% 7 1% /admin

/proc - - - - - /proc

/dev/hd10opt 4.50 4.37 3% 2559 1% /opt

/dev/livedump 1.00 1.00 1% 6 1% /var/adm/ras/livedump

/dev/Plv_install 1.00 1.00 1% 4 1% /install

/dev/Plv_mtool 1.00 1.00 1% 4 1% /mtool

/dev/Plv_audit 2.00 1.99 1% 5 1% /audit

/dev/Plv_ftplog 1.00 1.00 1% 5 1% /ftplog

/dev/Tlv_bocnet 50.00 49.99 1% 4 1% /bocnet

/dev/Tlv_WebSphere 10.00 5.71 43% 45590 4% /WebSphere

/dev/TLV_TEST_DATA 100.00 99.98 1% 7 1% /lhr

ZTDNETAP3:/nfs 1240.00 14.39 99% 512924 14% /nfs

ZTINIMSERVER:/sharebkup 5500.00 1258.99 78% 2495764 1% /sharebkup

/dev/tlv_u01 50.00 49.99 1% 4 1% /u01

/dev/tlv_softtmp 30.00 30.00 1% 4 1% /softtmp

[LHRTEST2101:root]/]>

When creating volume groups, take care not to grab a disk that is already in use by another host or volume group; experienced AIX administrators will know why. No more on that here.

Creating Installation Directories

Copy, paste, and run directly:

mkdir -p /u01/app/11.2.0/grid

chmod -R 755 /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

chmod -R 755 /u01/app/grid

mkdir -p /u01/app/oracle

chmod -R 755 /u01/app/oracle

[LHRTEST2101:root]/]> mkdir -p /u01/app/11.2.0/grid

[LHRTEST2101:root]/]> chmod -R 755 /u01/app/11.2.0/grid

[LHRTEST2101:root]/]> mkdir -p /u01/app/grid

[LHRTEST2101:root]/]> chmod -R 755 /u01/app/grid

[LHRTEST2101:root]/]> mkdir -p /u01/app/oracle

[LHRTEST2101:root]/]> chmod -R 755 /u01/app/oracle

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> cd /u01/app

[LHRTEST2101:root]/u01/app]> l

total 0

drwxr-xr-x 3 root system 256 Mar 08 16:11 11.2.0

drwxr-xr-x 2 root system 256 Mar 08 16:11 grid

drwxr-xr-x 2 root system 256 Mar 08 16:11 oracle

[LHRTEST2101:root]/u01/app]>

Creating Users and Groups

Copy, paste, and run directly:

mkgroup -A id=1024 dba

mkgroup -A id=1025 asmadmin

mkgroup -A id=1026 asmdba

mkgroup -A id=1027 asmoper

mkgroup -A id=1028 oinstall

mkgroup -A id=1029 oper

mkuser -a id=1025 pgrp=oinstall groups=dba,asmadmin,asmdba,asmoper,oinstall home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

echo "grid:grid" |chpasswd

pwdadm -c grid

mkuser -a id=1024 pgrp=dba groups=dba,asmadmin,asmdba,asmoper,oinstall home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

echo "oracle:oracle" |chpasswd

pwdadm -c oracle

chown -R grid:dba /u01/app/11.2.0

chown grid:dba /u01/app

chown grid:dba /u01/app/grid

chown -R oracle:dba /u01/app/oracle

chown oracle:dba /u01

/usr/sbin/lsuser -a capabilities grid

/usr/sbin/lsuser -a capabilities oracle

[LHRTEST2101:root]/u01/app]> mkgroup -A id=1024 dba

[LHRTEST2101:root]/u01/app]> mkuser -a id=1025 pgrp=dba groups=dba home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

[LHRTEST2101:root]/u01/app]> passwd grid

Changing password for "grid"

grid's New password:

Enter the new password again:

[LHRTEST2101:root]/u01/app]>

[LHRTEST2101:root]/u01/app]> mkuser -a id=1024 pgrp=dba groups=dba home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

[LHRTEST2101:root]/u01/app]> passwd oracle

Changing password for "oracle"

oracle's New password:

Enter the new password again:

[LHRTEST2101:root]/u01/app]> chown -R grid:dba /u01/app/11.2.0

[LHRTEST2101:root]/u01/app]> chown grid:dba /u01/app

[LHRTEST2101:root]/u01/app]> chown grid:dba /u01/app/grid

[LHRTEST2101:root]/u01/app]> chown -R oracle:dba /u01/app/oracle

[LHRTEST2101:root]/u01/app]> chown oracle:dba /u01

[LHRTEST2101:root]/u01/app]> /usr/sbin/lsuser -a capabilities grid

grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[LHRTEST2101:root]/u01/app]> /usr/sbin/lsuser -a capabilities oracle

oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[LHRTEST2101:root]/u01/app]>

Verify on both nodes:

[LHRTEST1101:root]/]> id grid

uid=1025(grid) gid=1028(oinstall) groups=1024(dba),1025(asmadmin),1026(asmdba),1027(asmoper)

[LHRTEST1101:root]/]> id oracle

uid=1024(oracle) gid=1024(dba) groups=1025(asmadmin),1026(asmdba),1027(asmoper),1028(oinstall)

[LHRTEST1101:root]/]>

Configuring the .profile for grid and oracle

--------- Configure on each node separately; note that ORACLE_SID must be set to +ASM1 and +ASM2 respectively

su - grid

vi .profile

umask 022

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_SID=+ASM

export ORACLE_TERM=vt100

export ORACLE_OWNER=grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/11.2.0/dbhome_1/lib32

export LIBPATH=$LIBPATH:/u01/app/oracle/product/11.2.0/dbhome_1/lib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export PATH=$PATH:/bin:/usr/ccs/bin:/usr/bin/X11:$ORACLE_HOME/bin

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

set -o vi

export EDITOR=vi

alias l='ls -l'

export PS1='[$LOGNAME@'`hostname`':$PWD]$ '

export AIXTHREAD_SCOPE=S


export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US


export DISPLAY=20.188.216.97:0.0

su - oracle

vi .profile

umask 022

export ORACLE_SID=ora11g

export ORACLE_BASE=/u01/app/oracle

export GRID_HOME=/u01/app/11.2.0/grid

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH:$ORACLE_HOME/OPatch

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

export ORACLE_OWNER=oracle

set -o vi

export EDITOR=vi

alias l='ls -l'

export AIXTHREAD_SCOPE=S

export ORACLE_TERM=vt100

export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US

export PS1='[$LOGNAME@'`hostname`':$PWD]$ '

export DISPLAY=20.188.216.97:0.0

Run . ~/.profile to apply the environment variables to the current session:

[LHRTEST1101:root]/]> . ~/.profile
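As an optional convenience (an assumption on my part, not from the original procedure), the per-node ORACLE_SID can be derived from the hostname so one .profile can be copied to both nodes instead of editing +ASM1/+ASM2 by hand:

```shell
# Map a hostname to its ASM instance SID; names match the nodes above.
sid_for() {
  case "$1" in
    LHRTEST1101) echo '+ASM1' ;;
    LHRTEST2101) echo '+ASM2' ;;
    *)           echo '+ASM'  ;;   # fallback, as in the profile above
  esac
}
export ORACLE_SID=$(sid_for "$(hostname)")
```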

Preparing ASM Disks

Run on both nodes. Change the ownership and permissions of the ASM disks; otherwise root.sh fails with errors such as:

Disk Group OCR creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/rhdisk10' matches no disks

ORA-15025: could not open disk "/dev/rhdisk10"

ORA-15056: additional error message

chown grid:asmadmin /dev/rhdisk10

chown grid:asmadmin /dev/rhdisk11

chmod 660 /dev/rhdisk10

chmod 660 /dev/rhdisk11

lquerypv -h /dev/rhdisk10

chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

chdev -l hdisk10 -a pv=clear

chdev -l hdisk11 -a pv=clear

lsattr -El hdisk10

Check:

lsattr -El hdisk10 | grep reserve_

# If the attribute shown above is reserve_lock, then run:

chdev -l hdisk10 -a reserve_lock=no
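With more ASM candidate disks, it is easier to generate the chdev/chown commands than to type them one by one. This sketch only prints the commands for review; run the printed lines as root on AIX once verified:

```shell
# Emit the attribute and permission commands for each named disk.
gen_disk_cmds() {
  for d in "$@"; do
    echo "chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32"
    echo "chown grid:asmadmin /dev/r$d && chmod 660 /dev/r$d"
  done
}
gen_disk_cmds hdisk10 hdisk11
```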

[LHRTEST2101:root]/]> lsattr -El hdisk10

PCM PCM/friend/MSYMM_VRAID Path Control Module True

PR_key_value none Persistant Reserve Key Value True

algorithm fail_over Algorithm True

clr_q yes Device CLEARS its Queue on error True

dist_err_pcnt 0 Distributed Error Percentage True

dist_tw_width 50 Distributed Error Sample Time True

hcheck_cmd inquiry Health Check Command True

hcheck_interval 60 Health Check Interval True

hcheck_mode nonactive Health Check Mode True

location Location Label True

lun_id 0x9000000000000 Logical Unit Number ID False

lun_reset_spt yes FC Forced Open LUN True

max_coalesce 0x100000 Maximum Coalesce Size True

max_retries 5 Maximum Number of Retries True

max_transfer 0x100000 Maximum TRANSFER Size True

node_name 0x50000978080a6000 FC Node Name False

pvid 00c49fc461fc79080000000000000000 Physical volume identifier False

q_err no Use QERR bit True

q_type simple Queue TYPE True

queue_depth 32 Queue DEPTH True

reserve_policy single_path Reserve Policy True

rw_timeout 40 READ/WRITE time out value True

scsi_id 0xce0040 SCSI ID False

start_timeout 180 START UNIT time out value True

timeout_policy retry_path Timeout Policy True

ww_name 0x50000978080a61d1 FC World Wide Name False

[LHRTEST2101:root]/]> chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk10 changed

[LHRTEST2101:root]/]> chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk11 changed

[LHRTEST2101:root]/]> lsattr -El hdisk11

PCM PCM/friend/MSYMM_VRAID Path Control Module True

PR_key_value none Persistant Reserve Key Value True

algorithm round_robin Algorithm True

clr_q yes Device CLEARS its Queue on error True

dist_err_pcnt 0 Distributed Error Percentage True

dist_tw_width 50 Distributed Error Sample Time True

hcheck_cmd inquiry Health Check Command True

hcheck_interval 60 Health Check Interval True

hcheck_mode nonactive Health Check Mode True

location Location Label True

lun_id 0xa000000000000 Logical Unit Number ID False

lun_reset_spt yes FC Forced Open LUN True

max_coalesce 0x100000 Maximum Coalesce Size True

max_retries 5 Maximum Number of Retries True

max_transfer 0x100000 Maximum TRANSFER Size True

node_name 0x50000978080a6000 FC Node Name False

pvid 00c49fc461fc79580000000000000000 Physical volume identifier False

q_err no Use QERR bit True

q_type simple Queue TYPE True

queue_depth 32 Queue DEPTH True

reserve_policy no_reserve Reserve Policy True

rw_timeout 40 READ/WRITE time out value True

scsi_id 0xce0040 SCSI ID False

start_timeout 180 START UNIT time out value True

timeout_policy retry_path Timeout Policy True

ww_name 0x50000978080a61d1 FC World Wide Name False

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> lquerypv -h /dev/rhdisk10

00000000 00000000 00000000 00000000 00000000 |................|

00000010 00000000 00000000 00000000 00000000 |................|

00000020 00000000 00000000 00000000 00000000 |................|

00000030 00000000 00000000 00000000 00000000 |................|

00000040 00000000 00000000 00000000 00000000 |................|

00000050 00000000 00000000 00000000 00000000 |................|

00000060 00000000 00000000 00000000 00000000 |................|

00000070 00000000 00000000 00000000 00000000 |................|

00000080 00000000 00000000 00000000 00000000 |................|

00000090 00000000 00000000 00000000 00000000 |................|

000000A0 00000000 00000000 00000000 00000000 |................|

000000B0 00000000 00000000 00000000 00000000 |................|

000000C0 00000000 00000000 00000000 00000000 |................|

000000D0 00000000 00000000 00000000 00000000 |................|

000000E0 00000000 00000000 00000000 00000000 |................|

000000F0 00000000 00000000 00000000 00000000 |................|

Configuring SSH Connectivity

The Oracle Database 11gR2 OUI uses the ssh and scp commands during installation, so SSH user equivalence must be configured across the cluster. SSH can be configured from the OUI screens, and the trust relationship can also be configured automatically during the grid installation.

It can be set up with a shell script or by hand; the shell script approach is recommended.

Method 1: shell script (run on both nodes)

Note: on AIX, the Oracle 11gR2 grid installer's automatic SSH setup fails because the command paths Oracle invokes do not match the actual command paths on AIX. You can either modify the installer's sshsetup.sh script, or create symbolic links at the paths Oracle expects; the installer reports the specific paths during installation.

First, install the OpenSSH software on both machines.

The installation itself is not covered in detail here: download openssh and openssl, install openssl first, then openssh.

Alternatively, use the AIX installation media: run smitty install, select all the ssh packages, and after installation verify them:

# lslpp -l | grep ssh

Create links for ssh and scp; first check whether they already exist:

ls -l /usr/local/bin/ssh

ls -l /usr/local/bin/scp

If they do not exist, create them:

/bin/ln -s /usr/bin/ssh /usr/local/bin/ssh

/bin/ln -s /usr/bin/scp /usr/local/bin/scp

[root@rac01 ~]# /bin/ln -s /usr/bin/ssh /usr/local/bin/ssh

[root@rac01 ~]# /bin/ln -s /usr/bin/scp /usr/local/bin/scp
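An idempotent variant of the link creation (a sketch: each link is created only if missing, so the snippet is safe to re-run; `$bindir` stands in for /usr/local/bin here so the demo does not touch system paths):

```shell
# Create ssh/scp links only when absent; re-running is harmless.
bindir=/tmp/demo_local_bin   # stand-in for /usr/local/bin in this sketch
mkdir -p "$bindir"
for cmd in ssh scp; do
  [ -L "$bindir/$cmd" ] || ln -s "/usr/bin/$cmd" "$bindir/$cmd"
done
ls "$bindir"
```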

[LHRTEST2101:root]/softtmp/grid/sshsetup]> pwd

/softtmp/grid/sshsetup

[LHRTEST2101:root]/softtmp/grid/sshsetup]> l

total 64

-rwxr-xr-x 1 root system 32343 Dec 17 2009 sshUserSetup.sh

[LHRTEST2101:root]/softtmp/grid/sshsetup]>

[LHRTEST2101:root]/softtmp/grid/sshsetup]> lslpp -l | grep ssh

openssh.base.client 6.0.0.6103 COMMITTED Open Secure Shell Commands

openssh.base.server 6.0.0.6103 COMMITTED Open Secure Shell Server

openssh.license 6.0.0.6103 COMMITTED Open Secure Shell License

openssh.man.en_US 6.0.0.6103 COMMITTED Open Secure Shell

openssh.msg.CA_ES 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.CS_CZ 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.DE_DE 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.EN_US 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ES_ES 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.FR_FR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.HU_HU 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.IT_IT 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.JA_JP 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.Ja_JP 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.KO_KR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.PL_PL 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.PT_BR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.RU_RU 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.SK_SK 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ZH_CN 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ZH_TW 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.Zh_CN 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.Zh_TW 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ca_ES 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.cs_CZ 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.de_DE 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.en_US 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.es_ES 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.fr_FR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.hu_HU 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.it_IT 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ja_JP 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ko_KR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.pl_PL 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.pt_BR 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.ru_RU 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.sk_SK 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.zh_CN 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.msg.zh_TW 6.0.0.6103 COMMITTED Open Secure Shell Messages -

openssh.base.client 6.0.0.6103 COMMITTED Open Secure Shell Commands

openssh.base.server 6.0.0.6103 COMMITTED Open Secure Shell Server

[LHRTEST2101:root]/softtmp/grid/sshsetup]>

[LHRTEST2101:root]/softtmp/grid/sshsetup]> ls -l /usr/local/bin/ssh

lrwxrwxrwx 1 root system 12 Mar 08 17:24 /usr/local/bin/ssh -> /usr/bin/ssh

[LHRTEST2101:root]/softtmp/grid/sshsetup]> ls -l /usr/local/bin/scp

lrwxrwxrwx 1 root system 12 Mar 08 17:24 /usr/local/bin/scp -> /usr/bin/scp

[LHRTEST2101:root]/softtmp/grid/sshsetup]>

The shell script follows. Note: adjust the values below as needed — oth is the hostname of the other node. Run cfgssh.sh to configure, and testssh.sh to test SSH connectivity; the scripts work on both AIX and Linux. If configuring only one node, set oth to the same value as hn:

vi cfgssh.sh

echo "config ssh..."

grep "^LoginGraceTime 0" /etc/ssh/sshd_config

[ $? -ne 0 ] && { cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.org; echo "LoginGraceTime 0" >>/etc/ssh/sshd_config; }

export hn=`hostname`

export oth=LHRTEST2101

export p_pwd=`pwd`

su - grid -c "$p_pwd/sshUserSetup.sh -user grid -hosts $oth -noPromptPassphrase"

su - grid -c "ssh $hn hostname"

su - grid -c "ssh $oth hostname"

su - oracle -c "$p_pwd/sshUserSetup.sh -user oracle -hosts $oth -noPromptPassphrase"

su - oracle -c "ssh $hn hostname"

su - oracle -c "ssh $oth hostname"

vi sshUserSetup.sh

vi testssh.sh

export hn=`hostname`

export oth=LHRTEST2101

su - grid -c "ssh $hn pwd"

su - grid -c "ssh $oth pwd"

su - oracle -c "ssh $hn pwd"

su - oracle -c "ssh $oth pwd"

chmod 777 *.sh

sh cfgssh.sh

Method 2: manual configuration

Configure SSH separately for the grid and oracle users:

----------------------------------------------------------------------------------

[root@node1 : /]# su - oracle

[oracle@node1 ~]$ mkdir ~/.ssh

[oracle@node1 ~]$ chmod 700 ~/.ssh

[oracle@node1 ~]$ ssh-keygen -t rsa    (press Enter at each prompt to accept the defaults)

[oracle@node1 ~]$ ssh-keygen -t dsa    (press Enter at each prompt to accept the defaults)

-----------------------------------------------------------------------------------

[root@node2 : /]# su - oracle

[oracle@node2 ~]$ mkdir ~/.ssh

[oracle@node2 ~]$ chmod 700 ~/.ssh

[oracle@node2 ~]$ ssh-keygen -t rsa    (press Enter at each prompt to accept the defaults)

[oracle@node2 ~]$ ssh-keygen -t dsa    (press Enter at each prompt to accept the defaults)

-----------------------------------------------------------------------------------

[oracle@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ ssh LHRTEST2101 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    (enter node2's password)

[oracle@node1 ~]$ ssh LHRTEST2101 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys    (enter node2's password)

[oracle@node1 ~]$ scp ~/.ssh/authorized_keys LHRTEST2101:~/.ssh/authorized_keys    (enter node2's password)

-----------------------------------------------------------------------------------

Test connectivity between the two nodes:

[oracle@node1 ~]$ ssh LHRTEST1101 date

[oracle@node1 ~]$ ssh LHRTEST2101 date

[oracle@node1 ~]$ ssh LHRTEST1101-priv date

[oracle@node1 ~]$ ssh LHRTEST2101-priv date

[oracle@node2 ~]$ ssh LHRTEST1101 date

[oracle@node2 ~]$ ssh LHRTEST2101 date

[oracle@node2 ~]$ ssh LHRTEST1101-priv date

[oracle@node2 ~]$ ssh LHRTEST2101-priv date
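The eight checks above can also be generated as a batch instead of typed one by one — a sketch, using this article's hostnames; `BatchMode=yes` makes ssh fail immediately rather than prompt when a key is missing:

```shell
#!/bin/sh
# Build the full equivalence-test matrix (both users x all four names)
# as a list of commands; review it, then pipe it to sh as root on each node.
matrix=""
for u in grid oracle; do
  for h in LHRTEST1101 LHRTEST2101 LHRTEST1101-priv LHRTEST2101-priv; do
    matrix="${matrix}su - $u -c \"ssh -o BatchMode=yes $h date\"
"
  done
done
printf '%s' "$matrix"
```

Every generated command should print a date with no password prompt; any prompt or error means equivalence is broken for that user/host pair.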

Grid installation

Prepare the installation media

Upload the files to the /softtmp directory:


[LHRTEST2101:root]/softtmp]> l

total 9644872

drwxr-xr-x 2 root system 256 Mar 08 16:10 lost+found

-rw-r----- 1 root system 1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r----- 1 root system 1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r----- 1 root system 2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[LHRTEST2101:root]/softtmp]> unzip p10404530_112030_AIX64-5L_3of7.zip

Archive: p10404530_112030_AIX64-5L_3of7.zip

creating: grid/

creating: grid/stage/

inflating: grid/stage/shiphomeproperties.xml

creating: grid/stage/Components/

creating: grid/stage/Components/oracle.crs/

creating: grid/stage/Components/oracle.crs/11.2.0.3.0/

creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/

creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/

inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup5.jar

inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup4.jar

inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup3.jar

inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup2.jar

inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup1.jar

creating: grid/stage/Components/oracle.has.crs/

<<<< ...... output omitted for brevity ...... >>>>

inflating: grid/doc/server.11203/E18951-02.mobi

inflating: grid/welcome.html

creating: grid/sshsetup/

inflating: grid/sshsetup/sshUserSetup.sh

inflating: grid/readme.html

[LHRTEST2101:root]/softtmp]>

[LHRTEST2101:root]/softtmp]> l

total 9644880

drwxr-xr-x 9 root system 4096 Oct 28 2011 grid

drwxr-xr-x 2 root system 256 Mar 08 16:10 lost+found

-rw-r----- 1 root system 1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r----- 1 root system 1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r----- 1 root system 2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[LHRTEST2101:root]/softtmp]> cd grid

[LHRTEST2101:root]/softtmp/grid]> l

total 168

drwxr-xr-x 9 root system 4096 Oct 10 2011 doc

drwxr-xr-x 4 root system 4096 Oct 21 2011 install

-rwxr-xr-x 1 root system 28122 Oct 28 2011 readme.html

drwxrwxr-x 2 root system 256 Oct 21 2011 response

drwxrwxr-x 3 root system 256 Oct 21 2011 rootpre

-rwxr-xr-x 1 root system 13369 Sep 22 2010 rootpre.sh

drwxrwxr-x 2 root system 256 Oct 21 2011 rpm

-rwxr-xr-x 1 root system 10006 Oct 21 2011 runInstaller

-rwxrwxr-x 1 root system 4878 May 14 2011 runcluvfy.sh

drwxrwxr-x 2 root system 256 Oct 21 2011 sshsetup

drwxr-xr-x 14 root system 4096 Oct 21 2011 stage

-rw-r--r-- 1 root system 4561 Oct 10 2011 welcome.html

Run the runcluvfy.sh script for pre-installation checks

[grid@LHRTEST2101:/softtmp/grid]$ /softtmp/grid/runcluvfy.sh stage -pre crsinst -n LHRTEST2101,LHRTEST1101 -verbose -fixup

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "LHRTEST2101"

Destination Node Reachable?

------------------------------------ ------------------------

LHRTEST2101 yes

LHRTEST1101 yes

Result: Node reachability check passed from node "LHRTEST2101"

Checking user equivalence...

Check: User equivalence for user "grid"

Node Name Status

------------------------------------ ------------------------

LHRTEST2101 passed

LHRTEST1101 passed

Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Node Name Status

------------------------------------ ------------------------

LHRTEST2101 passed

LHRTEST1101 passed

Verification of the hosts config file successful

Interface information for node "LHRTEST2101"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

en0 20.188.187.158 20.188.187.0 20.188.187.158 20.188.187.1 C6:03:AE:03:97:83 1500

en1 220.188.187.158 220.188.187.0 220.188.187.158 20.188.187.1 C6:03:A7:3E:FE:01 1500

Interface information for node "LHRTEST1101"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

en0 20.188.187.148 20.188.187.0 20.188.187.148 UNKNOWN FE:B6:72:EF:12:83 1500

en1 220.188.187.148 220.188.187.0 220.188.187.148 UNKNOWN FE:B6:7D:9F:6C:01 1500

Check: Node connectivity of subnet "20.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

Result: Node connectivity passed for subnet "20.188.187.0" with node(s) LHRTEST2101,LHRTEST1101

Check: TCP connectivity of subnet "20.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101:20.188.187.158 LHRTEST1101:20.188.187.148 passed

Result: TCP connectivity check passed for subnet "20.188.187.0"

Check: Node connectivity of subnet "220.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101[220.188.187.158] LHRTEST1101[220.188.187.148] yes

Result: Node connectivity passed for subnet "220.188.187.0" with node(s) LHRTEST2101,LHRTEST1101

Check: TCP connectivity of subnet "220.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101:220.188.187.158 LHRTEST1101:220.188.187.148 passed

Result: TCP connectivity check passed for subnet "220.188.187.0"

Interfaces found on subnet "20.188.187.0" that are likely candidates for VIP are:

LHRTEST2101 en0:20.188.187.158

LHRTEST1101 en0:20.188.187.148

Interfaces found on subnet "220.188.187.0" that are likely candidates for VIP are:

LHRTEST2101 en1:220.188.187.158

LHRTEST1101 en1:220.188.187.148

WARNING:

Could not find a suitable set of interfaces for the private interconnect

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "20.188.187.0".

Subnet mask consistency check passed for subnet "220.188.187.0".

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "20.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "20.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "220.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "220.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 4GB (4194304.0KB) 2GB (2097152.0KB) passed

LHRTEST1101 48GB (5.0331648E7KB) 2GB (2097152.0KB) passed

Result: Total memory check passed

Check: Available memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 2.3528GB (2467056.0KB) 50MB (51200.0KB) passed

LHRTEST1101 43.8485GB (4.5978476E7KB) 50MB (51200.0KB) passed

Result: Available memory check passed

Check: Swap space

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 8GB (8388608.0KB) 4GB (4194304.0KB) passed

LHRTEST1101 8GB (8388608.0KB) 16GB (1.6777216E7KB) failed

Result: Swap space check failed

Check: Free disk space for "LHRTEST2101:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/tmp LHRTEST2101 /tmp 3.5657GB 1GB passed

Result: Free disk space check passed for "LHRTEST2101:/tmp"

Check: Free disk space for "LHRTEST1101:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/tmp LHRTEST1101 /tmp 18.4434GB 1GB passed

Result: Free disk space check passed for "LHRTEST1101:/tmp"

Check: User existence for "grid"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists(1025)

LHRTEST1101 passed exists(1025)

Checking for multiple users with UID value 1025

Result: Check for multiple users with UID value 1025 passed

Result: User existence check passed for "grid"

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists

LHRTEST1101 passed exists

Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists

LHRTEST1101 passed exists

Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Status

---------------- ------------ ------------ ------------ ------------ ------------

LHRTEST2101 yes yes yes yes passed

LHRTEST1101 yes yes yes yes passed

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"

Node Name User Exists Group Exists User in Group Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 yes yes yes passed

LHRTEST1101 yes yes yes passed

Result: Membership check for user "grid" in group "dba" passed

Check: Run level

Node Name run level Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 2 2 passed

LHRTEST1101 2 2 passed

Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 hard 9223372036854776000 65536 passed

LHRTEST1101 hard 9223372036854776000 65536 passed

Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 soft 9223372036854776000 1024 passed

LHRTEST1101 soft 9223372036854776000 1024 passed

Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 hard 16384 16384 passed

LHRTEST1101 hard 16384 16384 passed

Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 soft 16384 2047 passed

LHRTEST1101 soft 16384 2047 passed

Result: Soft limits check passed for "maximum user processes"

Check: System architecture

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 powerpc powerpc passed

LHRTEST1101 powerpc powerpc passed

Result: System architecture check passed

Check: Kernel version

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 7.1-7100.03.03.1415 7.1-7100.00.01.1037 passed

LHRTEST1101 7.1-7100.02.05.1415 7.1-7100.00.01.1037 passed

WARNING:

PRVF-7524 : Kernel version is not consistent across all the nodes.

Kernel version = "7.1-7100.02.05.1415" found on nodes: LHRTEST1101.

Kernel version = "7.1-7100.03.03.1415" found on nodes: LHRTEST2101.

Result: Kernel version check passed

Check: Kernel parameter for "ncargs"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 256 128 passed

LHRTEST1101 256 128 passed

Result: Kernel parameter check passed for "ncargs"

Check: Kernel parameter for "maxuproc"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 16384 2048 passed

LHRTEST1101 16384 2048 passed

Result: Kernel parameter check passed for "maxuproc"

Check: Kernel parameter for "tcp_ephemeral_low"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 32768 9000 failed (ignorable)

LHRTEST1101 32768 9000 failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_low"

Check: Kernel parameter for "tcp_ephemeral_high"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 65535 65500 failed (ignorable)

LHRTEST1101 65535 65500 failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_high"

Check: Kernel parameter for "udp_ephemeral_low"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 32768 9000 failed (ignorable)

LHRTEST1101 32768 9000 failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_low"

Check: Kernel parameter for "udp_ephemeral_high"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 65535 65500 failed (ignorable)

LHRTEST1101 65535 65500 failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_high"

Check: Package existence for "bos.adt.base"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.base-7.1.3.15-0 bos.adt.base-... passed

LHRTEST1101 bos.adt.base-7.1.3.15-0 bos.adt.base-... passed

Result: Package existence check passed for "bos.adt.base"

Check: Package existence for "bos.adt.lib"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.lib-7.1.2.15-0 bos.adt.lib-... passed

LHRTEST1101 bos.adt.lib-7.1.2.15-0 bos.adt.lib-... passed

Result: Package existence check passed for "bos.adt.lib"

Check: Package existence for "bos.adt.libm"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.libm-7.1.3.0-0 bos.adt.libm-... passed

LHRTEST1101 bos.adt.libm-7.1.3.0-0 bos.adt.libm-... passed

Result: Package existence check passed for "bos.adt.libm"

Check: Package existence for "bos.perf.libperfstat"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.libperfstat-7.1.3.15-0 bos.perf.libperfstat-... passed

LHRTEST1101 bos.perf.libperfstat-7.1.3.15-0 bos.perf.libperfstat-... passed

Result: Package existence check passed for "bos.perf.libperfstat"

Check: Package existence for "bos.perf.perfstat"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.perfstat-7.1.3.15-0 bos.perf.perfstat-... passed

LHRTEST1101 bos.perf.perfstat-7.1.3.15-0 bos.perf.perfstat-... passed

Result: Package existence check passed for "bos.perf.perfstat"

Check: Package existence for "bos.perf.proctools"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.proctools-7.1.3.15-0 bos.perf.proctools-... passed

LHRTEST1101 bos.perf.proctools-7.1.3.15-0 bos.perf.proctools-... passed

Result: Package existence check passed for "bos.perf.proctools"

Check: Package existence for "xlC.aix61.rte"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 xlC.aix61.rte-12.1.0.1-0 xlC.aix61.rte-10.1.0.0 passed

LHRTEST1101 xlC.aix61.rte-12.1.0.1-0 xlC.aix61.rte-10.1.0.0 passed

Result: Package existence check passed for "xlC.aix61.rte"

Check: Package existence for "xlC.rte"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 xlC.rte-12.1.0.1-0 xlC.rte-10.1.0.0 passed

LHRTEST1101 xlC.rte-12.1.0.1-0 xlC.rte-10.1.0.0 passed

Result: Package existence check passed for "xlC.rte"

Check: Operating system patch for "Patch IZ87216"

Node Name Applied Required Comment

------------ ------------------------ ------------------------ ----------

LHRTEST2101 Patch IZ87216:devices.common.IBM.mpio.rte Patch IZ87216 passed

LHRTEST1101 Patch IZ87216:devices.common.IBM.mpio.rte Patch IZ87216 passed

Result: Operating system patch check passed for "Patch IZ87216"

Check: Operating system patch for "Patch IZ87564"

Node Name Applied Required Comment

------------ ------------------------ ------------------------ ----------

LHRTEST2101 Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof Patch IZ87564 passed

LHRTEST1101 Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof Patch IZ87564 passed

Result: Operating system patch check passed for "Patch IZ87564"

Check: Operating system patch for "Patch IZ89165"

Node Name Applied Required Comment

------------ ------------------------ ------------------------ ----------

LHRTEST2101 Patch IZ89165:bos.rte.bind_cmds Patch IZ89165 passed

LHRTEST1101 Patch IZ89165:bos.rte.bind_cmds Patch IZ89165 passed

Result: Operating system patch check passed for "Patch IZ89165"

Check: Operating system patch for "Patch IZ97035"

Node Name Applied Required Comment

------------ ------------------------ ------------------------ ----------

LHRTEST2101 Patch IZ97035:devices.vdevice.IBM.l-lan.rte Patch IZ97035 passed

LHRTEST1101 Patch IZ97035:devices.vdevice.IBM.l-lan.rte Patch IZ97035 passed

Result: Operating system patch check passed for "Patch IZ97035"

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Current group ID

Result: Current group ID check passed

Starting check for consistency of primary group of root user

Node Name Status

------------------------------------ ------------------------

LHRTEST2101 passed

LHRTEST1101 passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

Checking daemon liveness...

Check: Liveness for "xntpd"

Node Name Running?

------------------------------------ ------------------------

LHRTEST2101 yes

LHRTEST1101 yes

Result: Liveness check passed for "xntpd"

Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"

Check: NTP daemon command line

Node Name Slewing Option Set?

------------------------------------ ------------------------

LHRTEST2101 yes

LHRTEST1101 no

Result:

NTP daemon slewing option check failed on some nodes

PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"

Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking Core file name pattern consistency...

Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "system" group

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed does not exist

LHRTEST1101 passed does not exist

Result: User "grid" is not part of "system" group. Check passed

Check default user file creation mask

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

LHRTEST2101 022 0022 passed

LHRTEST1101 022 0022 passed

Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks

File "/etc/resolv.conf" is consistent across nodes

Check: Time zone consistency

Result: Time zone consistency check passed

Result: User ID < 65535 check passed

Result: Kernel 64-bit mode check passed

[grid@LHRTEST2101:/softtmp/grid]$
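Two checks failed above: swap space on LHRTEST1101 (8GB against 16GB required) and the xntpd slewing flag (PRVF-5436) on one node. A dry-run sketch of the usual AIX remediation — the commands are assumptions to verify against your system, so they only print by default; set DRY_RUN=0 on the node to execute:

```shell
#!/bin/sh
# Dry-run wrapper: prints each command unless DRY_RUN=0 is set.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "would run: $*" || "$@"; }

# Grow paging space (assumption: hd6 is the paging LV; chps -s adds
# logical partitions, so size the count to your volume group's LP size):
run chps -s 16 hd6

# PRVF-5436: restart xntpd with the -x slewing option on the node that
# lacks it (also add -x to the xntpd line in /etc/rc.tcpip so the flag
# survives a reboot):
run stopsrc -s xntpd
run startsrc -s xntpd -a "-x"
```

Note the swap failure is also commonly waived on large-memory hosts; the NTP one should be fixed, since root.sh checks it again.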

Silently install the Grid software

First, run as root:

/softtmp/grid/rootpre.sh

[LHRTEST2101:root]/]> /softtmp/grid/rootpre.sh

/softtmp/grid/rootpre.sh output will be logged in /tmp/rootpre.out_16-03-09.09:47:33

Checking if group services should be configured....

Nothing to configure.

[LHRTEST2101:root]/]>

./runInstaller -silent -force -noconfig -IgnoreSysPreReqs -ignorePrereq -showProgress \

INVENTORY_LOCATION=/u01/app/oraInventory \

SELECTED_LANGUAGES=en \

ORACLE_BASE=/u01/app/grid \

ORACLE_HOME=/u01/app/11.2.0/grid \

oracle.install.asm.OSDBA=asmdba \

oracle.install.asm.OSOPER=asmoper \

oracle.install.asm.OSASM=asmadmin \

oracle.install.crs.config.storageOption=ASM_STORAGE \

oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

oracle.install.crs.config.useIPMI=false \

oracle.install.asm.diskGroup.name=OCR \

oracle.install.asm.diskGroup.redundancy=EXTERNAL \

oracle.installer.autoupdates.option=SKIP_UPDATES \

oracle.install.crs.config.gpnp.scanPort=1521 \

oracle.install.crs.config.gpnp.configureGNS=false \

oracle.install.option=CRS_CONFIG \

oracle.install.asm.SYSASMPassword=lhr \

oracle.install.asm.monitorPassword=lhr \

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

oracle.install.crs.config.gpnp.scanName=LHRTEST2101-scan \

oracle.install.crs.config.clusterName=LHRTEST-cluster \

oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

oracle.install.crs.config.clusterNodes=LHRTEST2101:LHRTEST2101-vip,LHRTEST1101:LHRTEST1101-vip \

oracle.install.crs.config.networkInterfaceList=en0:20.188.187.0:1,en1:220.188.187.0:2 \

ORACLE_HOSTNAME=LHRTEST2101
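As an alternative to the long command line, the same properties can be collected into a response file — a sketch, assuming the stock template shipped under grid/response/ on the 11.2 media; all values are this article's and must be adjusted for your cluster:

```shell
#!/bin/sh
# Write a minimal response file mirroring the inline variables above.
cat > /tmp/grid_silent.rsp <<'EOF'
oracle.install.option=CRS_CONFIG
ORACLE_HOSTNAME=LHRTEST2101
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=OCR
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk*
oracle.install.asm.diskGroup.disks=/dev/rhdisk10
oracle.install.crs.config.gpnp.scanName=LHRTEST2101-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.clusterName=LHRTEST-cluster
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=LHRTEST2101:LHRTEST2101-vip,LHRTEST1101:LHRTEST1101-vip
oracle.install.crs.config.networkInterfaceList=en0:20.188.187.0:1,en1:220.188.187.0:2
oracle.installer.autoupdates.option=SKIP_UPDATES
EOF
# Then (passwords supplied on the command line rather than stored on disk):
# ./runInstaller -silent -responseFile /tmp/grid_silent.rsp \
#     oracle.install.asm.SYSASMPassword=*** oracle.install.asm.monitorPassword=***
```

This keeps the site-specific choices in one reviewable file, which is easier to diff between environments than a pasted command line.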

Run the silent install from the command line. When pasting the script, make sure no extra trailing newline is appended, and do not run anything else in this window; it takes a while before output appears. The site-specific values (hostnames, device paths, passwords) must be adjusted for your environment:

[grid@LHRTEST2101:/home/grid]$ umask

022

[grid@LHRTEST2101:/softtmp/grid]$ ./runInstaller -silent -force -noconfig -IgnoreSysPreReqs -ignorePrereq -showProgress \

INVENTORY_LOCATION=/u01/app/oraInventory \

SELECTED_LANGUAGES=en \

ORACLE_BASE=/u01/app/grid \

ORACLE_HOME=/u01/app/11.2.0/grid \

oracle.install.asm.OSDBA=asmdba \

oracle.install.asm.OSOPER=asmoper \

oracle.install.asm.OSASM=asmadmin \

oracle.install.crs.config.storageOption=ASM_STORAGE \

oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

oracle.install.crs.config.useIPMI=false \

oracle.install.asm.diskGroup.name=OCR \

oracle.install.asm.diskGroup.redundancy=EXTERNAL \

oracle.installer.autoupdates.option=SKIP_UPDATES \

oracle.install.crs.config.gpnp.scanPort=1521 \

oracle.install.crs.config.gpnp.configureGNS=false \

oracle.install.option=CRS_CONFIG \

oracle.install.asm.SYSASMPassword=lhr \

oracle.install.asm.monitorPassword=lhr \

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

oracle.install.crs.config.gpnp.scanName=LHRTEST2101-scan \

oracle.install.crs.config.clusterName=LHRTEST-cluster \

oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

oracle.install.crs.config.clusterNodes=LHRTEST2101:LHRTEST2101-vip,LHRTEST1101:LHRTEST1101-vip \

oracle.install.crs.config.networkInterfaceList=en0:20.188.187.0:1,en1:220.188.187.0:2 \

ORACLE_HOSTNAME=LHRTEST2101

********************************************************************************

Your platform requires the root user to perform certain pre-installation

OS preparation. The root user should run the shell script 'rootpre.sh' before

you proceed with Oracle installation. rootpre.sh can be found at the top level

of the CD or the stage area.

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle

installation.

Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

********************************************************************************

Has 'rootpre.sh' been run by root on all nodes? [y/n] (n)

y

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 190 MB. Actual 4330 MB Passed

Checking swap space: must be greater than 150 MB. Actual 8192 MB Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-03-10_04-54-07PM. Please wait ...[grid@LHRTEST2101:/softtmp/grid]$

[grid@LHRTEST2101:/softtmp/grid]$

[grid@LHRTEST2101:/softtmp/grid]$

[grid@LHRTEST2101:/softtmp/grid]$

[grid@LHRTEST2101:/softtmp/grid]$ [WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.

CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

ACTION: Provide a password that conforms to the Oracle recommended standards.

[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.

CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

ACTION: Provide a password that conforms to the Oracle recommended standards.

You can find the log of this install session at:

/u01/app/oraInventory/logs/installActions2016-03-10_04-54-07PM.log

Prepare in progress.

.................................................. 5% Done.

Prepare successful.

Copy files in progress.

.................................................. 10% Done.

.................................................. 15% Done.

........................................

Copy files successful.

.................................................. 27% Done.

Link binaries in progress.

Link binaries successful.

.................................................. 34% Done.

Setup files in progress.

Setup files successful.

.................................................. 41% Done.

Perform remote operations in progress.

.................................................. 48% Done.

Perform remote operations successful.

The installation of Oracle Grid Infrastructure was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2016-03-10_04-54-07PM.log' for more details.

.................................................. 97% Done.

Execute Root Scripts in progress.

As a root user, execute the following script(s):

1. /u01/app/oraInventory/orainstRoot.sh

2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:

[LHRTEST2101, LHRTEST1101]

Execute /u01/app/11.2.0/grid/root.sh on the following nodes:

[LHRTEST2101, LHRTEST1101]

.................................................. 100% Done.

Execute Root Scripts successful.

As install user, execute the following script to complete the configuration.

1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands

Note:

1. This script must be run on the same system from where installer was run.

2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Successfully Setup Software.

[grid@LHRTEST2101:/softtmp/grid]$

On the node where the installer runs:

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

6.80 /u01/app/11.2.0/grid

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

7.41 /u01/app/11.2.0/grid

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.03 /u01/app/11.2.0/grid

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.61 /u01/app/11.2.0/grid

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80 /u01/app/11.2.0/grid

[LHRTEST2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80 /u01/app/11.2.0/grid

While "Perform remote operations in progress." is displayed, you can watch the size of the grid directory on the other node to judge whether the copy has stalled:

[LHRTEST1101:root]/u01/app/11.2.0/grid/bin]> du -sg .

1.78 .

[LHRTEST1101:root]/u01/app/11.2.0/grid/bin]> cd

[LHRTEST1101:root]/]> du -sg /u01/app/11.2.0/grid

2.90 /u01/app/11.2.0/grid

[LHRTEST1101:root]/]> du -sg /u01/app/11.2.0/grid

3.41 /u01/app/11.2.0/grid

[LHRTEST1101:root]/]> du -sg /u01/app/11.2.0/grid

7.25 /u01/app/11.2.0/grid

[LHRTEST1101:root]/]> du -sg /u01/app/11.2.0/grid

8.76 /u01/app/11.2.0/grid

[LHRTEST1101:root]/]> du -sg /u01/app/11.2.0/grid

9.81 /u01/app/11.2.0/grid

[LHRTEST1101:root]/]>
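The repeated du polling above can be scripted — a sketch; note `du -sg` is AIX syntax, so the function below uses the portable `-sk` (kilobytes) instead:

```shell
#!/bin/sh
# Poll a directory's size at a fixed interval, so the remote-copy phase
# can be watched without retyping du.
watch_du() {   # usage: watch_du <dir> <count> <interval_seconds>
  i=0
  while [ "$i" -lt "$2" ]; do
    du -sk "$1"
    sleep "$3"
    i=$((i + 1))
  done
}
# e.g. on the remote node: watch_du /u01/app/11.2.0/grid 20 30
```

If the reported size stops growing for several intervals well short of the ~10GB final footprint seen above, the remote operation has likely stalled.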

Run root.sh

As a root user, execute the following script(s):

1. /u01/app/oraInventory/orainstRoot.sh

2. /u01/app/11.2.0/grid/root.sh

[LHRTEST2101:root]/]> /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[LHRTEST2101:root]/]> /u01/app/11.2.0/grid/root.sh

Check /u01/app/11.2.0/grid/install/root_LHRTEST2101_2016-03-10_17-08-45.log for the output of root script

After pressing Enter, the command simply waits; it is complete only when the prompt returns on its own. Open a separate window to follow the log:

[LHRTEST2101:root]/softtmp]> tail -2000f /u01/app/11.2.0/grid/install/root_LHRTEST2101_2016-03-10_17-08-45.log

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'LHRTEST2101'

CRS-2676: Start of 'ora.mdnsd' on 'LHRTEST2101' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'LHRTEST2101'

CRS-2676: Start of 'ora.gpnpd' on 'LHRTEST2101' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'LHRTEST2101'

CRS-2672: Attempting to start 'ora.gipcd' on 'LHRTEST2101'

CRS-2676: Start of 'ora.gipcd' on 'LHRTEST2101' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'LHRTEST2101' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'LHRTEST2101'

CRS-2672: Attempting to start 'ora.diskmon' on 'LHRTEST2101'

CRS-2676: Start of 'ora.diskmon' on 'LHRTEST2101' succeeded

CRS-2676: Start of 'ora.cssd' on 'LHRTEST2101' succeeded

ASM created and started successfully.

Disk Group OCR created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'system'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 04bd1fe1816f4f55bfc976416720128d.

Successfully replaced voting disk group with +OCR.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 04bd1fe1816f4f55bfc976416720128d (/dev/rhdisk10) [OCR]

Located 1 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'LHRTEST2101'

CRS-2676: Start of 'ora.asm' on 'LHRTEST2101' succeeded

CRS-2672: Attempting to start 'ora.OCR.dg' on 'LHRTEST2101'

CRS-2676: Start of 'ora.OCR.dg' on 'LHRTEST2101' succeeded

CRS-2672: Attempting to start 'ora.registry.acfs' on 'LHRTEST2101'

CRS-2676: Start of 'ora.registry.acfs' on 'LHRTEST2101' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded


[LHRTEST2101:root]/]> ps -ef|grep d.bin

root 6815752 1 0 17:16:23 - 0:01 /u01/app/11.2.0/grid/bin/orarootagent.bin

root 6881442 1 2 17:15:26 - 0:04 /u01/app/11.2.0/grid/bin/crsd.bin reboot

root 7209048 1 2 17:15:04 - 0:06 /u01/app/11.2.0/grid/bin/osysmond.bin

root 8061058 6488154 0 17:19:26 pts/1 0:00 grep d.bin

grid 8126564 1 0 17:16:29 - 0:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

grid 8192252 13631536 0 17:15:29 - 0:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log

root 10420390 1 0 17:14:13 - 0:00 /u01/app/11.2.0/grid/bin/cssdmonitor

root 10551502 1 0 17:14:14 - 0:00 /u01/app/11.2.0/grid/bin/cssdagent

grid 11731188 1 0 17:16:31 - 0:00 /u01/app/11.2.0/grid/bin/scriptagent.bin

grid 12845094 1 0 17:14:09 - 0:01 /u01/app/11.2.0/grid/bin/oraagent.bin

root 12976196 1 0 17:14:14 - 0:00 /bin/sh /u01/app/11.2.0/grid/bin/ocssd

grid 13631536 1 0 17:15:27 - 0:02 /u01/app/11.2.0/grid/bin/evmd.bin

grid 14221350 1 0 17:14:09 - 0:00 /u01/app/11.2.0/grid/bin/mdnsd.bin

grid 15007882 1 1 17:14:13 - 0:02 /u01/app/11.2.0/grid/bin/gipcd.bin

grid 15859816 1 0 17:16:11 - 0:00 /u01/app/11.2.0/grid/bin/oraagent.bin

root 16056384 1 0 17:15:02 - 0:02 /u01/app/11.2.0/grid/bin/octssd.bin

grid 16122020 12976196 1 17:14:14 - 0:04 /u01/app/11.2.0/grid/bin/ocssd.bin

root 16515114 1 3 17:11:26 - 0:07 /u01/app/11.2.0/grid/bin/ohasd.bin reboot

root 16711732 1 1 17:12:38 - 0:01 /u01/app/11.2.0/grid/bin/orarootagent.bin

grid 16777306 1 0 17:14:11 - 0:00 /u01/app/11.2.0/grid/bin/gpnpd.bin

[LHRTEST2101:root]/]> crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora....N1.lsnr ora....er.type ONLINE ONLINE LHRTEST2101

ora.OCR.dg ora....up.type ONLINE ONLINE LHRTEST2101

ora.asm ora.asm.type ONLINE ONLINE LHRTEST2101

ora.cvu ora.cvu.type ONLINE ONLINE LHRTEST2101

ora.gsd ora.gsd.type OFFLINE OFFLINE

ora....network ora....rk.type ONLINE ONLINE LHRTEST2101

ora.oc4j ora.oc4j.type ONLINE ONLINE LHRTEST2101

ora.ons ora.ons.type ONLINE ONLINE LHRTEST2101

ora....ry.acfs ora....fs.type ONLINE ONLINE LHRTEST2101

ora.scan1.vip ora....ip.type ONLINE ONLINE LHRTEST2101

ora....SM1.asm application ONLINE ONLINE LHRTEST2101

ora....101.gsd application OFFLINE OFFLINE

ora....101.ons application ONLINE ONLINE LHRTEST2101

ora....101.vip ora....t1.type ONLINE ONLINE LHRTEST2101

[LHRTEST2101:root]/]> crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.OCR.dg

ONLINE ONLINE LHRTEST2101

ora.asm

ONLINE ONLINE LHRTEST2101 Started

ora.gsd

OFFLINE OFFLINE LHRTEST2101

ora.net1.network

ONLINE ONLINE LHRTEST2101

ora.ons

ONLINE ONLINE LHRTEST2101

ora.registry.acfs

ONLINE ONLINE LHRTEST2101

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE LHRTEST2101

ora.cvu

1 ONLINE ONLINE LHRTEST2101

ora.oc4j

1 ONLINE ONLINE LHRTEST2101

ora.scan1.vip

1 ONLINE ONLINE LHRTEST2101

ora.LHRTEST2101.vip

1 ONLINE ONLINE LHRTEST2101

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> ps -ef|grep asm

grid 4391000 1 0 17:15:17 - 0:00 asmdbw0+ASM1

grid 8519868 1 0 17:15:17 - 0:00 asmlmhb+ASM1

grid 8650940 1 0 17:15:17 - 0:00 asmmmon+ASM1

grid 8847532 1 0 17:15:17 - 0:00 asmmman+ASM1

grid 10289152 1 0 17:15:17 - 0:00 asmdiag+ASM1

grid 10354890 1 0 17:15:17 - 0:00 asmlms0+ASM1

grid 10682428 1 0 17:15:17 - 0:00 asmlmd0+ASM1

grid 11010164 1 0 17:15:17 - 0:00 asmmmnl+ASM1

root 11796632 6488154 0 17:22:17 pts/1 0:00 grep asm

grid 12714016 1 0 17:15:17 - 0:00 asmdia0+ASM1

grid 12910704 1 0 17:15:17 - 0:00 asmrbal+ASM1

grid 13303898 1 0 17:15:27 - 0:00 asmasmb+ASM1

grid 13435084 1 0 17:15:17 - 0:00 asmlmon+ASM1

grid 13697226 1 0 17:15:18 - 0:00 asmlck0+ASM1

grid 13828112 1 0 17:15:17 - 0:00 asmckpt+ASM1

grid 14155956 1 0 17:15:17 - 0:00 asmgen0+ASM1

grid 14418088 1 0 17:15:17 - 0:00 asmvktm+ASM1

grid 14680284 1 0 17:15:17 - 0:00 asmping+ASM1

grid 15073388 1 0 17:15:27 - 0:00 oracle+ASM1asmb+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

grid 15400976 1 0 17:15:17 - 0:00 asmsmon+ASM1

grid 15990812 1 0 17:15:17 - 0:00 asmgmon+ASM1

grid 16187420 1 0 17:15:17 - 0:00 asmlgwr+ASM1

grid 16449694 1 0 17:15:16 - 0:00 asmpmon+ASM1

grid 16580744 1 0 17:15:16 - 0:00 asmpsp0+ASM1

[LHRTEST2101:root]/]>

[LHRTEST2101:root]/]> lquerypv -h /dev/rhdisk10

00000000 00820101 00000000 80000000 B6FE0F29 |...............)|

00000010 00000000 00000000 00000000 00000000 |................|

00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|

00000030 00000000 00000000 00000000 00000000 |................|

00000040 0B200000 00000103 4F43525F 30303030 |. ......OCR_0000|

00000050 00000000 00000000 00000000 00000000 |................|

00000060 00000000 00000000 4F435200 00000000 |........OCR.....|

00000070 00000000 00000000 00000000 00000000 |................|

00000080 00000000 00000000 4F43525F 30303030 |........OCR_0000|

00000090 00000000 00000000 00000000 00000000 |................|

000000A0 00000000 00000000 00000000 00000000 |................|

000000B0 00000000 00000000 00000000 00000000 |................|

000000C0 00000000 00000000 01F80D69 66A0E000 |...........if...|

000000D0 01F80D69 70C48800 02001000 00100000 |...ip...........|

000000E0 0001BC80 0002001C 00000003 00000001 |................|

000000F0 00000002 00000002 00000000 00000000 |................|

[LHRTEST2101:root]/]>
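The `lquerypv` dump above shows the `ORCLDISK` signature that ASM stamps into the disk header (visible at offset 0x20). A tiny sketch that checks a saved dump for it; the sample lines are abbreviated from the output above:

```shell
#!/bin/sh
# Check `lquerypv -h` output for the ORCLDISK signature that ASM writes
# near the start of a member disk's header.
is_asm_disk() {
    grep -q "ORCLDISK"
}

# Sample: two header lines abbreviated from the lquerypv output above.
if printf '%s\n' \
    "00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|" \
    "00000040   0B200000 00000103 4F43525F 30303030  |. ......OCR_0000|" \
    | is_asm_disk
then
    echo "ASM header found"
fi
```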

Run the scripts on the other node:

As a root user, execute the following script(s):

1. /u01/app/oraInventory/orainstRoot.sh

2. /u01/app/11.2.0/grid/root.sh

[LHRTEST1101:root]/]> /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[LHRTEST1101:root]/]> $ORACLE_HOME/root.sh

Check /u01/app/11.2.0/grid/install/root_LHRTEST1101_2016-03-11_09-54-09.log for the output of root script

[LHRTEST1101:root]/]>

After pressing Enter, the prompt appears to hang; the script is finished only when it returns on its own. Open a separate window to follow the log:

[LHRTEST1101:root]/]> tail -200f /u01/app/11.2.0/grid/install/root_LHRTEST1101_2016-03-11_09-54-09.log

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

User ignored Prerequisites during installation

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

Adding Clusterware entries to inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node LHRTEST2101, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[LHRTEST1101:root]/]> ps -ef|grep asm

grid 9961498 1 0 09:57:39 - 0:00 asmgmon+ASM2

grid 10813654 1 0 09:57:39 - 0:00 asmmmon+ASM2

root 11599892 4587988 0 10:00:26 pts/0 0:00 grep asm

grid 11862082 1 0 09:57:39 - 0:00 asmdiag+ASM2

grid 12124202 1 0 09:57:41 - 0:00 asmlck0+ASM2

grid 12320918 1 0 09:57:39 - 0:00 asmlmhb+ASM2

grid 12386418 1 1 09:57:39 - 0:00 asmvktm+ASM2

grid 12517574 1 0 09:57:39 - 0:00 asmlms0+ASM2

grid 12648524 1 0 09:57:46 - 0:00 asmo000+ASM2

grid 12845130 1 1 09:57:39 - 0:00 asmdia0+ASM2

grid 14221316 1 0 09:57:46 - 0:00 oracle+ASM2asmb+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

grid 14942382 1 0 09:57:39 - 0:00 asmmmnl+ASM2

grid 15270102 1 0 09:57:39 - 0:00 asmping+ASM2

grid 15597756 1 0 09:57:39 - 0:00 asmlgwr+ASM2

grid 2359724 1 0 09:57:38 - 0:00 asmpsp0+ASM2

grid 3014926 1 0 09:57:39 - 0:00 asmckpt+ASM2

grid 3080676 1 0 09:57:39 - 0:00 asmdbw0+ASM2

grid 3211710 1 0 09:57:39 - 0:00 asmmman+ASM2

grid 3539244 1 0 09:57:37 - 0:00 asmpmon+ASM2

grid 3670514 1 1 09:57:39 - 0:00 asmlmon+ASM2

grid 4129072 1 0 09:57:46 - 0:00 oracle+ASM2o000+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

grid 4522356 1 0 09:57:45 - 0:00 asmasmb+ASM2

grid 4784516 1 0 09:57:39 - 0:00 asmsmon+ASM2

grid 5112192 1 0 09:57:39 - 0:00 asmrbal+ASM2

grid 5243238 1 1 09:57:39 - 0:00 asmlmd0+ASM2

grid 5702040 1 0 09:57:39 - 0:00 asmgen0+ASM2

[LHRTEST1101:root]/]> crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.OCR.dg

ONLINE ONLINE LHRTEST1101

ONLINE ONLINE LHRTEST2101

ora.asm

ONLINE ONLINE LHRTEST1101 Started

ONLINE ONLINE LHRTEST2101 Started

ora.gsd

OFFLINE OFFLINE LHRTEST1101

OFFLINE OFFLINE LHRTEST2101

ora.net1.network

ONLINE ONLINE LHRTEST1101

ONLINE ONLINE LHRTEST2101

ora.ons

ONLINE ONLINE LHRTEST1101

ONLINE ONLINE LHRTEST2101

ora.registry.acfs

ONLINE ONLINE LHRTEST1101

ONLINE ONLINE LHRTEST2101

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE LHRTEST2101

ora.cvu

1 ONLINE ONLINE LHRTEST2101

ora.oc4j

1 ONLINE ONLINE LHRTEST2101

ora.scan1.vip

1 ONLINE ONLINE LHRTEST2101

ora.LHRTEST1101.vip

1 ONLINE ONLINE LHRTEST1101

ora.LHRTEST2101.vip

1 ONLINE ONLINE LHRTEST2101
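In the listings above, `ora.gsd` is OFFLINE; that is its default state in 11.2 (it only serves pre-10g clients) and is not a problem. A small sketch that scans saved `crsctl stat res -t` output and flags any OFFLINE resource other than `ora.gsd` (the sample below is trimmed from the output above):

```shell
#!/bin/sh
# Scan `crsctl stat res -t`-style output (resource name on its own line,
# indented state lines below it) and print resources that are OFFLINE,
# skipping ora.gsd, which is expected to be OFFLINE by default in 11.2.
flag_offline() {
    awk '
        /^ora\./ { res = $1; next }
        /OFFLINE/ && res != "" && res != "ora.gsd" { print res; res = "" }
    '
}

# Sample trimmed from the crsctl output above, with ora.asm forced
# OFFLINE to show what a real problem would look like.
flag_offline <<'EOF'
ora.gsd
               OFFLINE OFFLINE      LHRTEST2101
ora.asm
               OFFLINE OFFLINE      LHRTEST2101
ora.ons
               ONLINE  ONLINE       LHRTEST2101
EOF
```

Against the doctored sample this prints only `ora.asm`; against the healthy output above it prints nothing.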

Database software installation

Prepare the installation files

unzip p10404530_112030_AIX64-5L_1of7.zip && unzip p10404530_112030_AIX64-5L_2of7.zip

[LHRTEST2101:root]/]> cd /soft*

[LHRTEST2101:root]/softtmp]> l

total 9644880

drwxr-xr-x 9 root system 4096 Oct 28 2011 grid

drwxr-xr-x 2 root system 256 Mar 08 16:10 lost+found

-rw-r----- 1 root system 1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r----- 1 root system 1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r----- 1 root system 2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[LHRTEST2101:root]/softtmp]> unzip p10404530_112030_AIX64-5L_1of7.zip && unzip p10404530_112030_AIX64-5L_2of7.zip

Archive: p10404530_112030_AIX64-5L_1of7.zip

creating: database/

creating: database/stage/

inflating: database/stage/shiphomeproperties.xml

creating: database/stage/Components/

<<<< ... output omitted for brevity ... >>>>

inflating: database/doc/server.11203/E22487-03.mobi

inflating: database/doc/server.11203/e22487.pdf

inflating: database/welcome.html

creating: database/sshsetup/

inflating: database/sshsetup/sshUserSetup.sh

inflating: database/readme.html

Archive: p10404530_112030_AIX64-5L_2of7.zip

creating: database/stage/Components/oracle.ctx/

creating: database/stage/Components/oracle.ctx/11.2.0.3.0/

creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/

creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/DataFiles/

<<<< ... output omitted for brevity ... >>>>

creating: database/stage/Components/oracle.javavm.containers/

creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/

creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/

creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/

inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup4.jar

inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup3.jar

inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup2.jar

inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup1.jar

[LHRTEST2101:root]/softtmp]>

[LHRTEST2101:root]/softtmp]> l

total 9644888

drwxr-xr-x 9 root system 4096 Oct 28 2011 database

drwxr-xr-x 9 root system 4096 Oct 28 2011 grid

drwxr-xr-x 2 root system 256 Mar 08 16:10 lost+found

-rw-r----- 1 root system 1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r----- 1 root system 1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r----- 1 root system 2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[LHRTEST2101:root]/softtmp]>
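Before unzipping, a quick pre-flight check that the expected archives are present and non-empty can save a failed staging run. A minimal sketch using the file names from this document (`/softtmp` is the staging directory used above):

```shell
#!/bin/sh
# Verify the installation archives exist and are non-empty before unzipping.
check_archives() {
    dir=$1; shift
    rc=0
    for f in "$@"; do
        if [ -s "$dir/$f" ]; then
            echo "OK:      $f"
        else
            echo "MISSING: $f"
            rc=1
        fi
    done
    return $rc
}

check_archives /softtmp \
    p10404530_112030_AIX64-5L_1of7.zip \
    p10404530_112030_AIX64-5L_2of7.zip \
    p10404530_112030_AIX64-5L_3of7.zip \
    || echo "fix the staging area before continuing"
```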

Run the runcluvfy.sh pre-check script

[grid@LHRTEST2101:/home/grid]$ /softtmp/grid/runcluvfy.sh stage -pre dbinst -n LHRTEST2101,LHRTEST1101 -verbose -fixup

Performing pre-checks for database installation

Checking node reachability...

Check: Node reachability from node "LHRTEST2101"

Destination Node Reachable?

------------------------------------ ------------------------

LHRTEST2101 yes

LHRTEST1101 yes

Result: Node reachability check passed from node "LHRTEST2101"

Checking user equivalence...

Check: User equivalence for user "grid"

Node Name Status

------------------------------------ ------------------------

LHRTEST2101 passed

LHRTEST1101 passed

Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Node Name Status

------------------------------------ ------------------------

LHRTEST2101 passed

LHRTEST1101 passed

Verification of the hosts config file successful

Interface information for node "LHRTEST2101"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

en0 20.188.187.158 20.188.187.0 20.188.187.158 20.188.187.1 C6:03:AE:03:97:83 1500

en0 20.188.187.158 20.188.187.0 20.188.187.158 20.188.187.1 C6:03:AE:03:97:83 1500

en0 20.188.187.158 20.188.187.0 20.188.187.158 20.188.187.1 C6:03:AE:03:97:83 1500

en1 220.188.187.158 220.188.187.0 220.188.187.158 20.188.187.1 C6:03:A7:3E:FE:01 1500

en1 220.188.187.158 220.188.187.0 220.188.187.158 20.188.187.1 C6:03:A7:3E:FE:01 1500

Interface information for node "LHRTEST1101"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

en0 20.188.187.148 20.188.187.0 20.188.187.148 UNKNOWN FE:B6:72:EF:12:83 1500

en0 20.188.187.148 20.188.187.0 20.188.187.148 UNKNOWN FE:B6:72:EF:12:83 1500

en1 220.188.187.148 220.188.187.0 220.188.187.148 UNKNOWN FE:B6:7D:9F:6C:01 1500

en1 220.188.187.148 220.188.187.0 220.188.187.148 UNKNOWN FE:B6:7D:9F:6C:01 1500

Check: Node connectivity for interface "en0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101[20.188.187.158] LHRTEST2101[20.188.187.158] yes

LHRTEST2101[20.188.187.158] LHRTEST2101[20.188.187.158] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST2101[20.188.187.158] LHRTEST2101[20.188.187.158] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST2101[20.188.187.158] LHRTEST1101[20.188.187.148] yes

LHRTEST1101[20.188.187.148] LHRTEST1101[20.188.187.148] yes

Result: Node connectivity passed for interface "en0"

Check: TCP connectivity of subnet "20.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101:20.188.187.158 LHRTEST1101:20.188.187.148 passed

Result: TCP connectivity check passed for subnet "20.188.187.0"

Check: Node connectivity for interface "en1"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101[220.188.187.158] LHRTEST2101[220.188.187.158] yes

LHRTEST2101[220.188.187.158] LHRTEST1101[220.188.187.148] yes

LHRTEST2101[220.188.187.158] LHRTEST1101[220.188.187.148] yes

LHRTEST2101[220.188.187.158] LHRTEST1101[220.188.187.148] yes

LHRTEST2101[220.188.187.158] LHRTEST1101[220.188.187.148] yes

LHRTEST1101[220.188.187.148] LHRTEST1101[220.188.187.148] yes

Result: Node connectivity passed for interface "en1"

Check: TCP connectivity of subnet "220.188.187.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

LHRTEST2101:220.188.187.158 LHRTEST1101:220.188.187.148 passed

Result: TCP connectivity check passed for subnet "220.188.187.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "20.188.187.0".

Subnet mask consistency check passed for subnet "220.188.187.0".

Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "20.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "20.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "220.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "220.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 4GB (4194304.0KB) 1GB (1048576.0KB) passed

LHRTEST1101 48GB (5.0331648E7KB) 1GB (1048576.0KB) passed

Result: Total memory check passed

Check: Available memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 224.293MB (229676.0KB) 50MB (51200.0KB) passed

LHRTEST1101 41.4106GB (4.3422168E7KB) 50MB (51200.0KB) passed

Result: Available memory check passed

Check: Swap space

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 8GB (8388608.0KB) 4GB (4194304.0KB) passed

LHRTEST1101 8GB (8388608.0KB) 16GB (1.6777216E7KB) failed

Result: Swap space check failed

Check: Free disk space for "LHRTEST2101:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/tmp LHRTEST2101 /tmp 3.899GB 1GB passed

Result: Free disk space check passed for "LHRTEST2101:/tmp"

Check: Free disk space for "LHRTEST1101:/tmp"

Path Node Name Mount point Available Required Status

---------------- ------------ ------------ ------------ ------------ ------------

/tmp LHRTEST1101 /tmp 18.1031GB 1GB passed

Result: Free disk space check passed for "LHRTEST1101:/tmp"

Check: User existence for "grid"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists(1025)

LHRTEST1101 passed exists(1025)

Checking for multiple users with UID value 1025

Result: Check for multiple users with UID value 1025 passed

Result: User existence check passed for "grid"

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists

LHRTEST1101 passed exists

Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

LHRTEST2101 passed exists

LHRTEST1101 passed exists

Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Status

---------------- ------------ ------------ ------------ ------------ ------------

LHRTEST2101 yes yes yes yes passed

LHRTEST1101 yes yes yes yes passed

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"

Node Name User Exists Group Exists User in Group Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 yes yes yes passed

LHRTEST1101 yes yes yes passed

Result: Membership check for user "grid" in group "dba" passed

Check: Run level

Node Name run level Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 2 2 passed

LHRTEST1101 2 2 passed

Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 hard 9223372036854776000 65536 passed

LHRTEST1101 hard 9223372036854776000 65536 passed

Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 soft 9223372036854776000 1024 passed

LHRTEST1101 soft 9223372036854776000 1024 passed

Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 hard 16384 16384 passed

LHRTEST1101 hard 16384 16384 passed

Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"

Node Name Type Available Required Status

---------------- ------------ ------------ ------------ ----------------

LHRTEST2101 soft 16384 2047 passed

LHRTEST1101 soft 16384 2047 passed

Result: Soft limits check passed for "maximum user processes"

Check: System architecture

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 powerpc powerpc passed

LHRTEST1101 powerpc powerpc passed

Result: System architecture check passed

Check: Kernel version

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 7.1-7100.03.03.1415 7.1-7100.00.01.1037 passed

LHRTEST1101 7.1-7100.02.05.1415 7.1-7100.00.01.1037 passed

WARNING:

PRVF-7524 : Kernel version is not consistent across all the nodes.

Kernel version = "7.1-7100.02.05.1415" found on nodes: LHRTEST1101.

Kernel version = "7.1-7100.03.03.1415" found on nodes: LHRTEST2101.

Result: Kernel version check passed

Check: Kernel parameter for "ncargs"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 256 128 passed

LHRTEST1101 256 128 passed

Result: Kernel parameter check passed for "ncargs"

Check: Kernel parameter for "maxuproc"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 16384 2048 passed

LHRTEST1101 16384 2048 passed

Result: Kernel parameter check passed for "maxuproc"

Check: Kernel parameter for "tcp_ephemeral_low"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 32768 9000 failed (ignorable)

LHRTEST1101 32768 9000 failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_low"

Check: Kernel parameter for "tcp_ephemeral_high"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 65535 65500 failed (ignorable)

LHRTEST1101 65535 65500 failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_high"

Check: Kernel parameter for "udp_ephemeral_low"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 32768 9000 failed (ignorable)

LHRTEST1101 32768 9000 failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_low"

Check: Kernel parameter for "udp_ephemeral_high"

Node Name Current Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 65535 65500 failed (ignorable)

LHRTEST1101 65535 65500 failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_high"

Check: Package existence for "bos.adt.base"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.base-7.1.3.15-0 bos.adt.base-... passed

LHRTEST1101 bos.adt.base-7.1.3.15-0 bos.adt.base-... passed

Result: Package existence check passed for "bos.adt.base"

Check: Package existence for "bos.adt.lib"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.lib-7.1.2.15-0 bos.adt.lib-... passed

LHRTEST1101 bos.adt.lib-7.1.2.15-0 bos.adt.lib-... passed

Result: Package existence check passed for "bos.adt.lib"

Check: Package existence for "bos.adt.libm"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.adt.libm-7.1.3.0-0 bos.adt.libm-... passed

LHRTEST1101 bos.adt.libm-7.1.3.0-0 bos.adt.libm-... passed

Result: Package existence check passed for "bos.adt.libm"

Check: Package existence for "bos.perf.libperfstat"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.libperfstat-7.1.3.15-0 bos.perf.libperfstat-... passed

LHRTEST1101 bos.perf.libperfstat-7.1.3.15-0 bos.perf.libperfstat-... passed

Result: Package existence check passed for "bos.perf.libperfstat"

Check: Package existence for "bos.perf.perfstat"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.perfstat-7.1.3.15-0 bos.perf.perfstat-... passed

LHRTEST1101 bos.perf.perfstat-7.1.3.15-0 bos.perf.perfstat-... passed

Result: Package existence check passed for "bos.perf.perfstat"

Check: Package existence for "bos.perf.proctools"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

LHRTEST2101 bos.perf.proctools-7.1.3.15-0 bos.perf.proctools-... passed

LHRTEST1101 bos.perf.proctools-7.1.3.15-0 bos.perf.proctools-... passed

Result: Package existence check passed for "bos.perf.proctools"

Check: Package existence for "xlC.aix61.rte"

Node Name Available Required Status
