Finally, as shown below: do NOT click OK here; click OK only after the operations below have been completed.
Pay close attention to the following steps. According to the prompt in that window:
Run the first script on the two nodes in turn (open two terminal windows and execute as the root user). When the second script is executed on node YCWEB1:
When the script output reaches "Expecting the CRS daemons to be up within 600 seconds", open another terminal and, as the root user, run the following (do not wait for the script to fail and exit before doing this):
[root@ycweb1 etc]# /etc/init.d/init.evmd run &
[1] 21952
[root@ycweb1 etc]# /etc/init.d/init.cssd fatal &
[2] 22035
[root@ycweb1 etc]# /etc/init.d/init.crsd run &
[3] 22121
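After launching the three init scripts, it can be useful to poll until the daemons actually appear rather than watching by eye. A minimal polling sketch (illustrative only, not part of the install; on a real node you would poll for a CRS daemon name such as "ocssd.bin", whereas here a background "sleep" serves as a self-contained stand-in target):

```shell
# Wait until a process matching a pattern appears, or give up after
# "limit" seconds. The "sleep 30" target below is a stand-in so the
# sketch runs anywhere; substitute the daemon name on a real node.
wait_for() {
  pattern=$1; limit=$2; t=0
  until pgrep -f "$pattern" >/dev/null 2>&1; do
    t=$((t + 1))
    [ "$t" -gt "$limit" ] && return 1
    sleep 1
  done
  return 0
}

sleep 30 &            # stand-in for a daemon process
target=$!
if wait_for "sleep 30" 10; then status=up; else status=down; fi
kill "$target" 2>/dev/null
echo "$status"
```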
When the root.sh script executes normally, the output is as follows:
[root@ycweb1 ~]# /rdbm/orasrv/product/10g/crs/root.sh
WARNING: directory '/rdbm/orasrv/product/10g' is not owned by root
WARNING: directory '/rdbm/orasrv/product' is not owned by root
WARNING: directory '/rdbm/orasrv' is not owned by root
WARNING: directory '/rdbm' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/rdbm/orasrv/product/10g' is not owned by root
WARNING: directory '/rdbm/orasrv/product' is not owned by root
WARNING: directory '/rdbm/orasrv' is not owned by root
WARNING: directory '/rdbm' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 ycweb1
CSS is inactive on these nodes.
 ycweb2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
Run the root.sh script on node ycweb2 in the same way as above; the result is as follows:
# /rdbm/orasrv/product/10g/crs/root.sh
WARNING: directory '/rdbm/orasrv/product/10g' is not owned by root
WARNING: directory '/rdbm/orasrv/product' is not owned by root
WARNING: directory '/rdbm/orasrv' is not owned by root
WARNING: directory '/rdbm' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/rdbm/orasrv/product/10g' is not owned by root
WARNING: directory '/rdbm/orasrv/product' is not owned by root
WARNING: directory '/rdbm/orasrv' is not owned by root
WARNING: directory '/rdbm' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 ycweb1
 ycweb2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/rdbm/orasrv/product/10g/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
The error here, "libpthread.so.0: cannot open shared object file", can be ignored, but the following steps must be performed:
1) As the orasrv user, edit the vipca file and locate the following block (do this on both nodes):
if [ ... ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # add this line
#End workaround
2) As the orasrv user, edit the srvctl file (do this on both nodes):
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # add this line
# Run ops control utility
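Steps 1) and 2) add the same line after the same export in both files, so the edit can be scripted. A minimal sed sketch, demonstrated on a temporary copy of the relevant fragment (to apply it for real, point "f" at the actual vipca and srvctl files in the CRS home bin directory; the temp-file demonstration is purely illustrative):

```shell
# Append "unset LD_ASSUME_KERNEL" directly after the export line.
# Demonstrated on a temp file mimicking the vipca/srvctl fragment.
f=$(mktemp)
cat > "$f" <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
EOF
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"   # GNU sed
result=$(grep -n 'unset LD_ASSUME_KERNEL' "$f")
echo "$result"
rm -f "$f"
```

Because the pattern anchors on the export line, the unset lands immediately after it, matching the manual edit described above.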
3) On node ycweb1, su to the root user and run #./vipca; it reports the following errors:
Error 0(Native: listNetInterfaces:[3])
Error 0(Native: listNetInterfaces:[3])
Perform the following operations:
[root@ycweb1 bin]# oifcfg setif -global bond0/10.36.106.0:public
[root@ycweb1 bin]# oifcfg setif -global bond1/10.10.2.0:cluster_interconnect
[root@ycweb1 bin]# oifcfg getif        # view the current node's network interface information
bond0 10.36.106.0 global public
bond1 10.10.2.0 global cluster_interconnect
[root@ycweb1 bin]# export LANG=
[root@ycweb1 bin]# ./vipca
The vipca wizard then proceeds as follows:
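Note that oifcfg setif takes the subnet address (10.36.106.0), not the node's host IP. If in doubt, the subnet can be derived from an interface's IP address and netmask; a small sketch (the sample host address 10.36.106.21 is hypothetical, chosen to sit on the bond0 subnet used in this document):

```shell
# Bitwise-AND each octet of the IP with the netmask to obtain the
# subnet address that "oifcfg setif" expects. Sample values only.
ip=10.36.106.21      # hypothetical host address on bond0
mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
subnet=$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))
echo "$subnet"
```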
Click NEXT.
Click NEXT, and configure as shown:
Click NEXT.