The grid user:
On node1:
[root@node1 ~]# su - grid
[grid@node1 ~]$ ssh-keygen -t rsa     (press Enter at every prompt; leave the passphrase empty)
[grid@node1 ~]$ ssh-keygen -t dsa     (press Enter at every prompt; leave the passphrase empty)
[grid@node1 ~]$ cd /home/grid/.ssh/   (the .ssh directory exists only after the commands above have been run)
[grid@node1 .ssh]$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub
On node2:
[root@node2 ~]# su - grid
[grid@node2 ~]$ ssh-keygen -t rsa     (press Enter at every prompt; leave the passphrase empty)
[grid@node2 ~]$ ssh-keygen -t dsa     (press Enter at every prompt; leave the passphrase empty)
[grid@node2 ~]$ cd /home/grid/.ssh/   (the .ssh directory exists only after the commands above have been run)
[grid@node2 .ssh]$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub
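If you would rather not press Enter through the prompts, ssh-keygen can also be run non-interactively; a minimal sketch that produces the same key files with empty passphrases (run it as grid on each node):

[grid@node1 ~]$ mkdir -p -m 700 /home/grid/.ssh                      # -f does not create the directory for you
[grid@node1 ~]$ ssh-keygen -t rsa -N "" -f /home/grid/.ssh/id_rsa    # -N "" sets an empty passphrase, -f names the key file
[grid@node1 ~]$ ssh-keygen -t dsa -N "" -f /home/grid/.ssh/id_dsa    # same for the DSA key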
On node1:
[grid@node1 .ssh]$ cat id_rsa.pub >> authorized_keys
(append node1's own RSA public key to the authorized_keys file)
[grid@node1 .ssh]$ cat id_dsa.pub >> authorized_keys
(append node1's own DSA public key to the authorized_keys file)
[grid@node1 .ssh]$ ssh 80.8.29.2 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys
(append node2's RSA public key to node1's authorized_keys file)
[grid@node1 .ssh]$ ssh 80.8.29.2 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys
(append node2's DSA public key to node1's authorized_keys file)
[grid@node1 .ssh]$ scp authorized_keys 80.8.29.2:/home/grid/.ssh/authorized_keys
(at this point authorized_keys contains the keys of every node, i.e. node1 and node2, so simply scp it to the corresponding directory on node2)
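The collection steps above can also be scripted. A rough sketch, run as grid on node1 with the same two nodes (extend the host list if you have more):

[grid@node1 ~]$ cd /home/grid/.ssh
[grid@node1 .ssh]$ for host in node1 node2; do                     # gather every node's public keys
>     ssh $host cat /home/grid/.ssh/id_rsa.pub /home/grid/.ssh/id_dsa.pub >> authorized_keys
> done
[grid@node1 .ssh]$ chmod 600 authorized_keys                       # sshd may refuse the file if its permissions are too open
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/authorized_keys   # push the merged file to node2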
Test. This must be done for the grid user on every node (the same will apply to the oracle user later); the end goal is that none of the operations below asks for a yes/no confirmation! (Personally, I find the effect easier to see if you leave off the trailing date while testing.)
On node1:
[grid@node1 .ssh]$ ssh 80.8.29.1 date
[grid@node1 .ssh]$ ssh 80.8.29.2 date
[grid@node1 .ssh]$ ssh 10.20.89.1 date
[grid@node1 .ssh]$ ssh 10.20.89.2 date
[grid@node1 .ssh]$ ssh node1 date
[grid@node1 .ssh]$ ssh node2 date
[grid@node1 .ssh]$ ssh node1.cty.com date
[grid@node1 .ssh]$ ssh node2.cty.com date
[grid@node1 .ssh]$ ssh node1-priv date
[grid@node1 .ssh]$ ssh node2-priv date
[grid@node1 .ssh]$ ssh node1-priv.cty.com date
[grid@node1 .ssh]$ ssh node2-priv.cty.com date
[grid@node1 .ssh]$ ls
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
On node2:
[grid@node2 .ssh]$ ssh 80.8.29.1 date
[grid@node2 .ssh]$ ssh 80.8.29.2 date
[grid@node2 .ssh]$ ssh 10.20.89.1 date
[grid@node2 .ssh]$ ssh 10.20.89.2 date
[grid@node2 .ssh]$ ssh node1 date
[grid@node2 .ssh]$ ssh node2 date
[grid@node2 .ssh]$ ssh node1.cty.com date
[grid@node2 .ssh]$ ssh node2.cty.com date
[grid@node2 .ssh]$ ssh node1-priv date
[grid@node2 .ssh]$ ssh node2-priv date
[grid@node2 .ssh]$ ssh node1-priv.cty.com date
[grid@node2 .ssh]$ ssh node2-priv.cty.com date
[grid@node2 .ssh]$ ls
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
Once every ssh test above completes without a yes/no confirmation, the SSH configuration is finished. This step matters a lot: it is not difficult, but it is tedious, so do it carefully.
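Instead of typing every check by hand, the tests can also be run in a loop; a small sketch using exactly the addresses and host names listed above (run it as grid on each node in turn):

[grid@node1 .ssh]$ for h in 80.8.29.1 80.8.29.2 10.20.89.1 10.20.89.2 \
>                          node1 node2 node1.cty.com node2.cty.com \
>                          node1-priv node2-priv node1-priv.cty.com node2-priv.cty.com; do
>     ssh $h date     # each call must print the date with no yes/no prompt and no password prompt
> done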
13: At this point, go to node1 and run ntpq -p to check the reach value:
[root@node1 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.          10 l   39   64  377    0.000    0.000   0.000
OK, the reach value is already well past 17, so we can now go to node2 and synchronize its time:
[root@node2 ~]# /etc/init.d/ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@node2 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.          10 l    1   64    1    0.000    0.000   0.000
 node1.cty.com   LOCAL(0)        11 u   36   64    1    0.192    1.618   0.000
OK, the time synchronization setup is complete.
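If you prefer not to watch the reach column by eye, it can be polled in a loop before moving on to node2; a rough sketch, assuming the standard ntpq -p column layout shown above:

[root@node1 ~]# while true; do
>     reach=$(ntpq -p | awk '/\*LOCAL/ {print $7}')   # 7th column of the LOCAL(0) line is reach
>     echo "reach=$reach"
>     [ "$reach" -ge 17 ] && break                    # once the server has answered enough polls, continue
>     sleep 60
> done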
14: Configure the storage and export the shared disks needed by the ASM disk groups:
First, shut down the storage host and, as shown in the figure below, add one new 50G disk for the ASM disk group:
Then power the storage host back on and get ready to export the newly added disk as shared storage.
[root@storage ~]# yum install scsi-target-utils -y
[root@storage ~]# vim /etc/tgt/targets.conf
Edit the configuration file as shown in the screenshot: use the commented example at lines 38-40 of the file as a template and write the new section at lines 42-47 yourself. The initiator-address parameter is the access-control list; it makes the exported storage discoverable only by the client machines you list. (Important!)
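The screenshot itself is not reproduced here; based on the target name, backing store and ACL that show up in the tgtadm output below, the added section would look roughly like this (a sketch only, adjust the IQN, device and addresses to your environment):

<target iqn.2014-05.cty.com:rac_disk>
    backing-store /dev/sdb           # the new 50G disk to be shared for ASM
    initiator-address 80.8.29.1      # only node1 ...
    initiator-address 80.8.29.2      # ... and node2 are allowed to discover this target
</target>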
[root@storage ~]# /etc/init.d/tgtd restart    (restart the service)
Stopping SCSI target daemon:                               [  OK  ]
Starting SCSI target daemon: Starting target framework daemon
Check the exported storage:
[root@storage ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2014-05.cty.com:rac_disk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 53687 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        80.8.29.1
        80.8.29.2
As shown above, LUN: 1 under Target 1 is the exported storage.
[root@storage ~]# chkconfig tgtd on    (do not forget this last step)
15: Import the storage (on both node1 and node2)
[root@node1 ~]# yum install iscsi-initiator-utils -y
[root@node1 ~]# rm -rf /var/lib/iscsi/*
[root@node1 ~]# /etc/init.d/iscsid start
Starting iscsid:                                           [  OK  ]
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 80.8.29.10:3260
80.8.29.10:3260,1 iqn.2014-05.cty.com:rac_disk
(the storage exported by the storage host has been discovered)
[root@node1 ~]# iscsiadm -m node -T iqn.2014-05.cty.com:rac_disk -p 80.8.29.10:3260 -l
(log in to the storage provided for RAC)
Logging in to [iface: default, target: iqn.2014-05.cty.com:rac_disk, portal: 80.8.29.10,3260] (multiple)
Login to [iface: default, target: iqn.2014-05.cty.com:rac_disk, portal: 80.8.29.10,3260] successful.
[root@node1 ~]# chkconfig iscsi on
[root@node1 ~]# chkconfig iscsid on
Then do not forget to run the same commands on node2 to import the storage there as well!
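To double-check on each node that the login actually took effect, the active session and the new disk can be inspected; a brief sketch:

[root@node1 ~]# iscsiadm -m session      # should list a session to iqn.2014-05.cty.com:rac_disk on 80.8.29.10:3260
[root@node1 ~]# fdisk -l                 # the imported 50G LUN should now appear as a local disk (see step 16)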
16: Bind the disks with udev (on node1 and node2)
First use fdisk -l to look at the imported storage:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b0f9b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26        6400    51200000   83  Linux
/dev/sda3            6400        6922     4194304   82  Linux swap / Solaris

Disk /dev/sdb: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
As shown above, /dev/sdb is the imported storage.
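Step 16 so far only identifies /dev/sdb; the binding itself is normally done with a udev rule keyed on the disk's SCSI ID, so the shared disk gets a stable device name owned by the grid user. The sketch below is for RHEL/OEL 6 and is full of assumptions: the rule file name, the asm-disk1 device name, the asmadmin group and the RESULT string are all placeholders (query the real ID with /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb):

[root@node1 ~]# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00010001", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@node1 ~]# start_udev     # reload the rules; /dev/asm-disk1 should then appear with grid:asmadmin ownership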