1.3.2、Configure the zone files
# vi /etc/named.rfc1912.zones ##append the following zone definitions
zone "localdomain" IN {
type master;
file "forward.zone";
allow-update { none; };
};
zone "0.168.192.in-addr.arpa" IN {
type master;
file "reverse.zone";
allow-update { none; };
};
1.3.3、Create the forward zone database file
# cd /var/named/
# cp -p named.localhost forward.zone
# vi forward.zone ##append the following at the end of the forward zone file
$TTL 1D
@ IN SOA node1.localdomain. root.localdomain. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
        IN NS node1.localdomain.
node1 IN A 192.168.0.3
scan-cluster.localdomain. IN A 192.168.0.8
scan-cluster IN A 192.168.0.8
1.3.4、Create the reverse zone database file
# cp -p named.loopback reverse.zone
# vi reverse.zone ##append the following at the end of the reverse zone file
$TTL 1D
@ IN SOA node1.localdomain. root.localdomain. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
        IN NS node1.localdomain.
3 IN PTR node1.localdomain.
8 IN PTR scan-cluster.localdomain.
8 IN PTR scan-cluster.
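Optionally, before restarting named, the configuration and both zone files can be syntax-checked (paths assume the default /var/named layout used above):
# named-checkconf /etc/named.conf
# named-checkzone localdomain /var/named/forward.zone
# named-checkzone 0.168.192.in-addr.arpa /var/named/reverse.zone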
# service named restart
# vi /etc/resolv.conf ##append the following at the end of the file
search localdomain
nameserver 192.168.0.3
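Once named has restarted and /etc/resolv.conf points at 192.168.0.3, name resolution can be spot-checked from either node (nslookup is provided by bind-utils; dig or host work equally well):
# nslookup node1.localdomain
# nslookup scan-cluster.localdomain
# nslookup 192.168.0.8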
1.4、Configure SSH user equivalence for the grid/oracle users
##node1
node1->env |grep ORA
ORACLE_UNQNAME=devdb
ORACLE_SID=devdb1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=node1.localdomain
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
##node2
node2->env |grep ORA
ORACLE_UNQNAME=devdb
ORACLE_SID=devdb2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=node2.localdomain
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
During the Cluster Ready Services (CRS) and RAC installation, the Oracle Universal Installer (OUI) must be able to copy the software to all RAC nodes as the oracle user without being prompted for a password.
##node1
# su - oracle
node1->mkdir ~/.ssh
node1-> chmod 700 ~/.ssh
node1->ssh-keygen -t rsa ##press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
1f:a5:6f:8b:c9:e9:a3:04:4d:99:27:6a:3d:77:4d:1e oracle@node1.localdomain
node1->ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
fc:83:1c:73:37:52:50:90:50:aa:8b:41:9f:61:b9:6b oracle@node1.localdomain
##node2
# su - oracle
node2->mkdir ~/.ssh
node2->chmod 700 ~/.ssh
node2->ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
e9:fd:b0:9b:1e:57:56:07:51:9c:2e:99:33:dc:0d:03 oracle@node2.localdomain
node2->ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
67:f7:9a:09:e1:9e:c2:4f:6b:9a:e3:71:ce:c4:eb:08 oracle@node2.localdomain
##back on node1
node1->cd ~
node1->ll -a
node1->cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
node1->cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
node1->ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node2 (192.168.0.4)' can't be established.
RSA key fingerprint is 36:5f:59:0d:52:d3:24:68:7d:60:59:da:10:37:b1:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.0.4' (RSA) to the list of known hosts.
oracle@node2's password:
node1->ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@node2's password:
*****copy the authorized_keys file to node2
node1->scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
oracle@node2's password:
authorized_keys 100% 2040 2.0KB/s 00:00
##Verify oracle user equivalence
Run the following commands on every node (a loop version is sketched after the list); the second time they are run, no password prompt should appear:
ssh node1 date
ssh node2 date
ssh node1-priv date
ssh node2-priv date
ssh node1.localdomain date
ssh node2.localdomain date
ssh node1-priv.localdomain date
ssh node2-priv.localdomain date
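The eight checks above can also be driven from a small loop on each node (a sketch; the host list matches the names used in this guide):
for h in node1 node2 node1-priv node2-priv \
         node1.localdomain node2.localdomain \
         node1-priv.localdomain node2-priv.localdomain; do
    ssh $h date    ## answer yes to any first-time host-key prompt; no password should be asked
done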
Configure user equivalence for the grid user by repeating the steps above.
1.5、Install the RPM packages and configure the ASM disks
1、Partition the shared disks (run on only one of the two nodes); the partitioning commands are not shown here, see the sketch below
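A minimal sketch of the partitioning step, assuming /dev/sdb through /dev/sdf are the shared LUNs used for ASM below; fdisk is run once per disk and a single primary partition spanning the whole disk is created:
# fdisk /dev/sdb     ## interactive answers: n, p, 1, accept the defaults, then w
# partprobe          ## re-read the partition tables; run partprobe on the other node as well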
2、Install the ASM packages (on every node)
# rpm -ivh kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm
# rpm -ivh oracleasm-support-2.1.8-1.el5.x86_64.rpm
# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
# oracleasm configure -i
# oracleasm init
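oracleasm configure -i prompts for the driver owner, group, and on-boot behaviour; typical answers look like the following (grid/asmadmin are assumptions here, substitute the grid software owner and ASM administration group actually used in your environment):
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y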
3、Zero out the ASM disk headers
# dd if=/dev/zero of=/dev/sdb1 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 2.18097 s, 48.1 MB/s
# dd if=/dev/zero of=/dev/sdc1 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 1.29817 s, 80.8 MB/s
# dd if=/dev/zero of=/dev/sdd1 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 1.31786 s, 79.6 MB/s
# dd if=/dev/zero of=/dev/sde1 bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 1.43146 s, 73.3 MB/s
# dd if=/dev/zero of=/dev/sdf1 bs=8192 count=12800
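The same zeroing can be expressed as a single loop over the shared partitions (equivalent to the individual dd commands above):
# for d in sdb1 sdc1 sdd1 sde1 sdf1; do dd if=/dev/zero of=/dev/$d bs=8192 count=12800; done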
4、Map the partitions to ASM disks (run on only one of the two nodes)
# oracleasm scandisks ##rescan first so the ASM disk information is current
# oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
# oracleasm createdisk VOL2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
# oracleasm createdisk VOL3 /dev/sdd1
Writing disk header: done
# oracleasm createdisk VOL4 /dev/sde1
Writing disk header: done
Instantiating disk: done
# oracleasm createdisk VOL5 /dev/sdf1
5、Disable NetworkManager
# chkconfig --level 123456 NetworkManager off
# service NetworkManager stop
Stopping NetworkManager daemon: [ OK ]
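A quick confirmation that the service is now disabled in every runlevel:
# chkconfig --list NetworkManager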
1.6、Install the cvuqdisk package, verify the grid environment, and fix the failed checks
1、Install cvuqdisk (on every node)
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -qa |grep cvuqdisk
cvuqdisk-1.0.9-1.x86_64
# ll /home/grid/grid/rpm
total 12
-rwxr-xr-x 1 root root 8551 Sep 22 2011 cvuqdisk-1.0.9-1.rpm
# ls -l /usr/sbin/cvuqdisk
-rwsr-xr-x 1 root oinstall 10928 Sep 4 2011 /usr/sbin/cvuqdisk
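The install command itself is not shown in the excerpt above; based on the rpm listed under /home/grid/grid/rpm, it would be run as root on each node after exporting CVUQDISK_GRP:
# rpm -ivh /home/grid/grid/rpm/cvuqdisk-1.0.9-1.rpm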
2、Verify the grid environment
node1->find /home/grid/grid/ -name runcluvfy.sh
/home/grid/grid/runcluvfy.sh
node1->cd /home/grid/grid
node1->./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2 -verbose >> /home/grid/runcluvfy.check
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
node1 passed
node2 (failed) ASMLib does not list the disks "[VOL2, VOL1, VOL5, VOL4, VOL3]"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 missing sysstat-5.0.5 failed
node1 missing sysstat-5.0.5 failed
Result: Package existence check failed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 missing pdksh-5.2.14 failed
node1 missing pdksh-5.2.14 failed
Result: Package existence check failed for "pdksh"
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
node2 failed
node1 failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node2,node1
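Since the full report was redirected to /home/grid/runcluvfy.check, the failed items can be located quickly, for example:
node1->grep -i fail /home/grid/runcluvfy.check
The fixes for each failed check follow.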
(1) The other node must also run oracleasm scandisks (this clears the failed ASMLib check)
# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOL1"
Instantiating disk "VOL3"
Instantiating disk "VOL2"
Instantiating disk "VOL4"
Instantiating disk "VOL5"
# oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
(2) sysstat does not strictly have to be installed, but installing it from the local repository clears the check
# yum --disablerepo=* --enablerepo=c6-media install sysstat -y
(3) Remove ksh and install pdksh
# yum --disablerepo=* --enablerepo=c6-media install pdksh
Loaded plugins: fastestmirror, refresh-packagekit
Loading mirror speeds from cached hostfile
c6-media | 4.0 kB 00:00 ...
Setting up Install Process
No package pdksh available.
Error: Nothing to do
# rpm -e ksh-20120801-10.el6.x86_64
# rpm -ivh pdksh-5.2.14-30.x86_64.rpm
warning: pdksh-5.2.14-30.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 73307de6: NOKEY
Preparing... ########################################### [100%]
1:pdksh ########################################### [100%]
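A quick check on both nodes that ksh has been removed and pdksh is in place:
# rpm -qa | grep ksh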
(4) PRVF-5636 and PRVF-5637 are caused by a bug that affects RHEL 6 and later; it is fixed in 11.2.0.4
# sed -i 's/recursion yes/recursion no/1' /etc/named.conf ##fixes PRVF-5636
# service named restart
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
node2 failed
node1 failed
PRVF-5637 : DNS response time could not be checked on following nodes: node2,node1
##add the allow-query-cache directive
# sed -i 's/allow-query { any; };/&\n\ allow-query-cache\ { any; };/' /etc/named.conf ##fixes PRVF-5637
# vi /etc/named.conf ##only these three lines are changed
allow-query { any; };
allow-query-cache { any; };
recursion no;
# service named restart
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes