[Experience Share] Installing Oracle 10g RAC on RedHat EL5: CRS Installation

Posted 2014-5-8 08:46:14
System environment:
OS:      RedHat EL5
Cluster: Oracle CRS 10.2.0.1.0
Oracle:  Oracle 10.2.0.1.0

[Figure: RAC system architecture]
2. CRS Installation
  Cluster Ready Services (CRS) is the Oracle software responsible for cluster resource management in RAC; when building a RAC system it must be installed before anything else.
The installation runs in graphical mode, as the oracle user (on node1).
Note: edit the installer configuration file to add redhat-5 support.
[oracle@node1 install]$ pwd
/home/oracle/cluster/install
[oracle@node1 install]$ ls
addLangs.sh  images   oneclick.properties  oraparamsilent.ini  response
addNode.sh   lsnodes  oraparam.ini         resource            unzip
[oracle@node1 install]$ vi oraparam.ini
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,redhat-5,UnitedLinux-1.0,asianux-1,asianux-2
[oracle@node1 cluster]$ ./runInstaller
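The oraparam.ini edit above can also be scripted. A minimal sketch, demonstrated here on a throwaway copy (`oraparam.ini.demo` is a demo file I introduce; apply the same sed to the real /home/oracle/cluster/install/oraparam.ini before launching runInstaller):

```shell
# Recreate the stock [Certified Versions] line on a demo copy, then append
# redhat-5 after redhat-4, exactly as the manual vi edit does.
printf '[Certified Versions]\nLinux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2\n' > oraparam.ini.demo
sed -i 's/^\(Linux=.*redhat-4\)/\1,redhat-5/' oraparam.ini.demo
grep '^Linux=' oraparam.ini.demo
```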
Welcome screen.
Note: the CRS home directory must not be the same as the Oracle software home; it needs a separate directory of its own.
[oracle@node1 ~]$ ls -l /u01
total 24
drwxr-xr-x  3 oracle oinstall  4096 May  5 17:04 app
drwxr-xr-x 36 oracle   oinstall  4096 May  7 11:08 crs_1
drwx------  2 oracle oinstall 16384 May  4 15:59 lost+found
[oracle@node1 ~]$
Add the cluster nodes (if user equivalence between the hosts is misconfigured, node2 will not be discovered here).
Edit the public interface attributes (the public NIC is used for communication with clients).
The OCR must be on a raw device (External Redundancy needs only one raw device; a mirror can be added after installation).
The voting disk must also be on a raw device (External Redundancy needs only one raw device; more raw devices can be added after installation for redundancy).
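On RHEL5, the raw-device bindings behind these screens are typically made persistent in /etc/sysconfig/rawdevices. A sketch on a demo copy; the block-device names (/dev/sdb1, /dev/sdb2) and the demo filename are assumptions for illustration:

```shell
# Demo copy of /etc/sysconfig/rawdevices; the sdb partitions are assumed.
# raw1 backs the OCR, raw2 the voting disk (as seen later in the root.sh output).
cat > rawdevices.demo <<'EOF'
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
EOF
# After editing the real file: restart the rawdevices service and give the
# oracle user access, commonly:
#   service rawdevices restart
#   chown oracle:oinstall /dev/raw/raw1 /dev/raw/raw2
cat rawdevices.demo
```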
Start the installation (the installer also copies the software to node2).
The installer then prompts you to run the scripts on the two nodes, in order:
node1:

[root@node1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
node2:
[root@node2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
node1:
[root@node1 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       node1
CSS is inactive on these nodes.
       node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
root.sh completed successfully on node1!
node2:

[root@node2 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       node1
       node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
If the error above appears, fix it as follows:
[root@node2 bin]# vi vipca
Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
      export LD_LIBRARY_PATH
      #Remove this workaround when the bug 3937317 is fixed
      arch=`uname -m`
      if [ "$arch" = "i686" -o "$arch" = "ia64" ]
      then
           LD_ASSUME_KERNEL=2.4.19
           export LD_ASSUME_KERNEL
      fi
unset LD_ASSUME_KERNEL        # <-- add this line
      #End workaround

[root@node2 bin]# vi srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL        # <-- add this line
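The same one-line workaround can be applied with sed instead of vi. A sketch, demonstrated on a throwaway copy (`srvctl.demo` is a demo file I introduce); on the real system apply it to both $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl, and note the lines in vipca are indented, so adjust the pattern accordingly:

```shell
# Demo copy reproducing the relevant lines of srvctl.
printf 'LD_ASSUME_KERNEL=2.4.19\nexport LD_ASSUME_KERNEL\n' > srvctl.demo
# Insert "unset LD_ASSUME_KERNEL" right after the export, as in the manual edit.
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' srvctl.demo
cat srvctl.demo
```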

Re-run root.sh on node 2.
Note: root.sh can only be executed once; to run it again, you must first run rootdelete.sh.
[root@node2 bin]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@node2 bin]# cd ../install
[root@node2 install]# ls
cluster.ini         install.incl   rootaddnode.sbs    rootdelete.sh  templocal
cmdllroot.sh        make.log       rootconfig         rootinstall
envVars.properties  paramfile.crs  rootdeinstall.sh   rootlocaladd
install.excl        preupdate.sh   rootdeletenode.sh  rootupgrade
[root@node2 install]# ./rootdelete.sh
CRS-0210: Could not find resource 'ora.node2.LISTENER_NODE2.lsnr'.
CRS-0210: Could not find resource 'ora.node2.ons'.
CRS-0210: Could not find resource 'ora.node2.vip'.
CRS-0210: Could not find resource 'ora.node2.gsd'.
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
[root@node2 install]#
node 2 errors out again:
[root@node2 install]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       node1
       node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Fix: configure the cluster network interfaces.
[root@node2 bin]# ./oifcfg iflist
eth0  192.168.8.0
eth1  10.10.10.0
[root@node2 bin]# ./oifcfg getif
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.8.0:public
[root@node2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@node2 bin]# ./oifcfg getif
eth0  192.168.8.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
Then run VIPCA on node2:
Run vipca as root (in /u01/crs_1/bin).
The settings entered here should be consistent with the /etc/hosts file.
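For reference, here is a hosts layout consistent with the networks shown above. All addresses and the demo filename are assumptions for illustration (public on 192.168.8.0/24, private on 10.10.10.0/24); only the name/network pattern matters:

```shell
# Demo /etc/hosts fragment for a two-node RAC; every address is an example.
cat > hosts.rac.demo <<'EOF'
# public
192.168.8.11   node1
192.168.8.12   node2
# virtual IPs (must be on the public network, unused until vipca configures them)
192.168.8.111  node1-vip
192.168.8.112  node2-vip
# private interconnect
10.10.10.11    node1-priv
10.10.10.12    node2-priv
EOF
cat hosts.rac.demo
```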
Start the configuration.
After vipca completes successfully, the CRS services work normally.
Installation complete!

Verify CRS:
[root@node2 bin]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1      
ora.node1.ons  application    ONLINE    ONLINE    node1      
ora.node1.vip  application    ONLINE    ONLINE    node1      
ora.node2.gsd  application    ONLINE    ONLINE    node2      
ora.node2.ons  application    ONLINE    ONLINE    node2      
ora.node2.vip  application    ONLINE    ONLINE    node2      
[root@node1 ~]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1      
ora.node1.ons  application    ONLINE    ONLINE    node1      
ora.node1.vip  application    ONLINE    ONLINE    node1      
ora.node2.gsd  application    ONLINE    ONLINE    node2      
ora.node2.ons  application    ONLINE    ONLINE    node2      
ora.node2.vip  application    ONLINE    ONLINE    node2      
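The crs_stat output above can also be checked mechanically. A small sketch, run here against captured sample text (the `crs_stat_sample` variable is a stand-in I introduce); on a live node, pipe `crs_stat -t` into the same awk filter:

```shell
# Flag any resource whose Target or State column is not ONLINE.
crs_stat_sample='ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.vip  application    ONLINE    OFFLINE   node2'
echo "$crs_stat_sample" \
  | awk '$3!="ONLINE" || $4!="ONLINE" {print "NOT ONLINE:", $1; bad=1} END{exit bad}' \
  || echo "at least one resource is down"
```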

Appendix: error case
If the following error occurs while running root.sh:
[Screenshot: root.sh error]
Running vipca as root on the failing node resolves it.

At this point the CRS installation is complete!



