
[Experience Sharing] Installing Oracle 10g RAC on VMware Server, Part 3

  NOD1's configuration is done. Next, configure NOD2.
  Power off NOD1 and copy the files under the NOD1 directory to NOD2.
  In the VMware Server console press Ctrl+O and open the Red Hat Enterprise Linux 4.vmx under the NOD2 directory.
  Right-click the VM, choose Settings, open the Options tab, and change the name to NOD2.
  
  
  Click OK, then click 'Start this virtual machine'.
  When prompted, select 'Create a new identifier'.
  Log in as root and modify the NIC configuration.
  The error below came up because the .143 address I had configured was already in use on the office network; change it to an unused IP (NOD1's hosts file will be updated accordingly later). Change the hostname to NOD2.
  
  Check the hostname and add the entries to /etc/hosts:
  [root@NOD1 ~]# uname -n
  NOD2
  [root@NOD1 ~]# vim /etc/hosts
  127.0.0.1 localhost
  128.1.100.144   nod1
  128.1.100.145   nod2
  128.1.100.201   nod1-vip
  128.1.100.202   nod2-vip
  10.10.10.100    nod1-priv
  10.10.10.101    nod2-priv
  
  Edit the oracle user's profile:
  [root@NOD1 ~]# vim /home/oracle/.bash_profile
  ORACLE_SID=ORCL2
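  For reference, a fuller .bash_profile for the oracle user might look like the sketch below. The ORACLE_BASE and home paths are assumptions inferred from the /home/ora10g layout used later in this install; on NOD1 the SID would be ORCL1 instead.

  # Sketch of /home/oracle/.bash_profile on NOD2 (paths assumed, not from the original post)
  export ORACLE_BASE=/home/ora10g
  export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1    # hypothetical database home
  export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1  # CRS home used later in this guide
  export ORACLE_SID=ORCL2                                # ORCL1 on NOD1
  export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH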
  
  Set up SSH user equivalence. During the installation of Cluster Ready Services (CRS) and the RAC database, the OUI must copy software to all RAC nodes as the oracle user without being prompted for a password; in Oracle 10g this is handled with SSH.
  Reboot NOD2 and power NOD1 back on; I'll do the configuration on NOD1 first.
  During the reboot NOD2 launched the hardware detection tool, because the NIC reconfiguration had detected a new MAC address while the old entry was never removed; just delete the old one.
  Log in to NOD1 and update its /etc/hosts file:
  128.1.100.145 nod2
  
  Generate the public/private key pairs as the oracle user:
  [root@NOD1 ~]# su - oracle
  [oracle@NOD1 ~]$ mkdir ~/.ssh
  [oracle@NOD1 ~]$ chmod 700 ~/.ssh
  [oracle@NOD1 ~]$ ssh-keygen -t rsa
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/oracle/.ssh/id_rsa.
  Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
  The key fingerprint is:
  60:96:10:80:22:b9:82:7b:1e:db:ab:10:78:a4:5e:fb oracle@NOD1
  [oracle@NOD1 ~]$ ssh-keygen -t dsa
  Generating public/private dsa key pair.
  Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/oracle/.ssh/id_dsa.
  Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
  The key fingerprint is:
  6c:c4:cb:94:9e:db:7d:38:02:e7:ae:f2:3a:46:88:dd oracle@NOD1
  [oracle@NOD1 ~]$
  
  Do the same on NOD2:
  Last login: Thu Dec 31 10:30:10 2009 from nod1
  [root@NOD2 ~]# su - oracle
  [oracle@NOD2 ~]$ mkdir ~/.ssh
  [oracle@NOD2 ~]$ chmod 700 ~/.ssh
  [oracle@NOD2 ~]$ ssh-keygen -t rsa
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/oracle/.ssh/id_rsa.
  Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
  The key fingerprint is:
  6b:66:fe:ed:f1:bc:73:6e:e7:34:84:7a:43:ba:4e:03 oracle@NOD2
  [oracle@NOD2 ~]$ ssh-keygen -t dsa
  Generating public/private dsa key pair.
  Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/oracle/.ssh/id_dsa.
  Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
  The key fingerprint is:
  84:5a:2a:6b:50:70:6f:4b:40:0b:3a:c0:64:6c:0f:f7 oracle@NOD2
  [oracle@NOD2 ~]$
  
  Back on NOD1, gather all four public keys into authorized_keys and copy the file to NOD2:
  [root@NOD1 ~]# su - oracle
  [oracle@NOD1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  [oracle@NOD1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  [oracle@NOD1 ~]$ ssh nod2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  The authenticity of host 'nod2 (128.1.100.145)' can't be established.
  RSA key fingerprint is e8:f1:5a:88:be:6e:ef:ad:5c:a1:2e:36:9c:74:4d:a0.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'nod2,128.1.100.145' (RSA) to the list of known hosts.
  oracle@nod2's password:
  [oracle@NOD1 ~]$ ssh nod2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  oracle@nod2's password:
  [oracle@NOD1 ~]$ scp ~/.ssh/authorized_keys nod2:~/.ssh/authorized_keys
  oracle@nod2's password:
  authorized_keys                               100% 1644     1.6KB/s   00:00   
  [oracle@NOD1 ~]$
  
  Verify that logins no longer ask for a password (the first connection to each alias still prompts you to accept its host key):
  [oracle@NOD1 ~]$ ssh nod1 date
  The authenticity of host 'nod1 (127.0.0.1)' can't be established.
  RSA key fingerprint is e8:f1:5a:88:be:6e:ef:ad:5c:a1:2e:36:9c:74:4d:a0.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'nod1' (RSA) to the list of known hosts.
  Thu Dec 31 10:57:52 CST 2009
  [oracle@NOD1 ~]$ ssh nod1-priv date
  The authenticity of host 'nod1-priv (10.10.10.100)' can't be established.
  RSA key fingerprint is e8:f1:5a:88:be:6e:ef:ad:5c:a1:2e:36:9c:74:4d:a0.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'nod1-priv,10.10.10.100' (RSA) to the list of known hosts.
  Thu Dec 31 10:58:00 CST 2009
  [oracle@NOD1 ~]$ ssh nod2-priv date
  The authenticity of host 'nod2-priv (10.10.10.101)' can't be established.
  RSA key fingerprint is e8:f1:5a:88:be:6e:ef:ad:5c:a1:2e:36:9c:74:4d:a0.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'nod2-priv,10.10.10.101' (RSA) to the list of known hosts.
  Thu Dec 31 10:50:07 CST 2009
  [oracle@NOD1 ~]$
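  To cover the whole matrix quickly, a short loop over every alias can be run as oracle on both nodes; answer yes once per new host key, after which every line should print a date with no password prompt:

  # Run on NOD1 and again on NOD2; the VIPs are not up yet, so test only these four
  for h in nod1 nod2 nod1-priv nod2-priv; do
      ssh "$h" date
  done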
  
  Configure ASMLib (on both nodes).
  This must be run as root:
  [root@NOD1 oracle]# /etc/init.d/oracleasm config
  Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
  [root@NOD1 oracle]# /etc/init.d/oracleasm configure
  Configuring the Oracle ASM library driver.
  
  This will configure the on-boot properties of the Oracle ASM library
  driver. The following questions will determine whether the driver is
  loaded on boot and what permissions it will have. The current values
  will be shown in brackets ('[]'). Hitting <ENTER> without typing an
  answer will keep that current value. Ctrl-C will abort.
  
  Default user to own the driver interface []: oracle
  Default group to own the driver interface []: dba
  Start Oracle ASM library driver on boot (y/n) [n]: y
  Scan for Oracle ASM disks on boot (y/n) [y]: y
  Writing Oracle ASM library driver configuration: done
  Initializing the Oracle ASMLib driver: [ OK ]
  Scanning the system for Oracle ASMLib disks: [ OK ]
  [root@NOD1 oracle]#
  
  Create the ASM disks. Note that createdisk expects a partition (e.g. /dev/sdc1) rather than a whole device, which is why the first attempt below failed:
  [root@NOD1 oracle]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc
  Marking disk "VOL1" as an ASM disk: [FAILED]
  [root@NOD1 oracle]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
  Marking disk "VOL1" as an ASM disk: [ OK ]
  [root@NOD1 oracle]# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
  Marking disk "VOL2" as an ASM disk: [ OK ]
  [root@NOD1 oracle]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
  Marking disk "VOL3" as an ASM disk: [ OK ]
  [root@NOD1 oracle]# /etc/init.d/oracleasm scandisks
  Scanning the system for Oracle ASMLib disks: [ OK ]
  [root@NOD1 oracle]# /etc/init.d/oracleasm listdisks
  VOL1
  VOL2
  VOL3
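  The disks only need to be stamped once. On NOD2, after running oracleasm configure there as well, it should be enough to rescan and list them; a sketch of what to expect, assuming the shared disks are visible from that node:

  # As root on NOD2: pick up the disk labels that NOD1 already wrote
  /etc/init.d/oracleasm scandisks
  /etc/init.d/oracleasm listdisks    # should print VOL1, VOL2, VOL3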
  
  Configure the Oracle Cluster File System (OCFS2).
  Download the OCFS2 packages from:
  http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL4/i386/1.2.9-1/
  http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL4/i386/1.2.7-1/
  
  Three RPM packages are needed:
  ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686.rpm
  ocfs2console-1.2.7-1.el4.i386.rpm
  ocfs2-tools-1.2.7-1.el4.i386.rpm
  Install them on both nodes:
  [root@NOD1 ~]# ls
  anaconda-ks.cfg     ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686.rpm
  Desktop             ocfs2console-1.2.7-1.el4.i386.rpm
  install.log         ocfs2-tools-1.2.7-1.el4.i386.rpm
  install.log.syslog
  [root@NOD1 ~]# rpm -ivh ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686.rpm
  error: Failed dependencies:
          ocfs2-tools >= 1.2.6 is needed by ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686
  [root@NOD1 ~]# rpm -ivh ocfs2-tools-1.2.7-1.el4.i386.rpm
  Preparing...                ########################################### [100%]
     1:ocfs2-tools            ########################################### [100%]
  [root@NOD1 ~]# rpm -ivh ocfs2console-1.2.7-1.el4.i386.rpm
  Preparing...                ########################################### [100%]
     1:ocfs2console           ########################################### [100%]
  [root@NOD1 ~]# rpm -ivh ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686.rpm
  Preparing...                ########################################### [100%]
     1:ocfs2-2.6.9-78.EL      ########################################### [100%]
  [root@NOD1 ~]# rpm -qa | grep ocfs
  ocfs2-2.6.9-78.EL-1.2.9-1.el4
  ocfs2-tools-1.2.7-1.el4
  ocfs2console-1.2.7-1.el4
  [root@NOD1 ~]#
  
  Open an Xmanager session to NOD1, start a terminal, and enter:
  export DISPLAY=128.1.100.204:0.0
  This ran into an error.
  Checking the kernel against the OCFS2 package showed that the kernel version didn't match, which is genuinely easy to miss.
  It should have been
  ocfs2-2.6.9-78.ELsmp-1.2.9-1.el4.i686.rpm
  but I had downloaded
  ocfs2-2.6.9-78.EL-1.2.9-1.el4.i686.rpm
  which is missing the 'smp'.
  Uninstall the wrong package:
  rpm -e ocfs2-2.6.9-78.EL-1.2.9-1.el4
  Then download and install the correct one.
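  To avoid this in the first place, check the running kernel flavor before picking the RPM; the ocfs2 kernel-module package has to match uname -r exactly:

  # The module package must embed the exact running kernel string (note the 'smp' suffix)
  uname -r                # e.g. 2.6.9-78.ELsmp
  rpm -qa | grep ocfs2    # the ocfs2-<kernel>-1.2.9-1.el4 package should match it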
  After installation, configure the O2CB driver on both nodes:
  [root@NOD1 ~]# /etc/init.d/o2cb configure
  Configuring the O2CB driver.
  
  This will configure the on-boot properties of the O2CB driver.
  The following questions will determine whether the driver is loaded on
  boot. The current values will be shown in brackets ('[]'). Hitting
  <ENTER> without typing an answer will keep that current value. Ctrl-C
  will abort.
  
  Load O2CB driver on boot (y/n) [y]: y
  Cluster to start on boot (Enter "none" to clear) [ocfs2]:
  Specify heartbeat dead threshold (>=7) [61]: 61
  Specify network idle timeout in ms (>=5000) [30000]: 30000
  Specify network keepalive delay in ms (>=1000) [2000]: 2000
  Specify network reconnect delay in ms (>=2000) [2000]: 2000
  Writing O2CB configuration: OK
  Starting O2CB cluster ocfs2: Failed
  Cluster ocfs2 created
  o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
  Stopping O2CB cluster ocfs2: OK
  [root@NOD1 ~]# /etc/init.d/o2cb status
  Module "configfs": Loaded
  Filesystem "configfs": Mounted
  Module "ocfs2_nodemanager": Loaded
  Module "ocfs2_dlm": Loaded
  Module "ocfs2_dlmfs": Loaded
  Filesystem "ocfs2_dlmfs": Mounted
  Checking O2CB cluster ocfs2: Offline
  [root@NOD1 ~]#
  
  Start ocfs2console and add the nod1 and nod2 nodes. (This is exactly what the failed configure step above complained about: /etc/ocfs2/cluster.conf did not yet define a node matching this machine's hostname.)
  Choose Propagate Configuration to copy the configuration to nod2.
  Check it on NOD2:
  [root@NOD2 ~]# cat /etc/ocfs2/cluster.conf
  node:
          ip_port = 7777
          ip_address = 128.1.100.144
          number = 0
          name = nod1
          cluster = ocfs2
  
  node:
          ip_port = 7777
          ip_address = 128.1.100.145
          number = 1
          name = nod2
          cluster = ocfs2
  
  cluster:
          node_count = 2
          name = ocfs2
  
  [root@NOD2 ~]#
  
  
  Reconfigure the O2CB driver on both nodes:
  [root@NOD1 ~]# /etc/init.d/o2cb unload
  Stopping O2CB cluster ocfs2: OK
  Unmounting ocfs2_dlmfs filesystem: OK
  Unloading module "ocfs2_dlmfs": OK
  Unmounting configfs filesystem: OK
  Unloading module "configfs": OK
  [root@NOD1 ~]# /etc/init.d/o2cb configure
  Configuring the O2CB driver.
  
  This will configure the on-boot properties of the O2CB driver.
  The following questions will determine whether the driver is loaded on
  boot. The current values will be shown in brackets ('[]'). Hitting
  <ENTER> without typing an answer will keep that current value. Ctrl-C
  will abort.
  
  Load O2CB driver on boot (y/n) [y]: y
  Cluster to start on boot (Enter "none" to clear) [ocfs2]:
  Specify heartbeat dead threshold (>=7) [61]: 60
  Specify network idle timeout in ms (>=5000) [30000]: 30000
  Specify network keepalive delay in ms (>=1000) [2000]:
  Specify network reconnect delay in ms (>=2000) [2000]:
  Writing O2CB configuration: OK
  Loading module "configfs": OK
  Mounting configfs filesystem at /config: OK
  Loading module "ocfs2_nodemanager": OK
  Loading module "ocfs2_dlm": OK
  Loading module "ocfs2_dlmfs": OK
  Mounting ocfs2_dlmfs filesystem at /dlm: OK
  Starting O2CB cluster ocfs2: OK
  [root@NOD1 ~]#
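  After the reconfigure, the stack should report the cluster as online on both nodes; a quick sanity check:

  # 'Checking O2CB cluster ocfs2' should now report Online rather than Offline
  /etc/init.d/o2cb status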
  
  
  Format the file system: open ocfs2console and choose Tasks → Format.
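  The same format can be done from the command line on one node; a sketch, where the label and the block/cluster sizes are assumptions (the console uses similar defaults) and -N 2 allocates slots for the two RAC nodes:

  # Hypothetical CLI equivalent of the console's Format task (run once, on one node only)
  mkfs.ocfs2 -b 4K -C 32K -N 2 -L ocfs2home /dev/sdb1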
  
  Mount the file system on both nodes:
  [root@NOD1 /]# su - oracle
  [oracle@NOD1 ~]$ mkdir ocfs
  [oracle@NOD1 ~]$ ls
  ln ocfs p
  [oracle@NOD1 ~]$ su
  Password:
  [root@NOD1 oracle]# mount /dev/sdb1 ocfs/
  [root@NOD1 oracle]#
  
  
  To mount the file system automatically at boot, add an entry to /etc/fstab on both nodes:
  [root@NOD2 ~]# vim /etc/fstab
  /dev/sdb1 /home/oracle/ocfs ocfs2 _netdev,datavolume,nointr 0 0
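  Before relying on the fstab entry, it's worth confirming it mounts cleanly on each node:

  # Mount everything listed in fstab, then confirm the OCFS2 volume is attached
  mount -a
  mount | grep ocfs2    # expect: /dev/sdb1 on /home/oracle/ocfs type ocfs2 (...)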
  
  Create the Oracle Clusterware directory on the OCFS2 file system, where the OCR and the voting disk will reside:
  [root@NOD1 oracle]# mkdir /home/oracle/ocfs/clusterware
  [root@NOD1 oracle]# chown -R oracle:dba /home/oracle/ocfs/
  [root@NOD1 oracle]#
  
  At this point the OCFS2 setup on both nodes is complete.
  
  Install VMware Tools, which is used to synchronize the virtual machines' clocks with the host.
  Shut down NOD1 and NOD2 and attach the linux.iso image from F:\Program Files\VMware\VMware Server\linux.iso.
  Boot the VMs and install VMware Tools.

Run vmware-toolbox to bring up the VMware Tools Properties window. /* Wilson's note: vmware-toolbox is an executable; just type vmware-toolbox as root in a terminal to run it. */ On the Options tab, select Time synchronization between the virtual machine and the host operating system. You should find that the parameter tools.syncTime = "TRUE" has been appended to the virtual machine configuration file d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx.

  

Edit /boot/grub/grub.conf and append the options "clock=pit nosmp noapic nolapic" to each line that loads a kernel (kernel /boot/...). The listing below adds the options to both kernel entries; strictly, only the entry you actually boot needs the change.

#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Enterprise (2.6.9-42.0.0.0.1.ELsmp)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.ELsmp ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-42.0.0.0.1.ELsmp.img
title Enterprise-up (2.6.9-42.0.0.0.1.EL)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-42.0.0.0.1.EL.img

Reboot the node:
  # reboot
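  After the reboot, you can confirm the options took effect:

  # The running kernel's command line should now include the added clock options
  cat /proc/cmdline    # expect ... clock=pit nosmp noapic nolapic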
  
  
  
  
  Use SSH Secure File Transfer to upload the clusterware and 10g database installers to NOD1.
  Unzip them under the /home/ directory.
  su to the oracle user.
  cd into the clusterware directory and launch the installer:
  [oracle@NOD1 clusterware]$ ./runInstaller
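  Incidentally, the clusterware staging area ships a pre-install verifier that is worth running before OUI; a sketch, assuming the unzipped directory is /home/clusterware:

  # Optional pre-flight check bundled with the clusterware media (run as oracle)
  cd /home/clusterware
  ./runcluvfy.sh stage -pre crsinst -n nod1,nod2 -verbose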
  Choose the OraCrs10g_home installation directory.
  Ignore the memory requirement warning.
  Add the NOD2 node:
  NOD2
  NOD2-PRIV
  NOD2-VIP
  Specify the network interface usage; the defaults are generally fine.
  Specify the Oracle Cluster Registry (OCR) location: choose External Redundancy. For simplicity the OCR is not mirrored here; in a production environment you would consider multiplexing the OCR for higher redundancy.
  OCR location: /home/oracle/ocfs/clusterware/ocr
  Specify the voting disk location: choose External Redundancy. Again for simplicity, the voting disk is not mirrored.
  Voting disk location: /home/oracle/ocfs/clusterware/votingdisk
  Install.
  
  Run the scripts as root, on NOD1 and then NOD2:
  [root@NOD1 oracle]# /home/ora10g/oraInventory/orainstRoot.sh
  [root@NOD2 ~]# /home/ora10g/oraInventory/orainstRoot.sh
  [root@NOD1 oracle]# /home/ora10g/product/10.2.0/crs_1/root.sh
  [root@NOD2 ~]# /home/ora10g/product/10.2.0/crs_1/root.sh
  
  Running root.sh on NOD2 reported errors:
  [root@NOD2 home]# /home/ora10g/product/10.2.0/crs_1/root.sh
  WARNING: directory '/home/ora10g/product/10.2.0' is not owned by root
  WARNING: directory '/home/ora10g/product' is not owned by root
  WARNING: directory '/home/ora10g' is not owned by root
  Checking to see if Oracle CRS stack is already configured
  /etc/oracle does not exist. Creating it now.
  
  Setting the permissions on OCR backup directory
  Setting up NS directories
  Oracle Cluster Registry configuration upgraded successfully
  WARNING: directory '/home/ora10g/product/10.2.0' is not owned by root
  WARNING: directory '/home/ora10g/product' is not owned by root
  WARNING: directory '/home/ora10g' is not owned by root
  clscfg: EXISTING configuration version 3 detected.
  clscfg: version 3 is 10G Release 2.
  assigning default hostname nod1 for node 1.
  assigning default hostname nod2 for node 2.
  Successfully accumulated necessary OCR keys.
  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
  node :   
  node 1: nod1 nod1-priv nod1
  node 2: nod2 nod2-priv nod2
  clscfg: Arguments check out successfully.
  
  NO KEYS WERE WRITTEN. Supply -force parameter to override.
  -force is destructive and will destroy any previous cluster
  configuration.
  Oracle Cluster Registry for cluster has already been initialized
  Startup will be queued to init within 90 seconds.
  Adding daemons to inittab
  Expecting the CRS daemons to be up within 600 seconds.
  CSS is active on these nodes.
          nod1
          nod2
  CSS is active on all nodes.
  Waiting for the Oracle CRSD and EVMD to start
  Oracle CRS stack installed and running under init(1M)
  Running vipca(silent) for configuring nodeapps
  The node name "NOD1" specified in the input parameters is invalid.
  The node name "NOD1" specified in the input parameters is invalid.
  
  Back in the OUI window, click OK; the configuration assistant check then fails. Click OK to exit.
  Open another Xmanager window and, as root, run
  /home/ora10g/product/10.2.0/crs_1/bin/vipca. A GUI window pops up; click Next,
  select eth0, click Next, and enter the IP alias nod1-vip (the remaining fields fill in automatically). Click Next to install the VIP,
  GSD, and ONS resources.
  Confirm and exit.
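  Once vipca finishes, the node applications can be checked from either node; a sketch using the CRS home from this install:

  # Verify the VIP, GSD, and ONS node applications are running on both nodes
  /home/ora10g/product/10.2.0/crs_1/bin/srvctl status nodeapps -n nod1
  /home/ora10g/product/10.2.0/crs_1/bin/srvctl status nodeapps -n nod2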
  
  Verify the cluster services setup across NOD1 and NOD2:
  [root@NOD1 bin]# su - oracle
  [oracle@NOD1 ~]$ cd /home/ora10g/product/10.2.0/crs_1/bin/
  [oracle@NOD1 bin]$ ./cluvfy stage -post crsinst -n nod1,nod2
  
  Performing post-checks for cluster services setup
  
  Checking node reachability...
  Node reachability check passed from node "NOD1".
  
  Checking user equivalence...
  User equivalence check passed for user "oracle".
  
  Checking Cluster manager integrity...
  
  Checking CSS daemon...
  Daemon status check passed for "CSS daemon".
  
  Cluster manager integrity check passed.
  
  Checking cluster integrity...
  
  Cluster integrity check passed
  
  Checking OCR integrity...
  
  Checking the absence of a non-clustered configuration...
  All nodes free of non-clustered, local-only configurations.
  
  Uniqueness check for OCR device passed.
  
  Checking the version of OCR...
  OCR of correct Version "2" exists.
  
  Checking data integrity of OCR...
  Data integrity check for OCR passed.
  
  OCR integrity check passed.
  
  Checking CRS integrity...
  
  Checking daemon liveness...
  Liveness check passed for "CRS daemon".
  
  Checking daemon liveness...
  Liveness check passed for "CSS daemon".
  
  Checking daemon liveness...
  Liveness check passed for "EVM daemon".
  
  Checking CRS health...
  CRS health check passed.
  
  CRS integrity check passed.
  
  Checking node application existence...
  
  Checking existence of VIP node application (required)
  Check passed.
  
  Checking existence of ONS node application (optional)
  Check passed.
  
  Checking existence of GSD node application (optional)
  Check passed.
  
  Post-check for cluster services setup was successful.
  [oracle@NOD1 bin]$
  All checks passed.
  I still don't know what caused that invalid node name error. Searching online, most people instead hit "eth0 is not public", which is caused by the public interface sitting on a non-routable IP range. If anyone has run into the node-name variant, please share your fix.
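  For the related "eth0 is not public" case, the commonly cited fix is to register the interface as public explicitly with oifcfg; a sketch, with the subnet taken from the hosts file earlier in this post:

  # Mark eth0 as the cluster's public interface, then list the registered interfaces
  /home/ora10g/product/10.2.0/crs_1/bin/oifcfg setif -global eth0/128.1.100.0:public
  /home/ora10g/product/10.2.0/crs_1/bin/oifcfg getif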
  
  
  
  
