[Experience Sharing] OCM_Session7_10 — Installing Clusterware

Posted 2015-11-9 12:30:30
Part 10: Installing Clusterware
Start the installation. I am using Xmanager 4; my local machine's IP is 192.168.1.103.
[iyunv@rac1 ~]# xhost+
-bash: xhost+: command not found
[iyunv@rac1 ~]# export DISPLAY=192.168.1.103:0.0
[iyunv@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /stage/clustware/Disk1/clusterware/
[oracle@rac1 clusterware]$ ll
total 36
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 cluvfy
drwxr-xr-x 6 oracle oinstall 4096 Jul  3  2005 doc
drwxr-xr-x 4 oracle oinstall 4096 Jul  3  2005 install
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 response
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 rpm
-rwxr-xr-x 1 oracle oinstall 1328 Jul  3  2005 runInstaller
drwxr-xr-x 9 oracle oinstall 4096 Jul  3  2005 stage
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 upgrade
-rw-r--r-- 1 oracle oinstall 3445 Jul  3  2005 welcome.html
[oracle@rac1 clusterware]$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2                                      Passed

All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-22_03-39-48PM. Please wait ...
[oracle@rac1 clusterware]$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
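The failed command in the session above is just a typo: `xhost+` needs a space (`xhost +`, run on the machine that owns the display; Xmanager normally handles access control for you). As a small sanity check before launching runInstaller, you can verify that DISPLAY at least has the expected host:display form. This is a minimal sketch, not part of the original session:

```shell
# Minimal sketch: check that DISPLAY looks like host:display before
# starting the graphical installer. The IP matches the session above.
DISPLAY=192.168.1.103:0.0
case "$DISPLAY" in
  *:*) echo "DISPLAY format ok: $DISPLAY" ;;
  *)   echo "DISPLAY is missing the :display suffix" ;;
esac
```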
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
If the following error appears, it is because the libXp-1.0.0-8.1.el5.i386.rpm package is missing. In my case I had already installed it earlier while checking for missing packages.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Exception java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-03-22_08-31-31AM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory occurred..
java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-03-22_08-31-31AM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at sun.security.action.LoadLibraryAction.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
        at sun.awt.DebugHelper.<clinit>(Unknown Source)
        at java.awt.Component.<clinit>(Unknown Source)
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:222)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:193)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.getInterfaceManager(OiicSessionInterfaceManager.java:202)
        at oracle.sysman.oii.oiic.OiicInstaller.getInterfaceManager(OiicInstaller.java:436)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:926)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:866)
Exception in thread "main" java.lang.NoClassDefFoundError
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:222)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:193)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.getInterfaceManager(OiicSessionInterfaceManager.java:202)
        at oracle.sysman.oii.oiif.oiifm.OiifmAlert.<clinit>(OiifmAlert.java:151)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:984)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:866)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The package is missing — install it:
[oracle@rac1 clusterware]$ su -
Password: 
[iyunv@rac1 ~]# mount /dev/cdrom /mnt/
mount: you must specify the filesystem type
[iyunv@rac1 ~]# mount /dev/cdrom /mnt/
mount: block device /dev/cdrom is write-protected, mounting read-only
[iyunv@rac1 ~]# cd /mnt/Server/
[iyunv@rac1 Server]# rpm -ivh libXp-1.0.0-8.1.el5.i386.rpm 
warning: libXp-1.0.0-8.1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libXp                  ########################################### [100%]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
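When the installer dies with an UnsatisfiedLinkError like the one above, the name of the missing shared library is buried in the middle of the message. A small sketch (not part of the original session) that pulls it out, so you know which library to look up in your RPM repository:

```shell
# Extract the missing .so name from an UnsatisfiedLinkError line.
# The sample text mirrors the error shown above (libXp.so.6 -> libXp rpm).
err='java.lang.UnsatisfiedLinkError: /tmp/OraInstall/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory'
missing=$(printf '%s\n' "$err" |
  sed -n 's/.*: \([^:]*\.so[^:]*\): cannot open shared object file.*/\1/p')
echo "missing library: $missing"
```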

1. After a few seconds the Clusterware installation window appears. The Oracle Clusterware 10.2.0.1.0 install begins at the welcome screen; click "Next".
DSC0000.jpg

2. Install the oraInventory directory to /u01/app/oracle/oraInventory and click "Next". Specify the operating system group: the default oinstall group.
DSC0001.jpg
3. Define the Clusterware installation path and click "Next".
Name: OraCrs10g_home
Path: /u01/app/oracle/product/10.2.0/crs_1
DSC0002.jpg
4. Prerequisite checks: 0 requirements to be verified. This is the pre-installation environment check, as shown.
DSC0003.jpg

5. Specify the cluster nodes rac1 and rac2 (Specify Cluster Configuration). By default only one node is displayed; the other must be added manually. Click "Add…" ->
Public Node Name: rac2.localdomain
Private Node Name: rac2-priv.localdomain
Virtual Host Name: rac2-vip.localdomain

DSC0004.jpg
DSC0005.jpg
6. Specify the usage of each network interface. The installer scans in every NIC; click "Edit" and set eth0 as the public interface and eth1 as the private interface.
DSC0006.jpg
7. Specify the raw-device path for the OCR (Oracle Cluster Registry) and click "Next". The OCR records every resource available to Oracle RAC — instances, databases, listeners, nodes, ASM disk groups, services, and so on; a resource can only be managed by RAC after it is registered in the OCR. For example, adding a new node to the RAC architecture requires registering that node in the OCR. Because the OCR file is critical, it needs a redundancy scheme. Choose "External Redundancy" (the disk/storage subsystem provides the OCR redundancy).
Specify OCR Location: /dev/raw/raw1
DSC0007.jpg
8. Specify the raw-device path for the voting disk and click "Next". The voting disk is the arbitration mechanism that prevents split-brain: you normally configure an odd number of disks, and when a node loses communication or can no longer fulfil its RAC role, the voting disk decides whether to evict it. Choose "External Redundancy".
Specify Voting Disk Location: /dev/raw/raw2
DSC0008.jpg
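A note on the "odd number of disks" rule above: a node only stays in the cluster while it can see a strict majority of the voting disks, which is why redundant configurations use an odd count. The arithmetic, as a quick sketch:

```shell
# Majority rule for voting disks: a node must see more than half of them.
for n in 1 3 5; do
  echo "$n voting disk(s): node must see at least $(( n / 2 + 1 ))"
done
```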
9. With the settings above complete, the actual installation begins; click "Install". After the primary node finishes, the installer automatically pushes all the Clusterware files to the corresponding directory on rac2.
DSC0009.jpg
DSC00010.jpg
----------------------------------------------------------------------------------------------------------------------------------------
If the following error appears, it is caused by missing packages; after installing them, click "Retry".
DSC00011.jpg
[iyunv@rac1 Server]# rpm -ivh kernel-headers-2.6.18-274.el5.i386.rpm 
warning: kernel-headers-2.6.18-274.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:kernel-headers         ########################################### [100%]
[iyunv@rac1 Server]# rpm -ivh glibc-headers-2.5-65.i386.rpm 
warning: glibc-headers-2.5-65.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:glibc-headers          ########################################### [100%]
[iyunv@rac1 Server]# rpm -ivh glibc-devel-2.5-65.i386.rpm
warning: glibc-devel-2.5-65.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:glibc-devel            ########################################### [100%]
[iyunv@rac1 Server]# rpm -ivh gcc-4.1.2-51.el5.i386.rpm
warning: gcc-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:gcc                    ########################################### [100%]
[iyunv@rac1 Server]# rpm -ivh libstdc++-devel-4.1.2-51.el5.i386.rpm 
warning: libstdc++-devel-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libstdc++-devel        ########################################### [100%]
[iyunv@rac1 Server]# rpm -ivh gcc-c++-4.1.2-51.el5.i386.rpm 
warning: gcc-c++-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:gcc-c++                ########################################### [100%]
[iyunv@rac1 Server]#
---------------------------------------------------------------------------------------------------------------------------

10. When the installation completes, two scripts must be run as root on both nodes:
First, on every node: /u01/app/oracle/oraInventory/orainstRoot.sh
Then, on every node: /u01/app/oracle/product/10.2.0/crs_1/root.sh
DSC00012.jpg

Run them in order with su - root. First on node rac1:
[root@rac1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[iyunv@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[iyunv@rac1 ~]# 
----------------------------------------------------------------------------------------------------------------------
Then on node rac2:
[iyunv@rac2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
------------------------------------------------------------------------------------------------------
Before running /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2, two files must be modified; otherwise the following error occurs:
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
----------------------------------------------------------------------------------------------------------------


11. Before running root.sh on rac2, edit two files on both nodes as root.
First file:
[iyunv@rac2 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
Search for /LD_ASSUME_KERNEL:

       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
       unset LD_ASSUME_KERNEL    # <-- add this line to clear the variable
       #End workaround
-------------------------------------------------------------------------------------------------------
Second file:
[iyunv@rac2 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <-- add this line to clear the variable
# Run ops control utility
--------------------------------------------------------------------------------------               
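Since the same one-line edit has to be made in both files on both nodes, it is easy to script instead of using vi. A hedged sketch with sed — the demo below runs on a throwaway copy shaped like the srvctl snippet; on the real nodes the targets would be the vipca and srvctl files under /u01/app/oracle/product/10.2.0/crs_1/bin, and in vipca the export line is indented, so the pattern would need to allow leading whitespace:

```shell
# Demo on a scratch file: append "unset LD_ASSUME_KERNEL" right after
# the "export LD_ASSUME_KERNEL" line, as the workaround above requires.
f=$(mktemp)
cat > "$f" <<'EOF'
#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
# Run ops control utility
EOF
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
cat "$f"
rm -f "$f"
```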
12. Then run the script as root on node rac2:
[iyunv@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
[iyunv@rac2 ~]#
--------------------------------------------------------------------
An error is reported here. Some documents fix it as described in option 1 below, which I have not verified myself. What I actually did was modify /u01/app/oracle/product/10.2.0/crs_1/bin/vipca and /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl on node rac1 as well, and then rerun /u01/app/oracle/product/10.2.0/crs_1/root.sh on node rac2, as follows:
1. The fix described in some documents (not verified by me):
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps      <- running vipca to configure nodeapps
Error 0(Native: listNetInterfaces:[3])              <- local network interface error
[Error 0(Native: listNetInterfaces:[3])]            <- local network interface error
cd /u01/app/oracle/product/10.2.0/crs_1/bin
./oifcfg            # Oracle's interface configuration tool; use it to check whether the NIC configuration is correct
oifcfg iflist       # list the interface configuration
oifcfg setif -global eth0/192.168.1.0:public                 # set the global public interface
oifcfg setif -global eth1/172.168.1.0:cluster_interconnect   # set the global private interface
oifcfg getif        # show the result; once rac2 is configured, rac1 picks up the configuration automatically — verify with oifcfg getif
-----------------------------------------------------------------------------------------
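One detail worth noting in the oifcfg commands above: `setif` takes the subnet (192.168.1.0), not a host address. If you only know an interface's IP and netmask, the subnet is the bitwise AND of the two. A small sketch of that arithmetic (the sample host address is an assumption for illustration):

```shell
# Derive the subnet that oifcfg setif expects from a host IP and netmask.
ip=192.168.1.151
mask=255.255.255.0
IFS=. read -r a b c d <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
echo "subnet: $((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
```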

2. What I did: after modifying the two files on the first node rac1, run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2 again.
-- -------------------------------------------------------------------------------------------
On node rac1:
First file:
[iyunv@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
Search for /LD_ASSUME_KERNEL:

       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
       unset LD_ASSUME_KERNEL    # <-- add this line to clear the variable
       #End workaround

Second file:
[iyunv@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <-- add this line to clear the variable
# Run ops control utility
--------------------------------------------------------------------------------         
3. Run the script on rac2 again; this time there is no error:
[iyunv@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[iyunv@rac2 ~]#

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
13. After both nodes have run the scripts, click "OK"; the following configuration information appears.
DSC00013.jpg
14. The Oracle Cluster Verification Utility step fails with an error; at this point the virtual IPs must be configured, which can be done on any node.
DSC00014.jpg
DSC00015.jpg
15. As root, configure the virtual IPs. This step can be performed on either rac1 or rac2. Run /u01/app/oracle/product/10.2.0/crs_1/bin/vipca and a graphical interface opens automatically. VIPCA creates and configures the VIP, GSD, and ONS resources.
DSC00016.jpg


16. On the welcome screen, click "Next".
DSC00017.jpg
17. The system automatically finds the public interface eth0; click "Next". (The virtual IPs are bound to the public interface eth0.)
DSC00018.jpg
18. Fill in each node's VIP alias name, IP address, and subnet mask, then click "Next".
Node name   IP Alias Name            IP address      Subnet Mask
rac1        rac1-vip.localdomain     192.168.1.152   255.255.255.0
rac2        rac2-vip.localdomain     192.168.1.154   255.255.255.0
DSC00019.jpg

19. Review the summary and click "Finish" to start the configuration. Double-check the auto-filled IP addresses; they absolutely must be correct.
DSC00020.jpg
20. When it completes, click "OK" to view the result, then "Exit" to leave VIPCA.
DSC00021.jpg
DSC00022.jpg
21. Check the VIP services; gsd, ons, and vip make up three processes.
crs_stat -t — everything must be ONLINE to be considered healthy, and the output should be identical on all nodes.
Target is the desired final state; State is the current state.
[iyunv@rac1 bin]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2      

22. Once VIPCA has completed successfully, rac2's root.sh has effectively completed as well. The next step is to return to node rac1 and run the remaining steps: click "OK", then "Retry".
DSC00023.jpg


23. The three configuration assistants start and pass their prerequisite checks, all with status Succeeded:
Oracle Notification Server Configuration Assistant     (notification service)
Oracle Private Interconnect Configuration Assistant    (private interconnect service)
Oracle Cluster Verification Utility                    (cluster verification tool)
DSC00024.jpg
24. When the configuration finishes you will see "The installation of Oracle Clusterware was successful"; click "Exit", then "Yes" to close the Clusterware installer.
DSC00025.jpg
DSC00026.jpg
25. Verify the installation. At this point Clusterware is installed successfully.
Run crs_stat -t on both rac1 and rac2 to check the cluster software status; everything must be ONLINE to be considered healthy.
[iyunv@rac1 bin]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2    
----------------------------------------------------------------------
[iyunv@rac2 bin]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2
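Eyeballing the crs_stat table works for six resources, but the same health check is easy to script. A hedged sketch that reports failure if any resource's Target/State pair is not ONLINE/ONLINE — the sample input mirrors the output above; on a live node you would pipe `crs_stat -t` in instead, skipping the two header lines:

```shell
# Count resources whose Target (column 3) or State (column 4) is not ONLINE.
crs_out='ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2'
bad=$(printf '%s\n' "$crs_out" | awk '$3 != "ONLINE" || $4 != "ONLINE"' | wc -l)
if [ "$bad" -eq 0 ]; then
  echo "cluster healthy: all resources ONLINE"
else
  echo "$bad resource(s) not ONLINE"
fi
```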
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
Copyright notice: This is the blogger's original article; reproduction without the blogger's permission is prohibited.
