
[Experience Sharing] Oracle 11gR2 RAC Install – cleaning up a failed install on Linux

Posted 2016-1-20 09:05:18
This article describes how to clean up a failed Grid Infrastructure installation. It specifically focuses on what to do if the "root.sh" script fails during this process and you want to rewind and start again.

    Grid Infrastructure

    ASM Disks

Grid Infrastructure

On all cluster nodes except the last, run the following command as the "root" user.

# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
[iyunv@tbird1 install]# ./rootcrs.pl -deconfig -verbose -force
2012-10-28 17:04:38: Parsing the host name
2012-10-28 17:04:38: Checking for super user privileges
2012-10-28 17:04:38: User has super user privileges
Using configuration parameter file: ./crsconfig_params
VIP exists.:tbird1

<output removed to aid clarity>

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tbird1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

On the last cluster node, run the following command as the "root" user.

# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

This final command will blank the OCR configuration and voting disk.

You should be in a position to rerun the "root.sh" file now, but if you are using ASM, you will need to prepare your ASM disks before doing so.


If that doesn't work, we need to resort to more aggressive methods.

The following will wipe out the Oracle Grid install completely, allowing you to start over with the install media.

First, make sure any CRS software is shut down. If it is not shut down then use the crsctl command to stop all the clusterware software:

[iyunv@tbird2 oraInventory]# . oraenv
ORACLE_SID = [root] ? +ASM1     

The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

[iyunv@tbird2 oraInventory]# crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'tbird2'
CRS-2673: Attempting to stop 'ora.crsd' on 'tbird2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'tbird2'
CRS-2673: Attempting to stop 'ora.tbird2.vip' on 'tbird2'

<output removed to aid clarity>

CRS-2677: Stop of 'ora.gipcd' on 'tbird2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'tbird2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tbird2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Make sure that nothing is running as Oracle:

[iyunv@tbird2 oraInventory]# ps -ef | grep oracle
root     19214  4529  0 16:51 pts/1    00:00:00 grep oracle
Now we can remove the Oracle install as follows.

Disable the OHASD daemon from starting on reboot – do this on all nodes:
[iyunv@tbird2 etc]# cat /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

Remove the last line that spawns the ohasd daemon, and save the file.
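Editing /etc/inittab by hand works, but across several nodes the same change can be scripted with sed. A minimal sketch, demonstrated against a scratch copy so it is safe to try anywhere; on a real node you would point `f` at /etc/inittab and run as root:

```shell
# Sketch only: delete any inittab line that launches init.ohasd.
# Demonstrated on a scratch copy; substitute f=/etc/inittab on a real node.
f=/tmp/inittab.demo
printf '%s\n' \
  'x:5:respawn:/etc/X11/prefdm -nodaemon' \
  'h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null' > "$f"
cp "$f" "$f.bak"                 # keep a backup before editing
sed -i '/init\.ohasd/d' "$f"     # drop the ohasd respawn entry
grep ohasd "$f" || echo "ohasd entry removed"
```

Keeping the `.bak` copy means the entry can be restored if the clusterware is ever reinstated on this node.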
Now locate the Oracle Inventory and the location of the current Oracle installs. I am assuming in this case you want to remove everything.
The Oracle inventory location is stored in the oraInst.loc file:
[iyunv@tbird2 etc]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

Navigate to the Oracle Inventory, listed here at /u01/app/oraInventory and inspect the contents of the ContentsXML/inventory.xml file – do this on all nodes:
[iyunv@tbird2 oraInventory]# cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="tbird1"/>
      <NODE NAME="tbird2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
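On a node with more than one registered home, the locations can be pulled out of inventory.xml mechanically rather than read by eye. A crude grep/sed sketch, demonstrated on an inline sample of the file; on a real node you would set `inv` to the actual inventory.xml path instead:

```shell
# Sketch: list every registered Oracle home location from inventory.xml.
# Demonstrated on an inline one-line sample; on a real node set
# inv=/u01/app/oraInventory/ContentsXML/inventory.xml instead.
inv=/tmp/inventory.demo.xml
cat > "$inv" <<'EOF'
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
EOF
grep -o 'LOC="[^"]*"' "$inv" | sed 's/^LOC="//;s/"$//'
# prints: /u01/app/11.2.0/grid
```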

We can see we have a Grid install at /u01/app/11.2.0/grid. We can remove this as follows:
[iyunv@tbird2 oraInventory]# rm -R /u01/app/11.2.0

Now we can remove the inventory directory – do this on all nodes:
[iyunv@tbird2 oraInventory]# rm -R /u01/app/oraInventory

Now we can remove the Oracle directory and files under /etc – do this on all nodes.
[iyunv@tbird2 ~]# rm -R /etc/oracle
[iyunv@tbird2 ~]# rm /etc/oraInst.loc
[iyunv@tbird2 ~]# rm /etc/oratab

Now we delete the files added to /usr/local/bin – do this on all nodes.
[iyunv@tbird2 ~]# rm /usr/local/bin/dbhome
[iyunv@tbird2 ~]# rm /usr/local/bin/oraenv
[iyunv@tbird2 ~]# rm /usr/local/bin/coraenv

Reset the permissions on /u01/app – do this on all nodes.
[iyunv@tbird2 ~]# chown oracle:dba /u01/app

Now we need to clear the ASM devices we created – do this on both nodes.
[iyunv@tbird2 ~]# oracleasm deletedisk DATA
Clearing disk header: done
Dropping disk: done

Finally re-stamp the devices for ASM.
[iyunv@tbird1 ~]# oracleasm createdisk DATA /dev/sdc1
Writing disk header: done
Instantiating disk: done

And scan it on the secondary nodes:
[iyunv@tbird2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA"

ASM Disks

Once you attempt an installation, your ASM disks are marked as being used, so they can no longer be used as candidate disks. To revert them to candidate disks, do the following.

Overwrite the header for the relevant partitions using the "dd" command.

# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100
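To sanity-check that the wipe actually landed, you can count non-zero bytes in the first 100 KB. A sketch using a scratch file so it is safe to run anywhere; on a real node you would set `dev` to the partition you wiped (e.g. /dev/sdb1):

```shell
# Sketch: verify the first 100 KB of a device are zeroed after dd.
# Uses a scratch file standing in for the partition; on a real node
# set dev to the device you wiped (e.g. /dev/sdb1).
dev=/tmp/fakedisk.img
dd if=/dev/urandom of="$dev" bs=1024 count=200 2>/dev/null            # simulate a used disk
dd if=/dev/zero of="$dev" bs=1024 count=100 conv=notrunc 2>/dev/null  # the wipe itself
nonzero=$(head -c 102400 "$dev" | tr -d '\0' | wc -c)
echo "non-zero bytes in header region: $nonzero"                      # 0 means fully zeroed
```

Note `conv=notrunc` in the verification sketch: without it, dd would truncate a regular file; on a real block device it makes no difference.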

Remove and create the ASM disk for each partition.

# /etc/init.d/oracleasm deletedisk DATA /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA /dev/sdb1

The disks will now be available as candidate disks.


Now that Oracle is completely removed, you can start your Grid install again.



