
The following test is based on the MOS note:

Minimal downtime patching via cloning 11gR2 ORACLE_HOME directories (Doc ID 1136544.1)

This approach copies the ORACLE_HOME/GRID_HOME to a new directory, patches the new directory, and then switches the database or ASM over to the new home. It is very effective for single-instance environments: applying a PSU with opatch to a single-node GRID home takes roughly 30 minutes, while with this method the switchover completes in about 5 minutes. For RAC it brings little benefit, because RAC can already be patched with a rolling upgrade and without stopping production. Note that during this test GRID ran on two different ORACLE_HOME directories at the same time, with the instance on one node still running normally, but I could not find any MOS document stating that this mode of operation is supported.

Environment

www.htz.pw > select * from V$version;

 

BANNER

--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

PL/SQL Release 11.2.0.3.0 - Production

CORE    11.2.0.3.0      Production

TNS for Linux: Version 11.2.0.3.0 - Production

NLSRTL Version 11.2.0.3.0 - Production

 

www.htz.pw > !lsb_release -a

LSB Version:    :core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch

Distributor ID: RedHatEnterpriseServer

Description:    Red Hat Enterprise Linux Server release 5.8 (Tikanga)

Release:        5.8

Codename:       Tikanga

1. Preparation

For the preparation steps, see the preparation section of the RAC 11.2.0.3 rolling upgrade to 11.2.0.3.10 article; they also cover installing the OPatch tool.
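
As part of that preparation, a quick sanity check is to verify the OPatch version in the existing home and generate the OCM response file that opatch auto expects later in this article (a sketch; the /tmp/ocm.rsp path simply matches the commands used below):

[grid@11rac1 ~]$ /u01/app/11.2.0/grid/OPatch/opatch version

[grid@11rac1 ~]$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp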

2. Patch the GRID home

 

2.1 Create the new GRID home and copy the files

Before copying the files, it is advisable to delete unneeded log files manually first; they commonly exceed 10 GB and waste both space and time.
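
For example, most of the reclaimable space usually sits under the GRID home's log tree. A sketch of checking and trimming it (the path and the 7-day retention are illustrative; review what is actually safe to remove in your environment before deleting anything):

[root@11rac1 ~]# du -sh /u01/app/11.2.0/grid/log

[root@11rac1 ~]# find /u01/app/11.2.0/grid/log -type f \( -name "*.trc" -o -name "*.trm" \) -mtime +7 -delete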

[root@11rac1 ~]# mkdir -p /u01/app/11.2.0/grid_2

[root@11rac1 ~]# chmod 755 /u01/app/11.2.0/grid_2

[root@11rac1 ~]# ls -ld /u01/app/11.2.0/grid_2

drwxr-xr-x 2 root root 4096 Oct 14 07:26 /u01/app/11.2.0/grid_2

[root@11rac1 ~]# chown root:dba /u01/app/11.2.0/grid_2

[root@11rac1 ~]# ls -ld /u01/app/11.2.0/grid_2

drwxr-xr-x 2 root dba 4096 Oct 14 07:26 /u01/app/11.2.0/grid_2

[root@11rac1 grid]# cd /u01/app/11.2.0/grid

[root@11rac1 grid]# tar cfp - . | ( cd /u01/app/11.2.0/grid_2; tar xf - )

2.2 Clone the GRID home

[root@11rac1 grid]#  export ORACLE_HOME=/u01/app/11.2.0/grid_2

[root@11rac1 grid]# /usr/bin/perl $ORACLE_HOME/OPatch/crs/patch112.pl -unlock -desthome=$ORACLE_HOME

Prototype mismatch: sub main::trim: none vs ($) at /u01/app/11.2.0/grid_2/OPatch/crs/patch112.pl line 401.

opatch auto log file location is /u01/app/11.2.0/grid_2/crs/install/../../cfgtoollogs/opatchauto2014-10-14_07-32-13.log

Detected Oracle Clusterware install

Using configuration parameter file: /u01/app/11.2.0/grid_2/crs/install/crsconfig_params

Successfully unlock /u01/app/11.2.0/grid_2

[grid@11rac1 grid]#  export ORACLE_HOME=/u01/app/11.2.0/grid_2

[grid@11rac1 ContentsXML]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid  ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Ora11g_gridinfrahome1_2 INVENTORY_LOCATION=/u01/app/oraInventory -O'"CLUSTER_NODES={11rac1,11rac2}"' -O'"LOCAL_NODE=11rac1"' CRS=false -O"SHOW_ROOTSH_CONFIRMATION=false"

 

./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/u01/app/grid" "ORACLE_HOME=/u01/app/11.2.0/grid_2" "ORACLE_HOME_NAME=Ora11g_gridinfrahome1_2" "INVENTORY_LOCATION=/u01/app/oraInventory" "CLUSTER_NODES={11rac1,11rac2}" "LOCAL_NODE=11rac1" "CRS=false" SHOW_ROOTSH_CONFIRMATION=false -silent -noConfig -nowait

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-10-14_07-41-33AM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production

Copyright (C) 1999, 2011, Oracle. All rights reserved.

 

You can find the log of this install session at:

 /u01/app/oraInventory/logs/cloneActions2014-10-14_07-41-33AM.log

.

Performing tests to see whether nodes 11rac2 are available

............................................................... 100% Done.

 

 

 

Installation in progress (Monday, October 13, 2014 4:41:43 PM PDT)

........................................................................                                                        72% Done.

Install successful

 

Linking in progress (Monday, October 13, 2014 4:41:46 PM PDT)

Link successful

 

Setup in progress (Monday, October 13, 2014 4:42:18 PM PDT)

................                                                100% Done.

Setup successful

 

End of install phases.(Monday, October 13, 2014 4:42:41 PM PDT)

The cloning of Ora11g_gridinfrahome1_2 was successful.

Please check '/u01/app/oraInventory/logs/cloneActions2014-10-14_07-41-33AM.log' for more details.

 

 

Next, verify that the new home has been added to the inventory.xml file:

[grid@11rac1 ContentsXML]$ cat inventory.xml

<?xml version="1.0" standalone="yes" ?>

<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->

<!-- Do not modify the contents of this file by hand. -->

<INVENTORY>

<VERSION_INFO>

   <SAVED_WITH>11.2.0.3.0</SAVED_WITH>

   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>

</VERSION_INFO>

<HOME_LIST>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="Ora11g_gridinfrahome1_2" LOC="/u01/app/11.2.0/grid_2" TYPE="O" IDX="3">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

</HOME_LIST>

<COMPOSITEHOME_LIST>

</COMPOSITEHOME_LIST>

</INVENTORY>

2.3 Patch the new GRID home

Here the patch is applied with opatch auto. Note that the -oh parameter must be added so that opatch targets the new GRID home:

[root@11rac1 grid]# /u01/app/11.2.0/grid_2/OPatch/opatch auto /tmp/patch/ -ocmrf /tmp/ocm.rsp  -oh /u01/app/11.2.0/grid_2/

Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid_2/OPatch/crs/patch11203.pl -patchdir /tmp -patchn patch -ocmrf /tmp/ocm.rsp -oh /u01/app/11.2.0/grid_2/ -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

 

This is the main log file: /u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_08-01-07.log

 

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:

/u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_08-01-07.report.log

 

2014-10-14 08:01:07: Starting Clusterware Patch Setup

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

 

Stopping RAC /u01/app/11.2.0/grid_2 ...

Stopped RAC /u01/app/11.2.0/grid_2 successfully

 

patch /tmp/patch/17592127/custom/server/17592127  apply successful for home  /u01/app/11.2.0/grid_2

patch /tmp/patch/18031683  apply successful for home  /u01/app/11.2.0/grid_2

 

Starting RAC /u01/app/11.2.0/grid_2 ...

Started RAC /u01/app/11.2.0/grid_2 successfully

 

opatch auto succeeded.
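
Before touching the running stack, it is worth confirming that the PSU landed only in the cloned home. A sketch of the check, reusing the lspatches subcommand shown later in this article (the inline ORACLE_HOME assignments merely point each OPatch at its own home):

[grid@11rac1 ~]$ ORACLE_HOME=/u01/app/11.2.0/grid_2 /u01/app/11.2.0/grid_2/OPatch/opatch lspatches

[grid@11rac1 ~]$ ORACLE_HOME=/u01/app/11.2.0/grid /u01/app/11.2.0/grid/OPatch/opatch lspatches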

3. Patch the ORACLE home

 

3.1 Create the ORACLE home and copy the software

I ran this as the root user. The MOS note runs it as the oracle user, but that reports errors on a few files whose owner is root.
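
A quick way to see which files would trip up a non-root copy is to list the root-owned files in the source home first (an illustrative check):

[root@11rac1 ~]# find /u01/app/oracle/product/11.2.0/db_1 -user root -type f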

[root@11rac1 ~]# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_2

[root@11rac1 ~]# cd /u01/app/oracle/product/11.2.0/db_1/

[root@11rac1 db_1]# tar cpf - . | ( cd $ORACLE_HOME ; tar xf -)

3.2 Clone the ORACLE home

[root@11rac1 db_1]# su - oracle

[oracle@11rac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_2

[oracle@11rac1 ~]$ cd $ORACLE_HOME/clone/bin

[oracle@11rac1 bin]$ ./clone.pl ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB_home2 ORACLE_BASE=/u01/app/oracle

 

./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_2" "ORACLE_HOME_NAME=OraDB_home2" "ORACLE_BASE=/u01/app/oracle" -silent -noConfig -nowait

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-10-14_09-25-55AM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production

Copyright (C) 1999, 2011, Oracle. All rights reserved.

 

You can find the log of this install session at:

 /u01/app/oraInventory/logs/cloneActions2014-10-14_09-25-55AM.log

.................................................................................................... 100% Done.

 

 

 

Installation in progress (Monday, October 13, 2014 6:26:07 PM PDT)

................................................................................                                                 79% Done.

Install successful

 

Linking in progress (Monday, October 13, 2014 6:26:12 PM PDT)

Link successful

 

Setup in progress (Monday, October 13, 2014 6:26:52 PM PDT)

Setup successful

 

End of install phases.(Monday, October 13, 2014 6:27:15 PM PDT)

WARNING:

The following configuration scripts need to be executed as the "root" user.

/u01/app/oracle/product/11.2.0/db_2/root.sh

To execute the configuration scripts:

    1. Open a terminal window

    2. Log in as "root"

    3. Run the scripts

   

The cloning of OraDB_home2 was successful.

Please check '/u01/app/oraInventory/logs/cloneActions2014-10-14_09-25-55AM.log' for more details.

[oracle@11rac1 bin]$ exit

logout

[root@11rac1 db_1]# /u01/app/oracle/product/11.2.0/db_2/root.sh

Check /u01/app/oracle/product/11.2.0/db_2/install/root_11rac1_2014-10-14_09-27-39.log for the output of root script

[oracle@11rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={11rac1,11rac2}" -silent  -noClusterEnabled CRS=false local_node=11rac1

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 3995 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

 

 

 

[grid@11rac1 ContentsXML]$ cat inventory.xml

<?xml version="1.0" standalone="yes" ?>

<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->

<!-- Do not modify the contents of this file by hand. -->

<INVENTORY>

<VERSION_INFO>

   <SAVED_WITH>11.2.0.3.0</SAVED_WITH>

   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>

</VERSION_INFO>

<HOME_LIST>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="Ora11g_gridinfrahome1_2" LOC="/u01/app/11.2.0/grid_2" TYPE="O" IDX="3">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="OraDB_home2" LOC="/u01/app/oracle/product/11.2.0/db_2" TYPE="O" IDX="4">

   <NODE_LIST>

      <NODE NAME="11rac1"/>

      <NODE NAME="11rac2"/>

   </NODE_LIST>

</HOME>

</HOME_LIST>

<COMPOSITEHOME_LIST>

</COMPOSITEHOME_LIST>

</INVENTORY>

3.3 Patch the new ORACLE home

[root@11rac1 db_1]# /u01/app/oracle/product/11.2.0/db_2/OPatch/opatch auto /tmp/patch -oh /u01/app/oracle/product/11.2.0/db_2 -ocmrf /tmp/ocm.rsp

Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/oracle/product/11.2.0/db_2/OPatch/crs/patch11203.pl -patchdir /tmp -patchn patch -oh /u01/app/oracle/product/11.2.0/db_2 -ocmrf /tmp/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

 

This is the main log file: /u01/app/oracle/product/11.2.0/db_2/cfgtoollogs/opatchauto2014-10-14_09-33-34.log

 

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:

/u01/app/oracle/product/11.2.0/db_2/cfgtoollogs/opatchauto2014-10-14_09-33-34.report.log

 

2014-10-14 09:33:34: Starting Clusterware Patch Setup

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

 

Stopping RAC /u01/app/oracle/product/11.2.0/db_2 ...

Stopped RAC /u01/app/oracle/product/11.2.0/db_2 successfully

 

patch /tmp/patch/17592127/custom/server/17592127  apply successful for home  /u01/app/oracle/product/11.2.0/db_2

patch /tmp/patch/18031683  apply successful for home  /u01/app/oracle/product/11.2.0/db_2

 

Starting RAC /u01/app/oracle/product/11.2.0/db_2 ...

Started RAC /u01/app/oracle/product/11.2.0/db_2 successfully

 

opatch auto succeeded.

4. Operations on node 2

The steps on the other node are the same as on node 1, so they are not repeated in full here; a condensed sketch follows.
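
For reference, a condensed sketch of the GRID-side sequence on node 2 (same patches and paths as on node 1, with LOCAL_NODE swapped; the database home db_2 is handled the same way as in section 3):

[root@11rac2 ~]# mkdir -p /u01/app/11.2.0/grid_2 && chown root:dba /u01/app/11.2.0/grid_2

[root@11rac2 ~]# cd /u01/app/11.2.0/grid && tar cfp - . | ( cd /u01/app/11.2.0/grid_2; tar xf - )

[root@11rac2 ~]# export ORACLE_HOME=/u01/app/11.2.0/grid_2

[root@11rac2 ~]# /usr/bin/perl $ORACLE_HOME/OPatch/crs/patch112.pl -unlock -desthome=$ORACLE_HOME

[grid@11rac2 ~]$ export ORACLE_HOME=/u01/app/11.2.0/grid_2

[grid@11rac2 ~]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Ora11g_gridinfrahome1_2 INVENTORY_LOCATION=/u01/app/oraInventory -O'"CLUSTER_NODES={11rac1,11rac2}"' -O'"LOCAL_NODE=11rac2"' CRS=false -O"SHOW_ROOTSH_CONFIRMATION=false"

[root@11rac2 ~]# /u01/app/11.2.0/grid_2/OPatch/opatch auto /tmp/patch/ -ocmrf /tmp/ocm.rsp -oh /u01/app/11.2.0/grid_2/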

5. Switch GRID to the new home

First, manually stop the database and the related services:

[oracle@11rac1 ~]$ /u01/app/oracle/product/11.2.0/db_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -n 11rac1  -s /tmp/11rac1.txt

[oracle@11rac2 ~]$ /u01/app/oracle/product/11.2.0/db_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -n 11rac2  -s /tmp/11rac2.txt
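
The file named by -s is a state file in which srvctl records which resources it actually stopped, so that srvctl start home can later restart exactly those. If curious, it can be inspected directly (contents vary per node):

[oracle@11rac1 ~]$ cat /tmp/11rac1.txt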

 

[grid@11rac1 ~]$ su - grid

Password:

[grid@11rac1 ~]$ crsctl stat resource -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.DATA.dg

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.LISTENER.lsnr

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.asm

               ONLINE  ONLINE       11rac1                   Started            

               ONLINE  ONLINE       11rac2                   Started            

ora.gsd

               OFFLINE OFFLINE      11rac1                                      

               OFFLINE OFFLINE      11rac2                                      

ora.net1.network

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.ons

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.11rac1.vip

      1        ONLINE  ONLINE       11rac1                                      

ora.11rac2.vip

      1        ONLINE  ONLINE       11rac2                                      

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       11rac1                                      

ora.cvu

      1        ONLINE  ONLINE       11rac2                                      

ora.oc4j

      1        ONLINE  ONLINE       11rac1                                      

ora.power.db

      1        OFFLINE OFFLINE                               Instance Shutdown  

      2        OFFLINE OFFLINE                               Instance Shutdown  

ora.power.power1.svc

      1        OFFLINE OFFLINE                                                  

ora.power.power2.svc

      1        OFFLINE OFFLINE                                                  

ora.power.powera.svc

      1        OFFLINE OFFLINE                                                  

ora.power.powerb.svc

      1        OFFLINE OFFLINE                                                  

ora.scan1.vip

      1        ONLINE  ONLINE       11rac1   

As the output shows, GRID itself is still running normally.

The patch11203.pl script automatically stops GRID, updates the switch-over configuration files (for example /etc/oracle/olr.loc, as the log below shows), and restarts the stack from the new home:

[root@11rac1 utl]#  /usr/bin/perl /u01/app/11.2.0/grid_2/OPatch/crs/patch11203.pl -patch -desthome=/u01/app/11.2.0/grid_2

 

This is the main log file: /u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-10-19.log

 

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:

/u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-10-19.report.log

 

2014-10-14 19:10:19: Starting Clusterware Patch Setup

Using configuration parameter file: /u01/app/11.2.0/grid_2/crs/install/crsconfig_params

CRS-4544: Unable to connect to OHAS

CRS-4000: Command Stop failed, or completed with errors.

CRS-4123: Oracle High Availability Services has been started.

 

 

[root@11rac1 utl]# ps -ef|grep css

root     11401     1  0 19:10 ?        00:00:00 /u01/app/11.2.0/grid_2/bin/cssdmonitor

root     11421     1  0 19:10 ?        00:00:00 /u01/app/11.2.0/grid_2/bin/cssdagent

grid     11435     1  1 19:10 ?        00:00:01 /u01/app/11.2.0/grid_2/bin/ocssd.bin

Node 1 is now running from the new home while node 2 is still on the old one, yet GRID keeps running normally. This shows that the cluster can tolerate the two nodes using different ORACLE_HOME directories.

Below is the detailed output of running the patch11203.pl script on node 2:

[root@11rac2 grid]# /usr/bin/perl /u01/app/11.2.0/grid_2/OPatch/crs/patch11203.pl -patch -desthome=/u01/app/11.2.0/grid_2

 

This is the main log file: /u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-16-26.log

 

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:

/u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-16-26.report.log

 

2014-10-14 19:16:26: Starting Clusterware Patch Setup

Using configuration parameter file: /u01/app/11.2.0/grid_2/crs/install/crsconfig_params

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac2'

CRS-2673: Attempting to stop 'ora.crsd' on '11rac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '11rac2'

CRS-2673: Attempting to stop 'ora.oc4j' on '11rac2'

CRS-2673: Attempting to stop 'ora.cvu' on '11rac2'

CRS-2673: Attempting to stop 'ora.CRS.dg' on '11rac2'

CRS-2673: Attempting to stop 'ora.DATA.dg' on '11rac2'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '11rac2'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '11rac2'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on '11rac2'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.11rac2.vip' on '11rac2'

CRS-2677: Stop of 'ora.scan1.vip' on '11rac2' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on '11rac1'

CRS-2677: Stop of 'ora.11rac2.vip' on '11rac2' succeeded

CRS-2672: Attempting to start 'ora.11rac2.vip' on '11rac1'

CRS-2677: Stop of 'ora.DATA.dg' on '11rac2' succeeded

CRS-2676: Start of 'ora.scan1.vip' on '11rac1' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '11rac1'

CRS-2676: Start of 'ora.11rac2.vip' on '11rac1' succeeded

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '11rac1' succeeded

CRS-2677: Stop of 'ora.oc4j' on '11rac2' succeeded

CRS-2672: Attempting to start 'ora.oc4j' on '11rac1'

CRS-2677: Stop of 'ora.cvu' on '11rac2' succeeded

CRS-2672: Attempting to start 'ora.cvu' on '11rac1'

CRS-2676: Start of 'ora.cvu' on '11rac1' succeeded

CRS-2676: Start of 'ora.oc4j' on '11rac1' succeeded

CRS-2677: Stop of 'ora.CRS.dg' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on '11rac2'

CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.ons' on '11rac2'

CRS-2677: Stop of 'ora.ons' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on '11rac2'

CRS-2677: Stop of 'ora.net1.network' on '11rac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on '11rac2' has completed

CRS-2677: Stop of 'ora.crsd' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.crf' on '11rac2'

CRS-2673: Attempting to stop 'ora.ctssd' on '11rac2'

CRS-2673: Attempting to stop 'ora.evmd' on '11rac2'

CRS-2673: Attempting to stop 'ora.asm' on '11rac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac2'

CRS-2677: Stop of 'ora.crf' on '11rac2' succeeded

CRS-2677: Stop of 'ora.evmd' on '11rac2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on '11rac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on '11rac2' succeeded

CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac2'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on '11rac2'

CRS-2677: Stop of 'ora.cssd' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on '11rac2'

CRS-2677: Stop of 'ora.gipcd' on '11rac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac2'

CRS-2677: Stop of 'ora.gpnpd' on '11rac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

 

 

[root@11rac2 cfgtoollogs]# cat /u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-16-26.log

2014-10-14 19:16:26: Using Oracle CRS home /u01/app/11.2.0/grid_2

2014-10-14 19:16:26: Checking parameters from paramfile /u01/app/11.2.0/grid_2/crs/install/crsconfig_params to validate installer variables

2014-10-14 19:16:26: The configuration parameter file /u01/app/11.2.0/grid_2/crs/install/crsconfig_params is valid

2014-10-14 19:16:26: ### Printing the configuration values from files:

2014-10-14 19:16:26:    /u01/app/11.2.0/grid_2/crs/install/crsconfig_params

2014-10-14 19:16:26:    /u01/app/11.2.0/grid/crs/install/s_crsconfig_defs

2014-10-14 19:16:26: ASM_AU_SIZE=1

2014-10-14 19:16:26: ASM_DISCOVERY_STRING=

2014-10-14 19:16:26: ASM_DISKS=

2014-10-14 19:16:26: ASM_DISK_GROUP=

2014-10-14 19:16:26: ASM_REDUNDANCY=

2014-10-14 19:16:26: ASM_SPFILE=

2014-10-14 19:16:26: ASM_UPGRADE=false

2014-10-14 19:16:26: CLSCFG_MISSCOUNT=

2014-10-14 19:16:26: CLUSTER_GUID=

2014-10-14 19:16:26: CLUSTER_NAME=

2014-10-14 19:16:26: CRFHOME="/u01/app/11.2.0/grid_2"

2014-10-14 19:16:26: CRS_LIMIT_CORE=unlimited

2014-10-14 19:16:26: CRS_LIMIT_MEMLOCK=unlimited

2014-10-14 19:16:26: CRS_LIMIT_OPENFILE=65536

2014-10-14 19:16:26: CRS_LIMIT_STACK=2048

2014-10-14 19:16:26: CRS_NODEVIPS=

2014-10-14 19:16:26: CRS_STORAGE_OPTION=0

2014-10-14 19:16:26: CSS_LEASEDURATION=400

2014-10-14 19:16:26: DIRPREFIX=

2014-10-14 19:16:26: DISABLE_OPROCD=0

2014-10-14 19:16:26: EXTERNAL_ORACLE=/opt/oracle

2014-10-14 19:16:26: EXTERNAL_ORACLE_BIN=/opt/oracle/bin

2014-10-14 19:16:26: GNS_ADDR_LIST=

2014-10-14 19:16:26: GNS_ALLOW_NET_LIST=

2014-10-14 19:16:26: GNS_CONF=false

2014-10-14 19:16:26: GNS_DENY_ITF_LIST=

2014-10-14 19:16:26: GNS_DENY_NET_LIST=

2014-10-14 19:16:26: GNS_DOMAIN_LIST=

2014-10-14 19:16:26: GPNPCONFIGDIR=/u01/app/11.2.0/grid_2

2014-10-14 19:16:26: GPNPGCONFIGDIR=/u01/app/11.2.0/grid_2

2014-10-14 19:16:26: GPNP_PA=

2014-10-14 19:16:26: HOST_NAME_LIST=

2014-10-14 19:16:26: ID=/etc/init.d

2014-10-14 19:16:26: INIT=/sbin/init

2014-10-14 19:16:26: INITCTL=/sbin/initctl

2014-10-14 19:16:26: ISROLLING=true

2014-10-14 19:16:26: IT=/etc/inittab

2014-10-14 19:16:26: JLIBDIR=/u01/app/11.2.0/grid_2/jlib

2014-10-14 19:16:26: JREDIR=/u01/app/11.2.0/grid_2/jdk/jre/

2014-10-14 19:16:26: LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8

2014-10-14 19:16:26: MSGFILE=/var/adm/messages

2014-10-14 19:16:26: NETWORKS=

2014-10-14 19:16:26: NEW_HOST_NAME_LIST=

2014-10-14 19:16:26: NEW_NODEVIPS=

2014-10-14 19:16:26: NEW_NODE_NAME_LIST=

2014-10-14 19:16:26: NEW_PRIVATE_NAME_LIST=

2014-10-14 19:16:26: NODELIST=

2014-10-14 19:16:26: NODE_NAME_LIST=

2014-10-14 19:16:26: OCFS_CONFIG=

2014-10-14 19:16:26: OCRCONFIG=/etc/oracle/ocr.loc

2014-10-14 19:16:26: OCRCONFIGDIR=/etc/oracle

2014-10-14 19:16:26: OCRID=

2014-10-14 19:16:26: OCRLOC=ocr.loc

2014-10-14 19:16:26: OCR_LOCATIONS=

2014-10-14 19:16:26: OLASTGASPDIR=/etc/oracle/lastgasp

2014-10-14 19:16:26: OLD_CRS_HOME=

2014-10-14 19:16:26: OLRCONFIG=/etc/oracle/olr.loc

2014-10-14 19:16:26: OLRCONFIGDIR=/etc/oracle

2014-10-14 19:16:26: OLRLOC=olr.loc

2014-10-14 19:16:26: OPROCDCHECKDIR=/etc/oracle/oprocd/check

2014-10-14 19:16:26: OPROCDDIR=/etc/oracle/oprocd

2014-10-14 19:16:26: OPROCDFATALDIR=/etc/oracle/oprocd/fatal

2014-10-14 19:16:26: OPROCDSTOPDIR=/etc/oracle/oprocd/stop

2014-10-14 19:16:26: ORACLE_BASE=/u01/app/grid

2014-10-14 19:16:26: ORACLE_HOME=/u01/app/11.2.0/grid_2

2014-10-14 19:16:26: ORACLE_OWNER=grid

2014-10-14 19:16:26: ORA_ASM_GROUP=dba

2014-10-14 19:16:26: ORA_DBA_GROUP=dba

2014-10-14 19:16:26: PRIVATE_NAME_LIST=

2014-10-14 19:16:26: RCALLDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc5.d /etc/rc.d/rc6.d

2014-10-14 19:16:26: RCKDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc6.d

2014-10-14 19:16:26: RCSDIR=/etc/rc.d/rc3.d /etc/rc.d/rc5.d

2014-10-14 19:16:26: RC_KILL=K15

2014-10-14 19:16:26: RC_KILL_OLD=K96

2014-10-14 19:16:26: RC_KILL_OLD2=K19

2014-10-14 19:16:26: RC_START=S96

2014-10-14 19:16:26: REUSEDG=false

2014-10-14 19:16:26: SCAN_NAME=

2014-10-14 19:16:26: SCAN_PORT=0

2014-10-14 19:16:26: SCRBASE=/etc/oracle/scls_scr

2014-10-14 19:16:26: SILENT=true

2014-10-14 19:16:26: SO_EXT=so

2014-10-14 19:16:26: SRVCFGLOC=srvConfig.loc

2014-10-14 19:16:26: SRVCONFIG=/var/opt/oracle/srvConfig.loc

2014-10-14 19:16:26: SRVCONFIGDIR=/var/opt/oracle

2014-10-14 19:16:26: TZ=Asia/Shanghai

2014-10-14 19:16:26: UPSTART_INIT_DIR=/etc/init

2014-10-14 19:16:26: USER_IGNORED_PREREQ=true

2014-10-14 19:16:26: VNDR_CLUSTER=false

2014-10-14 19:16:26: VOTING_DISKS=

2014-10-14 19:16:26: ### Printing other configuration values ###

2014-10-14 19:16:26: CLSCFG_EXTRA_PARMS=

2014-10-14 19:16:26: HAS_GROUP=dba

2014-10-14 19:16:26: HAS_USER=root

2014-10-14 19:16:26: HOST=11rac2

2014-10-14 19:16:26: OLR_DIRECTORY=/u01/app/11.2.0/grid_2/cdata

2014-10-14 19:16:26: OLR_LOCATION=/u01/app/11.2.0/grid_2/cdata/11rac2.olr

2014-10-14 19:16:26: ORA_CRS_HOME=/u01/app/11.2.0/grid_2

2014-10-14 19:16:26: SUPERUSER=root

2014-10-14 19:16:26: VF_DISCOVERY_STRING=

2014-10-14 19:16:26: crscfg_trace=1

2014-10-14 19:16:26: crscfg_trace_file=/u01/app/11.2.0/grid_2/cfgtoollogs/opatchauto2014-10-14_19-16-26.log

2014-10-14 19:16:26: hosts=

2014-10-14 19:16:26: osdfile=/u01/app/11.2.0/grid/crs/install/s_crsconfig_defs

2014-10-14 19:16:26: parameters_valid=1

2014-10-14 19:16:26: paramfile=/u01/app/11.2.0/grid_2/crs/install/crsconfig_params

2014-10-14 19:16:26: platform_family=unix

2014-10-14 19:16:26: srvctl_trc_suff=0

2014-10-14 19:16:26: user_is_superuser=1

2014-10-14 19:16:26: ### Printing of configuration values complete ###

2014-10-14 19:16:26: Executing /u01/app/11.2.0/grid/bin/crsctl stop crs -f

2014-10-14 19:16:26: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl stop crs -f

2014-10-14 19:18:28: Command output:

>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac2'

>  CRS-2673: Attempting to stop 'ora.crsd' on '11rac2'

>  CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '11rac2'

>  CRS-2673: Attempting to stop 'ora.oc4j' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.cvu' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.CRS.dg' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.DATA.dg' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '11rac2'

>  CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.scan1.vip' on '11rac2'

>  CRS-2677: Stop of 'ora.LISTENER.lsnr' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.11rac2.vip' on '11rac2'

>  CRS-2677: Stop of 'ora.scan1.vip' on '11rac2' succeeded

>  CRS-2672: Attempting to start 'ora.scan1.vip' on '11rac1'

>  CRS-2677: Stop of 'ora.11rac2.vip' on '11rac2' succeeded

>  CRS-2672: Attempting to start 'ora.11rac2.vip' on '11rac1'

>  CRS-2677: Stop of 'ora.DATA.dg' on '11rac2' succeeded

>  CRS-2676: Start of 'ora.scan1.vip' on '11rac1' succeeded

>  CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '11rac1'

>  CRS-2676: Start of 'ora.11rac2.vip' on '11rac1' succeeded

>  CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '11rac1' succeeded

>  CRS-2677: Stop of 'ora.oc4j' on '11rac2' succeeded

>  CRS-2672: Attempting to start 'ora.oc4j' on '11rac1'

>  CRS-2677: Stop of 'ora.cvu' on '11rac2' succeeded

>  CRS-2672: Attempting to start 'ora.cvu' on '11rac1'

>  CRS-2676: Start of 'ora.cvu' on '11rac1' succeeded

>  CRS-2676: Start of 'ora.oc4j' on '11rac1' succeeded

>  CRS-2677: Stop of 'ora.CRS.dg' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.asm' on '11rac2'

>  CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.ons' on '11rac2'

>  CRS-2677: Stop of 'ora.ons' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.net1.network' on '11rac2'

>  CRS-2677: Stop of 'ora.net1.network' on '11rac2' succeeded

>  CRS-2792: Shutdown of Cluster Ready Services-managed resources on '11rac2' has completed

>  CRS-2677: Stop of 'ora.crsd' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.crf' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.ctssd' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.evmd' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.asm' on '11rac2'

>  CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac2'

>  CRS-2677: Stop of 'ora.crf' on '11rac2' succeeded

>  CRS-2677: Stop of 'ora.evmd' on '11rac2' succeeded

>  CRS-2677: Stop of 'ora.mdnsd' on '11rac2' succeeded

>  CRS-2677: Stop of 'ora.ctssd' on '11rac2' succeeded

>  CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac2'

>  CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.cssd' on '11rac2'

>  CRS-2677: Stop of 'ora.cssd' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.gipcd' on '11rac2'

>  CRS-2677: Stop of 'ora.gipcd' on '11rac2' succeeded

>  CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac2'

>  CRS-2677: Stop of 'ora.gpnpd' on '11rac2' succeeded

>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac2' has completed

>  CRS-4133: Oracle High Availability Services has been stopped.

>End Command output

2014-10-14 19:18:28: /u01/app/11.2.0/grid/bin/crsctl stop crs -f

2014-10-14 19:18:28: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:30: Command output:

>  CRS-4639: Could not contact Oracle High Availability Services

>  CRS-4000: Command Check failed, or completed with errors.

>End Command output

2014-10-14 19:18:30: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check has

2014-10-14 19:18:32: Command output:

>  CRS-4639: Could not contact Oracle High Availability Services

>End Command output

2014-10-14 19:18:32: Validating /etc/oracle/olr.loc file for OLR location /u01/app/11.2.0/grid_2/cdata/11rac2.olr

2014-10-14 19:18:32: /etc/oracle/olr.loc already exists. Backing up /etc/oracle/olr.loc to /etc/oracle/olr.loc.orig

2014-10-14 19:18:32: Done setting permissions on file /etc/oracle/olr.loc

2014-10-14 19:18:32: 11.2 CHM upgrade actions are already performed

2014-10-14 19:18:32: Patching Oracle Clusterware

2014-10-14 19:18:32: norestart flag is set to 0

2014-10-14 19:18:32: Checking if OCR is on ASM

2014-10-14 19:18:32: Retrieving OCR main disk location

2014-10-14 19:18:32: Opening file /etc/oracle/ocr.loc

2014-10-14 19:18:32: Value (+CRS) is set for key=ocrconfig_loc

2014-10-14 19:18:32: Retrieving OCR mirror disk location

2014-10-14 19:18:32: Opening file /etc/oracle/ocr.loc

2014-10-14 19:18:32: Value () is set for key=ocrmirrorconfig_loc

2014-10-14 19:18:32: Retrieving OCR loc3 disk location

2014-10-14 19:18:32: Opening file /etc/oracle/ocr.loc

2014-10-14 19:18:32: Value () is set for key=ocrconfig_loc3

2014-10-14 19:18:32: Retrieving OCR loc4 disk location

2014-10-14 19:18:32: Opening file /etc/oracle/ocr.loc

2014-10-14 19:18:32: Value () is set for key=ocrconfig_loc4

2014-10-14 19:18:32: Retrieving OCR loc5 disk location

2014-10-14 19:18:32: Opening file /etc/oracle/ocr.loc

2014-10-14 19:18:32: Value () is set for key=ocrconfig_loc5

2014-10-14 19:18:32: Executing cmd: /u01/app/11.2.0/grid_2/bin/acfsdriverstate supported

2014-10-14 19:18:32: Command output:

>  ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.32-300.10.1.el5uek'

>  ACFS-9201: Not Supported

>End Command output

2014-10-14 19:18:32: acfs is not supported

2014-10-14 19:18:32: Starting Oracle Clusterware

2014-10-14 19:18:32: Executing /u01/app/11.2.0/grid_2/bin/crsctl start crs

2014-10-14 19:18:32: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl start crs

2014-10-14 19:18:40: Command output:

>  CRS-4123: Oracle High Availability Services has been started.

>End Command output

2014-10-14 19:18:40: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check crs

2014-10-14 19:18:40: Command output:

>  CRS-4638: Oracle High Availability Services is online

>  CRS-4535: Cannot communicate with Cluster Ready Services

>  CRS-4530: Communications failure contacting Cluster Synchronization Services daemon

>  CRS-4534: Cannot communicate with Event Manager

>End Command output

2014-10-14 19:18:40: Running /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:40: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:40: Checking the status of cluster

2014-10-14 19:18:45: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:45: Checking the status of cluster

2014-10-14 19:18:50: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:50: Checking the status of cluster

2014-10-14 19:18:55: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:18:55: Checking the status of cluster

2014-10-14 19:19:00: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:00: Checking the status of cluster

2014-10-14 19:19:05: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:05: Checking the status of cluster

2014-10-14 19:19:10: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:10: Checking the status of cluster

2014-10-14 19:19:15: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:15: Checking the status of cluster

2014-10-14 19:19:20: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:20: Checking the status of cluster

2014-10-14 19:19:25: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:25: Checking the status of cluster

2014-10-14 19:19:30: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:30: Checking the status of cluster

2014-10-14 19:19:35: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:35: Checking the status of cluster

2014-10-14 19:19:40: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:41: Checking the status of cluster

2014-10-14 19:19:46: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:46: Checking the status of cluster

2014-10-14 19:19:51: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:51: Checking the status of cluster

2014-10-14 19:19:56: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:19:56: Checking the status of cluster

2014-10-14 19:20:01: Executing cmd: /u01/app/11.2.0/grid_2/bin/crsctl check cluster -n 11rac2

2014-10-14 19:20:01: Oracle CRS stack installed and running

6. Switch the DATABASE to the new home

The ORACLE_HOME path registered for the database must be updated; changing it on one node is enough.

[oracle@11rac1 ~]$ srvctl modify database -d power -o /u01/app/oracle/product/11.2.0/db_2

[oracle@11rac1 ~]$ srvctl config database -d power -a

Database unique name: power

Database name: power

Oracle home: /u01/app/oracle/product/11.2.0/db_2

Oracle user: oracle

Spfile: +DATA/power/spfilepower.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: power

Database instances: power1,power2

Disk Groups: DATA

Mount point paths:

Services: power1,power2,powera,powerb

Type: RAC

Database is enabled

 

Database is administrator managed

Modify the /etc/oratab file on both nodes as follows (a sed sketch for making the edit appears after the listing):

[grid@11rac1 ~]$ tail -2 /etc/oratab

+ASM1:/u01/app/11.2.0/grid_2:N          # line added by Agent

power:/u01/app/oracle/product/11.2.0/db_2:N             # line added by Agent
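
One way to make that edit on each node is with sed (a sketch; it writes a backup to /etc/oratab.bak and anchors on the trailing colon so that grid does not also match grid_2):

[root@11rac1 ~]# sed -i.bak -e 's|/u01/app/11.2.0/grid:|/u01/app/11.2.0/grid_2:|' -e 's|/u01/app/oracle/product/11.2.0/db_1:|/u01/app/oracle/product/11.2.0/db_2:|' /etc/oratab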

 

Start the resources that were stopped earlier:

[oracle@11rac1 ~]$ srvctl start home -o $ORACLE_HOME -n  11rac1 -s /tmp/11rac1.txt
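
The matching call on node 2 uses the state file captured there earlier (same paths assumed as above):

[oracle@11rac2 ~]$ /u01/app/oracle/product/11.2.0/db_2/bin/srvctl start home -o /u01/app/oracle/product/11.2.0/db_2 -n 11rac2 -s /tmp/11rac2.txt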

 

 

[root@11rac1 ~]# su - grid

[grid@11rac1 ~]$ crsctl stat resource -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.DATA.dg

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.LISTENER.lsnr

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.asm

               ONLINE  ONLINE       11rac1                   Started            

               ONLINE  ONLINE       11rac2                   Started            

ora.gsd

               OFFLINE OFFLINE      11rac1                                      

               OFFLINE OFFLINE      11rac2                                      

ora.net1.network

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

ora.ons

               ONLINE  ONLINE       11rac1                                      

               ONLINE  ONLINE       11rac2                                      

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.11rac1.vip

      1        ONLINE  ONLINE       11rac1                                      

ora.11rac2.vip

      1        ONLINE  ONLINE       11rac2                                      

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       11rac1                                      

ora.cvu

      1        ONLINE  ONLINE       11rac1                                      

ora.oc4j

      1        ONLINE  ONLINE       11rac1                                      

ora.power.db

      1        ONLINE  ONLINE       11rac1                   Open               

      2        ONLINE  ONLINE       11rac2                   Open               

ora.power.power1.svc

      1        ONLINE  ONLINE       11rac1                                      

ora.power.power2.svc

      1        ONLINE  ONLINE       11rac1                                      

ora.power.powera.svc

      1        ONLINE  ONLINE       11rac1                                      

ora.power.powerb.svc

      1        ONLINE  ONLINE       11rac1                                      

ora.scan1.vip

      1        ONLINE  ONLINE       11rac1

 

Check the patch level:

[grid@11rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches

18031683;Database Patch Set Update : 11.2.0.3.10 (18031683)

17592127;Grid Infrastructure Patch Set Update : 11.2.0.3.9 (HAS Components)

 

 

[oracle@11rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches

18031683;Database Patch Set Update : 11.2.0.3.10 (18031683)

17592127;Grid Infrastructure Patch Set Update : 11.2.0.3.9 (HAS Components)

7. Update the data dictionary

The data dictionary update itself is omitted here; see the rolling upgrade article. The usual step is sketched below.
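
For reference, the usual 11.2 post-PSU step is to load the bundle into the data dictionary from the new home on one instance and then recompile (a sketch; the PSU readme is authoritative):

[oracle@11rac1 ~]$ cd $ORACLE_HOME/rdbms/admin

[oracle@11rac1 admin]$ sqlplus / as sysdba

SQL> @catbundle.sql psu apply

SQL> @utlrp.sql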
