Supplementary notes on the 11.2.0.3 to 11.2.0.4.5 upgrade
May 18, 2015
The upgrade steps themselves are covered in the earlier post: http://www.htz.pw/2015/05/17/gridcrs-11-2-0-3%E5%8D%87%E7%BA%A7%E5%88%B011-2-0-4-5%E6%93%8D%E4%BD%9C%E6%AD%A5%E9%AA%A4.html
Below are some supplementary log records for that upgrade.
1. Running runcluvfy before the upgrade
It is recommended to run runcluvfy before the upgrade to verify that the host environment meets the requirements.
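The runcluvfy.sh used here should be the copy shipped at the top level of the unzipped 11.2.0.4 installation media, not the cluvfy from the existing 11.2.0.3 home. A minimal staging sketch, run as grid (the zip file name p13390677_112040_Linux-x86-64_3of7.zip and the /home/grid staging directory are assumptions, not taken from this environment's record):

[grid@11rac1 ~]$ unzip -q p13390677_112040_Linux-x86-64_3of7.zip -d /home/grid    # stage the new GI media
[grid@11rac1 ~]$ cd /home/grid/grid                                               # runcluvfy.sh sits at the top of the media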
[root@11rac2 ~]# id grid
uid=501(grid) gid=501(dba) groups=501(dba)
[root@11rac2 ~]# id oracle
uid=502(oracle) gid=501(dba) groups=501(dba)

[grid@11rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n 11rac1,11rac2 -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0/grid_1 -dest_version 11.2.0.4.0 -fixup -fixupdir /home/grid/fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "11rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  11rac1                                yes
  11rac2                                yes
Result: Node reachability check passed from node "11rac1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  11rac2                                passed
  11rac1                                passed
Result: User equivalence check passed for user "grid"

Checking CRS user consistency
Result: CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  11rac2                                passed
  11rac1                                passed
Verification of the hosts config file successful

Interface information for node "11rac2"
 Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
 ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
 eth0   192.168.111.14   192.168.111.0    0.0.0.0          192.168.111.1    00:0C:29:97:ED:5C  1500
 eth0   192.168.111.16   192.168.111.0    0.0.0.0          192.168.111.1    00:0C:29:97:ED:5C  1500
 eth1   192.168.112.14   192.168.112.0    0.0.0.0          192.168.111.1    00:0C:29:97:ED:66  1500
 eth1   169.254.118.80   169.254.0.0      0.0.0.0          192.168.111.1    00:0C:29:97:ED:66  1500
 bond0  192.168.113.14   192.168.113.0    0.0.0.0          192.168.111.1    00:0C:29:97:ED:70  1500

Interface information for node "11rac1"
 Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
 ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
 eth0   192.168.111.13   192.168.111.0    0.0.0.0          192.168.111.1    00:0C:29:47:2D:44  1500
 eth0   192.168.111.17   192.168.111.0    0.0.0.0          192.168.111.1    00:0C:29:47:2D:44  1500
 eth0   192.168.111.15   192.168.111.0    0.0.0.0          192.168.111.1    00:0C:29:47:2D:44  1500
 eth1   192.168.112.13   192.168.112.0    0.0.0.0          192.168.111.1    00:0C:29:47:2D:4E  1500
 eth1   169.254.118.37   169.254.0.0      0.0.0.0          192.168.111.1    00:0C:29:47:2D:4E  1500

Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  11rac2[192.168.111.14]          11rac2[192.168.111.16]          yes
  11rac2[192.168.111.14]          11rac1[192.168.111.13]          yes
  11rac2[192.168.111.14]          11rac1[192.168.111.17]          yes
  11rac2[192.168.111.14]          11rac1[192.168.111.15]          yes
  11rac2[192.168.111.16]          11rac1[192.168.111.13]          yes
  11rac2[192.168.111.16]          11rac1[192.168.111.17]          yes
  11rac2[192.168.111.16]          11rac1[192.168.111.15]          yes
  11rac1[192.168.111.13]          11rac1[192.168.111.17]          yes
  11rac1[192.168.111.13]          11rac1[192.168.111.15]          yes
  11rac1[192.168.111.17]          11rac1[192.168.111.15]          yes
Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.111.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  11rac1:192.168.111.13           11rac2:192.168.111.14           passed
  11rac1:192.168.111.13           11rac2:192.168.111.16           passed
  11rac1:192.168.111.13           11rac1:192.168.111.17           passed
  11rac1:192.168.111.13           11rac1:192.168.111.15           passed
Result: TCP connectivity check passed for subnet "192.168.111.0"

Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  11rac2[192.168.112.14]          11rac1[192.168.112.13]          yes
Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "192.168.112.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  11rac1:192.168.112.13           11rac2:192.168.112.14           passed
Result: TCP connectivity check passed for subnet "192.168.112.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.111.0".
Subnet mask consistency check passed for subnet "192.168.112.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.111.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.111.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.112.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.112.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking OCR integrity...
OCR integrity check passed

Checking ASMLib configuration.
  Node Name                             Status
  ------------------------------------  ------------------------
  11rac2                                passed
  11rac1                                passed
Result: Check for ASMLib configuration passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        3.8582GB (4045660.0KB)    1.5GB (1572864.0KB)       passed
  11rac1        3.5907GB (3765080.0KB)    1.5GB (1572864.0KB)       passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        2.784GB (2919188.0KB)     50MB (51200.0KB)          passed
  11rac1        2.2307GB (2339012.0KB)    50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        3.9062GB (4095992.0KB)    3.8582GB (4045660.0KB)    passed
  11rac1        3.9062GB (4095992.0KB)    3.5907GB (3765080.0KB)    passed
Result: Swap space check passed

Check: Free disk space for "11rac2:/u01/app/11.2.0/grid_1,11rac2:/tmp"
  Path                    Node Name  Mount point  Available  Required  Status
  ----------------------  ---------  -----------  ---------  --------  ------
  /u01/app/11.2.0/grid_1  11rac2     /            20.0596GB  7.5GB     passed
  /tmp                    11rac2     /            20.0596GB  7.5GB     passed
Result: Free disk space check passed for "11rac2:/u01/app/11.2.0/grid_1,11rac2:/tmp"

Check: Free disk space for "11rac1:/u01/app/11.2.0/grid_1,11rac1:/tmp"
  Path                    Node Name  Mount point  Available  Required  Status
  ----------------------  ---------  -----------  ---------  --------  ------
  /u01/app/11.2.0/grid_1  11rac1     /            21.0671GB  7.5GB     passed
  /tmp                    11rac1     /            21.0671GB  7.5GB     passed
Result: Free disk space check passed for "11rac1:/u01/app/11.2.0/grid_1,11rac1:/tmp"

Check: User existence for "grid"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  11rac2        passed                    exists(501)
  11rac1        passed                    exists(501)

Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed
Result: User existence check passed for "grid"

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  11rac2        passed                    exists
  11rac1        passed                    exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "dba" [as Primary]
  Node Name     User Exists  Group Exists  User in Group  Primary  Status
  ------------  -----------  ------------  -------------  -------  ------
  11rac2        yes          yes           yes            yes      passed
  11rac1        yes          yes           yes            yes      passed
Result: Membership check for user "grid" in group "dba" [as Primary] passed

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        5                         3,5                       passed
  11rac1        5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name     Type      Available  Required  Status
  ------------  --------  ---------  --------  ------
  11rac2        hard      65536      65536     passed
  11rac1        hard      65536      65536     passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name     Type      Available  Required  Status
  ------------  --------  ---------  --------  ------
  11rac2        soft      1024       1024      passed
  11rac1        soft      1024       1024      passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name     Type      Available  Required  Status
  ------------  --------  ---------  --------  ------
  11rac2        hard      16384      16384     passed
  11rac1        hard      16384      16384     passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name     Type      Available  Required  Status
  ------------  --------  ---------  --------  ------
  11rac2        soft      2047       2047      passed
  11rac1        soft      2047       2047      passed
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/u01/app/11.2.0/grid".
There are no oracle patches required for home "/u01/app/11.2.0/grid_1".

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        x86_64                    x86_64                    passed
  11rac1        x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  11rac2        2.6.32-300.10.1.el5uek    2.6.18                    passed
  11rac1        2.6.32-300.10.1.el5uek    2.6.18                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        250           250           250           passed
  11rac1        250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        32000         32000         32000         passed
  11rac1        32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        100           100           100           passed
  11rac1        100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        128           128           128           passed
  11rac1        128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        11036870912   11036870912   2071377920    passed
  11rac1        11036870912   11036870912   1927720960    passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        4096          4096          4096          passed
  11rac1        4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        2097152       2097152       2097152       passed
  11rac1        2097152       2097152       2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        6815744       6815744       6815744       passed
  11rac1        6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name     Current                   Configured                Required                  Status
  ------------  ------------------------  ------------------------  ------------------------  ------
  11rac2        between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  11rac1        between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        262144        262144        262144        passed
  11rac1        262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        4194304       4194304       4194304       passed
  11rac1        4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        262144        262144        262144        passed
  11rac1        262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        1048586       1048586       1048576       passed
  11rac1        131071        unknown       1048576       failed    Current value incorrect. Configured value unknown.
Result: Kernel parameter check failed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name     Current       Configured    Required      Status    Comment
  ------------  ------------  ------------  ------------  --------  --------
  11rac2        1048576       1048576       1048576       passed
  11rac1        1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        make-3.81-3.el5               make-3.81                 passed
  11rac1        make-3.81-3.el5               make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "binutils"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        binutils-2.17.50.0.6-20.el5   binutils-2.17.50.0.6      passed
  11rac1        binutils-2.17.50.0.6-20.el5   binutils-2.17.50.0.6      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        gcc(x86_64)-4.1.2-52.el5      gcc(x86_64)-4.1.2         passed
  11rac1        gcc(x86_64)-4.1.2-52.el5      gcc(x86_64)-4.1.2         passed
Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        libaio(x86_64)-0.3.106-5      libaio(x86_64)-0.3.106    passed
  11rac1        libaio(x86_64)-0.3.106-5      libaio(x86_64)-0.3.106    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        glibc(x86_64)-2.5-81          glibc(x86_64)-2.5-24      passed
  11rac1        glibc(x86_64)-2.5-81          glibc(x86_64)-2.5-24      passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                             Required                           Status
  ------------  ------------------------------------  ---------------------------------  ------
  11rac2        compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
  11rac1        compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "elfutils-libelf(x86_64)"
  Node Name     Available                             Required                           Status
  ------------  ------------------------------------  ---------------------------------  ------
  11rac2        elfutils-libelf(x86_64)-0.137-3.el5   elfutils-libelf(x86_64)-0.125      passed
  11rac1        elfutils-libelf(x86_64)-0.137-3.el5   elfutils-libelf(x86_64)-0.125      passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"

Check: Package existence for "elfutils-libelf-devel"
  Node Name     Available                             Required                           Status
  ------------  ------------------------------------  ---------------------------------  ------
  11rac2        elfutils-libelf-devel-0.137-3.el5     elfutils-libelf-devel-0.125        passed
  11rac1        elfutils-libelf-devel-0.137-3.el5     elfutils-libelf-devel-0.125        passed
Result: Package existence check passed for "elfutils-libelf-devel"

Check: Package existence for "glibc-common"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        glibc-common-2.5-81           glibc-common-2.5          passed
  11rac1        glibc-common-2.5-81           glibc-common-2.5          passed
Result: Package existence check passed for "glibc-common"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        glibc-devel(x86_64)-2.5-81    glibc-devel(x86_64)-2.5   passed
  11rac1        glibc-devel(x86_64)-2.5-81    glibc-devel(x86_64)-2.5   passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "glibc-headers"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        glibc-headers-2.5-81          glibc-headers-2.5         passed
  11rac1        glibc-headers-2.5-81          glibc-headers-2.5         passed
Result: Package existence check passed for "glibc-headers"

Check: Package existence for "gcc-c++(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        gcc-c++(x86_64)-4.1.2-52.el5  gcc-c++(x86_64)-4.1.2     passed
  11rac1        gcc-c++(x86_64)-4.1.2-52.el5  gcc-c++(x86_64)-4.1.2     passed
Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                         Required                      Status
  ------------  --------------------------------  ----------------------------  ------
  11rac2        libaio-devel(x86_64)-0.3.106-5    libaio-devel(x86_64)-0.3.106  passed
  11rac1        libaio-devel(x86_64)-0.3.106-5    libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        libgcc(x86_64)-4.1.2-52.el5   libgcc(x86_64)-4.1.2      passed
  11rac1        libgcc(x86_64)-4.1.2-52.el5   libgcc(x86_64)-4.1.2      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                        Required                  Status
  ------------  -------------------------------  ------------------------  ------
  11rac2        libstdc++(x86_64)-4.1.2-52.el5   libstdc++(x86_64)-4.1.2   passed
  11rac1        libstdc++(x86_64)-4.1.2-52.el5   libstdc++(x86_64)-4.1.2   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                             Required                        Status
  ------------  ------------------------------------  ------------------------------  ------
  11rac2        libstdc++-devel(x86_64)-4.1.2-52.el5  libstdc++-devel(x86_64)-4.1.2   passed
  11rac1        libstdc++-devel(x86_64)-4.1.2-52.el5  libstdc++-devel(x86_64)-4.1.2   passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        sysstat-7.0.2-11.el5          sysstat-7.0.2             passed
  11rac1        sysstat-7.0.2-11.el5          sysstat-7.0.2             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        ksh-20100621-5.el5            ksh-20060214              passed
  11rac1        ksh-20100621-5.el5            ksh-20060214              passed
Result: Package existence check passed for "ksh"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  11rac2                                passed
  11rac1                                passed
Check for consistency of root user's primary group passed

Check: Package existence for "cvuqdisk"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ------
  11rac2        cvuqdisk-1.0.9-1              cvuqdisk-1.0.9-1          passed
  11rac1        cvuqdisk-1.0.9-1              cvuqdisk-1.0.9-1          passed
Result: Package existence check passed for "cvuqdisk"

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s): 11rac2,11rac1
Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  11rac2        passed                    does not exist
  11rac1        passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  11rac2        0022                      0022                      passed
  11rac1        0022                      0022                      passed
Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "11rac2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  11rac2                                failed
  11rac1                                failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: 11rac2,11rac1

File "/etc/resolv.conf" is not consistent across nodes

UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations

Check: Time zone consistency
Result: Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed

Fixup information has been generated for following node(s):
11rac1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.4.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all the nodes.
The DNS errors reported here are normal, because this environment does not use DNS to resolve the SCAN IP.
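The other two failures do need handling before rootupgrade.sh is run. For the wmem_max failure on 11rac1, cluvfy already generated a fixup script (path taken from the output above); the sysctl lines below are a manual equivalent, and moving ntp.conf aside is one hedged way to clear PRVF-5507 when Oracle CTSS, rather than NTP, is to keep cluster time (repeat that step on 11rac2):

[root@11rac1 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
[root@11rac1 ~]# sysctl -w net.core.wmem_max=1048576                      # manual alternative to the fixup script
[root@11rac1 ~]# echo "net.core.wmem_max = 1048576" >> /etc/sysctl.conf   # persist across reboots
[root@11rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak                       # no ntp.conf => CTSS runs in active mode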
2. Commands executed by rootupgrade.sh
Below is part of the background (rootcrs) log output from running the rootupgrade.sh script on the first node:
[root@11rac1 crsconfig]# grep -E "Executing|Running|Sync" rootcrs_11rac1.log
2015-05-13 00:39:14: Executing cmd: /u01/app/11.2.0/grid_1/crs/install/tfa_setup.sh -silent -crshome /u01/app/11.2.0/grid_1
2015-05-13 00:39:37: Running as user grid: pwd
2015-05-13 00:39:37: s_run_as_user2: Running /bin/su grid -c ' pwd '
2015-05-13 00:39:37: Executing cmd: /u01/app/11.2.0/grid_1/bin/sqlplus -V
2015-05-13 00:39:37: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK
2015-05-13 00:39:37: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK '
2015-05-13 00:39:37: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -pname VERSION
2015-05-13 00:39:37: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -pname VERSION '
2015-05-13 00:39:38: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -listckpt -name ROOTCRS_STACK -pname VERSION
2015-05-13 00:39:38: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -listckpt -name ROOTCRS_STACK -pname VERSION '
2015-05-13 00:39:39: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -state START
2015-05-13 00:39:39: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -state START '
2015-05-13 00:39:39: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:39: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:39: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -pname VERSION -pvalue 11.2.0.4.0
2015-05-13 00:39:39: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -pname VERSION -pvalue 11.2.0.4.0 '
2015-05-13 00:39:40: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:40: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:40: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status
2015-05-13 00:39:40: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status '
2015-05-13 00:39:40: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OLDHOMEINFO
2015-05-13 00:39:40: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OLDHOMEINFO '
2015-05-13 00:39:40: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -state START
2015-05-13 00:39:40: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -state START '
2015-05-13 00:39:40: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:40: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:40: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs activeversion
2015-05-13 00:39:41: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl get css clusterguid
2015-05-13 00:39:41: Executing ocrcheck to get ocrid
2015-05-13 00:39:45: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -pname OLD_CRS_HOME -pvalue /u01/app/11.2.0/grid
2015-05-13 00:39:45: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -pname OLD_CRS_HOME -pvalue /u01/app/11.2.0/grid '
2015-05-13 00:39:45: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:45: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:45: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -pname OLD_CRS_VERSION -pvalue 11.2.0.3.0
2015-05-13 00:39:45: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -pname OLD_CRS_VERSION -pvalue 11.2.0.3.0 '
2015-05-13 00:39:45: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:45: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:45: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -state SUCCESS
2015-05-13 00:39:45: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLDHOMEINFO -state SUCCESS '
2015-05-13 00:39:46: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:46: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:46: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs softwareversion 11rac1
2015-05-13 00:39:46: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs softwareversion 11rac2
2015-05-13 00:39:46: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check crs
2015-05-13 00:39:46: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluvfy stage -pre crsinst -n 11rac1,11rac2 -upgrade -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0/grid_1 -dest_version 11.2.0.4.0 -_patch_only
2015-05-13 00:39:46: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluvfy stage -pre crsinst -n 11rac1,11rac2 -upgrade -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0/grid_1 -dest_version 11.2.0.4.0 -_patch_only '
2015-05-13 00:39:55: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status
2015-05-13 00:39:55: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status '
2015-05-13 00:39:55: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM
2015-05-13 00:39:55: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM '
2015-05-13 00:39:55: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -state START
2015-05-13 00:39:55: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -state START '
2015-05-13 00:39:56: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:56: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:56: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM
2015-05-13 00:39:56: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM '
2015-05-13 00:39:56: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM -status
2015-05-13 00:39:56: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PARAM -status '
2015-05-13 00:39:56: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -pfile /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
2015-05-13 00:39:56: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -pfile /u01/app/11.2.0/grid_1/crs/install/crsconfig_params '
2015-05-13 00:39:56: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -state SUCCESS
2015-05-13 00:39:56: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_PARAM -state SUCCESS '
2015-05-13 00:39:57: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:57: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:57: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ONETIME
2015-05-13 00:39:57: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ONETIME '
2015-05-13 00:39:57: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ONETIME -state START
2015-05-13 00:39:57: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ONETIME -state START '
2015-05-13 00:39:57: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:57: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:57: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ONETIME -status
2015-05-13 00:39:57: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ONETIME -status '
2015-05-13 00:39:57: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ONETIME -state SUCCESS
2015-05-13 00:39:57: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ONETIME -state SUCCESS '
2015-05-13 00:39:58: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:39:58: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:39:58: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/11rac1/wallets/peer" -wu=peer
2015-05-13 00:39:58: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/11rac1/wallets/peer" -wu=peer '
2015-05-13 00:39:58: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/11rac1/wallets/prdr" -wu=peer
2015-05-13 00:39:58: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/11rac1/wallets/prdr" -wu=peer '
2015-05-13 00:39:58: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool getpval -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -o="/tmp/fileHf2CLz" -prf_cn -prf_cid -prf_sq -prf_pa
2015-05-13 00:39:58: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool getpval -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -o="/tmp/fileHf2CLz" -prf_cn -prf_cid -prf_sq -prf_pa '
2015-05-13 00:39:58: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/peer" -wu=peer
2015-05-13 00:39:58: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/peer" -wu=peer '
2015-05-13 00:39:59: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/prdr" -wu=peer
2015-05-13 00:39:59: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/prdr" -wu=peer '
2015-05-13 00:39:59: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool getpval -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -o="/tmp/file9k5wMO" -prf_cn -prf_cid -prf_sq -prf_pa
2015-05-13 00:39:59: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool getpval -p="/u01/app/11.2.0/grid_1/gpnp/profiles/peer/profile.xml" -o="/tmp/file9k5wMO" -prf_cn -prf_cid -prf_sq -prf_pa '
2015-05-13 00:39:59: Running as user grid: /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/peer" -wu=peer
2015-05-13 00:39:59: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/gpnptool verify -p="/u01/app/11.2.0/grid_1/gpnp/11rac1/profiles/peer/profile.xml" -w="file:/u01/app/11.2.0/grid_1/gpnp/wallets/peer" -wu=peer '
2015-05-13 00:39:59: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check crs
2015-05-13 00:39:59: Executing cmd: /u01/app/11.2.0/grid_1/bin/ocrcheck -local -debug
2015-05-13 00:40:31: Executing as grid: /u01/app/11.2.0/grid_1/bin/asmca -silent -upgradeLocalASM -firstNode
2015-05-13 00:40:31: Running as user grid: /u01/app/11.2.0/grid_1/bin/asmca -silent -upgradeLocalASM -firstNode
2015-05-13 00:40:31: Executing /bin/su grid -c "/u01/app/11.2.0/grid_1/bin/asmca -silent -upgradeLocalASM -firstNode"
2015-05-13 00:40:31: Executing cmd: /bin/su grid -c "/u01/app/11.2.0/grid_1/bin/asmca -silent -upgradeLocalASM -firstNode"
2015-05-13 00:40:40: Executing /u01/app/11.2.0/grid/bin/crsctl stop crs -f
2015-05-13 00:40:40: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl stop crs -f
2015-05-13 00:42:39: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:42:41: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 00:42:43: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status
2015-05-13 00:42:43: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status '
2015-05-13 00:42:44: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OLR
2015-05-13 00:42:44: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OLR '
2015-05-13 00:42:44: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state START
2015-05-13 00:42:44: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state START '
2015-05-13 00:42:45: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:42:45: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:42:45: Executing /u01/app/11.2.0/grid_1/bin/ocrconfig -local -upgrade grid dba
2015-05-13 00:42:45: Executing cmd: /u01/app/11.2.0/grid_1/bin/ocrconfig -local -upgrade grid dba
2015-05-13 00:42:45: Executing cmd: /u01/app/11.2.0/grid_1/bin/clscfg -localupgrade
2015-05-13 00:42:45: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state SUCCESS
2015-05-13 00:42:45: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state SUCCESS '
2015-05-13 00:42:45: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:42:45: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:42:45: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state SUCCESS
2015-05-13 00:42:45: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OLR -state SUCCESS '
2015-05-13 00:42:46: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:42:46: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:42:46: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OHASD
2015-05-13 00:42:46: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_OHASD '
2015-05-13 00:42:46: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OHASD -state START
2015-05-13 00:42:46: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OHASD -state START '
2015-05-13 00:42:46: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:42:46: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:42:46: Executing cmd: /bin/rpm -q sles-release
2015-05-13 00:42:46: Executing cmd: /bin/rpm -q sles-release
2015-05-13 00:42:46: Executing cmd: /bin/rpm -q sles-release
2015-05-13 00:42:46: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OHASD -state SUCCESS
2015-05-13 00:42:46: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OHASD -state SUCCESS '
2015-05-13 00:42:46: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:42:46: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:42:46: Executing cmd: /bin/rpm -q sles-release
2015-05-13 00:42:46: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 00:42:53: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 00:43:00: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 00:43:07: Executing cmd: /bin/rpm -qf /sbin/init
2015-05-13 00:43:07: Executing /sbin/init q
2015-05-13 00:43:07: Executing cmd: /sbin/init q
2015-05-13 00:43:12: Executing /sbin/init q
2015-05-13 00:43:12: Executing cmd: /sbin/init q
2015-05-13 00:43:12: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start has
2015-05-13 00:43:12: Executing /etc/init.d/ohasd install
2015-05-13 00:43:12: Executing cmd: /etc/init.d/ohasd install
2015-05-13 00:43:12: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 00:43:12: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_INITRES
2015-05-13 00:43:12: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_INITRES '
2015-05-13 00:43:12: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_INITRES -state START
2015-05-13 00:43:12: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_INITRES -state START '
2015-05-13 00:43:12: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:43:12: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:43:12: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.daemon.type -basetype cluster_resource -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ora.daemon.type -init
2015-05-13 00:43:12: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.daemon.type -basetype cluster_resource -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ora.daemon.type -init
2015-05-13 00:43:12: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.haip.type -basetype cluster_resource -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ora.haip.type -init
2015-05-13 00:43:12: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.haip.type -basetype cluster_resource -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ora.haip.type -init
2015-05-13 00:43:13: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.mdns.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/mdns.type -init
2015-05-13 00:43:13: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.mdns.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/mdns.type -init
2015-05-13 00:43:13: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.gpnp.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/gpnp.type -init
2015-05-13 00:43:13: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.gpnp.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/gpnp.type -init
2015-05-13 00:43:13: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.gipc.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/gipc.type -init
2015-05-13 00:43:13: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.gipc.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/gipc.type -init
2015-05-13 00:43:13: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.cssd.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/cssd.type -init
2015-05-13 00:43:13: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.cssd.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/cssd.type -init
2015-05-13 00:43:13: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.cssdmonitor.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/cssdmonitor.type -init
2015-05-13 00:43:13: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.cssdmonitor.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/cssdmonitor.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.crs.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/crs.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.crs.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/crs.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.evm.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/evm.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.evm.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/evm.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.ctss.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ctss.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.ctss.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/ctss.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.crf.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/crf.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.crf.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/crf.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.asm.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/asm.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.asm.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/asm.type -init
2015-05-13 00:43:14: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.drivers.acfs.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/drivers.acfs.type -init
2015-05-13 00:43:14: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.drivers.acfs.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/drivers.acfs.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add type ora.diskmon.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/diskmon.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add type ora.diskmon.type -basetype ora.daemon.type -file /u01/app/11.2.0/grid_1/log/11rac1/ohasd/diskmon.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.mdnsd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx'" -type ora.mdns.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.mdnsd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx'" -type ora.mdns.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.gpnpd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='weak(ora.mdnsd)'" -type ora.gpnp.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.gpnpd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='weak(ora.mdnsd)'" -type ora.gpnp.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.gipcd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.gpnpd)',STOP_DEPENDENCIES=hard(intermediate:ora.gpnpd)" -type ora.gipc.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.gipcd -attr "ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.gpnpd)',STOP_DEPENDENCIES=hard(intermediate:ora.gpnpd)" -type ora.gipc.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.diskmon -attr " USR_ORA_ENV=ORACLE_USER=grid,START_DEPENDENCIES='weak(concurrent:ora.cssd)pullup:always(ora.cssd)',ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x' " -type ora.diskmon.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.diskmon -attr " USR_ORA_ENV=ORACLE_USER=grid,START_DEPENDENCIES='weak(concurrent:ora.cssd)pullup:always(ora.cssd)',ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x' " -type ora.diskmon.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cssdmonitor -attr " CSS_USER=grid,ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x' " -type ora.cssdmonitor.type -init -f
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cssdmonitor -attr " CSS_USER=grid,ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x' " -type ora.cssdmonitor.type -init -f
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cssd -attr " CSS_USER=grid,ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x',AUTO_START=always,START_DEPENDENCIES='weak(concurrent:ora.diskmon)hard(ora.cssdmonitor,ora.gpnpd,ora.gipcd)pullup(ora.gpnpd,ora.gipcd)',STOP_DEPENDENCIES='hard(intermediate:ora.gipcd,shutdown:ora.diskmon,intermediate:ora.cssdmonitor)' " -type ora.cssd.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cssd -attr " CSS_USER=grid,ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x',AUTO_START=always,START_DEPENDENCIES='weak(concurrent:ora.diskmon)hard(ora.cssdmonitor,ora.gpnpd,ora.gipcd)pullup(ora.gpnpd,ora.gipcd)',STOP_DEPENDENCIES='hard(intermediate:ora.gipcd,shutdown:ora.diskmon,intermediate:ora.cssdmonitor)' " -type ora.cssd.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.ctssd -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.ctss.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.ctssd -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.ctss.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cluster_interconnect.haip -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.haip.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.cluster_interconnect.haip -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.haip.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.evmd -attr " ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.cssd, ora.ctssd, ora.gipcd)pullup(ora.cssd, ora.ctssd, ora.gipcd)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd,intermediate:ora.gipcd)' " -type ora.evm.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.evmd -attr " ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.cssd, ora.ctssd, ora.gipcd)pullup(ora.cssd, ora.ctssd, ora.gipcd)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd,intermediate:ora.gipcd)' " -type ora.evm.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.crf -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.crf.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.crf -attr "ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x'" -type ora.crf.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.asm -attr " ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.cssd,ora.cluster_interconnect.haip,ora.ctssd)pullup(ora.cssd,ora.cluster_interconnect.haip,ora.ctssd)weak(ora.drivers.acfs)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd,shutdown:ora.cluster_interconnect.haip)' " -type ora.asm.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.asm -attr " ACL='owner:grid:rw-,pgrp:dba:rw-,other::r--,user:grid:rwx',START_DEPENDENCIES='hard(ora.cssd,ora.cluster_interconnect.haip,ora.ctssd)pullup(ora.cssd,ora.cluster_interconnect.haip,ora.ctssd)weak(ora.drivers.acfs)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd,shutdown:ora.cluster_interconnect.haip)' " -type ora.asm.type -init
2015-05-13 00:43:15: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.crsd -attr " ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x',START_DEPENDENCIES='hard(ora.asm,ora.cssd,ora.ctssd,ora.gipcd)pullup(ora.asm,ora.cssd,ora.ctssd,ora.gipcd)',STOP_DEPENDENCIES='hard(shutdown:ora.asm,intermediate:ora.cssd,intermediate:ora.gipcd)' " -type ora.crs.type -init
2015-05-13 00:43:15: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.crsd -attr " ACL='owner:root:rw-,pgrp:dba:rw-,other::r--,user:grid:r-x',START_DEPENDENCIES='hard(ora.asm,ora.cssd,ora.ctssd,ora.gipcd)pullup(ora.asm,ora.cssd,ora.ctssd,ora.gipcd)',STOP_DEPENDENCIES='hard(shutdown:ora.asm,intermediate:ora.cssd,intermediate:ora.gipcd)' " -type ora.crs.type -init
2015-05-13 00:43:16: Executing cmd: /u01/app/11.2.0/grid_1/bin/acfsdriverstate supported
2015-05-13 00:43:16: Executing /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.drivers.acfs -attr "ACL='owner:root:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x'" -type ora.drivers.acfs.type -init
2015-05-13 00:43:16: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl add resource ora.drivers.acfs -attr "ACL='owner:root:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x'" -type ora.drivers.acfs.type -init
2015-05-13 00:43:16: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_INITRES -state SUCCESS
2015-05-13 00:43:16: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_INITRES -state SUCCESS '
2015-05-13 00:43:16: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:43:16: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:43:16: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ACFSINST
2015-05-13 00:43:16: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ACFSINST '
2015-05-13 00:43:16: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ACFSINST -state START
2015-05-13 00:43:16: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ACFSINST -state START '
2015-05-13 00:43:16: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:43:16: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:43:16: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ACFSINST -status
2015-05-13 00:43:16: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_ACFSINST -status '
2015-05-13 00:43:17: Executing '/u01/app/11.2.0/grid_1/bin/acfsroot install'
2015-05-13 00:43:17: Executing cmd: /u01/app/11.2.0/grid_1/bin/acfsroot install
2015-05-13 00:43:21: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ACFSINST -state SUCCESS
2015-05-13 00:43:21: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_ACFSINST -state SUCCESS '
2015-05-13 00:43:21: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:43:21: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:43:21: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.mdnsd -init
2015-05-13 00:43:23: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.gpnpd -init
2015-05-13 00:43:24: Executing cmd: /u01/app/11.2.0/grid_1/bin/gpnptool lfind
2015-05-13 00:43:24: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.cssd -init
2015-05-13 00:43:51: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE
2015-05-13 00:43:54: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.crf -init
2015-05-13 00:43:55: Executing /u01/app/11.2.0/grid_1/bin/crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init
2015-05-13 00:43:55: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init
2015-05-13 00:43:55: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.cluster_interconnect.haip -init
2015-05-13 00:44:05: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.asm -init
2015-05-13 00:44:12: Executing /u01/app/11.2.0/grid_1/bin/ocrconfig -upgrade grid dba
2015-05-13 00:44:12: Executing cmd: /u01/app/11.2.0/grid_1/bin/ocrconfig -upgrade grid dba
2015-05-13 00:44:17: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.crsd -init
2015-05-13 00:44:18: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start resource ora.evmd -init
2015-05-13 00:44:19: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check crs
2015-05-13 00:44:19: Running /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:19: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:25: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:30: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:35: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:40: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:45: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:50: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:50: Running /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:50: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 00:44:50: Executing /u01/app/11.2.0/grid_1/bin/clscfg -upgrade -g dba
2015-05-13 00:44:50: Executing cmd: /u01/app/11.2.0/grid_1/bin/clscfg -upgrade -g dba
2015-05-13 00:44:50: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_NODECONFIG
2015-05-13 00:44:50: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_NODECONFIG '
2015-05-13 00:44:51: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_NODECONFIG -state START
2015-05-13 00:44:51: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_NODECONFIG -state START '
2015-05-13 00:44:51: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 00:44:51: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
2015-05-13 00:44:51: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs softwareversion 11rac1
2015-05-13 00:44:51: Executing /u01/app/11.2.0/grid_1/bin/srvctl upgrade model -s 11.2.0.3.0 -d 11.2.0.4.0 -p first
2015-05-13 00:44:51: Executing cmd: /u01/app/11.2.0/grid_1/bin/srvctl upgrade model -s 11.2.0.3.0 -d 11.2.0.4.0 -p first
2015-05-13 00:44:52: Running as user grid: /u01/app/11.2.0/grid_1/bin/srvctl stop oc4j
2015-05-13 00:44:52: s_run_as_user2:
Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/srvctl stop oc4j ' 2015-05-13 00:45:08: Running as user grid: /u01/app/11.2.0/grid_1/bin/srvctl disable oc4j 2015-05-13 00:45:08: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/srvctl disable oc4j ' 2015-05-13 00:45:08: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_NODECONFIG -state SUCCESS 2015-05-13 00:45:08: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_NODECONFIG -state SUCCESS ' 2015-05-13 00:45:08: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' 2015-05-13 00:45:08: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk 2015-05-13 00:45:08: Executing cmd: /u01/app/11.2.0/grid_1/bin/ocrconfig -local -manualbackup 2015-05-13 00:45:09: Running as user grid: /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -state SUCCESS 2015-05-13 00:45:09: s_run_as_user2: Running /bin/su grid -c ' /u01/app/11.2.0/grid_1/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -state SUCCESS ' 2015-05-13 00:45:09: Sync the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' 2015-05-13 00:45:09: Sync '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml' to the physical disk
3, Rolling back after rootupgrade.sh has been run on node 1 |
[root@11rac1 11.2.0]# export ORACLE_HOME=/u01/app/11.2.0/grid_1
[root@11rac1 11.2.0]# $ORACLE_HOME/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.3.0
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
The '-lastnode' option is not specified on the downgrade command. Reissue the downgrade command on this node with the '-lastnode' option specified after downgrading all other nodes.
The reason for the error is simple: '-lastnode' starts CRS in exclusive nocrs mode, which requires the clusterware stack to be down on the entire cluster, so it can only be run after every other node has already been downgraded.
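For reference, the scripted downgrade that the message is asking for runs in two phases; this is exactly the sequence shown in section 4 below (all commands as root, with the same homes and version as this environment):

# Phase 1: downgrade every node except the one that will go last
/u01/app/11.2.0/grid_1/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.3.0

# Phase 2: on the last node only; this is the step that also downgrades the OCR and voting disks
/u01/app/11.2.0/grid_1/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.3.0 -lastnode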
The manual downgrade is shown below.
On the other nodes, which are still running normally:
ALTER SYSTEM STOP ROLLING MIGRATION;
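The upgrade leaves ASM in rolling migration mode, so this statement is issued in an ASM instance on a node that is still up. A minimal sketch, assuming a sysasm session on 11rac2; the SYS_CONTEXT query for checking the cluster state is taken from the 11.2 ASM documentation as I recall it, so treat it as an assumption:

[grid@11rac2 ~]$ sqlplus / as sysasm
-- Check whether ASM is still in rolling migration mode
-- (assumed query; expect 'In Rolling Migration' before, 'Normal' after)
SQL> SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') FROM dual;
-- Take the cluster out of rolling migration so it stays on the old version
SQL> ALTER SYSTEM STOP ROLLING MIGRATION;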
Back on node 1:
cp -p /tmp/enmo/ohasd /etc/init.d
cp -p /tmp/enmo/inittab /etc
cp -pr /tmp/enmo/oracle /etc
cp -pr /tmp/enmo/oraInventory /u01/app/
ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/11rac1/backup_20150512_091709.olr
Start CRS from the old home:
[root@11rac1 grid]# pwd
/u01/app/11.2.0/grid
[root@11rac1 grid]# cd bin
[root@11rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@11rac1 bin]# ps -ef|grep 11.2.0
grid 394 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
grid 408 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid 420 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid 433 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
root 436 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
root 454 1 2 01:36 ? 00:00:06 /u01/app/11.2.0/grid/bin/osysmond.bin
root 469 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor
root 490 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/cssdagent
grid 505 1 0 01:36 ? 00:00:01 /u01/app/11.2.0/grid/bin/ocssd.bin
root 638 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/octssd.bin reboot
grid 660 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/evmd.bin
root 821 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/ologgerd -m 11rac2 -r -d /u01/app/11.2.0/grid/crf/db/11rac1
root 849 1 0 01:36 ? 00:00:01 /u01/app/11.2.0/grid/bin/crsd.bin reboot
grid 942 660 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log
grid 983 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root 988 1 0 01:36 ? 00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid 1095 1 0 01:37 ? 00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
grid 1096 1095 0 01:37 ? 00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
grid 1122 1 0 01:37 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle 1162 1 0 01:37 ? 00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root 2000 6158 0 01:40 pts/0 00:00:00 grep 11.2.0
root 32549 1 0 01:33 ? 00:00:02 /u01/app/11.2.0/grid_1/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid_1/tfa/11rac1/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid_1/tfa/11rac1/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid_1/tfa/11rac1/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid_1/tfa/11rac1/tfa_home
root 32737 1 0 01:35 ? 00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
Reboot the host to test:
[root@11rac1 ~]# crsctl check cluster -all
**************************************************************
11rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
11rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
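Before trusting the rolled-back node, it is also worth verifying the restored OLR and the cluster version. A minimal check, using standard clusterware commands from the old home; the expected active version is 11.2.0.3.0 here, since rootupgrade.sh had only run on this one node and the active version is not raised until the last node upgrades:

[root@11rac1 bin]# ./ocrcheck -local                # the OLR restored from backup should report no problems
[root@11rac1 bin]# ./crsctl query crs activeversion # expect [11.2.0.3.0]
[root@11rac1 bin]# ./crsctl check cluster -all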
The rollback completed successfully.
4, Normal rollback of GRID |
Rollback on node 2
[root@11rac2 ~]# /u01/app/11.2.0/grid_1/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.3.0
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac2'
CRS-2673: Attempting to stop 'ora.crsd' on '11rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '11rac2'
CRS-2673: Attempting to stop 'ora.CRS.dg' on '11rac2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on '11rac2'
CRS-2673: Attempting to stop 'ora.power.db' on '11rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '11rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '11rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on '11rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on '11rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.11rac2.vip' on '11rac2'
CRS-2677: Stop of 'ora.scan1.vip' on '11rac2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on '11rac1'
CRS-2677: Stop of 'ora.registry.acfs' on '11rac2' succeeded
CRS-2677: Stop of 'ora.power.db' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on '11rac2'
CRS-2677: Stop of 'ora.DATA.dg' on '11rac2' succeeded
CRS-2677: Stop of 'ora.11rac2.vip' on '11rac2' succeeded
CRS-2672: Attempting to start 'ora.11rac2.vip' on '11rac1'
CRS-2676: Start of 'ora.scan1.vip' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '11rac1'
CRS-2676: Start of 'ora.11rac2.vip' on '11rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '11rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on '11rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on '11rac1'
CRS-2677: Stop of 'ora.CRS.dg' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '11rac2'
CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded
CRS-2676: Start of 'ora.oc4j' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on '11rac2'
CRS-2677: Stop of 'ora.ons' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '11rac2'
CRS-2677: Stop of 'ora.net1.network' on '11rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '11rac2' has completed
CRS-2677: Stop of 'ora.crsd' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '11rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on '11rac2'
CRS-2673: Attempting to stop 'ora.evmd' on '11rac2'
CRS-2673: Attempting to stop 'ora.asm' on '11rac2'
CRS-2677: Stop of 'ora.evmd' on '11rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '11rac2' succeeded
CRS-2677: Stop of 'ora.asm' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '11rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on '11rac2' succeeded
CRS-2677: Stop of 'ora.cssd' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on '11rac2'
CRS-2677: Stop of 'ora.crf' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '11rac2'
CRS-2677: Stop of 'ora.gipcd' on '11rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac2'
CRS-2677: Stop of 'ora.gpnpd' on '11rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully downgraded Oracle Clusterware stack on this node
Now let's see which commands were executed:
2015-05-13 04:45:08: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs activeversion
2015-05-13 04:45:08: Command output:
> Oracle Clusterware active version on the cluster is [11.2.0.4.0]
>End Command output
2015-05-13 04:45:08: Version String passed is: Oracle Clusterware active version on the cluster is [11.2.0.4.0]
2015-05-13 04:45:08: Version Info returned is : 11.2.0.4.0
2015-05-13 04:45:08: Got CRS active version: 11.2.0.4.0
2015-05-13 04:45:08: Got CRS active version: 11.2.0.4.0
2015-05-13 04:45:08: Downgrading to a lower 11.2 version
2015-05-13 04:45:08: Checking if OCR is on ASM
2015-05-13 04:45:08: Retrieving OCR main disk location
2015-05-13 04:45:08: Opening file /etc/oracle/ocr.loc
2015-05-13 04:45:08: Value (+CRS) is set for key=ocrconfig_loc
2015-05-13 04:45:08: Retrieving OCR mirror disk location
2015-05-13 04:45:08: Opening file /etc/oracle/ocr.loc
2015-05-13 04:45:08: Value () is set for key=ocrmirrorconfig_loc
2015-05-13 04:45:08: Retrieving OCR loc3 disk location
2015-05-13 04:45:08: Opening file /etc/oracle/ocr.loc
2015-05-13 04:45:08: Value () is set for key=ocrconfig_loc3
2015-05-13 04:45:08: Retrieving OCR loc4 disk location
2015-05-13 04:45:08: Opening file /etc/oracle/ocr.loc
2015-05-13 04:45:08: Value () is set for key=ocrconfig_loc4
2015-05-13 04:45:08: Retrieving OCR loc5 disk location
2015-05-13 04:45:08: Opening file /etc/oracle/ocr.loc
2015-05-13 04:45:08: Value () is set for key=ocrconfig_loc5
2015-05-13 04:45:08: Try to get the exact DG on which OCR/VD are settling
2015-05-13 04:45:08: Running /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac2
2015-05-13 04:45:08: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac2
2015-05-13 04:45:08: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
...............................
2015-05-13 04:45:09: Diskgroups found: CRS CRS CRS
2015-05-13 04:45:09: Check olr.olc bakcup
2015-05-13 04:45:09: Executing '/u01/app/11.2.0/grid_1/bin/crsctl stop crs -f'
2015-05-13 04:45:09: Executing /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:45:09: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:45:51: Command output:
> CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac2'
......................................
2015-05-13 04:45:51: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check crs
2015-05-13 04:45:53: Command output:
> CRS-4639: Could not contact Oracle High Availability Services
>End Command output
2015-05-13 04:45:53: Restore olr.loc from backup
2015-05-13 04:45:53: restore old olr.loc file
2015-05-13 04:45:53: copy "/etc/oracle/olr.loc.bkp" => "/etc/oracle/olr.loc"
2015-05-13 04:45:53: copy "/u01/app/11.2.0/grid_1/crf/admin/crf11rac2.orabkp" => "/u01/app/11.2.0/grid/crf/admin/crf11rac2.ora"
2015-05-13 04:45:53: copy "/u01/app/11.2.0/grid_1/crf/admin/crf11rac2.cfgbkp" => "/u01/app/11.2.0/grid/crf/admin/crf11rac2.cfg"
2015-05-13 04:45:53: Remove the checkpoint file
2015-05-13 04:45:53: CkptFile: /u01/app/grid/Clusterware/ckptGridHA_11rac2.xml
2015-05-13 04:45:53: Remove the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac2.xml'
2015-05-13 04:45:53: Removing file /u01/app/grid/Clusterware/ckptGridHA_11rac2.xml
2015-05-13 04:45:53: Successfully removed file: /u01/app/grid/Clusterware/ckptGridHA_11rac2.xml
2015-05-13 04:45:53: Restore init files
2015-05-13 04:45:53: restore init scripts
2015-05-13 04:45:53: copy "/u01/app/11.2.0/grid/crs/init/init.ohasd" => "/etc/init.d/init.ohasd"
2015-05-13 04:45:53: copy "/u01/app/11.2.0/grid/crs/init/ohasd" => "/etc/init.d/ohasd"
2015-05-13 04:45:53: Remove all new version related stuff from /etc/oratab
2015-05-13 04:45:53: Copying file /etc/oratab.new.11rac2 to /etc/oratab
2015-05-13 04:45:53: copy "/etc/oratab.new.11rac2" => "/etc/oratab"
2015-05-13 04:45:53: Removing file /etc/oratab.new.11rac2
2015-05-13 04:45:53: Removing file /etc/oratab.new.11rac2
2015-05-13 04:45:53: Successfully removed file: /etc/oratab.new.11rac2
2015-05-13 04:45:53: Successfully downgraded Oracle Clusterware stack on this node
As the log shows, on a non-last node the rollback is really just a simple file-replacement operation.
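Since the per-node rollback amounts to putting saved files back, it is cheap insurance to stage your own copies before running rootupgrade.sh. A minimal sketch of such a backup; the /tmp/enmo staging directory matches the one used in section 3 above, and the exact file list is my assumption based on what the downgrade log restores:

# Stage copies of the files the downgrade would otherwise have to rebuild (run as root)
mkdir -p /tmp/enmo
cp -p /etc/init.d/init.ohasd /etc/init.d/ohasd /tmp/enmo/   # clusterware init scripts
cp -p /etc/inittab /tmp/enmo/                               # contains the ohasd respawn entry
cp -pr /etc/oracle /tmp/enmo/                               # olr.loc, ocr.loc and related files
cp -pr /u01/app/oraInventory /tmp/enmo/                     # central inventory
/u01/app/11.2.0/grid/bin/ocrconfig -local -manualbackup     # take a fresh OLR backup in the old home as well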
Rollback on node 1
Below is the last node:
[root@11rac1 ~]# /u01/app/11.2.0/grid_1/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.3.0 -lastnode
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac1'
CRS-2673: Attempting to stop 'ora.crsd' on '11rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '11rac1'
CRS-2673: Attempting to stop 'ora.11rac2.vip' on '11rac1'
CRS-2673: Attempting to stop 'ora.cvu' on '11rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '11rac1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on '11rac1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on '11rac1'
CRS-2673: Attempting to stop 'ora.power.power1.svc' on '11rac1'
CRS-2673: Attempting to stop 'ora.power.power2.svc' on '11rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on '11rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '11rac1'
CRS-2677: Stop of 'ora.power.power1.svc' on '11rac1' succeeded
CRS-2677: Stop of 'ora.power.power2.svc' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.power.db' on '11rac1'
CRS-2677: Stop of 'ora.cvu' on '11rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '11rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on '11rac1'
CRS-2677: Stop of 'ora.11rac2.vip' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.11rac1.vip' on '11rac1'
CRS-2677: Stop of 'ora.power.db' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on '11rac1'
CRS-2677: Stop of 'ora.registry.acfs' on '11rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on '11rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on '11rac1' succeeded
CRS-2677: Stop of 'ora.11rac1.vip' on '11rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on '11rac1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '11rac1'
CRS-2677: Stop of 'ora.asm' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on '11rac1'
CRS-2677: Stop of 'ora.ons' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '11rac1'
CRS-2677: Stop of 'ora.net1.network' on '11rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '11rac1' has completed
CRS-2677: Stop of 'ora.crsd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '11rac1'
CRS-2673: Attempting to stop 'ora.crf' on '11rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on '11rac1'
CRS-2673: Attempting to stop 'ora.evmd' on '11rac1'
CRS-2673: Attempting to stop 'ora.asm' on '11rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac1'
CRS-2677: Stop of 'ora.evmd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.crf' on '11rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.asm' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '11rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on '11rac1' succeeded
CRS-2677: Stop of 'ora.cssd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '11rac1'
CRS-2677: Stop of 'ora.gipcd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac1'
CRS-2677: Stop of 'ora.gpnpd' on '11rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on '11rac1'
CRS-2676: Start of 'ora.mdnsd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '11rac1'
CRS-2676: Start of 'ora.gpnpd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '11rac1'
CRS-2672: Attempting to start 'ora.gipcd' on '11rac1'
CRS-2676: Start of 'ora.cssdmonitor' on '11rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '11rac1'
CRS-2672: Attempting to start 'ora.diskmon' on '11rac1'
CRS-2676: Start of 'ora.diskmon' on '11rac1' succeeded
CRS-2676: Start of 'ora.cssd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on '11rac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2672: Attempting to start 'ora.ctssd' on '11rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2676: Start of 'ora.drivers.acfs' on '11rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on '11rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on '11rac1'
CRS-2676: Start of 'ora.asm' on '11rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on '11rac1'
CRS-2673: Attempting to stop 'ora.asm' on '11rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '11rac1'
CRS-2677: Stop of 'ora.mdnsd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.asm' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2677: Stop of 'ora.ctssd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '11rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on '11rac1' succeeded
CRS-2677: Stop of 'ora.cssd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '11rac1'
CRS-2677: Stop of 'ora.gipcd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac1'
CRS-2677: Stop of 'ora.gpnpd' on '11rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on '11rac1'
CRS-2676: Start of 'ora.mdnsd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '11rac1'
CRS-2676: Start of 'ora.gpnpd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '11rac1'
CRS-2672: Attempting to start 'ora.gipcd' on '11rac1'
CRS-2676: Start of 'ora.cssdmonitor' on '11rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '11rac1'
CRS-2672: Attempting to start 'ora.diskmon' on '11rac1'
CRS-2676: Start of 'ora.diskmon' on '11rac1' succeeded
CRS-2676: Start of 'ora.cssd' on '11rac1' succeeded
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2672: Attempting to start 'ora.ctssd' on '11rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2676: Start of 'ora.ctssd' on '11rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on '11rac1'
CRS-2676: Start of 'ora.asm' on '11rac1' succeeded
Successfully downgraded OCR to 11.2.0.3.0
CRS-2672: Attempting to start 'ora.crsd' on '11rac1'
CRS-2676: Start of 'ora.crsd' on '11rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '11rac1'
CRS-2673: Attempting to stop 'ora.crsd' on '11rac1'
CRS-2677: Stop of 'ora.crsd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on '11rac1'
CRS-2673: Attempting to stop 'ora.asm' on '11rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '11rac1'
CRS-2677: Stop of 'ora.mdnsd' on '11rac1' succeeded
CRS-2677: Stop of 'ora.asm' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '11rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '11rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '11rac1'
CRS-2677: Stop of 'ora.cssd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '11rac1'
CRS-2677: Stop of 'ora.gipcd' on '11rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '11rac1'
CRS-2677: Stop of 'ora.gpnpd' on '11rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '11rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully downgraded Oracle Clusterware stack on this node
Run '/u01/app/11.2.0/grid/bin/crsctl start crs' on all nodes
Now let's look at the commands:
[root@11rac1 crsconfig]# grep -E "Executing|Running|Sync|Remov|copy|Restore" crsdowngrade_11rac1.log
2015-05-13 04:52:18: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs activeversion
2015-05-13 04:52:18: Running /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 04:52:18: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 04:52:18: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 04:52:23: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl query crs activeversion
2015-05-13 04:52:23: Running /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 04:52:23: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check cluster -n 11rac1
2015-05-13 04:52:23: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check has
2015-05-13 04:52:23: Executing '/u01/app/11.2.0/grid_1/bin/crsctl stop crs -f'
2015-05-13 04:52:23: Executing /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:52:23: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:52:52: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check crs
2015-05-13 04:52:54: Executing '/u01/app/11.2.0/grid_1/bin/crsctl start crs -excl -nocrs'
2015-05-13 04:52:54: Executing /u01/app/11.2.0/grid_1/bin/crsctl start crs -excl -nocrs
2015-05-13 04:52:54: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl start crs -excl -nocrs
2015-05-13 04:54:04: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check css
2015-05-13 04:54:04: Executing '/u01/app/11.2.0/grid_1/bin/crsctl delete css votedisk '+CRS''
2015-05-13 04:54:04: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl delete css votedisk '+CRS'
2015-05-13 04:54:04: Executing '/u01/app/11.2.0/grid_1/bin/crsctl stop crs -f'
2015-05-13 04:54:04: Executing /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:54:04: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl stop crs -f
2015-05-13 04:54:19: Executing cmd: /u01/app/11.2.0/grid_1/bin/crsctl check crs
2015-05-13 04:54:20: Restore olr.loc from backup
2015-05-13 04:54:20: copy "/etc/oracle/olr.loc.bkp" => "/etc/oracle/olr.loc"
2015-05-13 04:54:20: copy "/u01/app/11.2.0/grid_1/crf/admin/crf11rac1.orabkp" => "/u01/app/11.2.0/grid/crf/admin/crf11rac1.ora"
2015-05-13 04:54:20: copy "/u01/app/11.2.0/grid_1/crf/admin/crf11rac1.cfgbkp" => "/u01/app/11.2.0/grid/crf/admin/crf11rac1.cfg"
2015-05-13 04:54:20: Executing '/u01/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs'
2015-05-13 04:54:20: Executing /u01/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs
2015-05-13 04:54:20: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs
2015-05-13 04:55:13: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check css
2015-05-13 04:55:13: Executing /u01/app/11.2.0/grid/bin/ocrconfig -import /u01/app/11.2.0/grid_1/cdata/ocr11.2.0.3.0
2015-05-13 04:55:13: Executing cmd: /u01/app/11.2.0/grid/bin/ocrconfig -import /u01/app/11.2.0/grid_1/cdata/ocr11.2.0.3.0
2015-05-13 04:55:18: Executing cmd: /u01/app/11.2.0/grid/bin/ocrcheck -debug
2015-05-13 04:55:21: Executing '/u01/app/11.2.0/grid/bin/crsctl start resource ora.crsd -init'
2015-05-13 04:55:21: Executing /u01/app/11.2.0/grid/bin/crsctl start resource ora.crsd -init
2015-05-13 04:55:21: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl start resource ora.crsd -init
2015-05-13 04:55:23: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check crs
2015-05-13 04:55:23: Executing '/u01/app/11.2.0/grid/bin/crsctl replace votedisk '+CRS''
2015-05-13 04:55:23: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl replace votedisk '+CRS'
2015-05-13 04:55:23: Executing '/u01/app/11.2.0/grid/bin/crsctl stop crs -f'
2015-05-13 04:55:23: Executing /u01/app/11.2.0/grid/bin/crsctl stop crs -f
2015-05-13 04:55:23: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl stop crs -f
2015-05-13 04:55:40: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check crs
2015-05-13 04:55:42: Remove the checkpoint file
2015-05-13 04:55:42: Remove the checkpoint file '/u01/app/grid/Clusterware/ckptGridHA_11rac1.xml'
2015-05-13 04:55:42: Removing file /u01/app/grid/Clusterware/ckptGridHA_11rac1.xml
2015-05-13 04:55:42: Restore init files
2015-05-13 04:55:42: copy "/u01/app/11.2.0/grid/crs/init/init.ohasd" => "/etc/init.d/init.ohasd"
2015-05-13 04:55:42: copy "/u01/app/11.2.0/grid/crs/init/ohasd" => "/etc/init.d/ohasd"
2015-05-13 04:55:42: Remove all new version related stuff from /etc/oratab
2015-05-13 04:55:42: Removing file /etc/oratab.new.11rac1
2015-05-13 04:55:42: Removing file /etc/oratab.new.11rac1
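Compared with node 2, the last node does extra work: it deletes the voting disk, imports the 11.2.0.3 OCR export, and re-adds the voting disk from the old home. Per the script's closing message, CRS still has to be started from the old home on every node before the downgrade is really finished. A short final check (the expected activeversion output is an assumption, mirroring the query earlier in the log):

# On each node, as root, start CRS from the old 11.2.0.3 home:
/u01/app/11.2.0/grid/bin/crsctl start crs
# Then confirm the cluster is back on the old version:
/u01/app/11.2.0/grid/bin/crsctl query crs activeversion
# expected: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
/u01/app/11.2.0/grid/bin/crsctl check cluster -all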
Permalink: http://www.htz.pw/?p=1069