Environment: OS: Solaris 10 x86 + SF for RAC 5.1; DB: 10.2.0.4.0 (raw devices)
1. Version requirements
No version-requirement note for 11.2.0.4 was found on MetaLink (My Oracle Support), so the 11.2.0.3 requirements are used as the reference below:
Things to Consider Before Upgrading to 11.2.0.3 Grid Infrastructure/ASM (Doc ID 1363369.1)
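Before going further, it is worth recording what is currently active in the cluster. A minimal sketch, using the 10.2 CRS home path from this environment:

$ /u01/app/oracle/product/10.2.0/db_1/bin/crsctl query crs activeversion    # expect 10.2.0.4.0
$ /u01/app/oracle/product/10.2.0/db_1/bin/crsctl query crs softwareversion  # installed version on this node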
2. Pre-upgrade checks
$ cd grid
$ ls
install  readme.html  response  rootpre.sh  rpm  runInstaller  runcluvfy.sh  sshsetup  stage  welcome.html
$ runcluvfy.sh stage -pre crsinst -upgrade -n sol1,sol2 -rolling -src_crshome /u01/app/oracle/product/10.2.0/db_1 -dest_crshome /u01/app/11.2.0/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /tmp -verbose

Performing pre-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "sol1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  sol1                                  yes
  sol2                                  yes
Result: Node reachability check passed from node "sol1"

Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  sol2                                  passed
  sol1                                  passed
Result: User equivalence check passed for user "oracle"

Checking CRS user consistency
Result: CRS user consistency check successful

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  sol2                                  passed
  sol1                                  passed
Verification of the hosts config file successful

Interface information for node "sol2"
  Name     IP Address      Subnet         Gateway         Def. Gateway   HW Address         MTU
  -------  --------------  -------------  --------------  -------------  -----------------  ----
  e1000g0  192.168.111.46  192.168.111.0  192.168.111.46  192.168.111.1  00:0C:29:5A:E5:7A  1500
  e1000g0  192.168.111.48  192.168.111.0  192.168.111.46  192.168.111.1  00:0C:29:5A:E5:7A  1500
  e1000g1  192.168.112.46  192.168.112.0  192.168.112.46  192.168.111.1  00:0C:29:5A:E5:84  1500

Interface information for node "sol1"
  Name     IP Address      Subnet         Gateway         Def. Gateway   HW Address         MTU
  -------  --------------  -------------  --------------  -------------  -----------------  ----
  e1000g0  192.168.111.46  192.168.111.0  192.168.111.46  192.168.111.1  00:0C:29:5A:E5:7A  1500
  e1000g0  192.168.111.48  192.168.111.0  192.168.111.46  192.168.111.1  00:0C:29:5A:E5:7A  1500
  e1000g1  192.168.112.46  192.168.112.0  192.168.112.46  192.168.111.1  00:0C:29:5A:E5:84  1500

Check: Node connectivity for interface "e1000g0"
  Source                Destination           Connected?
  --------------------  --------------------  ----------
  sol2[192.168.111.46]  sol2[192.168.111.48]  yes
  sol2[192.168.111.46]  sol1[192.168.111.46]  yes
  sol2[192.168.111.46]  sol1[192.168.111.48]  yes
  sol2[192.168.111.48]  sol1[192.168.111.46]  yes
  sol2[192.168.111.48]  sol1[192.168.111.48]  yes
  sol1[192.168.111.46]  sol1[192.168.111.48]  yes
Result: Node connectivity passed for interface "e1000g0"

Check: TCP connectivity of subnet "192.168.111.0"
  Source               Destination          Connected?
  -------------------  -------------------  ----------
  sol1:192.168.111.46  sol2:192.168.111.46  failed
ERROR: PRVF-7617 : Node connectivity between "sol1 : 192.168.111.46" and "sol2 : 192.168.111.46" failed
  sol1:192.168.111.46  sol2:192.168.111.48  failed
ERROR: PRVF-7617 : Node connectivity between "sol1 : 192.168.111.46" and "sol2 : 192.168.111.48" failed
  sol1:192.168.111.46  sol1:192.168.111.48  failed
ERROR: PRVF-7617 : Node connectivity between "sol1 : 192.168.111.46" and "sol1 : 192.168.111.48" failed
Result: TCP connectivity check failed for subnet "192.168.111.0"

Check: Node connectivity for interface "e1000g1"
  Source                Destination           Connected?
  --------------------  --------------------  ----------
  sol2[192.168.112.46]  sol1[192.168.112.46]  yes
Result: Node connectivity passed for interface "e1000g1"

Check: TCP connectivity of subnet "192.168.112.0"
  Source               Destination          Connected?
  -------------------  -------------------  ----------
  sol1:192.168.112.46  sol2:192.168.112.46  passed
Result: TCP connectivity check passed for subnet "192.168.112.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.111.0".
Subnet mask consistency check passed for subnet "192.168.112.0".
Subnet mask consistency check passed.

Result: Node connectivity check failed

Checking multicast communication...
Checking subnet "192.168.111.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.111.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.112.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.112.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.

Checking OCR integrity...
Check for compatible storage device for OCR location "/dev/vx/rdsk/ocrvotedg/ocrvol"...
OCR integrity check passed

Check: Total memory
  Node Name  Available          Required           Status
  ---------  -----------------  -----------------  ------
  sol2       4GB (4194304.0KB)  2GB (2097152.0KB)  passed
  sol1       4GB (4194304.0KB)  2GB (2097152.0KB)  passed
Result: Total memory check passed

Check: Available memory
  Node Name  Available               Required          Status
  ---------  ----------------------  ----------------  ------
  sol2       2.0401GB (2139204.0KB)  50MB (51200.0KB)  passed
  sol1       2.0401GB (2139204.0KB)  50MB (51200.0KB)  passed
Result: Available memory check passed

Check: Swap space
  Node Name  Available               Required           Status
  ---------  ----------------------  -----------------  ------
  sol2       8.0051GB (8393956.0KB)  4GB (4194304.0KB)  passed
  sol1       8.0051GB (8393956.0KB)  4GB (4194304.0KB)  passed
Result: Swap space check passed

Check: Free disk space for "sol2:/u01/app/11.2.0/grid,sol2:/var/tmp/"
  Path                  Node Name  Mount point  Available  Required  Status
  --------------------  ---------  -----------  ---------  --------  ------
  /u01/app/11.2.0/grid  sol2       UNKNOWN      NOTAVAIL   7.5GB     failed
  /var/tmp/             sol2       UNKNOWN      NOTAVAIL   7.5GB     failed
Result: Free disk space check failed for "sol2:/u01/app/11.2.0/grid,sol2:/var/tmp/"

Check: Free disk space for "sol1:/u01/app/11.2.0/grid,sol1:/var/tmp/"
  Path                  Node Name  Mount point  Available  Required  Status
  --------------------  ---------  -----------  ---------  --------  ------
  /u01/app/11.2.0/grid  sol1       /            19.5727GB  7.5GB     passed
  /var/tmp/             sol1       /            19.5727GB  7.5GB     passed
Result: Free disk space check passed for "sol1:/u01/app/11.2.0/grid,sol1:/var/tmp/"

Check: User existence for "oracle"
  Node Name  Status  Comment
  ---------  ------  -----------
  sol2       passed  exists(100)
  sol1       passed  exists(100)
Checking for multiple users with UID value 100
Result: Check for multiple users with UID value 100 passed
Result: User existence check passed for "oracle"

Check: Group existence for "dba"
  Node Name  Status  Comment
  ---------  ------  -------
  sol2       passed  exists
  sol1       passed  exists
Result: Group existence check passed for "dba"

Check: Membership of user "oracle" in group "dba" [as Primary]
  Node Name  User Exists  Group Exists  User in Group  Primary  Status
  ---------  -----------  ------------  -------------  -------  ------
  sol2       yes          yes           yes            yes      passed
  sol1       yes          yes           yes            yes      passed
Result: Membership check for user "oracle" in group "dba" [as Primary] passed

Check: Run level
  Node Name  run level  Required  Status
  ---------  ---------  --------  ------
  sol2       3          3         passed
  sol1       3          3         passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name  Type  Available  Required  Status
  ---------  ----  ---------  --------  ------
  sol2       hard  4096       65536     failed
  sol1       hard  4096       65536     failed
Result: Hard limits check failed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name  Type  Available  Required  Status
  ---------  ----  ---------  --------  ------
  sol2       soft  4096       1024      passed
  sol1       soft  4096       1024      passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name  Type  Available  Required  Status
  ---------  ----  ---------  --------  ------
  sol2       hard  27605      16384     passed
  sol1       hard  27605      16384     passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name  Type  Available  Required  Status
  ---------  ----  ---------  --------  ------
  sol2       soft  27605      2047      passed
  sol1       soft  27605      2047      passed
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/u01/app/oracle/product/10.2.0/db_1".
There are no oracle patches required for home "/u01/app/11.2.0/grid".

Check: System architecture
  Node Name  Available                    Required                     Status
  ---------  ---------------------------  ---------------------------  ------
  sol2       64-bit amd64 kernel modules  64-bit amd64 kernel modules  passed
  sol1       64-bit amd64 kernel modules  64-bit amd64 kernel modules  passed
Result: System architecture check passed

Check: Kernel version
  Node Name  Available    Required      Status
  ---------  -----------  ------------  ------
  sol2       5.10-2011.8  5.10-2008.10  passed
  sol1       5.10-2011.8  5.10-2008.10  passed
Result: Kernel version check passed

Check: Kernel parameter for "project.max-sem-ids"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       128      100       passed
  sol1       128      100       passed
Result: Kernel parameter check passed for "project.max-sem-ids"

Check: Kernel parameter for "process.max-sem-nsems"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       512      256       passed
  sol1       512      256       passed
Result: Kernel parameter check passed for "process.max-sem-nsems"

Check: Kernel parameter for "project.max-shm-memory"
  Node Name  Current      Required    Status
  ---------  -----------  ----------  ------
  sol2       10737418240  4294967295  passed
  sol1       10737418240  4294967295  passed
Result: Kernel parameter check passed for "project.max-shm-memory"

Check: Kernel parameter for "project.max-shm-ids"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       128      100       passed
  sol1       128      100       passed
Result: Kernel parameter check passed for "project.max-shm-ids"

Check: Kernel parameter for "tcp_smallest_anon_port"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       9000     9000      passed
  sol1       9000     9000      passed
Result: Kernel parameter check passed for "tcp_smallest_anon_port"

Check: Kernel parameter for "tcp_largest_anon_port"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       65500    65500     passed
  sol1       65500    65500     passed
Result: Kernel parameter check passed for "tcp_largest_anon_port"

Check: Kernel parameter for "udp_smallest_anon_port"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       9000     9000      passed
  sol1       9000     9000      passed
Result: Kernel parameter check passed for "udp_smallest_anon_port"

Check: Kernel parameter for "udp_largest_anon_port"
  Node Name  Current  Required  Status
  ---------  -------  --------  ------
  sol2       65500    65500     passed
  sol1       65500    65500     passed
Result: Kernel parameter check passed for "udp_largest_anon_port"

Check: Package existence for "SUNWarc"
  Node Name  Available                         Required     Status
  ---------  --------------------------------  -----------  ------
  sol2       SUNWarc-11.10.0-2005.01.21.16.34  SUNWarc-...  passed
  sol1       SUNWarc-11.10.0-2005.01.21.16.34  SUNWarc-...  passed
Result: Package existence check passed for "SUNWarc"

Check: Package existence for "SUNWbtool"
  Node Name  Available                           Required       Status
  ---------  ----------------------------------  -------------  ------
  sol2       SUNWbtool-11.10.0-2005.01.21.16.34  SUNWbtool-...  passed
  sol1       SUNWbtool-11.10.0-2005.01.21.16.34  SUNWbtool-...  passed
Result: Package existence check passed for "SUNWbtool"

Check: Package existence for "SUNWhea"
  Node Name  Available                         Required     Status
  ---------  --------------------------------  -----------  ------
  sol2       SUNWhea-11.10.0-2005.01.21.16.34  SUNWhea-...  passed
  sol1       SUNWhea-11.10.0-2005.01.21.16.34  SUNWhea-...  passed
Result: Package existence check passed for "SUNWhea"

Check: Package existence for "SUNWlibm"
  Node Name  Available                 Required      Status
  ---------  ------------------------  ------------  ------
  sol2       SUNWlibm-5.10-2004.12.18  SUNWlibm-...  passed
  sol1       SUNWlibm-5.10-2004.12.18  SUNWlibm-...  passed
Result: Package existence check passed for "SUNWlibm"

Check: Package existence for "SUNWlibms"
  Node Name  Available                  Required       Status
  ---------  -------------------------  -------------  ------
  sol2       SUNWlibms-5.10-2004.11.23  SUNWlibms-...  passed
  sol1       SUNWlibms-5.10-2004.11.23  SUNWlibms-...  passed
Result: Package existence check passed for "SUNWlibms"

Check: Package existence for "SUNWsprot"
  Node Name  Available                  Required       Status
  ---------  -------------------------  -------------  ------
  sol2       SUNWsprot-5.10-2004.12.18  SUNWsprot-...  passed
  sol1       SUNWsprot-5.10-2004.12.18  SUNWsprot-...  passed
Result: Package existence check passed for "SUNWsprot"

Check: Package existence for "SUNWtoo"
  Node Name  Available                         Required     Status
  ---------  --------------------------------  -----------  ------
  sol2       SUNWtoo-11.10.0-2005.01.21.16.34  SUNWtoo-...  passed
  sol1       SUNWtoo-11.10.0-2005.01.21.16.34  SUNWtoo-...  passed
Result: Package existence check passed for "SUNWtoo"

Check: Package existence for "SUNWi1of"
  Node Name  Available                         Required      Status
  ---------  --------------------------------  ------------  ------
  sol2       SUNWi1of-6.6.2.7400-0.2004.12.15  SUNWi1of-...  passed
  sol1       SUNWi1of-6.6.2.7400-0.2004.12.15  SUNWi1of-...  passed
Result: Package existence check passed for "SUNWi1of"

Check: Package existence for "SUNWxwfnt"
  Node Name  Available                          Required       Status
  ---------  ---------------------------------  -------------  ------
  sol2       SUNWxwfnt-6.6.2.7400-0.2004.12.15  SUNWxwfnt-...  passed
  sol1       SUNWxwfnt-6.6.2.7400-0.2004.12.15  SUNWxwfnt-...  passed
Result: Package existence check passed for "SUNWxwfnt"

Check: Package existence for "SUNWlibC"
  Node Name  Available                 Required      Status
  ---------  ------------------------  ------------  ------
  sol2       SUNWlibC-5.10-2004.12.20  SUNWlibC-...  passed
  sol1       SUNWlibC-5.10-2004.12.20  SUNWlibC-...  passed
Result: Package existence check passed for "SUNWlibC"

Check: Package existence for "SUNWcsl"
  Node Name  Available                         Required     Status
  ---------  --------------------------------  -----------  ------
  sol2       SUNWcsl-11.10.0-2005.01.21.16.34  SUNWcsl-...  passed
  sol1       SUNWcsl-11.10.0-2005.01.21.16.34  SUNWcsl-...  passed
Result: Package existence check passed for "SUNWcsl"

Check: Operating system patch for "Patch 139575-03"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 139556-08  Patch 139575-03  passed
  sol1       Patch 139556-08  Patch 139575-03  passed
Result: Operating system patch check passed for "Patch 139575-03"

Check: Operating system patch for "Patch 139556-08"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 139556-08  Patch 139556-08  passed
  sol1       Patch 139556-08  Patch 139556-08  passed
Result: Operating system patch check passed for "Patch 139556-08"

Check: Operating system patch for "Patch 137104-02"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 141445-09  Patch 137104-02  passed
  sol1       Patch 141445-09  Patch 137104-02  passed
Result: Operating system patch check passed for "Patch 137104-02"

Check: Operating system patch for "Patch 120754-06"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 120754-08  Patch 120754-06  passed
  sol1       Patch 120754-08  Patch 120754-06  passed
Result: Operating system patch check passed for "Patch 120754-06"

Check: Operating system patch for "Patch 119961-05"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 119961-08  Patch 119961-05  passed
  sol1       Patch 119961-08  Patch 119961-05  passed
Result: Operating system patch check passed for "Patch 119961-05"

Check: Operating system patch for "Patch 119964-14"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 119964-24  Patch 119964-14  passed
  sol1       Patch 119964-24  Patch 119964-14  passed
Result: Operating system patch check passed for "Patch 119964-14"

Check: Operating system patch for "Patch 141415-04"
  Node Name  Applied          Required         Comment
  ---------  ---------------  ---------------  -------
  sol2       Patch 141445-09  Patch 141415-04  passed
  sol1       Patch 141445-09  Patch 141415-04  passed
Result: Operating system patch check passed for "Patch 141415-04"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  sol2                                  passed
  sol1                                  passed
Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
ERROR: PRVF-6402 : Core file name pattern is not same on all the nodes.
Found core filename pattern "core" on nodes "sol1".
Found core filename pattern "invalid process-id/usr/bin/grep: invalid process-id'init: invalid process-idcore: invalid process-idfile: invalid process-idpattern': invalid process-id" on nodes "sol2".
Core file name pattern consistency check failed.

Checking to make sure user "oracle" is not in "root" group
  Node Name  Status  Comment
  ---------  ------  --------------
  sol2       passed  does not exist
  sol1       passed  does not exist
Result: User "oracle" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name  Available  Required  Comment
  ---------  ---------  --------  -------
  sol2       022        0022      passed
  sol1       022        0022      passed
Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks
File "/etc/resolv.conf" is consistent across nodes

Check: Time zone consistency
Result: Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...
ERROR: PRVF-5449 : Check of Voting Disk location "/dev/vx/rdsk/ocrvotedg/votevol(/dev/vx/rdsk/ocrvotedg/votevol)" failed on the following nodes:
        sol2
        sol2:GetFileInfo command failed.
PRVF-5431 : Oracle Cluster Voting Disk configuration check failed

Clusterware version consistency passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
Pay close attention to the error messages in this output: the TCP connectivity failures (PRVF-7617), the failed free disk space and hard "maximum open file descriptors" limit checks, the core file name pattern inconsistency (PRVF-6402), and the voting disk check failure on sol2 (PRVF-5449). The NOTAVAIL/GetFileInfo failures on sol2 suggest cluvfy could not execute its remote checks properly on that node.
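Two of the failed checks can typically be fixed directly on Solaris 10. The following is a minimal sketch (run as root on the affected node; the 65536 value simply mirrors what cluvfy expects above, and the coreadm usage should be verified against your own system before applying):

# Raise the hard limit on open file descriptors (takes effect after a reboot).
echo "set rlim_fd_max=65536" >> /etc/system
echo "set rlim_fd_cur=65536" >> /etc/system

# Make the core file name pattern consistent across nodes (PRVF-6402):
# set the default per-process pattern on sol2 to plain "core" to match sol1,
# then apply the saved configuration immediately.
coreadm -i core
coreadm -u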
3. File-system backup of the CRS files
# tar -cvf /soft/backup.tar ./etc/init.d/init.cssd ./etc/init.d/init.crs ./etc/init.d/init.crsd ./etc/init.d/init.evmd ./var/opt/oracle ./etc/inittab
a ./etc/init.d/init.cssd 53K
a ./etc/init.d/init.crs 3K
a ./etc/init.d/init.crsd 5K
a ./etc/init.d/init.evmd 4K
a ./var/opt/oracle/ 0K
a ./var/opt/oracle/oraInst.loc 1K
a ./var/opt/oracle/oprocd/ 0K
a ./var/opt/oracle/oprocd/check/ 0K
a ./var/opt/oracle/oprocd/stop/ 0K
a ./var/opt/oracle/oprocd/fatal/ 0K
a ./var/opt/oracle/scls_scr/ 0K
a ./var/opt/oracle/scls_scr/sol1/ 0K
a ./var/opt/oracle/scls_scr/sol1/oracle/ 0K
a ./var/opt/oracle/scls_scr/sol1/oracle/cssfatal 1K
a ./var/opt/oracle/scls_scr/sol1/root/ 0K
a ./var/opt/oracle/scls_scr/sol1/root/crsstart 1K
a ./var/opt/oracle/scls_scr/sol1/root/cssrun 1K
a ./var/opt/oracle/scls_scr/sol1/root/crsdboot 1K
a ./var/opt/oracle/scls_scr/sol1/root/nooprocd 0K
a ./var/opt/oracle/scls_scr/sol1/root/noclsmon 0K
a ./var/opt/oracle/scls_scr/sol1/root/clsvmonpid 1K
a ./var/opt/oracle/scls_scr/sol1/root/clsomonpid 1K
a ./var/opt/oracle/scls_scr/sol1/root/cssfboot 1K
a ./var/opt/oracle/scls_scr/sol1/root/daemonpid 1K
a ./var/opt/oracle/ocr.loc 1K
a ./var/opt/oracle/oratab 1K
a ./etc/inittab 2K
Note: this backup must be taken on both nodes.
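Should the upgrade ever need to be rolled back, this archive can be restored on each node. A minimal sketch, assuming the tar command above was run from / (the paths stored in the archive are relative to /):

# cd /
# tar -xvf /soft/backup.tar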
4. Voting disk / OCR backup
$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     327452
         Used space (kbytes)      :       3292
         Available space (kbytes) :     324160
         ID                       :  530298431
         Device/File Name         : /dev/vx/rdsk/ocrvotedg/ocrvol
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded

$ crsctl query css votedisk
 0.     0    /dev/vx/rdsk/ocrvotedg/votevol

# vxprint -u
Disk group: ocrvotedg

TY  NAME         ASSOC       KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
dg  ocrvotedg    ocrvotedg   -        -       -       -       -       -
dm  diskc2_0     diskc2_0    -        989m    -       -       -       -
dm  diskc2_1     diskc2_1    -        989m    -       -       -       -
v   ocrvol       fsgen       ENABLED  320m    -       ACTIVE  -       -
pl  ocrvol-01    ocrvol      ENABLED  320m    -       ACTIVE  -       -
sd  diskc2_0-01  ocrvol-01   ENABLED  320m    0       -       -       -
pl  ocrvol-02    ocrvol      ENABLED  320m    -       ACTIVE  -       -
sd  diskc2_1-01  ocrvol-02   ENABLED  320m    0       -       -       -
v   votevol      fsgen       ENABLED  320m    -       ACTIVE  -       -
pl  votevol-01   votevol     ENABLED  320m    -       ACTIVE  -       -
sd  diskc2_0-02  votevol-01  ENABLED  320m    0       -       -       -
pl  votevol-02   votevol     ENABLED  320m    -       ACTIVE  -       -
sd  diskc2_1-02  votevol-02  ENABLED  320m    0       -       -       -

Here we can see that the voting disk and OCR volumes are both 320 MB, although nowhere near all of that space is actually in use.

$ dd if=/dev/vx/rdsk/ocrvotedg/votevol of=/soft/votevol
655360+0 records in
655360+0 records out
$ dd if=/dev/vx/rdsk/ocrvotedg/ocrvol of=/soft/ocrvol
655360+0 records in
655360+0 records out
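Alongside the raw dd images, a logical OCR export gives an additional fallback. A hedged sketch (the file names under /soft follow this article's convention and are assumptions):

# ocrconfig -export /soft/ocr_10204.exp
# If a rollback were ever required, the dd images taken above could be written back, e.g.:
#   dd if=/soft/votevol of=/dev/vx/rdsk/ocrvotedg/votevol
#   dd if=/soft/ocrvol of=/dev/vx/rdsk/ocrvotedg/ocrvol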
5. Start the upgrade
The existing database keeps running normally throughout the installation phase; nothing is affected until the rootupgrade.sh script is executed. We can therefore install the GI software first and run rootupgrade.sh later, during the planned downtime window.
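A minimal sketch of kicking off the GI installation as the oracle user (the DISPLAY value and media path are assumptions; in OUI, choose the upgrade option, and do not run rootupgrade.sh until the downtime window):

$ export DISPLAY=192.168.111.1:0.0   # assumed reachable X server
$ cd /soft/grid                      # assumed location of the unpacked 11.2.0.4 media
$ ./runInstaller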
6. Run the rootupgrade.sh script
Note that running rootupgrade.sh automatically shuts down the database.

Node 1:

# /u01/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

$ ps -ef|grep init
    root     1     0   0 13:27:20 ?           0:01 /sbin/init
    root   515     1   0 13:27:28 ?           0:00 /lib/svc/method/iscsi-initiator
    root 12183     1   0 14:30:15 ?           0:00 /bin/sh /etc/init.d/init.ohasd run
    root  5640     1   0 14:25:47 ?           0:00 /bin/sh /etc/init.d/init.tfa run

$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.4.0]

Node 2:

# /u01/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 11.2.0.4.0
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Note that the OCR contents are only updated when rootupgrade.sh runs on the last node. If rootupgrade.sh fails, we can simply rerun it; there is nothing that needs to be cleaned up first.
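During the rolling window, each node's installed software version can be checked independently of the cluster-wide active version. A small sketch using the new GI home:

$ /u01/app/11.2.0/grid/bin/crsctl query crs softwareversion sol1
$ /u01/app/11.2.0/grid/bin/crsctl query crs softwareversion sol2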
$ crsctl query crs activeversion
CRS active version on the cluster is [11.2.0.4.0]
We can see that the cluster has been upgraded to version 11.2.0.4.0.
Returning to the OUI installer window, a number of error messages are reported there. The failed configuration steps are recorded under:
/u01/app/11.2.0/grid/cfgtoollogs
$ ls -l configToolFailedCommands
-rwx------   1 oracle   dba          177 Sep  4 14:57 configToolFailedCommands
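configToolFailedCommands is a shell script containing the configuration-assistant commands that failed; after fixing the underlying cause it can be reviewed and rerun as the oracle user. A hedged sketch:

$ cd /u01/app/11.2.0/grid/cfgtoollogs
$ cat configToolFailedCommands   # review which steps failed
$ sh ./configToolFailedCommands  # rerun them once the cause is fixed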
Check the resources after the upgrade:
$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.registry.acfs
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol2
ora.cvu
      1        ONLINE  ONLINE       sol2
ora.oc4j
      1        ONLINE  ONLINE       sol1
ora.scan1.vip
      1        ONLINE  ONLINE       sol2
ora.sol.db
      1        ONLINE  ONLINE       sol1
ora.sol.sol1.inst
      1        ONLINE  ONLINE       sol1
ora.sol.sol2.inst
      1        OFFLINE OFFLINE
ora.sol1.LISTENER_SOL1.lsnr
      1        ONLINE  ONLINE       sol1
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.LISTENER_SOL2.lsnr
      1        ONLINE  ONLINE       sol2
ora.sol2.vip
      1        ONLINE  ONLINE       sol2
An ASM resource is listed here. Since this environment uses raw devices throughout, ASM is not needed and can be removed:
$ ./srvctl remove asm -f
$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.registry.acfs
               ONLINE  OFFLINE      sol1
               ONLINE  OFFLINE      sol2
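As a quick confirmation that ASM is really gone from the OCR, srvctl can be queried; an error stating that the resource does not exist is the expected outcome here (the exact message text varies):

$ ./srvctl config asm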
Disable the CRS autostart feature, since all resources are managed by SF for RAC:
# ./crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
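The autostart state can be confirmed with crsctl config crs, and reverted later with crsctl enable crs if SF for RAC ever stops managing startup. A short sketch (the reported message should mirror the CRS-4621 line above):

# ./crsctl config crs
# ./crsctl enable crs   # only if autostart should be restored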
Next, reboot the hosts.
After the reboot, check the status:
# hastatus -summary
-- SYSTEM STATE
-- System               State                Frozen
A  sol1                 RUNNING              0
A  sol2                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
B  cvm             sol1                 Y          N               ONLINE
B  cvm             sol2                 Y          N               ONLINE
B  oradb_grp       sol1                 Y          N               ONLINE
B  oradb_grp       sol2                 Y          N               ONLINE
We can see that all of the SF for RAC resource groups are back online.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.registry.acfs
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol2
ora.cvu
      1        ONLINE  ONLINE       sol2
ora.oc4j
      1        ONLINE  ONLINE       sol2
ora.scan1.vip
      1        ONLINE  ONLINE       sol2
ora.sol.db
      1        ONLINE  ONLINE       sol1
ora.sol.sol1.inst
      1        ONLINE  ONLINE       sol1
ora.sol.sol2.inst
      1        ONLINE  ONLINE       sol2
ora.sol1.LISTENER_SOL1.lsnr
      1        ONLINE  ONLINE       sol1
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.LISTENER_SOL2.lsnr
      1        ONLINE  ONLINE       sol2
ora.sol2.vip
      1        ONLINE  ONLINE       sol2

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.111.46 netmask ffffff00 broadcast 192.168.111.255
        ether 0:c:29:5a:e5:7a
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 192.168.111.48 netmask ffffff00 broadcast 192.168.111.255
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.112.46 netmask ffffff00 broadcast 192.168.112.255
        ether 0:c:29:5a:e5:84
e1000g1:1: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 3
        inet 169.254.27.77 netmask ffff0000 broadcast 169.254.255.255

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.111.47 netmask ffffff00 broadcast 192.168.111.255
        ether 0:c:29:50:1b:43
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 192.168.111.50 netmask ffffff00 broadcast 192.168.111.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 192.168.111.49 netmask ffffff00 broadcast 192.168.111.255
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.112.47 netmask ffffff00 broadcast 192.168.112.255
        ether 0:c:29:50:1b:4d
e1000g1:1: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 3
        inet 169.254.140.162 netmask ffff0000 broadcast 169.254.255.255
The database resources are back to normal as well.
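As a final check, the database can also be queried through srvctl from the 10.2 database home (database name sol, as registered above); a brief sketch:

$ /u01/app/oracle/product/10.2.0/db_1/bin/srvctl status database -d sol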