Environment: SFRAC 5.1, OS: RHEL 5, DB: 11.2.0.3 (GI)
              | Before migration | After migration
Hostnames     | rac31/rac32      | rac41/rac42
DB/GI version | 11.2.0.3         | 11.2.0.3.7
Only the migration steps are described below; all work on the source database and the installation of the new environment are omitted.
1. Migrate the disks to the new hosts. This is normally done jointly by the system and storage engineers.
The disks have been successfully presented to the host. Three new disks, disk_2, disk_3, and disk_4, are now visible; by default they are named after the array enclosure.
[root@rac41 ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:cdsdisk    disk_0       newocrvotedg online shared
disk_1       auto:cdsdisk    disk_1       newocrvotedg online shared
disk_2       auto:cdsdisk    -            -            online shared
disk_3       auto:cdsdisk    -            -            online shared
disk_4       auto:cdsdisk    -            -            online shared
sda          auto:none       -            -            online invalid
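As a hypothetical helper (not part of the original procedure), the newly presented LUNs can be picked out of the `vxdisk list` output automatically, which is handy when many disks arrive at once. The function below is a minimal sketch that reads the listing on stdin; the sample here-document mimics the output above.

```shell
#!/bin/sh
# Sketch: print devices that are online and shared but not yet assigned to
# any VxVM disk group (DISK and GROUP columns both "-").
unassigned_shared_disks() {
    # Columns: DEVICE TYPE DISK GROUP STATUS ("online shared" = fields 5+6)
    awk '$3 == "-" && $4 == "-" && $5 == "online" && $6 == "shared" {print $1}'
}

# Sample input; in real use: vxdisk list | unassigned_shared_disks
unassigned_shared_disks <<'EOF'
disk_0       auto:cdsdisk    disk_0       newocrvotedg online shared
disk_2       auto:cdsdisk    -            -            online shared
disk_3       auto:cdsdisk    -            -            online shared
sda          auto:none       -            -            online invalid
EOF
```

On the sample data this prints disk_2 and disk_3, matching the three-disk listing above minus the already-grouped devices.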
2. Import the disk group
Note: because the disk group is being imported from another cluster environment, the first import fails with the error below; the old cluster information must be cleared (the -C flag).
[root@rac41 ~]# vxdg -s import ocrvotedg
VxVM vxdg ERROR V-5-1-10978 Disk group ocrvotedg: import failed:
Disk in use by another cluster
[root@rac41 ~]# vxdg -s -C import ocrvotedg
[root@rac41 ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:cdsdisk    disk_0       newocrvotedg online shared
disk_1       auto:cdsdisk    disk_1       newocrvotedg online shared
disk_2       auto:cdsdisk    disk_01      ocrvotedg    online shared
disk_3       auto:cdsdisk    -            -            online shared
disk_4       auto:cdsdisk    disk_11      ocrvotedg    online shared
sda          auto:none       -            -            online invalid
[root@rac42 ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:cdsdisk    disk_0       newocrvotedg online shared
disk_1       auto:cdsdisk    disk_1       newocrvotedg online shared
disk_2       auto:cdsdisk    disk_01      ocrvotedg    online shared
disk_3       auto:cdsdisk    -            -            online shared
disk_4       auto:cdsdisk    disk_11      ocrvotedg    online shared
sda          auto:none       -            -            online invalid
3. Mount the file system manually
[root@rac41 ~]# mount -t vxfs -o cluster /dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote
[root@rac41 ~]# df
/                  (/dev/mapper/rootvg-rootlv): 28640496 blocks  8181736 files
/proc              (proc     ):        0 blocks        0 files
/sys               (sysfs    ):        0 blocks        0 files
/dev/pts           (devpts   ):        0 blocks        0 files
/boot              (/dev/sda1):   165910 blocks    26070 files
/dev/shm           (tmpfs    ):  4329064 blocks   541132 files
/proc/sys/fs/binfmt_misc (none):        0 blocks        0 files
/dev/vx            (tmpfs    ):        8 blocks  2097031 files
/dev/odm           (none     ):        0 blocks        0 files
/newocrvote        (/dev/vx/dsk/newocrvotedg/newocrvotevol):  1121058 blocks   149470 files
/ocrvote           (/dev/vx/dsk/ocrvotedg/ocrvotevol):  2911486 blocks   388160 files
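Before proceeding it is worth confirming on every node that the cluster file system really is mounted at the expected path. The sketch below is a hypothetical check, not from the original post: it scans a mount table (in real use, /proc/mounts) for a VxFS entry at the given mount point.

```shell
#!/bin/sh
# Sketch: exit 0 if a vxfs file system is mounted at $1, nonzero otherwise.
# Mount table expected on stdin (device mountpoint fstype options ...).
is_vxfs_mounted() {
    awk -v mp="$1" '$2 == mp && $3 == "vxfs" {found=1} END {exit !found}'
}

# Sample input; in real use: is_vxfs_mounted /ocrvote < /proc/mounts
if is_vxfs_mounted /ocrvote <<'EOF'
/dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote vxfs rw,cluster 0 0
EOF
then echo mounted; else echo missing; fi
```

Run the same check on rac42 as well, since `mount -o cluster` should make the file system visible cluster-wide.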
4. Add the OCR and voting disk files
Note: when adding an OCR file, the file name must be touched first or the command fails; when adding a voting disk it is the opposite, the file must not exist beforehand, and touching it first also causes an error.
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -add /ocrvote/ocr3
[root@rac41 ~]# touch /ocrvote/ocr4
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -add /ocrvote/ocr4
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/crsctl add css votedisk /ocrvote/vote3
Now formatting voting disk: /ocrvote/vote3.
CRS-4603: Successful addition of voting disk /ocrvote/vote3.
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/crsctl add css votedisk /ocrvote/vote4
Now formatting voting disk: /ocrvote/vote4.
CRS-4603: Successful addition of voting disk /ocrvote/vote4.
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -delete /newocrvote/newocr1
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name             Disk group
--  -----    -----------------                ---------             ---------
 1. ONLINE   622b58d0dd204f24bf9e8e4d63e36d77 (/newocrvote/newvote) []
 2. ONLINE   a537bc21f93e4f35bf7e4b96f2275de5 (/ocrvote/vote3)      []
 3. ONLINE   80d9d3cdd7974f4abf92fe1e94d068b1 (/ocrvote/vote4)      []
Located 3 voting disk(s).
Delete the voting disk that originally belonged to the new environment:
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/crsctl delete css votedisk 622b58d0dd204f24bf9e8e4d63e36d77
CRS-4611: Successful deletion of voting disk 622b58d0dd204f24bf9e8e4d63e36d77.
Voting disks and OCR files after reconfiguration:
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   a537bc21f93e4f35bf7e4b96f2275de5 (/ocrvote/vote3) []
 2. ONLINE   80d9d3cdd7974f4abf92fe1e94d068b1 (/ocrvote/vote4) []
Located 2 voting disk(s).
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2640
         Available space (kbytes) :     259480
         ID                       : 1749825622
         Device/File Name         : /ocrvote/ocr3
                                    Device/File integrity check succeeded
         Device/File Name         : /ocrvote/ocr4
                                    Device/File integrity check succeeded
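During add/delete sequences like this it is easy to end up with an unintended voting-disk count, so a quick count of the ONLINE entries is a useful sanity check (Oracle generally recommends an odd number). This is a hypothetical helper, not from the original post; it parses `crsctl query css votedisk` output fed on stdin.

```shell
#!/bin/sh
# Sketch: count voting disks whose STATE column reads ONLINE.
online_votedisks() {
    # Data rows look like " 1. ONLINE <FUID> (<path>) []"; field 2 is STATE.
    awk '$2 == "ONLINE" {n++} END {print n+0}'
}

# Sample mimicking the final state above; in real use:
#   $GRID_HOME/bin/crsctl query css votedisk | online_votedisks
online_votedisks <<'EOF'
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   a537bc21f93e4f35bf7e4b96f2275de5 (/ocrvote/vote3) []
 2. ONLINE   80d9d3cdd7974f4abf92fe1e94d068b1 (/ocrvote/vote4) []
EOF
```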
5. Update the SFRAC resource group definitions
Since I am not very familiar with SFRAC, I stopped SFRAC and edited the main.cf file by hand; if you know SFRAC well, the same changes can be made online with the VCS commands.
[root@rac41 config]# hacf -verify ./
[root@rac41 config]# hastop -all
[root@rac41 config]# hastart
[root@rac41 config]# cat main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CRSResource.cf"
include "CVMTypes.cf"
include "Db2udbTypes.cf"
include "MultiPrivNIC.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"
include "SybaseTypes.cf"

cluster rac4_cluster (
        UserNames = { admin = aHIaHChEIdIIgQIcHF }
        Administrators = { admin }
        HacliUserLevel = COMMANDROOT
        )

system rac41 (
        )

system rac42 (
        )

group cvm (
        SystemList = { rac41 = 0, rac42 = 1 }
        AutoFailOver = 0
        Parallel = 1
        AutoStartList = { rac41, rac42 }
        )

        Application cssd (
                Critical = 0
                StartProgram = "/opt/VRTSvcs/rac/bin/cssd-online"
                StopProgram = "/opt/VRTSvcs/rac/bin/cssd-offline"
                CleanProgram = "/opt/VRTSvcs/rac/bin/cssd-clean"
                MonitorProgram = "/opt/VRTSvcs/rac/bin/cssd-monitor"
                OnlineWaitLimit = 5
                )

        CFSMount ocrvote_mnt_ocrvotedg (
                Critical = 0
                MountPoint = "/ocrvote"
                BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
                MountOpt = "mincache=direct"
                )

        CFSfsckd vxfsckd (
                )

        CVMCluster cvm_clus (
                CVMClustName = rac4_cluster
                CVMNodeId = { rac41 = 0, rac42 = 1 }
                CVMTransport = gab
                CVMTimeout = 200
                )

        CVMVolDg ocrvote_voldg_ocrvotedg (
                CVMDiskGroup = ocrvotedg
                CVMVolume = { ocrvotevol }
                CVMActivation = sw
                )

        CVMVxconfigd cvm_vxconfigd (
                Critical = 0
                CVMVxconfigdArgs = { syslog }
                )

        PrivNIC ora_priv (
                Critical = 0
                Device @rac41 = { eth1 = 0, eth2 = 1 }
                Device @rac42 = { eth1 = 0, eth2 = 1 }
                Address @rac41 = "192.168.112.62"
                Address @rac42 = "192.168.112.63"
                NetMask = "255.255.255.0"
                )

        cssd requires ocrvote_mnt_ocrvotedg
        cssd requires ora_priv
        cvm_clus requires cvm_vxconfigd
        ocrvote_mnt_ocrvotedg requires ocrvote_voldg_ocrvotedg
        ocrvote_mnt_ocrvotedg requires vxfsckd
        ocrvote_voldg_ocrvotedg requires cvm_clus
        vxfsckd requires cvm_clus

        // resource dependency tree
        //
        //      group cvm
        //      {
        //      Application cssd
        //          {
        //          CFSMount ocrvote_mnt_newocrvotedg
        //              {
        //              CVMVolDg ocrvote_voldg_newocrvotedg
        //                  {
        //                  CVMCluster cvm_clus
        //                      {
        //                      CVMVxconfigd cvm_vxconfigd
        //                      }
        //                  }
        //              CFSfsckd vxfsckd
        //                  {
        //                  CVMCluster cvm_clus
        //                      {
        //                      CVMVxconfigd cvm_vxconfigd
        //                      }
        //                  }
        //              }
        //          PrivNIC ora_priv
        //          }
        //      }
6. Resource status after the change
All resources are online and healthy:
[root@rac41 ~]# hastatus
attempting to connect....
attempting to connect....connected

group           resource                 system               message
--------------- ------------------------ -------------------- ----------
                                         rac41                RUNNING
                                         rac42                RUNNING
cvm                                      rac41                ONLINE
cvm                                      rac42                ONLINE
-------------------------------------------------------------------------
                cssd                     rac41                ONLINE
                cssd                     rac42                ONLINE
                ocrvote_mnt_ocrvotedg    rac41                ONLINE
                ocrvote_mnt_ocrvotedg    rac42                ONLINE
                vxfsckd                  rac41                ONLINE
-------------------------------------------------------------------------
                vxfsckd                  rac42                ONLINE
                cvm_clus                 rac41                ONLINE
                cvm_clus                 rac42                ONLINE
                ocrvote_voldg_ocrvotedg  rac41                ONLINE
                ocrvote_voldg_ocrvotedg  rac42                ONLINE
-------------------------------------------------------------------------
                cvm_vxconfigd            rac41                ONLINE
                cvm_vxconfigd            rac42                ONLINE
                ora_priv                 rac41                ONLINE
                ora_priv                 rac42                ONLINE
[root@rac41 ~]# hares -state
#Resource                Attribute   System  Value
cssd                     State       rac41   ONLINE
cssd                     State       rac42   ONLINE
cvm_clus                 State       rac41   ONLINE
cvm_clus                 State       rac42   ONLINE
cvm_vxconfigd            State       rac41   ONLINE
cvm_vxconfigd            State       rac42   ONLINE
ocrvote_mnt_ocrvotedg    State       rac41   ONLINE
ocrvote_mnt_ocrvotedg    State       rac42   ONLINE
ocrvote_voldg_ocrvotedg  State       rac41   ONLINE
ocrvote_voldg_ocrvotedg  State       rac42   ONLINE
ora_priv                 State       rac41   ONLINE
ora_priv                 State       rac42   ONLINE
vxfsckd                  State       rac41   ONLINE
vxfsckd                  State       rac42   ONLINE
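Eyeballing long `hares -state` listings is error-prone, so a small filter that prints only the resources that are *not* ONLINE can be handy; a healthy cluster then produces no output at all. This is a hypothetical helper, not part of the original procedure.

```shell
#!/bin/sh
# Sketch: print "<resource> <system> <state>" for every non-ONLINE entry
# in `hares -state` output read from stdin. Skips the header row.
not_online() {
    awk 'NR > 1 && $4 != "ONLINE" {print $1, $3, $4}'
}

# Sample with one faulted resource; in real use: hares -state | not_online
not_online <<'EOF'
#Resource                Attribute   System  Value
cssd                     State       rac41   ONLINE
ocrvote_mnt_ocrvotedg    State       rac42   OFFLINE
EOF
```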
At this point the GI resources have been migrated successfully.
7. Migrate the database disk group
[root@rac41 ~]# vxdg list
NAME         STATE              ID
oradatadg    enabled,shared,cds 1379019287.17.rac31
newocrvotedg enabled,shared,cds 1379029700.8.rac41
ocrvotedg    enabled,shared,cds 1379012329.15.rac31
[root@rac41 ~]# mkdir /oradata
[root@rac41 ~]# chmod 777 /oradata
Mount the file system manually:
[root@rac41 ~]# mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/oradatalv1 /oradata
Register the database resource in GI. Note that we register it with -y MANUAL, because in an SFRAC environment database startup and shutdown are handled by SFRAC.
[oracle@rac41 rac3]$ srvctl add database -d rac3 -o $ORACLE_HOME -i rac31 -p /oradata/rac3/spfilerac3.ora -r primary -s open -t immediate -n rac3 -y MANUAL
[oracle@rac41 rac3]$ srvctl config database -d rac3 -a
Database unique name: rac3
Database name: rac3
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: /oradata/rac3/spfilerac3.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: MANUAL
Server pools: rac3
Database instances:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
Register the instances:
[oracle@rac41 rac3]$ srvctl add instance -d rac3 -i rac31 -n rac41
[oracle@rac41 rac3]$ srvctl add instance -d rac3 -i rac32 -n rac42
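Because the MANUAL policy is what keeps Clusterware from fighting SFRAC over startup and shutdown, it is worth verifying it after registration. The sketch below is a hypothetical check, not from the original post; it extracts the management policy from `srvctl config database -d <db> -a` output on stdin.

```shell
#!/bin/sh
# Sketch: print the "Management policy" value from srvctl config output.
mgmt_policy() {
    awk -F': ' '/^Management policy:/ {print $2}'
}

# Sample mimicking the output above; in real use:
#   srvctl config database -d rac3 -a | mgmt_policy
mgmt_policy <<'EOF'
Database unique name: rac3
Management policy: MANUAL
Database role: PRIMARY
EOF
```

If this prints AUTOMATIC instead of MANUAL, the policy can be changed with `srvctl modify database`.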
8. Upgrade the database to 11.2.0.3.7
[oracle@rac41 ~]$ echo $ORACLE_SID
rac31
[oracle@rac41 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 14 06:13:22 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create pfile='/tmp/123.ora' from spfile='/oradata/rac3/spfilerac3.ora';

File created.

Create the adump directory manually:
[oracle@rac41 ~]$ mkdir -p /u01/app/oracle/admin/rac3/adump
The generated pfile /tmp/123.ora:
rac32.__db_cache_size=197132288
rac31.__db_cache_size=197132288
rac32.__java_pool_size=4194304
rac31.__java_pool_size=4194304
rac32.__large_pool_size=4194304
rac31.__large_pool_size=4194304
rac32.__pga_aggregate_target=134217728
rac31.__pga_aggregate_target=134217728
rac32.__sga_target=394264576
rac31.__sga_target=394264576
rac32.__shared_io_pool_size=0
rac31.__shared_io_pool_size=0
rac32.__shared_pool_size=180355072
rac31.__shared_pool_size=180355072
rac32.__streams_pool_size=0
rac31.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/rac3/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='/oradata/rac3/control01.ctl','/oradata/rac3/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='rac3'
*.db_recovery_file_dest='/oradata'
*.db_recovery_file_dest_size=4558159872
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac3XDB)'
rac31.instance_number=1
rac32.instance_number=2
*.open_cursors=300
*.pga_aggregate_target=131072000
*.processes=150
# remote_listener must be updated to the new SCAN, rac4-scan:1521, per /etc/hosts
*.remote_listener='rac4-scan:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=393216000
rac32.thread=2
rac31.thread=1
rac31.undo_tablespace='UNDOTBS1'
rac32.undo_tablespace='UNDOTBS2'

SQL> startup pfile='/tmp/123.ora';
ORACLE instance started.

Total System Global Area  392495104 bytes
Fixed Size                  2228704 bytes
Variable Size             188747296 bytes
Database Buffers          197132288 bytes
Redo Buffers                4386816 bytes
Database mounted.
Database opened.
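When a database moves to a new cluster, a few pfile parameters (remote_listener above, but also control_files and audit_file_dest) need to be checked against the new environment. As a hypothetical helper, not part of the original procedure, a one-liner can pull a given parameter out of the pfile for review:

```shell
#!/bin/sh
# Sketch: print the value of cluster-wide parameter $1 (the "*.name=" form)
# from a pfile read on stdin, with surrounding quotes stripped.
pfile_param() {
    grep "^\*\.$1=" | cut -d= -f2- | tr -d "'"
}

# Sample lines from the pfile above; in real use:
#   pfile_param remote_listener < /tmp/123.ora
pfile_param remote_listener <<'EOF'
*.db_name='rac3'
*.remote_listener='rac4-scan:1521'
*.processes=150
EOF
```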
SQL> @?/rdbms/admin/catbundle.sql psu apply

PL/SQL procedure successfully completed.

SQL> select * from dba_registry_history;

ACTION_TIME                    ACTION     NAMESPACE  VERSION            ID BUNDLE_SERIES   COMMENTS
------------------------------ ---------- ---------- ---------- ---------- --------------- --------------------
17-SEP-11 10.21.11.595816 AM   APPLY      SERVER     11.2.0.3            0 PSU             Patchset 11.2.0.2.0
13-SEP-13 05.50.28.516658 AM   APPLY      SERVER     11.2.0.3            0 PSU             Patchset 11.2.0.2.0
14-SEP-13 06.21.02.191577 AM   APPLY      SERVER     11.2.0.3            7 PSU             PSU 11.2.0.3.7

Recreate the spfile:
[root@rac41 ~]# mv /oradata/rac3/spfilerac3.ora /oradata/rac3/spfilerac3.ora.29130913
SQL> create spfile='/oradata/rac3/spfilerac3.ora' from pfile='/tmp/123.ora';

File created.

[oracle@rac41 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 14 06:46:00 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> startup force;

[oracle@rac41 ~]$ srvctl status instance -d rac3 -i rac31
Instance rac31 is running on node rac41
[oracle@rac41 ~]$ srvctl status instance -d rac3 -i rac32
Instance rac32 is running on node rac42

The dba_registry_history output confirms the upgrade to PSU 11.2.0.3.7 succeeded.
9. Change the SCAN IP address back to the one used in the original environment
[grid@rac41 ~]$ crsctl stop resource ora.LISTENER_SCAN1.lsnr
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac42'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac42' succeeded
[grid@rac41 ~]$ crsctl stop resource ora.scan1.vip
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac42'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac42' succeeded
[grid@rac41 ~]$ srvctl config scan
SCAN name: rac4-scan, Network: 1/192.168.111.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac4-scan/192.168.111.66
[grid@rac41 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Change the SCAN address from 192.168.111.66 to 192.168.111.67 by editing /etc/hosts:
[root@rac42 RAC3]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#############public ip###############
192.168.111.62  rac41
192.168.111.63  rac42
#############public-vip##############
192.168.111.64  rac41-vip
192.168.111.65  rac42-vip
#############scan####################
192.168.111.67  rac4-scan
#############private#################
192.168.112.62  rac41-priv
192.168.112.63  rac42-priv
192.168.113.62  rac411-priv
192.168.113.63  rac421-priv
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/srvctl modify scan -n rac4-scan
[root@rac41 ~]# /u01/app/11.2.0/grid/bin/srvctl config scan
SCAN name: rac4-scan, Network: 1/192.168.111.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac4-scan/192.168.111.67
[grid@rac41 ~]$ crsctl start resource ora.LISTENER_SCAN1.lsnr
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac42'
CRS-2676: Start of 'ora.scan1.vip' on 'rac42' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac42'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac42' succeeded
The new address is now in effect:
[root@rac42 RAC3]# ifconfig -a | grep addr
eth0      Link encap:Ethernet  HWaddr 00:0C:29:09:51:ED
          inet addr:192.168.111.63  Bcast:192.168.111.255  Mask:255.255.255.0
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:09:51:ED
          inet addr:192.168.111.67  Bcast:192.168.111.255  Mask:255.255.255.0
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:09:51:ED
          inet addr:192.168.111.65  Bcast:192.168.111.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 00:0C:29:09:51:F7
          inet addr:192.168.112.63  Bcast:192.168.112.255  Mask:255.255.255.0
eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:09:51:F7
          inet addr:169.254.27.25  Bcast:169.254.255.255  Mask:255.255.0.0
eth2      Link encap:Ethernet  HWaddr 00:0C:29:09:51:01
          inet addr:192.168.113.63  Bcast:192.168.113.255  Mask:255.255.255.0
          inet addr:127.0.0.1  Mask:255.0.0.0
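The hosts-file edit above can also be scripted. The sketch below is a hypothetical helper, not from the original post: it rewrites a line that starts with the old address, reading an /etc/hosts-style file on stdin so the real file is only replaced once the output has been reviewed.

```shell
#!/bin/sh
# Sketch: replace the old IP at the start of a hosts line with the new one.
# Note: the dots in $1 are unescaped regex metacharacters; fine for a quick
# sketch, but escape them for production use.
update_scan_ip() {
    # $1 = old IP, $2 = new IP; hosts file on stdin
    sed "s/^$1[[:space:]]/$2 /"
}

# Sample entries; in real use:
#   update_scan_ip 192.168.111.66 192.168.111.67 < /etc/hosts > /tmp/hosts.new
update_scan_ip 192.168.111.66 192.168.111.67 <<'EOF'
192.168.111.66 rac4-scan
192.168.111.62 rac41
EOF
```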
The whole migration is now complete. There is actually one remaining step: adding a database resource group to SFRAC.
Title: Migrating an 11.2.0.3 RAC in an SFRAC environment to new hosts and upgrading to 11.2.0.3.7