
Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure

        It has been a long time since I last migrated a single-instance database to RAC, so this is a refresher. The scenario simulates a customer site: a single-instance database on raw devices converted directly to RAC. The test below was done on Linux 5.8 and should be treated as a reference only.

1. Environment

Hardware platform and version

www.htz.pw > select * from v$version where rownum=1;

BANNER

----------------------------------------------------------------

Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi

www.htz.pw > !uname -a

Linux test1 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

Hostname and IP address plan

 

Item         Host 1 (existing, database running)    Host 2 (newly added)
Hostname     test1                                  test2
Public IP    192.168.111.50                         192.168.111.51
VIP          192.168.111.55                         192.168.111.56
Storage      RAW                                    RAW

2. Configure the hosts file

echo "192.168.111.50 test1
192.168.111.51 test2

192.168.111.55 test1-vip
192.168.111.56 test2-vip

192.168.112.50 test1-priv
192.168.112.51 test2-priv" >> /etc/hosts

3. Configure raw devices

Check the number and size of the partitions on the disk:

[root@test2.htz.pw ~]# partx /dev/sdb

# 1:        63-   401624 (   401562 sectors,    205 MB)

# 2:    401625-   803249 (   401625 sectors,    205 MB)

# 3:    803250-  1204874 (   401625 sectors,    205 MB)

# 4:   1204875- 20964824 ( 19759950 sectors,  10117 MB)

# 5:   1204938-  2586464 (  1381527 sectors,    707 MB)

# 6:   2586528-  3968054 (  1381527 sectors,    707 MB)

# 7:   3968118-  5349644 (  1381527 sectors,    707 MB)

# 8:   5349708-  6731234 (  1381527 sectors,    707 MB)

# 9:   6731298-  8112824 (  1381527 sectors,    707 MB)

#10:   8112888-  8321669 (   208782 sectors,    106 MB)

#11:   8321733-  8530514 (   208782 sectors,    106 MB)

#12:   8530578-  8739359 (   208782 sectors,    106 MB)

#13:   8739423-  8948204 (   208782 sectors,    106 MB)

#14:   8948268-  9157049 (   208782 sectors,    106 MB)

#15:   9157113-  9365894 (   208782 sectors,    106 MB)
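Note that the MB figures partx prints are decimal megabytes, i.e. sectors × 512 / 10^6. A quick sanity check against the listing above:

```shell
#!/bin/sh
# partx sizes are decimal MB: sectors * 512 bytes / 10^6.
sectors=401562                        # partition 1 in the listing above
mb=$(( sectors * 512 / 1000000 ))     # integer division
echo "${mb} MB"                       # prints "205 MB", matching partx
```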

Configure the 50-udev rules file to set the owner and permissions of the raw devices:

[root@test2.htz.pw  rules.d]# grep oracle 50-udev.rules

KERNEL=="raw[0-9]*",            NAME="raw/%k"   owner="oracle" group="dba"

Add the raw binding rules:

[root@test1.htz.pw  rules.d]# cat 60-raw.rules

# Enter raw device bindings here.

#

# An example would be:

#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"

# to bind /dev/raw/raw1 to /dev/sda, or

#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"

# to bind /dev/raw/raw2 to the device with major 8, minor 1.

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw5 %N"

ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw6 %N"

ACTION=="add", KERNEL=="sdb7", RUN+="/bin/raw /dev/raw/raw7 %N"

ACTION=="add", KERNEL=="sdb8", RUN+="/bin/raw /dev/raw/raw8 %N"

ACTION=="add", KERNEL=="sdb9", RUN+="/bin/raw /dev/raw/raw9 %N"

ACTION=="add", KERNEL=="sdb10", RUN+="/bin/raw /dev/raw/raw10 %N"

ACTION=="add", KERNEL=="sdb11", RUN+="/bin/raw /dev/raw/raw11 %N"

ACTION=="add", KERNEL=="sdb12", RUN+="/bin/raw /dev/raw/raw12 %N"

ACTION=="add", KERNEL=="sdb13", RUN+="/bin/raw /dev/raw/raw13 %N"

ACTION=="add", KERNEL=="sdb14", RUN+="/bin/raw /dev/raw/raw14 %N"

ACTION=="add", KERNEL=="sdb15", RUN+="/bin/raw /dev/raw/raw15 %N"
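The fourteen near-identical bindings can also be generated with a short loop instead of typed by hand. A sketch (it writes to /tmp so the result can be reviewed before being copied into /etc/udev/rules.d/60-raw.rules, and it skips sdb4, the extended partition, which has no binding above):

```shell
#!/bin/sh
# Generate the raw-binding rules for sdb1..sdb15, skipping the
# extended partition sdb4, exactly as the hand-written file does.
for n in 1 2 3 5 6 7 8 9 10 11 12 13 14 15; do
    printf 'ACTION=="add", KERNEL=="sdb%d", RUN+="/bin/raw /dev/raw/raw%d %%N"\n' "$n" "$n"
done > /tmp/60-raw.rules
wc -l < /tmp/60-raw.rules    # 14 rules
```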

Verify that the raw bindings took effect:

[root@test2.htz.pw rules.d]# start_udev

Starting udev: [  OK  ]

[root@test2.htz.pw rules.d]# raw -qa

/dev/raw/raw1:  bound to major 8, minor 17

/dev/raw/raw2:  bound to major 8, minor 18

/dev/raw/raw3:  bound to major 8, minor 19

/dev/raw/raw5:  bound to major 8, minor 21

/dev/raw/raw6:  bound to major 8, minor 22

/dev/raw/raw7:  bound to major 8, minor 23

/dev/raw/raw8:  bound to major 8, minor 24

/dev/raw/raw9:  bound to major 8, minor 25

/dev/raw/raw10: bound to major 8, minor 26

/dev/raw/raw11: bound to major 8, minor 27

/dev/raw/raw12: bound to major 8, minor 28

/dev/raw/raw13: bound to major 8, minor 29

/dev/raw/raw14: bound to major 8, minor 30

/dev/raw/raw15: bound to major 8, minor 31

[root@test2.htz.pw rules.d]# ls -l /dev/raw/*

crw------- 1 oracle dba 162,  1 Dec  4 22:49 /dev/raw/raw1

crw------- 1 oracle dba 162, 10 Dec  4 22:49 /dev/raw/raw10

crw------- 1 oracle dba 162, 11 Dec  4 22:49 /dev/raw/raw11

crw------- 1 oracle dba 162, 12 Dec  4 22:49 /dev/raw/raw12

crw------- 1 oracle dba 162, 13 Dec  4 22:49 /dev/raw/raw13

crw------- 1 oracle dba 162, 14 Dec  4 22:49 /dev/raw/raw14

crw------- 1 oracle dba 162, 15 Dec  4 22:49 /dev/raw/raw15

crw------- 1 oracle dba 162,  2 Dec  4 22:49 /dev/raw/raw2

crw------- 1 oracle dba 162,  3 Dec  4 22:49 /dev/raw/raw3

crw------- 1 oracle dba 162,  5 Dec  4 22:49 /dev/raw/raw5

crw------- 1 oracle dba 162,  6 Dec  4 22:49 /dev/raw/raw6

crw------- 1 oracle dba 162,  7 Dec  4 22:49 /dev/raw/raw7

crw------- 1 oracle dba 162,  8 Dec  4 22:49 /dev/raw/raw8

crw------- 1 oracle dba 162,  9 Dec  4 22:49 /dev/raw/raw9

4. OS kernel parameters

The kernel parameter settings are omitted here; they, like the steps above, should normally be done by the system administrators. NTP configuration is likewise omitted.
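For reference only, a typical /etc/sysctl.conf fragment for 10gR2 on RHEL 5 looks like the sketch below. These are the documented minimums, not the values used in this test; kernel.shmmax in particular must be sized to the host's memory.

```
# Typical Oracle 10gR2 minimums for RHEL 5 -- verify against your host
kernel.shmmax = 2147483648
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```

Apply the changes with `sysctl -p` after editing.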

5. Configure SSH user equivalence

The sshUserSetup.sh script shipped with 11g is used to set up SSH equivalence; on 10g, configuring rlogin instead works just as well.

[oracle@test1.htz.pw soft]$ ./sshUserSetup.sh -user oracle -hosts 'test1 test2' -advanced -exverify

6. Create directories

[root@test2.htz.pw rules.d]# mkdir /oracle

[root@test2.htz.pw rules.d]# chown oracle:dba /oracle

7. Install CRS

Run runInstaller to install CRS; the GUI steps are omitted. The root scripts to be executed are shown below; run them in order.

[root@test1.htz.pw ~]# /oracle/app/oracle/oraInventory/orainstRoot.sh

[root@test2.htz.pw ~]# /oracle/app/oracle/oraInventory/orainstRoot.sh

[root@test1.htz.pw ~]# /oracle/app/oracle/oraInventory/orainstRoot.sh

Changing permissions of /oracle/app/oracle/oraInventory to 770.

Changing groupname of /oracle/app/oracle/oraInventory to dba.

The execution of the script is complete

[root@test1.htz.pw ~]# /oracle/app/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

 

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: test1 test1-priv test1

node 2: test2 test2-priv test2

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /dev/raw/raw2

Format of 1 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        test1

CSS is inactive on these nodes.

        test2

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

On host 2, patch two files to work around bug 3937317: search for the LD_ASSUME_KERNEL block and add `unset LD_ASSUME_KERNEL` after it.

[oracle@test2.htz.pw ~]$ vi $ORA_CRS_HOME/bin/vipca

/2.4

       #Remove this workaround when the bug 3937317 is fixed

       arch=`uname -m`

       if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]

       then

            LD_ASSUME_KERNEL=2.4.19

            export LD_ASSUME_KERNEL

       fi

            unset LD_ASSUME_KERNEL

 

[oracle@test2.htz.pw ~]$ vi $ORA_CRS_HOME/bin/srvctl

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

unset LD_ASSUME_KERNEL

 

Then, as root, run:

[root@test2.htz.pw rules.d]# /oracle/app/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

 

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: test1 test1-priv test1

node 2: test2 test2-priv test2

clscfg: Arguments check out successfully.

 

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        test1

        test2

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

Error 0(Native: listNetInterfaces:[3])

  [Error 0(Native: listNetInterfaces:[3])]

 

[root@test2.htz.pw rules.d]# /oracle/app/oracle/product/10.2.0/crs/bin/oifcfg  setif -global eth0/192.168.111.0:public

[root@test2.htz.pw rules.d]# /oracle/app/oracle/product/10.2.0/crs/bin/oifcfg  setif -global eth1/192.168.112.0:cluster_interconnect

[root@test2.htz.pw rules.d]# export DISPLAY=192.168.111.1:0.0

[root@test2.htz.pw rules.d]# /oracle/app/oracle/product/10.2.0/crs/bin/vipca

When the VIPCA window comes up, just click Next through to the end.

The CRS stack after installation:

[root@test2.htz.pw bin]# ./crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.test1.gsd  application    ONLINE    ONLINE    test1      

ora.test1.ons  application    ONLINE    ONLINE    test1      

ora.test1.vip  application    ONLINE    ONLINE    test1      

ora.test2.gsd  application    ONLINE    ONLINE    test2      

ora.test2.ons  application    ONLINE    ONLINE    test2      

ora.test2.vip  application    ONLINE    ONLINE    test2

8. Upgrade CRS to 10.2.0.5

The upgrade itself is done through runInstaller and is omitted here; only the scripts that must be run after the upgrade are shown.

[root@test1.htz.pw ~]# /oracle/app/oracle/product/10.2.0/crs/bin/crsctl stop crs

Stopping resources.

Successfully stopped CRS resources

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@test1.htz.pw ~]# /oracle/app/oracle/product/10.2.0/crs/install/root102.sh

Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /oracle/app/oracle/product/10.2.0/crs

Relinking some shared libraries.

Relinking of patched files is complete.

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

Preparing to recopy patched init and RC scripts.

Recopying init and RC scripts.

Startup will be queued to init within 30 seconds.

Starting up the CRS daemons.

Waiting for the patched CRS daemons to start.

  This may take a while on some systems.

.

10205 patch successfully applied.

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully deleted 1 values from OCR.

Successfully deleted 1 keys from OCR.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: test1 test1-priv test1

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

clscfg -upgrade completed successfully

Creating '/oracle/app/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration

Setting CRS configuration values in /oracle/app/oracle/product/10.2.0/crs/install/paramfile.crs

 

 

[root@test2.htz.pw bin]# /oracle/app/oracle/product/10.2.0/crs/bin/crsctl stop crs

Stopping resources.

Successfully stopped CRS resources

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@test2.htz.pw bin]# /oracle/app/oracle/product/10.2.0/crs/install/root102.sh

Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /oracle/app/oracle/product/10.2.0/crs

Relinking some shared libraries.

Relinking of patched files is complete.

WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/oracle/app/oracle/product' is not owned by root

WARNING: directory '/oracle/app/oracle' is not owned by root

WARNING: directory '/oracle/app' is not owned by root

WARNING: directory '/oracle' is not owned by root

Preparing to recopy patched init and RC scripts.

Recopying init and RC scripts.

Startup will be queued to init within 30 seconds.

Starting up the CRS daemons.

Waiting for the patched CRS daemons to start.

  This may take a while on some systems.

.

10205 patch successfully applied.

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully deleted 1 values from OCR.

Successfully deleted 1 keys from OCR.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 2: test2 test2-priv test2

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

clscfg -upgrade completed successfully

Creating '/oracle/app/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration

Setting CRS configuration values in /oracle/app/oracle/product/10.2.0/crs/install/paramfile.crs

 

[oracle@test2.htz.pw ~]$ crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.5.0]

9. Install the database software on host 2

Copy the ORACLE_HOME directory from host 1 to the same path on host 2, then run `relink all` to complete the installation.

[oracle@test1.htz.pw 10.2.0]$ rsync  -avlR db 192.168.111.51:/oracle/app/oracle/product/10.2.0

 

Relink the binaries

 

[oracle@test2.htz.pw bin]$ ./relink all

10. Update the inventory

Attach the ORACLE_HOME to the inventory on host 2.

Host 2:

[oracle@test2.htz.pw bin]$ cd $ORACLE_HOME/bin

[oracle@test2 bin]$ $ORACLE_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=test1,test2   "INVENTORY_LOCATION=/oracle/app/oracle/oraInventory"

$ORACLE_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=test1,test2 ORACLE_HOME_NAME="OraDb10g_home1"  "INVENTORY_LOCATION=/oracle/app/oracle/oraInventory"

Starting Oracle Universal Installer…

 

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/app/oracle/oraInventory

Please execute the 'null' script at the end of the session.

'AttachHome' was successful.

Host 1:

[oracle@test1.htz.pw bin]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=test1,test2   "INVENTORY_LOCATION=/oracle/app/oracle/oraInventory"

Starting Oracle Universal Installer…

 

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/app/oracle/oraInventory

'UpdateNodeList' was successful.

Check the inventory.xml file:

[oracle@test1.htz.pw ContentsXML]$ cat inventory.xml

<?xml version="1.0" standalone="yes" ?>

<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->

<!-- Do not modify the contents of this file by hand. -->

<INVENTORY>

<VERSION_INFO>

   <SAVED_WITH>10.2.0.5.0</SAVED_WITH>

   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>

</VERSION_INFO>

<HOME_LIST>

<HOME NAME="OraDb10g_home1" LOC="/oracle/app/oracle/product/10.2.0/db" TYPE="O" IDX="1">

   <NODE_LIST>

      <NODE NAME="test1"/>

      <NODE NAME="test2"/>

   </NODE_LIST>

</HOME>

<HOME NAME="OraCrs10g_home1" LOC="/oracle/app/oracle/product/10.2.0/crs" TYPE="O" IDX="2" CRS="true">

   <NODE_LIST>

      <NODE NAME="test1"/>

      <NODE NAME="test2"/>

   </NODE_LIST>

</HOME>

</HOME_LIST>

</INVENTORY>

11. Shut down the database

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

12. Enable the RAC option (relink)

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on

If this step did not fail with fatal errors, continue:

make -f ins_rdbms.mk ioracle

Verify that the relink succeeded (kcsm.o present means the RAC option is on):

[oracle@test1.htz.pw lib]$ ar -t $ORACLE_HOME/rdbms/lib/libknlopt.a|grep kcsm.o

kcsm.o

13. Configure the listener

Use netca to configure the listener (omitted here). Also configure tnsnames.ora with the following content:

LISTENERS_RAC =

  (ADDRESS_LIST =

    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.111.55)(PORT = 1521))

    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.111.56)(PORT = 1521))

  )
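Beyond LISTENERS_RAC, clients also need a connect alias for the database itself. None is shown in the source, so the entry below is an assumed sketch (the alias name RAC and service name rac are assumptions based on the db_name used later):

```
RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.111.55)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.111.56)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac)
    )
  )
```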

14. Configure the parameter file

Because there were not enough raw partitions, the parameter file is kept on local storage here. In production it should go on a shared device, with each node's init file simply pointing at the spfile location.

[oracle@test1.htz.pw dbs]$ strings /oracle/app/oracle/admin/rac/pfile/init.ora|grep -v '#'

db_block_size=8192

db_file_multiblock_read_count=16

open_cursors=300

db_domain=””

db_name=rac

background_dump_dest=/oracle/app/oracle/admin/rac/bdump

core_dump_dest=/oracle/app/oracle/admin/rac/cdump

user_dump_dest=/oracle/app/oracle/admin/rac/udump

control_files=("/dev/raw/raw13", "/dev/raw/raw3")

db_recovery_file_dest=/oracle/app/oracle/flash_recovery_area

db_recovery_file_dest_size=2147483648

job_queue_processes=10

compatible=10.2.0.5.0

processes=150

sga_target=1209008128

audit_file_dest=/oracle/app/oracle/admin/rac/adump

remote_login_passwordfile=EXCLUSIVE

dispatchers="(PROTOCOL=TCP) (SERVICE=racXDB)"

pga_aggregate_target=402653184

*.cluster_database = TRUE

*.cluster_database_instances = 2

*.undo_management=AUTO

rac1.undo_tablespace=undotbs1

rac1.instance_name=rac1

rac1.instance_number=1

rac1.thread=1

rac1.local_listener=LISTENERS_RAC

rac2.instance_name=rac2

rac2.instance_number=2

rac2.local_listener=LISTENERS_RAC

rac2.thread=2

rac2.undo_tablespace=UNDOTBS2

Additionally, add:

rac2.remote_listener=LISTENERS_RAC

rac1.remote_listener=LISTENERS_RAC
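If the spfile is moved to shared storage as recommended above, each node's local init file reduces to a one-line pointer. A sketch (the raw device below is hypothetical; one of the unused shared partitions would be chosen):

```
# $ORACLE_HOME/dbs/initrac1.ora on test1 (initrac2.ora on test2 is the same)
# /dev/raw/rawN is a hypothetical shared raw partition holding the spfile
spfile='/dev/raw/rawN'
```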

15. Add a redo thread and log groups

Start the database to MOUNT with the previous parameter file, then:

www.htz.pw > alter database

  2  add logfile thread 2

  3  group 4 ('/dev/raw/raw15') size 50M;

 

Database altered.

www.htz.pw > alter database

  2  add logfile thread 2

  3  group 5 ('/dev/raw/raw14') size 50M;

 

Database altered.

 

www.htz.pw > alter database open;

 

Database altered.

 

www.htz.pw > alter database enable public thread 2;

 

Database altered.

 

www.htz.pw > CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE

  2  '/dev/raw/raw8' SIZE 100M ;

 

Tablespace created.

16. Run the cluster catalog script

www.htz.pw > @$ORACLE_HOME/rdbms/admin/catclust.sql

Package created.

Package body created.

 

17. Operations on node 2

17.1 Configure tnsnames.ora

The content can simply match node 1.

17.2 Create the dump/FRA directories

[oracle@test2.htz.pw rac]$ mkdir adump  bdump  cdump  dpdump  pfile  udump

www.htz.pw > !mkdir -p /oracle/app/oracle/flash_recovery_area

17.3 Start the database

www.htz.pw > startup

18. Register the database resources with CRS

[oracle@test1.htz.pw rac]$ srvctl add database -d rac -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfilerac.ora

[oracle@test1.htz.pw rac]$ srvctl add instance -d rac -i rac1 -n test1

[oracle@test1.htz.pw rac]$ srvctl add instance -d rac -i rac2 -n test2

 

 

[oracle@test1.htz.pw dbs]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.rac.db     application    ONLINE    ONLINE    test1     

ora….c1.inst application    ONLINE    ONLINE    test1     

ora….c2.inst application    ONLINE    ONLINE    test2      

ora….T1.lsnr application    ONLINE    ONLINE    test1    

ora.test1.gsd  application    ONLINE    ONLINE    test1      

ora.test1.ons  application    ONLINE    ONLINE    test1    

ora.test1.vip  application    ONLINE    ONLINE    test1     

ora….T2.lsnr application    ONLINE    ONLINE    test2      

ora.test2.gsd  application    ONLINE    ONLINE    test2      

ora.test2.ons  application    ONLINE    ONLINE    test2      

ora.test2.vip  application    ONLINE    ONLINE    test2

Permalink: http://www.htz.pw/2014/12/05/convert-10g-single-instance-database-to-10g-rac-using-manual-conversion-procedure.html

Posted by huangtingzhong on 2014-12-05 in the MIGRATE category.