-- Implemented on the IBM AIX 6.1 platform
-- Upgrade from 11.2.0.2 to 11.2.0.3
-- Oracle Clusterware 11.2.0.4
-- Source Oracle software version = 11.2.0.2
-- Target Oracle software version = 11.2.0.3
About this document:
This document describes how to migrate data with RMAN from one cluster database to another, moving from a lower to a higher version of Oracle 11gR2. It also covers creating all the required clusterware services for the newly restored database.
Basic Checks:
1) Check OS and Oracle versions on both sides.
2) Check v$transportable_platform ( endian format ):
SQL> select * from v$transportable_platform;
3) Collect all required user passwords, such as root, oracle and grid (if applicable).
# export ORACLE_HOME=/oracle/app/11.2.0/grid
# export PATH=$ORACLE_HOME/bin:$PATH
=========================Create pfile on the new server=======================
========================// initPROD1.ora //==============================
*._keep_remote_column_size=TRUE
*.audit_file_dest='/oracle/app/oracle/admin/PROD/adump'
*.audit_trail='db'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.control_files='+DATA/PROD/controlfile/current.272.759561861','+FRA/PROD/controlfile/current.388.759561863'
*.cursor_sharing='EXACT'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='PROD'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=268435456000
*.diagnostic_dest='/oracle/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRODXDB)'
PROD2.instance_number=2
PROD1.instance_number=1
*.log_archive_start=TRUE  # obsolete in 11g; causes the ORA-32004 warning at startup
*.memory_target=6871318528
*.nls_date_format='DD-MON-YYYY HH24:MI:SS'
*.open_cursors=300
*.processes=500
*.remote_listener='PRDDB-SCAN-IP:1521'
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_manager_plan=''
*.sec_case_sensitive_logon=FALSE
*.service_names='PROD'
*.session_cached_cursors=100
*.sessions=555
PROD2.thread=2
PROD1.thread=1
PROD1.undo_tablespace='UNDOTBS1'
PROD2.undo_tablespace='UNDOTBS2'
===========================// initPROD1.ora //========================
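Because this pfile is hand-edited and copied between servers, it is worth scanning it for stray control characters before use: a pasted NUL byte is the classic cause of the "LRM-00123: invalid character 0" startup failure shown near the end of this document. A minimal sketch; the `check_pfile` helper name and demo file are hypothetical:

```shell
# Hypothetical helper: count non-printable bytes in a pfile. A non-zero
# count (e.g. a pasted NUL byte) would later fail startup with LRM-00123.
check_pfile() {
  bad=$(LC_ALL=C tr -d '[:print:][:space:]' < "$1" | wc -c)
  if [ "$bad" -gt 0 ]; then
    echo "BAD: $1 has $bad non-printable byte(s)"
    return 1
  fi
  echo "OK: $1 looks clean"
}

# demo on a throwaway file so the sketch is self-contained
printf "db_name='PROD'\n" > /tmp/demo_pfile.ora
check_pfile /tmp/demo_pfile.ora   # prints "OK: /tmp/demo_pfile.ora looks clean"
```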
Steps:
=================
1) Take an RMAN backup on the source side and transfer it to the target side. (Make sure controlfile autobackup is on.)
2) Create a pfile on the source side and move it to the target side.
3) Create the required directory structures in ASM for the control, redo and data files.
4) Change the required parameters in the pfile (audit, controlfiles, FRA, etc.).
5) Be sure of the following changes:
*.cluster_database=false #( default true)
*.audit_file_dest='/oracle/app/oracle/admin/PROD/adump'
*.control_files='+DATA/PROD/controlfile/current.272.759561861','+FRA/PROD/controlfile/current.388.759561863'
*.remote_listener='test-cluster-scan:1521'
*.diagnostic_dest='/oracle/app/oracle'
etc.
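The parameter edits in steps 4 and 5 can be scripted. A sketch, assuming GNU sed (for `-i`) and that each parameter already appears on exactly one line of the transferred pfile; the demo input below just makes the sketch self-contained:

```shell
# Sketch: force the restore-time values into the transferred pfile in place.
# Assumes GNU sed (-i) and one existing line per parameter.
pfile=/tmp/initPROD1.ora

# demo input so the sketch runs anywhere
cat > "$pfile" <<'EOF'
*.cluster_database=true
*.remote_listener='PRDDB-SCAN-IP:1521'
EOF

sed -i \
  -e "s/^\*\.cluster_database=.*/*.cluster_database=false/" \
  -e "s/^\*\.remote_listener=.*/*.remote_listener='test-cluster-scan:1521'/" \
  "$pfile"

grep -E 'cluster_database|remote_listener' "$pfile"
```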
6) Stop cluster services on node 2 ( crsctl stop crs, as root, on node 2 ).
7) Start the instance in NOMOUNT:
SQL> create spfile from pfile='/oracle/app/oracle/product/11.2.0/dbhome_1/dbs/initPROD1.ora';
File created.
SQL> startup nomount;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
.....
SQL> quit
bash-4.2$ rman target /
RMAN> set DBID=62083845; ( Collect from the source database; the DBID also appears in the controlfile autobackup name, c-62083845-... )
executing command: SET DBID
RMAN> restore controlfile from '/oradb/PROD_bkp/c-62083845-20131129-01';
RMAN> startup mount;
RMAN> catalog start with '/oradb/PROD_bkp'; ( Path where all backup pieces to be restored are kept )
-- Run your SET NEWNAME / restore script. ( Generate the new names from the source database )
RMAN> RUN
{
allocate channel ch1 type disk;
SET NEWNAME FOR DATAFILE 1 TO '+DATA/PROD/datafile/system.278.769561767';
SET NEWNAME FOR DATAFILE 2 TO '+DATA/PROD/datafile/sysaux.279.769561767';
....
restore database;
switch datafile all;
release channel ch1;
}
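Rather than typing one SET NEWNAME line per datafile, the list can be generated from a listing spooled from v$datafile on the source. A sketch; the two-column `file#,target_path` CSV layout is an assumption for illustration, not the author's script:

```shell
# Sketch: emit RMAN SET NEWNAME commands from "file#,target_path" pairs.
# /tmp/datafiles.csv stands in for a hypothetical spool from v$datafile.
cat > /tmp/datafiles.csv <<'EOF'
1,+DATA/PROD/datafile/system.278.769561767
2,+DATA/PROD/datafile/sysaux.279.769561767
EOF

awk -F, '{ printf "SET NEWNAME FOR DATAFILE %s TO '\''%s'\'';\n", $1, $2 }' /tmp/datafiles.csv
```

The generated lines can then be pasted into the RUN block above.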
RMAN>
8) Recover the database
RMAN> catalog start with '/oradb/PROD_bkp/archivelogs';
( Path where all archive logs needed for recovery are kept )
RMAN> recover database;
9) Open with upgrade / follow the upgrade process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> alter database open resetlogs upgrade;
( if that does not work ), then
SQL> shut immediate;
SQL> startup upgrade;
-- Download the following file from oracle.com and run it immediately after startup upgrade:
-- utlu112i_3.sql
SQL> @utlu112i_3.sql
SQL> @catupgrd.sql
SQL> shut immediate;
SQL> startup;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
...
Database mounted.
Database opened.
SQL> @utlu112s.sql
SQL> @?/rdbms/admin/catuppst.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(1) from dba_objects where status='INVALID';
Note: Collect the counts from both source and target databases. The target's valid object count should be equal to or higher than the source's, and the target's invalid object count should be the same or lower.
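The note above boils down to comparing two numbers. A sketch with hypothetical placeholder counts:

```shell
# Sketch: the upgrade is acceptable when the target's invalid-object count
# is no higher than the source's. Both counts below are placeholders.
src_invalid=12   # from the query on the source database
tgt_invalid=9    # from the same query on the target database

if [ "$tgt_invalid" -le "$src_invalid" ]; then
  echo "PASS: invalid objects did not increase"
else
  echo "CHECK: rerun utlrp.sql and investigate the new invalid objects"
fi
```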
10) Create a common spfile:
Make the following additions to the pfile:
SPFILE='+DATA/PROD/spfilePROD.ora'
*.cluster_database=true
11) SQL> shut immediate;
12) Create parameter file in ASM
SQL> create spfile='+DATA/PROD/spfilePROD.ora' from pfile='/oracle/app/oracle/product/11.2.0/dbhome_1/dbs/initPROD1.ora';
File created.
Create a pfile on node 2 in the $ORACLE_HOME/dbs location and keep only the parameter below:
$ cd $ORACLE_HOME/dbs
$ cat initPROD2.ora
SPFILE='+DATA/PROD/spfilePROD.ora'
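The node 2 pointer file must contain that single SPFILE line and nothing else. A sketch that writes and verifies it; `DBS_DIR` is a stand-in for $ORACLE_HOME/dbs so the sketch runs anywhere:

```shell
# Sketch: create the one-line pointer pfile for node 2 and verify it.
DBS_DIR=/tmp/dbs_demo   # stand-in for $ORACLE_HOME/dbs
mkdir -p "$DBS_DIR"

printf "SPFILE='+DATA/PROD/spfilePROD.ora'\n" > "$DBS_DIR/initPROD2.ora"

# exactly one line, and it must be the SPFILE pointer
wc -l < "$DBS_DIR/initPROD2.ora"
cat "$DBS_DIR/initPROD2.ora"
```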
13) Try to start up
SQL> startup;
....
Database mounted.
Database opened.
SQL>
SQL> show parameter spfile;
NAME TYPE VALUE
------------- ----------- -------------------
spfile string +DATA/PROD/spfilePROD.ora
14) Start CRS services on node 2: switch to the root user and start
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# export ORACLE_HOME=/oracle/app/11.2.0/grid
# export PATH=$ORACLE_HOME/bin:$PATH
# ./crsctl start crs
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
( Wait; it takes time for all services to come 'online'. )
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
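The "wait until online" step can be automated by counting the "is online" lines that `crsctl check crs` prints; all four stacks (HAS, CRS, CSS, EVM) must report online. A sketch that parses captured output so it runs without a live cluster:

```shell
# Sketch: count how many CRS stacks report online in `crsctl check crs` output.
crs_online_count() { grep -c 'is online'; }

# demo on a captured sample instead of a live cluster
sample='CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

printf '%s\n' "$sample" | crs_online_count   # prints 4
```

On a live node this could drive a wait loop such as `until [ "$(crsctl check crs | crs_online_count)" -eq 4 ]; do sleep 10; done`.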
15) add database and instance to cluster
$ srvctl add database -d PROD -o /oracle/app/oracle/product/11.2.0/dbhome_1 ( oracle home)
$ srvctl config database -d PROD
$ srvctl add instance -d PROD -i PROD1 -n testclient1
( testclient1 is the hostname of node-1)
$ srvctl add instance -d PROD -i PROD2 -n testclient2
$ srvctl add service -d PROD -s node1.db -r PROD1
( Use hostname in case of node1)
$ srvctl add service -d PROD -s node2.db -r PROD2
$ srvctl start database -d PROD
Now confirm:
$ srvctl status database -d PROD
Instance PROD1 is running on node testclient1
Instance PROD2 is running on node testclient2
16) Do the following activities:
a) Create the password file on both nodes.
Go to the $ORACLE_HOME/dbs location:
$ orapwd force=y file=orapwPROD1 password=sys$
$ orapwd force=y file=orapwPROD2 password=sys$
b) Add the SCAN IP(s) to the client's hosts file:
$ cat /etc/hosts
test-cluster-scan ( SCAN hostname / IP )
c) Gather schema stats and dictionary stats (run the schema stats for each application schema):
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<SCHEMA_NAME>');
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
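To run step 16c for several application schemas, the EXEC commands can be generated in a loop; the schema names here are hypothetical placeholders:

```shell
# Sketch: generate gather-stats commands for a list of schemas.
# SCOTT and HR are placeholder schema names; substitute your own.
for s in SCOTT HR; do
  echo "EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '$s');"
done
echo "EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;"
```

The generated lines can be spooled to a .sql file and run in SQL*Plus.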
17) Final verification of all RAC services
# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.DATA.dg
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.FRA.dg
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.LISTENER.lsnr
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.asm
ONLINE ONLINE testclient1 Started
ONLINE ONLINE testclient2 Started
ora.gsd
OFFLINE OFFLINE testclient1
OFFLINE OFFLINE testclient2
ora.net1.network
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.ons
ONLINE ONLINE testclient1
ONLINE ONLINE testclient2
ora.registry.acfs
ONLINE UNKNOWN testclient1
ONLINE ONLINE testclient2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE testclient1
ora.PROD.db
1 ONLINE ONLINE testclient1 Open
2 ONLINE ONLINE testclient2 Open
ora.PROD.node1.db.svc
1 ONLINE ONLINE testclient1
ora.PROD.node2.db.svc
1 ONLINE ONLINE testclient2
ora.cvu
1 ONLINE ONLINE testclient1
ora.oc4j
1 ONLINE ONLINE testclient1
ora.scan1.vip
1 ONLINE ONLINE testclient1
ora.testclient1.vip
1 ONLINE ONLINE testclient1
ora.testclient2.vip
1 ONLINE ONLINE testclient2
-- A sample from when I upgraded from 11.2.0.3 to 11.2.0.4
-- Upgrading the catalog and the DBA registry
SQL> startup upgrade;
ORACLE instance started.
.....
Database mounted.
Database opened.
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD READ WRITE
SQL>
-- Now see the dba registry values of all database components:
set line 120;
column COMP_NAME format a35;
column VERSION format a10;
column status format a10;
select COMP_NAME, VERSION,status from dba_registry;
COMP_NAME VERSION STATUS
----------------------------------- ---------- ----------
Oracle Enterprise Manager 11.2.0.3.0 VALID
Oracle XML Database 11.2.0.3.0 VALID
Oracle Text 11.2.0.3.0 VALID
Oracle Expression Filter 11.2.0.3.0 VALID
Oracle Rules Manager 11.2.0.3.0 VALID
Oracle Workspace Manager 11.2.0.3.0 VALID
Oracle Database Catalog Views 11.2.0.3.0 VALID
Oracle Database Packages and Types 11.2.0.3.0 VALID
JServer JAVA Virtual Machine 11.2.0.3.0 VALID
Oracle XDK 11.2.0.3.0 VALID
Oracle Database Java Packages 11.2.0.3.0 VALID
Oracle Real Application Clusters 11.2.0.3.0 VALID
12 rows selected.
Note: Here the database components are not yet upgraded.
-- Upgrade the catalog
SQL> @?/rdbms/admin/catupgrd.sql
Note: After the catalog upgrade the database is shut down automatically, so do startup upgrade again.
-- Now check the versions of the database components:
SQL> startup upgrade;
ORACLE instance started.
......
Database mounted.
Database opened.
SQL>
SQL> set line 120;
column COMP_NAME format a35;
column VERSION format a10;
column status format a10;
select COMP_NAME, VERSION,status from dba_registry;
COMP_NAME VERSION STATUS
----------------------------------- ---------- ----------
Oracle Enterprise Manager 11.2.0.4.0 VALID
Oracle XML Database 11.2.0.4.0 VALID
Oracle Text 11.2.0.4.0 VALID
Oracle Expression Filter 11.2.0.4.0 VALID
Oracle Rules Manager 11.2.0.4.0 VALID
Oracle Workspace Manager 11.2.0.4.0 VALID
Oracle Database Catalog Views 11.2.0.4.0 VALID
Oracle Database Packages and Types 11.2.0.4.0 VALID
JServer JAVA Virtual Machine 11.2.0.4.0 VALID
Oracle XDK 11.2.0.4.0 VALID
Oracle Database Java Packages 11.2.0.4.0 VALID
COMP_NAME VERSION STATUS
----------------------------------- ---------- ----------
Oracle Real Application Clusters 11.2.0.4.0 VALID
12 rows selected.
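Spooling the dba_registry query to a file makes this post-upgrade check scriptable: no component should still report the pre-upgrade version string. A sketch; `registry.txt` is a hypothetical SQL*Plus spool of the query above:

```shell
# Sketch: fail if any dba_registry row still shows the pre-upgrade version.
OLD=11.2.0.3.0
cat > /tmp/registry.txt <<'EOF'
Oracle XML Database                 11.2.0.4.0 VALID
Oracle Real Application Clusters    11.2.0.4.0 VALID
EOF

if grep -qF "$OLD" /tmp/registry.txt; then
  echo "FAIL: some components are still at $OLD"
else
  echo "OK: no component left at $OLD"
fi
```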
A common issue:
After restoring and recovering the database in a two-node RAC, the node 2 instance failed to start with the error below:
Example:
$ srvctl start database -d PROD
PRCR-1079 : Failed to start resource ora.prod.db
CRS-5017: The resource action "ora.prod.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00123: invalid character 0 found in the input file
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/racdb5/agent/crsd/oraagent_oracle//oraagent_oracle.log".
CRS-2674: Start of 'ora.ehishyd.db' on 'racdb5' failed
CRS-2632: There are no more servers to try to place resource 'ora.ehishyd.db' on that would satisfy its placement policy
$ srvctl status database -d PROD
Instance PROD2 is not running on node racdb5
Instance PROD1 is running on node racdb6
$
I verified and found that the spfile is in ASM and common to all nodes, but the local parameter file on node 2 was not set properly. I changed it as below, and the node 2 instance then started.
Change :
$vi initPROD2.ora
SPFILE='+DATA/PROD/spfilePROD.ora'
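To see exactly which byte LRM-00123 was complaining about, dump the file with `od -c`; a NUL byte shows up as `\0` in the output. A sketch on a demo copy of the file:

```shell
# Sketch: inspect a pfile byte by byte. A clean SPFILE pointer line should
# show only printable characters and \n; a pasted NUL appears as \0.
printf "SPFILE='+DATA/PROD/spfilePROD.ora'\n" > /tmp/initPROD2_demo.ora
od -c /tmp/initPROD2_demo.ora | head
```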
Hope this document helps. Don't forget to post comments....