Oct 29, 2013

Solution : ORA-01442: column to be modified to NOT NULL is already NOT NULL

-- When you run the command below, the following error is raised because the column is already NOT NULL, so you cannot add NOT NULL directly with MODIFY. You can follow the method below to add a named NOT NULL constraint instead.

sql> alter table HR.ADDRESSDETAILS modify UID not null;

ORA-01442: column to be modified to NOT NULL is already NOT NULL

Solution:
Syntax:
ALTER TABLE table_name ADD CONSTRAINT constraint_name CHECK(COLUMN_NAME IS NOT NULL);
Example:
ALTER TABLE HR.ADDRESSDETAILS  ADD CONSTRAINT nn_uid_hr_addrsdtls CHECK(UID IS NOT NULL) ;
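To confirm the constraint afterwards, you can query DBA_CONSTRAINTS (a quick check; the names match the example above):

SELECT constraint_name, constraint_type, search_condition
  FROM dba_constraints
 WHERE owner = 'HR'
   AND table_name = 'ADDRESSDETAILS'
   AND constraint_name = 'NN_UID_HR_ADDRSDTLS';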

Thanks, Cheers !!!

Oct 26, 2013

Process vs Sessions parameter values in Oracle

~ Process vs Sessions parameter values in Oracle
~ Setting Sessions and Transaction parameter values in Oracle 11g with respect to process parameter values.
~ Applicable to different oracle version as described.

-- Formula (up to Oracle version 11.2.0.x, I believe)

processes=x
sessions=x*1.1+5
transactions=sessions*1.1
-- To View all
$sqlplus '/as sysdba'
sql> show parameter sessions;
sql> show parameter processes;
sql> show parameter transactions;

-- To set (using the usual process)

alter system set processes=1000 scope=spfile;
alter system set sessions=1105 scope=spfile;
alter system set transactions=1216 scope=spfile;

See Document: http://docs.oracle.com/cd/B28359_01/server.111/b28320/initparams220.htm

-- In Oracle 11.2.0.3.x onwards

-- Deviation (formula I concluded from observation) - may be a bug
-- When I set the processes value and bounce the database, the sessions and transactions values are set automatically

In 11.2.0.3 the formula is:

sessions=(No_of_processes*1.5)+26

i.e.,
processes=x
sessions=x*1.5+26
transactions=sessions*1.1

Example:
If processes=500, then sessions will be 776 and transactions will be 853.
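A quick way to compare the actual values with the formula above is a query against v$parameter (a sketch; rounding may differ slightly from what Oracle applies internally):

SELECT p.value AS processes,
       s.value AS sessions,
       t.value AS transactions,
       TRUNC(TO_NUMBER(p.value) * 1.5 + 26)              AS sessions_per_formula,
       TRUNC(TRUNC(TO_NUMBER(p.value) * 1.5 + 26) * 1.1) AS transactions_per_formula
  FROM v$parameter p, v$parameter s, v$parameter t
 WHERE p.name = 'processes'
   AND s.name = 'sessions'
   AND t.name = 'transactions';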

-- When viewed in one of our production database environments:

SQL> show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> show parameter session;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
java_max_sessionspace_size           integer     0
java_soft_sessionspace_limit         integer     0
license_max_sessions                 integer     0
license_sessions_warning             integer     0
session_cached_cursors               integer     50
session_max_open_files               integer     10
sessions                             integer     776
shared_server_sessions               integer
SQL> show parameter transaction;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
transactions                         integer     853
transactions_per_rollback_segment    integer     5

* Some sites also explain similar issues / bugs; please go through the link below. My observations deviate a little from theirs.

http://hoopercharles.wordpress.com/2012/02/08/sessions-derived-value-formula-doc-bug-explains/

Note: When the spfile is in a common location (in ASM) for a RAC database, make the change with the database in MOUNT stage and then bounce the instances.

Thanks

Grants to run Tuning advisory by schema in Oracle


grant advisor to HR;
grant administer sql tuning set to HR;
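With these grants in place, HR can create and run a tuning task with DBMS_SQLTUNE. A minimal sketch (the SQL text and task name below are only examples):

DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create a tuning task for an example statement (60-second time limit)
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'SELECT * FROM hr.employees WHERE department_id = 50',
              task_name  => 'hr_emp_tuning_task',
              time_limit => 60);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- View the recommendations
SET LONG 100000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('hr_emp_tuning_task') FROM dual;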

See the Oracle Database Performance Tuning Guide for more on the SQL Tuning Advisor.
Thanks

Oct 25, 2013

DDL queries for all materialized view indexes

~ Get DDL for all materialized view indexes in Oracle
~ (to be collected from the source database)
~ Requirement: When a database is migrated / restored using the expdp/impdp method, there is a chance of missing indexes in some scenarios. Indexes created explicitly for materialized views cannot be seen in the table / mview definition, so you can follow the dynamic query-building approach below to collect the DDL scripts.
OR
When a materialized view is dropped and re-created, the indexes created on it must be taken care of and re-created as well. The following query can be used to collect the DDL scripts from the source environment and run them again if any index is missed during the drop and re-create of the materialized view.
~ Applicable to all Oracle versions.

-- Step 1: If the index is not owned by the table owner
select 'select dbms_metadata.get_ddl(''INDEX'',''' || INDEX_NAME || ''',''' ||
       OWNER || ''') from dual;'
  from all_indexes
 where owner != table_owner
   and table_owner not like '%SYS%';


-- All mview Index DDL - to generate GET_DDL script

select 'select dbms_metadata.get_ddl(''INDEX'',''' || INDEX_NAME || ''',''' ||
       OWNER || ''') from dual;' from all_indexes a where a.table_name in (
select v.MVIEW_NAME from all_mviews v )
and index_name not like '%PK%';

e.g., select dbms_metadata.get_ddl('INDEX','INDX_SERVICEID','BILLING') from dual;

-- Step 2: spool the output of the query above, then run the spooled GET_DDL statements to produce the CREATE INDEX scripts, e.g.:
set heading off;
set echo off;
set pages 999;
set long 90000;
spool c:\temp\ehishyd.log

select 'select dbms_metadata.get_ddl(''INDEX'',''' || INDEX_NAME || ''',''' ||
       OWNER || ''') from dual;' from all_indexes a where a.table_name in (
select v.MVIEW_NAME from all_mviews v )
and index_name not like '%PK%';

spool off;

Run the spooled GET_DDL queries against the source database to generate the CREATE INDEX statements, and then run those statements in the target database environment.
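As a cross-check (a sketch using ALL_INDEXES and ALL_MVIEWS), you can list the indexes that sit on materialized-view tables and compare the list between source and target after the import:

SELECT i.owner, i.index_name, i.table_name
  FROM all_indexes i
  JOIN all_mviews  v ON v.mview_name = i.table_name AND v.owner = i.table_owner
 WHERE i.index_name NOT LIKE '%PK%'
 ORDER BY i.owner, i.table_name, i.index_name;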
Thanks

Oct 23, 2013

Find & repair Corrupt block in Oracle database

~ Find & repair Corrupt block in Oracle
~ Applies to any Oracle version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Find if any block (Table) is corrupted.

SQL> select * from v$database_block_corruption;  -- shows any corrupted blocks

The query below maps the corrupted blocks to their owning segments:

set head on; 
set pagesize 2000 
set linesize 250 
select * from v$database_block_corruption;
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file# 
, greatest(e.block_id, c.block#) corr_start_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) 
- greatest(e.block_id, c.block#) + 1 blocks_corrupted 
, null description 
FROM dba_extents e, v$database_block_corruption c 
WHERE e.file_id = c.file# 
AND e.block_id <= c.block# + c.blocks - 1 
AND e.block_id + e.blocks - 1 >= c.block# 
UNION 
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file# 
, header_block corr_start_block# 
, header_block corr_end_block# 
, 1 blocks_corrupted 
, 'Segment Header' description 
FROM dba_segments s, v$database_block_corruption c 
WHERE s.header_file = c.file# 
AND s.header_block between c.block# and c.block# + c.blocks - 1 
UNION 
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file# 
, greatest(f.block_id, c.block#) corr_start_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) 
- greatest(f.block_id, c.block#) + 1 blocks_corrupted 
, 'Free Block' description 
FROM dba_free_space f, v$database_block_corruption c 
WHERE f.file_id = c.file# 
AND f.block_id <= c.block# + c.blocks - 1 
AND f.block_id + f.blocks - 1 >= c.block# 
order by file#, corr_start_block#; 


1) Collect the file IDs
sql> select distinct file_id from dba_extents;
2) Collect segment details for the corrupted block (the file range and block number in the query below come from the v$database_block_corruption output)
sql>
SELECT file_id,segment_name,segment_type,owner,tablespace_name,block_id,blocks
FROM   sys.dba_extents
WHERE  (file_id between 1 and 23) AND 658438 BETWEEN block_id AND block_id + blocks - 1;
3) Repair - if the corrupted segment is a table (see the sketch after this list)

a) Copy all data to a temporary table and collect the DDL script and grants.
b) Drop the table and re-create it with the DDL script. (Disable referencing foreign keys before the drop; enable them after the table is created.)
c) Insert all records back into the table.
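A minimal sketch of the above, assuming HR.EMP is the affected table, the corrupt block still allows a full read of the data, and FK_EMP on HR.EMP_CHILD is the referencing constraint (all object names here are examples only):

CREATE TABLE hr.emp_backup AS SELECT * FROM hr.emp;           -- a) keep the data
SELECT dbms_metadata.get_ddl('TABLE','EMP','HR') FROM dual;   -- a) keep the DDL (capture grants similarly)
ALTER TABLE hr.emp_child DISABLE CONSTRAINT fk_emp;           -- b) disable the referencing foreign key
DROP TABLE hr.emp;
-- b) re-create HR.EMP from the captured DDL, then:
INSERT INTO hr.emp SELECT * FROM hr.emp_backup;               -- c) reload the rows
COMMIT;
ALTER TABLE hr.emp_child ENABLE CONSTRAINT fk_emp;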

Note: This entire activity should not be performed on production databases without Oracle Support's recommendation.

Click here to get my work-around which is posted to this blog.

Hope this will help to resolve block corruption issue.

Rename the Oracle Database using DBNEWID utility


~ Rename database using DBNEWID Utility
~ Sometimes an existing database has to be kept under a different name for testing, and a new database may need to be created with the current name.
~ Applies to Oracle version 10.1.x.x and above
Prior to the introduction of the DBNEWID utility, alteration of the internal DBID of an instance was impossible and alteration of the DBNAME required the creation of a new controlfile. The DBNEWID utility allows the DBID to be altered for the first time and makes changing the DBNAME simpler. Changing the DBID is necessary when you want to use an RMAN catalog to back up a cloned instance: RMAN identifies instances using the DBID, preventing the original and cloned instance from being managed by the same catalog. Alteration of the DBID in the cloned instance removes this restriction. The utility can be used to change:
·         DBID And DBNAME
·         DBNAME Only
·         DBID Only
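Whichever mode you use, you can confirm the current name and DBID before and after the change with a quick query:

SELECT name, dbid FROM v$database;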
DBID And DBNAME
·         Backup the database.
·         Mount the database after a clean shutdown:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
·         Invoke the DBNEWID utility (nid) specifying the new DBNAME from the command line using a user with SYSDBA privilege:
nid TARGET=sys/password@TSH1 DBNAME=TSH2
Assuming the validation is successful the utility prompts for confirmation before performing the actions. Typical output may look something like:
C:\oracle\920\bin>nid TARGET=sys/password@TSH1 DBNAME=TSH2
DBNEWID: Release 9.2.0.3.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation.  All rights reserved.

Connected to database TSH1 (DBID=1024166118)

Control Files in database:
    C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL
    C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL
    C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL

Change database ID and database name TSH1 to TSH2? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 1024166118 to 1317278975
Changing database name from TSH1 to TSH2
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL - modified
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL - modified
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL - modified
    Datafile C:\ORACLE\ORADATA\TSH1\SYSTEM01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\UNDOTBS01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\CWMLITE01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\DRSYS01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\EXAMPLE01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\INDX01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\ODM01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\TOOLS01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\USERS01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\XDB01.DBF - dbid changed, wrote new name
    Datafile C:\ORACLE\ORADATA\TSH1\TEMP01.DBF - dbid changed, wrote new name
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL - dbid changed, wrote new name
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL - dbid changed, wrote new name
    Control File C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL - dbid changed, wrote new name

Database name changed to TSH2.
Modify parameter file and generate a new password file before restarting.
Database ID for database TSH2 changed to 1317278975.
All previous backups and archived redo logs for this database are unusable.
Shut down database and open with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
·         Shutdown the database:
SHUTDOWN IMMEDIATE
·         Modify the DB_NAME parameter in the initialization parameter file. The startup will result in an error but proceed anyway.
STARTUP MOUNT
ALTER SYSTEM SET DB_NAME=TSH2 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
·         Create a new password file:
orapwd file=c:\oracle\920\database\pwdTSH2.ora password=password entries=10
·         Rename the SPFILE to match the new DBNAME.
·         If you are using Windows you must recreate the service so the correct name and parameter file are used:
oradim -delete -sid TSH1
oradim -new -sid TSH2 -intpwd password -startmode a -pfile c:\oracle\920\database\spfileTSH2.ora
If you are using UNIX/Linux simply reset the ORACLE_SID environment variable:
ORACLE_SID=TSH2; export ORACLE_SID
·         Alter the listener.ora and tnsnames.ora setting to match the new database name and restart the listener:
lsnrctl reload
·         Open the database with RESETLOGS:
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
·         Backup the database.
DBNAME Only
Repeat the process as before except use the following command to start the DBNEWID utility:
nid TARGET=sys/password@TSH2 DBNAME=TSH3 SETNAME=YES
The SETNAME parameter tells the DBNEWID utility to only alter the database name.

When opening the database the RESETLOGS option is not needed so the database can be started using the STARTUP command.
DBID Only
·         Backup the database.
·         Mount the database after a clean shutdown:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
·         Invoke the DBNEWID utility (nid) from the command line using a user with SYSDBA privilege. Do not specify a new DBNAME:
nid TARGET=sys/password@TSH3
·         Shutdown and open the database with RESETLOGS:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
·         Backup the database. 

Note: Be sure you have a full RMAN backup / full logical dump before starting the activity.

Another (safer) approach will be published here soon.

Hope this document helps you.

Oct 16, 2013

Configuring Automatic Memory Management (AMM) in Oracle11g

~ Configuring Automatic Memory Management (AMM)
~ Applicable to : Oracle Database 11g

-- Using DBCA
When creating the database with DBCA, simply set the appropriate MEMORY_TARGET and MEMORY_MAX_TARGET initialization parameters before the database is created.

-- Reconfigure AMM if not configured

Enabling automatic memory management on a system that didn't previously use it is a simple task. Assuming you want to use a similar amount of memory to your current settings you will need to use the following calculation.

MEMORY_TARGET = SGA_TARGET + GREATEST(PGA_AGGREGATE_TARGET, "maximum PGA allocated")

The following queries show you how to display the relevant information and how to combine it in a single statement to calculate the required value.

-- Individual values.
COLUMN name FORMAT A30
COLUMN value FORMAT A10

SELECT name, value
FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'sga_target')
UNION
SELECT 'maximum PGA allocated' AS name, TO_CHAR(value) AS value
FROM   v$pgastat
WHERE  name = 'maximum PGA allocated';

-- Calculate MEMORY_TARGET
SELECT sga.value + GREATEST(pga.value, max_pga.value) AS memory_target
FROM (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'sga_target') sga,
     (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'pga_aggregate_target') pga,
     (SELECT value FROM v$pgastat WHERE name = 'maximum PGA allocated') max_pga;


Assuming our required setting was 5G, we might issue the following statements.

CONN / AS SYSDBA
-- Set the static parameter. Leave some room for possible future growth without restart.
ALTER SYSTEM SET MEMORY_MAX_TARGET=6G SCOPE=SPFILE;

-- Set the dynamic parameters. Assuming Oracle has full control.
ALTER SYSTEM SET MEMORY_TARGET=5G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;

-- Restart instance.
SHUTDOWN IMMEDIATE;
STARTUP;

Once the database is restarted, the MEMORY_TARGET parameter can be amended as required without an instance restart.

ALTER SYSTEM SET MEMORY_TARGET=4G SCOPE=SPFILE;

-- Bounce the database

Note: Create a pfile before making any of these changes, so that you can go back to your earlier settings if needed.
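For example (the file name and path below are just an illustration):

SQL> CREATE PFILE='/u01/app/oracle/init_before_amm_change.ora' FROM SPFILE;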

Oct 15, 2013

Block sessions which are connected through "Toad" or "plsqldeveloper" in production database

-- Restrict / block sessions which are connected through third party tools in prod database
-- Applicable to any oracle version

rem -----------------------------------------------------------------------
rem Filename:   NoTools.sql
rem Purpose:    Block developers from using TOAD or PLSQL Developer and other tools on production databases.
rem Date:       15-OCT-2013
rem Author:     Gouranga Mohapatra
rem -----------------------------------------------------------------------

CONNECT / AS SYSDBA;

CREATE OR REPLACE TRIGGER t_block_tools_from_prod
  AFTER LOGON ON DATABASE
DECLARE
  v_prog sys.v_$session.program%TYPE;
BEGIN
  SELECT program INTO v_prog
    FROM sys.v_$session
  WHERE  audsid = USERENV('SESSIONID')
    AND  audsid != 0  -- Don't Check SYS Connections
    AND  rownum = 1;  -- Parallel processes will have the same AUDSID's

  IF UPPER(v_prog) LIKE '%TOAD%' OR UPPER(v_prog) LIKE '%T.O.A.D%' OR -- Toad
     UPPER(v_prog) LIKE '%SQLNAV%' OR    -- SQL Navigator
     UPPER(v_prog) LIKE '%PLSQLDEV%' OR -- PLSQL Developer
     UPPER(v_prog) LIKE '%BUSOBJ%' OR   -- Business Objects
     UPPER(v_prog) LIKE '%EXCEL%'       -- MS-Excel plug-in
  THEN
     RAISE_APPLICATION_ERROR(-20000, 'Third party tools are not allowed on production database');
  END IF;
END;
/
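Once created, you can check the trigger and temporarily disable it during approved maintenance windows (a quick sketch; the trigger name matches the script above):

SELECT trigger_name, status FROM dba_triggers WHERE trigger_name = 'T_BLOCK_TOOLS_FROM_PROD';
ALTER TRIGGER t_block_tools_from_prod DISABLE;
ALTER TRIGGER t_block_tools_from_prod ENABLE;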
Thanks

Oct 10, 2013

Manage Table Partitions in Oracle 11g

About Table Partition:
Partitioning addresses key issues in supporting very large tables and indexes by letting you decompose them into smaller and more manageable pieces called partitions. SQL queries and DML statements do not need to be modified in order to access partitioned tables. However, after partitions are defined, DDL statements can access and manipulate individual partitions rather than entire tables or indexes. This is how partitioning can simplify the manageability of large database objects. Also, partitioning is entirely transparent to applications.

Each partition of a table or index must have the same logical attributes, such as column names, datatypes, and constraints, but each partition can have separate physical attributes such as pctfree, pctused, and tablespaces.

Partitioning is useful for many different types of applications, particularly applications that manage large volumes of data. OLTP systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from performance and manageability.

Advantages:

1) Partitioning enables data management operations such as data loads, index creation and rebuilding, and backup/recovery at the partition level, rather than on the entire table. This results in significantly reduced times for these operations.
2) Partitioning improves query performance. In many cases, the results of a query can be achieved by accessing a subset of partitions, rather than the entire table. For some queries, this technique (called partition pruning) can provide order-of-magnitude gains in performance.
3) Partitioning can significantly reduce the impact of scheduled downtime for maintenance operations.
4) Partition independence for partition maintenance operations lets you perform concurrent maintenance operations on different partitions of the same table or index. You can also run concurrent SELECT and DML operations against partitions that are unaffected by maintenance operations.
5) Partitioning increases the availability of mission-critical databases if critical tables and indexes are divided into partitions to reduce the maintenance windows, recovery times, and impact of failures.
6) Partitioning can be implemented without requiring any modifications to your applications. For example, you could convert a nonpartitioned table to a partitioned table without needing to modify any of the SELECT statements or DML statements which access that table. You do not need to rewrite your application code to take advantage of partitioning.
7) No separate archival is required for a big table whose old records are rarely used.

Note: Partitioning is a separately priced option and is available only with Oracle Enterprise Edition. Be sure about licensing before you use it.
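For context, a minimal sketch of a range-partitioned table (table, column, and partition names and boundaries are illustrative only):

CREATE TABLE hr.emp_part (
  empid    NUMBER NOT NULL,
  ename    VARCHAR2(100),
  hiredate DATE
)
PARTITION BY RANGE (empid)
  INTERVAL (100000)   -- 11g interval partitioning: new partitions are created automatically as data arrives
  (PARTITION emp_part_1 VALUES LESS THAN (100000),
   PARTITION emp_part_2 VALUES LESS THAN (200000));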

Useful queries related to table-partition:

-- find all partitions count

select table_owner,
  table_name,
  partition_name,
  segment_created,
  high_value 
from all_tab_partitions
where table_owner in ('HR','CRM','PAYROLL','XYZ');
    

-- drop a partition (if a partition is not in use or was created with a wrong high value, and it contains no data)
SQL> alter table HR.EMP drop partition EMP_PART_1;

-- find max value of primary key column
select max(EMPid) from HR.EMP;  -- if a sequence-based primary key is used

-- find the number of records in a partition
select count(*)
from HR.EMP partition(EMP_PART_2);

-- add new partition
alter table HR.EMP 
add PARTITION EMP_PART_3 values less than (1000000);

--  alter the table to add new range partitions dynamically (interval partitioning - a new feature from Oracle 11g onwards)

alter table HR.EMP  set interval (100000);

Click here to read more about Table and Index partitioning

Oct 9, 2013

OS Watcher Black Box ( OSWBB) configuration - Oracle tool

Oracle OS Watcher is especially useful in UNIX-based OS environments: it monitors the OS to identify CPU, memory, or network stress/bottlenecks. OS Watcher may invoke the popular UNIX/Linux utilities listed below, depending on the platform (Solaris, HP-UX, Linux, IBM AIX, etc.), and gives its best output in cases of node eviction in RAC databases or system restarts. From my experience, I recommend it to every DBA.

What does OS Watcher monitor?

vmstat
iostat
top
netstat
traceroute

-- Starting Oracle OS Watcher (OSWatcher)

You can start Oracle OS Watcher with the command below, specifying the data collection interval (in seconds) and the maximum number of hours to keep archive files.

In this example we submit the collector as a background job to collect every 1 minute and keep 24 hours of archive files (the collection interval and log retention can be customized), writing all messages to oswatcher.log:

Example:
nohup /home/oracle/scripts/oswbb/startOSW.sh 60 24 > /home/oracle/scripts/oswbb/oswatcher.log 2>&1 &

Installation :
1) Download the OSWatcher .tar file from Oracle Support.
e.g., oswbb701.tar (download from oracle metalink); choose any higher version
Note: New OSW versions are released from time to time.
2) Copy it to any location on the Oracle DB server.
3) Extract the .tar file
e.g., $ tar -xvf  oswbb701.tar
4) Give full permissions to the OSWBB folder.

Requirements:
1) GUI connection (for graphical output generation)
2) The rpm below should be installed (in most cases it may not be required)
       # rpm -ihv osw-service-0.0.6-1.noarch.rpm
3) Software to be downloaded:
    oswbb511.tar / oswbb601.tar / any higher version (download from oracle metalink)

Note: Always use the latest OSW version.

-- To Start OSWBB
$ export PATH=$ORACLE_HOME/jdk/jre/bin:$PATH
$ cd /u01/app/oswbb
$ nohup ./startOSWbb.sh 60 10 &  #1 minute interval and logs will be available for 10 hrs

Note: If you want to keep logs for 48 hours with a 15-second collection interval, use the command below:
$ nohup ./startOSWbb.sh 15 48 &  # 15 seconds interval and logs will be available for 2 days
OR
$ nohup ./startOSWbb.sh 15 48 /dev/null > nohup.out 2>&1 &  (to redirect the nohup output)

Cron job:

If you want OSWatcher to be started automatically (for example after a server reboot), you can schedule the start script via a cron job like the one below:

00 * * * * /u01/app/oswbb/start_oswatcher.sh 1>>/u01/app/oswbb/oswatcher.log 2>&1

$cat /u01/app/oswatcher/oswbb/start_oswatcher.sh
cd /u01/app/oswbb
nohup ./startOSWbb.sh 60 168 gzip &
$
--- To Stop OSWBB
$ export PATH=$ORACLE_HOME/jdk/jre/bin:$PATH
$ cd /u01/app/oswbb
$./stopOSWbb.sh

 -- Generating the OSWatcher .html report for a specific period:
$ export PATH=$ORACLE_HOME/jdk/jre/bin:$PATH
$ cd /u01/app/oswbb
$ java  -jar  oswbba.jar -i /u01/app/oswbb/archive/ -P osw_rpt  -6 -B JUNE 14 11:00:00 2013 -E JUNE 14 13:00:00 2013

Note: The above java call generates a graph-based report covering 2 hours of data and keeps it as the osw_rpt report.


-- To generate an analysis report

$ java  -jar  oswbba.jar -i /u01/app/oswbb/archive/ -P host1_rpt  -6 -B JUNE 15 10:00:00 2013 -E JUNE 15 13:00:00 2013

(Use -A instead of -P for an analysis report: P = profile, A = analysis)


If you get the error below while generating GUI reports, increase the Java heap size as shown further down.

Error seen when no heap size was specified:

$ export PATH=$ORACLE_HOME/jdk/jre/bin:$PATH

$ java -jar oswbba.jar -i /u01/app/oswbb/archive/ -P osw_rpt -6 -B JUN 22 09:00:00 2015 -E JUN 22 09:30:00 2015

 /u01/OSB_Reports/oswbb/javacore.20150622.151439.43122732.0003.txt
 JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
 Exception in thread "main" java.lang.OutOfMemoryError
          at java.lang.reflect.Array.newArrayImpl(Native Method)
          at java.lang.reflect.Array.newInstance(Array.java:283)
          at java.util.ArrayList.toArray(ArrayList.java:328)
          at o.a(Unknown Source)
          at j.a(Unknown Source)
          at o.a(Unknown Source)
          at OSWGraph.OSWGraph.main(Unknown Source)

Now use the command below, which should work:

$java -jar -Xmx1024M oswbba.jar -i /u01/app/oswbb/archive/ -P osw_example01 -6 -B JUN 25 11:30:00 2013 -E JUN 25 11:45:00 2013

Thanks
Please feel free to post a comment...

Active dataguard setup / Disaster Recovery(DR) in Oracle 11g



Article on: Oracle Active Data Guard:

Active Data Guard provides the management, monitoring, and automation software to create and maintain one or more synchronized replicas (standby databases) of a production database (primary database). An Active Data Guard standby database is an exact copy of the primary that is open read-only while it continuously applies changes transmitted by the primary database. An active standby can offload ad-hoc queries, reporting, and fast incremental backups from the primary database, improving performance and scalability while preventing data loss or downtime due to data corruptions, database and site failures, human error, or natural disaster.

Scope: Oracle 11.2.x.x on any platform; a good solution for a DR (Disaster Recovery) plan.

Assumptions:
• You have two servers (physical or VMs) with an operating system and Oracle installed on them. In this case I've used Oracle Linux 5.6 and Oracle Database 11.2.0.2.
• The primary server has a running instance.
• The standby server has an Oracle 11g software-only installation.

Note: Same OS and Oracle software version on both servers.
Prerequisite for Active DataGuard

1) Same version of OS
2) Oracle Software Version : Oracle11gR2

Step-by-step Active Data Guard configuration using the following two servers.

Info:    
Primary database Name: DEVDB
IP : 10.40.32.49

Proposed Standby Database name: DEVDBS
IP:  10.33.33.184

Configure tnsnames.ora to support the databases on both nodes
a) TNS configuration on the primary

Add the standby TNS entry on the primary side:

Tnsnames.ora

DEVDBS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.33.33.184)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDBS)
    )
  )

b) TNS entries on the STANDBY (add the following TNS entries after installing the Oracle 11g software)


Tnsnames.ora

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.40.32.49)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDB)
    )
  )
DEVDBS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.33.33.184)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDBS)
    )
  )

Steps:

1) Log in to the standby server. (There is no database there yet.)

Note: Install only the Oracle software. Plan the datafile, controlfile, redo log, archive log, FRA, and other admin file locations, and create those directories as well.

2) Enable archiving in the PRIMARY database (if required)
eg.,
telnet 10.40.32.49
User name: oracle
pwd:

alter system set DB_RECOVERY_FILE_DEST_SIZE=5G;
alter system set db_recovery_file_dest='/oracle1/DEVDB/FRA/';
OR

*.db_recovery_file_dest='/oracle1/DEVDB/FRA'
*.db_recovery_file_dest_size=5368709120

-- add in pfile
*.log_archive_dest_1='Location=/oracle1/DEVDB/FRA/archivelogs'

sql> shutdown immediate;
sql> alter database mount;
sql> alter database archivelog;
sql> alter database open;
3) Enable Forced Logging
Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:
eg.,
telnet 10.40.32.49
User name: oracle
pwd:

$ sqlplus / as sysdba
SQL> SELECT force_logging FROM v$database;
SQL> alter database force logging;
SQL> SELECT force_logging FROM v$database;
FOR
---
YES

4) Create pfile from spfile in PRIMARY

SQL> create pfile from spfile;

Note: Note down the $ORACLE_HOME/dbs location on the STANDBY server (10.33.33.184); the pfile will be copied there in step 7.

5) Create Password file in PRIMARY
Create a password file if one does not already exist. Every database in a Data Guard configuration must use a password file, and the password for the SYS user must be identical on every system for redo data transmission to succeed.
Note: use sys password in your password file.
$ orapwd force=y file=orapwDEVDB password=sys

Note: Copy the same password file to the standby side and rename it with the standby SID.
6) Configure a Standby Redo Log
A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
Note: if any standby log files already exist, they can be dropped using the following command.
To see stand by log files:
sql> select GROUP#,SEQUENCE# from v$standby_log;
To drop stand by log files:
sql > alter database drop standby logfile group 4;

Note: if a standby redo log was dropped, use the following command to add it back.
sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle1/oradata/UATHYDIP/redo04.log') SIZE 50M reuse;
To see the path of redo log file location:
SQL> select GROUP#,member from v$logfile;
To add standby log files, run the following statements to configure the standby redo log:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle1/DEVDB/oradata/DEVDB/redo04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oracle1/DEVDB/oradata/DEVDB/redo05.log') SIZE 50M;

7) Copy the pfile from the primary server to the standby server.

      $ cd $ORACLE_HOME/dbs
dbs]$ ls -lrt
........
-rw-r--r--  1 oracle11g oinstall    1865 Jun 27 11:18 initDEVDB.ora
-rw-r-----  1 oracle11g oinstall    1536 Jun 27 11:34 orapwDEVDB

dbs]$ scp initDEVDB.ora oracle11g@10.33.33.184:/home/oracle/app/product/11.2.0/dbhome_1/dbs

 8) Add the following to the primary pfile and make sure these parameters exist.

Set the primary database initialization parameters:

*.audit_file_dest='/oracle/app/oracle11g/admin/DEVDB/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.DB_UNIQUE_NAME='DEVDB'
*.db_name='DEVDB'
*.control_files='/oracle1/DEVDB/oradata/DEVDB/control01.ctl','/oracle1/DEVDB/oradata/DEVDB/control02.ctl'
*.FAL_CLIENT='DEVDB'
*.FAL_SERVER='DEVDBS'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(DEVDB,DEVDBS)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle1/DEVDB/FRA/archivelogs/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DEVDB'
*.LOG_ARCHIVE_DEST_2='SERVICE=DEVDBS LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)  DB_UNIQUE_NAME=DEVDBS'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.DB_FILE_NAME_CONVERT='/home/oracle/STANDBY/DEVDBS/oradata/','/oracle1/DEVDB/oradata/DEVDB/'
*.LOG_FILE_NAME_CONVERT='/home/oracle/STANDBY/DEVDBS/oradata/','/oracle1/DEVDB/oradata/DEVDB/'
*.db_block_size=8192
*.db_domain=''
*.db_recovery_file_dest_size=5368709120
*.db_recovery_file_dest='/oracle1/DEVDB/FRA/'
*.diagnostic_dest='/oracle/app/oracle11g'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DEVDBXDB)'
#*.log_archive_dest_1='Location=/oracle1/DEVDB/FRA/archivelogs'
*.nls_date_format='DD-MON-YYYY HH24:MI:SS'
*.open_cursors=300
*.pga_aggregate_target=134217728
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=536870912
*.undo_tablespace='UNDOTBS1'


8.1) Do the following steps after changing parameters:

sql> shut immediate;

sql> create spfile from pfile='/oracle/app/oracle11g/product/11.2.0/dbhome_1/dbs/initDEVDB.ora';

sql> startup mount;

sql> alter database open;

Note: Cross-check all the parameters that were added to the pfile.

9) Create the following directory structure on the STANDBY (DEVDBS) side.

a)
Create paths for Control file, Redolog and Datafiles (CRD):

/home/oracle/STANDBY/DEVDBS/adump
/home/oracle/STANDBY/DEVDBS/oradata
/home/oracle/STANDBY/DEVDBS/FRA

b) The pfile of the STANDBY (DEVDBS) database:

*.db_name='DEVDB'
*.db_unique_name='DEVDBS'
*.fal_client='DEVDBS'
*.fal_server='DEVDB'
*.db_recovery_file_dest_size=5368709120
*.db_recovery_file_dest='/home/oracle/STANDBY/DEVDBS/FRA'
#*.log_archive_dest_1='Location=/home/oracle/STANDBY/DEVDBS/FRA/archivelogs'
*.diagnostic_dest='/oracle/diag'
*.max_shared_servers=15
*.nls_date_format='DD-MON-RRRR HH24:MI:SS'
*.open_cursors=300
*.pga_aggregate_target=134217728
*.processes=400
*.remote_login_passwordfile='EXCLUSIVE'
*.sec_case_sensitive_logon=FALSE
*.sessions=444
*.sga_max_size=536870912
*.sga_target=536870912
*.shared_servers=8
*.undo_tablespace='UNDOTBS1'
DB_FILE_NAME_CONVERT='/oracle1/DEVDB/oradata/DEVDB/','/home/oracle/STANDBY/DEVDBS/oradata/'
LOG_FILE_NAME_CONVERT='/oracle1/DEVDB/oradata/DEVDB/','/home/oracle/STANDBY/DEVDBS/oradata/'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_config='DG_CONFIG=(DEVDB,DEVDBS)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/STANDBY/DEVDBS/FRA/archivelogs  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DEVDBS'
*.LOG_ARCHIVE_DEST_2='SERVICE=DEVDB  ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)  DB_UNIQUE_NAME=DEVDB'
*.standby_file_management='AUTO'
*.log_archive_max_processes=4


10) Password file on the STANDBY database

Copy the password file from the primary and rename it on the standby side with the standby SID (if not already done).

dbs]$ ls

initDEVDBS.ora
orapwDEVDBS

11) Do the following in STANDBY database:

dbs]$ export ORACLE_SID=DEVDBS
dbs]$ sqlplus / as sysdba

SQL> startup nomount  pfile='/home/oracle/app/product/11.2.0/dbhome_1/dbs/initDEVDBS.ora';
SQL> shutdown immediate;
SQL> startup nomount;

12) Do the following on the PRIMARY database server (restore process for the STANDBY):

dbs]$ rman target /

RMAN> connect  auxiliary sys/sys@DEVDBS;

RMAN>
run {
allocate channel ch1 type disk;
allocate auxiliary channel stby1 type disk;
duplicate target database for standby from active database nofilenamecheck;
}

13) Recover the standby database

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

Note: The PRIMARY database is now OPEN with the PRIMARY role; all transactions continue on the primary.

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
DEVDB            OPEN

Now check the same in STANDBY Database;

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
DEVDBS           MOUNTED


You can leave it in this state, so that any archive logs that are shipped are applied automatically.

14) Cancel the recovery and keep the standby in read-only mode:

At this stage archive logs are being applied on the STANDBY. Do the following to keep the STANDBY database in read-only mode.

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

Note: The STANDBY database is now in read-only mode and no redo logs are being applied, so recent transactions are not reflected.

15) To keep the STANDBY database in read-only mode and also apply redo logs, execute the following command on the STANDBY database.

SQL> alter database recover managed standby database using  current logfile disconnect;

Database altered.

      – – – Finish – – – –

Useful Queries to verify status:

1) Check the Final Status of Standby database:

SQL> select NAME, INSTANCE_NAME, STATUS, OPEN_MODE, database_role FROM V$DATABASE, V$INSTANCE;

NAME      INSTANCE_NAME    STATUS       OPEN_MODE              DATABASE_ROLE
--------- ---------------- ------------ ---------------------- ----------------
DEVDB     DEVDBS           OPEN         READ ONLY WITH APPLY   PHYSICAL STANDBY

2) Check the Final Status of PRIMARY database:

SQL> select NAME, INSTANCE_NAME, STATUS, OPEN_MODE, database_role FROM V$DATABASE, V$INSTANCE;

NAME      INSTANCE_NAME    STATUS       OPEN_MODE    DATABASE_ROLE
--------- ---------------- ------------ ------------ ----------------
DEVDB     DEVDB            OPEN         READ WRITE   PRIMARY

3) Current applied archive log in Standby side:

SQL> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
          6712


4) Archivelog status between primary and standby database

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

output:

    Thread   Last Sequence Received   Last Sequence Applied   Difference
----------   ----------------------   ---------------------   ----------
         1                     6775                    6730            5
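An additional quick check on the standby (a sketch using V$MANAGED_STANDBY) to confirm that the MRP0 apply process is running and which sequence it is working on:

SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%' OR process LIKE 'RFS%';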


Issues faced in Active dataguard
Case:1:
Error 1017 received logging on to the standby
< ------------------------------------------------------------
< Check that the primary and standby are using a password file
< and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
< and that the SYS password is same in the password files.
<       returning error ORA-16191
< ------------------------------------------------------------
< PING[ARC2]: Heartbeat failed to connect to standby 'EHISBLRS'. Error is 16191.
< Sat Jul 06 15:23:28 2013
< Error 1017 received logging on to the standby

If the above error is found in the alert log on the DR side, it means somebody has changed the primary SYS password.

Solution: Recreate the password file on both the primary and DR sides with the same password and re-sync.

Case:2:
channel ORA_DISK_1: SID=33 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/27/2012 15:05:53
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 06/27/2012 15:05:53
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server

Solution:
RMAN> duplicate target database for standby from active database nofilenamecheck;
This alone may not work; in that case do the following:
RMAN>

run {
allocate channel ch1 type disk;
allocate auxiliary channel stby1 type disk;
duplicate target database for standby from active database nofilenamecheck;
}

 Thanks
