Jul 24, 2014

Block corruption & Correction: a case study

Logical Block corruption & Correction: a case study

In one of our production databases, I received the following errors in the UI (front end) as well as at the back end, reported through mail. I investigated through the logs and, with a few of the requested logs, concluded that this was logical corruption. Please go through my analysis.

1) Error received at the front end:

Timestamp: 7/22/2014 4:21:34 PM
Message: OraDataServiceProvider.AddIssueDetails Message : ORA-01476: divisor is equal to zero
ORA-06512: at "SALES.F_UPDATEINVENOTARY", line 517
ORA-00600: internal error code, arguments: [4511], [], [], [], [], [], [], [], [], [], [], []
ORA-06512: at "SALES.P_ADDISSUE", line 334
ORA-06512: at line 1
----------------------------------------
Timestamp: 7/22/2014 4:21:34 PM
Message: OraDataServiceProvider.AddIssueDetails StackTrace :    at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, 
OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure, Boolean bCheck)
   at Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, String procedure, 
IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, Boolean bCheck)
   at Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery()
   at eHIS.OracleDataAccessHelper.ORACLEHelper.ExecuteNonQuery(String connectionString, CommandType cmdType, String 
sqlCommandText, OracleParameter[] parameterArray)
   at eHIS.SALES.DataAccess.OraDataServiceProvider.AddIssueDetails(String Issue, String& IssueCode)

2) Error received at the back end:

2.1) From the alert log:

Tue Jul 22 17:05:49 2014
Errors in file /u01/app/oracle/diag/rdbms/prod/PROD1/trace/PROD1_ora_11731176.trc  (incident=82147):
ORA-00600: internal error code, arguments: [4511], [], [], [], [], [], [], [], [], [], [], []
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.

2.2) From the related .trc file:

DDE: Problem Key 'ORA 600 [4511]' was flood controlled (0x6) (incident: 82875)
ORA-00600: internal error code, arguments: [4511], [], [], [], [], [], [], [], [], [], [], []
DDE: Problem Key 'ORA 600 [4511]' was flood controlled (0x6) (incident: 82876)
*** 2014-07-22 16:44:40.284
*** CLIENT ID:() 2014-07-22 16:44:40.284

ORA-00600: internal error code, arguments: [4511], [], [], [], [], [], [], [], [], [], [], []

2.3) From the related incident .trc file:

Dump continued from file: /u01/app/oracle/diag/rdbms/prod/PROD2/trace/PROD2_ora_13893936.trc
ORA-00600: internal error code, arguments: [4511], [], [], [], [], [], [], [], [], [], [], []

========= Dump for incident 95001 (ORA 600 [4511]) ========
*** 2014-07-22 16:22:26.385
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=8ax21gq9d3rb1) -----
UPDATE BOOKSTORE I SET I.QOHISTORY = (SELECT SUM(QTY) FROM QOHISTORY WHERE BOOKCODE = :B2 AND STORECODE = :B1 ),
I.HOLDQOHISTORY=I.HOLDQOHISTORY+ABS(:B3 ) WHERE I.BOOKCODE = :B2 AND I.STORECODE = :B1
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
7000003c7a76dd8       241  function SALES.F_UPDATEINVENOTARY
7000004b1c7e110       334  procedure SALES.P_ADDISSUE
7000004aecaf568         1  anonymous block


3) Issue Description:

From the above errors I investigated the related procedure and function. The error suggests a transaction issue with the "BOOKSTORE" table; the support team logged the same issue. ORA-00600 maps to many possible bugs, but the related incident .trc file clearly shows that no update is succeeding on the "BOOKSTORE" table. So the initial picture is clear, but we need to investigate further and confirm the issue.

Note: From all the alert logs (both nodes) and incident files, I learned that two tables were logically corrupted: BOOKSTORE and PURCHASEITEMS.

4) Confirmation investigations:

4.1) Analyze the structure for validation:

While running "analyze table SALES.PURCHASEITEMS validate structure online;", we get the error below:

SQL> analyze table SALES.PURCHASEITEMS validate structure online;
analyze table SALES.PURCHASEITEMS validate structure online
*
ERROR at line 1:
ORA-01498: block check failure - see trace file

Then I started tracing this in the following way:

Analyze the table with a trace file identifier set; if the analyze fails, upload the generated trace file (its name will end in _ANALYZE.trc).

SQL> alter session set tracefile_identifier='ANALYZE';
SQL> analyze table SALES.PURCHASEITEMS validate structure online;
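
The location of the generated trace file can be confirmed from the session itself; a quick check using the 11g diagnostic view V$DIAG_INFO:

SQL> -- full path of the current session's trace file
SQL> SELECT value AS trace_file
FROM   v$diag_info
WHERE  name = 'Default Trace File';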

Here are some contents from generated .trc file:

*** 2014-07-17 21:33:09.884
*** SESSION ID:(408.36503) 2014-07-17 21:33:09.884
*** CLIENT ID:() 2014-07-17 21:33:09.884
*** SERVICE NAME:(SYS$USERS) 2014-07-17 21:33:09.884
*** MODULE NAME:(sqlplus@ehdb2 (TNS V1-V3)) 2014-07-17 21:33:09.884
*** ACTION NAME:() 2014-07-17 21:33:09.884

Block Checking: DBA = 185398234, Block Type = KTB-managed data block
data header at 0x70000017729c0ac
kdbchk: row locked by non-existent transaction
        table=0   slot=19
        lockid=4   ktbbhitc=5
Block header dump:  0x0b0cf3da
 Object id on Block? Y
 seg/obj: 0x122c8  csc: 0x0f.e2b866de  itc: 5  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0xb0cf300 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x001d.011.000237cd  0x01001e9e.4eb3.2a  C---    0  scn 0x000e.fa71110c
0x02   0x0177.008.000004cd  0x0100022d.0092.11  C---    0  scn 0x000e.5f6767bd
0x03   0x0092.00b.0003c7b9  0x00c0d3e4.7168.0b  C---    0  scn 0x000e.aebfb808
0x04   0x000d.01f.0006477b  0x0107008d.c219.25  C---    0  scn 0x000e.d03f9c6f
0x05   0x0017.001.00036313  0x01017ea0.67db.36  C---    0  scn 0x000e.fa70d161
bdba: 0x0b0cf3da
data_block_dump,data header at 0x70000017729c0ac
===============
tsiz: 0x1f50
hsiz: 0x8c
pbl: 0x70000017729c0ac
     76543210
flag=--------
ntab=1
nrow=61
frre=-1
fsbo=0x8c
fseo=0x952
avsp=0x909
tosp=0x924
...........
..........

This trace may not be directly understandable unless it is converted into a readable form. If you have an Oracle Support ID, you can raise an SR and upload this trace file. If the structure validates without error, there is probably another cause; then let us proceed to some other tests.

4.2) Using the RMAN method:

Check the datafiles used by these two tables, PURCHASEITEMS and BOOKSTORE, with RMAN to see how many blocks are affected:

$ rman target / nocatalog

RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup check logical validate datafile x,y,z;
}

-- where x, y, z should be replaced by the file numbers (file#) of the datafiles holding the affected segments
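
If you are not sure which file numbers the two tables use, you can look them up first; a minimal sketch against DBA_EXTENTS:

SQL> SELECT DISTINCT segment_name, file_id
FROM   dba_extents
WHERE  owner = 'SALES'
AND    segment_name IN ('PURCHASEITEMS', 'BOOKSTORE');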

Output:

$ rman target / nocatalog

RMAN>

run { 
allocate channel d1 type disk; 
allocate channel d2 type disk; 
allocate channel d3 type disk; 
allocate channel d4 type disk; 
backup check logical validate datafile 44,45,46; 
} 2> 3> 4> 5> 6> 7> 

allocated channel: d1
channel d1: SID=39 instance=PROD1 device type=DISK

allocated channel: d2
channel d2: SID=424 instance=PROD1 device type=DISK

allocated channel: d3
channel d3: SID=629 instance=PROD1 device type=DISK

allocated channel: d4
channel d4: SID=1358 instance=PROD1 device type=DISK

Starting backup at 22-JUL-14
channel d1: starting full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00044 name=+DATA/prod/datafile/sales01.dbf
channel d2: starting full datafile backup set
channel d2: specifying datafile(s) in backup set
input datafile file number=00045 name=+DATA/prod/datafile/sales_index01.dbf
channel d3: starting full datafile backup set
channel d3: specifying datafile(s) in backup set
input datafile file number=00046 name=+DATA/prod/datafile/sales_indx_01.dbf
channel d3: backup set complete, elapsed time: 00:00:16
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
46   OK     0              309          18640           24375064492
  File Name: +DATA/prod/datafile/sales_indx_01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0               
  Index      0              17782           
  Other      0              549             

channel d2: backup set complete, elapsed time: 00:01:19
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
45   OK     0              37946        664640          88021468605
  File Name: +DATA/prod/datafile/sales_index01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0               
  Index      0              616016          
  Other      0              10678          

channel d1: backup set complete, elapsed time: 00:01:32
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
44   FAILED 0              48213        963536          88021469422
  File Name: +DATA/prod/datafile/sales01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       7              907742          
  Index      0              1               
  Other      0              7580          

validate found one or more corrupt blocks
See trace file /u01/app/oracle/diag/rdbms/prod/PROD1/trace/PROD1_ora_21823514.trc for details
Finished backup at 22-JUL-14
released channel: d1
released channel: d2
released channel: d3
released channel: d4

RMAN> 

RMAN> quit


Recovery Manager complete.


Here, RMAN reported the number of corrupted blocks under "Block Type / Blocks Failing / Blocks Processed": datafile 44 shows 7 failing data blocks. So we now have confirmation of block corruption. Next we will do some acid tests for maximum clarity.

4.3) Using DBMS packages:

Pre-requisites:

a) Create the 'REPAIR_TABLE' table:
Examples: Building a Repair Table or Orphan Key Table
The ADMIN_TABLES procedure is used to create, purge, or drop a repair table or an orphan key table.
A repair table provides information about the corruptions that were found by the CHECK_OBJECT procedure and how these will be addressed if the FIX_CORRUPT_BLOCKS procedure is run. Further, it is used to drive the execution of the FIX_CORRUPT_BLOCKS procedure.

Example: Creating a Repair Table. The following example creates a repair table in the USERS tablespace.

SET SERVEROUTPUT ON
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
     TABLE_NAME => 'REPAIR_TABLE',
     TABLE_TYPE => dbms_repair.repair_table,
     ACTION     => dbms_repair.create_action,
     TABLESPACE => 'USERS');
END;
/

b) Detecting Corruption :

The CHECK_OBJECT procedure checks the specified object, and populates the repair table with information about corruptions and repair directives. You can optionally specify a range, partition name, or subpartition name when you want to check a portion of an object.

Validation consists of checking all blocks in the object that have not previously been marked corrupt. For each block,  the transaction and data layer portions are checked for self consistency. During CHECK_OBJECT, if a block is encountered that has a corrupt buffer cache header, then that block is skipped.

The following is an example of executing the CHECK_OBJECT procedure for the SALES.PURCHASEITEMS table.

SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
     SCHEMA_NAME => 'SALES',
     OBJECT_NAME => 'PURCHASEITEMS',
     REPAIR_TABLE_NAME => 'REPAIR_TABLE',
     CORRUPT_COUNT =>  num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/

output:

number corrupt: 2

PL/SQL procedure successfully completed

SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
     SCHEMA_NAME => 'SALES',
     OBJECT_NAME => 'BOOKSTORE',
     REPAIR_TABLE_NAME => 'REPAIR_TABLE',
     CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/

output:
number corrupt: 5

PL/SQL procedure successfully completed

This confirms the number of corrupted blocks.

Query the repair table to find the complete information:

SQL> SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION
FROM REPAIR_TABLE;
OR
select object_id,object_name,BLOCK_ID, CORRUPT_TYPE, CORRUPT_DESCRIPTION from REPAIR_TABLE;

see: corrupt_blocks_output.xls



5) Corrective actions:

5.1) I can recommend the simplest method, which needs to be performed during downtime: create copy tables after disabling constraints, truncate the corrupted tables, and then re-insert the data from the copies. Example:


Step_1: Take the row counts:

select count(1) from SALES.PURCHASEITEMS -- 491609
select count(1) from SALES.copy_PURCHASEITEMS -- 491609
select count(1) from SALES.BOOKSTORE -- 168806
select count(1) from  SALES.copy_BOOKSTORE -- 168806

Step_2: Take a logical backup when downtime starts, to guard against any other issues: export the entire schema if it is small, or take only the required tables.

$ expdp directory=DATA_PUMP dumpfile=sales_22Jul14.dmp logfile=sales_22Jul14_exp.log schemas=SALES parallel=3 exclude=statistics cluster=NO

Step_3: Create copy tables for the two affected tables:

create table SALES.copy_PURCHASEITEMS as select * from SALES.PURCHASEITEMS;
alter table SALES.PURCHASEITEMS disable constraint FK_PURCHASEITEMS_PO;
truncate table SALES.PURCHASEITEMS;
insert into SALES.PURCHASEITEMS select * from SALES.copy_PURCHASEITEMS;
alter table SALES.PURCHASEITEMS enable constraint FK_PURCHASEITEMS_PO;

create table SALES.copy_BOOKSTORE as select * from SALES.BOOKSTORE ;
truncate table SALES.BOOKSTORE;
insert into SALES.BOOKSTORE select * from SALES.copy_BOOKSTORE;


Step_4: Analyze the tables (gather statistics):

exec dbms_stats.gather_table_stats(ownname => 'SALES',tabname => 'PURCHASEITEMS',cascade => TRUE);
exec dbms_stats.gather_table_stats(ownname => 'SALES',tabname => 'BOOKSTORE',cascade => TRUE);


5.2) Using DBMS Packages

Fixing Corrupt Blocks

Use the FIX_CORRUPT_BLOCKS procedure to fix the corrupt blocks in specified objects based on information in the repair table that was generated by the CHECK_OBJECT procedure. Before changing a block, the block is checked to ensure that the block is still corrupt. Corrupt blocks are repaired by marking the block software corrupt. When a repair is performed, the associated row in the repair table is updated with a timestamp.

This example fixes the corrupt block in table SALES.PURCHASEITEMS that was reported by the CHECK_OBJECT procedure.

SET SERVEROUTPUT ON
DECLARE num_fix INT;
BEGIN
num_fix := 0;
DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
     SCHEMA_NAME => 'SALES',
     OBJECT_NAME=> 'PURCHASEITEMS',
     OBJECT_TYPE => dbms_repair.table_object,
     REPAIR_TABLE_NAME => 'REPAIR_TABLE',
     FIX_COUNT=> num_fix);
DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
END;
/
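
Since FIX_CORRUPT_BLOCKS only marks the blocks software corrupt, indexes on the table may still reference rows inside them. A hedged follow-up sketch using DBMS_REPAIR.DUMP_ORPHAN_KEYS; it assumes an orphan key table was created with ADMIN_TABLES (TABLE_TYPE => dbms_repair.orphan_table), and the index name is hypothetical:

SET SERVEROUTPUT ON
DECLARE num_orphans INT;
BEGIN
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
     SCHEMA_NAME => 'SALES',
     OBJECT_NAME => 'PK_PURCHASEITEMS',   -- hypothetical index on the repaired table
     OBJECT_TYPE => dbms_repair.index_object,
     REPAIR_TABLE_NAME => 'REPAIR_TABLE',
     ORPHAN_TABLE_NAME => 'ORPHAN_KEY_TABLE',
     KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE('orphan keys: ' || TO_CHAR(num_orphans));
END;
/

Any index reporting orphan keys should then be rebuilt (ALTER INDEX ... REBUILD) so it no longer references the corrupt rows.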

Simple & straight workaround:

-- use the DBMS_REPAIR API to set the skip corrupt blocks flag

BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SALES',
OBJECT_NAME => 'BOOKSTORE',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.SKIP_FLAG);
END;
/

-- Create a copy of the table (with the skip flag set, the corrupt blocks are skipped on read)

> create table SALES.BOOKSTORE_REPAIR as select * from SALES.BOOKSTORE;

-- Check counts

select count(*) from SALES.BOOKSTORE;
select count(*) from SALES.BOOKSTORE_REPAIR;

-- If the counts are close, truncate the original table and re-insert from the copy

>truncate table SALES.BOOKSTORE;
>insert into SALES.BOOKSTORE select * from SALES.BOOKSTORE_REPAIR;

Note: In my experience, I found some junk data updated in the table and anomalous rows inserted into it. The data was corrected manually and the anomalous records were deleted with a PL/SQL program. Here are a few of the errors:
error-1:
insert into SALES.BOOKSTORE
select * from SALES.BOOKSTORE_repair ;

ORA-12899: value too large for column "SALES"."BOOKSTORE"."EXTRACHARGEFLAG" (actual: 83, maximum: 5)

error-2:
insert into SALES.BOOKSTORE
select * from SALES.BOOKSTORE_repair ;

ORA-12899: value too large for column "SALES"."BOOKSTORE"."CONSULTATIONDATE" (actual: 120, maximum: 7)

etc.

-- after correction, check the counts again.

>select count(*) from SALES.BOOKSTORE;
>select count(*) from SALES.BOOKSTORE_REPAIR;

-- Again set "no skip" flag

BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SALES',
OBJECT_NAME => 'BOOKSTORE',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.NOSKIP_FLAG);
END;
/

-- Analyze the table


exec DBMS_STATS.gather_table_stats(ownname=> 'SALES',tabname=>'BOOKSTORE',estimate_percent => 100,cascade=>TRUE,degree=>1,granularity=>'AUTO',method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');

-- To validate the database and backup sets

Start RMAN and connect to a target database, then execute the VALIDATE command with the desired options. For example, to validate all datafiles and control files (and the server parameter file if one is in use), execute the following command at the RMAN prompt:

RMAN> VALIDATE DATABASE;

Starting validate at 15-MAY-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=547 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00053 name=/oradata/datafiles/PROD/PAYROLL01.dbf
......
.....

-- Validate backupset:
Alternatively, you can validate a particular backup set by using the form of the command shown in the following example.
RMAN> VALIDATE BACKUPSET 22;

The query below can be used to find whether any corrupted blocks are present in the database:

set head on; 
set pagesize 2000 
set linesize 250 
select * from v$database_block_corruption;
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file# 
, greatest(e.block_id, c.block#) corr_start_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) 
- greatest(e.block_id, c.block#) + 1 blocks_corrupted 
, null description 
FROM dba_extents e, v$database_block_corruption c 
WHERE e.file_id = c.file# 
AND e.block_id <= c.block# + c.blocks - 1 
AND e.block_id + e.blocks - 1 >= c.block# 
UNION 
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file# 
, header_block corr_start_block# 
, header_block corr_end_block# 
, 1 blocks_corrupted 
, 'Segment Header' description 
FROM dba_segments s, v$database_block_corruption c 
WHERE s.header_file = c.file# 
AND s.header_block between c.block# and c.block# + c.blocks - 1 
UNION 
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file# 
, greatest(f.block_id, c.block#) corr_start_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) 
- greatest(f.block_id, c.block#) + 1 blocks_corrupted 
, 'Free Block' description 
FROM dba_free_space f, v$database_block_corruption c 
WHERE f.file_id = c.file# 
AND f.block_id <= c.block# + c.blocks - 1 
AND f.block_id + f.blocks - 1 >= c.block# 
order by file#, corr_start_block#; 
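
As a side note (not the path taken in this case), if valid RMAN backups exist, blocks reported in V$DATABASE_BLOCK_CORRUPTION can also be repaired in place with block media recovery; an 11g sketch (the specific block number is hypothetical):

RMAN> -- recover every block currently listed in V$DATABASE_BLOCK_CORRUPTION
RMAN> RECOVER CORRUPTION LIST;

RMAN> -- or recover one specific block
RMAN> RECOVER DATAFILE 44 BLOCK 170970;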






Note: If you have NOLOGGING block corruption, you can follow the above method, but a message will appear in the alert log. The query below can be used to find NOLOGGING block corruptions:


select * from v$database_block_corruption 
where CORRUPTION_TYPE='NOLOGGING';




You can use the query below to find out since when this corruption has existed:

select file#, block#, first_time, next_time
from   v$archived_log, v$database_block_corruption
where  CORRUPTION_CHANGE# between first_change# and next_change#
  and CORRUPTION_TYPE='NOLOGGING';


Fix: Keep your database in FORCE LOGGING mode.

SQL> connect sys as sysdba
SQL> alter database force logging;

If a LOB segment is reported as a corrupted block, first find which table and column it refers to. Use the query below; e.g., assume 'SYS_LOB0004142294C00005$$' came up as the corrupted segment:

select * from dba_lobs where segment_name = 'SYS_LOB0004142294C00005$$';

Thanks
Post comments if you have any doubts...

Jul 21, 2014

Basic audit on user level activities in Oracle

Configuring and Administering Auditing : Using Oracle 11g
============================================
Auditing is always about accountability, and is frequently done to protect and preserve privacy for the information stored in databases. Concern about privacy policies and practices has been rising steadily with the ubiquitous use of databases in businesses and on the Internet. Oracle Database provides a depth of
auditing that readily enables system administrators to implement enhanced protections, early detection of suspicious activities, and finely-tuned security responses.


Pre-requisites :
===========
ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
Restart the database for the new value to take effect.
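
After the restart, you can confirm the setting; a quick check:

SQL> show parameter audit_trail
SQL> select value from v$parameter where name = 'audit_trail';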

Enabling and Disabling Auditing :
=========================
Any authorized database user can set statement, privilege, and object auditing options at any time, but Oracle does not generate audit information for the database audit trail unless database auditing is enabled. The security administrator is normally responsible for controlling auditing. This section discusses the initialization parameters that enable and disable auditing.

Note: 
All of the initialization parameters affecting auditing are static. This means that if you change the values for the AUDIT_SYS_OPERATIONS, AUDIT_TRAIL, and AUDIT_FILE_DEST initialization parameters, you must shut down and restart your database for the new values to take effect.

Caution : 
Enabling auditing affects application performance. Think about enabling audit only if you have a good amount of CPU headroom.


############################################################
#### DEMO ######### DEMO ######### DEMO ######### DEMO #####
############################################################

-- create user / owner

create user SAMPLE
identified by SAMPLE
  default tablespace USERS
  temporary tablespace TEMP01
  profile DEFAULT;
grant CONNECT to SAMPLE;
grant RESOURCE to SAMPLE;
grant UNLIMITED TABLESPACE to SAMPLE;

-- Enable audit on the schema owner SAMPLE

--AUDIT ALL BY SAMPLE BY ACCESS;
AUDIT UPDATE TABLE, DELETE TABLE BY SAMPLE BY ACCESS;
AUDIT alter any PROCEDURE BY SAMPLE BY ACCESS;
AUDIT drop any PROCEDURE BY SAMPLE BY ACCESS;

NOAUDIT NETWORK;
NOAUDIT SELECT TABLE BY SAMPLE;
NOAUDIT EXECUTE PROCEDURE BY SAMPLE;

-- Enable audit on a different user who is accessing SAMPLE's data.

AUDIT UPDATE TABLE, DELETE TABLE BY L2_USER;
AUDIT TRUNCATE BY L2_USER;
AUDIT alter any PROCEDURE BY L2_USER;
AUDIT drop any PROCEDURE BY L2_USER;

NOAUDIT NETWORK;
NOAUDIT SELECT TABLE BY L2_USER;
NOAUDIT EXECUTE PROCEDURE BY L2_USER;
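
For instance, a hypothetical activity sketch (it assumes SAMPLE owns an EMPLOYEE table and L2_USER has UPDATE privilege on it); once L2_USER runs an audited statement, a record lands in the audit trail:

SQL> connect L2_USER/password
SQL> update SAMPLE.EMPLOYEE set last_name = last_name where employee_id = 100;
SQL> commit;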

-- View to verify

SELECT *
FROM   dba_audit_trail
WHERE
/*action_name='UPDATE' */ -- if to get only who updated
/* and username='L2_MASTER'*/ -- if to get who did required action
trunc(extended_timestamp)=trunc(sysdate) -- date frame to find data
/*and object_name='EMPLOYEE'*/ -- which object you want to search for as part of audit
and action_name not like 'LOG%'; -- avoid LOGON and LOGOFF

OR

select * from DBA_COMMON_AUDIT_TRAIL
where trunc(extended_timestamp)=trunc(sysdate)
and statement_type not like 'LOG%';
/* add more filter conditions as per your requirement */

-- Find all actions

SELECT * FROM dba_audit_trail;
SELECT * FROM dba_audit_session;
SELECT * FROM dba_audit_object;


Views associated with audit:
====================

VIEW_NAME
------------------------------
DBA_AUDIT_EXISTS
DBA_AUDIT_OBJECT
DBA_AUDIT_POLICIES
DBA_AUDIT_POLICY_COLUMNS
DBA_AUDIT_SESSION
DBA_AUDIT_STATEMENT
DBA_AUDIT_TRAIL
DBA_COMMON_AUDIT_TRAIL
DBA_FGA_AUDIT_TRAIL
DBA_OBJ_AUDIT_OPTS
DBA_PRIV_AUDIT_OPTS
DBA_REPAUDIT_ATTRIBUTE
DBA_REPAUDIT_COLUMN
DBA_STMT_AUDIT_OPTS

-- main views

DBA_AUDIT_TRAIL - Standard auditing only (from AUD$).
DBA_FGA_AUDIT_TRAIL - Fine-grained auditing only (from FGA_LOG$).
DBA_COMMON_AUDIT_TRAIL - Both standard and fine-grained auditing.


Purging Audit Records from the Audit Trail:
================================

After auditing is enabled for some time, the security administrator may want to delete records from the database audit trail both to free audit trail space and to facilitate audit trail management.

For example, to delete all audit records from the audit trail, enter the following statement:

SQL> DELETE FROM SYS.AUD$;

Alternatively, to delete all audit records from the audit trail generated as a result of auditing the table emp, enter the following statement:

SQL> DELETE FROM SYS.AUD$ WHERE obj$name='EMP';

Archiving Audit Trail Information:
=======================

If audit trail information must be archived for historical purposes, then the security administrator can copy the relevant records to a normal database table (for example, using INSERT INTO table SELECT ... FROM SYS.AUD$ ...) or export the audit trail table to an operating system file.
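
For example, a minimal archiving sketch; the archive table name and the 90-day cut-off are hypothetical (NTIMESTAMP# is the record timestamp column in SYS.AUD$):

SQL> create table system.aud_archive as
select * from sys.aud$ where ntimestamp# < sysdate - 90;

SQL> delete from sys.aud$ where ntimestamp# < sysdate - 90;
SQL> commit;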


Listing Active Statement Audit Options:
=============================

The following query returns all the statement audit options that are set:

SELECT * FROM DBA_STMT_AUDIT_OPTS;

SELECT * FROM DBA_PRIV_AUDIT_OPTS;

FAQ:
1) What does 'SESSION REC' mean in the action_name column of dba_audit_trail?
Ans:
SESSION REC is an audit record inserted in AUD$ table each time a DDL or DML operation is audited BY SESSION.

An AUDIT can be activated with 2 possible options to indicate how audit records should be generated if the audited statement is issued multiple times within a single user session:

1. By session
2. By access

2) When I have enabled auditing with the AUDIT statements, how/where should I check the audited information?
Ans:
To review the contents of the audit trail, use:

DBA_AUDIT_EXISTS     - lists audit trail entries produced by AUDIT NOT EXISTS.
DBA_AUDIT_OBJECT     - contains audit trail records for all objects in the system.
DBA_AUDIT_SESSION    - lists all audit trail records concerning CONNECT and DISCONNECT.
DBA_AUDIT_STATEMENT  - lists audit trail records concerning GRANT, REVOKE, AUDIT,
                       NOAUDIT, and ALTER SYSTEM statements throughout the database.
DBA_AUDIT_TRAIL      - lists all audit trail entries.
DBA_OBJ_AUDIT_OPTS   - describes auditing options on all objects.
DBA_PRIV_AUDIT_OPTS  - describes current system privileges being audited across
                       the system and by user.
DBA_STMT_AUDIT_OPTS  - describes current system auditing options across the
                       system and by user.

3) How to audit logon and logoff times:
Ans :
Use audit statement like :
AUDIT ALL BY scott BY ACCESS;

Refer to the Oracle documentation to view the full list of audit events.
In case of doubt, please feel free to write a mail.

Jul 17, 2014

AWR Baseline Enhancements in Oracle Database 11g

AWR Baseline Enhancements in Oracle Database 11g : Compare AWR reports during issue period

Around the world, countless production databases are running, and one common complaint you will hear is, "My application is slow." Sometimes people even say the database itself is very slow.

Myth # The database was never slow, nor is it slow, nor will it be slow; it is the performance that may have degraded.

Performance degradation of the database over time happens when your database was performing optimally in the past, such as 3 months ago, but has gradually degraded to a point where it becomes noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report enables you to compare database performance between two periods of time. While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference between two periods (or two AWR reports, which equates to four snapshots). Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods. The two time periods selected for the AWR Compare Periods report can be of different durations. The report normalizes the statistics by the amount of time spent on the database for each time period and presents statistical data ordered by the largest difference between the periods.

For example, a batch workload that historically completed in the maintenance window between 10:00 p.m. and midnight is currently showing poor performance and completing at 2 a.m. You can generate an AWR Compare Periods report from 10:00 p.m. to midnight on a day when performance was good and from 10:00 p.m. to 2 a.m. on a day when performance was poor. The comparison of these reports should identify configuration settings, workload profile, and statistics that were different in these two time periods. Based on the differences identified, you can more easily diagnose the cause of the performance degradation.

Note:
Most of the procedures and functions in the DBMS_WORKLOAD_REPOSITORY package accept a DBID parameter, which defaults to the local database identifier. For that reason the following examples will omit this parameter.

This Topic contains the following sections:

1) Managing Baselines
2) Running the AWR Compare Periods Reports
3) Using the AWR Compare Periods Reports

1) Managing Baselines:

Baselines are an effective way to diagnose performance problems. AWR supports the capture of baseline data by enabling you to specify and preserve a pair or a range of snapshots as a baseline. The snapshots contained in a baseline are excluded from the automatic AWR purging process and are retained indefinitely.
A moving window baseline corresponds to all AWR data that exists within the AWR retention period. Oracle Database automatically maintains a system-defined moving window baseline. The default size of the window is the current AWR retention period, which by default is 8 days.

This section contains the following topics:

1.1) Creating a Baseline
1.2) Deleting a Baseline
1.3) Computing Threshold Statistics for Baselines

1.1) Creating a Baseline:
Before creating a baseline, carefully consider the time period you choose as a baseline because it should represent the database operating at an optimal level. In the future, you can compare these baselines with other baselines or snapshots captured during periods of poor performance to analyze performance degradation over time.

You can create the following types of baseline:

1.1.1) Fixed Baselines
1.1.2) The Moving Window Baseline
1.1.3) Baseline Templates

1.1.1) Fixed Baselines:

The fixed, or static, baseline functionality is a little more flexible in Oracle 11g compared to that of Oracle 10g. Originally, the DBMS_WORKLOAD_REPOSITORY package included a single CREATE_BASELINE procedure allowing you to define baselines using specific snapshot IDs. It now includes overloaded procedures and functions allowing baselines to be created using start and end times, which are used to estimate the relevant snapshot IDs. The functions have the same parameter lists as the procedures, but return the baseline ID. By default baselines are kept forever, but the new expiration parameter allows them to be automatically expired after a specified number of days.

E.g.,

-- To Create :

SQL>
exec DBMS_WORKLOAD_REPOSITORY.create_baseline(start_snap_id => 15807,end_snap_id => 15808,baseline_name => 'When_Good_perf',expiration => 60);

OR

SQL>
 exec DBMS_WORKLOAD_REPOSITORY.create_baseline(start_time => TO_DATE('16-JUL-2014 15:30', 'DD-MON-YYYY HH24:MI') ,end_time => TO_DATE('17-JUL-2014 15:30', 'DD-MON-YYYY HH24:MI') ,baseline_name => 'Tmp_when_good_perf',expiration => NULL );

-- To View :

SQL> SELECT baseline_id, baseline_name, START_SNAP_ID, 
       TO_CHAR(start_snap_time, 'DD-MON-YYYY HH24:MI') AS start_snap_time,       
       END_SNAP_ID,            
       TO_CHAR(end_snap_time, 'DD-MON-YYYY HH24:MI') AS end_snap_time
FROM   dba_hist_baseline
WHERE  baseline_type = 'STATIC'
ORDER BY baseline_id;

output:

BASELINE_ID BASELINE_NAME      START_SNAP_ID START_SNAP_TIME    END_SNAP_ID END_SNAP_TIME
----------- ------------------ ------------- ------------------ ----------- ------------------
          1 When_Good_perf             15807 16-JUL-2014 10:30        15808 16-JUL-2014 11:30
          2 Tmp_when_good_perf         15812 16-JUL-2014 15:30        15836 17-JUL-2014 15:30
            
-- To Rename : Baselines are renamed using the RENAME_BASELINE procedure.

SQL>
exec DBMS_WORKLOAD_REPOSITORY.rename_baseline(old_baseline_name => 'Tmp_when_good_perf',new_baseline_name => 'when_good_perf2');

-- To Drop : Baselines are dropped using the DROP_BASELINE procedure.

SQL>
exec DBMS_WORKLOAD_REPOSITORY.drop_baseline(baseline_name => 'Tmp_when_good_perf');

Note:
The current AWR retention period can be displayed by querying the RETENTION column of the DBA_HIST_WR_CONTROL view.

SQL> SELECT retention FROM dba_hist_wr_control;

RETENTION
-------------------------------------------------------------------------------
+00008 00:00:00.0

The retention period is altered using the MODIFY_SNAPSHOT_SETTINGS procedure, which accepts a RETENTION parameter in minutes.

SQL> exec DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 43200);  -- Minutes (= 30 Days).

1.1.2) The Moving Window Baseline:

Oracle 11g introduces the concept of a moving window baseline, which is used to calculate metrics for the adaptive thresholds. The window is a view of the AWR data within the retention period. The default size of the window matches the default AWR retention period of 8 days, but it can be set as a subset of this value. Before you can increase the size of the window you must first increase the size of the AWR retention period.

The current moving window size is displayed by querying the DBA_HIST_BASELINE view.

SQL> SELECT moving_window_size
FROM   dba_hist_baseline
WHERE  baseline_type = 'MOVING_WINDOW';

Output :

MOVING_WINDOW_SIZE
------------------
                 8

1 row selected.

The size of the moving window baseline is altered using the MODIFY_BASELINE_WINDOW_SIZE procedure, which accepts a WINDOW_SIZE parameter in days.

-- To Change value

SQL> exec DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(window_size => 25);


SQL> SELECT moving_window_size
FROM   dba_hist_baseline
WHERE  baseline_type = 'MOVING_WINDOW';

MOVING_WINDOW_SIZE
------------------
                25

1 row selected.

Note :
Oracle recommends a window size greater than or equal to 30 days when using adaptive thresholds.

1.1.3) Baseline Templates

Baseline templates allow you to define baselines you would like to capture in the future. Overloads of the CREATE_BASELINE_TEMPLATE procedure define the capture of individual baselines, or repeating baselines. Creating a single baseline template is similar to creating a time-based baseline, except the time is in the future.

SQL> 
begin
DBMS_WORKLOAD_REPOSITORY.create_baseline_template(
    start_time    => TO_DATE('17-JUL-2014 10:00', 'DD-MON-YYYY HH24:MI'),
    end_time      => TO_DATE('17-JUL-2014 18:00', 'DD-MON-YYYY HH24:MI'),
    baseline_name => '17jul2014_05_baseline',
    template_name => '17jul2014_05_template',
    expiration    => 100);
END;
/

Templates for repeating baselines are a little different as they require some basic scheduling information. The START_TIME and END_TIME parameters define when the template is activated and deactivated. The DAY_OF_WEEK, HOUR_IN_DAY and DURATION parameters define the day (MONDAY -
SUNDAY or ALL) the baselines are generated on and the start and end point of the baseline. Since the template will generate multiple baselines, the baseline name is derived from the BASELINE_NAME_PREFIX concatenated to the date. The following example creates a template that will run for the next six months, gathering a baseline every Monday between 00:00 and 05:00.


BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline_template(
   day_of_week          => 'MONDAY',
   hour_in_day          => 0, 
   duration             => 5,
   start_time           => SYSDATE,
   end_time             => ADD_MONTHS(SYSDATE, 6),
   baseline_name_prefix => 'mon_morning_baseline',
   template_name        => 'mon_morning_template',
   expiration           => NULL);
END;
/

Information about baseline templates is displayed using the DBA_HIST_BASELINE_TEMPLATE view.

SELECT template_name,template_type, baseline_name_prefix, start_time,
       end_time, day_of_week, hour_in_day, duration, expiration
FROM   dba_hist_baseline_template;


TEMPLATE_NAME          TEMPLATE_TYPE BASELINE_NAME_PREFIX   START_TIME  END_TIME    DAY_OF_WEEK HOUR_IN_DAY DURATION EXPIRATION
---------------------- ------------- ---------------------- ----------- ----------- ----------- ----------- -------- ----------
17jul2014_05_template  SINGLE        17jul2014_05_baseline  7/17/2014 1 7/17/2014 6                                         100

SQL>

Fig: awr_baseline.jpg

-- To drop base line template

Baseline templates are dropped using the DROP_BASELINE_TEMPLATE procedure.

SQL> exec DBMS_WORKLOAD_REPOSITORY.drop_baseline_template (template_name => '17jul2014_05_template');
SQL> exec DBMS_WORKLOAD_REPOSITORY.drop_baseline_template (template_name => 'mon_morning_template');

2) Running the AWR Compare Periods Reports

Comparing a Baseline to Another Baseline or Pair of Snapshots: When performance degradation happens to a database over time, you should run the AWR Compare Periods report to compare the degraded performance, captured as a new baseline or a pair of snapshots, to an existing baseline. You will need a baseline that represents the system operating at an optimal level. If an existing baseline is not available, then you can compare database performance between two periods of time by using two arbitrary pairs of snapshots, as described in "Comparing Two Pairs of Snapshots"

AWR Compare Periods Report
– awrddrpt.sql – single instance
– awrgdrpt.sql - RAC

While taking compare-period AWR reports, you have to enter the begin and end snap values of the good-performance period first, and then the begin and end snap values of your issue period.

See: awr_baseline_graph.jpg
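
The same comparison can also be produced directly from SQL*Plus; a minimal sketch using the documented pipelined function DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT (the DBID and snapshot IDs below are hypothetical; get the real DBID from V$DATABASE):

SQL> SELECT output FROM TABLE(
       DBMS_WORKLOAD_REPOSITORY.awr_diff_report_text(
         1234567890, 1, 15807, 15808,    -- DBID, instance, begin/end snap: good period
         1234567890, 1, 15901, 15902));  -- DBID, instance, begin/end snap: issue period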

3) Using the AWR Compare Periods Reports:

You may face statements like the one below while working as a DBA:

... My database was running fine yesterday, but it is really slow today. What has changed? ...

The best way to answer is to use the AWR Compare Periods report to identify changes in database performance. As you have snapshots from a good-performance period for comparison, you can now take advantage of them.

See below figures while comparing two snaps:

compare_top_timed_ev.jpg

compare_timemodel.jpg

compare_loadprofile.jpg

The Load Profile / Time Model shows a reduction in DB time per second and per transaction after SQL query tuning; overall performance has improved. The tuning activity was successful.

Hope this document may help you.


Jul 15, 2014

Resolve DBA_2PC_PENDING Entries : Distributed Transactions

Resolve DBA_2PC_PENDING Entries issues in case of distributed transactions in Oracle Database

Applies to: 
Oracle Server - Enterprise Edition - Version: > 8

Information in this document applies to any platform.

Purpose:
The purpose of this bulletin is to assist support analysts in understanding and resolving stranded dba_2pc entries.

Discussion points:

1. Problem Description

2. Solutions

   2.1 Dba_2pc entries without a corresponding transaction

   2.2 Distributed transaction without corresponding dba_2pc entries

   2.3 How to purge a distributed transaction in PREPARED state when COMMIT or ROLLBACK
       FORCE hangs, and entries exist for both the distributed transaction and the dba_2pc views

1. Problem Description:

As a result of a failed commit of a distributed transaction, some entries can be left in the dba_2pc views, i.e. dba_2pc_pending and dba_2pc_neighbors. The RECO process checks these views to recover the failed transaction. However, in some cases RECO cannot perform the recovery. One cause is that all sites involved in the transaction are not accessible at the same time. Another cause is the dba_2pc views being inconsistent with the transaction table, which is the topic of this article. This cause can further be classified as follows:


   1. dba_2pc views have entries for a non-existent distributed transaction
   2. There is a distributed transaction for which there are no entries in dba_2pc views
   3. How to purge a distributed transaction in PREPARED state when COMMIT or ROLLBACK
      FORCE hangs, and entries exist for both the distributed transaction and the dba_2pc views

Solutions to each subclass are provided in the rest of the article.

2. Solutions:
2.1 Dba_2pc entries without a corresponding transaction. In this case the dba_2pc views show distributed transactions, but no such transactions exist in reality. If the state of the transaction is committed, rollback forced, or commit forced, then this is normal and it can be cleaned up using dbms_transaction.purge_lost_db_entry.

However, if the state of the transaction is PREPARED and there is no entry in the transaction table for it then this entry can be cleaned up manually as follows:

SQL> set transaction use rollback segment SYSTEM;
SQL> delete from sys.pending_trans$ where local_tran_id = '<local_tran_id>';
SQL> delete from sys.pending_sessions$ where local_tran_id = '<local_tran_id>';
SQL> delete from sys.pending_sub_sessions$ where local_tran_id = '<local_tran_id>';

SQL> commit;

Example: The following query reports a distributed transaction in the prepared state:

SQL> select local_tran_id, state from dba_2pc_pending;

           LOCAL_TRAN_ID          STATE
           ---------------------- ----------------
           29.32.93725             prepared

Given that a transaction ID is composed of the <undo segment#, slot#, sequence#> triple, '29.32.93725' is located in rollback segment# 29. To find the list of active transactions in that rollback segment, use:

SQL>  SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
             KTUXESTA Status, KTUXECFL Flags
      FROM x$ktuxe
      WHERE ktuxesta!='INACTIVE'
      AND ktuxeusn= 29; <== this is the rollback segment#

     no rows selected
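
(Incidentally, the undo segment# can be derived from the local transaction ID itself; a small sketch:)

SQL> SELECT local_tran_id,
       TO_NUMBER(SUBSTR(local_tran_id, 1, INSTR(local_tran_id, '.') - 1)) AS undo_seg#
FROM   dba_2pc_pending;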

It is not possible to rollback force or commit force this transaction.

SQL>  rollback force '29.32.93725';

ORA-02058: no prepared transaction found with ID 29.32.93725

Hence, we have to manually cleanup that transaction:

set transaction use rollback segment SYSTEM;

SQL>  delete from sys.pending_trans$
 where local_tran_id = '29.32.93725';
SQL>  delete from sys.pending_sessions$ where local_tran_id = '29.32.93725';
SQL>  delete from sys.pending_sub_sessions$ where local_tran_id = '29.32.93725';
SQL>  commit;

2.2 Distributed transaction without corresponding dba_2pc entries

In this case dba_2pc views are empty but users are receiving distributed txn related errors, e.g. ORA-2054, ORA-1591. Normally such a case should not appear and if it is reproducible a bug should be filed. Here is the list of several alternative solutions that can be used in this case:

     a. Perform incomplete recovery
     b. Truncate the objects referenced by that transaction and import them
     c. Use _corrupted_rollback_segments parameter to drop that rollback segment
     d. Insert dummy entries into dba_2pc views and either commit or rollback force the distributed transaction.

The first three solutions are discussed in Backup and Recovery manuals and in  the notes referred above. In the 4th solution a dummy entry is inserted into  the dictionary so that the transaction can be manually committed or rolled back.

Note that RECO will not be able to process this txn and distributed txn recovery should be disabled before using this method. Furthermore, please take a BACKUP of your database before using this method.

The following example describes how to diagnose and resolve this case. Suppose that users are receiving

ORA-1591: lock held by in-doubt distributed transaction 29.32.93725 and the following query returns no rows:

SQL> select local_tran_id, state from dba_2pc_pending where local_tran_id='29.32.93725';
     no rows selected

Furthermore querying the rollback segment shows that 29.32.93725 remains in prepared state

SQL> SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
           KTUXESTA Status, KTUXECFL Flags
    FROM x$ktuxe
    WHERE ktuxesta!='INACTIVE'
     AND ktuxeusn= 1;  /* <== Replace this value with your txn undo seg#,
                             which is displayed in the first part of the transaction ID */

     KTUXEUSN   KTUXESLT   KTUXESQN STATUS           FLAGS
    ---------- ---------- ---------- ---------------- ------------------------
             1         92      66874 PREPARED         SCO|COL|REV|DEAD

Trying to manually commit or rollback this transaction
SQL> commit force '29.32.93725';

ORA-02058: no prepared transaction found with ID 29.32.93725

raises ORA-02058 since dba_2pc views are empty. In order to use commit force or  rollback force a dummy record should be inserted into pending_trans$ as follows:

SQL> alter system disable distributed recovery;
    insert into pending_trans$ ( LOCAL_TRAN_ID, GLOBAL_TRAN_FMT,
        GLOBAL_ORACLE_ID, STATE, STATUS, SESSION_VECTOR,
        RECO_VECTOR, TYPE#, FAIL_TIME, RECO_TIME)
    values( '29.32.93725', /* <== Replace this with your local tran id */
        306206,                  /*                                         */
        'XXXXXXX.12345.1.2.3',   /*  These values can be used without any    */
        'prepared','P',          /*  modification. Most of the values are   */
        hextoraw( '00000001' ),  /*  constant.                              */
        hextoraw( '00000000' ),  /*                                         */
        0, sysdate, sysdate );

    insert into pending_sessions$
    values( '29.32.93725',/* <==Replace only this with your local tran id */
        1, hextoraw('05004F003A1500000104'),
        'C', 0, 30258592, '',
        146
      );

   commit;
   commit force '29.32.93725';

If commit force raises an error, then note the error message and execute the following:

SQL> delete from pending_trans$ where local_tran_id='29.32.93725';
SQL> delete from pending_sessions$ where local_tran_id='29.32.93725';
SQL>   commit;
SQL> alter system enable distributed recovery;

Otherwise, purge the dummy entry from the dictionary and re-enable distributed recovery:

SQL> connect / as sysdba
SQL> exec dbms_transaction.purge_lost_db_entry('29.32.93725');
SQL> commit;
SQL> alter system enable distributed recovery;

The hidden _smu_debug_mode parameter is relevant when PURGE_LOST_DB_ENTRY fails under automatic undo management. Use the following query to retrieve its current value:
col Parameter for a20
col "Session Value" for a20
col "Instance Value" for a20

SQL> SELECT a.ksppinm "Parameter",b.ksppstvl "Session Value",c.ksppstvl "Instance Value"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE a.indx = b.indx AND a.indx = c.indx AND a.ksppinm = '_smu_debug_mode'
/

-- set it temporarily to 4 (if automatic undo management is being used):
SQL> alter system set "_smu_debug_mode" = 4;

2.3 How to purge a distributed transaction in PREPARED state when COMMIT or ROLLBACK FORCE hangs, and entries exist for both the distributed transaction and the dba_2pc views

ORA-01591: lock held by in-doubt distributed transaction 19.8.406254

The rows exist in both dba_2pc_pending and the rollback segment:

SQL> SELECT LOCAL_TRAN_ID,STATE FROM DBA_2PC_PENDING;
LOCAL_TRAN_ID STATE
----------------- -----------
19.8.406254 prepared

SQL> SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
KTUXESTA Status, KTUXECFL Flags
FROM x$ktuxe
WHERE ktuxesta!='INACTIVE'
AND ktuxeusn= 44; /* <== Replace this value with your txn undo seg#,
                        which is displayed in the first part of the transaction ID */

KTUXEUSN KTUXESLT KTUXESQN STATUS FLAGS
---------- ---------- ---------- ---------------- ------------------------
44 88 85589 PREPARED SCO|COL|REV|DEAD

SQL> commit force '19.8.406254';
SQL> rollback force '19.8.406254';

Executing COMMIT or ROLLBACK FORCE hangs; the wait event is "free global transaction table entry". Purging the transaction then fails with the error below:

SQL> EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('19.8.406254');

BEGIN DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('19.8.406254'); END;
*
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_TRANSACTION", line 94
ORA-06512: at line 1

Solution:
--------
You have to implement both solutions:
2.1 Dba_2pc entries without a corresponding transaction
2.2 Distributed transaction without corresponding dba_2pc entries

1. Delete the existing entries:
SQL> delete from sys.pending_trans$ where local_tran_id = '19.8.406254';
SQL> delete from sys.pending_sessions$ where local_tran_id = '19.8.406254';
SQL> delete from sys.pending_sub_sessions$ where local_tran_id ='19.8.406254';
SQL>    commit;

2. Now insert the dummy record as explained in section 2.2
   (Distributed transaction without corresponding dba_2pc entries), and commit.
3. Commit force: commit force '19.8.406254';
4. Purge the transaction:

SQL>   exec dbms_transaction.purge_lost_db_entry('19.8.406254');
You can prepare a dynamic query for the cleanup if you have a large number of force-committed / force-rolled-back transactions:

SQL> select 'execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('''||local_tran_id||''');' ||chr(10)||'commit;' from dba_2pc_pending where state='forced commit';

Spool the output to a .sql file and run it in a SQL window.

Thanks

Jul 7, 2014

Oracle database startup stages and commands

Database startup stages and commands in Oracle

Starting an Oracle database with the startup command is not too complex; actually, it is just a matter of running a few commands and your database will come up happily, ready for normal operation.

For example to start your Oracle database you can just login and execute startup command as follows.

$sqlplus / as sysdba

SQL>startup

This command will start your database. But in the background there are a few stages that are hidden when you use the above command. Understanding these stages will help you to get a better insight view of Oracle startup process.

The Oracle startup process consists of three stages:

Stage 1: NOMOUNT

Stage 2: MOUNT

Stage 3: OPEN


Stage 1: NOMOUNT

This is the first stage in the startup process. You can start an Oracle database in NOMOUNT mode using the command

SQL>startup nomount;

When you execute the above command, an Oracle instance is started. When the instance starts, it reads the initialization file (commonly known as the parameter file), initDBNAME.ora, or the server parameter file, spfileDBNAME.ora. From this parameter file the instance learns the size of the SGA, the PGA, the database buffer cache and other configurable parameters. The instance also starts the Oracle background processes (PMON, SMON, LGWR, etc.). This stage also opens the alert log and the trace files.
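
You can confirm which stage the instance is in at any point; STATUS shows STARTED in nomount, MOUNTED in mount, and OPEN once the database is open:

SQL> select instance_name, status from v$instance;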


Stage 2: MOUNT

The next stage after NOMOUNT is called MOUNT. You can manually start an Oracle database in the MOUNT stage using the command

SQL>startup mount;

Or, when the database is already in the nomount stage, you can change the stage by running the command

SQL>alter database mount;

When the database goes into the mount stage, it reads the control files specified in the parameter file. Remember, the parameter file was read in the first stage (nomount). The control files contain information about the physical structure of the database, so the control file holds the names and locations of all the datafiles and online redo log files. At this stage these datafiles and log files are not yet opened.

Some database administration operations can only be performed when the Oracle database is in the MOUNT stage. For example, Oracle full database recovery can be done only when the database is in the mount stage. If you want to rename a datafile, you may need to take the database to the mount stage unless the tablespace of the datafile is already offline, as sketched below.
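
A minimal sketch of a mount-stage datafile rename (the paths are hypothetical, and the file must already have been copied or moved at the OS/ASM level):

SQL> startup mount;
SQL> alter database rename file '/u01/oradata/PROD/users01.dbf' to '/u02/oradata/PROD/users01.dbf';
SQL> alter database open;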


Stage 3: OPEN

This is the final stage in the Oracle startup process. Only when the database is open can normal database operations take place, which means users and applications can log in and start reading/writing data.

Running the command below will start the Oracle database and put it into the OPEN stage.

SQL>startup

And if the database is already in the MOUNT stage, then you can open the database using the command

SQL> alter database open;

When the database is open it will open the datafiles and redo log files. If any of these files are missing or corrupted, then Oracle will not open successfully and will return an error.


Demo:

-- Verify running database pmon process

$ ps -ef|grep pmon
oracle   21502     1  0 Jun03 ?        00:06:20 ora_pmon_PROD
oracle   28932 28900  0 20:41 pts/0    00:00:00 grep pmon

$ export ORACLE_SID=PROD

-- Close / Down the database

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Jul 7 20:41:57 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

-- Open the database

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2230072 bytes
Variable Size             452987080 bytes
Database Buffers         1677721600 bytes
Redo Buffers                4947968 bytes
SQL>
SQL> alter database mount;

Database altered.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD       MOUNTED

SQL> alter database open;

Database altered.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD       READ WRITE

SQL>
