Mar 27, 2015

Troubleshoot RMAN-06214: Archivelog issues

How to clear obsolete and expired archived logs in Oracle:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Archive Log          12627  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc
Archive Log          12628  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11106_bhpcx03g_.arc
Archive Log          12629  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11107_bhpjbn67_.arc

-- Delete obsolete archived logs; this may not delete expired ones

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1180 device type=DISK
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Archive Log          12627  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc
Archive Log          12628  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11106_bhpcx03g_.arc
Archive Log          12629  07-MAR-15          /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11107_bhpjbn67_.arc

RMAN-06207: WARNING: 3 objects could not be deleted for DISK channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Archivelog      /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc
RMAN-06214: Archivelog      /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11106_bhpcx03g_.arc
RMAN-06214: Archivelog      /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11107_bhpjbn67_.arc



RMAN> crosscheck archivelog all;


released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1180 device type=DISK
validation failed for archived log
archived log file name=/u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc RECID=12627 STAMP=873750716

.....
.....

-- Delete all expired archivelogs

RMAN> delete expired archivelog all;


released channel: ORA_DISK_1

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=47 device type=DISK

List of Archived Log Copies for database with db_unique_name EHISBBS

=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
12627   1    11105   X 07-MAR-15 Name: /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc
12628   1    11106   X 07-MAR-15 Name: /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11106_bhpcx03g_.arc
12629   1    11107   X 07-MAR-15 Name: /u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11107_bhpjbn67_.arc


Do you really want to delete the above objects (enter YES or NO)? y

deleted archived log

archived log file name=/u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11105_bhp4q3tb_.arc RECID=12627 STAMP=873750716

deleted archived log

archived log file name=/u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11106_bhpcx03g_.arc RECID=12628 STAMP=873758072

deleted archived log

archived log file name=/u03/fast_recovery_area/EHISBBS/archivelog/2015_03_07/o1_mf_1_11107_bhpjbn67_.arc RECID=12629 STAMP=873762604

Deleted 3 EXPIRED objects


-- Run the delete again to confirm nothing expired remains

RMAN> delete expired archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1180 device type=DISK
specification does not match any archived log in the repository


RMAN> exit


Now no expired archived logs remain.
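For routine cleanup, the whole sequence above can be condensed into one unattended pass (a sketch; NOPROMPT suppresses the YES/NO confirmation):

RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired archivelog all;
RMAN> delete noprompt obsolete;

Running the crosscheck first fixes the mismatched status, so the deletes no longer raise RMAN-06207.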


Mar 18, 2015

ORA-31685 error during import using IMPDP

Issue:

While importing or remapping a schema, you may face this issue if the imported schema creates foreign keys that reference tables owned by another schema. See the error below:


ORA-31685: Object type REF_CONSTRAINT:"CRM"."PARAMMAPPINGID_FK" failed due to insufficient privileges. Failing sql is:
ALTER TABLE "CRM"."OPIASTATUS" ADD CONSTRAINT "PARAMMAPPINGID_FK" FOREIGN KEY ("PARAMMAPPINGID") REFERENCES "HR"."PARAMETERMAPPING" ("PARAMMAPPINGID") ENABLE

Solution:

SQL> grant REFERENCES on "HR"."PARAMETERMAPPING" to CRM;

Grant REFERENCES like the above for every failed FK constraint, then collect the failing ALTER TABLE statements from the import log into a script and execute it.
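Rather than writing each grant by hand, the grants can be generated from the data dictionary. A sketch, assuming the importing schema is CRM and the referenced schema is HR, as in the example above:

-- Generate one GRANT REFERENCES per table that CRM's foreign keys
-- point at in the HR schema (schema names are from the example)
select distinct 'grant REFERENCES on "' || rc.owner || '"."' ||
                rc.table_name || '" to CRM;' as grant_stmt
  from dba_constraints fk
  join dba_constraints rc
    on rc.owner = fk.r_owner
   and rc.constraint_name = fk.r_constraint_name
 where fk.owner = 'CRM'
   and fk.constraint_type = 'R'
   and fk.r_owner = 'HR';

Run the generated statements as a DBA (or the referenced schema owner), then re-execute the failed constraint DDL taken from the import log.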


Mar 17, 2015

"enq: SS - contention" and "DFS lock handle" Oracle events in Oracle 11g RAC

Issue analysis of the "enq: SS - contention" and "DFS lock handle" Oracle wait events in Oracle 11g RAC

This enqueue usually appears during sorting (sort segment issues).

In some cases you can see a "hang" situation when someone is modifying a file used for temp space. In these cases, the waits on TEMP space will include the events below:

-- This script was used to Find the blocking sessions

select INST_ID,
       sid,
       serial#,
       BLOCKING_SESSION,
       username,
       EVENT,
       status,
       BLOCKING_SESSION_STATUS "block_stat",
       program,
       sql_id
  from v$session a
 where BLOCKING_SESSION IS NOT NULL;


-- Some sample:

SID: 1712
EVENT: enq: SS - contention
P1TEXT: name|mode
P2TEXT: tablespace #
P3TEXT: dba
WAIT_CLASS#: 0
WAIT_CLASS: Other

-- This script was used to find the source of the TEMP usage, in this case, SS contention:

select distinct u.username,
                u.osuser,
                w.event,
                w.p2text as reason,
                ts.name as tablespace,
                nvl(ddf.file_name, dtf.file_name)
  from v$session_wait w, v$session u, v$tablespace ts
  left outer join dba_data_files ddf on ddf.tablespace_name = ts.name
  left outer join DBA_TEMP_FILES dtf on dtf.tablespace_name = ts.name
 where u.sid = w.sid
   and w.p2 = ts.TS#
   and w.event = 'enq: SS - contention';

-- Drill down into the wait parameters for a given event (replace 'xxx' with the event name)

select p1 "File #", p2 "Block #", p3 "Reason Code"
  from v$session_wait
 where event = 'xxx';

Note:

Next, you can find the source of the hanging contention by decoding the p1, p2 and p3 values:

P1-->The absolute file number for the data file involved in the wait.
P2-->The block number within the data file referenced in P1 that is being waited upon.
P3-->The reason code describing why the wait is occurring.


Solution I applied and working fine:

1) If you have 'n' nodes in RAC, add at least 'n' tempfiles to your TEMP tablespace.
2) Give sufficient space to the tempfiles.
3) Control global temporary table (GTT) usage in the application.
4) Avoid unnecessary sorting; for example, when the leading indexed column is selected and the same column drives the ORDER BY, the index can return the rows already ordered.
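Point 1 can be sketched as follows (the file path and sizes are examples, not values from the original system):

SQL> alter tablespace TEMP add tempfile '/u02/PROD/oradata/temp02.dbf'
     size 4G autoextend on next 512M maxsize 16G;

Repeat with a distinct tempfile per node so each instance has local temp extents to work with.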

Background:- DFS Lock Handle

DFS (distributed file system) is an old name, associated with cluster file system operations in the lock manager supplied by vendors in the Oracle Parallel Server environment (the prior name for RAC). This wait event has since morphed and is now also associated with waits unrelated to database files.

This occurs in RAC environments, typically with sequences, especially when a sequence combines CACHE with ORDER.

That is, a sequence created like this: create sequence s1 minvalue 1 cache 20 order;

Because RAC is a multi-instance environment, the sequence values must be synchronized across instances whenever ORDER is requested.

Showing a real-time issue:

For example consider this sequence of sessions and their possible waits while accessing sequence next value:-

Session 1 on node-A: nextval -> 1001 (DFS Lock handle) (CR read)
Session 2 on node-A: nextval -> 1002
Session 1 on node-B: nextval -> 1003 (DFS Lock handle)
Session 1 on node-B: nextval -> 1004
Session 1 on node-A: nextval -> 1005 (DFS Lock handle)
Session 1 on node-A: nextval -> 1006 (more selects)
Session 1 on node-A: nextval -> 1998
Session 1 on node-B: nextval -> 1999 (DFS Lock handle)
Session 1 on node-B: nextval -> 2000 (CR read)

If you look in gv$session_wait_history, it shows as "DFS lock handle" with the p1 parameter being the object_id of the sequence.

Solution (in the case of a high-transaction OLTP system):

Create sequences with nocache.
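For example (a sketch; s1 matches the sequence above, and the START WITH value is an assumption — dropping and recreating a sequence resets its current value, so on a live system pick a START WITH above the last value used):

SQL> drop sequence s1;
SQL> create sequence s1 minvalue 1 start with 2001 nocache;

NOCACHE removes the cross-instance cache synchronization at the cost of a dictionary update per nextval, so weigh it against your insert rate.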



Mar 12, 2015

Drilling Index usage in Database - Index Administration

Drilling Index usage in Database  (Index  Administration)
Here we will discuss following about index usage:

  • About Index usage
  • Checking Index usage.
  • Index Monitoring
  • Querying  the history data


1) About Index usage
Indexes are optional objects, but they exist in every database, small or big.
Their only purpose is to fetch data faster, either by rowid lookup or by a fast full scan.

Fast full scan: used when all the columns the query requires are present in the index.

To tune a query, people often think first of creating an index, but that is not always the right approach; consider other tuning techniques before adding one.

"To get something you have to pay something" applies to indexes: DML operations become slower because the index stores the rowid along with the indexed column values, so whenever DML happens Oracle must update the index structure accordingly, costing time and extra storage.

In this section we will not discuss when to create an index; instead we focus on whether the indexes we created for our application are actually used by its queries.

There is no real shortcut to check whether your indexes are used, especially if yours is a third-party application, but there are a few methods by which we can conclude whether an index is used and which queries and programs use it.


2) Checking Index usage.

Query to find the number of times an index was used, per statement:

select count(*) "#SQL", sum(executions) "#EXE", object_name, options
  from v$sql_plan_statistics_all
 where operation = 'INDEX'
   and object_owner = 'Schema Name'
   and object_name in
       (select index_name from all_indexes where table_name IN ('xx'))
 group by object_name, options
 order by object_name, options;

Query to find the access predicates and filter predicates, which are important for judging how the index is used:

select *
  from v$sql
 where sql_id in (select sql_id
                    from v$sql_plan_statistics_all
                   where operation = 'INDEX'
                     and object_owner = 'SCHEMA'
                     and object_name = 'INDEX_NAME'
                     and access_predicates like '%xx%');

If you want the full statement text in order, use V$SQLTEXT_WITH_NEWLINES instead of V$SQL.
This view gives you AWR-report-like information on which to base your index-usage decision.


3) Index Monitoring:

The easiest method is to enable index monitoring for all the indexes of interest.

Syntax:
Alter Index <Index_name> monitoring usage;

Data dictionary view: V$OBJECT_USAGE

It reports whether the index was used during the period in which monitoring was enabled.

V$OBJECT_USAGE

Column Name          Content
-------------------  -------------------
INDEX_NAME           Index name
MONITORING           YES or NO
USED                 YES or NO
START_MONITORING     Start date
END_MONITORING       End date
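The monitoring results can be queried like this (a sketch; note that V$OBJECT_USAGE shows only indexes owned by the current user, so run it as the index owner):

SQL> select index_name, monitoring, used, start_monitoring, end_monitoring
     from v$object_usage
     where index_name = 'INDEX_NAME';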

The drawback of this method is that it does not tell you which query used the index or how many times. It only records that the index was used, with no distinction between once and a thousand times.

To disable monitoring:

Alter Index <Index_name> nomonitoring usage;

4) Querying the history data:

DBA_HIST_SQL_TEXT
DBA_HIST_SNAPSHOT 
DBA_HIST_SQLSTAT
DBA_HIST_SQL_PLAN

Query to find the SQL text that uses the index, from the history data:

select *
  from dba_hist_sqltext
 where sql_id in (select pl.sql_id
                    from dba_hist_snapshot sn,
                         dba_hist_sqlstat  sq,
                         dba_hist_sql_plan pl
                   where SN.DBID = SQ.DBID
                     and SN.SNAP_ID = SQ.SNAP_ID
                     and SN.INSTANCE_NUMBER = SQ.INSTANCE_NUMBER
                     and SQ.SQL_ID = PL.SQL_ID
                     and PL.OBJECT_NAME = 'INDEX_NAME');

Suppose you now know that a particular index is unused and not useful. In production you cannot drop it straight away, because that could be risky; in 11g we have the invisible indexes feature.

When you make an index invisible, the optimizer cannot see it and will not use it (unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is TRUE, which is unlikely), but in the background Oracle keeps the index structure up to date.

select * from V$PARAMETER where name like '%invisible%';

Syntax for making an index invisible:
alter index <index_name> invisible;

More details about invisible indexes are in my next blog, on the difference between making an index unusable and making it invisible.

You can query DBA_VIEWS for information about these data dictionary views. I did this exercise to remove unused indexes from my application, which has many composite indexes on its transaction tables; in some cases the index size is larger than the data itself.




Mar 11, 2015

ORA-07445 issues for SYS_AUTO_SQL_TUNING_TASK job

ORA-07445 for SYS_AUTO_SQL_TUNING_TASK

Automatic SQL Tuning in Oracle Database 11g

As part of Automatic SQL Tuning, Oracle 11g automatically runs the SQL Tuning Advisor against high-impact SQL statements during maintenance windows. This process involves the following steps:
  • AWR statistics are used to compile an ordered list of the SQL statements with the greatest performance impact on the system, where the impact is the sum of the CPU and I/O times for the statement during the past week. The list excludes statements that are inherently less tunable, such as recently (within a month) tuned recursive statements, parallel queries, DML, DDL, and SQL statements whose performance problems are caused by concurrency issues.
  • The SQL Tuning Advisor is run against each statement in turn. The outcome may include both SQL profiles and other recommendations.
  • Suggested SQL profiles are performance tested, and those that result in at least a threefold improvement are accepted if the ACCEPT_SQL_PROFILES parameter is set to TRUE, or merely reported if it is set to FALSE. Accepted SQL profiles are optionally implemented. Several factors may prevent SQL profiles from being implemented automatically, including stale optimizer statistics on dependent objects. The TYPE column of the DBA_SQL_PROFILES view indicates whether a SQL profile was created manually (MANUAL) or automatically (AUTO-TUNE).
Oracle Database 11g Release 2 (11.2.0.2 onward) makes some minor changes to the Automatic SQL Tuning feature introduced in Oracle 11g Release 1. In the previous release, reports and amendments to the automatic tuning task parameters were performed using the DBMS_SQLTUNE package. From 11.2.0.2 onward, this should be done using the DBMS_AUTO_SQLTUNE package, which requires the DBA role.

Note: the top-level enabling and disabling of the admin task is still done using the DBMS_AUTO_TASK_ADMIN package.
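With the newer package, the latest tuning run's report can be pulled like this (a sketch; DBMS_AUTO_SQLTUNE.REPORT_AUTO_TUNING_TASK returns a CLOB, hence the SET LONG):

SQL> set long 1000000
SQL> select dbms_auto_sqltune.report_auto_tuning_task from dual;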


Issue:

If the automatic tuning task is hitting this problem, it will keep generating core dumps each time it runs.


Below sample errors may be found from alert log.
-----------------------------------------------

< Wed Mar 11 22:00:02 2015
< Begin automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
< Wed Mar 11 22:00:35 2015
< Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x20] [PC:0x47EDC6F, __intel_new_memcpy()+3343] [flags: 0x0, count: 1]
< Errors in file /u02/app/oracle/diag/rdbms/PROD/PROD/trace/PROD_j000_53026.trc  (incident=36860):
< ORA-07445: exception encountered: core dump [__intel_new_memcpy()+3343] [SIGSEGV] [ADDR:0x20] [PC:0x47EDC6F] [Address not mapped to object] []
< Incident details in: /u02/app/oracle/diag/rdbms/PROD/PROD/incident/incdir_36860/PROD_j000_53026_i36860.trc
< Use ADRCI or Support Workbench to package the incident.
< See Note 411.1 at My Oracle Support for error and packaging details.
< Wed Mar 11 22:00:43 2015
< Dumping diagnostic data in directory=[cdmp_20150311220043], requested by (instance=1, osid=53026 (J000)), summary=[incident=36860].
< Wed Mar 11 22:00:44 2015
< Sweep [inc][36860]: completed
< Sweep [inc2][36860]: completed
< Wed Mar 11 22:01:11 2015
< Thread 1 cannot allocate new log, sequence 2199
< Private strand flush not complete
<   Current log# 3 seq# 2198 mem# 0: /u02/PROD/oradata/redo03.log
< Thread 1 advanced to log sequence 2199 (LGWR switch)
<   Current log# 4 seq# 2199 mem# 0: /u02/PROD/oradata/redo04.log
< Wed Mar 11 22:01:16 2015
< Archived Log entry 3026 added for thread 1 sequence 2198 ID 0x81a6dea4 dest 1:


Action:

Disable auto tuning task advisor.

To disable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/


If required to enable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

To verify:

SELECT parameter_name, parameter_value
FROM   dba_advisor_parameters
WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
AND    parameter_name = 'ACCEPT_SQL_PROFILES';

output:

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ---------------
ACCEPT_SQL_PROFILES            FALSE




Mar 6, 2015

enq: KO - fast object checkpoint - a described bug in Oracle

Bug fixes on wait even "enq: KO - fast object checkpoint"

I found the "enq: KO - fast object checkpoint" wait event on one of our production databases. When I searched My Oracle Support, I found it matches a known bug that is fixed in later versions:

Bug present in: 11.2.0.2 / 11.2.0.3
-- Bug 16342845  Excessive CPU in DBW processes for fast object checkpoints

Fixed in:

•12.2 (Future Release)
•12.1.0.2 (Server Patch Set)
•11.2.0.4 (Server Patch Set)


See the below Documents:

1) Doc ID 16463153.8
2) Doc ID 16342845.8
3) Doc ID 1377830.1

Symptoms:

•Excessive CPU Usage
•Performance Affected (General)
•Waits for "enq: KO - fast object checkpoint"


But first analyze the application code and find which procedure or query is causing the issue:

Reason:1 - Analyzing "enq: KO - fast object checkpoint" enque

It is normal to see this wait event, and the associated "slowness", because TRUNCATE is a heavy and complex operation: Oracle must guarantee database consistency even if there is a crash during the operation.

The only real mitigation is to truncate less often.

(You can also place the table in the RECYCLE buffer pool, knowing this will slow down queries against it.)

Reason 2: Too few DBWR processes

Increase the number of DB writers in line with your system configuration. Follow the Oracle documentation.

Reason 3: DB_CACHE_SIZE left unset

Set an explicit value for the DB_CACHE_SIZE parameter, usually 25% to 30% of the SGA. Follow the Oracle documentation.

Reason 4: FILESYSTEMIO_OPTIONS may be set to "none"

Change the FILESYSTEMIO_OPTIONS parameter from "none" to "ASYNCH", or to "SETALL" where the storage configuration supports it. Follow the Oracle documentation.

Here are some parameters I changed (SGA sized for my database's requirement, and DB writers based on my available cores and transaction ratios); each line shows the old value, with the new value in the comment:

filesystemio_options= none # Set to 'ASYNCH' ( set SETALL for local disks)
sga_max_size =10737418240 # set to 12G
sga_target= 8589934592 # set 12G
DB_CACHE_SIZE = # set to 4G
DB_WRITER_PROCESSES = 2 set to 3
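The changes above can be sketched as ALTER SYSTEM commands (sizes are from this system, so adjust to yours; the static parameters take effect only after a restart from the SPFILE):

SQL> alter system set filesystemio_options='ASYNCH' scope=spfile;
SQL> alter system set sga_max_size=12G scope=spfile;
SQL> alter system set sga_target=12G scope=spfile;
SQL> alter system set db_cache_size=4G scope=both;
SQL> alter system set db_writer_processes=3 scope=spfile;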

Also set DB_KEEP_CACHE_SIZE to a non-zero value (for example, granule_size * cpu_count).

Note: DB_KEEP_CACHE_SIZE is a dynamic parameter, you can change online.

You can calculate a value for DB_KEEP_CACHE_SIZE as follows:
Example:
1) Number of CPUs in the system/server (from: show parameter cpu)
Assume the CPU count is 10.

2) Granule size :
SQL> connect / as sysdba;
SQL> select name,bytes from V$SGAINFO where name='Granule Size';

NAME                                  BYTES
-------------------------------- ----------
Granule Size                       33554432

i.e., 33554432 * 10 = 335544320 bytes to be set for the DB_KEEP_CACHE_SIZE parameter.
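Applying the computed value (a sketch; the figure comes from the example above, so recompute it for your own granule size and CPU count — the parameter is dynamic, hence scope=both):

SQL> alter system set db_keep_cache_size=335544320 scope=both;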

If the above steps do not help, you may follow the steps below with the help of Oracle Support. They must be executed during a complete off-peak window.

Check the following kernel limits in your Linux environment. If anything is missing, add it.

$cat /etc/security/limits.conf

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768

Set the below value :

# ulimit -Hs 32768
-- Check the output
# ulimit -Hs
32768

-- If you observe the same issue again, change the parameter below and flush the buffer cache and shared pool (at your own risk).

SQL> connect / as sysdba;
SQL> alter system set "_db_fast_obj_ckpt"=FALSE;
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
SQL> alter system flush shared_pool;

I hope this helps.
Please add your feedback.



Resolving ORA-02374, 12899, 02372 issues during data-migration with different characterset

Resolving ORA-02374, ORA-12899, ORA-02372 issues during data-migration with different database character set

-- Error in import log

ORA-02374: conversion error loading table "crm"."transaction"
ORA-12899: value too large for column DETAILS_NOTE (actual: 4024, maximum: 4000)
ORA-02372: data for row: DETAILS_NOTE: 0X'434552564943414C205350494E450D0A0D0A4D756C7469706C' 

Issue Description:

I have transaction data in the table crm.transaction, in a database whose character set is WE8MSWIN1252. The DETAILS_NOTE column is VARCHAR2, and some of its values fully occupy the 4000-character limit. When importing with IMPDP into a database with the AL32UTF8 character set, the error above appears.


Root cause:

RCA: when data moves from the source character set to the new target character set, some special characters expand to as many as 3 bytes each, so values that previously fit now exceed the column limit, giving "ORA-12899: value too large for column RESULT (actual: 4024, maximum: 4000)".
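Affected rows can be estimated on the source database before the import. A sketch, using LENGTHB over CONVERT to approximate the byte length each value would need in AL32UTF8 (CONVERT gives only an estimate for characters without an exact mapping; column names are from the table DDL shown later):

select trnxid,
       lengthb(convert(details_note, 'AL32UTF8')) as bytes_in_utf8
  from crm.transaction
 where lengthb(convert(details_note, 'AL32UTF8')) > 4000;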

Solution :

Target Character-set : AL32UTF8

Method-1:

  • Migrate your schema or database.
  • Create a dblink from the source to the target database.
  • Disable constraints.
  • Alter the table as in the example:
            SQL> alter table crm.transaction modify DETAILS_NOTE VARCHAR2(4000 CHAR);
  • Now insert the data over the dblink.


Method-2:

  • Migrate your schema or database.
  • Disable constraints.
  • Alter the table as in the example:
            SQL> alter table crm.transaction modify DETAILS_NOTE VARCHAR2(4000 CHAR);
  • Import the table from the logical dump, in a format like the one below.

impdp full=N directory=data_pump dumpfile=PROD_full_%U.dmp logfile=PROD_trx_imp.log TABLE_EXISTS_ACTION=TRUNCATE schemas=CRM TABLES=CRM.transaction REMAP_TABLE=HR.transaction:transaction

See the DDL of the Table after alter:

create table crm.transaction
(
  trnxid           NUMBER not null,
  rsltrpthdrid     NUMBER not null,
  paramdetailid    NUMBER not null,
  details_note     VARCHAR2(4000 CHAR),
  unitid           NUMBER
) tablespace CRM_TBLS;

Thanks.

