Sep 30, 2014

Upgrade Oracle database from 10.2.0.4 to 11.2.0.4 ( Real time steps)

Upgrade your database from Oracle 10.2.0.4 to 11.2.0.4 database
Sharing real-time knowledge and issues from Oracle DBA work

1. Upgrade Procedure

      1. Current database version is 10.2.0.4.0 64-bit
      2. Install the 11.2.0.4 software in a new home on the current server
      3. Direct upgrade to 11.2.0.4

2. Download 11.2.0.4 software

Patch 10404530: 11.2.0.4.0 PATCH SET FOR ORACLE DATABASE SERVER
Platform: Linux x86-64

Download above software from Oracle Support and keep in a staging location

3. Check Kernel version & OS Version

2.6.32 or later
uname -a (use this command to check the kernel version and OS version)

4. Create new ORACLE_HOME location

mkdir -p /u01/app/oracle/product/11.2.0/db_1

Set the Oracle Inventory location:

Edit /etc/oraInst.loc to point to /u01/app/oraInventory
mkdir -p /u01/app/oraInventory

5. Install 11.2.0.4 Enterprise edition Oracle software

Enable VNC, Enter following command on VNC to start OUI for installation
/u03/software/oracle11204/runInstaller

Use new ORACLE_HOME for 11.2.0.4 as /u01/app/oracle/product/11.2.0/db_1

6. Install Oracle Database 11g Products from the 11g Examples CD

Enable VNC, Enter following command on VNC to start Example Installer
/u03/stage/11203_sw/examples/runInstaller

7. Set the environment

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib


8. Take full backup before upgrade activity.
       RMAN and EXPDP
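A minimal sketch (the Data Pump directory and dump/log file names below are placeholders; adjust to your backup standards):

rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP CURRENT CONTROLFILE;

expdp system FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=prod_pre_upgrade.dmp LOGFILE=prod_pre_upgrade.log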
9. Apply recommended Patches for 11.2.0.4 ORACLE_HOME

Use OPatch to apply the recommended patches in the new 11.2.0.4 home.
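For example (the patch number and staging path below are placeholders):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/OPatch:$PATH
cd /u03/software/patches/<patch_number>
opatch apply
opatch lsinventory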

10. Apply 11.2.0.4 Performance patches:

11. Set the environment to Existing 10.2.0.4 Environment

Set ORACLE_SID, ORACLE_HOME, PATH and LD_LIBRARY_PATH for the existing 10.2.0.4 environment, then log in to the PROD database.
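A minimal sketch (the 10.2.0.4 home path below is an assumption; use your actual existing home):

export ORACLE_SID=PROD
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
sqlplus '/ as sysdba'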

12. Run Pre-Upgrade Information Tool in 10.2.0.4 database (Must)

sqlplus '/ as sysdba'
spool pre_upg_11204_info.log
@/u01/app/oracle/product/11.2.0/db_1/rdbms/admin/utlu112i.sql
spool off

Check the output of the Pre-Upgrade Information Tool in pre_upg_11204_info.log (the spool file above).
Run utlrp.sql to fix any invalid objects or component issues.

Important: If you see a warning about the presence of the release 10g DMSYS schema in the database, one option is to drop the DMSYS schema before proceeding with the upgrade; it is best to raise an SR with Oracle Support for confirmation. The upgrade can also proceed without dropping it, as in the steps that follow.

13.  Check the time zone file version of the 10.2.0.4 database and the 11.2.0.4 home, and compare whether both are the same

SQL> SELECT version FROM v$timezone_file;

   VERSION
   ---------
         4

Note: When upgrading from 10.1.0.x, 10.2.0.x or 11.1.0.x to 11.2.0.3 (MOS note 1358166.1):

* For 10.2.0.2, 10.2.0.3, 10.2.0.4 or 10.2.0.5 there is no need to apply any patchset before upgrading to 11.2.0.3, and no need to apply DST patches on the 10.2.0.2, 10.2.0.3, 10.2.0.4 or 10.2.0.5 side first. You can skip any DST-related upgrade instructions. The 11.2.0.3 RDBMS DST version after the upgrade will be the same DST version as used in 10.2.0.2, 10.2.0.3, 10.2.0.4 or 10.2.0.5.

* (Recommended) Update the 11.2.0.3 database(s) to DSTv14 (the standard DST version of 11.2.0.3) by following Note 977512.1 "Updating the RDBMS DST version in 11gR2 (11.2.0.1 and up) using DBMS_DST" from step 3a) onwards, using "14" as <the new DST version number> in that note. When going to DSTv14 there is no need to apply any DST patch to the 11.2.0.3 home.

14. Upgrade 10.2.0.4 to 11.2.0.4 using the manual upgrade method

15. Get DBA_REGISTRY information

sqlplus '/ as sysdba'
select COMP_ID,COMP_NAME,VERSION,STATUS,MODIFIED from dba_registry;

16.  Purge Recycle bin

PURGE DBA_RECYCLEBIN;

17. Gather Dictionary stats

EXECUTE dbms_stats.gather_dictionary_stats;

18.  Shutdown PROD database listener and database

shut immediate
lsnrctl stop PROD

19. Copy of parameter files and edit for upgrade

Copy the init parameter file from 10.2.0.4 ORACLE_HOME/dbs into 11.2.0.4 ORACLE_HOME/dbs
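For example (this assumes a pfile named initPROD.ora and the 10.2.0.4 home path shown; if the database uses an spfile, create a pfile from it first):

cp /u01/app/oracle/product/10.2.0/db_1/dbs/initPROD.ora /u01/app/oracle/product/11.2.0/db_1/dbs/
cp /u01/app/oracle/product/10.2.0/db_1/dbs/orapwPROD /u01/app/oracle/product/11.2.0/db_1/dbs/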

Adjust the parameter file for the upgrade

Adjust initialization parameters that might cause upgrade problems.

• Remove obsolete initialization parameters based on pre-upgrade tool output
• The DIAGNOSTIC_DEST initialization parameter replaces the USER_DUMP_DEST, BACKGROUND_DUMP_DEST
• Set the COMPATIBLE parameter if not already explicitly set

If you are upgrading from 10.1.0.x or 10.2.0.x, you can leave the COMPATIBLE parameter set to its current value until the upgrade has completed successfully. This avoids unnecessary ORA-942 errors being reported in SMON trace files during the upgrade (because the upgrade looks for 10.2 objects that have not yet been created).

grep -i comp initPROD.ora
*.compatible = 10.2.0.4.0

20. Prepare Oracle environment

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_BASE=/u01/oracle/PROD

21. Verify /etc/oratab

PROD should point to /u01/app/oracle/product/11.2.0/db_1 as ORACLE_HOME
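The entry should look like this (the trailing N only disables automatic startup via dbstart):

PROD:/u01/app/oracle/product/11.2.0/db_1:N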

22. Run upgrade scripts in VNC session

cd $ORACLE_HOME/rdbms/admin
sqlplus '/ as sysdba'
STARTUP UPGRADE

Monitor the alert log file and check for any errors.

SPOOL upgrade_PROD_11204.log
@catupgrd.sql

The output is at:
/u01/app/oracle/product/11.2.0/db_1/rdbms/admin/upgrade_PROD_11204.log

STARTUP

Post-Upgrade Status Tool.
@utlu112s.sql

Verify the results; any errors must be resolved at this point.

@catuppst.sql

Identifying Invalid Objects:

Any invalid SYS/SYSTEM objects found before upgrading the database are stored in the table registry$sys_inv_objs, and any invalid non-SYS/SYSTEM objects are stored in registry$nonsys_inv_objs. To identify any new invalid objects introduced by the upgrade, run ORACLE_HOME/rdbms/admin/utluiobj.sql after the upgrade.

Compile invalids:

@?/rdbms/admin/utlrp.sql

23. Post Upgrade Steps

24. Actions For DST Updates When Upgrading To Or Applying The 11.2.0.3 Patchset [ID 1358166.1]

A) Applying 11.2.0.3 on 11.2.0.2 or 11.2.0.1

Check the current version of the 11.2.0.2 or 11.2.0.1 Oracle RDBMS time zone definitions - this needs to be done for ALL databases in the 11.2.0.2 or 11.2.0.1 home:

SQL> conn / as sysdba
Connected.

SQL>SELECT version FROM v$timezone_file;

VERSION
----------
14

A.1)  The result is lower than 14 for all databases (typically 11 or 14)

Note : Every database in 11.2.0.1 home needs to be checked, if one or more has a result higher than 14 you need to follow step A.3)

If your current RDBMS time zone version is lower than 14, install 11.2.0.3 in a new home and update the 11.2.0.2 or 11.2.0.1 database to 11.2.0.3. You can skip any DST-related sections in the patchset documentation; there is no need to apply DST patches or check for DST issues for the update to 11.2.0.3.

The 11.2.0.3 RDBMS DST version after the update to 11.2.0.3 will be the same as on your 11.2.0.2 or 11.2.0.1 system. After the upgrade to 11.2.0.3 you can:

* (recommended) update to DSTv14 (the standard DST version of 11.2.0.3) by following Note 977512.1 "Updating the RDBMS DST version in 11gR2 (11.2.0.1 and up) using DBMS_DST" from step 3a) onwards; when going to DSTv14 there is no need to apply any DST patch to the 11.2.0.3 home

* (optional) update to a higher DST than DSTv14 version if this is needed.

The latest available DST patch is found in Note 412160.1 Updated DST transitions and new Time Zones in Oracle Time Zone File patches

A.2)  The result is 14 for all databases

Note: Every database in the 11.2.0.2 or 11.2.0.1 home needs to be checked; if one or more has a result higher than 14 you need to follow step A.3).

If your current RDBMS time zone version is 14, install 11.2.0.3 in a new home and update the 11.2.0.2 or 11.2.0.1 database to 11.2.0.3.

You can skip any DST-related sections in the patchset documentation; there is no need to apply DST patches or check for DST issues for the update to 11.2.0.3.

The 11.2.0.3 RDBMS DST version after the update to 11.2.0.3 will be 14.

(Optionally) After the upgrade to 11.2.0.3 you can check whether a newer DST version is available and, if needed, update every database to that DST version. The latest available DST patch is listed in Note 412160.1 "Updated DST transitions and new Time Zones in Oracle Time Zone File patches".

A.3)  The result is higher than 14 for one or more databases
If your current RDBMS time zone version is higher than 14, you need to re-apply the same RDBMS DST version as found in step A) after installing the 11.2.0.3 software. In other words, you need to apply the 11.2.0.3 RDBMS DST patch for that DST version before opening the database in the new 11.2.0.3 software home.

Apply the 11.2.0.3 RDBMS DST patch after installing the 11.2.0.3 software with the Oracle Universal installer.

If you also need to patch the OJVM, we suggest applying the latest DST OJVM fix for 11.2.0.3, even if your current RDBMS DST patch is lower. This is best done right after installing the RDBMS DST patch; there is no need to follow the OJVM DST patch readme instructions as there is no database yet.

For the actual OJVM and RDBMS DST patches for your 11.2.0.3 release please see Note 412160.1 Updated Time Zones in Oracle Time Zone File


After installing the same RDBMS DST patch (and the OJVM DST patch, which may be the latest/higher than the RDBMS DST version) in the 11.2.0.3 ORACLE_HOME, upgrade to 11.2.0.3 without any action on TSLTZ or TSTZ data. Follow the upgrade or patchset apply instructions; you can skip any DST-related sections.

The 11.2.0.3 RDBMS DST version after the upgrade to 11.2.0.3 will be the same DST version as used in the 11.2.0.2 or 11.2.0.1 Oracle RDBMS.

(Optionally) After the upgrade to 11.2.0.3 you can check whether a newer DST version is available and, if needed, update every database to that DST version. The latest available DST patch is listed in Note 412160.1 "Updated DST transitions and new Time Zones in Oracle Time Zone File patches".


25. Do the actual RDBMS DST version update of the database using DBMS_DST:

conn / as sysdba
shutdown immediate;
startup upgrade;
set serveroutput on

-- check if previous prepare window is ended

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

-- output should be
-- PROPERTY_NAME VALUE
-- ---------------------------- ------------------------------
-- DST_PRIMARY_TT_VERSION <the old DST version number>
-- DST_SECONDARY_TT_VERSION 0
-- DST_UPGRADE_STATE NONE

-- If DST_UPGRADE_STATE is "PREPARE" then you did not end
-- the prepare window in step 3)

-- If there are objects containing TSTZ data in recycle bin,
-- please purge the bin now.
-- Otherwise dbms_dst.begin_upgrade will report "ORA-38301: Can not perform DDL/DML over objects in Recycle Bin".

purge dba_recyclebin;

-- clean used tables

TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;

-- this alter session might speed up DBMS_DST on some db's
-- see Bug 10209691

alter session set "_with_subquery"=materialize;

-- to avoid the issue in note 1407273.1
alter session set "_simple_view_merging"=TRUE;

-- start upgrade window

EXEC DBMS_DST.BEGIN_UPGRADE(14);

-- the message
-- "An upgrade window has been successfully started."
-- will be seen

-- check if this select

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

-- gives this output:
-- PROPERTY_NAME VALUE
-- --------------------------- ------------------------------
-- DST_PRIMARY_TT_VERSION <the new DST version number>
-- DST_SECONDARY_TT_VERSION <the old DST version number>
-- DST_UPGRADE_STATE UPGRADE

-- Optionally you can check what user tables still need to be updated using DBMS_DST.UPGRADE_DATABASE
-- BEGIN_UPGRADE upgrades system tables that contain TSTZ data and marks user tables
-- (containing TSTZ data) with the UPGRADE_IN_PROGRESS property.
-- Even if this select gives no rows you still need to do the rest of the steps;
-- it simply gives an indication of how many user objects need to be processed in the later steps.
-- Some Oracle-provided users may be listed here; that is normal.

SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES where UPGRADE_IN_PROGRESS='YES';

-- restart the database

shutdown immediate
startup

alter session set "_with_subquery"=materialize;
alter session set "_simple_view_merging"=TRUE;

-- now upgrade the tables who need action

set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/

-- output of this will be a list of tables like:

-- Table list: SYSMAN.AQ$_MGMT_NOTIFY_QTABLE_S
-- Number of failures: 0
-- ....
-- Table list: SYSMAN.MGMT_PROV_ASSIGNMENT
-- Number of failures: 0
-- Table list: SYSMAN.MGMT_CONFIG_ACTIVITIES
-- Number of failures: 0
-- Failures:0

-- if there were no failures then end the upgrade.

VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/

-- output that will be seen:
-- An upgrade window has been successfully ended.
-- Failures:0

-- last checks

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

-- needed output:
-- PROPERTY_NAME VALUE
-- ---------------------------- ------------------------------
-- DST_PRIMARY_TT_VERSION <the new DST version number>
-- DST_SECONDARY_TT_VERSION 0
-- DST_UPGRADE_STATE NONE


SELECT * FROM v$timezone_file;

-- needed output:
-- FILENAME VERSION
-- ------------------ ----------
-- timezlrg_<new version>.dat <new version>

-- if registry$database exists then update this static table also with the new DST version
-- this table is used during upgrade and contains the DST version from before the upgrade
-- this update is mainly to avoid confusion when people notice it has a lower version

select TZ_VERSION from registry$database;

update registry$database set TZ_VERSION = (select version FROM v$timezone_file);
commit;

26. Configuring Fine-Grained Access to External Network Services after upgrading the Oracle database

Create ACLs as per note id: 958129.1:

SQL> SELECT OWNER, REFERENCED_NAME FROM DBA_DEPENDENCIES
WHERE referenced_name IN ('UTL_TCP','UTL_SMTP','UTL_MAIL','UTL_HTTP','UTL_INADDR')
AND owner NOT IN ('SYS','PUBLIC','ORDPLUGINS');

exec DBMS_NETWORK_ACL_ADMIN.CREATE_ACL();
exec DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE();
exec DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL();

COMMIT;

SQL> select * from dba_network_acls;
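A minimal sketch of the three calls with hypothetical values (the ACL file name, grantee and host below are placeholders; adapt them per note 958129.1):

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'utl_http_acl.xml',      -- hypothetical ACL name
    description => 'ACL for UTL_HTTP access',
    principal   => 'APP_USER',              -- hypothetical grantee
    is_grant    => TRUE,
    privilege   => 'connect');

  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl         => 'utl_http_acl.xml',
    principal   => 'APP_USER',
    is_grant    => TRUE,
    privilege   => 'resolve');

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl         => 'utl_http_acl.xml',
    host        => '*.example.com',         -- hypothetical host pattern
    lower_port  => 80,
    upper_port  => 80);

  COMMIT;
END;
/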

27. Upgrade Oracle Text - None

After an upgrade to the new Oracle Database 11g release, copy the following files from the previous Oracle home to the new Oracle home:

■ Stemming user-dictionary files
■ User-modified KOREAN_MORPH_LEXER dictionary files
■ USER_FILTER executables

These files affect all databases installed in the given Oracle home.

You can obtain a list of these files by doing the following:
1. Looking at $ORACLE_HOME/ctx/admin/ctxf102.txt
2. Executing $ORACLE_HOME/ctx/admin/ctxf102.sql as database user SYS, SYSTEM, or CTXSYS

28. Configure and Start new database listener

Copy your listener.ora and tnsnames.ora files from the old home to the new home.
Set the TNS_ADMIN environment variable to the directory where you placed the listener.ora and tnsnames.ora files.

Start listener
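For example (the network/admin path under the new home and the listener name PROD follow the earlier steps; adjust to your setup):

export TNS_ADMIN=/u01/app/oracle/product/11.2.0/db_1/network/admin
lsnrctl start PROD
lsnrctl status PROD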


Cheers !!!



Sep 18, 2014

Temporary Tablespace for RAC Databases for optimal performance

TEMPORARY TABLESPACES FOR RAC DATABASES FOR OPTIMAL PERFORMANCE:
-- Troubleshoot "SS Enqueue" and "DFS Lock Handle"
-- For Oracle 10g / 11g RAC

What are Temporary Tablespaces?

Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if you join two large tables, and Oracle cannot do the sort in memory (see SORT_AREA_SIZE initialisation parameter), space will be allocated in a temporary tablespace for doing the sort operation. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, Select DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, Sort-Merge joins, etc.

The DBA should assign a temporary tablespace to each user in the database to prevent them from allocating sort space in the SYSTEM tablespace. This can be done with one of the following commands:


SQL> CREATE USER crm identified by password DEFAULT TABLESPACE data TEMPORARY TABLESPACE temp;

SQL> ALTER USER scott TEMPORARY TABLESPACE temp;
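To check which temporary tablespace each user is currently assigned to, and the database default, something like this can be used:

SQL> SELECT username, temporary_tablespace FROM dba_users ORDER BY username;

SQL> SELECT property_value FROM database_properties WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';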

Performance analysis using Temporary tablespace:

Properly configuring the temporary tablespace helps optimize disk sort performance. Temporary tablespaces can be dictionary-managed or locally managed. Oracle recommends the use of locally managed temporary tablespaces with a UNIFORM extent size of 1 MB.

You should monitor temporary tablespace activity to check how many extents the database allocates for the temporary segment. If an application extensively uses temporary tables, as in a situation when many users are concurrently using temporary tables, then the extent size could be set smaller, such as 256K, because every usage requires at least one extent. The EXTENT MANAGEMENT LOCAL clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The default for SIZE is 1M.
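For example, a locally managed temporary tablespace with 1 MB uniform extents (the file path and size are placeholders):

CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u02/oradata/datafile/temp01.dbf' SIZE 4G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;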

In any DW, OLTP or mixed-workload application that uses a lot of temp space (for temporary tables, sort segments and so on), running low on temp space causes many sessions to start waiting on 'SS enqueue' and 'DFS lock handle' waits, which can lead to severe performance problems. This best-practice note for temporary tablespaces explains how temp space allocation works in a RAC environment and offers recommendations. Space allocated to one instance is managed in the SGA of that instance, and it is not visible to other instances.


  • Instances do not normally return temp space to the ‘common pool’. 
  • If all the TEMP space is allocated to instances, and there is no more temp space within an instance, user requests for temp space will cause a request for temp space to be sent to the other instances. The session requesting the space will get the ‘SS enqueue’ for the temporary tablespace and issue a cross instance call (using a CI enqueue) to the other instances (waiting for ‘DFS lock handle’). All inter instance temp space requests will serialize on this ‘CI enqueue, and this can be very expensive. 
  • A heavy query executing in one instance and using lots of temp space might cause all or most of the temp space to be allocated to this instance. This kind of imbalance will lead to increased contention for temp space.
  • As users on each instance request temp space, space will be allocated to the various instances. During this phase it is possible to get contention on the file space header blocks, and it is recommended to have at least as many temp files as there are instances in the RAC cluster. This normally shows up as ‘buffer busy’ waits and it is different from the ‘SS enqueue’/’DFS lock handle’ wait issue. 
  • Temporary tablespace groups are designed to accommodate very large temp space requirements, beyond the current limits for a single temporary tablespace: 8TB (2k block size) to 128TB (32k block size).
  • One possible advantage of temporary tablespace groups is that it provides multiple SS enqueues (one per tablespace), but this only shifts the contention to the CI enqueue (only one system wide)
  • It is easier to share space within a single temporary tablespace, rather than within a temporary tablespace group. If a session starts allocating temp space from a temporary tablespace within a temporary tablespace group, additional space cannot be allocated from another temporary tablespace within the group. With a single temporary tablespace, a session can allocate space across tempfiles.
  • The following are the recommended best practices for managing temporary tablespaces in a RAC environment:
  • Make sure enough temp space is configured. Due to the way temp space is managed by instance in RAC, it might be useful to allocate a bit extra space compared to similar single instance database.
  • Isolate heavy or variable temp space users to separate temporary tablespaces. Separating reporting users from OLTP users might be one option.
  • Monitor the temp space allocation to make sure each instance has enough temp space available and that the temp space is allocated evenly among the instances. The following SQL is used:
select inst_id, tablespace_name, segment_file, total_blocks, 
used_blocks, free_blocks, max_used_blocks, max_sort_blocks 
from gv$sort_segment;

select inst_id, tablespace_name, blocks_cached, blocks_used 
from gv$temp_extent_pool;

select inst_id,tablespace_name, blocks_used, blocks_free 
from gv$temp_space_header;

select inst_id,free_requests,freed_extents 
from gv$sort_segment;

If temp space allocation between instances has become imbalanced, it might be necessary to manually drop temporary segments from an instance. The following command is used for this:
alter session set events 'immediate trace name drop_segments level <TS number + 1>';

See Oracle support site for Bug 4882834
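The tablespace number needed for the event level above can be taken from v$tablespace; for example, if TEMP has TS# 3, the level to use is 4 (TS# + 1):

SQL> select ts#, name from v$tablespace where name = 'TEMP';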

Myth# For each temporary tablespace, allocate at least as many temp files as there are instances in the cluster.


-- Temporary tablespace / datafile management 
-- Applies to any Oracle version in any platform

-- Find temp file and temp tablespace with size
select file#,status,bytes/1024/1024 "Size_MB",name from v$tempfile;

select file_name,tablespace_name,bytes/1024/1024/1024 "Size_GB",status from dba_temp_files;

Few Temp tablespace and Tempfile commands for both RAC & Non-RAC:

-- Add tempfile to existing TEMP tablespace

ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/datafile/temp01.dbf' SIZE 2048M autoextend on;

ALTER TABLESPACE TEMP ADD TEMPFILE '+DATA/PROD/tempfile/temp01.dbf' SIZE 100M autoextend on;

-- Add new temp tablespace and make as default
create temporary tablespace TEMP2 tempfile '/u02/oradata/datafile/temp01.dbf' size 2G autoextend on;

alter database default temporary tablespace TEMP2;

-- Take the old tempfile offline
alter database tempfile '/u02/oradata/datafile/temp02.dbf' offline;

-- Drop the temp tablespace (don't drop it immediately; only once no user is using the TEMP tablespace can you drop it)

drop tablespace TEMP2 including contents and datafiles;

-- To drop tempfile


alter tablespace TEMP2 drop tempfile '+DATA/PROD/tempfile/temp02.365.877895953';

-- drop tempfile from TEMP tablespace

ALTER TABLESPACE TEMP DROP TEMPFILE '/u02/oradata/datafile/temp02.dbf';

ALTER TABLESPACE TEMP DROP TEMPFILE '+DATA/PROD/tempfile/temp02.dbf';
-- shrink temp tablespace

alter tablespace TEMP shrink tempfile '/u02/oradata/datafile/temp01.dbf' keep 10G;

-- Auto extend a tempfile

alter database tempfile '+DATA/PROD/tempfile/temp01.dbf' autoextend on;


--- Who is using temp tablespace / what temp tablespace is being used ?

SELECT a.username, a.osuser, a.sid||','||a.serial# SID_SERIAL, c.spid Process,b.tablespace tablespace, 
a.status, sum(b.extents)* 1024*1024 space
FROM  v$session a,v$sort_usage b, v$process c, dba_tablespaces d
WHERE    a.saddr = b.session_addr
AND      a.paddr = c.addr
AND      b.tablespace=d.tablespace_name
group by a.username, a.osuser, a.sid||','||a.serial#,c.spid,
b.tablespace, a.status;

I hope this helps to optimize performance.

Sep 17, 2014

SQL Monitor in Oracle 11gR2

SQL Monitor in Oracle 11gR2

SQL Monitor active reports can be generated directly from the Enterprise Manager UI while viewing a detailed SQL Monitor report: using the save / send e-mail buttons at the top right of that page, the SQL Monitor detail page can either be saved or sent by e-mail as an active report.
Alternatively, an active report can be produced from the command line by invoking the PL/SQL function dbms_sqltune.report_sql_monitor() with "active" as the report type, as shown in the examples further below.

The REPORT_SQL_MONITOR function is used to return a SQL monitoring report for a specific SQL statement. The SQL statement can be identified using a variety of parameters, but it will typically be identified using the SQL_ID parameter.

Monitored statements can be identified using the V$SQL_MONITOR view. This view was present in Oracle 11g Release 1, but has additional columns in Oracle 11g Release 2, making it much more useful. It contains an entry for each execution monitored, so it can contain multiple entries for individual SQL statements.
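For example, the per-execution entries can be listed like this (the columns shown exist in the 11gR2 view):

SET LINESIZE 200

SELECT sql_id, sql_exec_id, status,
       ROUND(elapsed_time/1000000,1) AS elapsed_secs, username
FROM   v$sql_monitor
ORDER  BY sql_exec_start;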

The report accesses several dynamic performance views, so you will most likely access it from a privileged user, or a user granted the SELECT_CATALOG_ROLE role.

-- Find which query / user is monitored:

SET LINESIZE 200
COLUMN sql_text FORMAT A80

SELECT sql_id, status, sql_text
FROM   v$sql_monitor
WHERE  username = 'HR';

Identify which SQL_ID is to be reported. Once the SQL_ID is identified, we can generate a report using the REPORT_SQL_MONITOR function.

-- Report the monitored sql_id/ session_id

SET LONG 1000000;
SET LONGCHUNKSIZE 1000000;
SET LINESIZE 1000;
SET PAGESIZE 0;
SET TRIM ON;
SET TRIMSPOOL ON;
SET ECHO OFF;
SET FEEDBACK OFF;

SPOOL /u04/reports/report_sql_monitor.txt;

SELECT DBMS_SQLTUNE.report_sql_monitor(
  sql_id       => '526mvccm5nfy4',
  type         => 'TEXT',
  report_level => 'ALL') AS report
FROM dual;
SPOOL OFF

OR

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR
            ( sql_id         => 'a71hfjg8576s2', 
              session_id     => 1234,
              session_serial => 12,
              report_level   => 'all',
              type           => 'ACTIVE' )
  FROM dual;


The resulting file report_sql_monitor.txt must be edited to remove the header (first line in the file) and the last line (the spool off). 

Note:

Examples of the output for each available TYPE are displayed below.

•TEXT
•HTML
•XML
•ACTIVE 

REPORT_SQL_DETAIL:

Although not documented as part of Real-Time SQL Monitoring, the REPORT_SQL_DETAIL function added in Oracle 11g Release 2 returns a report containing SQL monitoring information. Once again, it has several parameters, but you will probably only use a subset of them to target specific SQL statements, as shown below.

SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET LINESIZE 500
SET PAGESIZE 0
SET TRIM ON
SET TRIMSPOOL ON
SET ECHO OFF
SET FEEDBACK OFF

SPOOL /u04/reports/report_sql_detail.htm
SELECT DBMS_SQLTUNE.report_sql_detail(
  sql_id       => '526mvccm5nfy4',
  type         => 'ACTIVE',
  report_level => 'ALL') AS report
FROM dual;
SPOOL OFF


Views
The SQL monitoring functionality accesses a number of existing views, but two new dynamic performance views have been added specifically as part of it.

•V$SQL_MONITOR
•V$SQL_PLAN_MONITOR

Check the below link from Oracle for more details...

Real-Time SQL Monitoring

Sep 9, 2014

Service Configuration & Management in Oracle 11gR2 RAC:

Service Configuration & Management in Oracle 11gR2:

Services in Oracle Database 10g/11g:

In Real Application Cluster (RAC) environments it is sometimes desirable to run applications on a subset of RAC nodes, or have preferred nodes for specific applications. In Oracle 10g this is accomplished using services.

•Cluster Configuration
•Service Creation
•Jobs and Services
•Connections and Services

Cluster Configuration:

Before using services, you must check that the cluster configuration is correct. The following commands and output show the expected configuration for a two-node database called FINANCE.

$ ps -ef|grep pmon
    grid  8519762        1   0   Mar 24      - 23:05 asm_pmon_+ASM1
  oracle 21627042        1   0   Jul 29      -  7:23 ora_pmon_FINANCE1

$ srvctl config database -d FINANCE
Database unique name: FINANCE
Database name: 
Oracle home: /oracle/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: FINANCE
Database instances: FINANCE1,FINANCE2
Disk Groups: DATA,FRA
Mount point paths: 
Services: node1.db,node2.db
Type: RAC


$ srvctl status database -d FINANCE
Instance FINANCE1 is running on node testclient1
Instance FINANCE2 is running on node testclient2

Assume : All database services are configured properly during setup and configuration activity.

Service Creation:

Using "srvctl" utility, "dbca" utility and "DBMS_SERVICES" package; service can be created and modified, but for this article we will restrict ourselves to looking at the "srvctl" utility. Let's assume we have two applications that should run in the following way.

•OLTP - Should run on nodes 1 and 2 of the RAC, but is able to run on node 3 if nodes 1 and 2 are not available.
•BATCH - Should run on node 3, but is able to run on nodes 1 and 2 if node 3 is not available.

To meet this requirement we might create the following services.

# Set environment.
export ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

Syntax and Options:
Use the srvctl add service command with the following syntax:

$ srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P TAF_policy]

# Create services.
srvctl add service -d PROD -s OLTP_SERVICE -r PROD1,PROD2 -a PROD3
srvctl add service -d PROD -s BATCH_SERVICE -r PROD3 -a PROD1,PROD2

The OLTP_SERVICE is able to run on all RAC nodes because PROD3 is present in the available list, but will run in preference on nodes 1 and 2 (indicated by the -r option). The BATCH_SERVICE is able to run on all RAC nodes because PROD1 and PROD2 are in the available list, but will run in preference on node 3 (indicated by the -r option).

Example:
srvctl add service -d FINANCE -s OLTP_SERVICE -r FINANCE1 -a FINANCE2
srvctl add service -d FINANCE -s BATCH_SERVICE -r FINANCE2 -a FINANCE1
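To confirm the preferred and available instances for the newly created services, for example:

srvctl config service -d FINANCE -s OLTP_SERVICE
srvctl config service -d FINANCE -s BATCH_SERVICE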

Now we will verify whether services are created or not. Use grid/root user to verify the created service status:

$ crsctl stat res -t

sample output:
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       testclient2                                
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       testclient1                                
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       testclient1                                
ora.finance.batch_service.svc
      1        OFFLINE OFFLINE                                                
ora.finance.db
      1        ONLINE  ONLINE       testclient1              Open              
      2        ONLINE  ONLINE       testclient2              Open              
ora.finance.node1.db.svc
      1        ONLINE  ONLINE       testclient1                                
ora.finance.node2.db.svc
      1        ONLINE  ONLINE       testclient2                                
ora.finance.oltp_service.svc
      1        OFFLINE OFFLINE                                                
ora.cvu
      1        ONLINE  ONLINE       testclient1                                
ora.oc4j
      1        ONLINE  ONLINE       testclient1                                
ora.scan1.vip
      1        ONLINE  ONLINE       testclient2                                
ora.scan2.vip
      1        ONLINE  ONLINE       testclient1                                
ora.scan3.vip
      1        ONLINE  ONLINE       testclient1                                
ora.testclient1.vip
      1        ONLINE  ONLINE       testclient1                                
ora.testclient2.vip
      1        ONLINE  ONLINE       testclient2


Here, services are created with offline status:

ora.finance.batch_service.svc
      1        OFFLINE OFFLINE  
ora.finance.oltp_service.svc
      1        OFFLINE OFFLINE 


# Start & Stop the Services

The services can be started and stopped using the following commands.

srvctl start service -d PROD -s OLTP_SERVICE
srvctl start service -d PROD -s BATCH_SERVICE

srvctl stop service -d PROD -s OLTP_SERVICE
srvctl stop service -d PROD -s BATCH_SERVICE

Note: Services are managed (started and stopped) in the same way as the database resource.

Example:

srvctl start service -d FINANCE -s OLTP_SERVICE
srvctl start service -d FINANCE -s BATCH_SERVICE

Now again check the service status using crsctl:

ora.finance.batch_service.svc
      1        ONLINE  ONLINE       testclient2                                  
ora.finance.oltp_service.svc
      1        ONLINE  ONLINE       testclient1  

Here, newly created services are in ONLINE status.

#Verification service status:

$ srvctl status service -d FINANCE
Service BATCH_SERVICE is running on instance(s) FINANCE2
Service node1.db is running on instance(s) FINANCE1
Service node2.db is running on instance(s) FINANCE2
Service OLTP_SERVICE is running on instance(s) FINANCE1


Connections and Services:

The use of services is not restricted to scheduled jobs. These services can be used in the tnsnames.ora file to influence which nodes are used by each application. Example tnsnames.ora entries are displayed below.

OLTP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = test-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = OLTP_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

BATCH =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = test-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = BATCH_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

Provided applications use the appropriate connection identifier, they should only connect to the nodes associated with the service. Use "LOAD_BALANCE = ON" for 10g; it is not required in 11g if SCAN is configured properly. In Oracle 10g, use all of your VIPs in the address list for load balancing.

Jobs and Services:

The Oracle 10g/11g scheduler allows jobs to be linked to job classes, which in turn can be linked to services, allowing jobs to run on specific nodes in a RAC environment. To support our requirements we might create two job classes as follows.

-- Create OLTP and BATCH job classes.

BEGIN
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'OLTP_JOB_CLASS',
    service        => 'OLTP_SERVICE');

  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'BATCH_JOB_CLASS',
    service        => 'BATCH_SERVICE');
END;
/

-- Make sure the relevant users have access to the job classes.

GRANT EXECUTE ON sys.oltp_job_class TO test_user;
GRANT EXECUTE ON sys.batch_job_class TO test_user;

These job classes can then be assigned to existing jobs or during job  creation.

-- Create a job associated with a job class.

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'test_user.oltp_job_test',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;',
    job_class       => 'SYS.OLTP_JOB_CLASS',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job linked to the OLTP_JOB_CLASS.');
END;
/

-- Assign a job class to an existing job.

SQL> EXEC DBMS_SCHEDULER.set_attribute('MY_BATCH_JOB', 'JOB_CLASS', 'BATCH_JOB_CLASS');


Use below link to create service using DBCA.

http://docs.oracle.com/cd/B14117_01/rac.101/b10765/hafeats.htm
http://dbaworkshop.blogspot.in/2008/07/add-new-taf-service-with-dbca.html

I hope this article helps you understand RAC services.

Sep 8, 2014

Manage SYSAUX tablespace in Oracle 11gR2

The SYSAUX tablespace was installed as an auxiliary tablespace to the SYSTEM tablespace when you created your database. Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.

If the SYSAUX tablespace becomes unavailable, core database functionality will remain operational. The database features that use the SYSAUX tablespace could fail, or function with limited capability.

Monitoring Occupants :

V$SYSAUX_OCCUPANTS view can be used to monitor the occupants of the SYSAUX tablespace. This view lists the following information about the occupants of the SYSAUX tablespace:

•Name of the occupant
•Occupant description
•Schema name
•Move procedure
•Current space usage

View information is maintained by the occupants.

select occupant_desc, space_usage_kbytes
  from v$sysaux_occupants
  where space_usage_kbytes > 0 order by space_usage_kbytes desc;

output:

OCCUPANT_DESC                                            SPACE_USAGE_KBYTES
---------------------------------------------            ------------------
Server Manageability - Automatic Workload Repository                 734272
Server Manageability - Optimizer Statistics History                  123520
XDB                                                                   90368
Unified Job Scheduler                                                 67776
Analytical Workspace Object Table                                     39168
OLAP API History Tables                                               39168
Server Manageability - Advisor Framework                              28224
LogMiner                                                              12544
Enterprise Manager Repository                                         12480
Oracle Multimedia ORDDATA Components                                   8960
Server Manageability - Other Components                                7104
OLAP Catalog                                                           5248
Oracle Text                                                            3712
Expression Filter System                                               3712
Transaction Layer - SCN to TIME mapping                                3328
Workspace Manager                                                      2496
SQL Management Base Schema                                             2176
PL/SQL Identifier Collection                                           1600
Logical Standby                                                        1408
Oracle Streams                                                         1024
Enterprise Manager Monitoring User                                      896
Automated Maintenance Tasks                                             320


To see the retention:

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31
SQL>


To Purge:
SQL> exec DBMS_STATS.PURGE_STATS(SYSDATE-31);
SQL> exec dbms_stats.alter_stats_history_retention(31);

Best practice: keep only as much statistics history as your retention requirement demands.

To Reset the retention:

SQL> exec dbms_stats.alter_stats_history_retention(14);

Controlling the Size :

The SYSAUX tablespace is occupied by a number of database components (see above output), and its total size is governed by the space consumed by those components.

The largest portion of the SYSAUX tablespace is occupied by the Automatic Workload Repository (AWR). The space consumed by the AWR is determined by several factors, including the number of active sessions in the system at any given time, the snapshot interval, and the historical data retention period. A typical system with an average of 10 concurrent active sessions may require approximately 200 to 300 MB of space for its AWR data.
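The snapshot interval and retention that drive AWR space usage can be checked and, if needed, changed with DBMS_WORKLOAD_REPOSITORY; the values below (60-minute interval, 30-day retention) are only an example:

SQL> select snap_interval, retention from dba_hist_wr_control;

SQL> exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 60);
-- retention and interval are specified in minutes: 43200 = 30 days, 60 = 1 hour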


Another major occupant of the SYSAUX tablespace is the embedded Enterprise Manager (EM) repository. This repository is used by Oracle Enterprise Manager Database Control to store its metadata. The size of this repository depends on database activity and on configuration-related information stored in the repository.

Other database components in the SYSAUX tablespace will grow in size only if their associated features (for example, Oracle UltraSearch, Oracle Text, Oracle Streams) are in use. If the features are not used, then these components do not have any significant effect on the size of the SYSAUX tablespace.

Partial purge of AWR data:

For long term capacity planning, it is always a good idea to run an AWR report that shows the relative amount of data consumed by each AWR table.  If you purge data that is no longer required, then you are going to have a compact set of historical data.  Let's start by getting the Oracle table size information:

SQL> col c1 heading 'table|name' format a30
SQL> col c2 heading 'table size|meg' format 999,999,999
SQL> select segment_name c1, sum(bytes) / (1024 * 1024) c2
  from dba_extents
 where segment_type = 'TABLE'
   and segment_name like 'WR%'
 group by segment_name
 order by c2 desc;

Sample output:

C1                                                                           C2
-------------------------------------------------------------------- ----------
WRM$_SNAPSHOT_DETAILS                                                         80
WRI$_OPTSTAT_HISTGRM_HISTORY                                                  28
WRH$_SQL_PLAN                                                                 20
WRH$_SYSMETRIC_HISTORY                                                        19
WRI$_OPTSTAT_HISTHEAD_HISTORY                                                 10


Purge:

-- For no prod environments:

Now we can simply issue truncate commands to purge all SQL data (note that the DBA_HIST_% names are views; the rows actually live in the underlying WRH$ tables, so the truncate must target those):

truncate table wrh$_sql_plan;
truncate table wrh$_sqltext;
etc.

Note: do not do this in production. If a hardware change or major deployment is coming, do not do it at all (you will want the AWR history for before/after comparison).

-- For prod environments (again targeting the WRH$ base tables)
SQL> delete from wrh$_sql_plan
     where sql_id in (select sql_id from dba_hist_sqlstat
                      where snap_id in (select snap_id from dba_hist_snapshot
                                        where begin_interval_time < sysdate - 90));


Here is another example of purging highly populated SQL tables in order to free-up enough space for longer AWR retention periods.

SQL> delete from wrh$_sqltext
     where sql_id in (select sql_id from dba_hist_sqlstat
                      where snap_id in (select snap_id from dba_hist_snapshot
                                        where begin_interval_time < sysdate - 90));
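A supported alternative to deleting from the AWR tables directly is to drop whole snapshot ranges with DBMS_WORKLOAD_REPOSITORY, which purges the associated rows; the snapshot IDs below are placeholders:

-- find the snapshot range older than 90 days
SQL> select min(snap_id), max(snap_id)
     from dba_hist_snapshot
     where begin_interval_time < sysdate - 90;

-- then drop that range (example IDs)
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 100, high_snap_id => 500);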


I found the best option from Oracle community:

Sub: SYSAUX Tablespace Growing out of Control

First, I create backup tables using the below queries. (This will preserve 14 days worth of data)

create table SYS.WRI$_OPTSTAT_BAK as (select * from sys.wri$_optstat_histhead_history
where savtime > SYSDATE - 14);

create table SYS.WRI$_OPTSTAT_TAB_BAK as (select * from sys.wri$_optstat_tab_history
where savtime > SYSDATE - 14);

create table SYS.WRI$_OPTSTAT_IND_BAK as (select * from sys.wri$_optstat_ind_history
where savtime > SYSDATE - 14);


Then I truncate the original tables.

truncate table SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY;
truncate table SYS.WRI$_OPTSTAT_TAB_HISTORY;
truncate table SYS.WRI$_OPTSTAT_IND_HISTORY;


Then I insert the 14 days worth of data back into the original tables.

insert into SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY (SELECT * FROM SYS.WRI$_OPTSTAT_BAK);
insert into SYS.WRI$_OPTSTAT_TAB_HISTORY (SELECT * FROM SYS.WRI$_OPTSTAT_TAB_BAK);
insert into SYS.WRI$_OPTSTAT_IND_HISTORY (SELECT * FROM SYS.WRI$_OPTSTAT_IND_BAK);


Drop the temporary backup tables.

drop table SYS.WRI$_OPTSTAT_BAK;
drop table SYS.WRI$_OPTSTAT_TAB_BAK;
drop table SYS.WRI$_OPTSTAT_IND_BAK;


Drop the related indexes on those tables:

SQL> drop index I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST;

SQL> drop index I_WRI$_OPTSTAT_HH_ST;

Recreate the indexes:

CREATE UNIQUE INDEX "SYS"."I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST" ON "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" ("OBJ#", "INTCOL#", SYS_EXTRACT_UTC("SAVTIME"), "COLNAME")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSAUX";

CREATE INDEX "SYS"."I_WRI$_OPTSTAT_HH_ST" ON "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" (SYS_EXTRACT_UTC("SAVTIME"))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSAUX"; 


Make sure indexes are in usable state:

SQL> select index_name from dba_indexes where status='UNUSABLE';

Then I am able to run the below statement in a matter of minutes.

SQL> exec dbms_stats.purge_stats(SYSDATE-14);

Run Gather Schema Statistics:

I would like to note that I have had an SR open with Oracle for 8 months and they have identified this as a bug, but have not been able to provide me with a solution. My tablespace can't afford to wait any longer, so this is why I have decided to use a workaround. I have implemented this workaround in many test environments and have not had any issues. I am just a bit gun shy to pull the trigger in Production.

scheduler$_event_log tips:

The sys.scheduler$_event_log is a table that resides in the SYSAUX tablespace. The purpose of the scheduler$_event_log table is to store details about past scheduler events.

The scheduler$_event_log table can consume an inordinate amount of space in the SYSAUX tablespace, and it should be purged periodically to keep the SYSAUX tablespace from becoming full. Either of these statements will purge the rows on demand:

SQL> exec DBMS_SCHEDULER.PURGE_LOG();
SQL> truncate table sys.scheduler$_event_log;
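To keep the table from growing again, the scheduler's own log retention can be capped; the 30-day value below is only an example:

SQL> exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history', '30');

-- PURGE_LOG can also be restricted, e.g. purge only job log entries older than 30 days
SQL> exec DBMS_SCHEDULER.PURGE_LOG(log_history => 30, which_log => 'JOB_LOG');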


In addition to the conceptual job table, the scheduler uses several other tables to store metadata about scheduler objects.

SQL>
select table_name
from user_tables
where table_name like '%SCHEDULER$%'
and table_name not like '%SCHEDULER$_JOB%';


output;

TABLE_NAME
------------------------------
SCHEDULER$_EVENT_LOG
SCHEDULER$_STEP_STATE
SCHEDULER$_WINDOW_DETAILS
AQ$_SCHEDULER$_EVENT_QTAB_L
AQ$_SCHEDULER$_EVENT_QTAB_S
AQ$_SCHEDULER$_REMDB_JOBQTAB_L
AQ$_SCHEDULER$_REMDB_JOBQTAB_S
SCHEDULER$_CHAIN
SCHEDULER$_CLASS
SCHEDULER$_CREDENTIAL
SCHEDULER$_DESTINATIONS
SCHEDULER$_EVENT_QTAB
SCHEDULER$_EVTQ_SUB
SCHEDULER$_FILEWATCHER_HISTORY
SCHEDULER$_FILEWATCHER_RESEND
SCHEDULER$_FILE_WATCHER
SCHEDULER$_GLOBAL_ATTRIBUTE
...
...
AQ$_SCHEDULER$_REMDB_JOBQTAB_H
AQ$_SCHEDULER$_REMDB_JOBQTAB_I
AQ$_SCHEDULER$_REMDB_JOBQTAB_T

42 rows selected

SQL>

Under normal circumstances, one would not expect to interact with any of the scheduler tables directly. Information about the scheduler is displayed using the dba_scheduler_% views, and the dbms_scheduler package is used for the creation and manipulation of scheduler objects.


This script will display details from scheduler$_event_log and scheduler$_job_run_details.

Collected from : Burleson Consulting

SELECT j.LOG_ID,
       j.LOG_DATE,
       e.OWNER,
       DECODE(instr(e.NAME, '"'),
              0,
              e.NAME,
              substr(e.NAME, 1, instr(e.NAME, '"') - 1)) "JOB_NAME",
       DECODE(instr(e.NAME, '"'),
              0,
              NULL,
              substr(e.NAME, instr(e.NAME, '"') + 1)) "CHILD_JOB",
       e.STATUS,
       j.ERROR#,
       j.REQ_START_DATE,
       j.START_DATE,
       j.RUN_DURATION,
       j.INSTANCE_ID,
       j.SESSION_ID,
       j.SLAVE_PID,
       j.CPU_USED,
       decode(e.credential,
              NULL,
              NULL,
              substr(e.credential, 1, instr(e.credential, '"') - 1)) "Credential1",
       decode(e.credential,
              NULL,
              NULL,
              substr(e.credential,
                     instr(e.credential, '"') + 1,
                     length(e.credential) - instr(e.credential, '"'))) "Credential2",
       decode(bitand(e.flags, 1),
              0,
              NULL,
              substr(e.destination, 1, instr(e.destination, '"') - 1)) "Flag1",
       decode(bitand(e.flags, 1),
              0,
              e.destination,
              substr(e.destination,
                     instr(e.destination, '"') + 1,
                     length(e.destination) - instr(e.destination, '"'))) "Flag2",
       j.ADDITIONAL_INFO
  FROM scheduler$_job_run_details j, scheduler$_event_log e
 WHERE j.log_id = e.log_id
   AND e.type# = 66
   and e.dbid is null
   AND (e.owner = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') or /* user has object privileges */
       (select jo.obj#
           from obj$ jo, user$ ju
          where DECODE(instr(e.NAME, '"'),
                       0,
                       e.NAME,
                       substr(e.NAME, 1, instr(e.NAME, '"') - 1)) = jo.name
            and e.owner = ju.name
            and jo.owner# = ju.user#
            and jo.subname is null
            and jo.type# = 66) in
       (select oa.obj#
           from sys.objauth$ oa
          where grantee# in (select kzsrorol from x$kzsro)) or /* user has system privileges */
       (exists (select null
                   from v$enabledprivs
                  where priv_number = -265 /* CREATE ANY JOB */
                 ) and e.owner != 'SYS'));


I hope it helps. Cheers !!!

Sep 5, 2014

v$session_wait Tips & Tricks

v$session_wait Tips - Analyzing waits during bottleneck situation

Analyzing real-time physical I/O waits is an important step in improving performance

what are in v$session_wait?

The v$session_wait view displays information about wait events for which active sessions are currently waiting. The following is the description of this view, and it contains some very useful columns, especially the P1 and P2 references to the objects associated with the wait events.

SQL> desc v$session_wait

 Name                        Null?    Type
 --------------------------- -------- ------------
 SID                                  NUMBER
 SEQ#                                 NUMBER
 EVENT                                VARCHAR2(64)
 P1TEXT                               VARCHAR2(64)
 P1                                   NUMBER
 P1RAW                                RAW(4)
 P2TEXT                               VARCHAR2(64)
 P2                                   NUMBER
 P2RAW                                RAW(4)
 P3TEXT                               VARCHAR2(64)
 P3                                   NUMBER
 P3RAW                                RAW(4)
 WAIT_CLASS_ID                        NUMBER
 WAIT_CLASS#                          NUMBER
 WAIT_CLASS                           VARCHAR2(64)
 WAIT_TIME                            NUMBER
 SECONDS_IN_WAIT                      NUMBER
 STATE                                VARCHAR2(19)

Using v$session_wait, it is easy to interpret each wait event parameter using the corresponding descriptive text columns for that parameter. Also, wait class columns were added so that various wait events could be grouped into the related areas of processing such as network, application, idle, concurrency, etc.

This view provides the DBA with a dynamic snapshot of the wait event picture for specific sessions. Each wait event contains other parameters that provide additional information about the event. For example, if a particular session waits for a buffer busy waits event, the database object causing this wait event can easily be determined:

SQL> 
select s.username, w.event, w.sid, w.p1, w.p2
from v$session s, v$session_wait w
where s.sid = w.sid
and w.sid = 74;

The output of this query for a particular session with SID 74 might look like this:

USERNAME    EVENT            SID P1 P2 
---------- ----------------- --- -- ---
HR         buffer busy waits 74  4  155

Columns P1 and P2 allow the DBA to determine file and block numbers that caused this wait event. The query below retrieves the object name that owns data block 155, the value of P2 above:

SQL> select segment_name,segment_type
from dba_extents
where file_id = 4 
and 155 between block_id and block_id + blocks - 1;

OR

SQL> select segment_name,segment_type
from dba_extents
where file_id = &file_id
and &Block_id between block_id and block_id + blocks - 1;

Note: here you enter the file and block values when prompted.

If you are getting below error:
ERROR:
ORA-01555: snapshot too old: rollback segment number 0 with name "SYSTEM" too
small

Solution:
Please refer to my earlier post on ORA-01555.

SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ---------------
EMPLOYEE                       TABLE

The above output shows that the EMPLOYEE table caused this wait event, a very useful clue when tuning the SQL within this session.

The ability to analyze and correct Oracle Database physical read wait events is critical in any tuning project. The majority of activity in a database involves reading data, so this type of tuning can have a huge, positive impact on performance.

System wait tuning has become very popular because it can show you those wait events that are the primary bottleneck for your system. Some experts like the 10046 wait event (level 8 and higher) analysis technique and Oracle MOSC now has an analysis tool called trcanlzr.sql to interpret bottlenecks via 10046 trace dumps. However, some Oracle professionals find dumps cumbersome and prefer to sample real-time wait events.

When doing wait analysis, it is critical to remember that all Oracle databases experience wait events, and that the presence of waits does not always indicate a problem. In fact, all well-tuned databases have some bottleneck. (For example, a computationally intensive database may be CPU-bound and a data warehouse may be bound by disk-read waits.) In theory, any Oracle database will run faster if access to hardware resources associated with waits is increased.

Finding the Contentions:

For example, V$SESSION_EVENT can show that session 124 (SID=124) had many waits on the db file scattered read, but it does not show which file and block number. However, V$SESSION_WAIT shows the file number in P1, the block number read in P2, and the number of blocks read in P3 (P1 and P2 let you determine for which segments the wait event is occurring).

If sessions show the same P1, P2 and P3, they are waiting on the same file, the same block and the same number of blocks, but they could be waiting on the same or different rows.

You need to query v$lock to find out more info.
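A minimal sketch of such a query (SID 74 is the example session used above):

SQL> select sid, type, id1, id2, lmode, request, block
     from v$lock
     where sid = 74;

-- or, to see who is blocking whom
SQL> select blocking_session, sid, serial#, event, seconds_in_wait
     from v$session
     where blocking_session is not null;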

The values represent:

P1—The absolute file number for the data file involved in the wait.
P2—The block number within the data file referenced in P1 that is being waited upon.
P3—The reason code describing why the wait is occurring.

Here's an Oracle data dictionary query for these values:
SQL>
select p1 "File#",p2 "Block#",p3 "ReasonCode"
from v$session_wait
where event = '&event_name';

You can trace P1 and P2 back to the specific table or index with these scripts:
If information collected from the above query repeatedly shows that the same block (or range of blocks) is experiencing waits, this indicates a "hot" block or object. The following query will give the name and type of the object:

SELECT relative_fno, owner, segment_name, segment_type 
FROM dba_extents 
WHERE file_id = &file 
AND &block BETWEEN block_id AND block_id + blocks - 1;

Verification via Event name:

It is not practical to remember the meaning of P1, P2 and P3 for every wait event shown in v$session or v$session_wait.

The meaning of P1, P2 and P3 for each wait event can be looked up by querying V$EVENT_NAME:
SQL> 
col name format a25; 
col p1 format a10; 
col p2 format a10; 
col p3 format a10; 
SELECT NAME, PARAMETER1 P1, PARAMETER2 P2, PARAMETER3 P3 
2 FROM V$EVENT_NAME 
3 WHERE NAME = '&event_name'; 

Enter value for event_name: db file scattered read
old   3: WHERE NAME = '&event_name'
new   3: WHERE NAME = 'db file scattered read'

NAME                      P1         P2         P3
------------------------- ---------- ---------- ----------
db file scattered read    file#      block#     blocks

file#:  the data file number
block#: the starting block number
blocks: the number of data blocks to read

If you want to investigate a session or query in more detail, trace the query or the session.

Click here to read more about SQL Trace and Oradebug.
