Jun 17, 2017

Applying PSU (APR 2017) Patch on Oracle 11gR2 Two Node RAC

This document explains how to apply the April 2017 PSU patch on Oracle 11.2.0.4 RAC. All steps were performed on a two-node RAC.

Consolidated steps:

1 : Take a backup of the OPatch folder in both the GI and RDBMS homes, and install the OPatch version recommended in the patch README (a refresh sketch follows this list)
2 : Stop OEM, if running
3 : Download required PSU (25476126) and unzip it
4 : Create response file from any OPatch location
5 : Apply the PSU node by node
6 : Check the opatch details, then run @catbundle.sql psu apply followed by @utlrp.sql
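
Step 1 is not shown in the transcripts below; a minimal sketch of backing up and refreshing OPatch (run as the home owner, per home, per node) might look like this. OPatch is delivered as patch 6880880; the exact zip file name below is an assumption.

bash-4.2$ cd /oracle/app/oracle/product/11.2.0/dbhome_1
bash-4.2$ mv OPatch OPatch.bkp                                 # keep the old OPatch as a backup
bash-4.2$ unzip /u02/Patch_dir/p6880880_112000_AIX64-5L.zip    # extracts a fresh OPatch folder
bash-4.2$ OPatch/opatch version                                # confirm the recommended version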

Step details:

1) Stop OEM if running

bash-4.2$ export ORACLE_UNQNAME=PROD
bash-4.2$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
https://eddb1.imcl.com:5500/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running.
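
Here the console is already down. Had it been running, it could have been stopped with:

bash-4.2$ emctl stop dbconsole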


2) Existing patches in both GI and RDBMS homes:

a) RDBMS Home
bash-4.2$ ./opatch lspatches
20831122;OCW Patch Set Update : 11.2.0.4.7 (20831122)
20760982;Database Patch Set Update : 11.2.0.4.7 (20760982)

OPatch succeeded.
bash-4.2$ ./opatch version
OPatch Version: 11.2.0.3.14

OPatch succeeded.
bash-4.2$

b) GI Home

bash-4.2$ pwd
/oracle/app/11.2.0/grid/OPatch
bash-4.2$ ./opatch version
OPatch Version: 11.2.0.3.14

OPatch succeeded.
bash-4.2$ ./opatch lspatches
20831122;OCW Patch Set Update : 11.2.0.4.7 (20831122)
20760982;Database Patch Set Update : 11.2.0.4.7 (20760982)
20299019;ACFS Patch set update : 11.2.0.4.6 (20299019)

OPatch succeeded.
bash-4.2$


3) Download the latest PSU and unzip it

bash-4.2$ cd /u02/Patch_dir
bash-4.2$ pwd
/u02/Patch_dir
bash-4.2$ ls
25476126                       PatchSearch.xml                p25476126_112040_AIX64-5L.zip
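
The listing above shows the patch already extracted. If you are starting from the zip alone, unzip it in place as the oracle user:

bash-4.2$ cd /u02/Patch_dir
bash-4.2$ unzip p25476126_112040_AIX64-5L.zip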


4) Create a response file

su - oracle
$ cd $ORACLE_HOME/OPatch/ocm/bin/
bash-4.2$ pwd
/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch/ocm/bin
$ ./emocmrsp -no_banner -output /u02/Patch_dir/ocm.rsp

output:

bash-4.2$ ./emocmrsp -no_banner -output /u02/Patch_dir/ocm.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (/u02/Patch_dir/ocm.rsp) was successfully created.
bash-4.2$


5) Now run the patch installation as the root user, so that the patch is applied directly to BOTH the oracle and grid homes

# ./opatch auto /u02/Patch_dir/25476126 -ocmrf /u02/Patch_dir/ocm.rsp

output:

bash-4.2# pwd
/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch
bash-4.2# ./opatch auto /u02/Patch_dir/25476126 -ocmrf /u02/Patch_dir/ocm.rsp
Executing /oracle/app/11.2.0/grid/perl/bin/perl ./crs/patch11203.pl -patchdir /u02/Patch_dir -patchn 25476126 -ocmrf /u02/Patch_dir/ocm.rsp -paramfile /oracle/app/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /oracle/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatchauto2017-06-17_16-12-55.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/oracle/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatchauto2017-06-17_16-12-55.report.log

2017-06-17 16:12:55: Starting Clusterware Patch Setup
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /oracle/app/oracle/product/11.2.0/dbhome_1 ...
Stopped RAC /oracle/app/oracle/product/11.2.0/dbhome_1 successfully

patch /u02/Patch_dir/25476126/24732075  apply successful for home  /oracle/app/oracle/product/11.2.0/dbhome_1
patch /u02/Patch_dir/25476126/23054319/custom/server/23054319  apply successful for home  /oracle/app/oracle/product/11.2.0/dbhome_1

Stopping CRS...
Stopped CRS successfully

patch /u02/Patch_dir/25476126/24732075  apply successful for home  /oracle/app/11.2.0/grid
patch /u02/Patch_dir/25476126/23054319  apply successful for home  /oracle/app/11.2.0/grid
patch /u02/Patch_dir/25476126/22502505  apply successful for home  /oracle/app/11.2.0/grid

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.

Starting RAC /oracle/app/oracle/product/11.2.0/dbhome_1 ...
Started RAC /oracle/app/oracle/product/11.2.0/dbhome_1 successfully

opatch auto succeeded.
bash-4.2#
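
Before checking the inventory, it is worth confirming that the stack came back cleanly on this node; for example (crsctl as root, srvctl as oracle; the database unique name PROD is taken from the OEM step above):

bash-4.2# /oracle/app/11.2.0/grid/bin/crsctl check crs
bash-4.2$ srvctl status database -d PROD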

6) Verify whether the patch was applied:

su - oracle
bash-4.2$ pwd
/u02/Patch_dir
bash-4.2$ cd $ORACLE_HOME/OPatch
bash-4.2$ ./opatch lspatches
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
24732075;Database Patch Set Update : 11.2.0.4.170418 (24732075)

OPatch succeeded.
bash-4.2$

# su - grid

bash-4.2$ ./opatch lspatches
22502505;ACFS Patch Set Update : 11.2.0.4.160419 (22502505)
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
24732075;Database Patch Set Update : 11.2.0.4.170418 (24732075)

OPatch succeeded.
bash-4.2$ whoami
grid
bash-4.2$


Now repeat the same steps on the other node.

Note: Application downtime is not required.

7) For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus as SYSDBA and run the catbundle.sql and utlrp.sql scripts as follows:

# su - oracle
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> @utlrp.sql
SQL>
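
Once catbundle.sql and utlrp.sql complete, the PSU registration and the invalid-object count can be verified, for example:

SQL> SELECT action_time, action, version, comments FROM dba_registry_history ORDER BY action_time;
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';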

Hope this works for you. Leave a comment if you have any remarks.


Jun 14, 2017

ORA-02020: too many database links in use - solution

One of my databases is used as a remote database for distributed transactions as well as for master table access. For one requirement we created a large number of db links. Many queries were fired over these db links, and at that point we hit the error below:

"ORA-02020: too many database links in use "

Cause:  The current session has exceeded the INIT.ORA open_links maximum.


How to find it?
The error is reported when the open_links parameter value is too low. The default value of this parameter is 4. If you hit this error frequently, raise the value.

e.g.,
SQL> show parameter open_links;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
open_links                           integer     4
open_links_per_instance              integer     4

Action: 
Increase the open_links limit, or free up some open links by committing or rolling back the transaction and canceling open cursors that reference remote databases.

SQL> alter system set open_links_per_instance=10 scope=spfile;
SQL> alter system set open_links=10 scope=spfile;

and then bounce the database (both parameters are static, so a restart is required).
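
If a restart is not immediately possible, open links can also be freed inside the offending session. A minimal sketch (my_link is a placeholder for one of your db link names):

SQL> SELECT db_link, logged_on, in_transaction FROM v$dblink;   -- links open in this session
SQL> COMMIT;                                                    -- end the distributed transaction first
SQL> ALTER SESSION CLOSE DATABASE LINK my_link;                 -- frees one open_links slot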



Global Temporary Table ( GTT) - key facts

The data in a GTT is written to the temporary tablespace, which is not directly protected by redo, so using a GTT improves performance by reducing redo generation. Unfortunately, prior to Oracle 12c, all undo associated with DML against a GTT is written to the normal undo tablespace, which is itself protected by redo. As a result, using a GTT reduces the amount of redo generated but does not eliminate it. Another way of describing this is that a GTT removes direct redo generation, but not the indirect redo generation caused by undo.

1. GTTs use space in your temporary tablespace. Regular tables use space in their assigned tablespace.

2. GTTs never have to be truncated; the data automatically goes away when your session ends (or at commit, if the table was created with ON COMMIT DELETE ROWS). Regular tables persist until you delete/truncate.

3. GTTs allow you to see only the data your session has put in. Regular tables allow you to see other people's data that they put in (Assuming you're not using FGAC).

4. Regular tables get fragmented/oversized over time because your report is 100 rows and the next guy's report is 10000000 rows which may lead to lots of empty blocks being scanned. GTTs are effectively truncated when you logout/commit.

5. Regular tables have statistics gathered to help the optimizer (which may be good or bad). GTTs have no statistics unless you gather or set them explicitly.

6. GTTs don't persist very well. This makes it harder for stateless or pooled connections to "build" with one connection and then "Query" with another connection. 

7. Be aware that even a GTT generates substantial redo when you run UPDATE and DELETE against it (INSERT generates minimal redo); see the measurement sketch after this list. Separately, a GTT can be referenced in a child function or procedure called from the parent session.

8. No redo is generated for the GTT's own blocks; however, undo is generated for the GTT, and redo to protect that undo *must* be generated.

9. Since INSERTs and SELECTs generate minimal undo (and consequently minimal redo), a GTT is best suited when most of your operations against it are INSERTs and SELECTs.

10. Since UPDATEs and DELETEs generate the most undo (and consequently the most redo to protect that undo), using a GTT for these DML operations generates almost the same amount of redo as a normal table.

11. An UPDATE against a GTT generates roughly half the redo it would generate against a normal table (only the redo protecting the undo is logged).

12. A DELETE against a GTT generates about the same amount of redo as it would against a normal table (the undo for a DELETE is large, and the redo protecting that undo is logged).

13. Although the data in a GTT is written to the temporary tablespace, the associated undo is still written to the normal undo tablespace, which is itself protected by redo, so using a GTT reduces neither the undo nor the redo generated to protect the undo tablespace.

14. If a TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated. There is no effect on the data of other sessions.

15. Indexes can be created on temporary tables. The content and scope of an index are session-specific, just like the table data. Index statistics are maintained by Oracle itself.

16. Views can be created against temporary tables and combinations of temporary and permanent tables.

17. Temporary tables can have triggers associated with them.

18. Export and Import utilities can be used to transfer the table definitions, but no data rows are processed.
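
To verify facts 7 to 12 on your own system, measure the session's redo generation around the DML; a minimal sketch:

SQL> SELECT n.name, s.value
       FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
      WHERE n.name = 'redo size';
-- run the same INSERT/UPDATE/DELETE against a normal table and against a GTT,
-- re-running the query after each statement; the deltas are the redo costs.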


You cannot use the following features with temporary tables:

  1. Synonyms, triggers and views on SESSION schema tables (including physical tables and temporary tables)
  2. Caching statements that reference SESSION schema tables and views
  3. Temporary tables cannot be specified in referential constraints and primary keys
  4. Temporary tables cannot be referenced in a triggered-SQL-statement
  5. Check constraints on columns
  6. Generated-column-spec
  7. Importing into temporary tables


How to create a GTT:

CREATE GLOBAL TEMPORARY TABLE gtt_table_name ( .. )
ON COMMIT DELETE ROWS | ON COMMIT PRESERVE ROWS;
(ON COMMIT DELETE ROWS is the default; Oracle has no ON ROLLBACK clause.)
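
A minimal worked example (table and column names are illustrative):

SQL> CREATE GLOBAL TEMPORARY TABLE gtt_stage (id NUMBER, status VARCHAR2(10))
     ON COMMIT DELETE ROWS;
SQL> INSERT INTO gtt_stage VALUES (1, 'NEW');
SQL> SELECT COUNT(*) FROM gtt_stage;   -- 1: visible only to this session
SQL> COMMIT;
SQL> SELECT COUNT(*) FROM gtt_stage;   -- 0: ON COMMIT DELETE ROWS purged the rows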

Jun 6, 2017

Use of "METHOD_OPT" in gather statistics - Best Practices

How to use "METHOD_OPT" parameter during gather statistics? What performance benefits we will have? When to use this option? 

During training session on performance tuning, I faced these questions from many DBAs.

With release 11g there are some enhancements in gather statistics. By far the most controversial parameter in the DBMS_STATS.GATHER_*_STATS procedures is the METHOD_OPT parameter.

The METHOD_OPT parameter controls the creation of histograms during statistics collection. Histograms are a special type of column statistic created to provide more detailed information on the data distribution in a table column.

So why are histograms such a controversial issue?
Histogram creation does extend the elapsed time and the system resources needed for statistics collection, but the far bigger concern people have with histograms comes from their interaction with the bind peeking feature and how their presence affects cardinality estimates for nearly popular values.

The purpose of method_opt: 

The method_opt argument within dbms_stats controls the following:

The generation of histograms
The creation of extended statistics  (Oracle 11g)
The collection of "base" column statistics

Let's look at some of the many permutations of the method_opt clause.

The recommendation from Oracle Database11g onwards is to let METHOD_OPT default and to take advantage of Adaptive Cursor Sharing.

If you plan to manually set the METHOD_OPT parameter to a non-default value, ensure you specify only the columns that really need a histogram. Setting METHOD_OPT to FOR ALL COLUMNS SIZE 254 will cause Oracle to gather a histogram on every column. This will unnecessarily extend the elapsed time and the system resources needed for statistics gathering, as well as increase the amount of space required to store the statistics.

You should also refrain from setting METHOD_OPT to FOR ALL INDEX COLUMNS SIZE 254 as this will cause Oracle to gather histograms on every column used in an index, which again could waste system resources. This setting also has a nasty side effect of preventing Oracle from collecting basic column statistics for non-index columns.
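
If only a handful of columns are known to be skewed, histograms can be requested for just those columns while keeping basic statistics for the rest. A minimal sketch, assuming a hypothetical HR.ORDERS table with a skewed STATUS column:

begin
   dbms_stats.gather_table_stats(
      ownname    => 'HR',
      tabname    => 'ORDERS',
      method_opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 STATUS'
   );
end;
/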

The default method_opt "For all columns size auto":

Remember, it is wasteful to create a histogram on a column that is never referenced by a SQL statement.  It is the SQL workload that determines if a column needs histograms.

The value "for all columns size auto" is the Oracle default, and this option creates histograms on what the optimizer considers to be "important columns". The optimizer does this by examining your current workload when you run the dbms_stats command, interrogating all current SQL in the library cache to see which columns might benefit from a histogram.
e.g.,
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'HR',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      degree           => 4 -- optional
   );
end;
/

The method "for all indexed columns":

 The method "for all indexed columns" limits base column gathering to only those columns that are included in an index. This value is not recommended as it is highly unlikely that only index columns will be used in the select list, where clause predicates, and group by clause of all of the SQL statement executed in the environment.

The method_opt='SKEWONLY'  dbms_stats Option:

The first is the 'skewonly' option, which is very time-intensive because it examines the distribution of values for every column within every index.

If dbms_stats discovers an index whose columns are unevenly distributed, it will create histograms for that index to aid the cost-based SQL optimizer in making a decision about index vs. full-table scan access.

For example, if an index has one column value that appears in 50% of the rows, a full-table scan is faster than an index scan to retrieve those rows.

Histograms are also used with SQL that has bind variables and SQL with cursor_sharing enabled. In these cases, the CBO determines whether the column value could affect the execution plan, and if so, replaces the bind variable with a literal and performs a hard parse.

e.g.,
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'HR',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size skewonly',
      degree           => 7
   );
end;
/

The method_opt='REPEAT'  dbms_stats Option:

Following the one-time detailed analysis, the re-analysis task is less resource-intensive with the REPEAT option. Using the repeat option will only re-analyze indexes with existing histograms and will not search for other histogram opportunities. This is the way you will re-analyze your statistics on a regular basis.

e.g.,

begin
   dbms_stats.gather_schema_stats(
      ownname          => 'HR',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size repeat',
      degree           => 7
   );
end;
/

The method_opt='AUTO' dbms_stats Option:

The auto option is used when monitoring is implemented and creates histograms based upon data distribution and the manner in which the column is accessed by the application (e.g. the workload on the column as determined by monitoring, especially foreign keys, to determine the cardinality of table join result sets). Using method_opt=>'auto' is similar to using gather auto in the options parameter of dbms_stats.

begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      degree           => 7
   );
end;
/

Important:  Whenever almost all of your SQL uses bind variables, you will want to delete your existing table statistics and then change method_opt to prevent future histogram creation. To delete the histograms that were created via method_opt, just remove the statistics with dbms_stats.delete_table_stats and re-analyze the tables.

To prevent future generation of histograms, invoke dbms_stats.set_param as follows

SQL> exec dbms_stats.set_param(pname=>'METHOD_OPT', pval=>'FOR ALL COLUMNS SIZE 1');
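
The new default can be confirmed afterwards (dbms_stats.get_prefs is the 11g equivalent):

SQL> SELECT dbms_stats.get_param('METHOD_OPT') FROM dual;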

Remember, analyzing for histograms is time-consuming, and histograms are used under two conditions:

Table join order : The CBO must know the size of the intermediate result sets (cardinality) to properly determine the correct join order for multi-table joins.

Table access method : The CBO needs to know about columns in SQL where clauses where the column values are skewed such that a full-table scan might be faster than an index range scan. Oracle uses this skew information in conjunction with the clustering_factor column of the dba_indexes view.

Hence, this is the proper order for using the dbms_stats package to locate proper columns for histograms:

1. Skewonly option - You want to use skewonly to do histograms for skewed columns, for cases where the value will make a difference between a full-table scan and an index scan.

2. Monitor - Next, turn on monitoring. Issue an 'alter table xx monitoring' and 'alter index yyy monitoring' command for all segments in your schema. This will monitor the workload.

3. Auto option - Once monitoring is in-place, you need to re-analyze with the "auto" option to create histograms for join columns within tables. This is critical for the CBO to determine the proper join order for finding the driving table in multi-table joins.

4. Repeat option - Finally, use the "repeat" option to re-analyze only the existing histograms.
Periodically you will want to re-run the skewonly and auto options to identify any new columns that require histograms. Once located, the repeat option will ensure that they are refreshed with current values.
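
To see which columns currently have histograms (and therefore what the repeat option will refresh), query the dictionary; the HR/EMP names are illustrative:

SQL> SELECT column_name, histogram
       FROM dba_tab_col_statistics
      WHERE owner = 'HR' AND table_name = 'EMP' AND histogram <> 'NONE';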


Best Practices:

1) Always gather stats with AUTO_SAMPLE_SIZE so the new stats gathering algorithm of Oracle 11g can kick in.
2) If you use partitioned tables, you should use synopses so the global stats can be derived much faster in an incremental fashion.

Synopses are created when you enable incremental stats at the table or system level:

Table level: 
SQL> exec DBMS_STATS.SET_TABLE_PREFS('HR','EMP','INCREMENTAL','TRUE');

System level: 
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('INCREMENTAL','TRUE');

This will avoid re-scanning partitions that haven’t changed since the last stats generation, making it faster.
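
The preference can be verified per table, e.g.:

SQL> SELECT dbms_stats.get_prefs('INCREMENTAL', 'HR', 'EMP') FROM dual;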

Note:
To gather statistics for a big schema, use parallelism as below to reduce the elapsed time:
e.g.,
exec dbms_stats.gather_schema_stats(ownname => 'HR', estimate_percent => dbms_stats.auto_sample_size, degree => 5);
