Dec 18, 2018

Connect to an Oracle database without tnsnames.ora from the Windows command prompt

-- Connect as a non-SYS user

Syntax:

sqlplus <username>/"<password>"@<hostname>:<port#>/<database/servicename>

e.g.,
sqlplus system/"Passw0rd#"@example02:1521/testdb


e.g.,

C:\>sqlplus system/"Passw0rd#"@example02:1521/testdb

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 18 12:31:53 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Nov 15 2018 06:35:54 -05:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL>

-- Connect as sysdba

Syntax:
sqlplus <username>/"<password>"@<hostname>:<port#>/<database/servicename> as sysdba
e.g.,
sqlplus sys/"Passw0rd#"@example02:1521/testdb as sysdba

C:\>sqlplus sys/"Passw0rd#"@example02:1521/testdb as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 18 12:57:26 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL>
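
If the EZConnect syntax above is not available or you need extra connection options, the same no-tnsnames.ora connection can also be made by passing a full connect descriptor on the command line. This is a sketch using the same host, port and service name as the examples above; quoting may need adjusting for your shell:

sqlplus system/"Passw0rd#"@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=example02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=testdb)))"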


Nov 13, 2018

Warning: ORA-16782: instance not open for read and write access

While converting a snapshot standby back to a physical standby database in a RAC environment, the issue below can be expected if the other instances are not down.

Reproducing the issue:


DGMGRL> convert database testdr to physical standby;
Converting database "testdr" to a Physical Standby database, please wait...
Error: 
ORA-03113: end-of-file on communication channel
Process ID: 44589
Session ID: 1355 Serial number: 43829

Failed to convert database "testdr"
DGMGRL> 



DGMGRL> show database testdr;

Database - testdr

  Role:               SNAPSHOT STANDBY
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          4 hours 33 minutes 30 seconds (computed 1 second ago)
  Instance(s):
    testdr1
      Warning: ORA-16782: instance not open for read and write access

    testdr2

Database Status:
WARNING

DGMGRL> 


How to Fix?

Don't panic. If you check the other database instances, they are already closed or down.
Now follow the steps below one by one:

-- Shut down the instance that is still open
SQL> shut immediate;

-- Start in mount mode (operate from one node only)

SQL> startup mount

-- Now convert to Physical Standby

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Database altered.

SQL> select name, DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
--------- ------------------------------ -------------------- ----------------
test    testdr                         MOUNTED              PHYSICAL STANDBY


-- Restart redo apply (managed recovery)

SQL> alter database recover managed standby database disconnect from session using current logfile;
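
-- Optionally, confirm that redo apply (MRP) is running before going back to the broker.
-- A quick check from SQL*Plus; the MRP0 process should show a status such as APPLYING_LOG or WAIT_FOR_LOG.

SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process like 'MRP%';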


-- But the Data Guard broker is not yet updated. If you check, the message below is expected.

DGMGRL> connect testdr
Password:
Connected as SYSDG.
DGMGRL> show database testdr;

Database - testdr

  Role:               SNAPSHOT STANDBY
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          (unknown)
  Instance(s):
    testdr1
      Warning: ORA-16782: instance not open for read and write access

    testdr2

  Database Error(s):
    ORA-16816: incorrect database role

Database Status:
ERROR


-- So now do the following

DGMGRL> CONVERT DATABASE testdr TO PHYSICAL STANDBY;
Converting database "testdr" to a Physical Standby database, please wait...
Database "testdr" converted successfully
DGMGRL> 

-- Now verify in the broker as well; the issue is fixed

DGMGRL> show database testdr

Database - testdr

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 16.91 MByte/s
  Real Time Query:    OFF
  Instance(s):
    testdr1 (apply instance)
    testdr2

Database Status:
SUCCESS


Now the issue is resolved.

Oct 28, 2018

Setting Tablespace Thresholds and Alerts

How to set tablespace thresholds?

If you have only a small number of targets and you are using DB Control for each database, then use the DBMS_SERVER_ALERT package to set them. If you have OEM Cloud Control 12c/13c, then you can create a new metric extension and rule and apply it. Here we will discuss how to use the DBMS_SERVER_ALERT package.

The DBMS_SERVER_ALERT package can be used as an early warning mechanism for space issues. Thresholds can be set database-wide or for individual tablespaces. When a threshold is crossed, warnings are sent by Enterprise Manager (DB Control, Grid Control or Cloud Control).

Setting the OBJECT_NAME parameter to NULL sets the default threshold for all tablespaces in the database. Setting the OBJECT_NAME parameter to a tablespace name sets the threshold for the specified tablespace and overrides any default setting.

There are two types of tablespace thresholds that can be set.

TABLESPACE_PCT_FULL : Percent full. 

When the warning or critical threshold based on percent full is crossed a notification occurs.

TABLESPACE_BYT_FREE : Free Space Remaining (KB). 

The constant name implies the value is in bytes, but it is specified in KB. When the warning or critical threshold based on remaining free space is crossed, a notification occurs. When you view these thresholds in different tools the units may vary; for example, Cloud Control displays and sets these values in MB.

The thresholds are set using a value and an operator.

OPERATOR_LE : Less than or equal.
OPERATOR_GE : Greater than or equal.


Setting Thresholds:

Note: You should check your existing thresholds before changing them, so you know what to set them back to.
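
For example, the current settings for a given tablespace can be read back with DBMS_SERVER_ALERT.GET_THRESHOLD. This is only a sketch: the parameter names mirror SET_THRESHOLD, and if no explicit threshold exists for the object the call may return the database-wide default or raise an error, depending on version.

set serveroutput on

declare
  l_warning_operator        number;
  l_warning_value           varchar2(100);
  l_critical_operator       number;
  l_critical_value          varchar2(100);
  l_observation_period      number;
  l_consecutive_occurrences number;
begin
  dbms_server_alert.get_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => l_warning_operator,
    warning_value           => l_warning_value,
    critical_operator       => l_critical_operator,
    critical_value          => l_critical_value,
    observation_period      => l_observation_period,
    consecutive_occurrences => l_consecutive_occurrences,
    instance_name           => NULL,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'USERS');

  dbms_output.put_line('Warning value : ' || l_warning_value);
  dbms_output.put_line('Critical value: ' || l_critical_value);
end;
/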

The following examples show how to set the different types of alerts.

Example-1:  Database-wide KB free threshold.

Begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_byt_free,
    warning_operator        => DBMS_SERVER_ALERT.operator_le,
    warning_value           => '1024000',
    critical_operator       => DBMS_SERVER_ALERT.operator_le,
    critical_value          => '102400',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => NULL);
end;
/

Example-2:    Database-wide percent full threshold.

Begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => '90',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => NULL);
end;
/

Example-3:  Tablespace-specific KB free threshold.

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_byt_free,
    warning_operator        => DBMS_SERVER_ALERT.operator_le,
    warning_value           => '1024000',
    critical_operator       => DBMS_SERVER_ALERT.operator_le,
    critical_value          => '102400',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/

Example-4:    Tablespace-specific percent full threshold.

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => '90',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/

Example-5: Tablespace-specific reset to defaults (set the warning and critical values to NULL).

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => NULL,
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => NULL,
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/


>> Setting the warning and critical levels to '0' disables the notification.

Displaying Thresholds
The threshold settings can be displayed using the DBA_THRESHOLDS view.

SET LINESIZE 200

COLUMN tablespace_name FORMAT A30
COLUMN metrics_name FORMAT A30
COLUMN warning_value FORMAT A30
COLUMN critical_value FORMAT A15

SELECT object_name AS tablespace_name,
       metrics_name,
       warning_operator,
       warning_value,
       critical_operator,
       critical_value
FROM   dba_thresholds
WHERE  object_type = 'TABLESPACE'
ORDER BY object_name;
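
Once a threshold has actually been crossed, the alert itself can be seen in the DBA_OUTSTANDING_ALERTS view. A quick sketch (the exact columns available can vary slightly by version):

SELECT object_name,
       object_type,
       reason,
       suggested_action
FROM   dba_outstanding_alerts
WHERE  object_type = 'TABLESPACE';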

Ref:
https://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_salt.htm#CHDIFIHI

Oct 24, 2018

Difference between redo logs and undo tablespace?

For a database to be reliable, it should be ACID compliant. Undo comes into the picture to meet this requirement.

ACID means:
  A --> Atomicity
  C --> Consistency
  I --> Isolation
  D --> Durability

The discussion below explains why we need the undo tablespace and what it does.

Yes, you could use redo logs to perform the rollback. If that's all you wanted to do with the undo ...

In an ideally configured system, the redo logs are effectively 'write-only' files for performance reasons. Ideally the redo logs are read only for archive (at each log file switch) and for recovery purposes (at start up). One job only, and very, very optimized. (Oracle is slowly moving away from this ... log miner, cdc, streams, etc.)

A commit is only considered complete when the redo log is updated with the last dregs of the transaction that requested the commit. Any delay in writing to the redo log means a delay in commit performance.

On the undo/rollback side, Oracle guarantees 'read consistency' by [if necessary] rebuilding each and every block being queried to the way it looked when the query started. That rebuilding is effectively rolling back the block to the point-in-time of the start of the query. A LOT of rebuilding may occur when the database is active, and (again, for performance reasons) you don't want to get that from the redo logs.

Some questions:

Would it be true to say that the redo logs contain records of committed transactions only, while the undo holds the "temporary" records before the commit?

Basic concept:

REDO => Committed transactions
UNDO => Uncommitted Transactions.

But it is not always true. In the case of high-volume transactions, the redo also contains uncommitted transactions.

Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.

In such cases, the undo information will be needed when Oracle performs automatic recovery. It uses the undo information to roll back the uncommitted entries from the redo log files.

Oct 22, 2018

FLUSH_DATABASE_MONITORING_INFO Procedure

FLUSH_DATABASE_MONITORING_INFO is a procedure in the DBMS_STATS package. It flushes the in-memory monitoring information for all tables, and the corresponding entries in the *_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS views are updated immediately.

These views are populated only for tables with the MONITORING attribute. They are intended for statistics collection over a long period of time.
For performance reasons, the Oracle Database does not populate these views immediately when the actual modifications occur.
Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate these views with the latest information.
The ANALYZE_ANY system privilege is required to run this procedure.

exec dbms_stats.FLUSH_DATABASE_MONITORING_INFO;

After executing the procedure above, the query below will give you up-to-date data.

select table_owner,table_name,inserts,updates,deletes from DBA_TAB_MODIFICATIONS;

How to run this procedure?
e.g.,

SQL> exec dbms_stats.FLUSH_DATABASE_MONITORING_INFO;

PL/SQL procedure successfully completed.
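
For example, to look at the change volume for one schema immediately after the flush (SALES here is just a placeholder schema name):

SQL> select table_owner, table_name, inserts, updates, deletes, timestamp
     from   dba_tab_modifications
     where  table_owner = 'SALES'
     order  by inserts + updates + deletes desc;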

Note that when you run the gather-stats procedures, the monitoring information is flushed to the *_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS views automatically; the call above is only needed in the exceptional case where you want current figures outside of a statistics collection.

GATHER_TABLE_STATS difference between 11g and 12c 

Oct 19, 2018

Create Custom Metrics Extensions in Oracle Enterprise Manager 13c

About Metric Extensions:

Metric extensions provide you with the ability to extend Oracle's monitoring capabilities to monitor conditions specific to your IT environment. This provides you with a comprehensive view of your environment. Furthermore, metric extensions allow you to simplify your IT organization's operational processes by leveraging Enterprise Manager as the single central monitoring tool for your entire data-center instead of relying on other monitoring tools to provide this supplementary monitoring.

Creating a Custom Metric Extension:

By default, Oracle provides many metrics to monitor and report on the system. But, as per your needs, you can create your own metrics for both monitoring and corrective actions. This is an existing feature of OEM that lets you customize/create your own metrics using Metric Extensions. These are metrics you want to track that may or may not be provided by the underlying products. I want to illustrate this point with one requirement on using Oracle Enterprise Manager with Oracle Utilities products.


Requirement:

Suppose there is a common admin account with the SYSDBA privilege in all your database targets, and all monitoring, deployment and other database activities are carried out using this common account via a named credential. Your business needs a daily audit report of all the activities performed using that named credential/account.

How to create Named Credential?

Let us proceed with the requirements ...

Here are the consolidated steps to create custom metric extension and generate audit report:

Step-1: Create Metric Extension
Step-2: Deploy to Target
Step-3: Publish Metric Extension
Step-4: Create Information Publisher Report

A couple of things before we start:
  1. You need to create the common account in all targets. You can use an OEM job for this task.
    e.g.,  SYSADMIN
  2. You need to create a Global Named Credential.
    e.g., NC_ORACLE_SYSADMIN
    Use the EMCLI command below to check whether the above named credential has been created.

    $ emcli list -resource=NamedCredentials -columns="CredName,CredOwner"
  3. Test your SQL query to confirm what kind of output you need.
Example: To Create SYSADMIN manually in all Targets

Connect to each target and run the below script:

create user SYSADMIN identified by <password> default tablespace users profile default;
grant connect, resource to SYSADMIN;
grant sysdba to SYSADMIN;


Assume these accounts have been created in multiple targets. To deploy to multiple targets, you can create a shell script and deploy it to many database targets via an OEM job. For testing, let us say these targets are TESTDB1, TESTDB2, etc.

Create a global named credential for this account (using the GUI).

Note that all targets here are 12c with unified auditing enabled. Database-level data can be captured for SYSDBA accounts when the database is 12c.

Enable auditing for the SYSADMIN account in all the targets:

e.g.,

SQL> audit all by SYSADMIN by access;

Audit succeeded.


Example: a sample query to test in any target, to verify what type of data you need to push to your metric extension. Assume data is pushed on an hourly basis.

SELECT to_char(event_timestamp) EVENT_TIMESTAMP,
DBUSERNAME,USERHOST,
REGEXP_SUBSTR(CLIENT_IDENTIFIER, '^[^@]*') USER_NAME,
substr(REGEXP_SUBSTR(CLIENT_IDENTIFIER, '*@[^@]*'),2) USER_IP,
ACTION_NAME, OBJECT_SCHEMA,OBJECT_NAME
FROM  unified_audit_trail
WHERE event_timestamp > systimestamp - numtodsinterval(60,'MINUTE')
and dbusername = 'SYSADMIN';


Now I perform some activity in TESTDB1 and capture the output.

Sample output:

Note: tested on only one target.

Now we can proceed with the steps.

Step-1: Create Metric Extension:

Navigate to the Metric Extensions facility. This can be done from the link page or menu (Monitoring --> Metric Extensions). For example:

OEM console --> Enterprise --> Monitoring --> Metric Extensions --> Create --> Metric Extension

From the Create menu, select Metric Extension. For example:

Specify the Metric Name, Target Type (Database Instance in this case), Display Name, Adapter (SQL in this case), Description and other attributes for the metric, including the default collection frequency. For example:


Target Type          : Database Instance
Metric Name          : e.g., SYSADMIN_AUDIT_TRAIL
Collection Frequency : 1 hour (Hourly)
Adapter              : SQL


  • You might notice the Select Advanced Properties option, which allows you to specify other attributes on the target to specialize the metric. This is new in OEM 13c and, for example, allows you to target multi-tenant databases (or not).
  • As this is a SQL-based metric, you need to specify the SQL statement to execute to gather the data. Make sure the credentials used have access to the view, or create a synonym; the database user must have SELECT access as a minimum. The example SQL is shown below.

Click on “Next”

Add your tested query on this screen.
On the next screen, add all the columns as metric columns with datatype String, and add EVENT_TIMESTAMP as the key column.

Repeat the same steps, clicking "OK" each time, until all the columns have been added. At the end you will have the following list of columns.

Likewise, add all the columns and display values as per the query.
  • Each column in the query needs to be defined as part of the metric. You do not have to define all of them, but it is recommended in order to get full reuse. For each column, define its attributes, including whether it is a data or a key value. Key values are used for SLA tracking. You can also define more metadata to allow OEM to determine how to process it. The columns for our example are shown below.
  • The credentials for the metric need to be defined. When you create a metric, you simply attach it to the metric collection to use it. Again, ensure that the credential is valid for the query. In this example I will use the standard database monitoring credential. For example:

Here you can modify or re-arrange your columns if you have made a mistake. Then click "Next".

Use the default credential. But you need to grant the following access to the DBSNMP account:

-- on all target
SQL> grant select on unified_audit_trail  to dbsnmp;
Grant succeeded.
SQL> grant select any dictionary to dbsnmp;
Grant succeeded.
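
If you are not sure whether DBSNMP already has these grants on a target, you can verify them with a quick query before deploying (a simple sketch):

SQL> select grantee, owner, table_name, privilege
     from   dba_tab_privs
     where  grantee = 'DBSNMP'
     and    table_name = 'UNIFIED_AUDIT_TRAIL';

SQL> select privilege
     from   dba_sys_privs
     where  grantee = 'DBSNMP';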

Then you can move to the next step with "Next". On the next screen you can attach a database and run a test to verify the metric. This does not attach the metric to the target; it just tests it. Use (+Add) to add a database target and then use "Run Test".


While the test query is executing:

Now click "Run Test". If you get results without errors, it is fine to proceed to the next steps. Otherwise you need to fix the issues; maybe you used a semicolon (;) at the end of the query, or you have not defined a key column, etc.

Click on “Next”. Review before saving the metric. At any time you can change the metric before you publish it.

The metric is still in editable mode, so it can be edited as much as necessary. This is indicated on the metric screen. For example:


Here we have completed the creation of the metric extension. The ME is still in editable mode, so you can modify it if you want to alter something; once it is deployed, a deployable version is created. Now we can move on to the next step and deploy the ME to the required database targets.

Step-2: Deploy Metric Extension (ME) to Target(s)

When you create a metric extension, it is an editable version. You need to deploy it to one or more targets to create a deployable version. As soon as the ME is created, you will see the status below for your ME.

Steps to deploy to targets:
1) OEM console --> Enterprise --> Monitoring --> Metric Extensions --> Select the ME

2) Action --> Save as Deployable Draft (the status now changes to Deployable Draft)

3) Action --> Deploy to Targets (here you need to add your required targets using "+Add")

4) Now add one or more database targets; after adding them, click "Submit" to deploy. You can also select groups here.
You will see the screen below once the ME has been deployed to multiple targets. E.g.,

If you search for your metric extension, you will find the number of deployed targets is 2, because we deployed it to two targets as per the screen above.

We have now completed two steps: creation of the ME, and deployment of the ME to the database targets. Next we can move on to publishing the ME.

Step-3: Publish Metric Extension (ME)

When you have a deployable edition of the metric extension, you can publish it to make it effective.
Follow the steps below to publish the ME.

Steps to publish:
1) OEM console --> Enterprise --> Monitoring --> Metric Extensions --> Select the ME

2) Action --> Publish Metric Extension (the status now changes to Published)

Now our ME is published and data will be captured by it. The next and last step is reporting.
You can use an Information Publisher report to generate the report from the OMS. Use the sample query below to report against the newly created metric extension (ME).

SELECT target_name "DatabaseName",
         Key_Value "Time Stamp",
         MAX (CASE WHEN COLUMN_LABEL = 'dbuserame' THEN VALUE ELSE NULL END)
             "DbUserName",
         MAX (CASE WHEN COLUMN_LABEL = 'userhost' THEN VALUE ELSE NULL END)
            "UserHost",
         MAX (CASE WHEN COLUMN_LABEL = 'USER_NAME' THEN VALUE ELSE NULL END)
            "OemUserName",
         MAX (CASE WHEN COLUMN_LABEL = 'USER_IP' THEN VALUE ELSE NULL END)
            "UserIP",
            MAX (CASE WHEN COLUMN_LABEL = 'object_schema' THEN VALUE ELSE NULL END)
            "SchemaName",
            MAX (CASE WHEN COLUMN_LABEL = 'object_name' THEN VALUE ELSE NULL END)
            "ObjectName",
         MAX (CASE WHEN COLUMN_LABEL = 'action_name' THEN VALUE ELSE NULL END)
            "ActionPeformed"
    FROM SYSMAN.mgmt$metric_details
    WHERE metric_name = 'ME$SYSADMIN_AUDIT_TRAIL'
GROUP BY target_name, key_value
order by 1,2 desc,5,6,7,8
/
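
The repository keeps every collection, so for a daily report you will usually also want to restrict on the collection time. A sketch of the same idea, assuming the COLLECTION_TIMESTAMP column of SYSMAN.MGMT$METRIC_DETAILS, limited to the last 24 hours:

SELECT target_name, key_value, column_label, value
FROM   sysman.mgmt$metric_details
WHERE  metric_name = 'ME$SYSADMIN_AUDIT_TRAIL'
AND    collection_timestamp > SYSDATE - 1
ORDER  BY target_name, key_value;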


Put your comments. 

Thank you!

Oct 18, 2018

Load Excel/CSV file data into an Oracle table using a PL/SQL block

How to read a CSV file using a PL/SQL block and load it into an Oracle table.

Let us proceed with very simple steps and see how easily and quickly we can load .csv file data into a table in an Oracle database using a PL/SQL block.

STEP:1 - Create the table into which the CSV data will be inserted. Remember, your .csv file must be organized; based on your requirements, create the table structure before the data load.


  CREATE TABLE sales.book_info
    (
    product_id NUMBER(4),
    product_name VARCHAR2(30),
    price NUMBER(7,2),
    author VARCHAR2(14)
    );  


STEP:2 - Create a directory object in the database pointing to the OS directory where your .csv file resides.

SQL> create directory salesdata as '/u03/work/salesdata';


STEP:3 - Run the PL/SQL block below to read data from the .csv file and insert it into the sales.book_info table.

DECLARE
  F              UTL_FILE.FILE_TYPE;
  V_LINE         VARCHAR2 (1000);
  V_product_id   NUMBER(4);
  V_product_name VARCHAR2(30);
  V_price        NUMBER(7,2);
  V_author       VARCHAR2(14);
BEGIN
  -- Open the CSV file via the directory object in read mode
  -- (directory object names are case-sensitive here, hence uppercase)
  F := UTL_FILE.FOPEN ('SALESDATA', 'employee.csv', 'R');
  IF UTL_FILE.IS_OPEN(F) THEN
    LOOP
      BEGIN
        -- Read one line; GET_LINE raises NO_DATA_FOUND at end of file
        UTL_FILE.GET_LINE(F, V_LINE, 1000);
        IF V_LINE IS NULL THEN
          EXIT;
        END IF;
        -- Split the comma-separated line into its four fields
        V_product_id   := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 1);
        V_product_name := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 2);
        V_price        := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 3);
        V_author       := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 4);
        INSERT INTO sales.book_info VALUES(V_product_id, V_product_name, V_price, V_author);
        COMMIT;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          EXIT;
      END;
    END LOOP;
  END IF;
  UTL_FILE.FCLOSE(F);
END;
/

STEP:4 - Select from the sales.book_info table and check whether the CSV data has been loaded.

SQL> Select * from sales.book_info;

     PRODUCT_ID PRODUCT_NAME            PRICE AUTHOR
     ---------- ------------------ ---------- --------------
              1 Ray of Hope               100 Gouranga
              2 A Needle of thons         200 Kabu Panda
              3 Two States                150 ChetanBhagat


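As an alternative to the UTL_FILE loop, the same directory object can expose the CSV file as an external table, and the data can then be loaded with a single INSERT ... SELECT. This is only a sketch under the same assumptions as above (directory SALESDATA, file employee.csv, four comma-separated fields in the same order):

CREATE TABLE sales.book_info_ext
  (
  product_id   NUMBER(4),
  product_name VARCHAR2(30),
  price        NUMBER(7,2),
  author       VARCHAR2(14)
  )
  ORGANIZATION EXTERNAL
  (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY salesdata
  ACCESS PARAMETERS
    (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    )
  LOCATION ('employee.csv')
  );

-- Load everything from the external table into the real table in one statement
INSERT INTO sales.book_info
SELECT product_id, product_name, price, author FROM sales.book_info_ext;

COMMIT;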

Oct 17, 2018

Collect hourly segment growth report in Oracle Database

You can call the .sql file described below via a shell script, then schedule that .sh file in crontab to collect the metrics.


1) Create a .sh file to call the .sql file

$ cat /u01/app/oracle/admin/testdb/scripts/schema_segment_growth_report.sh

#!/bin/bash
export DT=`date +%d%m%Y_%H%M%S`
export ORACLE_SID=testdb
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.2/dbhome_1
export SCRIPT_HOME=/u01/app/oracle/admin/testdb/scripts
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin
cd /u03/Admin_Reports/Segment_Stats
$ORACLE_HOME/bin/sqlplus -s '/ as sysdba' << EOF
whenever sqlerror exit sql.sqlcode;
set echo off
@$SCRIPT_HOME/segment_growth_report.sql $ORACLE_SID $DT
exit;
EOF


2) Create the .sql file to collect the report

$ cat segment_growth_report.sql

set linesize 122
-- &1 = ORACLE_SID and &2 = timestamp, passed in by the calling shell script
spool segment_growth_report_&1._&2..log
set feedback off
col owner for a30
col segment_name for a30
col segment_type for a22
select * from (
select owner, segment_name, segment_type,round(bytes/1024/1024,1) Size_Mb from dba_segments
where owner not in('SYS','SYSTEM') order by
bytes/1024/1024  DESC ) where rownum <= 10;

spool off

exit


3) Schedule in Cron

00 * * * * /u01/app/oracle/admin/testdb/scripts/schema_segment_growth_report.sh   1>/dev/null 2>&1
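
If you would rather keep this history inside the database than in spool files, the same query can feed a small history table instead. A sketch of the idea; the table name segment_growth_history is just an example:

create table segment_growth_history
  (
  snap_time    date,
  owner        varchar2(128),
  segment_name varchar2(128),
  segment_type varchar2(30),
  size_mb      number
  );

-- Run (or schedule) this insert at the same interval as the cron job above
insert into segment_growth_history
select sysdate, owner, segment_name, segment_type, round(bytes/1024/1024,1)
from   dba_segments
where  owner not in ('SYS','SYSTEM');

commit;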



Oct 11, 2018

SQL Plan Management (SPM) & cursor_sharing=force

SQL plan management is a preventative mechanism that enables the optimizer to automatically manage execution plans, ensuring that the database uses only known or verified plans.


Purpose of SQL Plan Management:
The primary goal of SQL plan management is to prevent performance regressions caused by plan changes. A secondary goal is to gracefully adapt to changes such as new optimizer statistics or indexes by verifying and accepting only plan changes that improve performance.

Benefits of SQL Plan Management:

SQL plan management can improve or preserve SQL performance in database upgrades and system and data changes.

Specifically, benefits include:

  • A database upgrade that installs a new optimizer version usually results in plan changes for a small percentage of SQL statements.
  • Most plan changes result in either improvement or no performance change. However, some plan changes may cause performance regressions. SQL plan baselines significantly minimize potential regressions resulting from an upgrade.
  • When you upgrade, the database only uses plans from the plan baseline. The database puts new plans that are not in the current baseline into a holding area, and later evaluates them to determine whether they use fewer resources than the current plan in the baseline. If the plans perform better, then the database promotes them into the baseline; otherwise, the database does not promote them.
  • Ongoing system and data changes can affect plans for some SQL statements, potentially causing performance regressions.
  • SQL plan baselines help minimize performance regressions and stabilize SQL performance.
  • Deployment of new application modules introduces new SQL statements into the database.
  • The application software may use appropriate SQL execution plans developed in a standard test configuration for the new statements. If the system configuration is significantly different from the test configuration, then the database can evolve SQL plan baselines over time to produce better performance.
Differences Between SQL Plan Baselines and SQL Profiles:

Both SQL profiles and SQL plan baselines help improve the performance of SQL statements by ensuring that the optimizer uses only optimal plans.

Both profiles and baselines are internally implemented using hints. However, these mechanisms have significant differences, including the following:

In general, SQL plan baselines are proactive, whereas SQL profiles are reactive.

How SPM works with cursor_sharing=force?

Here is a simplified example, but it is sufficient to show the problem.

SQL> alter system set cursor_sharing=force;
SQL> show parameter optimizer_capture_sql_plan_baselines

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------
optimizer_capture_sql_plan_baselines boolean     FALSE


SQL> create table my_objects as select * from dba_objects;
SQL> set autotrace trace exp
SQL> select object_name from my_objects where object_id=100;

Execution Plan
----------------------------------------------------------
Plan hash value: 880823944

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |    14 |  1106 |   336   (1)| 00:00:05 |
|*  1 |  TABLE ACCESS FULL| MY_OBJECTS |    14 |  1106 |   336   (1)| 00:00:05 |
--------------------------------------------------------------------------------

-- full table scan, as expected
SQL> set autotrace off

-- I will load this plan into baseline 
SQL> select sql_id, sql_text from v$sql 
where sql_text like 'select object_name from my_objects where object_id%';

SQL_ID                  SQL_TEXT
--------------------------------------------------------------------------------
fu2bja6dgn1yg           select object_name from my_objects where object_id=:"SYS_B_0"

SQL> var num_plans number;
SQL> exec :num_plans := dbms_spm.load_plans_from_cursor_cache('fu2bja6dgn1yg');

-- check the plan history, the full table scan plan is now an accepted plan baseline
SQL> select sql_handle, plan_name, enabled, accepted
from dba_sql_plan_baselines
where sql_text like 'select object_name from my_objects where object_id%'; 

SQL_HANDLE                     PLAN_NAME                      ENA ACC
------------------------------ ------------------------------ --- ---
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889h89405218 YES YES


-- and then I created an index on the column
SQL> create index my_objects_idx on my_objects(object_id);

-- Since there is an accepted plan in the baseline, I expect the optimizer to use the
-- full table scan. But autotrace shows a new index scan plan.
SQL> set autotrace trace exp
SQL> select object_name from my_objects where object_id=100;


Execution Plan
----------------------------------------------------------
Plan hash value: 3644674915

----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU) | Time    |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |     1 |    79 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MY_OBJECTS     |     1 |    79 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | MY_OBJECTS_IDX |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------


-- but the index scan plan is not accepted yet!
SQL> select sql_handle, plan_name, enabled, accepted
from dba_sql_plan_baselines
where sql_text like 'select object_name from my_objects where object_id%';  

SQL_HANDLE                     PLAN_NAME                      ENA ACC
------------------------------ ------------------------------ --- ---
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889h89405218 YES YES
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889hec7825a4 YES NO

So why does the optimizer choose the plan that is not accepted?

Answers:

In your test case, your statement did use the baseline plan, i.e. the full table scan plan.
The problem is that the plan shown to you by "autotrace trace exp" is not the plan actually
used by your statement.

Let me show what I mean by executing your test case, but displaying the plan from V$SQL_PLAN
using the DBMS_XPLAN package.
SQL> alter system set cursor_sharing=force;
System altered.

SQL>
SQL> show parameter optimizer_capture_sql_plan_baselines
NAME                                 TYPE                             VALUE                                                                                                    
------------------------------------ -------------------------------- -----------                                                                          
optimizer_capture_sql_plan_baselines boolean                          FALSE                                                                                                    
SQL>
SQL> --Create new table
SQL>
SQL> create table my_objects as select * from dba_objects;

Table created.

SQL>
SQL> -- Run a simple query against the new table and check the plan
SQL>
SQL> select object_name from my_objects where object_id=100;

OBJECT_NAME
----------------------------------------------------
SEQ$                                                                                                                                                                           

SQL>
SQL> select * from table(dbms_xplan.display_cursor(format=>'Typical'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------
SQL_ID  fu2bja6dgn1yg, child number 0
-------------------------------------
select object_name from my_objects where object_id=:"SYS_B_0"

Plan hash value: 880823944
--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |       |       |   397 (100)|          |
|*  1 |  TABLE ACCESS FULL| MY_OBJECTS |     1 |    40 |   397   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("OBJECT_ID"=:SYS_B_0)                                                                                                                                            
18 rows selected.

SQL>
SQL> -- Get the SQL_ID
SQL>
SQL> select sql_id, sql_text from v$sql where sql_text like 'select object_name 
from my_objects where object_id%';
SQL_ID           SQL_TEXT
-------------   -------------------------------------------------------------------
fu2bja6dgn1yg   select object_name from my_objects where object_id=:"SYS_B_0"                                                                                                  
SQL>
SQL> -- Create a baseline for the statement
SQL>
SQL> var num_plans number;
SQL>
SQL> exec :num_plans := dbms_spm.load_plans_from_cursor_cache('fu2bja6dgn1yg');

PL/SQL procedure successfully completed.

SQL>
SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;

SQL_HANDLE                     PLAN_NAME                            ENA ACC
------------------------------ ------------------------------------ --- ---
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889h89405218       YES YES

SQL>
SQL> -- Created an index on the column
SQL>
SQL> create index my_objects_idx on my_objects(object_id);

Index created.

SQL>
SQL> -- Rerun the query and check the plan
SQL>
SQL> select object_name from my_objects where object_id=100;

OBJECT_NAME
------------------------------------------------------------
SEQ$                                                                                                                                                                           
SQL> select * from table(dbms_xplan.display_cursor(format=>'Typical'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------
SQL_ID  fu2bja6dgn1yg, child number 0
-------------------------------------
select object_name from my_objects where object_id=:"SYS_B_0"

Plan hash value: 880823944

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |       |       |   135 (100)|          |
|*  1 |  TABLE ACCESS FULL| MY_OBJECTS |   399 | 31521 |   135   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=:SYS_B_0)
Note
-----
   - SQL plan baseline SQL_PLAN_338n2rqu6889h89405218 used for this statement


22 rows selected.

SQL>
SQL> select sql_handle, plan_name, enabled, accepted
  2  from dba_sql_plan_baselines;

SQL_HANDLE                     PLAN_NAME                            ENA ACC
------------------------------ ------------------------------------ --- ---
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889h89405218       YES YES
SQL_31a282bdb4642130           SQL_PLAN_338n2rqu6889heb45da5a       YES NO

So why did “autotrace trace exp” show you an index plan? The “autotrace trace exp” command executes an explain plan command for your
SQL statement immediately after the statement has executed. 
It doesn't actually show you the plan that was used during the previous execution.

When you run an explain plan command for the same statement text 
(same everything including the literals) it will trigger a hard parse, 
because the cursors generated by an explain plan command are not shareable.  
Since the cursor isn’t shared there is no point in doing the literal replacement that 
would allow the cursor to be shared. 
Therefore the explain plan command does not replace the literals. 

Since literal replace did not occur, there is no SQL plan baseline for the statement. 
Remember, your baseline is for the SQL statement 
SELECT object_name FROM my_objects WHERE object_id=:"SYS_B_0". 
Therefore the explain plan command shows you the index access plan. 

You can demonstrate this behavior in your environment by simply running an
explain plan command for your statement.

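As a follow-up to the example: if you actually want the new index plan to be used, the unaccepted plan has to be evolved (verified and accepted) into the baseline. A sketch using the SQL handle from the example above; on 12c the task-based SPM evolve advisor can be used instead:

SET SERVEROUTPUT ON

DECLARE
  l_report CLOB;
BEGIN
  -- Verify the unaccepted plan and accept it if it performs at least as well
  l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
                sql_handle => 'SQL_31a282bdb4642130',
                verify     => 'YES',
                commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/
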
Restore the database when datafiles are deleted but the database has no backup and is not in archivelog mode

Scenario:

The DB is up and running.
Non-system datafiles were deleted using the rm -rf command.
The DB is still up and running; we have not stopped the database.


-------------------------PROCEDURE-----------------------------------------
Find the PID of DBWRITER for the right database.

$ ps -ef | grep dbw0_TEMPDB

oracle   12272     1  0 10:55 ?        00:00:00 ora_dbw0_TEMPDB ===>
oracle   12941 11501  0 12:36 pts/0    00:00:00 grep dbw0_TEMPDB



List the deleted file handles for that DBWRITER process:

ls -l /proc/12272/fd | grep deleted


l-wx------ 1 oracle dba 64 Jun  8 12:39 10 -> /u01/app/oracle/diag/rdbms/TEMPDB/TEMPDB/trace/TEMPDB_ora_8518.trc (deleted)
l-wx------ 1 oracle dba 64 Jun  8 12:39 11 -> /u01/app/oracle/diag/rdbms/TEMPDB/TEMPDB/trace/TEMPDB_ora_8518.trm (deleted)
lrwx------ 1 oracle dba 64 Jun  8 12:39 31 -> /u03/oradata/TEMPDB/TEST_DATA01.DBF (deleted)
lrwx------ 1 oracle dba 64 Jun  8 12:39 32 -> /u03/oradata/TEMPDB/TEST_LOBS01.DBF (deleted)
lrwx------ 1 oracle dba 64 Jun  8 12:39 33 -> /u03/oradata/TEMPDB/TEST_INDEX01.DBF (deleted)
lrwx------ 1 oracle dba 64 Jun  8 12:39 8 -> /u01/app/oracle/product/11.2.0/dbhome_1/dbs/lkinstTEMPDB (deleted)


Create a symbolic link for each deleted datafile, pointing at the corresponding file descriptor of the DBWR process found above (PID 12272 in this example), using the original file names.

ln -s /proc/12272/fd/31 /u03/oradata/TEMPDB/TEST_DATA01.DBF
ln -s /proc/12272/fd/32 /u03/oradata/TEMPDB/TEST_LOBS01.DBF
ln -s /proc/12272/fd/33 /u03/oradata/TEMPDB/TEST_INDEX01.DBF


The ORA errors will go away!
========================================================================

Make the tablespaces containing those datafiles read-only:

SQL> select distinct tablespace_name from dba_data_files where file_name ='/u03/oradata/TEMPDB/TEST_DATA01.DBF';

TABLESPACE_NAME
------------------------------
TEST_DATA


SQL>  select distinct tablespace_name from dba_data_files where file_name ='/u03/oradata/TEMPDB/TEST_LOBS01.DBF';

TABLESPACE_NAME
------------------------------
TEST_LOBS

SQL> select distinct tablespace_name from dba_data_files where file_name ='/u03/oradata/TEMPDB/TEST_INDEX01.DBF';

TABLESPACE_NAME
------------------------------
TEST_INDEX

alter tablespace TEST_DATA read only;
alter tablespace TEST_LOBS read only;
alter tablespace TEST_INDEX read only;


Copy the files from the symlinks to new names:

cp /u03/oradata/TEMPDB/TEST_DATA01.DBF /u03/oradata/TEMPDB/TEST_DATA01_NEW.DBF.dbf
cp /u03/oradata/TEMPDB/TEST_LOBS01.DBF /u02/oradata/TEMPDB/TEST_LOBS01_NEW.DBF.dbf
cp /u03/oradata/TEMPDB/TEST_INDEX01.DBF /u02/oradata/TEMPDB/TEST_INDEX01_NEW.DBF.dbf


(IMPORTANT: wait for the copies to complete.)

Shut down the database:

shutdown abort;

Remove the now-invalid symlinks, and rename the datafiles to their original names. Be careful not to remove the wrong files!

rm -i /u03/oradata/TEMPDB/TEST_DATA01.DBF
rm -i /u03/oradata/TEMPDB/TEST_LOBS01.DBF
rm -i /u03/oradata/TEMPDB/TEST_INDEX01.DBF

mv /u03/oradata/TEMPDB/TEST_DATA01_NEW.DBF.dbf /u03/oradata/TEMPDB/TEST_DATA01.DBF
mv /u02/oradata/TEMPDB/TEST_LOBS01_NEW.DBF.dbf /u03/oradata/TEMPDB/TEST_LOBS01.DBF
mv /u02/oradata/TEMPDB/TEST_INDEX01_NEW.DBF.dbf /u03/oradata/TEMPDB/TEST_INDEX01.DBF

Start up your database normally and make the tablespaces read/write:

startup

alter tablespace TEST_DATA read write;
alter tablespace TEST_LOBS read write;
alter tablespace TEST_INDEX read write;
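
Once the database is back up, it is worth confirming that the datafiles are online and the tablespaces are read/write again. A quick check:

SQL> select tablespace_name, status from dba_tablespaces
     where tablespace_name in ('TEST_DATA','TEST_LOBS','TEST_INDEX');

SQL> select name, status from v$datafile
     where name like '/u03/oradata/TEMPDB/TEST%';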

