Dec 29, 2017

Find the top CPU or memory consuming PIDs on a Linux system

How do you find the PIDs of the top CPU% or memory% consumers on a Linux system?

Ans:

[oracle@example01]$ ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem |head
   PID   PPID CMD                         %MEM %CPU
     1      0 /sbin/init                   0.0  0.7
     2      0 [kthreadd]                   0.0  0.0
     3      2 [migration/0]                0.0  0.0
     4      2 [ksoftirqd/0]                0.0  0.0
     5      2 [migration/0]                0.0  0.0
     6      2 [watchdog/0]                 0.0  0.0
     7      2 [migration/1]                0.0  0.0
     8      2 [migration/1]                0.0  0.0
     9      2 [ksoftirqd/1]                0.0  0.0


[oracle@example01]$ ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu |head
   PID   PPID CMD                         %MEM %CPU
 45297  44930 oraclesdmprd3 (DESCRIPTION=  0.0  260
  5668      1 /u01/app/12.1.0.2/grid/bin/  0.0 22.6
121288      1 /u01/app/12.1.0.2/grid/bin/  0.0 19.5
 42679      1 /u01/app/12.1.0.2/grid/jdk/  0.1 11.5
 33439 104190 /u01/app/oracle/product/12c  0.1  9.4
107668      1 /u01/app/12.1.0.2/grid/bin/  0.8  8.9
110681      1 oraclestrlmval3 (LOCAL=NO)   0.0  6.4
105402      1 /u01/app/12.1.0.2/grid/bin/  0.0  2.9
 67776      1 oraclecimprd3 (LOCAL=NO)     0.1  2.7
[oracle@example ~]$ 
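If you run these checks often, a small shell wrapper saves typing. A minimal sketch ("topproc" is a hypothetical helper name, not a standard command):

topproc() {                     # usage: topproc %cpu 10    or    topproc %mem 5
  # sort descending by the given column and keep the header plus N rows (default 10)
  ps -eo pid,ppid,cmd,%mem,%cpu --sort=-"$1" | head -n "$(( ${2:-10} + 1 ))"
}

Dropping the function into ~/.bashrc makes it available in every session.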

Dec 28, 2017

Find process details from a PID when many databases are hosted on a single host

I got the below question from an anonymous user:

If I have the PID number for a process (on a UNIX machine), how can I find out the name of its associated process? What do I have to do? On my host, more than 30 databases are hosted.
Ans:

No need to worry. You just need to check what is running under that PID. Here I created an issue with a Java program, and you can see how I went about fixing it.



Now find PID details:

[oracle@example ~]$ ps -fp 59865
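Besides ps -fp, a couple of other commands can tell you which database or Oracle home the process belongs to. A minimal sketch (it assumes you own the process, and 59865 is the PID from the example):

$ pwdx 59865                                   # current working directory of the process
$ tr '\0' '\n' < /proc/59865/environ | grep -E 'ORACLE_SID|ORACLE_HOME'   # environment it was started with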





So here I found that my OEM agent was consuming most of the CPU. I think a restart of the agent may fix the issue. Let us do that.

[oracle@example ~]$ cd /u01/app/oracle/product/OEM/agent13c/agent_13.2.0.0.0/bin
[oracle@example bin]$ ./emctl status agent


Here I found that the Java program utilizing the CPU was, in fact, the OEM agent.

So I stopped and started the agent, and the issue was fixed smoothly.

e.g., (for me this was the issue; for you it may be different)

bin]$ ./emctl stop agent
bin]$ ./emctl status agent
bin]$ ./emctl start agent

After restarting the agent, the CPU usage shown in top came back to normal.

Using optimizer_index_cost_adj

The optimizer_index_cost_adj parameter was created to allow us to change the relative cost of full-scan versus index operations.  This is one of the most influential optimizer parameters, and the default setting of 100 is not appropriate for many Oracle systems.  For some OLTP systems, re-setting this parameter to a smaller value (between 10 and 30) may result in huge performance gains!

If you are experiencing slow performance because the CBO first_rows optimizer mode is favoring too many full-table scans, you can reduce the value of the optimizer_index_cost_adj parameter to immediately tune all of the SQL in your database to favor index scans over full-table scans. This is sometimes a "silver bullet" that can improve the performance of an entire database, in cases where the database is OLTP and you have verified that the full-table scan costing is too low.

OPTIMIZER_INDEX_CACHING = 50
OPTIMIZER_INDEX_COST_ADJ = 25
_optimizer_index_compute_stats=TRUE ( default FALSE)

So, before making the change, collect the following statistics.

Here are all the columns for the two events from the v$system_event view:

SQL> select * from v$system_event
  2  where event in ('db file sequential read', 'db file scattered read');

EVENT                      TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
-------------------------- ----------- -------------- ----------- ------------
db file sequential read       92814138              0  3787666073   40,8091499
db file scattered read         4349122              0     2348427   ,539977264

Then I inspected database 2. Here I found the other extreme:

SQL> select round((select average_wait
 from v$system_event
where event='db file sequential read')
/
(select average_wait
 from v$system_event
where event='db file scattered read')
 * 100) as starting_point
from dual;

STARTING_POINT
--------------
             0

Then I looked into the v$system_event view (I thought perhaps timed_statistics was disabled), but:

SQL> select * from v$system_event
  2  where event in ('db file sequential read', 'db file scattered read');

EVENT                        TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
---------------------------- ----------- -------------- ----------- ------------
db file sequential read        119504401              0     7598589   ,063584177
db file scattered read         368384847              0  1,1029E+10   29,9391977

So the ratio is correct (0.063584177 / 29.9391977 = .0021).

Note: If "db file sequential read" events are more, then more index scans are happening. Sometimes FTS is better than index scan in some of the scenarios. In the above ratio, 0 means index scan and FTS are balanced.

Note: test in pre-prod before moving the change to prod.

Here are the parameter values before and after the change:

-- before ( heavy performance issues)
SQL> show parameter optimizer_index_cost_adj;
optimizer_index_cost_adj             integer                 100
SQL> show parameter optimizer_index_caching;
optimizer_index_caching              integer                  0
SQL> 

-- after ( performance issue resolved)
SQL> show parameter optimizer_index_cost_adj;
optimizer_index_cost_adj             integer                  10
SQL> 
SQL> show parameter optimizer_index_caching;
optimizer_index_caching              integer                  50
SQL> 
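For reference, the change itself can be made online. A sketch (not the exact commands used here; adjust the SCOPE and SID clauses for your spfile/RAC setup):

SQL> alter system set optimizer_index_cost_adj=10 scope=both;
SQL> alter system set optimizer_index_caching=50 scope=both;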

But it may not always help; you should base the change on a proper calculation.

-- Calculation

SQL> select average_wait from gv$system_event where event='db file sequential read';

AVERAGE_WAIT
------------
         .04
         .04
         .03

SQL> select average_wait from gv$system_event where event='db file scattered read';

AVERAGE_WAIT
------------
         .07
         .08
         .08

Here is the formula for OPTIMIZER_INDEX_COST_ADJ (using avg() across instances, since gv$system_event returns one row per instance):

select round((select avg(average_wait)
              from gv$system_event
              where event = 'db file sequential read')
             /
             (select avg(average_wait)
              from gv$system_event
              where event = 'db file scattered read') * 100)
from dual;
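Plugging in the average waits shown above (roughly .04 for sequential reads and .08 for scattered reads) gives a starting point of about 50; the exact value will of course differ on your system:

SQL> select round((.04 / .08) * 100) as cost_adj_start from dual;

COST_ADJ_START
--------------
            50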


SQL> select 
  c.t as index_block_cache_count,
  i.t as index_block_total_count,
  (c.t/i.t)*100 as start_optimizer_index_caching
from 
  (select sum(blocks) as t from dba_segments where segment_type = 'INDEX') i,
  (select count(*) as t from v$bh v, dba_objects o
  where v.objd = o.object_id and o.object_type = 'INDEX') c
;

INDEX_BLOCK_CACHE_COUNT INDEX_BLOCK_TOTAL_COUNT START_OPTIMIZER_INDEX_CACHING
----------------------- ----------------------- -----------------------------
                   7860                 3405720                    .230788203


SQL> 

Here I adjusted one of the production databases, setting optimizer_index_cost_adj to 1 and optimizer_index_caching to 50, and it helped improve performance a lot.

But remember, you must also check the parameters below:

cursor_sharing
open_cursors   
session_cached_cursors 

All the above parameters should be set based on the requirements and the application's perspective.

In one of my OLAP projects, I used the following settings to best effect:

cursor_sharing=EXACT (default)
open_cursors=500 (default=50) 
session_cached_cursors=2000 ( default=50) 

Now let us see how to find the actual usage of the "open_cursors" and "session_cached_cursors" settings:

-- View the existing value set

SQL> show parameter cursors;

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
open_cursors                         integer                          1000
session_cached_cursors               integer                          50
SQL> 



-- Find the current usage and value set for "open_cursors" and "session_cached_cursors":

select 'session_cached_cursors'  parameter,lpad(value, 10)  value, decode(value, 0, '  n/a', to_char(100 * used / value, '999990') || '%')  usage
from   (select max(s.value)  used
from   v$statname  n,v$sesstat  s
where  n.name = 'session cursor cache count'
and    s.statistic# = n.statistic#),
(select value from   v$parameter where  name = 'session_cached_cursors')
union all
select 'open_cursors',lpad(value, 10),to_char(100 * used / value,  '99990') || '%'
from   (select max(sum(s.value))  used
from   v$statname  n,
v$sesstat  s
where  n.name in ('opened cursors current', 'session cursor cache count')
and    s.statistic# = n.statistic#
group by s.sid),
(select value from   v$parameter where  name = 'open_cursors');


PARAMETER              VALUE                         USAGE
---------------------- ----------------------------- ----------
session_cached_cursors         50                    876444%
open_cursors                 1000                    43822%

The following query shows the percentage distribution of total parse calls between hard and soft parses, and also reports the percentage of parse calls satisfied by the session cursor cache:

SQL> 
select to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits,
to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses,
to_char(100 * hard / calls, '999990.00') || '%' hard_parses
from   (select value calls from v$sysstat where name = 'parse count (total)'),
(select value hard  from v$sysstat where name = 'parse count (hard)'),
(select value sess  from v$sysstat where name = 'session cursor cache hits');


CURSOR_CACHE_HITS    SOFT_PARSES          HARD_PARSES
-------------------- -------------------- --------------------
          143.52%        -48.90%                5.38%

SQL> 

-- current session cursor cache count

select s.value used, count(1)
from  v$statname  n,
v$sesstat  s
where n.name = 'session cursor cache count'
and   s.statistic# = n.statistic#
group by s.value
order by s.value;

     USED   COUNT(1)
---------- ----------
         0         65
         1         10
         2         13
         3          3
         5          1
         7          2
         8          1
         9          4
        10          1
        12          1
        15          2
        17          3
        20          2
        21          1
        25          1
        29          1
        30          1
        69          1
        70          1
        90          1
       109          1
       110          1
       342          1
       362          1
       581          1
      7212          1
    437871          1

27 rows selected.

SQL> 



Note: increasing the "session_cached_cursors" value will increase shared pool usage.
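If the cache is clearly undersized (as the usage figures above suggest), the values can be raised. A sketch (the numbers are illustrative, not recommendations):

-- open_cursors is dynamic:
SQL> alter system set open_cursors=2000 scope=both;
-- session_cached_cursors is static at the system level (needs an instance restart)...
SQL> alter system set session_cached_cursors=200 scope=spfile;
-- ...but can also be changed per session:
SQL> alter session set session_cached_cursors=200;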




Dec 23, 2017

Manage blackouts using Oracle Enterprise Manager (OEM) Cloud Control 13c EMCLI

Blackouts allow you to collect accurate monitoring data. For example, you can stop data collections during periods where a managed target is undergoing routine maintenance, such as a database backup or hardware upgrade. If you continue monitoring during these periods, the collected data will show trends and other monitoring information that are not the result of normal day-to-day operations. To get a more accurate, long-term picture of a target's performance, you can use blackouts to exclude these special-case situations from data analysis.

Here I will show you how to use EMCLI commands to manage blackouts:
a) Create a blackout
b) List blackout targets
c) Stop a blackout
d) Delete a blackout
e) Get blackout reasons

Enterprise Manager administrators who have at least Blackout Target privileges on all Selected Targets in a blackout will be able to create, edit, stop, or delete the blackout.

a) Create a blackout:

./emcli create_blackout -name="CPU Patch 12062017" -reason="to be done" -add_targets="omstst:oracle_database" -schedule="duration::10"

$ ./emcli create_blackout -name="CPU Patch 12062017" -reason="to be done" -add_targets="omstst:oracle_database" -schedule="duration::10"

Error: Only Super Administrators are allowed to add a new reason (the get_blackout_reasons command will list the reasons you can use)
bin]$

Fix: connect as the SYSMAN user (or a user with super administrator privileges) and then try again.

Think how quickly you could black out an entire environment for a maintenance window with this scripted (see the sketch below).
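A minimal sketch of such a script (assumptions: a hypothetical targets.txt file with one "name:type" entry per line, and an EMCLI session already logged in as a super administrator):

$ TARGETS=$(paste -s -d';' targets.txt)
$ ./emcli create_blackout -name="Full Env Maintenance" \
          -reason="Other: Customer Request" \
          -add_targets="$TARGETS" \
          -schedule="duration::10"

The -add_targets option accepts multiple "name:type" pairs separated by semicolons, so one command can cover the whole list.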

b) List blackout targets:

$./emcli get_blackout_targets -name=Blackout_Name
e.g.,

$ ./emcli get_blackout_targets -name="CPU Patch 12062017"
Target Name       Target Type      Status       Status ID
testbox_testbox1  oracle_database  In Blackout  1
bin]$

c) Stop a blackout:

Simply state the name of the blackout, and EMCLI will ask the OMS to stop it:

./emcli stop_blackout -name="Linux Patch 12062017"

Think how quickly you could end a blackout across the environment post maintenance.


d) Delete a blackout:

Syntax: ./emcli delete_blackout -name="blackout_name"

e.g.,
$./emcli delete_blackout -name="Linux Patch 12062017"


e) Get blackout reasons, i.e., the reasons Oracle pre-defines for you:

./emcli get_blackout_reasons

e.g.,

 bin]$ ./emcli get_blackout_reasons
AS: Application Server Bounce
AS: Application Server Configuration Change
AS: Application Server Emergency
AS: Application Server Patch/Maintenance
AS: Application Server Upgrade
Apps: Application Configuration Change
Apps: Application Decommissioning
Apps: Application Emergency
Apps: Application Password Change
Apps: Application Patch/Maintenance
Apps: Application Standardization
Apps: Application Upgrade
Apps: New Application Monitoring
DB: Associated Exadata Express Service Decommissioning
DB: Database Cold Backup
DB: Database Configuration Change
DB: Database Decommissioning
DB: Database Emergency
DB: Database Password Change
DB: Database Patch/Maintenance
DB: Database Refresh
DB: Database Upgrade
DB: Disaster Recovery Node
DB: New Database Monitoring
Enterprise Manager: Agent Configuration Change
Enterprise Manager: Agent Emergency
Enterprise Manager: Agent Patch/Maintenance
Enterprise Manager: Agent Upgrade
Enterprise Manager: OMS Server Configuration Change
Enterprise Manager: OMS Server Emergency
Enterprise Manager: OMS Server Maintenance
Enterprise Manager: OMS Server Patch/Maintenance
Enterprise Manager: OMS Server Upgrade
Enterprise Manager: Plug-in Upgrade
Hardware Configuration Change
Hardware Emergency
Hardware Failure
Hardware Patch/Maintenance
Hardware Upgrade
Network Configuration Change
Network Emergency
Network Hardware Failure
Network Outage
Network Patch/Maintenance
Network Upgrade
OS Configuration Change
OS Emergency
OS Password Change
OS Patch/Maintenance
OS Upgrade
Other: Customer Request
SLB: Load Balancer Configuration Change
SLB: Load Balancer Emergency
SLB: Load Balancer Patch/Maintenance
SLB: Load Balancer Upgrade
Security Patch
Server Bounce
Server Decommissioning
Server Physical Move
Storage Filer Upgrade
Testing

$

Note:
You can use any of these as a blackout reason by passing it with the -reason option.

Dec 22, 2017

Manage groups through EMCLI in OEM Cloud Control 13c

Manage groups using EMCLI in Oracle Enterprise Manager Cloud Control 13c:

Here we will do the following:
a) List available groups
b) Create new group
c) Delete group
d) Create group with adding Targets
e) List targets in a group
f) Modify group

a) List available groups:

./emcli get_groups
OR
emcli>get_groups()
Target Name  Target Type
ADMINGRP0      group      
All            group      
MAC-Grp        group      
Prod-Grp       group      
Test-Grp       group      

emcli>

b) Create new group:

./emcli create_group -name="tempgrp"
OR
emcli>create_group(name="tempgrp")
Group "tempgrp:group" created successfully

emcli>

c) Delete group:

./emcli delete_group -name="name"
OR
emcli>delete_group(name='tempgrp')
Group "tempgrp:group" deleted successfully

emcli>

d) Create a group and add targets:

./emcli create_group -name="tstgrp" -add_targets="database2:oracle_database; database3:oracle_database"

This example creates a database-only group named tst_group. This group consists of two Oracle databases, emp_rec and payroll:

./emcli create_group -name=tst_group -add_targets="emp_rec:oracle_database" -add_targets="payroll:oracle_database"

e.g.,

emcli>create_group(name="tst_group",add_targets="testdb.example.com:oracle_database")
Group "tst_group:group" created successfully

emcli>

emcli>get_groups()
Target Name  Target Type
..     
Prod-Grp     group        
tst_group    group      

emcli>

e) List targets in a group:

./emcli get_group_members -name=my_group -depth=-1
OR
emcli>get_group_members(name="tst_group")
Target Name         Target Type    
testdb.example.com  oracle_database

emcli>

f) Modify group:

Syntax:
modify_group
(name="name"
[,type="<group>"]
[,add_targets="name1:type1;name2:type2;..."]...
[,delete_targets="name1:type1;name2:type2;..."]...
[,privilege_propagation="true|false"]
[,drop_existing_grants="yes|no"])
[ ] indicates that the parameter is optional

e.g.,

emcli>modify_group(name="tst_group",delete_targets="testdb.example.com:oracle_database")
Group "tst_group:group" modified successfully

emcli>

emcli>modify_group(name="tst_group",add_targets="testdb.example.com:oracle_database")
Group "tst_group:group" modified successfully

emcli>get_group_members(name="tst_group")                                            
Target Name         Target Type    
testdb.example.com  oracle_database

emcli>

Dec 14, 2017

10 features that Oracle DBAs may not be using

1) ORADEBUG SUSPEND | RESUME | WAKEUP

SUSPEND command:
This command suspends the current process. First select a process using SETORAPID or SETOSPID.
Do not use SETMYPID: the current ORADEBUG process would hang and could not be resumed, even from another ORADEBUG session.

e.g.,
ORADEBUG SUSPEND

RESUME command:
Resumes the current process.
While the process is suspended, ORADEBUG can be used to take dumps of the current process state, e.g. global area, heap, subheaps, etc.

e.g.,
ORADEBUG RESUME

WAKEUP command
To wake up a process use:

ORADEBUG WAKEUP pid

For example to wake up SMON, first obtain the PID using:
SELECT pid FROM v$process
WHERE addr =
(
    SELECT paddr FROM v$bgprocess
    WHERE name = 'SMON'
);

If the PID is 6 then send a wakeup call using:
ORADEBUG WAKEUP 6

e.g.,
Suspend a running process
SQL> oradebug setorapid 12
Unix process pid: 10131, image: oracle@localhost.localdomain (TNS V1-V3)
SQL> oradebug suspend
Statement processed.

Resume a process
SQL> oradebug resume
Statement processed.

If you have tracing enabled for a session that was suspended and resumed, you will see messages like the following in the trace file.
....
FETCH #5:c=54992,e=55962,p=0,cr=2334,cu=0,mis=0,r=1,dep=1,og=1,tim=1249098751175228
EXEC #5:c=0,e=72,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1249098751175408
Received ORADEBUG command ‘suspend’ from process Unix process pid: 22990, image:
*** 2017-12-14 21:12:34.268
Received ORADEBUG command ‘resume’ from process Unix process pid: 22990, image:
FETCH #5:c=58991,e=32316703,p=0,cr=2334,cu=0,mis=0,r=1,dep=1,og=1,tim=1249098783492125
EXEC #5:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1249098783492321
....

2) SQL full text: which view should I use? v$sql | v$sqlarea | v$sqlstats

The column definitions in V$SQLSTATS are identical to those in the V$SQL and V$SQLAREA views. However, V$SQLSTATS differs from V$SQL and V$SQLAREA in that it is faster, more scalable, and has greater data retention (the statistics may still appear in the view even after the cursor has been aged out of the shared pool).
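For example, to pull the full text and basic statistics of a statement by its sql_id (a sketch; the sql_id is illustrative):

SQL> select sql_id, executions, elapsed_time, sql_fulltext
  2  from   v$sqlstats
  3  where  sql_id = 'abcd1234wxyz9';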

3) The DESC command can be used in the middle of writing a SQL query to view a table's structure, instead of cancelling the whole query and re-writing it:

e.g.,
SQL> select TABLE_NAME,INDEX_NAME,LAST_ANALYZED
  2  from dba_indexes
  3  where
  4  #desc dba_indexes
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- --------------------------------------------
 OWNER                                                             NOT NULL VARCHAR2(128)
 INDEX_NAME                                                        NOT NULL VARCHAR2(128)
 INDEX_TYPE                                                                 VARCHAR2(27)
 TABLE_OWNER                                                       NOT NULL VARCHAR2(128)
 TABLE_NAME                                                        NOT NULL VARCHAR2(128)
 TABLE_TYPE                                                                 VARCHAR2(11)
 LAST_ANALYZED                                                              DATE
 DEGREE                                                                     VARCHAR2(40)

  4  OWNER='SCOTT';

TABLE_NAME                    INDEX_NAME                                    LAST_ANALYZED
EMP                                       empid_pk                                           07-MAR-17
DEPT                                      deptno_pk                                         07-MAR-17
…..                                         
SQL>

4) Transaction commit when exiting SQL*Plus

If you exit SQL*Plus without committing, does the open transaction get committed or rolled back?
The answer is: it depends on what the EXITCOMMIT setting is!

To see current setting:
e.g.,
SQL> sho exitcommit
exitcommit ON                  (default)

To turn it off:
SET EXITCOMMIT OFF;   (this makes the open transaction roll back on exit)


5) Estimating Cardinalities using GATHER_PLAN_STATISTICS hint ( Oracle 10g & above)

If we add the GATHER_PLAN_STATISTICS hint to our simple SQL statement we should be able to see the actual cardinality of each operation at execution time alongside the Optimizer estimates for each cardinality in the plan.
e.g.,
SELECT     /*+ GATHER_PLAN_STATISTICS */ p.prod_name, SUM(s.quantity_sold)
FROM       sales s, products p
WHERE      s.prod_id =p.prod_id
AND        p.prod_desc = 'Consumables'
GROUP By p.prod_name ;
…..
SQL> SELECT *
FROM table(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));



The FORMAT parameter value 'ALLSTATS LAST' requests that the execution statistics of the last execution of the cursor be shown.

In order to see the A-Rows values for all of the operations in the plan, you would have to use the FORMAT value 'ALLSTATS ALL', which shows the execution statistics for ALL executions of the cursor. Be careful here, as the values in the A-Rows column are then cumulative (incremented on every execution); the more the statement is executed, the higher the values in the A-Rows column become. This FORMAT also shows many more columns in the execution plan.
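For example (a sketch; substitute the sql_id of your own statement):

SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234wxyz9', NULL, 'ALLSTATS ALL'));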

6) Create an index with COMPRESS:

People have often commented to me that Table Compression (and Advanced Row Compression) are  great at helping to reduce the storage footprint of an Oracle database by a factor of 2x-5x, but Oracle shouldn’t forget about the indexes.

In my experience, indexes often take up to 50% of the total database space, and it is not uncommon to have 10-20 indexes on a single table (in extreme cases, there may be 100+ indexes per table).

Syntax:
CREATE INDEX idx_name ON table_name(col1, col2, col3) COMPRESS [<prefix_col_length>] ;

e.g.,
create index idx_test on product(product_type, product_color, manufacture_date) compress 2;

The <prefix_col_length> after the COMPRESS keyword denotes how many columns to compress. The default (and the maximum) prefix length for a non-unique index is the number of key columns, and the maximum prefix length for a unique index is the number of key columns minus one.
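One way to choose the prefix length is to let Oracle suggest it. A sketch (note that ANALYZE ... VALIDATE STRUCTURE locks the table, as point (b) below warns):

SQL> analyze index idx_test validate structure;
SQL> select name, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

OPT_CMPR_COUNT is the suggested COMPRESS prefix length and OPT_CMPR_PCTSAVE the estimated space saving for it.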

Prior to Oracle 11g, there is a risk if the prefix columns are not chosen properly. You may hit the following issues:
a) there is a small CPU overhead to reconstruct the key column values during index lookups or scans
b) running ANALYZE INDEX takes an exclusive lock on the table, effectively making the table "offline" for that period
c) compression is not useful if the prefix columns are unique within a block
d) you cannot compress function-based indexes with Advanced Index Compression
e) bitmap indexes cannot be compressed


On a partitioned table:
CREATE INDEX my_test_idx ON test(a, b) COMPRESS ADVANCED LOW local
    (PARTITION p1 COMPRESS ADVANCED LOW,
     PARTITION p2 COMPRESS,
     PARTITION p3,
 PARTITION p4 NOCOMPRESS);

7) The _use_nosegment_indexes parameter and the NOSEGMENT keyword:

In contrast to conventional indexes, a virtual index has no associated segment, so the creation time and associated disk space are irrelevant. In addition, it is not seen by other sessions, so it doesn't affect the normal running of your system. This article presents a simple example of how virtual indexes are used.

A virtual index is a "fake" index whose definition exists in the data dictionary, but has no index tress association.  It is used by Oracle developers to test whether a specific index is going to use useful without having to use the disk space associated with the "real" index.  The hidden parameter _use_nosegment_indexes is used by Quest tools and is also use in the Oracle Tuning pack.

Because the virtual index is a fake index, Oracle will never be able to use it, but you can use the hidden _use_nosegment_indexes parameter to evaluate execution plans for virtual indexes:

SQL> set autotrace on explain;
SQL> alter session set "_use_nosegment_indexes" = true;
Session altered.
SQL> select ename from emp where ename = 'MILLER';

ENAME
-----
MILLER

1 rows selected.

OBJECT     OPERATION
---------- ----------------------------------------
                 SELECT STATEMENT()
                  NESTED LOOPS()
EMP                TABLE ACCESS(FULL)
EMP                 TABLE ACCESS(BY INDEX ROWID)
VIRTUAL_INDEX         INDEX(UNIQUE SCAN)

Creating virtual indexes:

The CREATE INDEX statement has a special hidden "nosegment" clause that allows one to create an index definition without actually creating the index.

To build a virtual index, run the following statement:

CREATE unique INDEX virt_ind_name on table_name(col_name) NOSEGMENT;

Removing virtual indexes:

It is important to drop the index after you're done with it.
e.g.,
DROP INDEX virt_ind_name;

8) Create a trigger with DISABLE (Oracle 11g onwards):

Prior to Oracle 11g, disabling a trigger was a two-step process. Triggers were always created in an enabled state; to disable one, we had to use the DISABLE clause of the ALTER TRIGGER or ALTER TABLE command afterwards.

In Oracle 11g, we can specify the DISABLE clause in the trigger definition itself. In the absence of the DISABLE clause, the trigger is created in an ENABLED state. The following script creates a table and then creates a trigger in the disabled state.

CREATE TABLE TEST
(
COL1 VARCHAR(5) NOT NULL,
COL2 NUMBER(5),
COL3 DATE
);

Let us create a trigger now.

CREATE OR REPLACE TRIGGER TRI_TEST
BEFORE INSERT ON TEST
FOR EACH ROW
DISABLE
BEGIN
:NEW.COL3 := SYSDATE;
END;
/

The trigger will be created successfully. We can verify the status of the trigger using the data dictionary view user_triggers. Even though the trigger is created in a disabled state, Oracle still makes sure it compiles; the trigger will be created with compilation errors if we reference a non-existent column in it.

This feature is very useful when we want to enable the trigger at a later stage for specific events and, as mentioned earlier, it avoids the extra step of altering the trigger after creation.
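When that later stage arrives, enabling the trigger is a single statement, e.g.:

SQL> ALTER TRIGGER TRI_TEST ENABLE;

or, for all triggers on the table at once:

SQL> ALTER TABLE TEST ENABLE ALL TRIGGERS;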

9) Export with direct=y (exp direct=y ...)

The speedup from a direct path export can be large, however. The roughly 10% of processing that it cuts out accounts for a much larger percentage of the run time. For example, I just exported about 100 MB of data (1.2 million records): the direct path export took about one minute, while the conventional path export took three minutes.

Setting direct=y bypasses the SQL command-processing (evaluation) layer, so it is faster. Helpful in mission-critical / OLTP environments. (Note: DIRECT is a parameter of the classic exp utility; Data Pump expdp chooses its access method automatically.)
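For reference, a direct path export with the classic exp utility looks like this (a sketch; the credentials, table, and file names are illustrative):

$ exp scott/tiger tables=emp direct=y recordlength=65535 file=emp_direct.dmp log=emp_direct.log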

10) SQL*Plus Error Logging

Suppose you have a SQL script called myscript.sql:

set puase on
set lines 132 pages 0 trimsppol on
select * from nonexistent_table
/

Note there are several errors in the script: the first line has "pause" misspelled, the second line has "trimspool" misspelled, and finally the third line has a select statement against a table that does not even exist. When you run the script from the SQL*Plus prompt, unless you spooled the output, you will not be able to check the errors afterward. Even if you spooled, you would need access to the physical server to examine the spool file, which may not be possible.

Oracle Database 11g has a perfect solution: you can now log the errors coming from SQL*Plus in a special table. As a first command, issue:

SQL> set errorlogging on

Now you run the script:

SQL> @myscript

The run will produce the following error messages:

SP2-0158: unknown SET option "puase"
SP2-0158: unknown SET option "trimsppol"
select * from nonexistent_table
              *
ERROR at line 1:
ORA-00942: table or view does not exist

which you may or may not have seen, depending on how you ran the script—in the foreground from an SQL*Plus prompt or in the background as a script invocation. After the script completes, you can log into the database and check the errors in a table named SPERRORLOG.

sql> col timestamp format a15
sql> col username format a15
sql> col script format a10
sql> col identifier format a15
sql> col statement format a20
sql> col message format a20
sql> select timestamp, username, script, statement, message
  2> from sperrorlog;

Note that you checked the error from a different session, not the session where the script was run; in fact, the script has finished and the session has been terminated anyway. This gives you a powerful ability to check errors after they have occurred, in SQL*Plus sessions that would otherwise be impossible, or at least difficult, to track.

The table SPERRORLOG is a special table that is created for this purpose only. You can also create your own table and populate that with errors from SQL*Plus. This table should be created as:

SQL> create table my_sperror_log
  2  (
  3     username varchar2(256),
  4     timestamp       timestamp,
  5     script          varchar2(1024),
  6     identifier      varchar(256),
  7     message         clob,
  8     statement       clob
  9  )
 10  /

Table created.

Now you can use this table in error logging instead of the default table.

SQL> set errorlogging on table sh.my_sperror_log;
SQL> @myscript

Now MY_SPERROR_LOG (and not SPERRORLOG) will hold the error log. You can truncate all the rows in the table by issuing

SQL> set errorlogging on truncate

There is an optional IDENTIFIER clause that allows you to tag the errors from specific sessions. Suppose you issue the command:

SQL> set errorlogging on identifier MYSESSION1

Now if you run the script, the records will be created with the column called IDENTIFIER populated with the value MYSESSION1. You can extract those records only by issuing the query:

select timestamp, username, script, statement, message
from sperrorlog
where identifier = 'MYSESSION1';

You will see the records from that session only. This is very useful if you are trying to isolate errors in multiple scripts and sessions.

