May 25, 2014

Transportable Tablespaces

Transportable Tablespaces in Oracle

Transportable tablespaces were introduced in Oracle 8i to allow whole tablespaces to be copied between databases in the time it takes to copy the datafiles. In Oracle 8i, one of the restrictions was that the block size of both databases had to be the same. In Oracle 9i, the introduction of multiple block sizes removed this restriction. In this article I will run through a simple example of transporting a tablespace between two databases.

a) Setup
b) Using EXPORT & IMPORT utility
c) Cross-Platform Tablespace Conversions
d) Using RMAN TRANSPORT TABLESPACE: Basic Scenario
e) RMAN TRANSPORT TABLESPACE with UNTIL Time or SCN
f) Transport Tablespace with SET NEWNAME for Auxiliary Datafiles

a) Setup:
~~~~~~~~~
For this example I'm going to create a new tablespace, user and table to work with in the source database.

SQL> CONN / AS SYSDBA

CREATE TABLESPACE sample
  DATAFILE '/u01/app/oradata/datafiles/sample01.dbf'
  SIZE 1M AUTOEXTEND ON EXTENT MANAGEMENT LOCAL;

CREATE USER sample_user IDENTIFIED BY sample_user
  DEFAULT TABLESPACE sample
  TEMPORARY TABLESPACE TEMP
  QUOTA UNLIMITED ON sample;

GRANT CREATE SESSION, CREATE TABLE TO sample_user;

CONN sample_user/sample_user;

CREATE TABLE sample_table (
  sample_id   NUMBER,
  description VARCHAR2(50),
  CONSTRAINT pk_sample_table PRIMARY KEY (sample_id)
);

INSERT /*+ APPEND */ INTO sample_table (sample_id, description)
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10000;

COMMIT;

b) Using EXPORT and IMPORT UTILITY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A) Source Database:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For a tablespace to be transportable it must be totally self-contained. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure. The TS_LIST parameter accepts a comma-separated list of tablespace names and the INCL_CONSTRAINTS parameter indicates whether constraints should be included in the check.

SQL> CONN / AS SYSDBA
SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'SAMPLE', incl_constraints => TRUE);

PL/SQL procedure successfully completed. 
(If this succeeds, you can proceed.)
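Since TS_LIST accepts a comma-separated list, several tablespaces can be checked in a single call; a quick sketch with hypothetical tablespace names:

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'SAMPLE,SAMPLE2', incl_constraints => TRUE);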

The TRANSPORT_SET_VIOLATIONS view is used to check for any violations.

SELECT * FROM transport_set_violations;

no rows selected

Assuming no violations are produced we are ready to proceed by switching the tablespace to read only mode.

SQL> ALTER TABLESPACE sample READ ONLY;

Tablespace altered.

Next we export the tablespace metadata using the export (expdp or exp) utility. If you are using 10g or above you should use the expdp utility. This requires a directory object pointing to a physical directory with the necessary permissions on the database server.

SQL> CONN / AS SYSDBA
SQL> CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
SQL> GRANT READ, WRITE ON DIRECTORY temp_dir TO system;

We can now export the tablespace metadata.

$ expdp userid=system/password directory=temp_dir \
    transport_tablespaces=sample dumpfile=sample.dmp \
    logfile=sample_exp.log

If you are using a version prior to 10g, you do not need the directory object and your command would look something like this.

$ exp userid='system/password as sysdba' transport_tablespace=y tablespaces=sample file=sample.dmp log=sample_exp.log

Copy the datafile to the appropriate location on the destination database server, and copy the dump file to a suitable place there too. You can use binary-mode FTP or scp to perform the copy.
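As a sketch (the host name and destination paths are assumptions for illustration), the copy could be done with scp:

$ scp /u01/app/oradata/datafiles/sample01.dbf oracle@desthost:/u02/oradata/datafiles/
$ scp /tmp/sample.dmp oracle@desthost:/tmp/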

The source tablespace can now be switched back to read/write mode.

SQL> ALTER TABLESPACE sample READ WRITE;

Tablespace altered.

~~~~~~~~~~~~~~~~~~~~~~~~
B) Destination Database:
~~~~~~~~~~~~~~~~~~~~~~~~
Create any users in the destination database that owned objects within the tablespace being transported, assuming they do not already exist.

SQL> CONN / AS SYSDBA
SQL> CREATE USER sample_user IDENTIFIED BY sample_user;
SQL> GRANT CREATE SESSION, CREATE TABLE TO sample_user;

Now we import the metadata into the destination database. If you are using 10g or above you should use the impdp utility. This requires a directory object pointing to a physical directory with the necessary permissions on the database server.

SQL> CONN / AS SYSDBA
SQL> CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
SQL> GRANT READ, WRITE ON DIRECTORY temp_dir TO system;

We can now import the tablespace metadata.

$ impdp userid=system/password directory=temp_dir dumpfile=sample.dmp \
    logfile=sample_imp.log transport_datafiles='/u02/oradata/datafiles/sample01.dbf'

If you are using a version prior to 10g, you do not need the directory object and your command would look something like this.

$ imp userid='system/password as sysdba' transport_tablespace=y datafiles='/u02/oradata/datafiles/sample01.dbf' tablespaces=sample file=sample.dmp log=sample_imp.log

Switch the new tablespace into read write mode.

SQL> ALTER TABLESPACE sample READ WRITE;

Tablespace altered.

The tablespace is now available in the destination database.

SELECT tablespace_name, plugged_in, status
FROM   dba_tablespaces
WHERE  tablespace_name = 'SAMPLE';

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SAMPLE                         YES ONLINE

1 row selected.

SQL>

c) Cross-Platform Tablespace Conversions:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are transporting tablespaces between platforms you need to perform Cross-Platform Tablespace Conversions.

The RMAN CONVERT TABLESPACE command allows tablespaces to be transported between platforms with different byte orders. The transportable tablespace mechanism is unchanged; this command merely converts the datafiles so the transport can work.

The platforms of the source and destination servers can be identified using the V$TRANSPORTABLE_PLATFORM view. The platform of the local server is not listed, as no conversion is necessary for a matching platform.

SQL> SELECT platform_name FROM v$transportable_platform;

PLATFORM_NAME
------------------------------------
Solaris[tm] OE (32-bit)
...
...
Microsoft Windows 64-bit for AMD

15 rows selected.

The tablespace conversion can take place on either the source or the destination server. The following examples show how the command is used in each case.

# Conversion on a Solaris source host to a Linux destination file.
CONVERT TABLESPACE my_tablespace
  TO PLATFORM 'Linux IA (32-bit)'
  FORMAT '/tmp/transport_linux/%U';

# Conversion on a Linux destination host from a Solaris source file.
CONVERT DATAFILE
    '/tmp/transport_solaris/my_ts_file01.dbf',
    '/tmp/transport_solaris/my_ts_file02.dbf'
  FROM PLATFORM 'Solaris[tm] OE (32-bit)'
  DB_FILE_NAME_CONVERT
    '/tmp/transport_solaris','/u01/oradata/MYDB';

In the first example the converted files are placed in the directory specified by the FORMAT clause. In the second example the specified datafiles are converted to the local server's platform and placed in the directory specified by the DB_FILE_NAME_CONVERT clause.


d) Using RMAN TRANSPORT TABLESPACE: Basic Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this scenario the AUXILIARY DESTINATION clause is used, which causes RMAN to use default values that work for most cases when managing the auxiliary instance. Only the required options are specified.

Note : Oracle recommends that you use an auxiliary destination with TRANSPORT TABLESPACE to simplify management of auxiliary instance files.

1) Requirements:

>> The process described here is only one part of the overall process of transporting tablespaces. Before you use TRANSPORT TABLESPACE, you must meet the following requirements.

>> Confirm that tablespace transport is supported between your source and destination platforms.
Use the view below to verify:

SQL> SELECT platform_name, endian_format FROM v$transportable_platform;

"little" endian format allowed to transport into "big" endian format.

>> Identify a self-contained set of tablespaces to include in the transportable set (a check is sketched below).
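As shown in section b), that check might look like this for the tablespaces used in the example below:

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'SAMPLE2,SAMPLE3', incl_constraints => TRUE);
SQL> SELECT * FROM transport_set_violations;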

2) To use RMAN TRANSPORT TABLESPACE:

Start the RMAN client, connect to the source database and, if used, to the recovery catalog. Then enter the TRANSPORT TABLESPACE command, specifying the required arguments.

For example, to transport the tablespaces sample2 and sample3, use the TRANSPORT TABLESPACE command as follows:

RMAN> transport tablespace sample2, sample3
   tablespace destination '/u01/app/oradata/datafiles'
   auxiliary destination '/u02/oradata/datafiles';

When the TRANSPORT TABLESPACE command completes, the following outputs result:

i) The transportable set datafiles are left in the tablespace destination /u01/app/oradata/datafiles with their original names.
ii) The Data Pump Export dump file for the transportable set is named dmpfile.dmp, the export log is named explog.log and the sample import script is named impscrpt.sql. All are created in the tablespace destination /u01/app/oradata/datafiles.
iii) The auxiliary set files are removed from /u02/oradata/datafiles.

Note:
If a file already exists under the name of the export dump file in the tablespace destination, TRANSPORT TABLESPACE fails when it calls Data Pump Export. If you are repeating a previous TRANSPORT TABLESPACE operation, delete the previous output files first, including the export dump file. You can then return to the process of transporting tablespaces.
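As a hedged sketch (paths as in the example above; the actual script contents are generated by RMAN), plugging the set into the destination can be as simple as copying the output files and datafiles to the destination server and running the generated script there:

SQL> CONN / AS SYSDBA
SQL> @/u01/app/oradata/datafiles/impscrpt.sql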

e) RMAN TRANSPORT TABLESPACE with UNTIL Time or SCN:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you specify a target time with the TRANSPORT TABLESPACE command, then during the tablespace transport operation, RMAN will restore the tablespace at the auxiliary instance using backups from prior to the target time and perform point-in-time recovery on the auxiliary database to the specified target time. (Backups and redo logs needed for this point-in-time recovery must be available.)

The target time can be specified using an SCN (in the current incarnation or its ancestors), log sequence number, restore point or time expression. For example:

TRANSPORT TABLESPACE sample
   TABLESPACE DESTINATION '/u01/app/oradata/datafiles'
   AUXILIARY DESTINATION '/u02/oradata/datafiles'
   UNTIL SCN 25433932989;

TRANSPORT TABLESPACE sample
   TABLESPACE DESTINATION '/u01/app/oradata/datafiles'
   AUXILIARY DESTINATION '/u02/oradata/datafiles'
   UNTIL RESTORE POINT 'last_upgrade';

TRANSPORT TABLESPACE sample
   TABLESPACE DESTINATION '/u01/app/oradata/datafiles'
   AUXILIARY DESTINATION '/u02/oradata/datafiles'
   UNTIL TIME 'SYSDATE-1';
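If you need a target SCN or a named restore point for the UNTIL clause, they can be captured on the source database beforehand; a quick sketch (the restore point name is just an example):

SQL> SELECT current_scn FROM v$database;
SQL> CREATE RESTORE POINT last_upgrade;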


f) Transport Tablespace with SET NEWNAME for Auxiliary Datafiles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SET NEWNAME command can be used in a RUN block to specify filenames for use by TRANSPORT TABLESPACE.

Example:

RMAN>
RUN {
   SET NEWNAME FOR DATAFILE '/u01/app/oradata/datafiles/sample01.dbf'
      TO '/u02/oradata/datafiles/sample01.dbf';
   SET NEWNAME FOR DATAFILE '/u01/app/oradata/datafiles/sample02.dbf'
      TO '/u02/oradata/datafiles/sample02.dbf';
   TRANSPORT TABLESPACE SAMPLE
         TABLESPACE DESTINATION '/u01/app/oradata/datafiles'
         AUXILIARY DESTINATION '/u02/oradata/datafiles';
   }

The SET NEWNAME commands cause these auxiliary instance datafiles to be restored to the specified names rather than to names generated by RMAN under the auxiliary destination.

Note:

SET NEWNAME is best used with one-time operations. If you expect to create transportable tablespaces from backup regularly for a particular set of tablespaces, consider using CONFIGURE AUXNAME instead of SET NEWNAME in order to make persistent settings for the location of the auxiliary instance datafiles.
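A minimal sketch of the persistent alternative (file names taken from the example above):

RMAN> CONFIGURE AUXNAME FOR DATAFILE '/u01/app/oradata/datafiles/sample01.dbf'
        TO '/u02/oradata/datafiles/sample01.dbf';

The setting persists across RMAN sessions and can be cleared later with CONFIGURE AUXNAME ... CLEAR.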

As you can see, there are several ways to move a tablespace, both within a platform and cross-platform.

Please feel free to post comments...

May 12, 2014

Troubleshoot : ORA-02020: too many database links in use

Resolve "ORA-02020: too many database links in use" error 

One of my databases serves as a remote database for several local/distributed databases. For one requirement we created a number of database links. Many queries were then fired across these links, and at that point we hit the error below:

"ORA-02020: too many database links in use "

Cause:  The current session has exceeded the INIT.ORA open_links maximum.

To find:
---------
SQL> show parameter open_links;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
open_links                           integer     4
open_links_per_instance              integer     4


Action: 
-------
Increase the open_links limit, or free up some open links by committing or rolling back the transaction and canceling open cursors that reference remote databases.
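For example, a minimal sketch of freeing links in the current session (remotedb is a hypothetical link name):

SQL> COMMIT;                                      -- or ROLLBACK; ends the transaction pinning the links
SQL> ALTER SESSION CLOSE DATABASE LINK remotedb;  -- close the now-idle link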

To Change:
----------
SQL> alter system set open_links_per_instance=10 scope=spfile;
SQL> alter system set open_links=10 scope=spfile;

Then bounce the database for the change to take effect.

Note:
> The default value is set to 4.
> If it is set to 0, distributed transactions are not allowed.
> If you are expecting your transactions to have a maximum of 3 database links open concurrently, set this parameter to 3 or higher.
> Do not set it too high; it is better for the application to close database links when they are no longer in use than to raise the parameter to a high number.

If you are not sure how many database links are opened up concurrently by your session's database application, you can query v$dblink.

SQL> select in_transaction, count(*) from v$dblink
     group by in_transaction;

IN_   COUNT(*)
--- ----------
YES          1

Extra Coverage:

Close a db link
To explicitly close the database link , use the command below:

SQL> alter session close database link remotedb;  -- remotedb --> a dblink name
Session altered.

OR
SQL> exec DBMS_SESSION.CLOSE_DATABASE_LINK('remotedb');  -- pass the db link name as a string

Know your open links

Once you have created and made use of a database link in Oracle,
you might want to keep an eye on the number of concurrent open database links
in your database so you can tune the open_links initialization parameter.


As shown above, the v$dblink query tells you how many links are currently open in your session.

Here are some interesting columns and descriptions of v$dblink: 

db_link          Db link name
owner_id         Owner name
logged_on        Is the database link currently logged on?
protocol         Db link's communications protocol
open_cursors     Are there any cursors open for the db link?
in_transaction   Is the db link part of a transaction which has not been committed or rolled back yet?
update_sent      Was there an update on the db link?

dba_db_links

To gather information on all database links in your database, query dba_db_links.
You will need DBA privileges to see this view; the alternatives are user_db_links and all_db_links.

A user who does not have dba privileges can query all_db_links to see which db links are available to him.
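For example, a quick sketch of what a non-DBA user might run:

SQL> select db_link, host, created from all_db_links;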

SQL> select * from dba_db_links;

OWNER      DB_LINK     HOST       CREATED
---------- ----------- ---------- --------------------
GOURANG    REMOTEDB    remotedb   12-May-2014 18:10:01

Some interesting columns: 

owner      User who owns the db link; shows 'PUBLIC' if it is a public database link.
db_link    Db link name.
username   Username specified if it was hardcoded in the create statement; null if not specified.
host       The tnsnames alias specified during the create statement.
created    Date and time of link creation.

Thanks
Please feel free to post comments...


May 10, 2014

Troubleshoot: enq: TM - contention

Resolve "enq: TM - contention" issues in Oracle

Recently, while monitoring a production system, I came across the "enq: TM - contention" wait event. The blocked sessions were executing simple INSERT and UPDATE statements similar to:

INSERT INTO customer VALUES (:1, :2, :3);

Query to find blocking session details:

select sid,serial#,event, blocking_session, username,status,terminal,program,sql_id
from v$session
where BLOCKING_SESSION  IS NOT NULL;

About "enq: TM - contention" :

Waits on "enq: TM - contention" indicate that there are unindexed foreign key constraints. Reviewing the schema, we found a foreign key constraint on the PRODUCT table referencing the CUSTOMER table that did not have an associated index. This was confirmed with the development team and verified against the DDL. We added an index on the referencing column and the problem was solved.

Finding the root cause of the enq: TM - contention wait event

Using the above query to find the blocking sessions, we found the real culprit. Periodically, as the company reviewed its customer list, it "cleaned up" the CUSTOMER table several times a week. As a result, rows were deleted from the CUSTOMER table. Those delete statements then cascaded to the PRODUCT table and took out TM locks on it.

Reproducing a typical problem that leads to this wait

This problem has a simple fix, but I wanted to understand more about why it happens, so I reproduced the issue to see what goes on under the covers. I first created a simplified version of the tables involved and loaded them with sample data.

CREATE TABLE customer (
  customer_id   NUMBER(10) NOT NULL,
  customer_name VARCHAR2(50) NOT NULL,
  contact_name  VARCHAR2(50),
  CONSTRAINT customer_pk PRIMARY KEY (customer_id)
);
INSERT INTO customer VALUES (1, 'customer 1', 'Contact 1');
INSERT INTO customer VALUES (2, 'customer 2', 'Contact 2');
COMMIT;

CREATE TABLE product (
  product_id   NUMBER(10) NOT NULL,
  product_name VARCHAR2(50) NOT NULL,
  customer_id  NUMBER(10) NOT NULL,
  CONSTRAINT fk_customer
    FOREIGN KEY (customer_id)
    REFERENCES customer (customer_id)
    ON DELETE CASCADE
);
INSERT INTO product VALUES (1, 'Product 1', 1);
INSERT INTO product VALUES (2, 'Product 2', 1);
INSERT INTO product VALUES (3, 'Product 3', 2);
COMMIT;

I then executed statements similar to what we found at this customer:

User 1: DELETE customer WHERE customer_id = 1;
User 2: DELETE customer WHERE customer_id = 2;
User 3: INSERT INTO customer VALUES (5, 'customer 5', 'Contact 5');

Similar to the customer's experience, User 1 and User 2 hung waiting on "enq: TM - contention". Reviewing information from V$SESSION and V$LOCK, I found the following:

-- Find details of blocking sessions
SQL>
SELECT l.sid, s.blocking_session blocker, s.event, l.type, l.lmode, l.request, o.object_name, o.object_type
FROM v$lock l, dba_objects o, v$session s
WHERE UPPER(s.username) = UPPER('&User')
AND l.id1 = o.object_id (+)
AND l.sid = s.sid
ORDER BY sid, type;

-- Solution

Following the solution we used for this customer, we added an index for the foreign key constraint on the PRODUCT table referencing the CUSTOMER table:

SQL> CREATE INDEX idx_fk_customer ON product (customer_id);

When we ran the test case again everything worked fine: no blocking table locks were acquired, and hence no hanging. Oracle takes out restrictive table-level locks on the child table (the PRODUCT table in our example) when a foreign key constraint is not indexed.

Sample query to find unindexed foreign key constraints

Now that we know unindexed foreign key constraints can cause severe problems, here is a script that I use to find them for a specific user (this can easily be tailored to search all schemas):

SELECT * FROM (
SELECT c.table_name, cc.column_name, cc.position column_position
FROM   user_constraints c, user_cons_columns cc
WHERE  c.constraint_name = cc.constraint_name
AND    c.constraint_type = 'R'
MINUS
SELECT i.table_name, ic.column_name, ic.column_position
FROM   user_indexes i, user_ind_columns ic
WHERE  i.index_name = ic.index_name
)
ORDER BY table_name, column_position;
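As noted above, the same idea can be tailored to search all schemas; a sketch using the dba_ views (adjust the owner exclusion list as needed):

SELECT * FROM (
  SELECT c.owner, c.table_name, cc.column_name, cc.position column_position
  FROM   dba_constraints c, dba_cons_columns cc
  WHERE  c.constraint_name = cc.constraint_name
  AND    c.owner = cc.owner
  AND    c.constraint_type = 'R'
  AND    c.owner NOT IN ('SYS','SYSTEM')
  MINUS
  SELECT ic.table_owner, ic.table_name, ic.column_name, ic.column_position
  FROM   dba_ind_columns ic
)
ORDER BY owner, table_name, column_position;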

Thanks
Please feel free to post comments...

May 7, 2014

"Transaction recovery: lock conflict caught and ignored" in alert log

"Transaction recovery: lock conflict caught and ignored" in alert log

This is a normal informational message indicating that SMON is trying to recover a failed (dead) transaction. How long SMON takes to finish depends on the amount of modified data that needs to be recovered and on the degree of parallelism used when recovering terminated transactions. In other words, this happens when a long-running transaction was interrupted and is being recovered.

Verify the volume of dead transactions with the query below.

select sysdate, b.name useg, a.ktuxeusn xid_usn, a.ktuxeslt xid_slot,
       a.ktuxesqn xid_seq, a.ktuxesta
from   x$ktuxe a, undo$ b
where  a.ktuxeusn = b.us#
and    a.ktuxecfl like '%DEAD%';

If this returns a very high number of records, you may need to change the following parameter, particularly in an OLTP (write-heavy) environment.

Check the value of the FAST_START_PARALLEL_ROLLBACK parameter.
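You can check the current value (the default is LOW) with:

SQL> show parameter fast_start_parallel_rollback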

To speed up recovery it needs to be set to HIGH; set it as follows:

ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK = HIGH;

After this, the message should stop appearing in your alert log.

However, if you still get the same message after setting the value to HIGH, it may indicate block/segment corruption.

Points to Remember:

Parallel Transaction Recovery :

Recovery occurs in parallel mode. Several parallel slave processes are spawned and take part in the recovery. This is also termed Fast-Start Parallel Rollback. The background process SMON acts as a coordinator and rolls back a set of transactions in parallel using multiple server processes. To enable parallel recovery mode, set the parameter FAST_START_PARALLEL_ROLLBACK to LOW or HIGH.

Values:

• FALSE - Parallel rollback is disabled
• LOW   - Limits the maximum degree of parallelism to 2 * CPU_COUNT
• HIGH  - Limits the maximum degree of parallelism to 4 * CPU_COUNT

If you change the value of this parameter, then transaction recovery will be stopped and restarted with the new implied degree of parallelism. Fast-start parallel rollback is mainly useful when a system has transactions that run a long time before committing, especially parallel INSERT, UPDATE, and DELETE operations. When SMON discovers that the amount of recovery work is above a certain threshold, it automatically begins parallel rollback by dispersing the work among several parallel processes: process 1 rolls back one transaction, process 2 rolls back a second transaction, and so on. The threshold is the point at which parallel recovery becomes cost-effective, in other words, when parallel recovery takes less time than serial recovery.
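While a large rollback is running, its progress can be watched through the fast-start recovery views; a minimal sketch:

-- One row per dead transaction being recovered
SQL> SELECT usn, state, undoblockstotal, undoblocksdone
     FROM   v$fast_start_transactions;

-- One row per parallel recovery slave
SQL> SELECT state, undoblocksdone, pid
     FROM   v$fast_start_servers;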

To troubleshoot block corruption, see my separate post on that topic.

Caution: make any change to a production database only on Oracle Support's recommendation, or at your own risk.




Please feel free to post comment...

May 2, 2014

dbms_scheduler jobs are not running automatically - a case study

Issue Description:

I suddenly received a call that a few database jobs which used to run automatically had stopped. I verified all of those jobs and found no issues in their DDL. All the procedures they call were also working fine and had not been changed in six months. So I started my investigation.

Investigation :

Step-1:
I disabled and re-enabled both jobs, then dropped and re-created them with the existing DDL, but the issue was not resolved.

Step-2:
I tried calling the job manually using the method below, and it ran successfully. I then thought a J00* or CJQ0 process might be causing the issue, so I traced both jobs.

-- Manually calling the job

SQL> exec DBMS_SCHEDULER.run_job('owner.jobname');

-- Started tracing
Note: grant ALTER SESSION to the user/job owner (GRANT ALTER SESSION TO <user>)
and connect as that user/job owner.

SQL> alter session set tracefile_identifier='10046';
SQL> alter session set timed_statistics = true;
SQL> alter session set statistics_level=all;
SQL> alter session set max_dump_file_size = unlimited;
SQL> alter session set events '10046 trace name context forever,level 12';
SQL> exec DBMS_SCHEDULER.run_job('jobname');
SQL> select * from dual;
SQL> exit;

But when I verified the tkprof output, no symptoms were identified.

Step-3:
I then suspected the number of job queue slave processes; job_queue_processes may have been too low (this is the most common problem). The value of job_queue_processes limits the total number of dbms_scheduler and dbms_job jobs that can run at a given time. To check whether this is the case, look at the current value:

SQL> select value from v$parameter where name='job_queue_processes';
Then check the number of running jobs:
SQL> select count(*) from dba_scheduler_running_jobs;
SQL> select count(*) from dba_jobs_running;

If this is the problem, you can increase the parameter job_queue_processes:

SQL> alter system set job_queue_processes=1000;

However, I found the counts were very low and the job_queue_processes value was already 1000. I then suspected the sessions/processes limits might be too low, but the configured values were sufficient for this environment.


Step-4:
I checked the alert log for errors but found nothing related to this job. If there were an issue, it would be written to the alert log and to a j00*-named .trc file.

Step-5:
If the SYSAUX tablespace (where the scheduler stores its logging tables) is full, you can use the dbms_scheduler.purge_log procedure to clear out old log entries.

-- To purge dbms_scheduler logging tables manually
SQL> exec dbms_scheduler.purge_log();

-- To purge dbms_scheduler logging tables automatically (keep 90 days of log)
SQL> exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','90');

-- Verify the count
SQL> select count(*) from user_scheduler_job_run_details;

Step-6:
The database was up and running, other jobs were running, the job was not associated with any window, there was no session issue, and the job ran fine manually. I then suspected the job priority might be preventing it from running; sometimes this issue occurs.

-- How to see job priority

SQL> select owner,job_name,job_type,job_action,
repeat_interval,enabled,job_priority
from dba_scheduler_jobs where owner='owner';

I found the job priority was 3, i.e., quite a low priority. I decided to disable and re-enable the job and then set its priority to 1. I followed these steps:
-- If you are doing this as the SYS user:
SQL> exec dbms_scheduler.disable(name => 'owner.jobname');
SQL> exec dbms_scheduler.enable(name => 'owner.jobname');
SQL> exec DBMS_SCHEDULER.SET_ATTRIBUTE (name => 'owner.jobname',attribute => 'job_priority',value => 1 );

I then verified with the following query and found the previously stopped jobs were running fine.

SQL> select * from dba_scheduler_job_run_details
     where owner = 'owner' and trunc(log_date) = trunc(sysdate)
     order by log_date desc;

Sometimes an issue gets resolved in a tricky way, but we should still treat it seriously. In any case, this case study covers a lot of ground.


Another workaround:

The issue may also be caused by the Oracle default scheduler windows or the job_queue_processes value.
-- Current job_queue_processes parameter value:
SQL> show parameter job_queue_processes;

NAME                      TYPE        VALUE
------------------------- ----------- ------------
job_queue_processes       integer     1000

-- alter to '0'
SQL> alter system set job_queue_processes=0;
System altered.

-- Then set it back to the original/required value:

SQL> alter system set job_queue_processes=100;


Note: setting too large a value may also create issues. Decide based on your environment and jobs.

Below are the Oracle default maintenance windows in my environment and whether each is currently active. Check yours:

SQL> select window_name,active from DBA_SCHEDULER_WINDOWS;

WINDOW_NAME          ACTIVE
-------------------- ------
MONDAY_WINDOW        FALSE
TUESDAY_WINDOW       FALSE
WEDNESDAY_WINDOW     FALSE
THURSDAY_WINDOW      FALSE
FRIDAY_WINDOW        FALSE
SATURDAY_WINDOW      FALSE
SUNDAY_WINDOW        FALSE
WEEKNIGHT_WINDOW     FALSE
WEEKEND_WINDOW       TRUE

9 rows selected.

Note: I found the WEEKNIGHT_WINDOW was still active. I closed it as shown below, and then all jobs started running automatically. Run the queries below to find information about your jobs:

select * from DBA_SCHEDULER_RUNNING_JOBS;
select * from dba_scheduler_global_attribute;
select * from dba_scheduler_windows;
select * from dba_scheduler_jobs;

SQL> EXECUTE DBMS_SCHEDULER.CLOSE_WINDOW ('WEEKNIGHT_WINDOW');

Note:
Sometimes a misconfiguration in your scheduler window settings can leave WEEKNIGHT_WINDOW active and make the scheduler hang. In 11g, neither WEEKNIGHT_WINDOW nor WEEKEND_WINDOW should be active, because they are superseded by the per-day windows. Run the following to disable them:

exec DBMS_SCHEDULER.DISABLE ('WEEKNIGHT_WINDOW, WEEKEND_WINDOW'); 


Thank you. Please feel free to ask questions via this blog.
Have a nice day...
