Feb 23, 2019

expdp job still executing even after killed from OS level

I killed an expdp job at the OS level, but I noticed that the job was still executing. I had started the job in the background like below:

$ nohup expdp \"/ as sysdba\" parfile=export_emp_tables.par &

When I checked v$session, I found the program listed as "oracle@example01 (DM00)". That means the DM00 background process had started executing the job, i.e., the Data Pump master process DM00 was running.

You can also find the "master process DM00 started" message in the alert log, along with the complete session details.
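A quick way to map the DM00 process to its database session is to join dba_datapump_sessions with v$session on the session address; a minimal sketch (the program string is just the one from this example):

SET lines 200
COL program FORMAT a35
SELECT s.sid, s.serial#, s.username, s.program, d.job_name
FROM   v$session s, dba_datapump_sessions d
WHERE  s.saddr = d.saddr;

This should list the oracle@example01 (DM00) session along with the job it is attached to.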

So, how can we clear that session?

Step 1: Check the job name and owning schema

Find the Job name.

SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
SELECT owner_name, job_name, rtrim(operation) "OPERATION", 
rtrim(job_mode) "JOB_MODE", state, attached_sessions 
FROM dba_datapump_jobs 
WHERE job_name NOT LIKE 'BIN$%' 
ORDER BY 1,2;



OWNER_NAME JOB_NAME             OPERATION    JOB_MODE     STATE        ATTACHED_SESSIONS
---------- -------------------- ------------ ------------ ------------ -----------------
SYS        SYS_EXPORT_TABLE_01  EXPORT       TABLE        EXECUTING                    1
SYSTEM     SYS_EXPORT_FULL_01   EXPORT       FULL         NOT RUNNING                  0


Here the running job is 'SYS_EXPORT_TABLE_01'.


Step 2: To stop the job, perform the steps below

Syntax:

exec DBMS_DATAPUMP.STOP_JOB (DBMS_DATAPUMP.ATTACH('JOB_NAME','OWNER'), 1, 0);

Here, the second argument (1) aborts the job immediately, and the third argument (0) drops the master table so the job is removed from the job list and cannot be restarted.

Example:
SQL> exec DBMS_DATAPUMP.STOP_JOB (DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','SYS'),1,0);

PL/SQL procedure successfully completed.
SQL>

Verify again:

SELECT owner_name, job_name, rtrim(operation) "OPERATION", 
rtrim(job_mode) "JOB_MODE", state, attached_sessions 
FROM dba_datapump_jobs 
WHERE job_name NOT LIKE 'BIN$%' 
ORDER BY 1,2;

OWNER_NAME JOB_NAME             OPERATION    JOB_MODE     STATE        ATTACHED_SESSIONS
---------- -------------------- ------------ ------------ ------------ -----------------
SYSTEM     SYS_EXPORT_FULL_01   EXPORT       FULL         NOT RUNNING                  0

Now no export job is running.
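To double-check that the orphaned session itself has cleared, you can look for the DM00 process in v$session again; a quick sanity check along these lines:

SELECT sid, serial#, username, program
FROM   v$session
WHERE  program LIKE '%DM0%';

If this returns no rows, the master process session is gone.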

Feb 19, 2019

ORA-01152: file 1 was not restored from a sufficiently old backup - Solution

Sample Restore and Recover scenario in Oracle using RMAN:

After restoring the database, when I tried to recover it, I got the "RMAN-06556: datafile 1 must be restored from backup..." error, which is common in many recovery scenarios. There are several ways to solve this issue: you can recover the database UNTIL SCN, UNTIL SEQUENCE (log sequence), or UNTIL TIME.

In this scenario, I first tried recovering until an SCN and hit the error below:

RMAN> run {
allocate channel c0 device type sbt parms="SBT_LIBRARY=/u01/tivoli/Base/libobk.so,BLKSIZE=1048576";
set until scn 7460264;
recover database noredo;
}
2> 3> 4> 5> 
allocated channel: c0
channel c0: SID=762 device type=SBT_TAPE
channel c0: CommVault Systems for Oracle: Version 11.0.0(BUILD80)

executing command: SET until clause

Starting recover at 18-FEB-19
released channel: c0
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/18/2019 20:29:06
RMAN-06556: datafile 1 must be restored from backup older than SCN 7460264

RMAN> 

So, I tried to find out what SCN I can use to recover:

SQL> select CHECKPOINT_CHANGE# from v$datafile_header;

CHECKPOINT_CHANGE#
------------------
           7460265
           7460265
           7460265
           7460264
           7460264

SQL>
break on file_type skip 1

select 'D' file_type, file#, checkpoint_change#, status from V$datafile_header
union all
select 'L', group#, first_change#, status from V$log order by 1,3,2;



F      FILE# CHECKPOINT_CHANGE# STATUS
- ---------- ------------------ ----------------
D          4            7460264 ONLINE
           5            7460264 ONLINE
           1            7460265 ONLINE
           2            7460265 ONLINE
           3            7460265 ONLINE

L          1            7441809 INACTIVE
           2            7446438 INACTIVE
           3            7456460 CURRENT

But no luck, still the same error.
Example:

RMAN> run {
allocate channel c0 device type sbt parms="SBT_LIBRARY=/opt/commvault/Base/libobk.so,BLKSIZE=1048576";
set until scn 7441809;
recover database noredo;
}2> 3> 4> 5> 

allocated channel: c0
channel c0: SID=762 device type=SBT_TAPE
channel c0: CommVault Systems for Oracle: Version 11.0.0(BUILD80)

executing command: SET until clause

Starting recover at 18-FEB-19
released channel: c0
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/18/2019 20:33:06
RMAN-06556: datafile 1 must be restored from backup older than SCN 7441809

I even tried to open the database with RESETLOGS, which failed with the ORA-01152 error:

Example:

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 02/18/2019 20:34:49
ORA-01152: file 1 was not restored from a sufficiently old backup 
ORA-01110: data file 1: '/u02/oradata/ORCL/datafile/o1_mf_system_k9ww6f19_.dbf'

RMAN> 

Then I decided to try recovering the database using a log sequence. I used the "list backup of archivelog all" command to find the last log sequence available in the backups.

Find last log sequence used

RMAN> list backup of archivelog all;

...
BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
701     Unknown    SBT_TAPE    00:00:00     18-FEB-19      
        BP Key: 705   Status: AVAILABLE  Compressed: NO  Tag: TAG20190216T164243
        Handle: 1169468_ORCL_lttq1og3_1_1   Media: V_880953_3272683

  List of Archived Logs in backup set 701
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    815     7421975    16-FEB-19 7430618    16-FEB-19
  1    816     7430618    16-FEB-19 7441809    16-FEB-19
  1    817     7441809    16-FEB-19 7446438    16-FEB-19
  1    818     7446438    16-FEB-19 7456460    16-FEB-19
  1    819     7456460    16-FEB-19 7460470    16-FEB-19
  1    820     7460470    16-FEB-19 7460491    16-FEB-19

Now we can use 820 as the log sequence# for thread 1 and start the recovery.

-- Finally able to recover
Example:
run
{
SET UNTIL SEQUENCE 820 THREAD 1;
recover database;
ALTER DATABASE OPEN RESETLOGS;
}

output:


RMAN> run
{
SET UNTIL SEQUENCE 820 THREAD 1;
recover database;
ALTER DATABASE OPEN RESETLOGS;
}2> 3> 4> 5> 6> 

executing command: SET until clause

Starting recover at 18-FEB-19
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=762 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=13 device type=DISK

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=819
channel ORA_SBT_TAPE_1: reading from backup piece 1169468_ORCL_lttq1og3_1_1
channel ORA_SBT_TAPE_1: piece handle=1169468_ORCL_lttq1og3_1_1 tag=TAG20190216T164243
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
archived log file name=/u02/fast_recovery_area/ORCL/archivelog/2019_02_18/o1_mf_1_819_g6pqfpov_.arc thread=1 sequence=819
channel default: deleting archived log(s)
archived log file name=/u02/fast_recovery_area/ORCL/archivelog/2019_02_18/o1_mf_1_819_g6pqfpov_.arc RECID=631 STAMP=1000586214
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-FEB-19

Statement processed

RMAN> 

-- Now verify the database status

SQL> col HOST_NAME for a30
SQL> select instance_name,host_name,startup_time,status from v$instance;

INSTANCE_NAME    HOST_NAME                 STARTUP_TIME       STATUS
---------------- ------------------------ ------------------ ------------
orcl             example09                 18-FEB-19          OPEN

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ WRITE

I hope this helps you. Post your comments.

Feb 6, 2019

Import Data on Amazon RDS using Datapump utility

- From on-premises / EC2 to AWS RDS

A common question is: "How do we import data into an Amazon RDS DB instance?"

The answer depends on the amount of data you have and on the number and variety of database objects in your database, and there are several challenges along the way.

For example, you can use Oracle SQL Developer to import a simple 50 MB or 100 MB database. You can use Oracle Data Pump to import complex databases, or databases that are several hundred megabytes or several terabytes in size.

You can also use AWS Database Migration Service (AWS DMS) to import data into an Amazon RDS DB instance. AWS DMS can migrate databases without downtime and, for many database engines, continue ongoing replication until you are ready to switch over to the target database. You can migrate to Oracle from either the same database engine or a different database engine using AWS DMS. If you are migrating from a different database engine, you can use the AWS Schema Conversion Tool to migrate schema objects that are not migrated by AWS DMS. For more information about AWS DMS, see What is AWS Database Migration Service.

The following performance guidelines apply to all RDS data import/export operations:

Load and unload data in parallel using compression and multiple threads.

For large data loads, consider disabling automated backups by setting the backup retention for the RDS DB instance to zero; a restart of the RDS DB instance is necessary to enact this change. Disabling backups is not recommended for production instances. Disabling backups prevents point-in-time recovery functionality, deletes all previous automated snapshots for the DB instance, and prevents recovery of deleted snapshots. If the DB instance is not yet in production, disabling backups can improve load performance. This change should be immediately rolled back when the data load is complete. Also consider generating recovery points by creating manual snapshots at critical stages of the data load process.

Consider disabling Multi-AZ during the data load operation to reduce the overhead caused by the synchronous write operations of Multi-AZ data loads. Multi-AZ functionality is a recommended
best practice for all production RDS DB instances and should be enabled as soon as the data load is complete.

If you are loading a large amount of data in parallel, ensure that the client machine has sufficient resources to avoid becoming a bottleneck during the data load process.

The Data Pump command-line utilities expdp and impdp cannot be run directly against the RDS host, because SSH access to the host where the RDS database runs is not allowed.


-- Scenario

Source : On Premise

hostname : example01
database : testdb


Target : AWS RDS
database : testorcl


Step-1: Take the export from the on-premises / EC2 instance

a) Create the directory on the source database

SQL> create or replace directory export as '/u03/backup/export';
Directory created.

SQL> select name from v$database;

NAME
---------
TESTDB

b) Export the dump file from the EC2 database (if you already have the exported dump file, skip this step)

expdp directory=export dumpfile=test1.dmp logfile=test1_expdp.log schemas=TEST_SCHEMA exclude=statistics

$ pwd
/u03/backup/export
$ ls
test1.dmp  test1_expdp.log


Step-2: Create a database link to the target DB instance

Create a database link between your source instance and your target DB instance. Note that your local Oracle instance must have network connectivity to the DB instance in order to create a database link and to transfer your export dump file.

Perform this step connected with the same user account as the previous step.

The following command creates a database link named to_rds that connects to the Amazon RDS master user at the target DB instance:

e.g.,
SQL> create database link to_rds connect to <master_user_account> identified by <password> using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<dns or ip address of remote db>)(PORT=<listener port>))(CONNECT_DATA=(SID=<remote SID>)))';

Note: Create your database link based on the connection details provided by AWS. Assume the link "RDS_DBLINK_EXAMPLE" has been created here.
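Before moving on, it is worth confirming that the link actually works; a simple sanity check from the source instance (using the link name assumed above) is:

SQL> select sysdate from dual@RDS_DBLINK_EXAMPLE;

If this returns a date, the source instance can reach the RDS instance over the link.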


Step-3: Grant privileges to the user/schema you will import into

You need to grant the following privileges to the user/schema that you will use for the import:

grant read, write on directory DATA_PUMP_DIR to TEST_SCHEMA_RDS;
grant execute on dbms_datapump to TEST_SCHEMA_RDS;
grant select_catalog_role to TEST_SCHEMA_RDS;
grant create session,create table to TEST_SCHEMA_RDS;
grant execute on dbms_file_transfer to TEST_SCHEMA_RDS;
GRANT UNLIMITED TABLESPACE TO TEST_SCHEMA_RDS;

Note: This assumes the target user has already been created. Grant the above privileges from the admin (master) user provided by AWS.
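To confirm the grants are in place, you can query the dictionary from that same admin user; a quick check along these lines (user name from this example):

SQL> select privilege from dba_sys_privs where grantee = 'TEST_SCHEMA_RDS';
SQL> select privilege, table_name from dba_tab_privs where grantee = 'TEST_SCHEMA_RDS';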


Step-4: Transfer the dump file from on-premises to the AWS RDS instance

Use DBMS_FILE_TRANSFER to copy the exported dump file from the source database instance to the target DB instance.

Log in to the EC2 or on-premises Oracle instance and copy the exported dump file from the local EXPORT directory to the RDS DATA_PUMP_DIR.

The following block copies the dump file test1.dmp from the source instance to the target DB instance over the database link RDS_DBLINK_EXAMPLE (created in the previous step):


Connected to the source (on-premises/EC2) instance, run the block below; PUT_FILE pushes the file to the RDS instance over the database link:


BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'EXPORT',
    source_file_name             => 'test1.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'test1.dmp',
    destination_database         => 'RDS_DBLINK_EXAMPLE');
END;
/
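Once PUT_FILE completes, you can connect to the RDS instance and confirm that the file landed in DATA_PUMP_DIR using rdsadmin.rds_file_util.listdir (the same utility used for cleanup in Step-7 below); for example:

SQL> select filename, filesize, mtime
     from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
     where filename = 'test1.dmp';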


Step-5: Check the Data Pump directory name in AWS RDS


SQL> select owner,directory_name,directory_path from dba_directories;

OWNER       |DIRECTORY_NAME      |DIRECTORY_PATH
------------|--------------------|----------------------------------------
SYS         |DATA_PUMP_DIR       |/rdsoradata/datapump



Step-6: Use DBMS_DATAPUMP or impdp to import the dump file on the target DB instance

I have seen lots of posts on this, but most of them do not work properly. The PL/SQL block below is adapted from the Oracle documentation and works fine.

Here, you need to change the dump file name, the schema name, and the REMAP_TABLESPACE options to match your environment.

set serveroutput on;
set timing on;
DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handler
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN
  h1 := DBMS_DATAPUMP.OPEN('IMPORT','FULL',NULL,'EXAMPLE1');
  DBMS_DATAPUMP.ADD_FILE(h1,'test1.dmp','DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','TEST_SCHEMA','TEST_SCHEMA_RDS');
  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_TABLESPACE','TEST_SCHEMA','TEST_SCHEMA_RDS');
  DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','REPLACE');
  DBMS_DATAPUMP.START_JOB(h1);

 percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

    -- If the percentage done changed, display the new value.
    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

    -- If any work-in-progress (WIP) or error messages were received
    -- for the job, display them.
    if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  -- Indicate that the job finished and gracefully detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/
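While the import block is running, you can monitor it from another session on the RDS instance with the same dba_datapump_jobs query used in the expdp post above; a minimal sketch (job name from this example):

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions
     from dba_datapump_jobs
     where job_name = 'EXAMPLE1';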


Step-7: Clean up

Connect as the admin (master) user of the AWS RDS instance and run the queries below:

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) 
order by mtime;

FILENAME                    TYPE         FILESIZE MTIME
--------------------------- ---------- ---------- ---------
test1.dmp                   file           290816 31-JAN-19
example.dmp                 file       7470804992 31-JAN-19

SQL>


e.g.,

SQL> exec utl_file.fremove('DATA_PUMP_DIR','test1.dmp');
SQL> exec utl_file.fremove('DATA_PUMP_DIR','example.dmp');


SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
     order by mtime;

FILENAME                      TYPE         FILESIZE MTIME
----------------------------- ---------- ---------- ---------
datapump/                     directory        4096 01-FEB-19


I hope this helped you.
