Jun 30, 2016

TRANSACTION RECOVERY is stuck on a 2 Node RAC | Bug 23217190 | Workaround

To understand the issue and the troubleshooting mechanism, we will discuss the following:

1) What is Transaction Recovery in Oracle?
2) Possible issues
3) Related parameter and modes of Transaction Recovery
4) Detection of Transaction Recovery
5) Need of Oracle support
6) Disable Transaction Recovery
7) Precautions
8) Oracle 11.2.0.4 BUG and workaround

Details:

1) What is Transaction Recovery in Oracle?

Transaction Recovery is started by SMON to roll back all uncommitted transactions of a failed instance: "in-progress" transactions may not have committed, and Oracle needs to undo them.
Oracle applies undo blocks to roll back uncommitted changes in data blocks that were either written before the crash or introduced by redo application during cache recovery. This normally happens during the roll backward phase when the database is restarted. Transaction Recovery can be performed either by the server process that initiated the transaction or by the SMON process (in case the server process is dead).

The SMON process takes over the recovery when:
  •  The server process is dead / crashed.
  •  The instance itself has crashed.
Undo blocks (whether in rollback segments or automatic undo tablespaces) record database actions that may need to be undone during certain database operations. In database recovery, the undo blocks roll back the effects of uncommitted transactions previously applied by the rolling forward phase.

After the roll forward, any changes that were not committed must be undone. Oracle applies undo blocks to roll back uncommitted changes in data blocks that were either written before the failure or introduced by redo application during cache recovery. This process is called rolling back or transaction recovery.

2) Possible issues

a) High CPU utilization by the SMON process
b) The database may hang during a large transaction recovery.
c) If the database was shut down with abort, it may hang during the subsequent startup.
d) The database repeatedly crashes while open.
e) Huge volume of archive log generation

3) Related parameter and modes of Transaction Recovery

Parameter : fast_start_parallel_rollback

Default is:
SQL> show parameter fast_start_parallel_rollback;

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback         string      LOW
SQL> 

Values and meaning:

FALSE - Parallel rollback is disabled.
LOW   - Limits the maximum degree of parallelism to 2 * CPU_COUNT (the default).
HIGH  - Limits the maximum degree of parallelism to 4 * CPU_COUNT.

There are two modes of Transaction Recovery. These are :
a) Parallel Transaction Recovery
b) Serial Transaction Recovery

a) Parallel Transaction Recovery:

Recovery occurs in parallel mode: several parallel slave processes are spawned and take part in the recovery. This is also termed Fast-Start Parallel Rollback. The background process SMON acts as a coordinator and rolls back a set of transactions in parallel using multiple server processes. To enable parallel recovery mode, set the parameter FAST_START_PARALLEL_ROLLBACK to LOW or HIGH.

b) Serial Transaction Recovery

This mode recovers the transactions sequentially. In many cases serial transaction recovery is actually faster. Setting the FAST_START_PARALLEL_ROLLBACK parameter to FALSE enables serial transaction recovery.
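
For example, switching to serial recovery is a dynamic parameter change; a minimal sketch (on RAC, sid='*' applies it to all instances):

SQL> alter system set fast_start_parallel_rollback = FALSE scope=both sid='*';

System altered.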

4) Detection of Transaction Recovery

a) Identify Dead Transactions and their Sizes
b) Identify undo segments containing dead transactions
c) Identify the Time for transaction recovery to complete
d) Query to identify the number of parallel Recovery Slaves
e) Identify Objects involved in Transaction Recovery

a) Identify Dead Transactions and their Sizes

query:

select ktuxeusn  USN, ktuxeslt Slot, ktuxesqn  Seq, ktuxesta State, ktuxesiz Undo
from x$ktuxe
where ktuxesta <> 'INACTIVE'
and ktuxecfl like '%DEAD%'
order by ktuxesiz  asc;

e.g.,

SQL> select ktuxeusn  USN, ktuxeslt Slot, ktuxesqn  Seq, ktuxesta State, ktuxesiz Undo 
from x$ktuxe  
where ktuxesta <> 'INACTIVE'  
and ktuxecfl like '%DEAD%' 
order by ktuxesiz  asc;    

       USN       SLOT        SEQ STATE                  UNDO
---------- ---------- ---------- ---------------- ----------
        13         24      84986 ACTIVE                    1
        14          6        871 ACTIVE                    5

SQL> 

Interpreting the columns:

ktuxeusn - Undo Segment Number
ktuxeslt - Slot number
ktuxesqn - Sequence number
ktuxesta - State
ktuxesiz - Undo blocks remaining
ktuxecfl - Flag

b) Identify undo segments containing dead transactions

query:

select useg.segment_name,
       useg.segment_id,
       useg.tablespace_name,
       useg.status
  from dba_rollback_segs useg
 where useg.segment_id in
       (select unique ktuxeusn
          from x$ktuxe
         where ktuxesta <> 'INACTIVE'
           and ktuxecfl like '%DEAD%');
e.g.,

SQL> select  useg.segment_name, useg.segment_id, useg.tablespace_name, useg.status
  2  from dba_rollback_segs useg
  3  where useg.segment_id  in (select unique ktuxeusn
  4                                          from x$ktuxe
  5                                          where ktuxesta <> 'INACTIVE'
  6                                          and ktuxecfl like '%DEAD%');

SEGMENT_NAME                   SEGMENT_ID TABLESPACE_NAME                STATUS
------------------------------ ---------- ------------------------------ ----------------
_SYSSMU13_1849806892$                  13 UNDOTBS2                       ONLINE
_SYSSMU14_3452166187$                  14 UNDOTBS2                       ONLINE

2 rows selected

SQL>

c) Identify the Time for transaction recovery to complete

select usn,
       state,
       undoblockstotal "Total",
       undoblocksdone "Done",
       undoblockstotal - undoblocksdone "ToDo",
       decode(cputime,
              0,
              'unknown',
              sysdate + (((undoblockstotal - undoblocksdone) /
              (undoblocksdone / cputime)) / 86400)) "Estimated time to complete"
  from v$fast_start_transactions;
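
As a cross-check, long rollbacks are normally also reported in V$SESSION_LONGOPS; a quick sketch:

select sid, opname, sofar, totalwork, time_remaining
  from v$session_longops
 where opname like '%Rollback%'
   and totalwork > 0;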

d) Query to identify the number of parallel Recovery Slaves:

select * from v$fast_start_servers;

STATE       UNDOBLOCKSDONE        PID XID
----------- -------------- ---------- ----------------
RECOVERING              12         10 001F005C00001BD6
RECOVERING               0         19 001F005C00001BD6

The STATE column shows whether a server is IDLE or RECOVERING. If only one process is in state RECOVERING while the other processes are IDLE, you should disable Parallel Transaction Recovery. Normally these queries will show the progress of the transaction recovery.

If the number of remaining blocks does not decrease, consider disabling parallel recovery and letting the recovery continue in serial mode. A quick progress check is sketched below.
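
A simple progress check is to re-run the x$ktuxe query from section 4a a few minutes apart and compare the remaining undo blocks:

select ktuxeusn USN, ktuxesiz "Undo Blocks Remaining"
  from x$ktuxe
 where ktuxesta <> 'INACTIVE'
   and ktuxecfl like '%DEAD%';

If the counts do not shrink between runs, the recovery is stalled rather than merely slow.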

e) Identify Objects involved in Transaction Recovery

select decode(px.qcinst_id,NULL,username,' - '||lower(substr(s.program,length(s.program)-4,4) ) ) "Username",
         decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
         to_char( px.server_set) "Slave Set",
         to_char(s.sid) "SID",
         decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID", 
         px.req_degree "Requested DOP", 
         px.degree "Actual DOP" 
from gv$px_session px, gv$session s 
where px.sid=s.sid (+) 
and px.serial#=s.serial# 
order by 5 , 1 desc ;

Username     QC/Slave  Slave Set  SID    QC SID  Requested DOP  Actual DOP
------------ --------- ---------- ------ ------- -------------- ----------
SYS          QC                   736    736

SQL> select sid,serial# from gv$session where sid=1445;

      SID   SERIAL#
--------- ---------
     1445     13981
     1445     28003

SQL> select distinct current_obj#
  2  from v$active_session_history
  3  where SESSION_ID=1445
  4  and SESSION_SERIAL# in (13981,28003);

CURRENT_OBJ#
------------
       88667
       89381
       87133

SQL> 

SQL> select owner,Object_name, object_type
  2  from dba_objects
  3  where object_id in (88667,89381,87133);

OWNER        OBJECT_NAME        OBJECT_TYPE
------------ ------------------ -----------
MMGT         MATERIAL_REQUEST   TABLE
MMGT         PURCHAGE_ORDER     TABLE
MMGT         PUCHAGE_RECEIPTS   TABLE

SQL> 

Using the methods above, you can find the objects and other details related to what is in Parallel Transaction Recovery.

5) Need of Oracle Support:

Oracle Support will require undo dumps for deeper analysis and a next-level action plan:

a) Identify the undo segment name and the transaction details (XID).

select sysdate, b.name useg, a.ktuxeusn xidusn, a.ktuxeslt xidslot, a.ktuxesqn xidsqn 
from x$ktuxe a, undo$ b 
where a.ktuxesta = 'ACTIVE' 
and a.ktuxecfl like '%DEAD%' 
and a.ktuxeusn = b.us#;

b) Dump the Undo Block

Alter system dump undo block '<undo seg name>' xid <xidusn> <xidslot> <xidsqn>;

e.g.,

SQL> oradebug setmypid
Statement processed.
SQL> alter system dump undo block '_SYSSMU13_1849806892$' xid 13 24 84986 ;

System altered.

SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/prod/PROD2/trace/PROD2_ora_13959258.trc
SQL> 

You can upload the related trace file to Oracle Support for the next-level action plan.

6) Disable Transaction Recovery

  • Disabling Transaction Recovery can temporarily allow the database to be opened.
  • SMON will stop performing all transaction recovery.
  • There is no way to stop only one (or a few) transaction recoveries.
  • When transaction recovery is disabled, any data modified by a dead transaction but required by other transactions is recovered on an "as needed" basis.
  • We always suggest disabling transaction recovery only with the help of Oracle Support (see the sketch below the caution).
Caution:
Oracle does not recommend disabling Transaction Recovery and opening the database for business use.
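
For reference, the approach commonly described in Oracle support notes uses event 10513; this is a sketch only, to be used strictly under Oracle Support's guidance:

-- disable SMON transaction recovery
SQL> alter system set events '10513 trace name context forever, level 2';

-- re-enable transaction recovery once the root cause is addressed
SQL> alter system set events '10513 trace name context off';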

7) Precautions:

  • Check for long-running transactions before shutting down the database.
  • The DBA and application team should be aware of any long-running jobs.
  • Plan the downtime and inform the application team accordingly, so that they do not schedule business-critical / long-running jobs.
  • Do not panic: transaction recovery is part of normal database operations.

8) Oracle Bug and Workaround:

In the observations above, the same three tables repeatedly came up for recovery. In my experience this happened in two production environments where there was no apparent reason for this kind of transaction recovery. Oracle recently agreed with me that it is a bug, and merged it with the bug ID below for Oracle 11.2.0.4 + PSU7. Oracle is expected to release a patch soon.

 Bug 23217190

Bug details:

Bug 23217190 - TRANSACTION RECOVERY IS STUCK ON A 2 NODE RAC
=========================================================
"If you see the symptoms above, set event 10015 level 10 for SMON. If the
trace produced shows entries like this:
kturrt: skipped undo for xid: 0x02a4.021.00001850 @ uba uba:
0x00c00626.03b4.14
it is possibly this bug. Dump the undo for the transaction. For example
alter system dump undo block _SYSSMU675_3821756173$ xid 676 33 6224;
If the "Begin trans" record contains a "Global transid", then it is likely
this bug. The proxy_id names the sibling transaction which has been lost. "

Further review by the development team concluded that Bug 23217190 is similar to Bug 13841869, and it was marked as a duplicate of Bug 13841869.

++ Bug 13841869 - WHEN LOCAL TRANSACTION CONVERTS TO GLOBAL, FLAG KTUEXF_GLOBALTX IS NOT SET


Work-around:
==========

Drop and re-create the identified objects during planned downtime to fix this issue.

Note: All of these activities are at your own risk. It is highly recommended to work with Oracle Support to fix the issue.

Reference:
Oracle Documents

Jun 27, 2016

Log mining real-time steps in Oracle 11g RAC

LogMiner Benefits:

LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

1) Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth.

2) Determining what actions you would have to take to perform fine-grained recovery at the transaction level. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

3) Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.

4) Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them.
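
For example, DDL statements captured in the logs can be pulled out directly once a mining session is running (a sketch; V$LOGMNR_CONTENTS reports OPERATION = 'DDL' for such records):

SQL> select username, timestamp, sql_redo
  2  from v$logmnr_contents
  3  where operation = 'DDL';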


Steps:
1) Create a dictionary file and build it
                 DBMS_LOGMNR_D.BUILD()
2) Add redo logs
                DBMS_LOGMNR.ADD_LOGFILE()
3) Start log mining
                DBMS_LOGMNR.START_LOGMNR()
4) Querying log mining
               V$LOGMNR_CONTENTS view
5) Stop Log mining
               DBMS_LOGMNR.END_LOGMNR()

Let us discuss with below case studies:

Case Studies:

Case-1 (Extracting the Dictionary to a Flat File)

1.1 Set the init.ora parameter UTL_FILE_DIR to the directory in which you want to create the dictionary file.

Make sure to specify an existing directory that Oracle has permission to write to from the PL/SQL procedure, by setting the initialization parameter UTL_FILE_DIR in the init.ora.


Example : utl_file_dir = '/u01/LogMinerDIR'

# cd /u01
# mkdir LogMinerDIR
# chmod 777 LogMinerDIR
# chown oracle:oinstall LogMinerDIR

bash-4.2$ vi initPROD1.ora   (edit the parameter file on both nodes)
"initPROD1.ora" 1 line, 78 characters
SPFILE='+DATA/PROD/spfilePROD.ora'          
UTL_FILE_DIR = '/u01/LogMinerDIR'
~

Then, bounce the database:

bash-4.2$ srvctl status database -d PROD
Instance PROD2 is running on node rac1
Instance PROD1 is running on node rac2
bash-4.2$ srvctl stop database -d PROD
bash-4.2$ srvctl start database -d PROD

SQL> show parameter UTL_FILE_DIR;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /u01/LogMinerDIR
SQL>


1.2 Now generate the dictionary file using the following command:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('DICTIONARY.ORA', 'path', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

e.g.,

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('DICTIONARY.ORA', '/u01/LogMinerDIR', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

PL/SQL procedure successfully completed.

SQL>

1.3 Now prepare the list of the log files (online or archived) that were generated when the unwanted changes were made, and add them:

Example :
sqlplus / as sysdba
SQL>
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_1.349.915193325', OPTIONS => DBMS_LOGMNR.NEW);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_2.350.915193335', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_3.351.915193349', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_4.352.915193363', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_5.353.915193387', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_6.354.915193419', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_7.355.915193437', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_8.356.915193449', OPTIONS => DBMS_LOGMNR.ADDFILE);

Note: Mining of thread 2 logs should be done on node 2.

execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_1.279.915193333', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_2.263.9151933397', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_3.274.915193355', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_4.277.915193373', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_5.271.915193405', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_6.268.915193429', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_7.267.915193441', OPTIONS => DBMS_LOGMNR.ADDFILE);


1.4. Now you can start LogMiner as shown below:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '/path/DICTIONARY.ORA');

e.g.,

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '/u01/LogMinerDIR/DICTIONARY.ORA');

PL/SQL procedure successfully completed.

SQL>

Note: After adding the log files, start LogMiner with the dictionary as shown above. In RAC, thread 1 redo is added from node 1 and thread 2 redo from node 2.

1.5. Now you can query V$LOGMNR_CONTENTS using the following queries:

Querying Log Information:

Once LogMiner is started, the contents of the logfiles can be queried using the following views:

V$LOGMNR_DICTIONARY - The dictionary file in use.
V$LOGMNR_PARAMETERS - Current parameter settings for LogMiner.
V$LOGMNR_LOGS - Which redo log files are being analyzed.
V$LOGMNR_CONTENTS - The contents of the redo log files being analyzed.

The following query displays the SQL issued along with the undo SQL to reverse it:

SELECT scn, operation, sql_redo, sql_undo
FROM   v$logmnr_contents;

The following query displays the number of hits for each object during the analyzed period:

SELECT seg_owner, seg_name, count(*) AS Hits
FROM   v$logmnr_contents
WHERE  seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;

-- here you can view all contents

SQL> select username, operation, SQL_REDO,SQL_UNDO,SEG_OWNER FROM V$LOGMNR_CONTENTS;

-- To specific

SQL> select seg_owner, operation, sql_redo, sql_undo
 from v$logmnr_contents where SEG_NAME = '';

SQL> select username, operation, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = '';

SQL> SELECT (XIDUSN || '.'|| XIDSLT || '.'|| XIDSQN) AS XID, USERNAME, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'HR';

-- To terminate the log miner

SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;
e.g.,

SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;

PL/SQL procedure successfully completed.

SQL>


Case-2 (Extracting the Dictionary from the ONLINE CATALOG)
-- Thread1 logs start from Node1
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_1.349.915193325', OPTIONS => DBMS_LOGMNR.NEW);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_2.350.915193335', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_3.351.915193349', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_4.352.915193363', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_5.353.915193387', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_6.354.915193419', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_7.355.915193437', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_8.356.915193449', OPTIONS => DBMS_LOGMNR.ADDFILE);

-- Thread 2 logs start from node2
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_1.279.915193333', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_2.263.9151933397', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_3.274.915193355', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_4.277.915193373', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_5.271.915193405', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_6.268.915193429', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_7.267.915193441', OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = 'SCOTT';
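
Optionally, the mining window can be narrowed with STARTTIME/ENDTIME (documented parameters of DBMS_LOGMNR.START_LOGMNR); the timestamps below are hypothetical examples:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => TO_DATE('27-06-2016 09:00:00','DD-MM-YYYY HH24:MI:SS'), -
       ENDTIME   => TO_DATE('27-06-2016 10:00:00','DD-MM-YYYY HH24:MI:SS'), -
       OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);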

-- To terminate the log miner
SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;
e.g.,

SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;

PL/SQL procedure successfully completed.

SQL>

--- Sample output:

SELECT scn, operation, sql_redo, sql_undo
FROM   v$logmnr_contents;

The following query displays the number of hits for each object during the analyzed period.

SELECT seg_owner, seg_name, count(*) AS Hits
FROM   v$logmnr_contents
WHERE  seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;


SELECT OPERATION, STATUS, SQL_REDO FROM V$LOGMNR_CONTENTS
  WHERE SEG_OWNER = 'HR' AND TABLE_NAME = 'XML_CLOB_COL_TAB';

The query output looks similar to the following:

OPERATION         STATUS  SQL_REDO

INSERT            0       insert into "HR"."XML_CLOB_COL_TAB"("F1","F2","F5") values
                             ('9999','Gou45Moh','GOURANGA')
         
XML DOC BEGIN     5       update "HR"."XML_CLOB_COL_TAB" a set a."F3" = XMLType(:1)
                             where a."F1" = '9999' and a."F2" = 'Gou45Moh' and a."F5" = 'GOURANGA'

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC END       5
                                                                  
In the SQL_REDO columns for the XML DOC WRITE operations there will be actual data for the XML document. It will not be the string 'XML Data'.


Add On:

Please note that to extract a dictionary to the redo logs, the database must be open and in ARCHIVELOG mode, and archiving must be enabled.
Also, to make sure that the redo logs contain information that will provide the most value to you, you should enable at least minimal supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;   -- may take a long time
Database altered.

SQL> alter database add supplemental log data (primary key) columns;
Database altered.
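
To verify what level of supplemental logging is enabled, V$DATABASE can be checked:

SQL> select supplemental_log_data_min, supplemental_log_data_pk from v$database;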

Now, consider the following statements issued by an application against your database:

SQL> insert into test values (100002,sysdate,12,1);

1 row created.

SQL> commit;

Commit complete.

SQL> update test set hotel_id = 13 where res_id = 100002;

1 row updated.

SQL> commit;

Commit complete.

SQL> delete from test where res_id = 100002;

1 row deleted.

SQL> commit;

Commit complete.

Note: Each statement is followed by a commit, which means each statement is its own transaction.
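
To see how these transactions surface in LogMiner, reuse the queries above; a sketch, assuming the logs covering these changes were added to the mining session and TEST is the table from this example:

SQL> select username, operation, sql_redo, sql_undo
  2  from v$logmnr_contents
  3  where seg_name = 'TEST';

The SQL_UNDO column gives the statements you would run to reverse each change.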

