Jun 27, 2016

Log mining real-time steps in Oracle 11g RAC

LogMiner Benefits:

LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

1) Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth.

2) Determining what actions you would have to take to perform fine-grained recovery at the transaction level. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

3) Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.

4) Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them.


Steps:
1) Create and build a dictionary file
                 DBMS_LOGMNR_D.BUILD()
2) Add redo logs
                DBMS_LOGMNR.ADD_LOGFILE()
3) Start log mining
                DBMS_LOGMNR.START_LOGMNR()
4) Query the log mining results
               V$LOGMNR_CONTENTS view
5) Stop log mining
               DBMS_LOGMNR.END_LOGMNR()

Let us walk through the case studies below:

Case Studies:

Case-1 (Extracting the Dictionary to a Flat File)

1.1 Set the init.ora parameter UTL_FILE_DIR to the directory in which you want to create the dictionary file.

Make sure to specify an existing directory that Oracle has permission to write to from the PL/SQL procedure.


Example : utl_file_dir = '/u01/LogMinerDIR'

# cd /u01
# mkdir LogMinerDIR
# chmod 777 LogMinerDIR
# chown oracle:oinstall LogMinerDIR

bash-4.2$ vi initPROD1.ora   (edit the parameter file on both nodes)
"initPROD1.ora" 1 line, 78 characters
SPFILE='+DATA/PROD/spfilePROD.ora'          
UTL_FILE_DIR = '/u01/LogMinerDIR'
~
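
Alternatively, since the database uses an SPFILE, the same change can be made with ALTER SYSTEM. A minimal sketch (UTL_FILE_DIR is a static parameter, so SCOPE=SPFILE is required):

SQL> ALTER SYSTEM SET utl_file_dir='/u01/LogMinerDIR' SCOPE=SPFILE SID='*';

System altered.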

Then bounce the database so the new setting takes effect:

bash-4.2$ srvctl status database -d PROD
Instance PROD2 is running on node rac1
Instance PROD1 is running on node rac2
bash-4.2$ srvctl stop database -d PROD
bash-4.2$ srvctl start database -d PROD

SQL> show parameter UTL_FILE_DIR;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /u01/LogMinerDIR
SQL>


1.2 Now generate the dictionary file using the following command:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('DICTIONARY.ORA', 'path', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

e.g.,

SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('DICTIONARY.ORA', '/u01/LogMinerDIR', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

PL/SQL procedure successfully completed.

SQL>

1.3 Now prepare the list of the redo log files (online or archived) that were generated when the unwanted changes were made.
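
To identify which archived logs cover the window of the unwanted change, a query such as the following can help (the time range here is hypothetical; adjust it to your change window):

SQL> SELECT name, thread#, sequence#, first_time, next_time
     FROM   v$archived_log
     WHERE  first_time >= TO_DATE('2016-06-27 09:00','YYYY-MM-DD HH24:MI')
     AND    next_time  <= TO_DATE('2016-06-27 11:00','YYYY-MM-DD HH24:MI')
     ORDER  BY thread#, sequence#;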

Example :
sqlplus / as sysdba
SQL>
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_1.349.915193325', OPTIONS => DBMS_LOGMNR.NEW);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_2.350.915193335', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_3.351.915193349', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_4.352.915193363', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_5.353.915193387', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_6.354.915193419', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_7.355.915193437', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_8.356.915193449', OPTIONS => DBMS_LOGMNR.ADDFILE);

Note: Mining of the thread 2 logs should be done on node 2.

execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_1.279.915193333', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_2.263.9151933397', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_3.274.915193355', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_4.277.915193373', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_5.271.915193405', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_6.268.915193429', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_7.267.915193441', OPTIONS => DBMS_LOGMNR.ADDFILE);
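
Before starting LogMiner, you can verify which files were registered with the session by querying V$LOGMNR_LOGS:

SQL> SELECT log_id, filename FROM v$logmnr_logs ORDER BY log_id;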


1.4 Now you can start LogMiner as shown below:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '/path/DICTIONARY.ORA');

e.g.,

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '/u01/LogMinerDIR/DICTIONARY.ORA');

PL/SQL procedure successfully completed.

SQL>
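
The mining window can also be restricted so that V$LOGMNR_CONTENTS only returns rows for the period of interest. A sketch with hypothetical times (STARTSCN/ENDSCN work the same way for SCN ranges):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR ( -
       DICTFILENAME => '/u01/LogMinerDIR/DICTIONARY.ORA', -
       STARTTIME    => TO_DATE('2016-06-27 09:00','YYYY-MM-DD HH24:MI'), -
       ENDTIME      => TO_DATE('2016-06-27 11:00','YYYY-MM-DD HH24:MI'));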

Note: Start LogMiner with the dictionary file only after all the log files have been added. In RAC, thread 1 redo logs are added from node 1 and thread 2 redo logs from node 2, as shown above.

1.5 Now you can query V$LOGMNR_CONTENTS using the following queries:

Querying Log Information:

Once LogMiner is started, the contents of the log files can be queried using the following views:

V$LOGMNR_DICTIONARY - The dictionary file in use.
V$LOGMNR_PARAMETERS - Current parameter settings for LogMiner.
V$LOGMNR_LOGS - Which redo log files are being analyzed.
V$LOGMNR_CONTENTS - The contents of the redo log files being analyzed.

The following query displays the SQL issued along with the undo SQL to reverse it:

SELECT scn, operation, sql_redo, sql_undo
FROM   v$logmnr_contents;

The following query displays the number of hits for each object during the analyzed period:

SELECT seg_owner, seg_name, count(*) AS Hits
FROM   v$logmnr_contents
WHERE  seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;

-- Here you can view all of the contents:

SQL> select username, operation, SQL_REDO,SQL_UNDO,SEG_OWNER FROM V$LOGMNR_CONTENTS;

-- For a specific table (fill in the segment name):

SQL> select seg_owner, operation, sql_redo, sql_undo
     from v$logmnr_contents where SEG_NAME = '';

-- For a specific schema (fill in the segment owner):

SQL> select username, operation, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = '';

SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, USERNAME, SQL_REDO, SQL_UNDO
     FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'HR';
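
For point-in-time repair of a single mistake, the SQL_UNDO column gives ready-to-run reversal statements. A hypothetical example that pulls the undo SQL for accidental deletes on an HR.TEST table:

SQL> SELECT sql_undo
     FROM   v$logmnr_contents
     WHERE  seg_owner = 'HR'
     AND    seg_name  = 'TEST'
     AND    operation = 'DELETE';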

-- To terminate the LogMiner session

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

PL/SQL procedure successfully completed.

SQL>


Case-2 (Extracting the Dictionary from the ONLINE CATALOG)
-- Thread 1 logs start from node 1
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_1.349.915193325', OPTIONS => DBMS_LOGMNR.NEW);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_2.350.915193335', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_3.351.915193349', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_4.352.915193363', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_5.353.915193387', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_6.354.915193419', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_7.355.915193437', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+DATA/PROD/onlinelog/group_8.356.915193449', OPTIONS => DBMS_LOGMNR.ADDFILE);

-- Thread 2 logs start from node 2
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_1.279.915193333', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_2.263.9151933397', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_3.274.915193355', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_4.277.915193373', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_5.271.915193405', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_6.268.915193429', OPTIONS => DBMS_LOGMNR.ADDFILE);
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '+FRA/PROD/onlinelog/group_7.267.915193441', OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
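
Options can be combined with +. For example, COMMITTED_DATA_ONLY filters out uncommitted and rolled-back changes (a common refinement, shown here as a sketch):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
       DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
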
SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = 'SCOTT';

-- To terminate the LogMiner session

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

PL/SQL procedure successfully completed.

SQL>

--- Sample output:



SELECT OPERATION, STATUS, SQL_REDO FROM V$LOGMNR_CONTENTS
  WHERE SEG_OWNER = 'HR' AND TABLE_NAME = 'XML_CLOB_COL_TAB';

The query output looks similar to the following:

OPERATION         STATUS  SQL_REDO

INSERT            0       insert into "HR"."XML_CLOB_COL_TAB"("F1","F2","F5") values
                             ('9999','Gou45Moh','GOURANGA')
         
XML DOC BEGIN     5       update "HR"."XML_CLOB_COL_TAB" a set a."F3" = XMLType(:1)
                             where a."F1" = '9999' and a."F2" = 'Gou45Moh' and a."F5" = 'GOURANGA'

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC END       5
                                                                  
In the SQL_REDO column for the XML DOC WRITE operations there will be the actual data for the XML document, not the literal string 'XML Data'.


Add On:

Please note that to extract a dictionary to the redo logs, the database must be open, in ARCHIVELOG mode, and archiving must be enabled.
Also, to make sure that the redo logs contain information that will provide the most value to you, enable at least minimal supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;   -- can take a long time on a busy database
Database altered.

SQL> alter database add supplemental log data (primary key) columns;
Database altered.
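
You can confirm the current supplemental logging status from V$DATABASE:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;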

Now, consider the following statements issued by an application against your database:

SQL> insert into test values (100002,sysdate,12,1);

1 row created.

SQL> commit;

Commit complete.

SQL> update test set hotel_id = 13 where res_id = 100002;

1 row updated.

SQL> commit;

Commit complete.

SQL> delete from test where res_id = 100002;

1 row deleted.

SQL> commit;

Commit complete.

Note: Each statement is followed by a commit, so each statement is a separate transaction.
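
Once the logs covering these changes have been mined, the three transactions can be located by table name; a sketch against the demo TEST table:

SQL> SELECT operation, sql_redo, sql_undo
     FROM   v$logmnr_contents
     WHERE  seg_name = 'TEST'
     AND    operation IN ('INSERT','UPDATE','DELETE');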


2 comments:

  1. Is there any way that I can get my data back from archive log files when my testing server no longer exists? I don't have any Data Guard or physical backup; the only thing I have is the archive logs.

     Can I do log mining in a completely new instance?

  2. Thanks for reading my blog.
     If you have restored from a backup controlfile or standby controlfile, you can recover from archive logs without any Data Guard setup.

     Yes, you can do log mining in any new instance.
     rgds.
