Sep 28, 2013

Excessive archivelog generation can degrade database performance

When a production database generates a large number of archivelogs, performance can degrade. To avoid this, increase the redo log size and add redo log groups based on the transaction volume.

Note: These steps require no downtime.

When many transactions, commits, and rollbacks occur, a large volume of redo is written from the redo log buffer, forcing frequent log switches and archivelog generation. Other factors can contribute as well.
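To gauge how much redo the instance is producing overall, you can check the cumulative statistics since startup. A quick sketch, assuming SELECT access to v$sysstat:

-- Cumulative redo activity since instance startup
select name, value
from v$sysstat
where name in ('redo size','redo writes','user commits','user rollbacks');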

-- Find no. of archivelogs in each hour:

set pages 1000
select trunc(COMPLETION_TIME,'HH') Hour, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB, count(*) Archives
from v$archived_log
group by trunc(COMPLETION_TIME,'HH'), thread#
order by 1;
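If you only need switch counts (no sizes), v$log_history gives the same hourly picture; a minimal alternative sketch:

-- Log switches per hour from v$log_history (counts only)
select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;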

Rule of thumb: 5 to 10 log switches per hour is generally acceptable for an OLTP database; some authors suggest 5 to 6.

If you observe a very high archivelog count, it can impact performance and, in extreme cases (for example, when the archiver cannot keep up), even bring the instance down. The fix is to increase the redo log size, using the following steps:
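A rough way to pick the new size: divide the average hourly redo volume (from the hourly query above) by your target number of switches per hour. For example, ~2,000 MB of redo per hour with a target of 4 switches per hour suggests logs of roughly 500 MB each. A sketch that computes this directly (the target of 4 switches per hour is an assumption, not a fixed rule):

-- Suggested redo log size for ~4 switches per hour (assumed target)
select round(avg(mb)/4) suggested_mb
from (select trunc(completion_time,'HH') hr,
             sum(blocks*block_size)/1048576 mb
      from v$archived_log
      group by trunc(completion_time,'HH'));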

-- Find the number of redo log groups and their size
select group#,members,bytes/1024/1024 "SIZE(MB)" from v$log;

-- Add a redo log group
alter database add logfile group 4 ('/oracle/oradata/prod/redolog4.log') size 200M;

-- Add a member to the group (if required)
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/prod/redolog4b.log' TO GROUP 4;
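To verify the new group and members were created as expected, a quick check against v$logfile:

-- Verify groups and members
select group#, member, status from v$logfile order by group#;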

-- To resize an existing redo log group (can be done online):

alter system switch logfile;
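Before clearing or dropping a group, confirm it is no longer CURRENT or ACTIVE; a group stays ACTIVE until its checkpoint completes. A quick check, with an optional checkpoint to hurry it along:

-- Confirm the group is INACTIVE before clearing/dropping it
select group#, status from v$log;

-- If the group is still ACTIVE, force a checkpoint and re-check
alter system checkpoint;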

-- Once the group shows INACTIVE (not in use), clear it:

alter database clear logfile group 1;

-- Then immediately drop and re-create the group at the new size:

ALTER DATABASE DROP LOGFILE GROUP 1;

ALTER DATABASE ADD LOGFILE GROUP 1 ('/oracle/oradata/prod/redo01.log') SIZE 102400K reuse;
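Repeat the switch/clear/drop/add cycle for each remaining group until all groups are at the new size, issuing ALTER SYSTEM SWITCH LOGFILE as needed so each group in turn becomes INACTIVE. Because at least one usable group is always online, no downtime is needed.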

-- Rename a redo log member (move the file at the OS level first; RENAME FILE only updates the control file)
ALTER DATABASE RENAME FILE '/oracle/oradata/prod/redolog4c.log' TO '/oracle/oradata/prod/redolog4b.log';
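Note that the documented rename procedure requires the database to be in MOUNT state. If you want to stay online instead, an alternative sketch is to drop the old member and add one at the new path (this assumes the group is not CURRENT and still has another surviving member):

-- Online alternative to renaming a member (assumes group 4 is not CURRENT)
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/oradata/prod/redolog4c.log';
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/prod/redolog4b.log' TO GROUP 4;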

-- List all redo log groups and members

spool log.lst
set echo off
set feedback off
set linesize 120
set pagesize 35
set trim on
set trims on
col group# format 999
col thread# format 999
col member format a70 wrap
col status format a10
col archived format a10
col fsize format 9999 heading "Size (MB)"

select l.group#, l.thread#, f.member, l.archived,
       l.status, (l.bytes/1024/1024) fsize
from v$log l, v$logfile f
where f.group# = l.group#
order by 1, 2
/
spool off
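Run the script from SQL*Plus; the output is written to log.lst in the current directory. Each group should show one row per member, and a healthy configuration has at least two members per group on separate disks.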

-- Archivelog generation on a daily basis:


set pages 1000
select trunc(COMPLETION_TIME,'DD') Day, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB, count(*) Archives_Generated
from v$archived_log
group by trunc(COMPLETION_TIME,'DD'), thread#
order by 1;
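If your archivelogs go to the fast recovery area, also keep an eye on its space; a stuck archiver caused by a full destination is what actually hangs the database. A quick check, assuming an FRA is configured:

-- Fast recovery area usage (only meaningful if an FRA is configured)
select name, space_limit/1048576 limit_mb,
       space_used/1048576 used_mb,
       space_reclaimable/1048576 reclaimable_mb
from v$recovery_file_dest;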

Cheers!!! I hope this document helps. DBAs... enjoy!
