Oct 26, 2022

Oracle GoldenGate 19c Setup

Key Features of GoldenGate

1) Latency Reduction: Data movement across multiple systems or databases happens in real time, which reduces latency.

2) Consistency & Performance: Oracle GoldenGate Replication only moves committed transactions. It keeps the operations of each transaction in a controlled virtual-memory pool called a cache until the transaction is committed or rolled back. The Oracle GoldenGate cache manager makes use of the operating system’s memory management functions to ensure consistency and improve performance.

3) Support of Databases: Multiple Oracle Database versions and releases are supported, as well as a wide range of heterogeneous databases running on a variety of operating systems. Data from an Oracle database can be replicated to a heterogeneous database.

4) Simplicity: GoldenGate has a simple architecture and easy configuration.

5) Minimal Overhead: GoldenGate has a low overhead on the underlying databases and infrastructure, resulting in high performance.


Plan: 

Install Oracle 19c and GoldenGate 19c on both machines and configure replication.

-- source

[oracle@example01 ~]$ hostname -i

172.168.100.1

[oracle@example01 ~]$


-- Target

[oracle@example02 ~]$ hostname -i

172.168.100.2

[oracle@example02 ~]$


-- Install Oracle 19c Database Software and Create Database 

1) create path

mkdir -p /u01/app/oracle/product/19.3.0.0/dbhome_1


cp /software/LINUX.X64_193000_db_home.zip /u01/app/oracle/product/19.3.0.0/dbhome_1

cd /u01/app/oracle/product/19.3.0.0/dbhome_1

unzip LINUX.X64_193000_db_home.zip


$ ./runInstaller -silent -debug -force \

oracle.install.option=INSTALL_DB_SWONLY \

UNIX_GROUP_NAME=dba \

ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1 \

ORACLE_BASE=/u01/app/oracle \

oracle.install.db.InstallEdition=EE \

oracle.install.db.DBA_GROUP=dba \

oracle.install.db.OPER_GROUP=dba \

oracle.install.db.OSBACKUPDBA_GROUP=dba \

oracle.install.db.OSDGDBA_GROUP=dba \

oracle.install.db.OSKMDBA_GROUP=dba \

oracle.install.db.OSRACDBA_GROUP=dba \

DECLINE_SECURITY_UPDATES=true


Source GG:

172.168.100.1

oracle/oracle


$ pgrep -lf pmon

2226 ora_pmon_oemdb

5918 ora_pmon_democd


export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


$ dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName SRCGGDB -sid SRCGGDB -sysPassword sys -systemPassword system -emConfiguration NONE -datafileDestination /u01/oradata -storageType FS -characterSet AL32UTF8


export ORACLE_SID=SRCGGDB


Target GG:

[oracle@example02 scratch]$ hostname -i

172.168.100.2

oracle/oracle


export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


$ dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName TGTGGDB -sid TGTGGDB -sysPassword sys -systemPassword system -emConfiguration NONE -datafileDestination /u01/oradata -storageType FS -characterSet AL32UTF8


GG Installation: Source Server :

copy software to server

unzip into a folder

unzip -d GG19c_Software V983658-01.zip

cd /softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1


create folders for GG Home software:

mkdir -p /u01/goldengate/gg_home

cd /softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response

$ cat oggcore.rsp

oracle.install.responseFileVersion=/u01/goldengate/install/rspfmt_ogginstall_response_schema_v19_1_0

INSTALL_OPTION=ORA19c

SOFTWARE_LOCATION=/u01/goldengate/gg_home

START_MANAGER=false

INVENTORY_LOCATION=/u01/app/oraInventory

DATABASE_LOCATION=/u01/app/oracle/product/19.3.0.0/dbhome_1

UNIX_GROUP_NAME=dba


$ cd /softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1

./runInstaller -silent -showProgress -waitforcompletion -responseFile /softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp

[oracle@example01 Disk1]$ pwd

/softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1

[oracle@example01 Disk1]$ ./runInstaller -silent -showProgress -waitforcompletion -responseFile /softwares/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp

Starting Oracle Universal Installer...


Checking Temp space: must be greater than 120 MB.   Actual 62877 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 14697 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-10-20_07-33-32AM. Please wait ...You can find the log of this install session at:

 /u01/app/oraInventory/logs/installActions2022-10-20_07-33-32AM.log


Prepare in progress.

..................................................   10% Done.


Prepare successful.


Copy files in progress.

..................................................   36% Done.

..................................................   54% Done.

..................................................   77% Done.

..................................................   82% Done.

..................................................   88% Done.

....................

Copy files successful.


Link binaries in progress.

..........

Link binaries successful.


Setup files in progress.

..................................................   93% Done.

..................................................   95% Done.

..................................................   96% Done.

..................................................   98% Done.

..................................................   99% Done.


Setup files successful.


Setup Inventory in progress.


Setup Inventory successful.

..................................................   95% Done.

Successfully Setup Software.

..................................................   100% Done.


Finish Setup successful.

The installation of Oracle GoldenGate Core was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2022-10-20_07-33-32AM.log' for more details.



===============================================

GG Installation: Target Server 

===============================================

-- copy software to server

unzip into a folder

unzip -d GG19c_Software V983658-01.zip


cd /software/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1


-- create folders for GG Home software:

mkdir -p /u01/goldengate/gg_home

--create response file:

[oracle@example02 response]$ pwd

/software/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response

[oracle@example02 response]$ vi oggcore.rsp

$ cat oggcore.rsp

oracle.install.responseFileVersion=/u01/goldengate/install/rspfmt_ogginstall_response_schema_v19_1_0

INSTALL_OPTION=ORA19c

SOFTWARE_LOCATION=/u01/goldengate/gg_home

START_MANAGER=false

INVENTORY_LOCATION=/u01/app/oraInventory

DATABASE_LOCATION=/u01/app/oracle/product/19.3.0.0/dbhome_1

UNIX_GROUP_NAME=dba


$ cd /software/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1

./runInstaller -silent -showProgress -waitforcompletion -responseFile /software/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp


log:

[oracle@example02 Disk1]$ ./runInstaller -silent -showProgress -waitforcompletion -responseFile /software/GG19c_Software/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp

Starting Oracle Universal Installer...


Checking Temp space: must be greater than 120 MB.   Actual 19273 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 14946 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-10-20_07-25-03AM. Please wait ...You can find the log of this install session at:

 /u01/app/oraInventory/logs/installActions2022-10-20_07-25-03AM.log


Prepare in progress.

..................................................   10% Done.


Prepare successful.


Copy files in progress.

..................................................   36% Done.

..................................................   54% Done.

..................................................   77% Done.

..................................................   82% Done.

..................................................   88% Done.

....................

Copy files successful.


Link binaries in progress.

..........

Link binaries successful.


Setup files in progress.

..................................................   93% Done.

..................................................   95% Done.

..................................................   96% Done.

..................................................   98% Done.

..................................................   99% Done.


Setup files successful.


Setup Inventory in progress.


Setup Inventory successful.

..................................................   95% Done.

Successfully Setup Software.

..................................................   100% Done.


Finish Setup successful.

The installation of Oracle GoldenGate Core was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2022-10-20_07-25-03AM.log' for more details.

[oracle@example02 Disk1]$



=========================================================

Configure Golden Gate in both Source and Target Databases

=========================================================

1) The database must be in archivelog mode (source & target)

``````````````````````````````````````````````````````````````````````````````````````````````````

SQL> show parameter db_recovery_file_dest;


NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

db_recovery_file_dest                string      /u01/fra

db_recovery_file_dest_size           big integer 10G

SQL> select LOG_MODE from v$database;


LOG_MODE

------------

ARCHIVELOG


SQL>


2) Verify that supplemental logging and forced logging are set properly (source).

``````````````````````````````````````````````````````````````````````````````````````````````````

ALTER DATABASE FORCE LOGGING;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> select SUPPLEMENTAL_LOG_DATA_MIN,FORCE_LOGGING from v$database;


SUPPLEME FORCE_LOGGING

-------- ---------------------------------------

YES      YES



3) This parameter must be changed on source and target databases:

``````````````````````````````````````````````````````````````````````````````````````````````````

ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;


SQL> show parameter ENABLE_GOLDENGATE_REPLICATION


NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

enable_goldengate_replication        boolean     TRUE

SQL>


4) Create the administrator and user/schema owners on both source and target database.

``````````````````````````````````````````````````````````````````````````````````````````````````

-- Source

SQL> create tablespace goldengate datafile '/u01/oradata/SRCGGDB/goldengate01.dbf' size 1G autoextend on MAXSIZE UNLIMITED;

Tablespace created.


SQL> create user srcgguser identified by srcgguser default tablespace goldengate quota unlimited on goldengate;

User created.


SQL> grant create session,connect,resource,alter system to srcgguser;

Grant succeeded.


SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'srcgguser', privilege_type=>'CAPTURE', grant_optional_privileges=>'*');

PL/SQL procedure successfully completed.


-- Target

SQL> create tablespace goldengate datafile '/u01/oradata/TGTGGDB/goldengate01.dbf' size 1G autoextend on MAXSIZE UNLIMITED;

Tablespace created.


SQL> create user tgtgguser identified by tgtgguser default tablespace goldengate quota unlimited on goldengate;

User created.


SQL> grant create session,connect,resource,alter system to tgtgguser;

Grant succeeded.


SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'tgtgguser', privilege_type=>'*', grant_optional_privileges=>'*');

PL/SQL procedure successfully completed.


--------------set alias - source & target (optional) -----------------------------

alias src='export ORACLE_SID=SRCGGDB;export ORACLE_UNQNAME=SRCGGDB;export OGG_HOME=/u01/goldengate/gg_home;export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1;export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$OGG_HOME;export PATH=$ORACLE_HOME/bin:/bin:/usr/bin'


alias tgt='export ORACLE_SID=TGTGGDB;export ORACLE_UNQNAME=TGTGGDB;export OGG_HOME=/u01/goldengate/gg_home;export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1;export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$OGG_HOME;export PATH=$ORACLE_HOME/bin:/bin:/usr/bin'


---------------------------------------------------------------------------------------------------------------------------------------


5) Go to the GoldenGate installation location (in our scenario /u01/goldengate/gg_home)

and then run the following built-in GoldenGate scripts to create all the objects needed to support DDL replication.

````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````

-- Source (use the source GG user 'srcgguser')

[oracle@example01 ~]$ hostname -i

172.168.100.1

[oracle@example01 ~]$


cd /u01/goldengate/gg_home

[oracle@example01 gg_home]$ pwd

/u01/goldengate/gg_home

$ sqlplus / as sysdba

SQL>

@marker_setup.sql

@ddl_setup.sql

@role_setup.sql

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

@ddl_enable.sql


export OGG_HOME=/u01/goldengate/gg_home

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$OGG_HOME


Start GGSCI and log in to the database using the dblogin command.


dblogin userid <<gguser>>, password <<gguser>>


-- Target (use the target GG user 'tgtgguser')

[oracle@example02 ~]$ hostname -i

172.168.100.2

[oracle@example02 ~]$


cd /u01/goldengate/gg_home

[oracle@example02 gg_home]$ pwd

/u01/goldengate/gg_home

$ sqlplus / as sysdba

SQL>

@marker_setup.sql

@ddl_setup.sql

@role_setup.sql

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

@ddl_enable.sql


6) Configure Golden Gate to start GG processes

````````````````````````````````````````````````````````

Create the directories below in both source and target:

export OGG_HOME=/u01/goldengate/gg_home

cd $OGG_HOME


mkdir /u01/goldengate/gg_home/dirprm

mkdir /u01/goldengate/gg_home/dirdat

mkdir /u01/goldengate/gg_home/dirchk

mkdir /u01/goldengate/gg_home/BR

mkdir /u01/goldengate/gg_home/dirdmp

mkdir /u01/goldengate/gg_home/dirrpt

mkdir /u01/goldengate/gg_home/dirpcs

mkdir /u01/goldengate/gg_home/dirtmp


7) Create the Manager parameter file on the source server and start the Manager

`````````````````````````````````````````````````````````

-- source

cd $OGG_HOME/dirprm

vi mgr.prm


PORT 7801

AUTORESTART EXTRACT EXT*, RETRIES 5, WAITMINUTES 5, RESETMINUTES 60

PURGEOLDEXTRACTS dirdat/*, USECHECKPOINTS


-- start manager

[oracle@example01 gg_home]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle

Version 19.1.0.0.4 OGGCORE_19.1.0.0.0_PLATFORMS_191017.1054_FBO

Linux, x64, 64bit (optimized), Oracle 19c on Oct 17 2019 21:16:29

Operating system character set identified as UTF-8.


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


GGSCI (example01) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (example01) 2> start manager

Manager started.


GGSCI (example01) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING


GGSCI (example01) 4>



8) Create the Manager parameter file on the target server and start the Manager

``````````````````````````````````````````````````````````````````````

Note: Grant the dba role to the target GG user (tgtgguser)


SQL> grant dba to tgtgguser;

Grant succeeded.

SQL> @role_setup.sql

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

SQL> GRANT GGS_GGSUSER_ROLE TO tgtgguser;

SQL> GRANT EXECUTE ON dbms_logmnr_d TO tgtgguser;


-- create manager


cd $OGG_HOME/dirprm

vi mgr.prm


PORT 7801

AUTORESTART REPLICAT REP*, RETRIES 5, WAITMINUTES 5, RESETMINUTES 60

PURGEOLDEXTRACTS dirdat/*, USECHECKPOINTS


-- start manager


[oracle@example02 gg_home]$ ./ggsci

GGSCI (example02) 1> info all


Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (example02) 2> start manager

Manager started.


GGSCI (example02) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING


GGSCI (example02) 4>


============================================================================

Create Demo data in source database : SRCGGDB


cd $ORACLE_HOME

cd demo/schema/human_resources

ls 

hr_analz.sql  hr_code.sql  hr_comnt.sql  hr_cre.sql  hr_drop_new.sql  hr_drop.sql  hr_idx.sql  hr_main_new.sql  hr_main.sql  hr_popul.sql


$ sqlplus / as sysdba

SQL>@hr_main.sql


-- verify

$ sqlplus hr/oracle

SQL> SELECT table_name FROM user_tables;


TABLE_NAME

--------------------------------------------------------------------------------

REGIONS

COUNTRIES

LOCATIONS

DEPARTMENTS

JOBS

EMPLOYEES

JOB_HISTORY


7 rows selected.


SQL>

============================================================================

9) Create HR schema only in target database : TGTGGDB

```````````````````````````````````````````````````````````````

SQL> create tablespace HR datafile '/u01/oradata/TGTGGDB/hr01.dbf' size 1G autoextend on MAXSIZE UNLIMITED;

Tablespace created.


SQL> create user HR identified by tgtoracle default tablespace HR quota unlimited on HR;

User created.


SQL> grant create session,connect,resource,alter system to HR;

Grant succeeded.


--------------------- Add both database entries in the tnsnames.ora file ---------------------


SRCGGDB =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.168.100.1)(PORT = 1521))

    )

    (CONNECT_DATA =

      (SERVICE_NAME = SRCGGDB)

    )

  )
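
The matching target entry follows the same shape; the host, port, and service name below simply mirror the target values shown earlier (adjust if your listener differs):

TGTGGDB =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.168.100.2)(PORT = 1521))

    )

    (CONNECT_DATA =

      (SERVICE_NAME = TGTGGDB)

    )

  )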

----------------------------------------------------------------------------------------------


10) Create a sample Extract and data pump in the source:

``````````````````````````````````````````````````````````````

-- source

$ cd $OGG_HOME

[oracle@example01 gg_home]$ ./ggsci


GGSCI (example01) 1> dblogin userid srcgguser,password srcgguser

Successfully logged into database.


GGSCI (example01 as srcgguser@SRCGGDB) 2>


add schematrandata HR

register extract ext1 database

add extract ext1, integrated tranlog, begin now

add exttrail /u01/goldengate/gg_home/dirdat/tr, extract ext1


edit params ext1


extract ext1

SETENV (ORACLE_HOME ="/u01/app/oracle/product/19.3.0.0/dbhome_1")

SETENV (ORACLE_SID ="SRCGGDB")

Userid srcgguser@SRCGGDB, PASSWORD srcgguser

EXTTRAIL /u01/goldengate/gg_home/dirdat/tr

DDL INCLUDE MAPPED

SEQUENCE HR.*;

TABLE HR.*;


:wq


-- to view 

ggsci> view params ext1


-- Configure Pump process:


ggsci> add extract extpmp1, EXTTRAILSOURCE /u01/goldengate/gg_home/dirdat/tr

-- The command below writes the trail to the remote/target GG server for replication

ggsci> add rmttrail /u01/goldengate/gg_home/dirdat/tr, extract extpmp1

ggsci> edit params extpmp1

extract extpmp1

PASSTHRU

RMTHOST 172.168.100.2, MGRPORT 7801

RMTTRAIL /u01/goldengate/gg_home/dirdat/tr

SEQUENCE HR.*;

TABLE HR.*;


:wq


-- start all the process


GGSCI (example01) 3> info all


Program     Status      Group       Lag at Chkpt  Time Since Chkpt


MANAGER     RUNNING

EXTRACT     STOPPED     EXT1        00:00:00      00:28:18

EXTRACT     STOPPED     EXTPMP1     00:00:00      00:00:04


GGSCI (example01) 4> start *


Sending START request to MANAGER ...

EXTRACT EXT1 starting


Sending START request to MANAGER ...

EXTRACT EXTPMP1 starting


GGSCI (example01) 5> info all


Program     Status      Group       Lag at Chkpt  Time Since Chkpt


MANAGER     RUNNING

EXTRACT     RUNNING     EXT1        00:00:00      00:28:28

EXTRACT     RUNNING     EXTPMP1     00:00:00      00:00:14

GGSCI (example01) 6>



11) Configure the Target Machine (Single Instance Database): TGTGGDB

`````````````````````````````````````````````````````````````````````````````

-- Target

ggsci> dblogin userid tgtgguser,password tgtgguser

ggsci> add CHECKPOINTTABLE tgtgguser.GGCHKPT


-- Register Replicats to database:


ggsci> register replicat REP1 database


-- Add Replicats:


ggsci> add replicat REP1, INTEGRATED, exttrail /u01/goldengate/gg_home/dirdat/tr, CHECKPOINTTABLE tgtgguser.GGCHKPT


-- Parameter file for Replicat:


ggsci> edit params rep1

REPLICAT REP1

SETENV (ORACLE_HOME="/u01/app/oracle/product/19.3.0.0/dbhome_1")

SETENV (ORACLE_SID = "TGTGGDB")

USERID tgtgguser, PASSWORD tgtgguser

DISCARDFILE /u01/goldengate/gg_home/dirrpt/carerpt.dsc, PURGE

DISCARDROLLOVER ON SUNDAY

MAP HR.* TARGET HR.*;


:wq


-- start the replicat process


GGSCI (example02 as tgtgguser@TGTGGDB) 4> info all


Program     Status      Group       Lag at Chkpt  Time Since Chkpt


MANAGER     RUNNING

REPLICAT    STOPPED     REP1        00:00:00      00:01:34



GGSCI (example02 as tgtgguser@TGTGGDB) 5> start REPLICAT REP1


Sending START request to MANAGER ...

REPLICAT REP1 starting



GGSCI (example02 as tgtgguser@TGTGGDB) 6> info all


Program     Status      Group       Lag at Chkpt  Time Since Chkpt


MANAGER     RUNNING

REPLICAT    RUNNING     REP1        00:00:00      00:00:00



GGSCI (example02 as tgtgguser@TGTGGDB) 7>


12) Perform the Initial Load (using export/import):

`````````````````````````````````````````````````````````````````````

-- verify the HR login on the target
sqlplus hr/tgtoracle

connected.


Export the HR schema from the source and import it into the target database.

Then perform some transactions in the source and verify them in the target.
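
The export/import itself is not shown above; a minimal Data Pump sketch follows. The directory, file names, and the consistent-SCN handling are assumptions — adjust to your environment, and take the SCN at or before the point the Extract starts capturing:

```shell
# On the source server (SRCGGDB): export HR as of a consistent SCN.
# Get the SCN first, e.g. in SQL*Plus: select current_scn from v$database;
expdp system/system@SRCGGDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_initload.dmp logfile=hr_initload.log \
      flashback_scn=<scn_from_source>

# Copy hr_initload.dmp to the target server's DATA_PUMP_DIR directory, then:
impdp system/system@TGTGGDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_initload.dmp logfile=hr_impload.log
```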


Testing:

============

--- Source

sqlplus hr/oracle

connected.

SQL> insert into DEPARTMENTS values(999,'Goldengate',200,1700);


1 row created.


SQL> commit;


Commit complete.


SQL> select count(1) from DEPARTMENTS;


  COUNT(1)

----------

        28


SQL> !hostname -i

172.168.100.1


SQL>



-- Target 


sqlplus hr/tgtoracle

connected.



SQL> select count(1) from DEPARTMENTS;


  COUNT(1)

----------

        28


SQL> select * from DEPARTMENTS where DEPARTMENT_ID=999;


DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID

------------- ------------------------------ ---------- -----------

          999 Goldengate                            200        1700


SQL> !hostname -i

172.168.100.2


SQL>


-- data matching. Replication is working fine.




May 16, 2022

Oracle AWR baseline

Internally, an AWR baseline is a range of snapshots that can be compared to other pairs of snapshots. The Oracle database server exempts the snapshots assigned to a baseline from the automated purge routine. Thus, the main purpose of a baseline is to preserve typical runtime statistics in the AWR repository, allowing you to run AWR snapshot reports on the preserved baseline snapshots at any time and compare them to recent snapshots contained in the AWR.

This allows you to compare current performance (and configuration) to established baseline performance, which can assist in determining database performance problems.

How to create/drop an AWR baseline?

To capture a baseline of system performance, create a baseline and use it for later comparisons. If a snapshot id passed in is invalid, the procedure returns an error message indicating that. The snapshot baseline is kept until it is dropped manually.

SQL> EXEC dbms_workload_repository.create_baseline(start_snap_id=>1, end_snap_id=>10, baseline_name=>'First baseline');

-- If cascade is TRUE, it will also drop the snapshot in the baseline.
-- If cascade is FALSE, it will only drop the baseline (not the snapshot)
SQL> EXEC dbms_workload_repository.drop_baseline(baseline_name=>'First baseline', cascade=>false);

-- view baseline name, start and end snap id
SQL> select BASELINE_NAME, START_SNAP_ID, END_SNAP_ID from dba_hist_baseline;
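
To actually compare a preserved baseline against a recent period, you can run a compare-period report. A sketch — the dbid and snapshot ids below are placeholders, taken from dba_hist_snapshot in your own repository:

```sql
-- Interactive compare-period report; prompts for both snapshot ranges:
--   SQL> @?/rdbms/admin/awrddrpt.sql

-- Or generate the text diff report directly (positional arguments:
-- dbid, instance, begin/end snap of period 1, then the same for period 2).
SELECT output
FROM   TABLE(dbms_workload_repository.awr_diff_report_text(
               1234567890, 1, 1, 10,      -- period 1: baseline snapshots
               1234567890, 1, 95, 100));  -- period 2: recent snapshots
```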

May 11, 2022

Manage your Database Flash Recovery Area (FRA)

 In the Oracle database, the Flash Recovery Area or FRA is a location on disk where the database can create and manage several kinds of backup and recovery-related files. 
Main file types are archivelog, flashback log, backups, as well as mirrors for your control files and redo log files.
All files in the FRA are Oracle-managed files. Using a Flash Recovery Area simplifies the administration of your database: these files are automatically retained for as long as they are needed for restore and recovery activities, and deleted when they are no longer needed and the space is required for another backup- and recovery-related purpose.
The FRA size is set with only one parameter for all file types together, but we can do some calculations on the size needed for individual file types.
Flash Recovery Area needs
Before you start configuring your FRA sizing, you need to define your own needs for the recovery window and retention time.
How long do you need to keep your backups?
How do you want to use flashback database?
How much archivelog do you want to keep on disk?
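
As a back-of-the-envelope sizing aid, you can simply add the components together. All numbers below are made-up placeholders, not recommendations — substitute your own measurements:

```shell
# Rough FRA sizing sketch -- every value here is a hypothetical placeholder.
DAILY_ARCH_GB=40     # average archivelog generated per day
ARCH_DAYS=3          # days of archivelog you want to keep on disk
FLASHBACK_GB=120     # flashback logs for your db_flashback_retention_target
BACKUP_GB=500        # on-disk backup pieces kept in the FRA
FRA_GB=$(( ARCH_DAYS * DAILY_ARCH_GB + FLASHBACK_GB + BACKUP_GB ))
echo "Suggested db_recovery_file_dest_size: ${FRA_GB}G"
# prints: Suggested db_recovery_file_dest_size: 740G
```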
Checking the current usage:
You can check the configuration by looking at two parameters.
SQL> SHOW parameter db_recovery_file_dest
 NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big INTEGER 2300G
SQL>
The current FRA usage can be checked with the views v$recovery_area_usage (for each file type) and v$recovery_file_dest (for overall size and usage).
break on report
compute sum of percent_space_used on report
compute sum of percent_space_reclaimable on report
 select file_type
,      percent_space_used
,      percent_space_reclaimable
,      number_of_files
,      con_id
from   v$recovery_area_usage
order by 1
/
 
FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
ARCHIVED LOG                         61.57                     61.57             243          0
AUXILIARY DATAFILE COPY                  0                         0               0          0
BACKUP PIECE                             0                         0               0          0
CONTROL FILE                             0                         0               0          0
FLASHBACK LOG                        18.08                       .34              53          0
FOREIGN ARCHIVED LOG                   .12                         0               5          0
IMAGE COPY                               0                         0               0          0
REDO LOG                                 0                         0               0          0
                        ------------------ -------------------------
sum                                  79.77                     61.91
 
8 rows selected.

 
col name format a7
clear breaks
clear computes 
 
select name
,      round(space_limit / 1024 / 1024) size_mb
,      round(space_used  / 1024 / 1024) used_mb
,      decode(nvl(space_used,0),0,0,round((space_used/space_limit) * 100)) pct_used
from v$recovery_file_dest
order by name
/
 
NAME       SIZE_MB    USED_MB   PCT_USED
------- ---------- ---------- ----------
+RECO      1201138     958099         80

The v$recovery_area_usage view gives information about reclaimable files. Files that are reclaimable will be removed by the database when the space is needed for other purposes. This is done when the usage of the FRA is about 80% of the defined size (db_recovery_file_dest_size).
For monitoring your FRA, you need to check your unreclaimable space. Just checking the percentage of the FRA in use is not very helpful, because it will often be around 80%. 
This query combines the two views to calculate PERCENT_SPACE_NOT_RECLAIMABLE. If it is around (or above) 80%, you will need to act, because in that situation your actual FRA usage will also rise above 80%. It is an indication that Oracle cannot remove files, because all of them must be kept for recovery purposes. The most common problem with an undersized FRA is that the database hangs when it cannot create an archivelog file at the time of a log switch.
col name format a7
clear breaks
clear computes
 
select name
,      round(space_limit / 1024 / 1024) space_limit_mb
,      round(space_used  / 1024 / 1024) space_used_mb
,      percent_space_used
,      percent_space_reclaimable
,      percent_space_not_reclaimable
from v$recovery_file_dest
,    ( select sum(percent_space_reclaimable)                      percent_space_reclaimable
       ,      sum(percent_space_used)                             percent_space_used
       ,      sum(percent_space_used - percent_space_reclaimable) percent_space_not_reclaimable
       from  v$recovery_area_usage)
order by name
/

 
 
NAME    SPACE_LIMIT_MB SPACE_USED_MB PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE PERCENT_SPACE_NOT_RECLAIMABLE
------- -------------- ------------- ------------------ ------------------------- -----------------------------
+RECO          1201138        958099              79.77                     61.91                         17.86
Configuring the FRA size
Configuration of the FRA size is done with the parameter db_recovery_file_dest_size.
SQL> SHOW parameter db_recovery_file_dest
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big INTEGER 2300G
SQL>
SQL> ALTER system  SET db_recovery_file_dest_size=3500G  scope=BOTH ;
 
System altered.
SQL> 

Configuration in a RAC cluster:

For a RAC cluster, you should configure a shared recovery area for all instances, with the same location and size for all instances.
It is good to know that the contents of v$recovery_area_usage also cover the files from all instances. Queries on this view give the same results on all instances. There is no gv$recovery_area_usage view.

SQL> SHOW parameter db_recovery_file_dest
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big INTEGER 2300G
 
SQL> 
SQL> SELECT inst_id , VALUE / (1024 * 1024 * 1024) GB  
     FROM gv$parameter 
     WHERE name = 'db_recovery_file_dest_size'
 /
 
   INST_ID         GB
---------- ----------
         1       2300
         2       2300
 
SQL> 

SQL> ALTER system  SET db_recovery_file_dest_size=3500G  sid='*' scope=BOTH ;
 
System altered.
 
SQL> SELECT inst_id , VALUE / (1024 * 1024 * 1024) GB  
     FROM gv$parameter 
     WHERE name = 'db_recovery_file_dest_size'
 /
 
   INST_ID         GB
---------- ----------
         1       3500
         2       3500
 
SQL>

So, in this case, you would use a maximum of 3500 GB on the +RECO disk group for the database. You do not need 3500 GB for each instance.  

Configuring archivelog deletion:

You can define your archivelog deletion policy in RMAN. If there is no archived redo log deletion policy, the files can be deleted once they have been backed up at least once to disk or SBT, or once the logs are obsolete according to the backup retention policy.
If you do create an archivelog deletion policy, files can be deleted once they meet the requirements in the policy.

Examples are:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO SBT;
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

The deletion policy itself does not delete archivelog files. It is a protection: files are not deleted if deletion would conflict with the policy.

Two common ways to delete archivelogs are:
  • Delete as part of your backup script 
  • Do not create any cleanup script and let the database handle it

A common RMAN script for archivelog files is:

backup device type sbt archivelog all not backed up 1 times ;
delete noprompt archivelog until time = 'sysdate-1' backed up 1 times to sbt ;


The delete statement will check the deletion policy, so files won't be deleted if that conflicts with the policy. However, you will receive a warning during the backup:

archived log file name=+RECO/MYDB/ARCHIVELOG/2020_06_26/thread_1_seq_2.4585.1044140407 thread=1 sequence=2

RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

If you don't run a delete in your backup script, the files can simply be deleted by the database, based on your configuration. Remember, we are using OMF for the FRA, so these are Oracle Managed Files.
For example, if you have a Data Guard standby database and no backup running on the standby, you will find that the database deletes archivelogs that have been applied. This concerns the files that are "reclaimable" in the v$recovery_area_usage view. So, when the database runs out of space, it removes some of the reclaimable files, as you can see in the alert log:

Mon May 09 04:55:55 2022
Deleted Oracle managed file +RECO/MYDB/ARCHIVELOG/2020_05_02/thread_1_seq_55913.1742.1044422411
Deleted Oracle managed file
 
Do not create a shell script that will just delete archivelog files. If your database doesn't know you removed them, you can still get an error if you run out of space in your FRA. Always use RMAN to delete archivelog files.

Calculating archivelog size:

How much archivelog is created? You can check the size of your redo log files in v$log and multiply it by the average number of log switches from v$loghist. But that's not perfect, as a log switch can happen before the redo log file is 100% full.
It’s better to look at the v$archived_log view. This will have more information about all the created archivelogs:

  • If you use the gv$archived_log version, you will see all instances of a RAC database.
  • If you filter on gv$archive_dest.destination = 'USE_DB_RECOVERY_FILE_DEST', you will only get files in the FRA (so remote destinations used by Data Guard are skipped).
  • The status column shows whether the file is still available in the FRA.

select trunc(l.first_time) arch_date
,      l.inst_id
,      l.status
,      n.destination
,      round(sum( blocks * block_size )  / ( 1024 * 1024 * 1024 ) ) total_file_size_gb
,      count(*) file_count
from   gv$archived_log l
,      gv$archive_dest n
where  l.inst_id = n.inst_id
and    l.dest_id = n.dest_id
and    n.destination = 'USE_DB_RECOVERY_FILE_DEST'
group by trunc(l.first_time)
,        l.inst_id
,        l.status
,        n.destination
order by trunc(l.first_time)
,        l.inst_id
,        l.status
,        n.destination
/

 
ARCH_DATE    INST_ID S DESTINATION               TOTAL_FILE_SIZE_GB FILE_COUNT
--------- ---------- - ------------------------- ------------------ ----------
08-MAY-22          1 D USE_DB_RECOVERY_FILE_DEST                108         46
08-MAY-22          2 D USE_DB_RECOVERY_FILE_DEST                108         46
09-MAY-22          1 D USE_DB_RECOVERY_FILE_DEST                299        116
09-MAY-22          2 D USE_DB_RECOVERY_FILE_DEST                299        116
10-MAY-22          1 D USE_DB_RECOVERY_FILE_DEST                441        167
10-MAY-22          2 D USE_DB_RECOVERY_FILE_DEST                441        167
11-MAY-22          1 D USE_DB_RECOVERY_FILE_DEST                406        146
11-MAY-22          2 D USE_DB_RECOVERY_FILE_DEST                406        146
[....]
64 rows selected.

The status can have several values: 
A - Available
D - Deleted
U - Unavailable
X - Expired

So, based on this column we can calculate how much archivelog is created and how much is still available in the FRA.

select trunc(l.first_time) arch_date
,      round(sum(decode(l.status,'D',blocks * block_size,0)/(1024 * 1024 * 1024))) deleted_gb
,      round(sum(decode(l.status,'A',blocks * block_size,0)/(1024 * 1024 * 1024))) available_gb
,      round(sum(decode(l.status,'U',blocks * block_size,0)/(1024 * 1024 * 1024))) unavailable_gb
,      round(sum(decode(l.status,'X',blocks * block_size,0)/(1024 * 1024 * 1024))) expired_gb
,      round(sum(blocks * block_size) /(1024 * 1024 * 1024))                       total_size_gb
from   gv$archived_log l
,      gv$archive_dest n
where  l.inst_id = n.inst_id
and    l.dest_id = n.dest_id
and    n.destination = 'USE_DB_RECOVERY_FILE_DEST'
group by trunc(l.first_time)
order by trunc(l.first_time)
/

 
 
ARCH_DATE DELETED_GB AVAILABLE_GB UNAVAILABLE_GB EXPIRED_GB TOTAL_SIZE_GB
--------- ---------- ------------ -------------- ---------- -------------
....
08-MAY-22        217            0              0          0           217
09-MAY-22        599            0              0          0           599
10-MAY-22        883            0              0          0           883
11-MAY-22        812            0              0          0           812
31 rows selected.
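The DECODE aggregation above can be sanity-checked with a small sketch; the sample rows of (status, blocks, block_size) are hypothetical:

```python
from collections import defaultdict

# Hypothetical sample of v$archived_log rows: (status, blocks, block_size)
rows = [
    ("D", 4_000_000, 512),   # deleted from the FRA
    ("D", 4_000_000, 512),
    ("A", 2_000_000, 512),   # still available in the FRA
]

# Sum the file sizes per status, in GB, like the DECODE/SUM in the query
gb_by_status = defaultdict(float)
for status, blocks, block_size in rows:
    gb_by_status[status] += blocks * block_size / 1024**3

total_gb = sum(gb_by_status.values())
print({k: round(v, 1) for k, v in gb_by_status.items()}, round(total_gb, 1))
```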

So, now we know how much archivelog is created daily in the FRA.
We only need to decide how long we want to keep the archivelogs. This depends on several factors:
  • How often do you make a backup? Making a backup can make the space used by the archivelogs reclaimable.
  • Even after you have made a backup, you might want to keep the archivelogs on disk for some time, because recovering your database is faster when you don't need to restore the files first.
  • In a Data Guard environment, you might want to keep files longer, because you can't remove them before they are shipped to the standby databases.

Now we can calculate how much FRA space we need for archivelog.
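A back-of-the-envelope sketch of that calculation; the daily rate and the safety factor are assumptions (take your own rate from the query above):

```python
def archivelog_fra_gb(daily_archivelog_gb, keep_days, safety_factor=1.2):
    """Estimate FRA space needed for archivelogs: daily volume times the
    retention in days, with some headroom for peak days (assumption)."""
    return daily_archivelog_gb * keep_days * safety_factor

# E.g. ~800 GB/day (a peak day from the query output) kept for 2 days:
print(round(archivelog_fra_gb(800, 2)))  # -> 1920
```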
Configuring flashback log:

Before you can use flashback database, you must turn the option on and configure the retention target.

SQL> SELECT flashback_on FROM v$database ;
 
FLASHBACK_ON
------------------
NO
 
SQL> ALTER DATABASE flashback ON ;
 
DATABASE altered.
 
SQL> SELECT flashback_on FROM v$database ;
 
FLASHBACK_ON
------------------
YES
 
SQL>
SQL> ALTER system SET db_flashback_retention_target=1440 scope=BOTH ;
 
System altered.
 
SQL>
SQL> SHOW parameter db_flashback_retention_target
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        INTEGER     1440
 
SQL>
The flashback sizing can be viewed in v$flashback_database_log (or gv$flashback_database_log for a RAC database).
SQL>
SELECT inst_id
,      to_char(oldest_flashback_time,'dd-mm-yyyy-hh24:mi')    oldest_flashback_time
,      retention_target
,      round((sysdate - oldest_flashback_time ) * (60 * 24))  actual_retention_possible
,      round(flashback_size / (1024 * 1024 * 1024))           flashback_size_gb
,      round(estimated_flashback_size / (1024 * 1024 * 1024)) estimated_flashback_size_gb
FROM   gv$flashback_database_log
/
 
   INST_ID OLDEST_FLASHBACK RETENTION_TARGET ACTUAL_RETENTION_POSSIBLE FLASHBACK_SIZE_GB ESTIMATED_FLASHBACK_SIZE_GB
---------- ---------------- ----------------  ------------------------- ----------------- ---------------------------
         2 07-07-2021-12:44             1440                       1512               301                         208
         1 07-07-2021-12:44             1440                       1512               301                         208

The db_flashback_retention_target is exactly what the name implies: a retention target. If the FRA is large enough, you may retain more than the target; because flashback logs are OMF files, they are only removed when the database needs the storage for other things. If the FRA is too small, you may not be able to reach the target.
In this case, the target is 1440 minutes (1 day), but we have storage in use for a possible flashback of 1512 minutes. We have 301 GB in use, but only need about 208 to reach the target. So, some of the space is reclaimable: 

SQL>  SELECT * FROM v$recovery_area_usage WHERE FILE_TYPE = 'FLASHBACK LOG' ;
 
FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
FLASHBACK LOG                         6.85                      1.55              76          0
 
SQL>
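Roughly speaking, the space above the estimated size needed for the retention target is what can become reclaimable; a sketch with the numbers from gv$flashback_database_log above (the exact reclaimable figure comes from v$recovery_area_usage):

```python
def flashback_headroom_gb(flashback_size_gb, estimated_flashback_size_gb):
    """Rough indication of flashback space beyond what the retention
    target needs; a heuristic, not the exact reclaimable calculation."""
    return max(flashback_size_gb - estimated_flashback_size_gb, 0)

# 301 GB in use, ~208 GB estimated for the target:
print(flashback_headroom_gb(301, 208))  # -> 93
```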

If you want to be sure you can flashback your database, you need to make a guaranteed restore point (aka a snapshot). The database will then keep the needed flashback files until you drop the restore point, even if you exceed the db_flashback_retention_target. 

SQL>  CREATE restore point my_rp guarantee flashback DATABASE;
 
Restore point created.
 
SQL>  SELECT name , storage_size FROM v$restore_point ;
 
NAME       STORAGE_SIZE
---------- ------------
MY_RP        4294967296
 
SQL> DROP restore point my_rp ;
 
Restore point dropped.
 
SQL>

Configuring backups

Define your retention policy in RMAN. The policy can be a recovery window or the number of backups you want to keep (redundancy).
The actual usage of storage in the FRA can of course be checked again in the v$recovery_area_usage view, looking at the file types IMAGE COPY, AUXILIARY DATAFILE COPY and BACKUP PIECE.
In RMAN, you can define:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

If you keep the defaults, the backup files are stored in the FRA with retention policy REDUNDANCY 1, meaning that if you take two backups of the same files, one becomes reclaimable.

RMAN> show DEFAULT DEVICE TYPE;
 
RMAN configuration parameters for database with db_unique_name RCAT are:
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

 
 
RMAN> show RETENTION POLICY;
 
RMAN configuration parameters for database with db_unique_name RCAT are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
 
RMAN> backup tablespace cattbs;
RMAN> backup tablespace cattbs;
And then check the FRA again:
SQL> SELECT * FROM v$recovery_area_usage WHERE file_type = 'BACKUP PIECE';
 
FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
BACKUP PIECE                            .8                       .06               9          0

After a DELETE OBSOLETE in RMAN, the first backup is removed, and the reclaimable space is back to 0.
RMAN's crosscheck command will resolve the inconsistency between the FRA usage in the database and on disk.
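REDUNDANCY-based retention can be sketched as "keep the newest N backups of each file; anything older is obsolete" (a simplification of what RMAN actually tracks):

```python
def split_obsolete(backup_times, redundancy=1):
    """Given completion times of backups of the same files, return
    (kept, obsolete) under a REDUNDANCY policy: keep the newest N."""
    ordered = sorted(backup_times, reverse=True)
    return ordered[:redundancy], ordered[redundancy:]

# Two backups of the same tablespace; with redundancy 1 the older one
# becomes obsolete (i.e. reclaimable when stored in the FRA):
kept, obsolete = split_obsolete(["2022-05-10", "2022-05-11"], redundancy=1)
print(kept, obsolete)  # -> ['2022-05-11'] ['2022-05-10']
```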

Be aware that the most common problem with an undersized FRA is a hanging database, because the database cannot create an archivelog file at the time of a log switch. The most common solution is to create a backup of the archivelogs, which will also delete them.
This doesn't make much sense if both the archivelogs and the backups are stored in the FRA: you can't create a backup in the FRA when the FRA has run out of space.

So, always investigate if you can locate backups outside the FRA. See also this Oracle support note: How to disable use of Flash Recovery Area for Archivelogs and Backups (Doc ID 297397.1).
Compression

If you are licensed to use the advanced compression option, you can compress RMAN backups while they are created.
Automatic compression of archivelog files is still not implemented by Oracle; see Oracle support Doc ID 2449903.1.

You should not write your own scripts to compress archivelogs in the FRA. If you want to save space using your own scripts, you should create an archive location outside the FRA.

Conclusion
A full FRA can cause a hanging database, so a system down. When monitoring the FRA, focus on the space that is unreclaimable and not just on the actual usage. Make sure you know your needs for your recovery window and flashback target.
Store the RMAN backups outside the FRA, if possible. If you want to be sure you can flash back your database, create a guaranteed restore point. And don't forget to drop it when it's no longer needed.
Always let the database or your RMAN backup script handle files in the FRA, because they are all Oracle Managed Files.
