Wednesday, May 18, 2011

How to create self-managing circular tables

One of the most common tasks DBAs deal with is enforcing purge policies as part of data life-cycle management for tables. The same applies to the logging tables DBAs create internally to track or manage monitoring jobs, performance data, stats collection, auditing, and so on.

Even though the logging tables DBAs create don't typically grow to gigabytes in size, it is still ideal to automate the data purge process according to the retention requirements.

Enter the self-managing circular table.

Oracle internally uses a circular buffer to implement the same idea for ASH (active session history) and for some of the fixed tables in the data dictionary.

Another way to do this is to design the table so that columns representing the retention period become part of the primary key. Those key columns are then used to locate the row of interest and to merge in new data, so rows for a given point in the cycle are simply overwritten when that point comes around again.
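As a quick illustration of such a retention key, a day-of-year/hour-of-day pair cycles naturally: the same pair of values recurs exactly one year later, so a row keyed on it gets overwritten after a year.

```sql
select to_number(to_char(sysdate, 'DDD'))  as day_id,   -- 1..366, wraps every year
       to_number(to_char(sysdate, 'HH24')) as day_hour  -- 0..23, wraps every day
from dual;
```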

Let's say the requirement is to track standby lag for all the standby databases in the environment and to maintain this data for one full year. This needs to be logged in a table in the central repository database.

create table stby_lag_audit (
  day_id        number       not null,
  day_hour      number       not null,
  stby_date     date,
  instance_name varchar2(30) not null,
  host_name     varchar2(30) not null,
  stby_version  varchar2(10),
  stby_lag      number);

create unique index stby_lag_audit_idx on stby_lag_audit
  (day_id, day_hour, instance_name, host_name);

alter table stby_lag_audit add primary key
  (day_id, day_hour, instance_name, host_name) using index;

Now, when there is a driving table, all we have to do is use a MERGE statement to insert a new record if none exists for a given primary key, or to update the existing record. PL/SQL or another procedural language can be used to upsert the data into the stby_lag_audit table when there is no driving table.
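For the no-driving-table case, a minimal PL/SQL sketch of the same upsert could look like this (the v_* variables and their sample values are hypothetical placeholders for whatever the monitoring job collects):

```sql
declare
  -- hypothetical values gathered by the monitoring job
  v_instance varchar2(30) := 'STBY1';
  v_host     varchar2(30) := 'dbhost1';
  v_version  varchar2(10) := '11.2.0.2';
  v_lag      number       := 120;
begin
  -- try to overwrite the row for the current day/hour slot
  update stby_lag_audit
     set stby_date = sysdate, stby_version = v_version, stby_lag = v_lag
   where day_id = to_number(to_char(sysdate, 'DDD'))
     and day_hour = to_number(to_char(sysdate, 'HH24'))
     and instance_name = v_instance
     and host_name = v_host;

  -- no row for this slot yet: insert one
  if sql%rowcount = 0 then
    insert into stby_lag_audit
      (day_id, day_hour, stby_date, instance_name,
       host_name, stby_version, stby_lag)
    values
      (to_number(to_char(sysdate, 'DDD')), to_number(to_char(sysdate, 'HH24')),
       sysdate, v_instance, v_host, v_version, v_lag);
  end if;

  commit;
end;
/
```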

Let's say our driving table is stby_lag_info with columns (instance_name, host_name, stby_version, stby_lag).

merge into stby_lag_audit a
using stby_lag_info b
on (a.day_id = to_number(to_char(sysdate, 'DDD')) and
    a.day_hour = to_number(to_char(sysdate, 'HH24')) and
    a.instance_name = b.instance_name and
    a.host_name = b.host_name)
when matched then
  update set
    a.stby_date = sysdate,
    a.stby_version = b.stby_version,
    a.stby_lag = b.stby_lag
when not matched then
  insert (a.day_id, a.day_hour, a.stby_date, a.instance_name,
          a.host_name, a.stby_version, a.stby_lag)
  values (to_number(to_char(sysdate, 'DDD')),
          to_number(to_char(sysdate, 'HH24')),
          sysdate, b.instance_name, b.host_name,
          b.stby_version, b.stby_lag);

With this, logging data older than one year is overwritten automatically by new data, because day_id (the day of the year, 1-366) wraps around annually and each new year's rows replace last year's. There is no need to create any additional job to purge logging data past the retention period.
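All that remains is to run the MERGE on a schedule. A sketch using DBMS_SCHEDULER might look like the following, assuming the MERGE has been wrapped in a procedure; the job name, interval, and the merge_stby_lag procedure are hypothetical:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'STBY_LAG_AUDIT_JOB',        -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin merge_stby_lag; end;', -- wrapper around the MERGE above
    repeat_interval => 'FREQ=HOURLY',                -- one slot per day_hour value
    enabled         => true);
end;
/
```

Running it hourly matches the granularity of the day_hour key, so every (day_id, day_hour) slot gets refreshed exactly once per cycle.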