Tuesday, September 03, 2013

How to accomplish an on-commit trigger in Oracle

If you have a use case for an on-commit trigger, it is almost always because an application is using the database improperly. Fixing the application design can take time, especially for a third-party vendor application. In the spirit of knowing and keeping every possible lever at a DBA's disposal, the approach below defines an on-commit trigger for enforcing data integrity checks at the transaction level. This post is not meant to encourage anybody to use triggers; they should be avoided as much as possible.

The trick here is to leverage the userenv('commitscn') feature.

You can read more about on commit trigger research at http://technology.amis.nl/2006/08/07/the-hunt-for-the-oracle-database-on-commit-trigger/

Jonathan Lewis has great information about commitscn at http://www.jlcomp.demon.co.uk/commit.html

Here is the test case:

01:38:59 SQL> create table tr1(a number not null, b number, primary key(a));

Table created.

01:38:59 SQL> create table tr2(a number not null, b number);

Table created.

01:38:59 SQL> create table tr3(a number not null, b number, primary key(a));

Table created.

01:38:59 SQL>
01:38:59 SQL> create or replace trigger tr1_tr before insert on tr1
01:38:59   2  for each row
01:38:59   3  begin
01:38:59   4  insert into tr2 values (1,userenv('commitscn'));
01:38:59   5  end;
01:38:59   6  /

Trigger created.

01:38:59 SQL>
01:38:59 SQL> create or replace trigger tr2_tr before update on tr2
01:38:59   2  for each row
01:38:59   3  begin
01:38:59   4  insert into tr3 values (0,0);
01:38:59   5  end;
01:38:59   6  /

Trigger created.

01:38:59 SQL>
01:38:59 SQL> insert into tr1 values (1,1);

1 row created.

01:38:59 SQL> --insert into tr2 values (1, userenv('commitscn'));
01:38:59 SQL> select * from tr1;

                   A                    B
-------------------- --------------------
                   1                    1

01:38:59 SQL> select * from tr2;

                   A                    B
-------------------- --------------------
                   1             40208814

01:38:59 SQL> select * from tr3;

no rows selected

01:38:59 SQL> commit;

Commit complete.

01:38:59 SQL> select * from tr1;

                   A                    B
-------------------- --------------------
                   1                    1

01:38:59 SQL> select * from tr2;

                   A                    B
-------------------- --------------------
                   1             40208816

01:38:59 SQL> select * from tr3;

                   A                    B
-------------------- --------------------
                   0                    0

01:38:59 SQL>
01:38:59 SQL>
01:38:59 SQL> insert into tr1 values (2,2);

1 row created.

01:38:59 SQL> --insert into tr2 values (2, userenv('commitscn'));
01:38:59 SQL> select * from tr1;

                   A                    B
-------------------- --------------------
                   1                    1
                   2                    2

01:38:59 SQL> select * from tr2;

                   A                    B
-------------------- --------------------
                   1             40208816
                   1             40208819

01:38:59 SQL> select * from tr3;

                   A                    B
-------------------- --------------------
                   0                    0

01:38:59 SQL> commit;
commit
*
ERROR at line 1:
ORA-00001: unique constraint (TEST123.SYS_C003852) violated
ORA-06512: at "TEST123.TR2_TR", line 2
ORA-04088: error during execution of trigger 'TEST123.TR2_TR'

01:38:59 SQL> select * from tr1;

                   A                    B
-------------------- --------------------
                   1                    1

01:38:59 SQL> select * from tr2;

                   A                    B
-------------------- --------------------
                   1             40208816

01:38:59 SQL> select * from tr3;

                   A                    B
-------------------- --------------------
                   0                    0

TR1 can be a real application table; I introduced the TR2 and TR3 tables to leverage the commitscn feature, and all of this can be done without changing application code. When a row is inserted into TR2 with the userenv('commitscn') value, Oracle automatically updates that value in TR2 at commit time. Since there is a before-update trigger on TR2, it fires at commit time.

Any error raised during execution of the TR2 trigger automatically causes the entire transaction to be rolled back (as seen above).
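To use this as a real transaction-level integrity check, the update trigger on TR2 can run the validation itself instead of inserting into TR3. Below is a minimal sketch; the invariant it enforces (column B of TR1 must be non-negative) is purely a hypothetical illustration:

```sql
-- Fires when Oracle updates the commitscn value in TR2, i.e. at commit time.
create or replace trigger tr2_tr before update on tr2
for each row
declare
  l_bad_rows number;
begin
  select count(*) into l_bad_rows
    from tr1
   where b < 0;   -- hypothetical invariant: B must be non-negative
  if l_bad_rows > 0 then
    -- raising an error here rolls back the whole transaction
    raise_application_error(-20001, 'commit-time integrity check failed');
  end if;
end;
/
```

Because the trigger is on TR2 rather than TR1, the query against TR1 does not hit the mutating-table restriction.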

Saturday, November 17, 2012

Deep dive into ASH

Below is the link to my presentation "Deep dive into ASH" at IOUG 2012 on April 26th in Las Vegas.

Deep dive into ASH
Whitepaper
ASH demo scripts

Thursday, November 15, 2012

Internals of Active Dataguard

I presented the Internals of Active Dataguard session at the NoCoug winter conference on November 15th, 2012.

Here are the links for the presentation and demo scripts:

Internals of Active Dataguard
Demo Script
Demo Script
Demo Script
Demo Script
Demo Script

Wednesday, February 29, 2012

Internals of online index build presentation at Nocoug

I presented a session on the internals of online index build at NoCoug on 02/23/2012.

Here are the links for presentation and demo scripts:

Internals of online index build powerpoint

Demo scripts


Thursday, September 08, 2011

Enhancing Oracle Active Data Guard.

I think Active Data Guard (ADG) is one of the best features Oracle has ever released. It is remarkably popular considering how easily read workload can be offloaded while guaranteeing data quality (thanks to the nature of binary replication; good luck to those who rely on logical replication for DR) and ensuring zero data loss.

Here are some enhancements that could make ADG even more adaptable:

1) Ability to create global temporary tables. Oracle doesn't have to generate redo for GTTs, so they could be supported on an ADG standby. This would enable batch and ETL jobs to run on the ADG standby.

2) Continue using parallel threads for redo apply when the redo stream encounters a new datafile addition. When you add a new datafile on the primary, redo apply serializes on creating the datafile on the ADG standby. When many datafiles are added in quick succession on the primary, the standby can fall behind by several minutes.

3) When a new datafile addition fails on the standby for any reason, the manual fix requires the ADG standby to go into "MOUNT" mode, which is an outage. The most common reason for a datafile addition to fail is lack of space; once the space issue is addressed, the manual fix is the "create datafile as" command, which requires the standby to be in "MOUNT" mode.

4) Ability to maintain a partial physical standby for ADG. When you use RAC for database consolidation, multiple schemas become part of one big physical database. Not all applications have the same business priority, so it is natural that only some of those schemas require read-only traffic offload to an ADG standby. Instead of creating a full physical standby, you may want a standby containing only the tablespaces you are interested in; today that requires quite a bit of manual scripting and process to handle new datafiles created on the primary that do not belong to those tablespaces. Oracle could introduce init parameters like "KEEP_TABLESPACES" and "UNKEEP_TABLESPACES" on the standby to manage new datafiles depending on which parameter they fall under.

5) Block cleanout that happens as part of commit on the primary database is not logged (not recorded in the redo log), which means commit-time block cleanout can't happen on the standby. This causes all read queries on the ADG standby to keep visiting the undo transaction table (one additional undo block visit for every data segment block that is not cleaned out). I think Oracle should log commit-time block cleanout so that the standby can also apply it, which would resolve this problem.

6) A Data Guard role switchover from standby to primary takes the ADG standby from open read-only state to "MOUNT" state and then to "OPEN" state, causing all established connections to fail, which can extend the total database outage. Although it is not significant, every second counts for availability.

7) Like the "CURRENT_SCN" column in v$database, an additional column with the corresponding time would make it much easier to determine standby lag in real time. All other options currently available are either expensive to query repeatedly or not fully reliable.

8) When you enable supplemental logging (or force logging) on the primary database, the setting is saved only in the control file, and the change will not reach the standby unless you regenerate the standby control file from the primary or run the command explicitly on the standby too. This can cause a huge problem after you switch over to the standby if a logical standby or GoldenGate is in use. Oracle should record any change in supplemental logging definition in the dictionary as well, not just in the control file.

9) Ability to determine datafiles impacted by NOLOGGING changes on the standby. Querying "UNRECOVERABLE_TIME" from v$datafile doesn't yield a correct result, and customers can falsely rely on this data. Since metadata about blocks impacted by a NOLOGGING change is already logged, redo apply on the standby could update the standby control file with this information, so that v$datafile can be queried reliably to determine NOLOGGING-impacted datafiles.

10) Ability to use multiple processes for redo thread merging as part of redo apply for RAC databases. As redo generation increases with each release of Oracle and customers increasingly use RAC for database consolidation, one process may not be able to merge redo from all threads fast enough for the recovery slaves to consume. This can be a problem when a large number of RAC nodes is in use and redo generation in most threads is quite high.

I've opened enhancement requests for some of the above points.

Wednesday, May 18, 2011

How to create self-managing circular tables

One of the most common tasks DBAs deal with is enforcing purge policies for data life cycle management of tables. The same applies to the logging tables DBAs create internally to track or manage monitoring jobs, performance data, stats collection, auditing, etc.

Even though the logging tables DBAs create don't typically grow to gigabytes in size, it is ideal to automate the data purge process per retention requirements.

Enter the self-managing circular table.

Oracle internally uses a circular buffer to implement the same for ASH (active session history) and for some of the fixed tables in the dictionary.

Another way to do this is to design the table so that columns representing the retention window become part of the primary key, which is then used to locate existing data and merge in new data.

Let's say the requirement is to track standby lag for all standby databases in the environment and to maintain this data for one full year, logged in a table in a central repository database.

create table stby_lag_audit (
day_id number not null,
day_hour number not null,
stby_date date,
instance_name varchar2(30) not null,
host_name varchar2(30) not null,
stby_version varchar2(10),
stby_lag number)
/

create unique index stby_lag_audit_idx on stby_lag_audit (day_id, day_hour, instance_name, host_name)
/

alter table stby_lag_audit add primary key (day_id, day_hour, instance_name, host_name) using index
/

Now all we have to do is use a "merge" statement, when there is a driving table, to insert a new record if none exists for a given primary key or to update the existing record. PL/SQL or another procedural language can be used to upsert the data into the stby_lag_audit table when there is no driving table.

Let's say our driving table is stby_lag_info with columns (instance_name, host_name, stby_version, stby_lag).

merge into stby_lag_audit a using stby_lag_info b
on (a.day_id = to_number(to_char(sysdate, 'DDD')) and
a.day_hour = to_number(to_char(sysdate, 'HH24')) and
a.instance_name = b.instance_name and
a.host_name = b.host_name)
when matched then
update set
a.stby_date = sysdate,
a.stby_version = b.stby_version,
a.stby_lag = b.stby_lag
when not matched then
insert (a.day_id, a.day_hour, a.stby_date, a.instance_name, a.host_name, a.stby_version, a.stby_lag)
values
(to_number(to_char(sysdate, 'DDD')),
to_number(to_char(sysdate, 'HH24')),
sysdate,
b.instance_name,
b.host_name,
b.stby_version,
b.stby_lag)
/
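When there is no driving table, the same upsert can be done from PL/SQL. Here is a sketch; the instance name, host name, version, and lag value are hypothetical placeholders that a monitoring job would supply:

```sql
-- Hypothetical PL/SQL upsert for the no-driving-table case.
declare
  l_lag number := 120;  -- example lag in seconds, normally computed by the job
begin
  update stby_lag_audit
     set stby_date = sysdate, stby_lag = l_lag
   where day_id        = to_number(to_char(sysdate, 'DDD'))
     and day_hour      = to_number(to_char(sysdate, 'HH24'))
     and instance_name = 'STBY1'
     and host_name     = 'host1';
  -- No row for this slot yet: insert one, reusing (day-of-year, hour) as the key.
  if sql%rowcount = 0 then
    insert into stby_lag_audit
      (day_id, day_hour, stby_date, instance_name, host_name, stby_version, stby_lag)
    values
      (to_number(to_char(sysdate, 'DDD')),
       to_number(to_char(sysdate, 'HH24')),
       sysdate, 'STBY1', 'host1', '11.2.0.3', l_lag);
  end if;
  commit;
end;
/
```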


With this, logging data older than one year is overwritten automatically by new data, and there is no need to create additional jobs to purge logging data older than the retention period.

Tuesday, April 26, 2011

How to make sqlprompt bold and blinking

In a large database environment, where DBAs use multiple windows to connect to databases, it is quite possible to type a command meant for a standby database into a live database window, and vice versa.

If every live database sqlplus window has a bold, blinking prompt, the chance of making mistakes like the above is reduced.

This can be achieved by using external table.

Run this one-time shell command on all database hosts:

echo "`tput blink; tput smso`LIVE`tput rmso;tput sgr0`" > /oracle/custom_sqlprompt.log

Add following table to all live databases:

-- custom_sqlprompt_dir must exist as a directory object pointing at the
-- file's location, e.g.: create directory custom_sqlprompt_dir as '/oracle';
create table custom_sqlprompt (
  custom_sqlprompt varchar2(30)
)
organization external
(
  type oracle_loader
  default directory custom_sqlprompt_dir
  access parameters
  (
    records delimited by newline
    nobadfile
    nologfile
    nodiscardfile
    fields terminated by '\n'
    missing field values are null
    (CUSTOM_SQLPROMPT)
  )
  location ('custom_sqlprompt.log')
)
reject limit unlimited;

create public synonym custom_sqlprompt for custom_sqlprompt;

grant select on custom_sqlprompt to public;

Add the following entry to glogin.sql or login.sql:

column sql_prompt new_value sqlprompt

set termout off

select ctype sql_prompt from
  (select case when controlfile_type = 'STANDBY' then 'STBY'
          else (select custom_sqlprompt from custom_sqlprompt)
          end as ctype
     from v$database);

set termout on

set sqlprompt "&sqlprompt>"


Keep in mind that when you open spool files in the vi editor, you'll see a bunch of control characters; this is not the case with "cat" or "more". You can temporarily disable the custom sqlprompt if you don't want control characters in the spool file.
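If you need to clean up an existing spool file instead, the escape sequences emitted by tput can be stripped with sed. A quick sketch (GNU sed; the printf line merely fabricates a sample spooled line for the demo):

```shell
# Fabricate a spool line containing blink/standout escape sequences.
printf 'SQL> \033[5m\033[7mLIVE\033[27m\033[0m> select 1 from dual;\n' > spool_demo.log
# Strip the terminal escape sequences, leaving plain text.
sed 's/\x1b\[[0-9;]*m//g' spool_demo.log
```

This prints the line with the LIVE prompt intact but without any control characters.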