A cheap and lightweight way to frequently get the record count of a big table
How to get count(*) without actually scanning all the records in a huge table: if you need to get the record count of a huge table frequently and quickly, you can use the following method. One real-life example is finding the number of records in an exception queue table to determine how many application threads need to be started to process the queue and meet SLAs.
Basically, we create a new tracking table, populated through a trigger on the main table, that maintains counters for every insert or delete on the main table. We also need an Oracle sequence to distribute the counter rows, which improves concurrency and reduces lock contention. In essence, a counter is incremented by 1 for each insert and decremented by 1 for each delete.
An Oracle materialized view does something similar, but it is not scalable for a busy table (you will see enqueue contention).
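For reference, the materialized-view alternative being compared against could look roughly like the sketch below. This is only an illustration, assuming the test_tmp table created further down; the exact materialized view log options required for fast refresh can vary by Oracle version.
create materialized view log on test_tmp
with rowid, sequence (id1, id2) including new values;
create materialized view test_tmp_cnt_mv
refresh fast on commit
as
select id1, id2, count(*) cnt
from test_tmp
group by id1, id2;
-- counts would then be read from the small MV instead of the base table
select id1, sum(cnt) from test_tmp_cnt_mv group by id1;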
create table test_tmp(id1 varchar2(30), id2 number, id3 number);
Table created.
create table test_cnt (id1 varchar2(30), id2 number, c1 char(25),
c2 number, c3 number) initrans 50;  -- this is the tracking table
Table created.
create unique index test_cnt_idx on test_cnt(id1, id2, c2);
Index created.
create sequence test_cnt_seq start with 1 increment by 1 maxvalue 100 cycle cache 20;
Sequence created.
create or replace trigger test_tr before insert or delete on test_tmp
for each row
declare
v1 number;
begin
-- grab one of the cycling counter slots (1..100) so concurrent sessions hit different rows
select test_cnt_seq.nextval into v1 from dual;
if inserting then
update test_cnt set c3 = c3+1 where id1=:new.id1 and id2 = :new.id2 and c2=v1;
if sql%notfound then
insert into test_cnt values (:new.id1, :new.id2, 'X', v1, 1);
end if;
end if;
if deleting then
update test_cnt set c3 = c3-1 where id1=:old.id1 and id2 = :old.id2 and c2=v1;
if sql%notfound then
insert into test_cnt values (:old.id1, :old.id2, 'X', v1, -1);
end if;
end if;
end;
/
Trigger created.
insert into test_tmp values ('A', 1, 1);
1 row created.
insert into test_tmp values ('B', 2, 2);
1 row created.
insert into test_tmp values ('C', 3, 3);
1 row created.
insert into test_tmp values ('D', 4, 4);
1 row created.
insert into test_tmp values ('E', 5, 5);
1 row created.
commit;
Commit complete.
select id1, sum(c3) from test_cnt group by id1 having sum(c3) > 0;
id1 SUM(C3)
------------------------------ ----------
D 1
A 1
B 1
C 1
E 1
delete from test_tmp where id3=5;
1 row deleted.
commit;
Commit complete.
select id1, sum(c3) from test_cnt group by id1 having sum(c3) > 0;
id1 SUM(C3)
------------------------------ ----------
D 1
A 1
B 1
C 1
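When only one key's count is needed (for example, a single exception queue), summing that key's tracking rows returns the current count without touching the big table. A sketch against the example tables above:
-- current record count for one key; no scan of test_tmp is needed
select nvl(sum(c3), 0) as current_count
from test_cnt
where id1 = 'A' and id2 = 1;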
Triggers have their downsides, and they can fire multiple times for the same record within a single SQL statement execution, as described at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:11504247549852
6 Comments:
Thanks, that clears it up for me. I am also writing blogs on Oracle and MySQL; visit my site when you get time: http://blogskies.com - technology and a lot more :)
Seriously. You want to create a trigger on a busy OLTP table? You may be better off creating a materialized view log and not worrying about this at all!
Creating this trigger on a busy table is better than having a materialized view for the same purpose. Sure, there is some overhead to triggers, but the same is true of a materialized view in this case as well.
A materialized view will not work better in this situation because it is prone to "JI" lock concurrency issues.
If the overhead of triggers is not acceptable in your environment, then this solution will not be applicable to your needs.
And why not create a function-based index with a function like nvl2(id,'A','B')?
A fast full scan (FFS) on this index will count the rows; the index has a minimal size and its maintenance cost is low. It indexes both not-null ('A') and null ('B') values, so every row appears in it... Being the smallest index on the table, it is the one Oracle will use, as sketched below. Or, if part of the table is non-changing, count that part once and create a function-based index for the rest... All cheaper than a trigger (and maybe than an MV?).
Regards
Christian COMMARMOND
DBA
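A minimal sketch of the function-based index approach suggested above, assuming the test_tmp table from the example (the index name is made up for illustration):
-- hypothetical function-based index; nvl2 maps every row, null or not, to a value, so all rows are indexed
create index test_tmp_cnt_fbi on test_tmp(nvl2(id1, 'A', 'B'));
-- gather statistics so the optimizer can consider the small index for count(*)
exec dbms_stats.gather_table_stats(user, 'TEST_TMP', cascade => true);
-- the intent is for count(*) to be answered by a fast full scan of this small index
select count(*) from test_tmp;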
Christian COMMARMOND,
Creating an index will work, but if the underlying table has millions of records then, irrespective of how skinny the index is, Oracle still has to count all the index entries one by one for the select count(*) query, which will be more expensive than a tracking table for big tables.
This post is very useful for those who need to get a record count frequently. It gives an exact method for performing this operation. The approach is simple and cheap, and it surely makes the task easier.