Disable and re-enable triggers, but prevent the table being touched in the meantime - Oracle

I have the following situation:
A table (MyTable) should be processed (updates/inserts/deletes etc) by a batch process (a call to a myplsql() procedure).
During myplsql execution no one should touch MyTable - so MyTable is locked in exclusive mode by myplsql.
Now MyTable has a number of on insert, on update and on delete triggers defined, but those are not needed while performing the batch processing - moreover, they slow the batch process down extremely.
So the solution is to disable the triggers before calling myplsql().
But how do I prevent someone touching MyTable just after alter table ... disable trigger is performed and before myplsql manages to lock the table,
given that alter table performs an implicit commit - so any lock acquired before it will be lost anyway?
Part of the problem is that I do not control the other code or the other users that could try to touch the table.
In a few words I need to perform the following in a single shot:
Lock MyTable
Disable Triggers (somehow without losing the lock)
Process MyTable
Enable Triggers
Unlock MyTable
One thought was to remove grants from the table and render it unusable for other users.
But as it turns out, that is not an option, as the other processes/users perform their operations logged in as the table owner.
Thanks.

A slightly different approach is to keep the triggers enabled but reduce (if not quite entirely remove) their impact, by adding a when clause something like:
create or replace trigger ...
...
for each row
when (sys_context('userenv', 'client_info') is null
or sys_context('userenv', 'client_info') != 'BATCH')
declare
...
begin
...
end;
/
Then in your procedure add a call at the start as your 'disable triggers' step:
dbms_application_info.set_client_info('BATCH');
and clear it again at the end, just in case the session is left alive and reused (so you might want to do this in an exception handler too):
dbms_application_info.set_client_info(null);
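Putting that together, the batch procedure might look something like this (only a sketch - the exclusive lock, the commit placement and the batch logic itself are assumptions based on your description):
create or replace procedure myplsql as
begin
  -- 'disable' the triggers for this session only
  dbms_application_info.set_client_info('BATCH');
  lock table MyTable in exclusive mode;
  -- ... batch updates/inserts/deletes on MyTable ...
  commit;  -- ends the transaction and releases the lock
  -- clear the flag so a reused session fires the triggers again
  dbms_application_info.set_client_info(null);
exception
  when others then
    dbms_application_info.set_client_info(null);
    raise;
end;
/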
You could also use module, or action, or a combination. While that setting is in place the trigger will still be evaluated but won't fire, so anything happening inside it will be skipped - the trigger body does not run, as the docs put it.
This isn't foolproof as there is nothing really stopping other users/applications making the same calls, but if you pick a more descriptive string and/or a combination of settings, it would have to be deliberate - and I think you're mostly worried about accidents not bad actors.
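For example, to require two settings to match before the trigger is skipped, the when clause might look like this ('MYPLSQL_LOAD' is just an illustrative module name, which the batch would set with dbms_application_info.set_module('MYPLSQL_LOAD', null) alongside the client_info call):
create or replace trigger ...
...
for each row
when (sys_context('userenv', 'client_info') is null
or sys_context('userenv', 'client_info') != 'BATCH'
or sys_context('userenv', 'module') is null
or sys_context('userenv', 'module') != 'MYPLSQL_LOAD')
declare
...
begin
...
end;
/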
Quick speed test with a pointless trigger that does nothing except slow things down a bit.
create table t42 (id number);
-- no trigger
insert into t42 (id) select level from dual connect by level <= 10000;
10,000 rows inserted.
Elapsed: 00:00:00.050
create or replace trigger tr42 before insert on t42 for each row
declare
dt date;
begin
select sysdate into dt from dual;
end;
/
-- plain trigger
insert into t42 (id) select level from dual connect by level <= 10000;
10,000 rows inserted.
Elapsed: 00:00:00.466
create or replace trigger tr42 before insert on t42 for each row
when (sys_context('userenv', 'client_info') is null
or sys_context('userenv', 'client_info') != 'BATCH')
declare
dt date;
begin
select sysdate into dt from dual;
end;
/
-- userenv trigger, not set
insert into t42 (id) select level from dual connect by level <= 10000;
10,000 rows inserted.
Elapsed: 00:00:00.460
-- userenv trigger, set to BATCH
exec dbms_application_info.set_client_info('BATCH');
insert into t42 (id) select level from dual connect by level <= 10000;
10,000 rows inserted.
Elapsed: 00:00:00.040
exec dbms_application_info.set_client_info(null);
There's a bit of variation from making remote calls, but I ran it a few times and it's clear that running with a plain trigger is very similar to running with the constrained trigger without BATCH set, and both are much slower than running without a trigger or with the constrained trigger with BATCH set. In my testing there's an order of magnitude difference.

Related

Truncate local table only when Remote table is accessible or have complete data in oracle

I have a problem which I'm finding hard to solve. I hope someone in this community can help.
On a daily basis I'm copying a table from one database (T_TAGS_REMOTE) to a table on another database (T_TAGS_LOCAL) through a DB link. For this I truncate the T_TAGS_LOCAL table first and then perform the insert.
The above task is done through a Linux job.
The problem comes when:
1. Sometimes T_TAGS_REMOTE on the remote database is not accessible, giving an ORA error
2. Sometimes T_TAGS_REMOTE does not have a complete set of rows (i.e. the SYSDATE count < the SYSDATE-1 count)
Requirements:
STOP truncating and STOP inserting when either of the above problems (1) or (2) is encountered.
MyCode:
DECLARE
  old_records_count NUMBER;
BEGIN
  SELECT COUNT(1) INTO old_records_count FROM T_TAGS_LOCAL;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE T_TAGS_LOCAL';
  INSERT /*+ APPEND */ INTO T_TAGS_LOCAL SELECT * FROM AK.T_TAGS_REMOTE#NETCOOL;
END;
/
Please suggest a BETTER option for the table copy, or code to handle this problem.
I would not use the technique you are using; it will always generate issues. Instead, I think your use case fits replication using materialized views: a materialized view log on the source table, and a materialized view over the dblink in the target.
You only need to decide the refresh method; with a remote master table that means FAST ... ON DEMAND (ON COMMIT refresh is not available over a database link), and as you are copying the whole table each and every single day I guess your table is not very big anyway.
Example
In Source
SQL> create table t ( c1 number primary key, c2 number ) ;
Table created.
SQL> declare
begin
for i in 1 .. 100000
loop
insert into t values ( i , dbms_random.value ) ;
end loop;
commit ;
end;
/
PL/SQL procedure successfully completed.
SQL> create materialized view log on t with primary key ;
Materialized view log created.
SQL> select count(*) from t ;
COUNT(*)
----------
100000
In Target
SQL> create materialized view my_copy_of_t build immediate refresh fast on demand as
select * from t@your_db_link ;
-- The initial copy is already there in target
SQL> select count(*) from my_copy_of_t ;
COUNT(*)
----------
100000
Now, we change source
SQL> insert into t values ( 100001 , dbms_random.value );
1 row inserted
SQL> commit ;
Commit completed.
In target, for refreshing
SQL> exec dbms_mview.refresh('MY_COPY_OF_T');
The only requirement for FAST REFRESH ON DEMAND is that you must have a materialized view log for each of the tables that are part of the Materialized View. In your case, as you are replicating a table, you only need a materialized view log on the source table.
A better option might be using a materialized view. With the daily schedule you have now, you'd refresh it on demand from a database job scheduled via DBMS_JOB or DBMS_SCHEDULER.
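If you go the DBMS_SCHEDULER route, a nightly refresh job might look something like this (a sketch only - the job name and the 02:00 schedule are assumptions):
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_MY_COPY_OF_T',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''MY_COPY_OF_T'', method => ''F''); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- assumed 02:00 each night
    enabled         => true);
end;
/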

Automatically delete new records with specific values - ORACLE

I have a table in a database (Oracle 11g) that receives roughly 45000 new records each day. Our organization has roughly 15 items (each with a predetermined, static, unique value) and I am looking to either delete these records automatically or change a specific value in these records' columns before my batch job packages the transactions and sends them off. Any suggestions on the best way to do this? These transactions are only 10-20 of the 45000, so checking every record as it is entered seems like it may cost too much. To add to that, the values come in periodically through the day via a DTS package from a SQL Server 2000 server; and yes, 2000 is end of life and we will be upgrading early next year.
Sample below - accept only non-negative values; if a value is less than 0 we change it to 99999:
create table my_table (val int);
create or replace trigger my_trigger
before insert on my_table
for each row
begin
  if :new.val < 0 then
    :new.val := 99999;
  end if;
end my_trigger;
/
insert into my_table values(0);
insert into my_table values(1);
insert into my_table values(-1);
select * from my_table;
1 0
2 1
3 99999
If you want to prevent inserting the "wrong" values, a "silent insert reject" is not recommended; you'd be better off either raising an exception in the trigger or adding a constraint - see the discussion here: Before insert trigger
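For example, either of these (sketches against the same my_table) would make bad rows fail loudly instead of being silently adjusted:
-- option 1: a declarative check constraint
alter table my_table add constraint my_table_val_chk check (val >= 0);
-- option 2: raise an exception from the trigger instead of rewriting the value
create or replace trigger my_trigger
before insert on my_table
for each row
begin
  if :new.val < 0 then
    raise_application_error(-20001, 'negative values are not allowed');
  end if;
end my_trigger;
/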

Oracle 'after create' trigger to grant privileges

I have an 'after create on database' trigger to provide select access on newly created tables within specific schemas to different Oracle roles.
If I execute a create table ... as select statement and then query the new table in the same block of code within TOAD or a different UI, I encounter an error, but it works if I run the commands separately:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;
select count(*) from schema1.table1;
If I execute them as one block of code I get:
ORA-01031: insufficient privileges
If I execute them individually, I don't get an error and am able to obtain the correct count.
Sample snippet of AFTER CREATE trigger
CREATE OR REPLACE TRIGGER TGR_DATABASE_AUDIT AFTER
CREATE OR DROP OR ALTER ON Database
DECLARE
vOS_User VARCHAR2(30);
vTerminal VARCHAR2(30);
vMachine VARCHAR2(30);
vSession_User VARCHAR2(30);
vSession_Id INTEGER;
l_jobno NUMBER;
BEGIN
SELECT sys_context('USERENV', 'SESSIONID'),
sys_context('USERENV', 'OS_USER'),
sys_context('USERENV', 'TERMINAL'),
sys_context('USERENV', 'HOST'),
sys_context('USERENV', 'SESSION_USER')
INTO vSession_Id,
vOS_User,
vTerminal,
vMachine,
vSession_User
FROM Dual;
insert into schema3.event_table VALUES (vSession_Id, SYSDATE,
vSession_User, vOS_User, vMachine, vTerminal, ora_sysevent,
ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name);
IF ora_sysevent = 'CREATE' THEN
IF (ora_dict_obj_owner = 'SCHEMA1') THEN
IF DICTIONARY_OBJ_TYPE = 'TABLE' THEN
dbms_job.submit(l_jobno,'sys.execute_app_ddl(''GRANT SELECT
ON '||ora_dict_obj_owner||'.'||ora_dict_obj_name||' TO
Role1,Role2'');');
END IF;
END IF;
END IF;
END;
Jobs are asynchronous. Your code is not.
Ignoring for the moment the fact that if you're dynamically granting privileges, then something out there is creating new tables live in production without going through a change control process (at which point a human reviewer would ensure that appropriate grants were included), which implies that you have a much bigger problem...
When you run the CREATE TABLE statement, the trigger fires and a job is scheduled to run. That job runs in a separate session and can't start until your CREATE TABLE statement issues its final implicit commit and returns control to the first session. Best case, that job runs a second or two after the CREATE TABLE statement completes. But it could be longer depending on how many background jobs are allowed to run simultaneously, what other jobs are running, how busy Oracle is, etc.
The simplest approach would be to add a dbms_lock.sleep call between the CREATE TABLE and the SELECT that waits a reasonable amount of time, giving the background job time to run. That's trivial to code (and useful to validate that this is, in fact, the only problem you have) but it's not foolproof. Even if you put in a delay that's "long enough" for testing, you might encounter a longer delay in the future. The more complicated approach would be to query dba_jobs, look to see if there is a job there related to the table you just created, and sleep in a loop while there is.
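A rough sketch of that polling approach (the 10-second cap and the LIKE filter on the job text are assumptions, and you need access to dba_jobs or user_jobs):
create table schema1.table1 as select * from schema2.table2 where rownum < 2;

-- wait (up to ~10 seconds) for the grant job queued by the DDL trigger to finish
declare
  l_pending number;
begin
  for i in 1 .. 10 loop
    select count(*)
      into l_pending
      from dba_jobs
     where upper(what) like '%SCHEMA1.TABLE1%';
    exit when l_pending = 0;
    dbms_lock.sleep(1);   -- requires execute privilege on dbms_lock
  end loop;
end;
/

select count(*) from schema1.table1;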

Is there any data dictionary object in oracle to record the transaction details of triggers?

I have created trigger TEST_TRIG as below:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
TEST_PROC();
END;
Procedure TEST_PROC code:
create or replace
PROCEDURE TEST_PROC
AS
BEGIN
EXECUTE IMMEDIATE 'truncate table TEST_FINAL';
INSERT INTO TEST_FINAL select * from TEST_TABLE;
commit;
END;
Initially, I disabled the trigger TEST_TRIG, inserted a record into TEST_TABLE and executed the procedure TEST_PROC manually.
Output: I was able to fetch from TEST_FINAL the same record that I had inserted into TEST_TABLE.
I then flushed those records from both tables and enabled the trigger TEST_TRIG.
Now when I insert and commit a record in TEST_TABLE, I don't find the record in the TEST_FINAL table... and I haven't received any error message either!!!
So I want to know whether the trigger fired or not.
I don't think you have fully grasped the implications of AUTONOMOUS_TRANSACTION. Effectively it means the code bounded by the pragma runs in a separate session. So, because of Oracle's read-consistent isolation level, the autonomous transaction cannot see any of the data changes generated by the main transaction.
Thus, if TEST_TABLE is empty when you start, the trigger will insert no rows into TEST_FINAL, regardless of how many rows you're inserting right now.
So: don't flush both tables. Insert some rows into TEST_TABLE and commit. TEST_FINAL will still be empty. Insert some more rows into TEST_TABLE and, lo! the first set of rows will appear in TEST_FINAL.
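To see that for yourself (a minimal sketch, assuming single-column versions of the two tables and the trigger and procedure exactly as you posted them):
insert into test_table values (1);
commit;
select count(*) from test_final;  -- 0: the autonomous transaction could not see the uncommitted row 1
insert into test_table values (2);
commit;
select count(*) from test_final;  -- 1: row 1 is visible to it now, row 2 still is not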
Obviously this is not the result you want. So you need to revisit your logic. It really doesn't make sense to truncate TEST_FINAL every time, and definitely not FOR EACH ROW. That is Teh Suck! as far as performance goes. Likewise, and for the same reason, it doesn't make sense to populate the target table with INSERT ... SELECT.
Discarding the TRUNCATE means you don't need the pragma, and everything becomes much simpler.
If you want to keep a history of the affected rows use something like this instead:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
BEGIN
insert into test_final (col1, col2)
values (:new.col1, :new.col2);
END;
You'll need to change the exact code to fit your exact requirements.

ETL into operational oracle database - used by jsp/spring/hibernate app

I need to have some legacy data loaded into an operational Oracle (11gR2) database. The database is being used by a jsp/spring/hibernate (3.2.5.ga) application. A sequence is used for generating unique keys across all the tables. The sequence definition is as below:
CREATE SEQUENCE "TEST"."HIBERNATE_SEQUENCE" MINVALUE 1 MAXVALUE 999999999999999999999999999 INCREMENT BY 1 START WITH 1000 CACHE 20 NOORDER NOCYCLE
The idea for the data load/ETL is to come up with a script that starts out with the max sequence value by running
select HIBERNATE_SEQUENCE.NEXTVAL from dual
at the beginning of the script generation process, and then generates SQL INSERT statements for the data that needs to be populated. There is some logic involved in handling data cleanup, business rules etc. that gets applied through the script, and the generated SQL INSERT statements are expected to be run in one batch, which should bring in all of the legacy data.
Assuming that the max sequence value was 1000, the script uses this as a variable and increments it as necessary, and the output SQL INSERTs will be as below:
INSERT INTO USER_STATUS(ID, CREATE_DATE, UPDATE_DATE, STATUS_ID, USER_ID)
VALUES (**1001**, CURRENT_DATE, CURRENT_DATE, 20, 445);
INSERT INTO USER_ACTIVITY_LOG(ID, CREATE_DATE, UPDATE_DATE, DETAILS, LAST_USER_STATUS_ID)
VALUES (**1002**, CURRENT_DATE, CURRENT_DATE, 'USER ACTIVITY 1', **1001**);
INSERT INTO USER_STATUS(ID, CREATE_DATE, UPDATE_DATE, STATUS_ID, USER_ID)
VALUES (**1003**, CURRENT_DATE, CURRENT_DATE, 10, 445);
INSERT INTO USER_ACTIVITY_LOG(ID, CREATE_DATE, UPDATE_DATE, DETAILS, LAST_USER_STATUS_ID)
VALUES (**1004**, CURRENT_DATE, CURRENT_DATE, 'USER ACTIVITY 3', **1003**);
I have created some mock SQL to show the idea of how the output INSERTs are going to look - there are going to be a lot more tables involved in the insert operations. Whenever we need to make data changes from the back-end we would use HIBERNATE_SEQUENCE.NEXTVAL to get the next unique key value, but since the SQL generation script runs in a disconnected mode, it does not use HIBERNATE_SEQUENCE.NEXTVAL and instead increments a local variable.
The assumptions we are making about being able to generate (and run) this script are to:
have the application taken down for maintenance
have no database activity during the time of running the script and start out with the max sequence value.
generate the SQL
run the SQL - commit.
assuming that, in the process of script generation, the max sequence value goes up from 1000 to 5000 - after the script is run and the data is loaded, the HIBERNATE_SEQUENCE would need to be dropped/recreated to start at 5001.
bring the application back up.
Now, to the reason I am posting this in such detail... I need your suggestions/input about any loopholes in this design and whether there is anything I am overlooking.
Any input is appreciated.
Thanks!
I would suggest against dropping and creating the sequence if it's used for any other task in your application; doing so means you also need to re-add any permissions, synonyms, etc.
Do you know at the start of the script how many inserts you will do? If so, and assuming that you won't have any other activity, then you can adjust the 'increment by' value of the sequence, so a single select from it will move the sequence forward by whatever value you want.
> drop sequence seq_test;
sequence SEQ_TEST dropped.
> create sequence seq_test start with 1 increment by 1;
sequence SEQ_TEST created.
> select seq_test.nextval from dual;
NEXTVAL
----------------------
1
> alter sequence seq_test increment by 500;
sequence SEQ_TEST altered.
> select seq_test.nextval from dual;
NEXTVAL
----------------------
501
> alter sequence seq_test increment by 1;
sequence SEQ_TEST altered.
> select seq_test.nextval from dual;
NEXTVAL
----------------------
502
Just be aware that the DDL statements will issue an implicit commit, so once they have run, any in-flight transaction will be committed, and any work performed after them will be a separate transaction.
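Applied to your scenario (assuming, as in your example, that the script started from 1000, its last generated key was 5000, and nothing else touches the sequence during the maintenance window), the end-of-load step might be:
alter sequence hibernate_sequence increment by 4000;   -- 5000 - 1000
select hibernate_sequence.nextval from dual;           -- returns 5000
alter sequence hibernate_sequence increment by 1;      -- the next value the application sees is 5001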
