Rolling back DDL changes on Oracle using Flyway

My question: Is there any way to disable auto-commit of DDL statements in Oracle DB?
Context:
I'm using Flyway 4 to maintain the state of my Oracle DB. As they say on their FAQ page, they can't roll back DDL changes in Oracle because DDL is auto-committed in this DB.
For instance, I am moving a column from one table to another (copying the existing values). So I would like to have, in the same SQL file, an ALTER TABLE ADD, then an UPDATE, then an ALTER TABLE DROP. I happened to get an error on the DROP statement, but the column added by the first ALTER TABLE remained in place. I would like to be able to roll back that change too.
One workaround I am using is adding a separate SQL file for each DDL statement. But this is ugly. Any other way of doing this?

Any other way of doing this?
As Flyway simply runs the migration files, you can put anonymous PL/SQL blocks in them. With some effort these can then be used to manage errors via PL/SQL exceptions and undo all previous steps. If you are manipulating multiple columns, each column may need to be in a separate file, or all columns would need to be rolled back together.
Pseudocode for a single column (table and column names are illustrative):
declare
begin
  begin
    -- DDL inside PL/SQL must go through EXECUTE IMMEDIATE
    execute immediate 'alter table dest_table add (new_col varchar2(100))';
  exception when others then
    dbms_output.put_line('Exception adding new column');
    raise; -- nothing to roll back, as it failed on the first step
  end;

  begin
    -- copy data to the new column
    update dest_table d
    set d.new_col = (select s.old_col from src_table s where s.id = d.id);
  exception when others then
    dbms_output.put_line('Exception when populating new column');
    -- delete any data in the new column if committed in blocks
    execute immediate 'alter table dest_table drop column new_col';
    raise;
  end;

  begin
    execute immediate 'alter table src_table drop column old_col';
  exception when others then
    dbms_output.put_line('Exception when dropping old column');
    -- ? not sure you can roll back a failed DROP COLUMN
    -- delete any data in the new column if committed in blocks
    execute immediate 'alter table dest_table drop column new_col';
    raise;
  end;
end;
/
As you can see, this takes a bit of effort. It may be worth discussing with your colleagues why you wish to avoid multiple SQL files, and whether this concern only affects the database.

Related

Dynamically read the columns of the :NEW object in an Oracle trigger

I have an oracle trigger that needs to copy values from the updated table to another table.
The problem is that the columns aren't known when the trigger is created. Part of this system allows the table schema to be updated by the application. (don't ask).
Essentially what I want to do is pivot the table to another table.
I have a stored procedure that will do the pivot, but I can't call it as part of the trigger because it does a select on the table being updated. Causing a "mutating" error.
What would be ideal would be to create a dynamic script that reads all the column names from user_tab_cols for the updated table, and reads the values from the :NEW object.
But of course...I can't :)
:NEW doesn't exist at the point the dynamic script is executed. So something like the following would fail:
EXECUTE IMMEDIATE 'insert into pivotTable values (:NEW.' || variableWithColumnName || ')';
So, I'm stuck.
I can't read from the table that was updated, and I can't read the value that was updated from the :NEW object.
Is there anyway to accomplish this other than rebuilding the trigger each time the schema is changed?
No. You'll need to rebuild the trigger whenever the table changes.
If you want to get really involved, you could write a procedure that dynamically generates the DDL to CREATE OR REPLACE the trigger by reading user_tab_columns. You could then create a DDL trigger that fires when the table is altered and submits a job via dbms_job that calls the procedure to recreate the trigger. That works, but it's a rather large number of moving parts, which means it can fail in all sorts of subtle and spectacular ways, particularly if the application that is making schema changes on the fly decides to add columns in the middle of the day.
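To make that concrete, here is a minimal sketch of such a regeneration procedure. The names SRC_TAB, PIVOT_TAB, and src_tab_copy_trg, and the simplifying assumption that the two tables share column names, are all illustrative rather than taken from the question:

create or replace procedure rebuild_copy_trigger is
  l_cols varchar2(4000);
  l_vals varchar2(4000);
begin
  -- build matching column and :NEW value lists from the data dictionary
  for c in (select column_name
            from user_tab_columns
            where table_name = 'SRC_TAB'
            order by column_id)
  loop
    l_cols := l_cols || c.column_name || ',';
    l_vals := l_vals || ':NEW.' || c.column_name || ',';
  end loop;
  l_cols := rtrim(l_cols, ',');
  l_vals := rtrim(l_vals, ',');
  -- recreate the trigger with the current column list baked in
  execute immediate
    'create or replace trigger src_tab_copy_trg'
    || ' after insert or update on src_tab'
    || ' for each row'
    || ' begin'
    || ' insert into pivot_tab (' || l_cols || ')'
    || ' values (' || l_vals || ');'
    || ' end;';
end;
/

The dbms_job step matters because the job runs in a fresh transaction after the ALTER commits, which is presumably why the answer routes the rebuild through a job rather than calling the procedure directly from the DDL trigger.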

Oracle Global Temp table issue

I am using JdbcTemplate and an Oracle stored procedure. In the stored procedure I have a select query with an IN clause like 'IN (SELECT ID FROM GLOBAL_TEMP_TABLE)'.
The definition of the temp table is ON COMMIT PRESERVE ROWS.
However, when I call the stored procedure from Java it gives me more records than I expected; the temp table seems to be storing data from a previous session. Need your help.
Without looking at any code, it is hard to tell.
Yet, the symptoms you describe are most likely caused by the fact that you are still accessing your data from the same session.
From Oracle-Base: Global Temporary Tables (GTT):
The ON COMMIT DELETE ROWS clause indicates that the data should be deleted at the end of the transaction.
The ON COMMIT PRESERVE ROWS clause indicates that rows should be preserved until the end of the session.
That is, in your case, you need to close the session to clear the data.
You cannot access data from a previous or other session when you select rows from a global temporary table.
There are two possibilities:
Your session is not new
It's not a temporary table
Keep in mind that if you use ON COMMIT PRESERVE ROWS you have to delete the rows yourself. The data is kept until the session ends.
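The difference is easy to demonstrate with a sketch like the following (table names are illustrative):

-- rows in a PRESERVE ROWS table survive a commit; DELETE ROWS tables
-- are cleared at commit. Both are always private to the current session.
create global temporary table gtt_preserve (id number)
  on commit preserve rows;

create global temporary table gtt_delete (id number)
  on commit delete rows;

insert into gtt_preserve values (1);
insert into gtt_delete values (1);
commit;

select count(*) from gtt_preserve; -- 1: kept until the session ends
select count(*) from gtt_delete;   -- 0: cleared by the commit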
To find out whether your session is still the same, run:
select sid, serial#, logon_time from v$session where sid = sys_context('userenv', 'sid');
and write the result to a log file.

Fire one trigger when modifying two different tables

We have a problem with our DBMS (Oracle) which prevents us from using materialized views, so my boss came up with the idea of implementing one using a real table and triggers: inserting into, updating, or deleting from this table when an insert, update, or delete is done in one of the tables upon which the materialized view would have been based.
I know I'm going to hell for agreeing to this, but the time for lamentations is well overdue.
My problem is, I don't know how to make this trigger fire when a change is done in any of these tables, and not in only one of them. Something like this doesn't seem to work:
create trigger my_trigger
after insert or update or delete on table1, table2
Also, would there be a way to create just one trigger instead of one for insert, one for update, and one for delete?
Try this:
CREATE OR REPLACE TRIGGER test
AFTER INSERT OR UPDATE OR DELETE ON tabletest
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
  <<your declarations>>
BEGIN
  IF INSERTING THEN
    <<your insert handling>>
  END IF;
  IF UPDATING THEN
    <<your update handling>>
  END IF;
  IF DELETING THEN
    <<your delete handling>>
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    <<your exception handling>>
END;
Also, you cannot have multiple tables in one trigger; if the functionality is the same, you need to write the same code and change the table name in each trigger.
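One way to keep that duplication manageable, sketched below under assumed names (refresh_summary, table1_trg, table2_trg, and an id column common to both tables), is to move the shared body into a procedure and keep each per-table trigger thin:

-- shared logic maintaining the "materialized view" table
create or replace procedure refresh_summary(p_id number) is
begin
  null; -- the common insert/update/delete logic would go here
end;
/

create or replace trigger table1_trg
after insert or update or delete on table1
for each row
begin
  refresh_summary(coalesce(:new.id, :old.id));
end;
/

create or replace trigger table2_trg
after insert or update or delete on table2
for each row
begin
  refresh_summary(coalesce(:new.id, :old.id));
end;
/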

INSERT trigger for inserting record in same table

I have a trigger that fires on inserting a new record into a table; in that trigger I want to insert a new record into the same table.
My trigger is:
create or replace trigger inst_table
after insert on test_table referencing new as new old as old
for each row
declare
  df_name varchar2(500);
  df_desc varchar2(2000);
begin
  df_name := :new.name;
  df_desc := :new.description;
  if inserting then
    for item in (select pid from tbl2 where pid not in (1))
    loop
      insert into test_table (name, description, pid)
      values (df_name, df_desc, item.pid);
    end loop;
  end if;
end;
It gives an error like:
ORA-04091: table TEST_TABLE is mutating, trigger/function may not see it
I think it is preventing me from inserting into the same table.
So how can I insert this new record into the same table?
Note: I am using Oracle as the database.
Mutation happens any time you have a row-level trigger that modifies the table you're triggering on. The problem is that Oracle can't know how to behave: you insert a row, the trigger itself inserts a row into the same table, and Oracle gets confused. Are the inserts made by the trigger subject to the trigger action too?
The solution is a three-step process (a sketch follows after the link below).
1.) A statement-level before trigger that instantiates a package that will keep track of the rows being inserted.
2.) A row-level before or after trigger that saves the row info into the package variables instantiated in the previous step.
3.) A statement-level after trigger that inserts into the table all the rows saved in the package variable.
An example of this can be found here:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
Hope that helps.
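For reference, here is a minimal sketch of that three-step pattern applied to this question. The package and trigger names, the recursion guard, and the record layout are all assumptions for illustration:

create or replace package test_table_pkg is
  type t_row is record (
    name        varchar2(500),
    description varchar2(2000)
  );
  type t_row_tab is table of t_row index by pls_integer;
  g_rows t_row_tab;
  g_busy boolean := false; -- guard against the triggers re-firing themselves
end test_table_pkg;
/

-- 1) statement-level before trigger: reset the captured rows
create or replace trigger test_table_bis
before insert on test_table
begin
  if not test_table_pkg.g_busy then
    test_table_pkg.g_rows.delete;
  end if;
end;
/

-- 2) row-level trigger: remember each inserted row (no table access here)
create or replace trigger test_table_air
after insert on test_table
for each row
declare
  i pls_integer;
begin
  if not test_table_pkg.g_busy then
    i := test_table_pkg.g_rows.count + 1;
    test_table_pkg.g_rows(i).name := :new.name;
    test_table_pkg.g_rows(i).description := :new.description;
  end if;
end;
/

-- 3) statement-level after trigger: the table is no longer mutating here
create or replace trigger test_table_ais
after insert on test_table
declare
  l_name varchar2(500);
  l_desc varchar2(2000);
begin
  if not test_table_pkg.g_busy then
    test_table_pkg.g_busy := true;
    for i in 1 .. test_table_pkg.g_rows.count loop
      l_name := test_table_pkg.g_rows(i).name;
      l_desc := test_table_pkg.g_rows(i).description;
      insert into test_table (name, description, pid)
      select l_name, l_desc, pid
      from tbl2
      where pid not in (1);
    end loop;
    test_table_pkg.g_rows.delete;
    test_table_pkg.g_busy := false; -- a production version would also reset
                                    -- this in an exception handler
  end if;
end;
/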
I'd say you should look at any way OTHER than triggers to achieve this. As mentioned in the answer from Mark Bobak, the trigger inserts a row, and then, for each row inserted by the trigger, the trigger needs to fire again to insert more rows.
I'd look at either writing a stored procedure to do the inserts, or inserting via a subquery rather than a VALUES clause.
Triggers can be used to solve simple problems but when solving more complicated problems they will just cause headaches.
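For example, the subquery variant could look like this sketch; the bind names are illustrative, and the table shapes follow the question:

-- one INSERT fans the row out to every pid in a single pass;
-- with no trigger involved there is no mutating-table problem
insert into test_table (name, description, pid)
select :name, :description, pid
from tbl2
where pid not in (1);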
It would be worth reading through the answers to this duplicate question posted by APC, and also this article from Tom Kyte. BTW, the article is also referenced in the duplicate question, but the link is now out of date.
Although, after complaining about how bad triggers are, here is another solution.
Maybe you need to look at having two tables. Insert the data into the test_table table as you currently do, but instead of having the trigger insert additional rows into test_table, have a detail table for that data. The trigger can then insert all the required rows into the detail table.
You may again encounter the mutating-table error if you have a delete-cascade foreign key relationship between the two tables, so it might be best to avoid that.
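A sketch of that two-table shape, assuming test_table has an id primary key; test_detail and its columns are made up for illustration:

create table test_detail (
  test_id number references test_table (id),
  pid     number
);

-- the trigger now inserts into a different table, so test_table
-- is no longer mutating
create or replace trigger inst_table
after insert on test_table
for each row
begin
  insert into test_detail (test_id, pid)
  select :new.id, pid
  from tbl2
  where pid not in (1);
end;
/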

Oracle DDL in autonomous transaction

I need to execute a bunch of (up to ~1000000) SQL statements on an Oracle database. These statements should result in a referentially consistent state at the end, and all of them should be rolled back if an error occurs. The statements do not come in referential order, so if foreign key constraints are enabled, one of the statements may cause a foreign key violation even though that violation would be fixed by a statement executed later on.
I tried disabling the foreign keys first and enabling them after all statements were executed. I thought I would be able to roll back when there was an actual foreign key violation. I was wrong though: I found out that every DDL statement in Oracle issues an implicit commit, so there was no way to roll back the statements this way. Here is my script for disabling the foreign keys:
begin
  for i in (select constraint_name, table_name
            from user_constraints
            where constraint_type = 'R'
              and status = 'ENABLED')
  loop
    execute immediate 'alter table ' || i.table_name ||
                      ' disable constraint ' || i.constraint_name;
  end loop;
end;
/
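(For reference, the enabling step mentioned above would mirror that loop; a sketch, noting that ENABLE VALIDATE checks existing rows, and that this naive version would also re-enable constraints that were already disabled for other reasons:

begin
  for i in (select constraint_name, table_name
            from user_constraints
            where constraint_type = 'R'
              and status = 'DISABLED')
  loop
    execute immediate 'alter table ' || i.table_name ||
                      ' enable validate constraint ' || i.constraint_name;
  end loop;
end;
/
)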
After some research, I found that it was recommended to execute DDL statements like these in an autonomous transaction. So I tried to run the DDL statements in an autonomous transaction. This resulted in the following error:
ORA-00054: resource busy and acquire with NOWAIT specified
I am guessing this is because the main transaction still holds locks on the tables.
Am I doing something wrong here, or is there any other way to make this scenario work?
There are several potential approaches.
The first thing to consider is that whatever you do at the table level will apply to all sessions using that table. If you haven't got exclusive access to that table, you probably don't want to drop/recreate constraints, or disable/enable them.
The second thing to consider is that you probably don't want to be in the position of rolling back a million inserts/updates. Rolling back can be SLOW.
Generally I would load into a temporary table, then do a single INSERT from the temporary table into the destination table. As a single statement, Oracle will apply all the constraint checks at the end.
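The shape of that staging approach, as a sketch with illustrative names:

-- scratch table shaped like the destination, private to the session
create global temporary table stage_emp
  on commit preserve rows
as
  select * from emp where 1 = 0;

-- run the million unordered statements against stage_emp first ...

-- ... then move everything across in one statement; constraints are
-- checked when the INSERT completes, and the whole INSERT rolls back
-- if any row violates them
insert into emp select * from stage_emp;
commit;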
If you can't go through a temporary table (eg updates to existing data), before starting, make the constraints DEFERRABLE INITIALLY IMMEDIATE. Then, within your session,
SET CONSTRAINTS emp_job_nn, emp_salary_min DEFERRED;
You can then apply the changes and, when you commit, the constraints will be validated.
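Redefining a constraint as deferrable might look like this sketch (constraint and table names are illustrative; an existing constraint has to be dropped and re-added, since the deferrable attribute cannot be altered in place):

alter table emp drop constraint emp_dept_fk;

alter table emp add constraint emp_dept_fk
  foreign key (dept_id) references dept (dept_id)
  deferrable initially immediate;

-- within the loading session:
set constraints emp_dept_fk deferred;
-- ... apply the statements in any order ...
commit; -- validated here; a violation rolls the transaction back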
You should acquaint yourself with DML error logging, as it can help identify any rows causing violations.
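A sketch of DML error logging, reusing the staging example above (the error log table is created by dbms_errlog; names are illustrative):

-- one-off setup: creates ERR$_EMP to capture rejected rows
begin
  dbms_errlog.create_error_log(dml_table_name => 'EMP');
end;
/

insert into emp
select * from stage_emp
log errors into err$_emp ('nightly load') reject limit unlimited;

-- offending rows, with the Oracle error for each, end up in ERR$_EMP
select ora_err_number$, ora_err_mesg$ from err$_emp;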
