I can't fully understand how a statement-level trigger works. It executes once for each transaction, right? Say I have an AFTER INSERT statement-level trigger whose body updates one specific column when a condition is met (e.g. for the column STATUS: UPDATE table_name SET STATUS = 'Single' WHERE COLUMN IS NULL).
Will only the newly inserted rows be affected, or every row in the table that has a null in that column? I'd be glad to hear your knowledge about this.
A statement level trigger will fire once after the triggering statement has run, unlike a row level trigger which fires for each affected row.
After statement triggers are generally used to do processing of the set of data - e.g. logging into a table, or running some post-statement processing (usually a procedure).
If you're wanting to update a value in every affected row, then I would advise using a before row level trigger. The update statement in your question would affect all rows where the COLUMN column is null.
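For the situation in the question, a minimal sketch of that row-level approach might look like the following. The trigger name is made up, and I'm assuming the column being tested is the same STATUS column being set:

CREATE OR REPLACE TRIGGER table_name_bi
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
  -- Only the row currently being inserted is touched; no separate UPDATE runs
  IF :new.status IS NULL THEN
    :new.status := 'Single';
  END IF;
END;
/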
Whether a trigger is actually the right thing to use is debatable. However, I would recommend you look at the documentation and also this Oracle-base article to gain a better understanding of how triggers work and when you might use them.
I came across this update statement and was wondering how it works internally. It updates a column which is also used in the WHERE clause of the update.
Should this ideally be done in two steps, or does Oracle take care of it automatically?
UPDATE TBL1 SET DATE1 = DATE2 WHERE DATE2 > DATE1
Oracle takes care of it automatically. Effectively when it runs the update, Oracle performs the following steps:
1. Query the table - i.e. evaluate the WHERE clause predicate against each row in the table.
2. For each row returned by step 1, update it as per the SET clause. The column values used in the SET expressions are the values that were fetched in step 1 (i.e. the pre-update values).
For this reason, it is perfectly possible to run an update like this which swaps the values of columns:
UPDATE TBL1 SET DATE1=DATE2, DATE2=DATE1 WHERE DATE2 > DATE1;
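To make that concrete, a small invented illustration:

-- Suppose a row currently has DATE1 = DATE '2024-01-01' and DATE2 = DATE '2024-06-01'.
UPDATE TBL1 SET DATE1 = DATE2, DATE2 = DATE1 WHERE DATE2 > DATE1;
-- Afterwards that row has DATE1 = 2024-06-01 and DATE2 = 2024-01-01, because both
-- SET expressions use the values that were read before the row was modified.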
The update might be blocked if another session has already updated or deleted (and not yet committed) one of the same rows. Deadlocks are possible, but Oracle detects them automatically and resolves them by rolling back the statement in one of the sessions and raising an ORA-00060 deadlock error in that session.
Problem: I have a table to which a customer may add columns. This table might have hundreds of columns of varying data types depending on how insane the customer is. I need to deploy an AFTER UPDATE trigger against this table to insert a row in another table for each column value that has changed.
Example:
Table_A, Row 1: Key_Value=1, Col1=123, Col2="foo"...Coln="bar"
becomes
Table_B, Row 1: Key_Value=1, ColName="Col1", ColValue=123
Table_B, Row 2: Key_Value=1, ColName="Col2", ColValue="foo"
Table_B, Row 3: Key_Value=1, ColName="Coln", ColValue="bar"
Since I do not know what columns they may create and this trigger must be deployed with the application, I need to evaluate the OLD vs NEW pseudo records dynamically (if :new.columns[1] != :old.columns[1] then...) to see what has changed and log only the changed columns. The only examples I have been able to find require referencing the columns in the pseudo records explicitly (if :new.col1 != :old.col1 then...).
Question: Is there a way to do this in Oracle?
Caveats: No, this is not for auditing purposes, so I cannot use Oracle's built-in auditing. No, we are not going to rewrite our app because you know how to do it better, this is the way it needs to work for better or worse.
Any helpful comments are welcome. All snarky DBA drivel is not. Thanks in advance.
No. You can't dynamically reference columns in the :new or :old pseudorecord.
The closest you're likely to come is to write code that dynamically generates the entire trigger body by querying the data dictionary and making static references to columns in the pseudorecord. That code, however, would need to be run every time a column was added or removed from the table. Normally, that would be done as part of normal release management.

If you are saying that people are adding and removing columns from this table without going through a release process, you could write a DDL trigger that submitted a job via dbms_job that called the procedure that rebuilt the trigger. That would be a lot of moving pieces, and it would be a pain to troubleshoot when something inevitably goes wrong, but if you're not open to alternate ways of implementing the functionality, that's complexity you'll have to live with.
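As a rough sketch of that generation approach (not production code: the trigger name, the TO_CHAR conversion, the NULL-safe comparison and the exclusion of the key column are all assumptions, it ignores edge cases such as LOB columns, and it needs Oracle 11g or later to EXECUTE IMMEDIATE a CLOB), something like this could be run from release management or from the job submitted by the DDL trigger:

DECLARE
  l_sql CLOB;
BEGIN
  l_sql := 'CREATE OR REPLACE TRIGGER table_a_aur' || CHR(10) ||
           'AFTER UPDATE ON table_a' || CHR(10) ||
           'FOR EACH ROW' || CHR(10) ||
           'BEGIN' || CHR(10);

  -- One static IF per column currently in the table, built from the data dictionary
  FOR c IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = 'TABLE_A'
               AND column_name <> 'KEY_VALUE')
  LOOP
    l_sql := l_sql ||
      '  IF :new.' || c.column_name || ' <> :old.' || c.column_name || CHR(10) ||
      '     OR (:new.' || c.column_name || ' IS NULL AND :old.' || c.column_name || ' IS NOT NULL)' || CHR(10) ||
      '     OR (:new.' || c.column_name || ' IS NOT NULL AND :old.' || c.column_name || ' IS NULL)' || CHR(10) ||
      '  THEN' || CHR(10) ||
      '    INSERT INTO table_b (key_value, colname, colvalue)' || CHR(10) ||
      '    VALUES (:new.key_value, ''' || c.column_name || ''', TO_CHAR(:new.' || c.column_name || '));' || CHR(10) ||
      '  END IF;' || CHR(10);
  END LOOP;

  l_sql := l_sql || 'END;';
  EXECUTE IMMEDIATE l_sql;
END;
/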
Please suppose you have, in Oracle Database, a BEFORE UPDATE TRIGGER.
It fires only when a particular column is assigned a certain value (for example, when the string 'SUBSTITUTE' is assigned to the ALPHA column by the update); otherwise it does not fire.
This trigger does many queries and, under certain conditions, updates some records of the triggered table.
Being a BEFORE UPDATE TRIGGER, could it cause a MUTATING TABLE error?
You can assume that the body of the trigger does not update the ALPHA column, but could update other columns and/or insert new records in the same table, using :OLD values.
An update of the ALPHA column to the string value 'SUBSTITUTE' causes the trigger to fire.
A mutating table is a table that is currently being modified by an update, delete, or insert statement. If your before-update for-each-row trigger tries to modify the table that it is defined against then it will get an ORA-04091: table X is mutating, trigger/function may not see it error. Here's a SQL Fiddle with a trivial example.
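Since the linked Fiddle isn't reproduced here, a trivial reproduction along the same lines might look like this (object names invented):

CREATE TABLE t (id NUMBER, alpha VARCHAR2(20));
INSERT INTO t VALUES (1, 'X');

CREATE OR REPLACE TRIGGER t_bur
BEFORE UPDATE ON t
FOR EACH ROW
WHEN (new.alpha = 'SUBSTITUTE')
DECLARE
  l_count NUMBER;
BEGIN
  -- Reading (or writing) T from inside a row-level trigger on T is not allowed
  SELECT COUNT(*) INTO l_count FROM t;
END;
/

UPDATE t SET alpha = 'SUBSTITUTE' WHERE id = 1;
-- fails with ORA-04091: table ...T is mutating, trigger/function may not see it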
You'd get the same with an after-update trigger depending on what you're doing; and you can't make it statement-level if you need to act depending on the :new.alpha value.
Both the 'does many queries' part and the update suggest that perhaps a trigger is not the right tool here; this is quite vague though, and what the right tool is depends on what you're doing. A procedure that makes all the necessary changes and is called instead of the simple update might be one solution, for example.
If I have a column called NAME and it has a value of "CLARK" and I run an update statement
update table1 set name = 'CLARK';
Does Oracle actually update the column or does it ignore the update command since the values are the same?
I found this question (Oracle, how update statement works) and the first answer implies that an update occurs even if the values are equal. I also tried it in SQL Developer and it ran but I don't know if an update truly occurred.
Thanks in advance.
Yes, Oracle does update the column even if the value is the same.
In a really simple example, this makes no difference. But consider the following:
When a record is updated, a lock is obtained on that record for the updating session.
When a record is updated, any update triggers on the table will fire.
These aspects of the update show that the column is actually updated.
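One way to see this for yourself (the trigger and objects below are mine, built around the NAME/'CLARK' example in the question) is to add a row-level update trigger and watch it fire even though the value is unchanged:

CREATE TABLE table1 (name VARCHAR2(30));
INSERT INTO table1 VALUES ('CLARK');

CREATE OR REPLACE TRIGGER table1_aur
AFTER UPDATE OF name ON table1
FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('fired: old=' || :old.name || ', new=' || :new.name);
END;
/

SET SERVEROUTPUT ON
UPDATE table1 SET name = 'CLARK';
-- prints "fired: old=CLARK, new=CLARK" and reports "1 row updated",
-- even though the value has not changed.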
Of course, perhaps there are some optimisations when the value is the same, but these are not visible to you as a user of Oracle.
Yes, all rows are updated and all triggers fire, even if the actual values don't change.
I have an Oracle trigger which calls a stored procedure that has PRAGMA AUTONOMOUS_TRANSACTION defined. The values that are passed from the trigger have already been committed, but it appears that they are not available in the stored procedure. I'm not positive of this, since debugging/logging/committing here is difficult and the timing of the output is confusing me a bit. I'd like to know whether it's expected that any passed values are simply available in the stored procedure, regardless of the AUTONOMOUS_TRANSACTION?
Thanks
Values passed in to a stored procedure as parameters will always be available to the stored procedure. It doesn't matter whether the procedure is declared using an autonomous transaction.
Code running in an autonomous transaction cannot see changes made by the calling transaction. 9 times out of 10, when people are describing problems seeing the data they expect, this is the source of the problem.
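A small hypothetical illustration of that rule (the procedure, tables and columns here are all made up):

CREATE OR REPLACE PROCEDURE log_row_count(p_id NUMBER) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_count NUMBER;
BEGIN
  -- p_id is a parameter, so its value is always visible here.
  -- This query, however, runs in a separate transaction, so it cannot see rows the
  -- caller has changed in SOME_TABLE but not yet committed.
  SELECT COUNT(*) INTO l_count FROM some_table WHERE id = p_id;

  INSERT INTO log_table (msg)
  VALUES ('visible rows for ' || p_id || ': ' || l_count);

  COMMIT;  -- commits only this autonomous transaction's own work
END;
/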
If your stored procedure is doing anything other than writing something to a log table, I would be exceptionally cautious about using autonomous transactions. If you are using autonomous transactions for anything other than logging, you are almost certainly using them incorrectly. And you are probably introducing a whole host of bugs related to race conditions and transactional integrity.
"The trigger logic is conditionally
updating Table B which calls the
stored procedure to select from the
values on Table A so that Table B can
be updated with a calculated value. "
Perhaps Table B really ought to be a Materialized View derived from Table A? We can build a lot of complexity into the WHERE clauses of the queries which populate MViews. Find out more.
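As a rough sketch of that idea (the column names, the aggregate and the refresh strategy are all placeholders, since we don't know Table A's structure or how fresh Table B needs to be):

CREATE MATERIALIZED VIEW table_b
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND   -- ON COMMIT / fast refresh would need MView logs on table_a
AS
SELECT key_value,
       SUM(amount) AS calculated_value   -- stand-in for the real calculation
  FROM table_a
 WHERE status = 'ACTIVE'                 -- the "complexity in the WHERE clause" lives here
 GROUP BY key_value;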
If you have a row level trigger on table_x, then that trigger can be fired multiple times by the same statement as different rows are impacted by that statement.
The order in which those rows are impacted is indeterminate. As such, the state of table_x is indeterminate during the execution of a row level trigger. This is why the MUTATING TABLE exception is raised.
An autonomous transaction 'cheats' by looking at the committed state of the table (i.e. excluding all changes made by that statement, and by other statements in the transaction).
If you want a stored procedure to look at the state of table_x in response to activity on that table, then it needs to be done after all the row changes have been made (i.e. in a statement level trigger, not a row level trigger).
The design pattern for this is often to set a flag (package level variable) in a row level trigger, check the flag in an AFTER statement level trigger, and if necessary action it and reset it.
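A minimal sketch of that pattern (all the names, and the condition being checked, are invented):

CREATE OR REPLACE PACKAGE table_x_trg_pkg AS
  g_needs_action BOOLEAN := FALSE;
END table_x_trg_pkg;
/

CREATE OR REPLACE TRIGGER table_x_bur
BEFORE UPDATE ON table_x
FOR EACH ROW
BEGIN
  -- Row-level trigger: only record that something of interest happened
  IF :new.status = 'COMPLETE' THEN
    table_x_trg_pkg.g_needs_action := TRUE;
  END IF;
END;
/

CREATE OR REPLACE TRIGGER table_x_aus
AFTER UPDATE ON table_x
BEGIN
  -- Statement-level trigger: all row changes are done, so table_x can be read safely
  IF table_x_trg_pkg.g_needs_action THEN
    process_table_x;                            -- hypothetical procedure that reads table_x
    table_x_trg_pkg.g_needs_action := FALSE;    -- reset the flag for the next statement
  END IF;
END;
/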