As per Steven Feuerstein's book:
When an exception occurs in a PL/SQL block, the Oracle database does not roll back
any of the changes made by DML statements in that block. You are the manager of the application’s logical transaction, so you decide what kind of behavior should occur.
I gave it a try:
CREATE TABLE DML_Exception (exception_name VARCHAR2(20));
INSERT INTO DML_exception VALUES('CASE_NOT_FOUND');
INSERT INTO DML_exception VALUES('TOO_MANY_ROWS');
I got both rows in my table:
SELECT * FROM DML_Exception;
Then I deleted both rows from the table and raised an exception in a PL/SQL block:
BEGIN
DELETE FROM dml_exception;
raise value_error;
END;
But my table still contains both rows. What did I miss?
You missed some other parts of the book. Yes, Steven is right: if an exception occurs in a block, all preceding DML effects remain in place. However, the book should also mention that executing any top-level SQL or PL/SQL statement (an anonymous block included) opens a cursor for that statement, and if an exception escapes during the cursor's execution, all DML effects made during that execution are rolled back. Perhaps a simple example will give you the clue...
In your original example, you executed ...
BEGIN
DELETE FROM dml_exception;
raise value_error;
END;
... as the top-level statement. At the end of the block (while still inside it), your delete effects were indeed in place. However, your block raised an exception, which propagated all the way up to the top-level cursor. So, to preserve statement-level atomicity, Oracle rolled back all pending effects of that open cursor.
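You can check this from the same session: because the exception escaped the top-level block, the delete was rolled back, so the two rows are still there.
-- the block above failed, so its DELETE was rolled back
SELECT COUNT(*) FROM dml_exception;  -- returns 2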
If you call your PL/SQL block from within another top-level PL/SQL block, which handles and does not re-raise the exception raised in the lower-level PL/SQL block, ...
BEGIN
BEGIN
DELETE FROM dml_exception;
raise value_error;
END;
EXCEPTION
WHEN others THEN NULL;
END;
..., then your delete effects will remain in place. (And since there is no commit in that block, you end up with a transaction still in progress.)
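You can verify this from the same session, where the uncommitted delete is visible:
SELECT COUNT(*) FROM dml_exception;  -- returns 0 in this session
COMMIT;  -- make the delete permanent (or ROLLBACK to undo it)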
Related
I need to delete one or more rows from a list of tables stored in a table, and commit only if all the deletions succeed.
So I wrote something like this (as part of a bigger procedure):
BEGIN
SAVEPOINT sp;
FOR cur_table IN (SELECT * FROM TABLE_OF_TABLES)
LOOP
EXECUTE IMMEDIATE 'DELETE FROM ' || cur_table.TABNAME || ' WHERE ID = :id_bind'
USING id;
END LOOP;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO SAVEPOINT sp;
END;
I know this couldn't work, because of the EXECUTE IMMEDIATE.
So, what is the correct way to do that?
Dynamic SQL (EXECUTE IMMEDIATE) doesn't commit the transaction; you have to commit explicitly. Your code is fine, but it doesn't record/log the errors if they occur.
Log the errors in the exception handler section.
Honestly, this sounds like a bad idea. Since you have the tables stored in the database, why not just list them out in your procedure? Surely you're not adding tables so often that this procedure would need to be updated frequently?
As pointed out in the comments, this will work fine, since EXECUTE IMMEDIATE does not automatically commit.
Don't forget to add a RAISE at the end of your exception block, or you'll never know that an error happened.
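Putting the two suggestions together, the handler might look something like this sketch (error_log is a hypothetical logging table, and id is the variable from the surrounding procedure):
BEGIN
  SAVEPOINT sp;
  FOR cur_table IN (SELECT * FROM table_of_tables) LOOP
    EXECUTE IMMEDIATE 'DELETE FROM ' || cur_table.tabname || ' WHERE ID = :id_bind'
      USING id;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK TO SAVEPOINT sp;
    -- error_log is a hypothetical table; log however your application usually does
    INSERT INTO error_log (logged_at, message) VALUES (SYSTIMESTAMP, SQLERRM);
    RAISE;  -- re-raise so the caller knows the deletions failed
END;
Note that the log insert itself is rolled back if the caller rolls back, which is why logging is often done in an autonomous-transaction procedure.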
I want to write a stored proc that delete data from two tables.
Should either of the deletes fail I want to make sure that no data was deleted.
This should be a simple task, but I have never worked in Oracle before.
I'm not sure if I should be using TRY/CATCH, TRANSACTIONS, or SAVEPOINTS.
Any guidance would be appreciated.
Currently I have:
CREATE OR REPLACE PROCEDURE SP_DELETE_STUFF
(
GROUPNAME IN VARCHAR2
) AS
BEGIN
SAVEPOINT Original_Start;
-- First delete all permissions for a given group
DELETE FROM my_table_1
WHERE group_name = GROUPNAME;
-- Second delete the group
DELETE FROM my_table_2
WHERE group_name = GROUPNAME;
EXCEPTION
WHEN OTHERS THEN
BEGIN
ROLLBACK TO SAVEPOINT Original_Start;
COMMIT;
END;
END;
If your goal is just to rollback the changes that a particular call of the stored procedure has made if there is an error, you'd use a savepoint and a rollback to savepoint like you are doing here.
I would question your use of a commit after your rollback to savepoint. That will commit the transaction that the caller of your stored procedure had started. It seems unlikely that you want to commit the caller's changes when your procedure encounters an error. So I would expect that you want to remove the commit.
Not related to transaction scoping, I would also expect that you would want to at least re-raise the exception that you caught so that the caller is aware that there was a failure. I would expect that you would want something like
EXCEPTION
WHEN OTHERS THEN
BEGIN
ROLLBACK TO SAVEPOINT Original_Start;
RAISE;
END;
I've created a trigger that only allows a user to have 10 currently placed orders. So now, when the customer tries to place order number 11, the Oracle database throws back an error. Well, 3 errors.
ORA-20000: You currently have 10 or more orders processing.
ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12
ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'
The top error is one I've created using:
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
After searching and trying many ways, I just wondered how I can change the error messages for the other two errors, or even not show them to the user altogether?
Here is the code I've used:
create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
v_count number;
begin
-- Get current order count
select count(order_id)
into v_count
from placed_order
where fk1_customer_id = :new.fk1_customer_id;
-- Raise exception if there are too many
if v_count >= 10 then
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
end if;
end;
Thanks a lot
Richard
Exception propagation goes from the internal block to the external block, as opposed to variable scope, which goes from the external block to the internal one. For more on this, read McLaughlin's "Programming with PL/SQL", Chapter 5.
What you are getting here is an exception stack - exceptions raised from the innermost blocks to the outermost blocks.
When you raise an exception from a trigger, your raise_application_error statement returns an error.
It is then propagated to the trigger block, which reports ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12. This is because the trigger treats the raised exception as an error and stops execution.
The error is then propagated to the session, which raises ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'. This error tells us where, i.e. in which part of the program, the error was raised.
If you are using a front-end program like Java Server Pages or PHP, you will catch the raised error (-20000) first, so you can display it to your end user.
EDIT :
About the first error - ORA-20000, you can change it in the RAISE_APPLICATION_ERROR statement itself.
If you want to handle the ORA-06512, you can use Uday Shankar's answer which is helpful in taking care of this error and showing an appropriate error message.
But you will still get the last ORA-04088. If I were in your place I wouldn't worry: after getting the ORA-20000, I would raise an application error at the front end itself while hiding all the other details from the user.
In fact, this is the nature of Oracle's exception stack: all the errors from the innermost to the outermost block are raised. This is often helpful for identifying the exact source of an error.
In the trigger you can add the exception handling part as shown below:
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
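Applied to the original trigger, the whole thing would read roughly like this (a sketch; note that the WHEN OTHERS handler turns any error in the trigger body into the same ORA-20000 message):
create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
  v_count number;
begin
  select count(order_id)
  into v_count
  from placed_order
  where fk1_customer_id = :new.fk1_customer_id;
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
exception
  when others then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
end;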
I see that this is quite an old post, but I think readers should be aware that this does not really enforce the business rule (max 10 orders). If it is just "some" number to avoid too-high amounts, and you don't care if people sometimes have 12 orders, then this may be fine. But if not, think of a scenario where you already have 9 orders and then orders for the same customer are inserted from 2 different sessions / transactions simultaneously. In that case you will end up with 11 orders without detecting the overflow, so you cannot actually rely on this trigger.
Besides that, you might need to have this trigger fire on update too, if fk1_customer_id can be updated (I have seen implementations where a NULL is first put into the FK column and later updated to the actual value). You may want to consider whether this scenario is realistic.
There is a fundamental flaw in the trigger. You are inside a transaction and inside a statement that is currently being executed but is not complete yet. So what if the insert is not a single-row insert but something like
insert into placed_order (select ... from waiting_orders ...)
what do you expect the trigger to see?
This kind of business rule is not easy to enforce. But if you choose to do it in a trigger, you had better do it in an AFTER statement trigger (thus, not in a BEFORE row trigger); see the sketch below. An after-statement trigger still will not see the results of other uncommitted transactions, but at least the current statement is in a defined state.
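For illustration only, such a statement-level trigger might look like this (a sketch, not the poster's code; since a statement trigger has no :new values, it re-checks all customers):
create or replace trigger trg_order_limit_stmt
after insert or update on placed_order
declare
  v_over number;
begin
  -- count customers that now exceed the limit (still blind to other sessions' uncommitted rows)
  select count(*) into v_over
  from (select fk1_customer_id
        from placed_order
        group by fk1_customer_id
        having count(*) > 10);
  if v_over > 0 then
    raise_application_error(-20000, 'A customer has more than 10 orders processing.');
  end if;
end;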
In fact, the business rule can fundamentally only be enforced at commit time, but there is no such thing as an ON-COMMIT trigger in the Oracle database.
What you can do is denormalize the count of records into the customers table (add a column ORDER_COUNT) and place a deferred constraint (ORDER_COUNT <= 10) on that table.
But then you are still relying on this column being correctly maintained throughout your code.
A fully reliable, but somewhat cumbersome, alternative is to create a materialized view (something like SELECT fk1_customer_id, COUNT(*) order_count FROM placed_order GROUP BY fk1_customer_id) with FAST REFRESH ON COMMIT on the placed_order table, and to create a check constraint order_count <= 10 on the materialized view.
This is about the only way to reliably enforce this type of constraint without having to think of all possible situations like concurrent sessions, updates, etc.
Note, however, that the FAST REFRESH ON COMMIT will slow down your commits, so this solution is not usable for high volumes (sigh... why does Oracle not just provide an ON COMMIT trigger...).
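A sketch of that approach (the exact materialized view log options needed for fast refresh vary by Oracle version):
CREATE MATERIALIZED VIEW LOG ON placed_order
  WITH SEQUENCE, ROWID (fk1_customer_id) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_order_count
  REFRESH FAST ON COMMIT
  AS SELECT fk1_customer_id, COUNT(*) order_count
     FROM placed_order
     GROUP BY fk1_customer_id;

-- the check constraint is validated when the view refreshes, i.e. at commit time
ALTER TABLE mv_order_count
  ADD CONSTRAINT chk_order_count CHECK (order_count <= 10);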
This information should be easy to find, but I haven't had any luck.
When I have a BEGIN-END block in PL/SQL, does it behave as an atomic transaction that tries to commit on hitting the END, and rolls back the changes if anything goes wrong?
If not, how do I make sure that the code inside the BEGIN-END block behaves like an atomic transaction, and how does the block behave "by default"?
EDIT: I am running from a stored procedure and I am using an implicit block, I think.
Firstly, BEGIN..END are merely syntactic elements, and have nothing to do with transactions.
Secondly, in Oracle all individual DML statements are atomic, i.e. they either succeed in full or roll back any intermediate changes on the first failure (unless you use the EXCEPTIONS INTO option, which I won't go into here).
If you wish a group of statements to be treated as a single atomic transaction, you'd do something like this:
BEGIN
SAVEPOINT start_tran;
INSERT INTO .... ; -- first DML
UPDATE .... ; -- second DML
BEGIN ... END; -- some other work
UPDATE .... ; -- final DML
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO start_tran;
RAISE;
END;
That way, any exception will cause the statements in this block to be rolled back, but any statements that were run prior to this block will not be rolled back.
Note that I don't include a COMMIT - usually I prefer the calling process to issue the commit.
It is true that a BEGIN..END block with no exception handler will automatically handle this for you:
BEGIN
INSERT INTO .... ; -- first DML
UPDATE .... ; -- second DML
BEGIN ... END; -- some other work
UPDATE .... ; -- final DML
END;
If an exception is raised, all the inserts and updates will be rolled back; but as soon as you want to add an exception handler, it won't roll back automatically. So I prefer the explicit method using savepoints.
BEGIN-END blocks are the building blocks of PL/SQL, and each PL/SQL unit is contained within at least one such block. Nesting BEGIN-END blocks within PL/SQL blocks is usually done to trap certain exceptions, handle them specially, and then re-raise unrelated exceptions. Nevertheless, in PL/SQL you (the client) must always issue a commit or rollback for the transaction.
If you wish to have atomic transactions within a containing PL/SQL transaction, you need to declare a PRAGMA AUTONOMOUS_TRANSACTION in the declaration block (see the sketch after the list below). This ensures that any DML within that block can be committed or rolled back independently of the containing transaction.
However, you cannot declare this pragma for nested blocks. You can only declare this for:
Top-level (not nested) anonymous PL/SQL blocks
Local, standalone, and packaged functions and procedures
Methods of a SQL object type
Database triggers
Reference: Oracle
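As a sketch, a standalone logging procedure could use the pragma like this (message_log is a hypothetical table):
CREATE OR REPLACE PROCEDURE log_message (p_text IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- message_log is a hypothetical table
  INSERT INTO message_log (logged_at, msg) VALUES (SYSTIMESTAMP, p_text);
  COMMIT;  -- commits only this autonomous transaction, not the caller's
END;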
You don't mention whether this is an anonymous PL/SQL block or a named one, i.e. a package, procedure, or function.
However, in PL/SQL a COMMIT must be issued explicitly to save your transaction(s) to the database. The COMMIT actually saves all unsaved changes of your current user's session.
If an error occurs, the transaction implicitly does a ROLLBACK.
This is the default behaviour for PL/SQL.
The default commit behavior of a PL/SQL block:
You should explicitly commit or roll back every transaction. Whether you issue the commit or rollback in your PL/SQL program or from a client program depends on the application logic. If you do not commit or roll back a transaction explicitly, the client environment determines its final state.
For example, in the SQL*Plus environment, if your PL/SQL block does not include a COMMIT or ROLLBACK statement, the final state of your transaction depends on what you do after running the block. If you execute a data definition, data control, or COMMIT statement, or if you issue the EXIT, DISCONNECT, or QUIT command, Oracle commits the transaction. If you execute a ROLLBACK statement or abort the SQL*Plus session, Oracle rolls back the transaction.
https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/sqloperations.htm#i7105
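For example, in a SQL*Plus session (a sketch reusing the dml_exception table from the first question), the delete below stays uncommitted until the session ends; issuing EXIT commits it, while ROLLBACK would undo it:
SQL> BEGIN
  2    DELETE FROM dml_exception;
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> EXIT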
The following link in the PostgreSQL documentation manual, http://www.postgresql.org/docs/8.3/interactive/populate.html, says that to disable autocommit in PostgreSQL you can simply place all insert statements between BEGIN; and COMMIT;.
However, I have difficulty capturing any exceptions that may happen between BEGIN; and COMMIT;, and if an error occurs (like trying to insert a duplicate PK) I have no way to explicitly call the ROLLBACK or COMMIT commands. Although all insert statements are automatically rolled back, PostgreSQL still expects an explicit call to either COMMIT or ROLLBACK before it can consider the transaction to be terminated. Otherwise, the script has to wait for the transaction to time out, and any statements executed thereafter will raise an error.
In a stored procedure you can use the EXCEPTION clause to do this, but the same does not apply in my circumstance of performing bulk inserts. I have tried it, and the exception block did not work for me because the statements executed after the error occurs fail with the error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
The transaction remains open, as it has not been explicitly finalised with a call to COMMIT or ROLLBACK.
Here is a sample of the code I used to test this:
BEGIN;
SET search_path TO testing;
INSERT INTO friends (id, name) VALUES (1, 'asd');
INSERT INTO friends (id, name) VALUES (2, 'abcd');
INSERT INTO friends (id, nsame) VALUES (2, 'abcd'); /*note the deliberate mistake in attribute name and also the deliberately repeated pk value number 2*/
EXCEPTION /* this part does not work for me */
WHEN OTHERS THEN
ROLLBACK;
COMMIT;
When using such a technique, do I really have to guarantee that all statements will succeed? Why is that? Isn't there a way to trap errors and explicitly call a rollback?
Thank you
If you do it between BEGIN and COMMIT, then everything is automatically rolled back in case of an error.
Excerpt from the url you posted:
"An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail then the insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data."
When I initialize databases, i.e. create a series of tables/views/functions/triggers/etc. and/or load the initial data, I always use psql and its variables to control the flow. I always add:
\set ON_ERROR_STOP
to the top of my scripts, so whenever psql hits any error, it will abort. It looks like this might help in your case too.
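The same setting can also be passed on the command line when running a script (populate.sql and mydb are placeholder names):
psql -v ON_ERROR_STOP=1 -f populate.sql mydb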
And in cases when I need to do some exception handling, I use anonymous code blocks like this:
DO $$
DECLARE _rec record;
BEGIN
  -- loop over the schemas listed in information_schema.schemata
  FOR _rec IN SELECT * FROM information_schema.schemata WHERE schema_name != 'master' LOOP
    EXECUTE 'DROP SCHEMA '||_rec.schema_name||' CASCADE';
  END LOOP;
EXCEPTION WHEN others THEN
  NULL;  -- swallow any error so the script can continue
END;$$;
DROP SCHEMA master CASCADE;