Add Conditional Transactional Behavior In Oracle Procedure

In Oracle 10g, I have one stored procedure per table per operation (insert, select, update, delete). Actually, there can be multiple procedures per table per operation; for example, for select there can be SelectList, SelectOneRecord, and Search (with a dynamic query).
None of these procedures have transactions.
Sometimes I have to combine multiple operations in a transaction, for example an insert into one table and an update of another table, all in one transaction. For this I make a separate procedure that has the transaction; this procedure then calls the two procedures.
To enable such combinations of procedure calls in a single transaction, I do not put any transaction behavior in the basic procedures, as discussed above.
Most of the time I have to perform only one operation, such as an insert into one table. Since the insert procedure does not have transaction behavior, I have to make a separate procedure that does, and that procedure calls the insert procedure.
I end up with lots of basic procedures (one table, one operation) and lots of transaction procedures that are basically wrappers around basic procedures.
My question is: is there some way to have conditional transactional behavior in the basic procedures? By this I mean some if-condition around the transaction logic, so that the transaction behavior can be turned on or off by a parameter that I pass. Then, when I want to do only one operation, such as an insert into a table, I call the basic procedure with transaction behavior on; and when I want to call two procedures in one transaction, such as an insert into one table and an update of another table, I make a separate transaction procedure and call the two basic procedures with transaction behavior off.
The following is a transaction procedure that calls another procedure and wraps it in a transaction:
BEGIN
   SAVEPOINT the_start;
   BasicProcedure(<list of parameters>);
   COMMIT;
EXCEPTION
   WHEN OTHERS THEN
      BEGIN
         ROLLBACK TO the_start;
         RAISE;
      END;
END;
I can very well put the savepoint line and the commit line in if-statements, but can I also put the exception block in an if-statement? Do I have to? And if I catch the exception in the procedure, does it automatically roll back when the exception occurs?
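To illustrate, here is a minimal sketch of what such a conditional wrapper could look like; the procedure, the table, and the p_own_transaction flag are hypothetical names, not anything from the question:

CREATE OR REPLACE PROCEDURE InsertRecord (
   p_value           IN VARCHAR2,
   p_own_transaction IN BOOLEAN DEFAULT TRUE  -- hypothetical on/off switch
) AS
BEGIN
   IF p_own_transaction THEN
      SAVEPOINT the_start;
   END IF;
   INSERT INTO some_table (some_column) VALUES (p_value);
   IF p_own_transaction THEN
      COMMIT;
   END IF;
EXCEPTION
   WHEN OTHERS THEN
      -- only roll back to the savepoint if we actually created one
      IF p_own_transaction THEN
         ROLLBACK TO the_start;
      END IF;
      RAISE;  -- re-raise so an enclosing transaction procedure can roll back
END InsertRecord;

With p_own_transaction => FALSE the procedure leaves commit and rollback entirely to the caller, which is exactly the wrapper pattern shown above.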

What you have written is called a Table API. Some people swear by them, others anathematize them. The case for Table APIs is basically: modularity and code reuse. The case against is primarily performance: they encourage row by row or table by table processing, when a bespoke SQL join would be more efficient.
Personally I have stood on both sides of the fence. Currently I favour tailored code: it makes transactions easier to work with.
Which brings me to your situation. Table APIs are supposed to be generic and usable in all situations. That means they cannot control the management of transactions: that properly belongs to the programs which call the Table API methods. These programs are the code which implements the business logic: a business transaction consists of a number of activities which constitute a Unit of Work. All of these have to succeed in order for the transaction to succeed, otherwise the business transaction needs to be rolled back. If the Table API commands issue their own commits, a subsequent failure would leave the business transaction in an inconsistent state. ACID applies at this level as well as at the individual SQL statement level.
This is actually no different from writing stored procedures with bespoke SQL in them.
"it's unclear to me if you're advocating making the business logic
transactions stored procedures."
This is a large area to cover, and there's more to designing PL/SQL applications than business logic. (If you have access to a time machine you should travel back to Open World 2009 to catch my presentation on "Designing PL/SQL with Intent".)
But broadly, yes, the outward-facing aspect of the PL/SQL layer should consist of business logic APIs, organised around Units of Work i.e. business transactions.
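By way of illustration, a minimal sketch of such a Unit of Work procedure; the orders_api and stock_api table-API calls are hypothetical stand-ins:

CREATE OR REPLACE PROCEDURE place_order (
   p_order_id IN NUMBER,
   p_item_id  IN NUMBER,
   p_qty      IN NUMBER
) AS
BEGIN
   -- hypothetical table-API calls; neither commits on its own
   orders_api.insert_order(p_order_id, p_item_id, p_qty);
   stock_api.update_stock(p_item_id, -p_qty);

   COMMIT;  -- the Unit of Work succeeds as a whole...
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;  -- ...or fails as a whole
      RAISE;
END place_order;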
"such procedures be autonomous transactions? "
Absolutely not. Autonomous transactions are fit for only one purpose: logging and auditing of activity, where we need a permanent record of what occurred without affecting the wider transaction. Any other use of that pragma is a kludge or a data corruption bug waiting to happen.
Business transactions are transactions pure and simple. Either the stored procedures should own the commit or else they should defer it to the calling program.

Related

Is commit required in an Oracle stored procedure which is called from Java class?

I have an Oracle Stored Procedure that does some inserts and updates on a table in DB.
There is no explicit Commit or Rollback statement at the end of the procedure.
However, when I call this SP through a java class, I see that the inserts and updates are committed into the DB.
So can anyone help me understand if we really need a commit statement at the end of the stored procedure in Oracle?
I don't have Java experience, but as far as I know, when you close the database connection the data is committed (unless you roll it back first). Now, to return to your question of when to use COMMIT in a stored procedure:
When the procedure performs a DML operation (insert, update, delete) on a table, the affected rows are locked; any other user who tries to change those rows has to wait until you commit or roll back. So if your procedure takes a long time, due to a long loop or a badly optimized query, other users can be blocked. Committing soon after the DML releases the locks and avoids that blocking.
The other reason is the undo tablespace, where all uncommitted changes are kept until you commit them. If you insert a lot of data (millions of rows), your undo can fill up, depending on its size, and you'll get an error.
So, short answer: if your procedure doesn't do a lot of operations on big tables and is fast, you can skip the commit; otherwise it's better to add commits.

using spring transaction management with select queries [duplicate]

I don't use Stored procedures very often and was wondering if it made sense to wrap my select queries in a transaction.
My procedure has three simple select queries, two of which use the returned value of the first.
In a highly concurrent application it could (theoretically) happen that data you've read in the first select is modified before the other selects are executed.
If that is a situation that could occur in your application you should use a transaction to wrap your selects. Make sure you pick the correct isolation level though, not all transaction types guarantee consistent reads.
Update:
You may also find this article on concurrent update/insert solutions (aka upsert) interesting. It puts several common methods of upsert to the test to see what method actually guarantees data is not modified between a select and the next statement. The results are, well, shocking I'd say.
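As a concrete illustration (not from the original answer): in Oracle, a read-only transaction gives every select inside it the same consistent snapshot; the table names here are purely illustrative:

SET TRANSACTION READ ONLY;

SELECT balance     FROM accounts     WHERE id = 1;          -- illustrative tables
SELECT SUM(amount) FROM transactions WHERE account_id = 1;  -- sees the same snapshot

COMMIT;  -- ends the read-only transaction

Both queries see the database as of the moment the transaction began, even if other sessions commit changes in between.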
Transactions are usually used when you have INSERT, UPDATE or DELETE statements and you want atomic behavior, that is, either commit everything or commit nothing.
However, you could use a transaction for READ select statements to:
Make sure nobody else can update the table of interest while your batch of select queries is executing.
Have a look at this msdn post.
Most databases run every single query in a transaction even if one is not specified; it is implicitly wrapped. This includes select statements.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
https://www.postgresql.org/docs/current/tutorial-transactions.html

PostgreSQL vs. Oracle default transaction management

In PostgreSQL, if you encounter an error in a transaction (for example when your insert statement violates a unique constraint), the whole transaction is aborted; you cannot commit it and no rows are inserted:
database=# begin;
BEGIN
database=# insert into table (id, something) values ('1','whatever');
INSERT 0 1
database=# insert into table (id, something) values ('1','whatever');
ERROR: duplicate key value violates unique constraint "table_id_key"
Key (id)=(1) already exists.
database=# insert into table (id, something) values ('2','whatever');
ERROR: current transaction is aborted, commands ignored until end of transaction block
database=# rollback;
database=# select * from table;
 id | something
----+-----------
(0 rows)
You can change that by setting ON_ERROR_ROLLBACK to "on" or "interactive"; after that you can do multiple inserts ignoring errors, commit, and have only the successfully inserted rows in the table after the transaction ends.
database=# \set ON_ERROR_ROLLBACK interactive
In Oracle, this is the default transaction management behaviour, which surprises me. Isn't this completely counterintuitive and dangerous?
When I start a transaction I want to be sure that all the statements were successful. What if my multiple inserts make up some kind of object or data structure? I end up completely unaware of the state of the data in my database and have to check it after the commit.
If one of the inserts fails, I want to be sure that the other inserts will be rolled back or not even evaluated after the first error, which is exactly how it's done in PostgreSQL.
Why does Oracle have such way of transaction management as a default, and why is it considered good practice?
For example, from some random guy here in the comments:
"This is a very neat feature.

I don't understand this, though: 'Normally, any error you make will throw an exception and cause your current transaction to be marked as aborted. This is sane and expected behavior...'

No, it's really not. Oracle doesn't work this way, nor does MySQL. I have no experience with MSSQL or DB2 but I'll bet a dollar each they don't work this way either. There's no intuitive reason why a syntax error, or any other error for that matter, should abort a transaction. I can only assume there's either some limitation deep in the Postgres guts that requires this behavior, or that it conforms to some obscure part of the SQL standard that everyone else sensibly ignores. There's certainly no API/UX reason why it should work this way.

We really shouldn't be too proud of any workarounds we've developed for this pathological behavior. It's like IT Stockholm Syndrome."
Doesn't it even violate the definition of a transaction?
"Transactions provide an 'all-or-nothing' proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever."
I agree with you. I think it's a mistake not to abort the whole tx. But people are used to that, so they think it's reasonable and correct. Like people who use MySQL think that the DBMS should accept 0000-00-00 as a date, or people using Oracle expect that '' IS NULL.
The idea that there's a clear distinction between a syntax error and something else is flawed.
If I write
BEGIN;
CREATE TABLE new_customers (...);
INSET INTO new_customers (...)
SELECT ... FROM customers;
DROP TABLE customers;
COMMIT;
I don't care that it's a typo resulting in a syntax error that caused me to lose my data. I care that the transaction didn't successfully execute all its statements but still committed.
It'd be technically feasible to allow soft rollback in PostgreSQL before any rows are actually written by a statement - probably before we even enter the executor. So failures in the parse and parameter binding phases could allow the tx not to be aborted. We have a statement memory context we could use to clean up.
However, once the statement starts changing rows, it's doing so on disk with the same transaction ID as the prior statements in the tx. So you can't roll it back without rolling back the whole tx. To allow statement rollback Pg needs to assign a new subtransaction ID. That costs resources. You can do it explicitly with SAVEPOINTs when you want to, and internally that's what psql is doing. In theory we could allow the server to do this implicitly for each statement to implement statement rollback, just at a performance cost. But I doubt any patch implementing this would get committed, at least not without a LOT of argument, because most of the PostgreSQL team are (IMO reasonably) not fond of "whoops, that broke but we'll continue anyway" transaction semantics.
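For completeness, a minimal sketch of the explicit SAVEPOINT approach just mentioned (this is essentially what psql's ON_ERROR_ROLLBACK automates); the table t stands in for the example table above:

BEGIN;
INSERT INTO t (id, something) VALUES ('1', 'whatever');
SAVEPOINT before_insert;
INSERT INTO t (id, something) VALUES ('1', 'whatever');  -- fails: duplicate key
ROLLBACK TO SAVEPOINT before_insert;                     -- undoes only the failed statement
INSERT INTO t (id, something) VALUES ('2', 'whatever');  -- the transaction is usable again
COMMIT;                                                  -- rows '1' and '2' are committed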

Does the timing of COMMIT and ROLLBACK affect performance?

Suppose I have a set of IDs. For each ID, I will insert many records into many different tables based on that ID. Between inserts into different tables, various business checks are called. If any check fails, all the records inserted for this ID will be rolled back. This bulk insert is done using PL/SQL. Does the timing of the COMMIT and ROLLBACK affect performance, and how? For example, should I COMMIT after finishing the process for one ID, or after finishing all IDs?
This is not so much of a performance decision but a process design decision. Do you want the other IDs to stay in the database when you have to roll back a faulty ID?
For obvious reasons, rollback takes longer when more rows must be rolled back. Rollback usually takes longer (sometimes much longer!) than the operations that have to be rolled back. Commit is always fast in Oracle, so it probably doesn't matter how often you commit in that regard.
Your problem description indicates you have a large set of smaller logical transactions (each new ID is a transaction). You should commit each logical transaction. The two reasons to wait to commit the entire set of transactions are:
If the entire set of transactions is in fact a transaction itself - all inserts must succeed for any rows to be committed. In that context, your smaller "transactions" aren't truly transactions.
You don't have a restart capability in your bulk load process, which in effect makes this a special case of item 1. If your bulk load process aborts, you need a way to skip successfully applied ID's.
Tom Kyte's advice is to commit each logical unit of work - the transaction.
Don't make the transaction take longer than necessary; keep it as short as you can. Your statements create locks, and those locks may cause performance issues, so process it ID by ID.
There are two "forces" at work:

Locking: during your open transaction, Oracle puts locks on the changed rows. Whenever another transaction needs to update any of the locked rows, it has to wait. In the worst case, you can even build a deadlock.

Synchronous write: every commit performs a synchronous write. (There are ways to disable that, but it is usually the thing everybody wants: integrity.) That synchronous write can take (much) longer than a regular write (which can be buffered), and there is usually an additional network round trip involved with a commit.

So, one force says "commit as soon as possible (considering your integrity requirements)"; the other says "commit as seldom as possible".
There are some other issues to consider as well, e.g. the maximum transaction size. Every uncommitted transaction needs some temporary space; the bigger the transaction gets, the more you need. You can also run into ORA-01555 "snapshot too old".
If there is any advice to give, then it is to implement a configurable "commit frequency" so that you can easily change it as needed.
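For example, a minimal sketch of such a configurable commit frequency; load_ids, staging_ids and process_one_id are hypothetical names:

CREATE OR REPLACE PROCEDURE load_ids (p_commit_every IN PLS_INTEGER DEFAULT 100) AS
   l_done PLS_INTEGER := 0;
BEGIN
   FOR r IN (SELECT id FROM staging_ids) LOOP  -- staging_ids is illustrative
      process_one_id(r.id);                    -- hypothetical per-ID work
      l_done := l_done + 1;
      IF MOD(l_done, p_commit_every) = 0 THEN
         COMMIT;                               -- commit every p_commit_every IDs
      END IF;
   END LOOP;
   COMMIT;                                     -- commit the final partial batch
END load_ids;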
One option if you need to control the individual sets but retain the ability to commit or rollback the entire transaction is to use savepoints. You can set a savepoint at the beginning of the outermost loop, then rollback to it if an error occurs. You might end up with something like this:
begin
   --Initial batch logging
   for r_record in cur_cursor loop
      savepoint s_cursor;
      begin
         --Process rows
      exception
         when others then
            rollback to s_cursor;
      end;
   end loop;
   --Final batch logging
exception
   when others then
      rollback;
      raise;
end;

Find number of times the procedure is called using another procedure

I have two procedures A and B. Procedure A performs certain tasks. Procedure B has to monitor how many times procedure A is called in a day.
How to achieve this?
Add a statement to the procedure:
update statistics_table
set proc_a_count = proc_a_count + 1;
Of course, you'll have to create a suitable table to hold the count and initialize it with a zero in the field.
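That setup could be as small as this (the names are illustrative):

CREATE TABLE statistics_table (proc_a_count NUMBER);
INSERT INTO statistics_table VALUES (0);
COMMIT;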
Insert a row into a log table.
Oracle does not track this sort of thing by default but if you just want to record some simple information then switch on the built-in AUDIT functionality:
AUDIT EXECUTE PROCEDURE BY ACCESS;
You can view the accesses in the view dba_audit_trail.
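With that in place, counting the calls per day is a simple query against the audit trail; a sketch, assuming the procedure is named PROC_A:

SELECT TRUNC(timestamp) AS call_day, COUNT(*) AS calls
FROM   dba_audit_trail
WHERE  obj_name    = 'PROC_A'
AND    action_name = 'EXECUTE PROCEDURE'
GROUP  BY TRUNC(timestamp);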
If for some reason you don't want to use the audit trail - say you want to capture more information - then you will need to use your own logging mechanism. This is a good use for the AUTONOMOUS TRANSACTION pragma. Just be careful that writing the log records doesn't have an undue impact on the performance of your application.
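A minimal sketch of such a logging mechanism; proc_call_log and log_call are hypothetical names:

CREATE OR REPLACE PROCEDURE log_call (p_proc_name IN VARCHAR2) AS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO proc_call_log (proc_name, called_at)
   VALUES (p_proc_name, SYSDATE);
   COMMIT;  -- commits only this autonomous transaction, not the caller's work
END log_call;

Procedure A would call log_call('PROC_A') as its first statement; the log row survives even if A's own transaction later rolls back.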
edit
The role of procedure B in your question is entirely superfluous: either the database records how often procedure A runs or else A writes its own trace records. Unless B is a packaged query on the log (however implemented)?
