One query: I need to use multiple transaction blocks inside a single JDBC batch. How can I do that? Any example? - jdbc

I have an insert statement that should happen only if its update statement executed fine; otherwise I want to proceed to the next set of insert and update queries inside the batch and do the same there.
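One way to get that behaviour with plain JDBC, shown as a minimal sketch rather than a definitive answer (the accounts/audit_log tables and their columns are made up for illustration): give each update/insert pair its own transaction, run the insert only when the update reports at least one affected row, and roll back just that pair otherwise. A single JDBC batch cannot branch like this, so the sketch trades the batch for per-pair commits.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ConditionalUpdateInsert {

        // Hypothetical statements; substitute your real SQL.
        private static final String UPDATE_SQL =
                "UPDATE accounts SET balance = balance - ? WHERE id = ?";
        private static final String INSERT_SQL =
                "INSERT INTO audit_log (account_id, amount) VALUES (?, ?)";

        public static void process(Connection con, long[] ids, long[] amounts) throws SQLException {
            con.setAutoCommit(false);                  // manage one transaction per pair
            try (PreparedStatement upd = con.prepareStatement(UPDATE_SQL);
                 PreparedStatement ins = con.prepareStatement(INSERT_SQL)) {

                for (int i = 0; i < ids.length; i++) {
                    upd.setLong(1, amounts[i]);
                    upd.setLong(2, ids[i]);

                    if (upd.executeUpdate() > 0) {     // update executed fine and touched a row
                        ins.setLong(1, ids[i]);
                        ins.setLong(2, amounts[i]);
                        ins.executeUpdate();           // insert only after a successful update
                        con.commit();                  // commit this pair
                    } else {
                        con.rollback();                // skip the insert, move on to the next pair
                    }
                }
            } catch (SQLException e) {
                con.rollback();                        // undo the pair that failed
                throw e;
            }
        }
    }

Each pair commits or rolls back on its own, which is effectively the "multiple transaction blocks" you describe; the cost is one round trip per statement instead of one per batch.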

Related

Oracle 19: Why does IN get converted to EXISTS in the explain plan, and any suggestions around it

Please see the explain plan image below:
As per that, the IN query got converted to EXISTS in the explain plan. Any reason for that? Does it mean Oracle automatically converts IN to EXISTS?
Also, any suggestions to reduce the cost? This statement is part of a stored procedure that receives a ~-separated string, for example (63278~63282~63285~63288~63291~63296~63299~63302~63305~63308~63311~63314~63319~63322~63325~63329~63332~63253~63256~63260~63264~63267~63272~63275~63279~63283~63286~63289~63292~63297~63300~63303~63306~63309~63312~63315~63320~63323~63326~63330~63333~63269~63258~63277~63294~63317~63262~63270~63281~63295~63318~63328~63254~63257~63261~63265~63268~63273~63276~63280~63284~63287~63290~63293~63298~63301~63304~63307~63310~63313~63316~63321~63324~63327~63331~63334) in the query. It takes around 10 to 15 minutes to execute.
How can we generate an explain plan for the entire stored procedure? We are using Oracle 19.
Thank you in advance.
The IN clause retrieves all records that match the given set of values; it acts like multiple OR conditions, and every row fetched from the inner query is evaluated.
EXISTS, on the other hand, is a Boolean operator used with a subquery: it returns TRUE if the subquery returns any row, and FALSE otherwise. When the data set inside the IN clause is large, IN is not recommended; EXISTS usually gives better performance, which is why Oracle (and PostgreSQL) will often rewrite your IN into an EXISTS.
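As a purely illustrative example of that rewrite (the table and column names are invented here, not from the question), the two statements below are logically equivalent, and the optimizer may transform one form into the other:

    public class InVsExistsForms {

        // IN form: the subquery produces a set of values to match against.
        static final String IN_FORM =
                "SELECT * FROM orders o " +
                "WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.region = 'EU')";

        // EXISTS form: the correlated subquery only has to find one matching row.
        static final String EXISTS_FORM =
                "SELECT * FROM orders o " +
                "WHERE EXISTS (SELECT 1 FROM customers c " +
                "              WHERE c.id = o.customer_id AND c.region = 'EU')";
    }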
Since you are doing the job in a PL/SQL procedure, you could create (outside the procedure) a GLOBAL TEMPORARY TABLE with ON COMMIT DELETE ROWS. In the procedure you INSERT into this table the result of the sub-select with the CONNECT BY, then you replace the SELECT ... CONNECT BY with a SELECT from the temporary table. The temporary table is emptied when the transaction commits and this method is session safe. You also get the benefit of an index and probably a better plan. You could additionally compare the UPDATE against two separate ones, splitting the OR condition into two statements.
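A hedged sketch of that idea, driven from JDBC for illustration since that is the general context of this page (the tmp_ids and target_table names and the REGEXP_SUBSTR split are assumptions, not the poster's actual code); inside the real PL/SQL procedure the same two statements would simply replace the original CONNECT BY query:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TempTableApproach {

        // One-time DDL, created outside the procedure. ON COMMIT DELETE ROWS empties
        // the table automatically at commit, separately for each session.
        private static final String CREATE_GTT =
                "CREATE GLOBAL TEMPORARY TABLE tmp_ids (id NUMBER PRIMARY KEY) " +
                "ON COMMIT DELETE ROWS";

        // The CONNECT BY split of the ~-separated string now feeds the temp table once ...
        private static final String FILL_GTT =
                "INSERT INTO tmp_ids (id) " +
                "SELECT TO_NUMBER(REGEXP_SUBSTR(?, '[^~]+', 1, LEVEL)) FROM dual " +
                "CONNECT BY REGEXP_SUBSTR(?, '[^~]+', 1, LEVEL) IS NOT NULL";

        // ... and the main statement reads the indexed temp table instead of repeating the split.
        private static final String MAIN_QUERY =
                "SELECT t.* FROM target_table t WHERE t.id IN (SELECT id FROM tmp_ids)";

        public static void createOnce(Connection con) throws Exception {
            try (Statement st = con.createStatement()) {
                st.execute(CREATE_GTT);               // run once per schema, not per call
            }
        }

        public static void run(Connection con, String tildeSeparatedIds) throws Exception {
            con.setAutoCommit(false);
            try (PreparedStatement fill = con.prepareStatement(FILL_GTT)) {
                fill.setString(1, tildeSeparatedIds);
                fill.setString(2, tildeSeparatedIds);
                fill.executeUpdate();
            }
            try (PreparedStatement main = con.prepareStatement(MAIN_QUERY);
                 ResultSet rs = main.executeQuery()) {
                while (rs.next()) {
                    // consume the rows ...
                }
            }
            con.commit();                             // tmp_ids is cleared here (ON COMMIT DELETE ROWS)
        }
    }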

using spring transaction management with select queries [duplicate]

I don't use Stored procedures very often and was wondering if it made sense to wrap my select queries in a transaction.
My procedure has three simple select queries, two of which use the returned value of the first.
In a highly concurrent application it could (theoretically) happen that data you've read in the first select is modified before the other selects are executed.
If that is a situation that could occur in your application you should use a transaction to wrap your selects. Make sure you pick the correct isolation level though, not all transaction types guarantee consistent reads.
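For the Spring side of the question, here is a minimal sketch of wrapping the three selects in one read-only transaction (the DAO, the queries and the REPEATABLE_READ choice are illustrative assumptions; pick the isolation level your database actually supports):

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Repository;
    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Transactional;

    @Repository
    public class ReportDao {

        private final JdbcTemplate jdbcTemplate;

        public ReportDao(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // All three selects run inside one transaction, so the isolation level
        // decides whether they see a consistent snapshot of the data.
        @Transactional(readOnly = true, isolation = Isolation.REPEATABLE_READ)
        public Report loadReport(long customerId) {
            Long accountId = jdbcTemplate.queryForObject(
                    "SELECT account_id FROM customers WHERE id = ?", Long.class, customerId);

            Long balance = jdbcTemplate.queryForObject(
                    "SELECT balance FROM accounts WHERE id = ?", Long.class, accountId);

            Integer openOrders = jdbcTemplate.queryForObject(
                    "SELECT COUNT(*) FROM orders WHERE account_id = ?", Integer.class, accountId);

            return new Report(accountId, balance, openOrders);
        }

        public record Report(Long accountId, Long balance, Integer openOrders) {}
    }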
Update:
You may also find this article on concurrent update/insert solutions (aka upsert) interesting. It puts several common methods of upsert to the test to see what method actually guarantees data is not modified between a select and the next statement. The results are, well, shocking I'd say.
Transactions are usually used when you have CREATE, UPDATE or DELETE statements and you want atomic behavior, that is, either commit everything or commit nothing.
However, you could use a transaction for read-only SELECT statements to:
Make sure nobody else can update the tables of interest while your batch of select queries is executing.
Have a look at this msdn post.
Most databases run every single query in a transaction: even if you do not specify one, the query is implicitly wrapped in one. This includes select statements.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
https://www.postgresql.org/docs/current/tutorial-transactions.html

Fire trigger only when rows are updated

I have a command to execute inside an Oracle trigger after a modification of a table.
I need this command to run only once (even if 100 rows are updated), and only when rows are actually updated.
FOR EACH ROW makes sure the command is sent only when rows are updated, so how can I stop its execution after the first iteration?
It looks like you are going to need a compound trigger: in the FOR EACH ROW section you collect the rowids to update, and in the AFTER STATEMENT section you run the whole update.
Use a global package variable:
1) Reset it to null in a BEFORE UPDATE statement-level trigger.
2) Save the values / increment a counter in the FOR EACH ROW trigger.
3) Do the final check in an AFTER UPDATE statement-level trigger and fire your logic only once, regardless of the number of affected rows.
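To make the first suggestion concrete, here is a hedged sketch of a compound trigger, installed through JDBC since that is the general context of this page (the orders table and the my_pkg.fire_command procedure are placeholders): the per-row section only flips a flag, and the after-statement section fires the command at most once per UPDATE, and only if at least one row was updated.

    import java.sql.Connection;
    import java.sql.Statement;

    public class InstallCompoundTrigger {

        // Compound trigger: its state is re-initialised for every triggering statement,
        // so the command in AFTER STATEMENT runs once per UPDATE, and only if at
        // least one row was actually updated.
        private static final String TRIGGER_DDL =
                "CREATE OR REPLACE TRIGGER trg_orders_upd_once\n" +
                "FOR UPDATE ON orders\n" +
                "COMPOUND TRIGGER\n" +
                "  g_rows_updated BOOLEAN := FALSE;\n" +
                "\n" +
                "  AFTER EACH ROW IS\n" +
                "  BEGIN\n" +
                "    g_rows_updated := TRUE;\n" +
                "  END AFTER EACH ROW;\n" +
                "\n" +
                "  AFTER STATEMENT IS\n" +
                "  BEGIN\n" +
                "    IF g_rows_updated THEN\n" +
                "      my_pkg.fire_command;  -- the command, executed exactly once\n" +
                "    END IF;\n" +
                "  END AFTER STATEMENT;\n" +
                "END trg_orders_upd_once;";

        public static void install(Connection con) throws Exception {
            try (Statement st = con.createStatement()) {
                st.execute(TRIGGER_DDL);   // one-time deployment of the trigger
            }
        }
    }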

Batch update using Spring

I am trying to run a batch of update queries. However, each update query is different, although they all run on the same table and the WHERE clause is the same.
For example :
TABLE : Column A,B,C,D,ID
update A where ID=1
update B,C where ID=1
update D,B where ID=1 and so on ... ( all the combinations of A,B,C,D)
I have investigated Spring JDBC (JdbcTemplate and NamedParameterJdbcTemplate) and QueryDSL, but it is not possible to have such updates.
Is there any other method by which such updates can be done as a batch? I have to stick to Spring JDBC.
Do you want to use a prepared statement passing in the arguments for each update? If so, it's not possible to do this as a batch. You could batch multiple statements, but then you would have to create these statements without using placeholders for the arguments. In this scenario you would use the int[] JdbcTemplate.batchUpdate(String[] sql) method (http://docs.spring.io/spring/docs/4.0.3.RELEASE/javadoc-api/org/springframework/jdbc/core/JdbcTemplate.html#batchUpdate-java.lang.String:A-).
It's not possible to batch different prepared statements using the JDBC API. You can batch individual statements without arguments (http://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#addBatch(java.lang.String)) or batch multiple sets of arguments for a prepared statement (http://docs.oracle.com/javase/7/docs/api/java/sql/PreparedStatement.html#addBatch()), but the SQL statement would have to be the same for all sets of arguments.
You can still wrap multiple update calls in a transaction, but there would be multiple roundtrips to the database server.
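A minimal sketch of the first option, batching literal SQL strings (no placeholders) with JdbcTemplate.batchUpdate; the table and values mirror the pseudo-example above, and because the values are inlined this is only appropriate for trusted, non-user-supplied data:

    import org.springframework.jdbc.core.JdbcTemplate;

    public class MixedColumnBatch {

        private final JdbcTemplate jdbcTemplate;

        public MixedColumnBatch(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // Each statement updates a different combination of columns, so there is no
        // common placeholder shape; the strings are sent to the driver as one batch.
        public int[] applyUpdates() {
            String[] statements = {
                "UPDATE my_table SET a = 10 WHERE id = 1",
                "UPDATE my_table SET b = 20, c = 30 WHERE id = 1",
                "UPDATE my_table SET d = 40, b = 50 WHERE id = 1"
            };
            return jdbcTemplate.batchUpdate(statements);   // affected-row count per statement
        }
    }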
You can wrap your update with a stored proc, then you can batch round trips to the database.
Inside the stored proc you'll need to generate the update based on the arguments passed in. So you could test for null, or pass a separate flag for each column. If the flag is set, then generate SQL that updates that column.
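A sketch of what the JDBC side of that could look like, assuming a hypothetical procedure update_columns(p_id, p_a, p_b, p_c, p_d) that only touches the non-NULL columns; because every call uses the same SQL text, the parameter sets can be batched, although how far the driver actually coalesces the round trips is driver-specific:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.Types;

    public class ProcBatchCaller {

        // Hypothetical stored procedure that updates only the non-NULL columns.
        private static final String CALL = "{call update_columns(?, ?, ?, ?, ?)}";

        public static int[] callInBatch(Connection con, Object[][] rows) throws Exception {
            // Each row is {id, a, b, c, d}; null means "leave this column alone".
            try (CallableStatement cs = con.prepareCall(CALL)) {
                for (Object[] row : rows) {
                    cs.setLong(1, (Long) row[0]);
                    for (int i = 1; i <= 4; i++) {
                        if (row[i] == null) {
                            cs.setNull(i + 1, Types.NUMERIC);   // column stays untouched
                        } else {
                            cs.setLong(i + 1, (Long) row[i]);
                        }
                    }
                    cs.addBatch();                              // same SQL, different arguments
                }
                return cs.executeBatch();                       // send the whole batch
            }
        }
    }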

select query to wait for insertion of other record

In my application, multiple requests simultaneously read a record from one table and, based on that, insert a new record into the table.
I want to execute the requests serially so that the second request reads the latest value inserted by the first request.
I tried to achieve this using a SELECT FOR UPDATE query, but that only locks the row and makes others wait for an update; since I don't update the existing record, the second request got the same value as the previous request did.
Is this possible using Oracle's locking mechanism? How?
Dude - that's what transactions are for!
Strong suggestion:
Put your code into a PL/SQL stored procedure
Wrap the select/insert in a "begin tran/commit"
Don't even think about locks, if you can avoid it!
