Auto increment version by jdbc batch update - spring

I am trying to insert 1000 records using Spring JdbcTemplate batch update. To avoid performance issues I am inserting in batches of 50. I have one column where I need to maintain a version number: if I insert 1000 records in a single run, they should all have the same version. But the version gets incremented after every 50 records because each batch is committed to the database. Please let me know how to disable auto-commit with the Spring JdbcTemplate, so that the batches of 50 still execute every time but the commit to the database happens only once at the end, and the version stays at 1 for all 1000 records.
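One way to get this behaviour is to run all the batch executions inside a single Spring transaction, so nothing is committed until the transaction completes. Below is a minimal sketch, not taken from the question: the transaction manager wiring, the MY_TABLE name, the column list and the MyRecord type are all assumptions.

```java
import java.util.List;
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class VersionedBatchInsert {

    // Table and column names are illustrative only.
    private static final String SQL =
            "INSERT INTO MY_TABLE (ID, DATA, VERSION) VALUES (?, ?, ?)";

    public void insertAll(DataSource dataSource, List<MyRecord> records) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        TransactionTemplate tx =
                new TransactionTemplate(new DataSourceTransactionManager(dataSource));

        tx.execute(status -> {
            // Spring slices the full list into JDBC batches of 50 internally,
            // but nothing is committed until the surrounding transaction ends.
            jdbcTemplate.batchUpdate(SQL, records, 50, (ps, record) -> {
                ps.setLong(1, record.id());
                ps.setString(2, record.data());
                ps.setInt(3, 1); // one version value for the whole run
            });
            return null;
        });
        // the single commit has happened here
    }

    // Minimal record type for the sketch.
    public record MyRecord(long id, String data) {}
}
```

The same effect can be achieved by annotating the calling method with @Transactional when the insert runs inside a Spring-managed bean.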

Related

Set timeout value for ALTER TABLE ADD COLUMN in Oracle (DDL_LOCK_TIMEOUT does not work)

Question
How can I set a timeout value for a nonblocking DDL statement (ALTER TABLE ... ADD COLUMN) in Oracle, so that if any DML locks the table for a long time (several hours), my DDL can fail fast instead of waiting for hours? (We expect Oracle to raise an error like ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired to interrupt our DDL.)
P.S.: DDL_LOCK_TIMEOUT does not work here (see 'What I tried' below).
Background
I'm working on a large Oracle database (Oracle Database 19c). A legacy application runs an aggregation job every hour to calculate the data of the past hour, such as AVG and SUM of the counters. The production server has 40 CPUs and 200 GB+ of memory; normally the aggregation job runs for around 30 minutes, but in some cases, for example when jobs are delayed by a maintenance break, more data needs to be handled in the next aggregation job and the job runs for a few hours.
Those legacy applications are out of my control. It's not possible to change the aggregation job.
Edition-Based Redefinition is not used.
My task is to update the database tables (because new counters were added). We use ALTER TABLE to add new columns to the existing tables. But in some cases the aggregation job locks a table for hours, which makes my script hang for hours as well. That makes customers unhappy, so I want to make my script fail fast.
What I tried
After a long time googling, DDL_LOCK_TIMEOUT seemed to be the simplest solution.
However, based on our tests, we noticed that DDL_LOCK_TIMEOUT does not work in our case. After googling again for a long time, we found that the Oracle documentation clearly mentions:
The DDL_LOCK_TIMEOUT parameter affects blocking DDL statements (but not nonblocking DDL statements)
ALTER TABLE ... ADD COLUMN is exactly a 'nonblocking DDL' statement, as listed in the List of Nonblocking DDLs.
Expectation
When a DML statement locks the table for an hour, for example SELECT * FROM MY_TABLE FOR UPDATE that commits after one hour, I want my DDL, e.g. ALTER TABLE MY_TABLE ADD (COL_A number), to time out after 10 minutes instead of waiting for the full hour.
Other Solutions
1
One solution I have in mind is to first issue LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600 to acquire the lock before running the DDL (a JDBC sketch of this idea is shown after these options). But before going with this solution, I want to know whether there is a simpler option, like DDL_LOCK_TIMEOUT, where only a single parameter needs to be set.
2
Based on the Oracle documentation, enabling Supplemental Logging downgrades nonblocking DDL to the blocking variant. But Supplemental Logging is a database-level configuration, and I do not have the permission to make such a change.
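For reference, the LOCK TABLE approach from option 1 can be scripted so that the lock acquisition itself carries the timeout: if the lock cannot be obtained within the wait period, Oracle raises ORA-30006 (resource busy; acquire with WAIT timeout expired) and the ALTER TABLE is never attempted. A rough JDBC sketch of that idea, with the 600-second wait and the table/column names as assumptions:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class FastFailAddColumn {

    /**
     * Waits up to 600 seconds for an exclusive lock on MY_TABLE, then adds the
     * column. If the lock cannot be acquired in time, Oracle raises ORA-30006
     * and the method fails fast instead of hanging behind long-running DML.
     */
    public void addColumn(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            // Blocks for at most 600 seconds while concurrent DML finishes.
            stmt.execute("LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600");
            // The DDL performs an implicit commit and then takes its own lock.
            stmt.execute("ALTER TABLE MY_TABLE ADD (COL_A NUMBER)");
        }
    }
}
```

Note that the implicit commit performed by the DDL releases the explicit lock just before the ALTER TABLE acquires its own, so there is a tiny race window; the explicit lock mainly guarantees that the long-running DML has finished before the DDL is attempted.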

How to handle binary(Blob) data in Spring Batch application

My requirement is to read data from a database, aggregate it, convert it to bytes, and then stream it to another database (Oracle) into a Blob column.
Oracle requires disabling JDBC auto-commit to stream to a Blob column and calling Connection#commit when finished.
I currently have 3 steps.
Step 1 (Tasklet):
It has two SQL queries: one to initialize the column (UPDATE DATABASEUSER.TABLENAME SET payload = empty_blob() WHERE PrimaryKey = ?),
and a second one that returns the Blob locator (SELECT payload AS payload FROM DATABASEUSER.TABLENAME WHERE PrimaryKey = ? FOR UPDATE).
I also get the connection object from the datasource to disable auto-commit.
Step 2 (Chunk)
I have an ItemReader that reads data from the source DB in a generic way, and a Processor that converts the rows to CSV format as bytes. Then I have a custom ItemWriter to stream the data to the Blob column.
Step 3 (Tasklet)
This is where I clean up and commit the connection.
Question 1: Is this the correct strategy? I'd appreciate any direction as I'm not sure.
I solved it.
I used ResourcelessTransactionManager as the transaction manager in all my steps. In step 1 I get a connection from the datasource and disable auto-commit, and in the final step I call commit. I use the same connection in all steps.
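A rough sketch of what that manual connection handling might look like; the ConnectionHolder helper, the key value and the class names are assumptions for illustration, not code from the original answer:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/** Hypothetical helper bean that carries the JDBC connection between steps. */
class ConnectionHolder {
    private Connection connection;
    void set(Connection c) { this.connection = c; }
    Connection get() { return connection; }
}

/** Step 1: prepare the BLOB row and switch the shared connection to manual commit. */
class PrepareBlobTasklet implements Tasklet {

    private final DataSource targetDataSource;
    private final ConnectionHolder holder;

    PrepareBlobTasklet(DataSource targetDataSource, ConnectionHolder holder) {
        this.targetDataSource = targetDataSource;
        this.holder = holder;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        Connection conn = targetDataSource.getConnection();
        conn.setAutoCommit(false);          // required by Oracle when streaming to a BLOB
        holder.set(conn);                   // the chunk step's writer reuses this connection
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE DATABASEUSER.TABLENAME SET payload = empty_blob() WHERE PrimaryKey = ?")) {
            ps.setLong(1, 42L);             // illustrative key value
            ps.executeUpdate();
        }
        return RepeatStatus.FINISHED;
    }
}

/** Step 3: commit the streamed BLOB and release the connection. */
class CommitBlobTasklet implements Tasklet {

    private final ConnectionHolder holder;

    CommitBlobTasklet(ConnectionHolder holder) {
        this.holder = holder;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        Connection conn = holder.get();
        conn.commit();                      // the single commit at the very end of the job
        conn.close();
        return RepeatStatus.FINISHED;
    }
}
```

With ResourcelessTransactionManager on the steps, Spring Batch does not manage this connection itself, so the tasklets above own its whole lifecycle.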

IBM DB2 batch update behavior on duplicate key

I am currently writing a Java application and using batch insertion in auto-commit mode. My question: if I insert 4 rows in a batch and a BatchUpdateException is thrown because the second row of the batch triggered a duplicate-key violation, does the JDBC driver continue to process the 2 remaining rows, leaving the database with 3 inserted rows? Does it stop at row 2, leaving the database with 1 inserted row? Or does it roll back the whole batch, leaving the database with 0 inserted rows?
It works like this:
You have the chunk size configured in the step. Say, for example, the chunk size is 10.
So every time, a batch of 10 items will be committed.
Say that in a batch of 10 items, the 4th item throws a duplicate-key exception, as in your case.
In that case, the whole batch will be rejected and the job will stop (if no skip policy is implemented).
However, all the previous correct chunks which were already committed will not be rolled back.
To add further: if the incorrect data is removed and the same job is restarted, the job will resume exactly from the chunk where it last failed.
So nothing happens to the data already written.
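If you are talking to the driver directly rather than through Spring Batch, the standard JDBC way to find out how far a failed batch got is to inspect BatchUpdateException.getUpdateCounts(): a driver that stops at the failing row returns a shorter array, while a driver that keeps going marks the failed row with Statement.EXECUTE_FAILED. A small illustrative sketch (table and values are made up):

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchBehaviorCheck {

    public void insertBatch(Connection conn) throws SQLException {
        // Auto-commit on, as in the question: completed statements are committed immediately.
        conn.setAutoCommit(true);
        try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO MY_TABLE (ID) VALUES (?)")) {
            for (int id : new int[] {1, 2, 2, 3}) { // the second "2" violates the primary key
                ps.setInt(1, id);
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (BatchUpdateException e) {
            int[] counts = e.getUpdateCounts();
            // counts.length < batch size   -> the driver stopped at the failing row
            // counts[i] == EXECUTE_FAILED  -> the driver continued past the failing row
            System.out.println("rows reported: " + counts.length);
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == Statement.EXECUTE_FAILED) {
                    System.out.println("row " + (i + 1) + " failed");
                }
            }
        }
    }
}
```

Whether the DB2 driver stops or continues is driver-specific, so checking the array on your own setup is the most reliable answer.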

JDBC batch creation in Sybase

I have a requirement of updating a table which has about 5 million rows.
So for that purpose I want to create batch statements in Java and update as a bulk operation.
Right now I have 100 batches and it works fine. But when I increase the number of batches over one hundred, I get an exception: com.sybase.jdbc2.jdbc.SybBatchUpdateException: JZ0BE: BatchUpdateException: Error occurred while executing batch statement: Message empty.
How can I have more batch statements in my CallableStatement object?
Not enough reputation to leave comments... but what types of statements are you batching? How many of these rows are you updating? Does the table have a primary key? How many columns are in the table, and how many of those columns are you updating?
Generic answer:
The JDBC framework in Sybase is extremely fast. You might at least consider writing a simple procedure that receives the primary key (or other) information you're using to identify the row, along with the new values that row will be updated to, as input variables. This procedure will update a single row only.
Wrap this procedure in its own Java method that handles the CallableStatement, registers your OUT error-number and error-message params, etc.
Then you can loop through whatever constructs you're using now to update data, and use the same Java method to call the procedure to update the values row by row.
Again, I don't know the volume of what you're trying to do, but I do know that if you're doing single-row updates, this will be VERY fast.
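To illustrate that suggestion, here is roughly what the per-row procedure call could look like; the procedure name, its parameter list and the OUT params are hypothetical:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class RowUpdater {

    /** Calls a hypothetical single-row update procedure and returns its error number (0 = OK). */
    public int updateRow(Connection conn, long primaryKey, String newValue) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call update_my_row(?, ?, ?, ?)}")) {
            cs.setLong(1, primaryKey);                  // identifies the row to update
            cs.setString(2, newValue);                  // new column value
            cs.registerOutParameter(3, Types.INTEGER);  // OUT: error number
            cs.registerOutParameter(4, Types.VARCHAR);  // OUT: error message
            cs.execute();

            int errorNumber = cs.getInt(3);
            if (errorNumber != 0) {
                System.err.println("update failed: " + cs.getString(4));
            }
            return errorNumber;
        }
    }
}
```

The calling code then loops over whatever row set you are processing and invokes updateRow once per row.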

How to set fetch size in Hibernate against an Oracle database?

When programming directly in JDBC against an Oracle database you can call stmt.setFetchSize(fetchSize) on the statement to raise the maximum number of records fetched in one round trip above the small default value. I'd like to do this from within Hibernate 3.2.
Can someone tell me where this would be set?
Use Query.setFetchSize().
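A short sketch of how that looks in Hibernate 3.x; the entity name and the fetch size of 200 are arbitrary:

```java
import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;

public class FetchSizeExample {

    @SuppressWarnings("unchecked")
    public List<Object> loadAll(Session session) {
        Query query = session.createQuery("from MyEntity");
        query.setFetchSize(200); // rows fetched per round trip, for this query only
        return query.list();
    }
}
```

Alternatively, the hibernate.jdbc.fetch_size configuration property sets a default fetch size for all statements Hibernate issues.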
