I am using Spring Batch to update employee status based on input received from a third-party API. How can I update the status of an employee in the EMPLOYEE table if a step fails with an exception, and record the overall job status as FAILED in my own table instead of the Spring Batch tables?
You can proceed in two steps:
step 1 (tasklet): make the REST call and save the result in a file (remove the file after the job if necessary)
step 2 (chunk-oriented): read employee items and update their statuses in the database
For the writer, you can use a JdbcBatchItemWriter configured with a SQL statement like: update employee set status = ? where id = ?.
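As a sketch, assuming the items are mapped to a hypothetical Employee bean with id and status properties, the writer could be declared along these lines using the Spring Batch builder API:

```java
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
import org.springframework.context.annotation.Bean;

// Employee is a hypothetical item class with getId()/getStatus() accessors.
@Bean
public JdbcBatchItemWriter<Employee> employeeStatusWriter(DataSource dataSource) {
    return new JdbcBatchItemWriterBuilder<Employee>()
            .dataSource(dataSource)
            // named parameters are resolved against the Employee bean properties
            .sql("update employee set status = :status where id = :id")
            .beanMapped()
            .build();
}
```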
As for the step failure question: if an exception occurs while a chunk is being processed, the transaction is rolled back and no updates from that chunk are committed to the database. There are more details about this in the reference documentation.
Hope this helps.
I have the following setup:
- a Spring Boot application makes a POST request that inserts an object, using Hibernate, into a PostgreSQL table, TableA;
- TableA has a trigger that, under certain conditions, calls a function that inserts into TableB;
- TableB has 2 triggers: the first uses LISTEN/NOTIFY to send notifications over a websocket after an insert on TableB, and the second uses a function to insert into TableC
The problem: if the first insert the client makes into TableA succeeds, the client receives a 200 code, but down the road there are possibly another 2 inserts.
If a problem arises during either of those inserts, the client still sees a 200 code but has missed important data.
My logic says this has to be wrapped into an all-or-nothing transaction, but how do I do that?
Kind Regards,
EDIT1: I just made a test: I annotated the POST request method with @Transactional, and now if any of the subsequent inserts encounters an error and is not executed, the client is returned a 500 error code. Sorry for wasting your time.
Annotate the first request method with @Transactional and handle all possible errors, for example by using try and catch blocks.
You should also define which errors your code is going to return. Not all of them will be caused by the server (for which you should return a 5xx error); some are caused by the client (e.g. sending wrong or unexpected parameters), and for those you should probably return a 4xx code.
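A minimal sketch of the pattern described above (the service, repository, and entity names are hypothetical):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TableAService {

    private final TableARepository repository; // hypothetical Spring Data repository

    public TableAService(TableARepository repository) {
        this.repository = repository;
    }

    // Everything done here — including the trigger-driven inserts into TableB
    // and TableC, which PostgreSQL runs in the same transaction as the
    // triggering statement — commits or rolls back as a unit. An unhandled
    // runtime exception rolls the transaction back, and Spring then produces
    // a 5xx response instead of a misleading 200.
    @Transactional
    public TableA create(TableA entity) {
        return repository.save(entity);
    }
}
```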
We are working on a Spring Batch application which processes 4-5 million records. We have multiple steps configured in the job. Two of those steps fetch large data sets, and we store some information from that data in the JobExecution context in the processor's afterStep callback:
@AfterStep
public void afterStep(StepExecution stepExecution) {
    stepExecution.getJobExecution().getExecutionContext().put("FETCH_2_CURSOR", rptObj);
}
This is so the data can be referenced in the last step to perform some calculations.
The job runs successfully when we use PostgreSQL as the job repository; however, it fails on the second fetch step when we use DB2 LUW as the job repository.
I have read in some forums that large objects should not be written to the ExecutionContext.
Can anyone suggest whether it is a good idea to store large data in the job execution context? Or should we serialize that object to the file system and then read it in the final step?
Error:
Caused by: org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [UPDATE BATCH_JOB_EXECUTION_CONTEXT SET SHORT_CONTEXT = ?, SERIALIZED_CONTEXT = ? WHERE JOB_EXECUTION_ID = ?]; The value of a host variable in the EXECUTE or OPEN statement is out of range for its corresponding use.. SQLCODE=-302, SQLSTATE=22001, DRIVER=4.19.26; nested exception is com.ibm.db2.jcc.am.SqlDataException: The value of a host variable in the EXECUTE or OPEN statement is out of range for its corresponding use.. SQLCODE=-302, SQLSTATE=22001, DRIVER=4.19.26
It's not recommended to add a lot of data to the ExecutionContext. Even if you do, you will get an exception at runtime:
the SERIALIZED_CONTEXT column has a limit of about 65 KB.
This has been answered on the Spring Batch forum:
http://forum.spring.io/forum/spring-projects/batch/126318-data-truncation-data-too-long-for-column-serialized-context-at-row-1
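One way around the column limit, sketched under the assumption that rptObj implements Serializable: write the large object to a temporary file in the afterStep callback, store only the file path (a short string) in the ExecutionContext, and deserialize it in the final step.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.file.Files;
import java.nio.file.Path;

public class ContextOffload {

    // Serialize the large object to a temp file and return its path; only the
    // short path string then needs to go into the ExecutionContext.
    public static Path save(Serializable largeObj) throws IOException {
        Path tmp = Files.createTempFile("fetch2-cursor", ".ser");
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(tmp))) {
            out.writeObject(largeObj);
        }
        return tmp;
    }

    // Read the object back in the last step using the path stored in the context.
    @SuppressWarnings("unchecked")
    public static <T> T load(Path file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
            return (T) in.readObject();
        }
    }
}
```

In afterStep this would become something like executionContext.putString("FETCH_2_CURSOR_FILE", ContextOffload.save(rptObj).toString()), with the final step calling load(Path.of(...)); remember to delete the file in a job listener afterwards.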
I'm new to Informatica and need help with my requirement:
1) I have a table CS_pipe in Oracle with columns named 'ReportName' and 'Status'.
2) When a report fails, the Status column will have the value 'Failed'.
3) I need to create a package that pulls values from this table and mails a group when the status is 'Failed', informing them of the report failure.
Is this possible via Informatica? If yes, how can it be done?
You can create a workflow and schedule it to run at fixed intervals, e.g. every 15 minutes, that reads all the rows where Status = 'Failed' and sends an email with the required content.
Then you'd need a way to prevent emails from being sent for the same failed report over and over again. Depending on your requirements and the available columns, you could create a datetime variable to fetch only the latest status records. But that's a separate story.
At work we created two .NET listeners.
The first one:
calls an Oracle stored procedure that inserts a bulk of data into a table (table1) using insert into ... select syntax:
insert into table1 select c1, c2, ... from tbl2 inner join tbl3 ...
followed by an explicit commit;
The second listener:
calls an Oracle procedure that reads the data inserted into table1 by listener 1.
But we noticed that even after the record is inserted into table1, listener 2 could not see that record at the same time, even though commit is used.
My question is: how does commit work when we use insert ... select?
Is this issue related to the session? When listener 1's session ends, can listener 2 read the data?
Please help,
thanks in advance.
You're using the wrong terms...
A listener is a server application that listens for incoming client requests and hands them to the DB engine. A listener is not used on the client end.
A session is not what determines the data you can see; the transaction is the object that controls that.
Oracle works in a very clear way: after a transaction has committed, all new transactions can see its changes, and already-running transactions can see the new content depending on their transaction configuration.
I recommend reading about isolation levels in that context: http://msdn.microsoft.com/en-us/library/system.transactions.isolationlevel(v=vs.110).aspx
By default, the moment (which in the DB is defined by the SCN) a transaction has been committed, the data is visible to clients.
Bottom line: your issue is related either to transaction isolation levels (in case the reading transaction started before the commit), or to the writer, which does not commit the data when you think it does (a transaction issue).
After the call to transaction.Commit() in .NET returns, the data is already visible and other transactions can see it.
Your second question was how commit works.
This is a very complicated process in Oracle, so I'll give a really short description:
1. When you commit, Oracle first runs some verifications before the commit itself (for example, it checks deferred constraints).
2. Once Oracle knows it can safely commit the changes, it gets the system change number (SCN), writes the commit record to the redo log, and flushes the redo to disk (for durability).
3. It sends an ACK to the user that the data is now visible to the world.
4. It marks the buffers that were used as free.
One thing I want to add, just to be sure (I'm writing this half asleep, so excuse me if it does not compile...):
your .NET code should be logically equivalent to this:
OracleConnection con = new OracleConnection(connStr);
con.Open();
OracleTransaction trans = con.BeginTransaction();
OracleCommand cmd = con.CreateCommand();
cmd.Transaction = trans; // associate the command with the explicit transaction
cmd.CommandText = "insert into ...";
cmd.ExecuteNonQuery();
cmd.Dispose();
trans.Commit();
trans.Dispose();
con.Close();
con.Dispose();
And if you're using LINQ, make sure you create the transaction scope over the right area.
I am using Oracle as the backend and JSP/servlets as the frontend. I am executing an update query and want to identify whether the query has actually updated anything in the database. I am using executeUpdate(); it executes, but it returns 0 when the update query fails to execute and 1 when execution is done, and it does not tell me whether data was updated or not.
executeUpdate returns the number of rows modified by your UPDATE statement. So if you are getting 0, it does not mean the statement failed to execute, but that no rows were modified by it. And if you are getting 1, you've updated one row.
Usually, if the UPDATE statement itself fails, you will get a SQLException thrown by the JDBC driver.
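As a sketch (the connection, table, and column names are hypothetical), the distinction looks like this in JDBC:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UpdateCheck {

    // Returns true only if at least one row was actually changed.
    public static boolean updateStatus(Connection con, long id, String status) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "update employee set status = ? where id = ?")) {
            ps.setString(1, status);
            ps.setLong(2, id);
            int rows = ps.executeUpdate(); // a row count, not a success flag
            return rows > 0;               // 0 means no row matched the WHERE clause
        }
        // A failing statement (bad SQL, constraint violation, ...) does not
        // return 0 — it throws SQLException instead.
    }
}
```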