executeBatch does not continue executing the rest of the commands if there is a failure in one of them. Is there any way, or any alternative to executeBatch, whereby even if one command fails to execute, the rest of the commands still execute successfully? I am not using executeUpdate, since it takes a lot of time executing the queries one by one.
The documentation for BatchUpdateException says this:
An exception thrown when an error occurs during a batch update operation. In addition to
the information provided by SQLException, a BatchUpdateException provides the update
counts for all commands that were executed successfully during the batch update, that is, all
commands that were executed before the error occurred. The order of elements in an array of
update counts corresponds to the order in which commands were added to the batch.
After a command in a batch update fails to execute properly and a BatchUpdateException is
thrown, the driver may or may not continue to process the remaining commands in the
batch. If the driver continues processing after a failure, the array returned by the method
BatchUpdateException.getUpdateCounts will have an element for every command in the
batch rather than only elements for the commands that executed successfully before the error.
In the case where the driver continues processing commands, the array element for any command
that failed is Statement.EXECUTE_FAILED.
So, as I understand it, this depends on the JDBC driver you work with.
Probably a better solution would be to find the reason for the failures and fix it?
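If the database behind the driver happens to be Oracle (an assumption; the question does not say), one alternative worth naming is to move the batch into PL/SQL and use FORALL ... SAVE EXCEPTIONS, which keeps executing after row-level failures and lets you inspect every failed row afterwards. A minimal sketch; my_table, the status column and the id values are placeholders:

    DECLARE
      TYPE t_ids IS TABLE OF NUMBER;
      l_ids       t_ids := t_ids(1, 2, 3);  -- placeholder keys
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
    BEGIN
      -- SAVE EXCEPTIONS makes the FORALL carry on past failing rows
      FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
        UPDATE my_table SET status = 'DONE' WHERE id = l_ids(i);
    EXCEPTION
      WHEN bulk_errors THEN
        -- one entry per failed row, in batch order
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.PUT_LINE(
            'row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ' failed: ' ||
            SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
    /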
I am trying to modify the accepted solution to this question: https://stackoverflow.com/questions/62362298/run-procedures-in-parallel-oracle-pl-sql such that:
When a program (or stored procedure) that is part of a chain step finishes, it immediately restarts for the next invocation.
I am basically trying to create a way to run jobs in parallel continuously. The accepted solution works for a single execution of the parallel jobs, but I am unsure how to keep these jobs running indefinitely.
So far I have read the Scheduler documentation, and it seems that maybe a rule with evaluation_interval could be used? Not sure.
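A minimal sketch of one possible approach, sidestepping chain rules entirely: give each procedure its own scheduler job with a very frequent repeat_interval. The scheduler does not start a new run of a job while the previous run of that same job is still executing, so each job effectively restarts right after it finishes. The job and procedure names below are placeholders:

    BEGIN
      FOR i IN 1 .. 3 LOOP
        DBMS_SCHEDULER.CREATE_JOB(
          job_name        => 'LOOP_JOB_' || i,        -- placeholder names
          job_type        => 'STORED_PROCEDURE',
          job_action      => 'MY_PKG.MY_PROC_' || i,  -- placeholder procedures
          repeat_interval => 'FREQ=SECONDLY;INTERVAL=1',
          enabled         => TRUE);
      END LOOP;
    END;
    /

The jobs run in parallel with each other, and each one keeps restarting until you stop it with DBMS_SCHEDULER.DISABLE or DBMS_SCHEDULER.DROP_JOB.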
I have multiple batches that use different third-party APIs to get and store/update data. The connections are made via Laravel's HTTP client. All batches have about 6k jobs. Because all jobs are important, I need to log the failed ones and notify the user.
Sometimes the response returns an error for all jobs; sometimes it is just a connection error, or an error because the server can't process those requests.
The batch automatically cancels on the first failure. But is there a way to cancel the batch only after multiple failures (on the nth failure), not just the first?
First turn off normal batch error handling, then implement your own:
Initialize a counter with zero.
Whenever an error occurs, increase that counter.
Whenever that counter reaches/exceeds 5, fail the batch.
The concrete implementation depends on the batch system you are working with.
I want to write to a single log file (which gets created on a daily basis) from multiple stored procedures running in different sessions.
This is what I have done.
    create or replace package body PKG_LOG as

      -- LV_FILE_LOC and O_LOGFILE are declared elsewhere in the package
      LF_LOG UTL_FILE.FILE_TYPE;  -- shared file handle

      procedure SP_LOGFILE_OPEN is
      begin
        -- step 1) open the logfile in append mode
        LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
      end SP_LOGFILE_OPEN;

      procedure SP_LOGFILE_WRITE is
      begin
        -- step 1) write the logs as per application need
        UTL_FILE.PUT_LINE(LF_LOG, 'whatever i want to write');
        -- step 2) flush, as I want the logs to be written in real time
        UTL_FILE.FFLUSH(LF_LOG);
      end SP_LOGFILE_WRITE;

    end PKG_LOG;
Now, whenever I want to write the log from any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_WRITE (as many times as I want).
The problem is with two stored procedures, say SP1 and SP2. If both of them try to open the same file concurrently, it never throws an error or waits for the other to finish. Instead, the file gets opened in both sessions, where SP1 and SP2 are executing.
The content of SP1 (if it started running first) is completely written into the logfile, but the content from SP2 is only partially written. SP2 starts writing only when SP1's execution stops. Also, the initial content that SP2 was trying to write into the logfile gets lost due to FFLUSH.
As per my requirement, I don't want to lose the content of SP2 while SP1 is running.
Any suggestions, please? I don't want to drop the idea of FFLUSH, as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock or wait until a lock is available, then do your write, then release the lock. It has to be serialized.
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
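A minimal sketch of that approach, as a serialized drop-in for the write procedure above. The lock name 'LOGFILE_LOCK', the wrapper name and the 10-second timeout are placeholders of mine; LV_FILE_LOC and O_LOGFILE are from the question:

    procedure SP_LOGFILE_WRITE_SAFE(p_text in varchar2) is
      l_handle varchar2(128);
      l_status integer;
      l_file   UTL_FILE.FILE_TYPE;
    begin
      -- map the lock name to a handle (note: ALLOCATE_UNIQUE commits)
      DBMS_LOCK.ALLOCATE_UNIQUE('LOGFILE_LOCK', l_handle);
      l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                    lockmode          => DBMS_LOCK.X_MODE,
                                    timeout           => 10,  -- seconds
                                    release_on_commit => FALSE);
      if l_status not in (0, 4) then  -- 0 = granted, 4 = already held
        raise_application_error(-20001, 'log lock request failed: ' || l_status);
      end if;
      -- open, write, flush and close while holding the lock, so no two
      -- sessions ever have the file open at the same time
      l_file := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
      UTL_FILE.PUT_LINE(l_file, p_text);
      UTL_FILE.FFLUSH(l_file);
      UTL_FILE.FCLOSE(l_file);
      l_status := DBMS_LOCK.RELEASE(l_handle);
    end SP_LOGFILE_WRITE_SAFE;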
Yours is a bad idea. Instead of writing directly to a file, simply enqueue a log message to an Oracle advanced queue and create a job running very frequently (every few seconds) to dequeue from the AQ. It's the procedure invoked by the job that actually writes to the file. This way you can synchronize different SP executions trying to log concurrently on the same file. The actual logging is made by one single SP invoked by the job.
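A sketch of that design, assuming a simple object type is enough as the payload; every name here (t_log_msg, log_qt, log_q, sp_log, sp_log_writer) is a placeholder:

    -- one-time setup: payload type, queue table, queue
    create or replace type t_log_msg as object (text varchar2(4000));
    /
    begin
      DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table        => 'log_qt',
                                    queue_payload_type => 'T_LOG_MSG');
      DBMS_AQADM.CREATE_QUEUE(queue_name => 'log_q', queue_table => 'log_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'log_q');
    end;
    /

    -- what every SP calls instead of UTL_FILE: fast and non-blocking
    create or replace procedure sp_log(p_text in varchar2) is
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
    begin
      DBMS_AQ.ENQUEUE(queue_name         => 'log_q',
                      enqueue_options    => l_opts,
                      message_properties => l_props,
                      payload            => t_log_msg(p_text),
                      msgid              => l_msgid);
      commit;
    end;
    /

    -- the single writer, run by a frequent scheduler job
    create or replace procedure sp_log_writer is
      l_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
      l_msg   t_log_msg;
      l_file  UTL_FILE.FILE_TYPE;
      e_empty EXCEPTION;
      PRAGMA EXCEPTION_INIT(e_empty, -25228);  -- nothing left to dequeue
    begin
      l_opts.wait := DBMS_AQ.NO_WAIT;
      l_file := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
      loop
        begin
          DBMS_AQ.DEQUEUE('log_q', l_opts, l_props, l_msg, l_msgid);
          UTL_FILE.PUT_LINE(l_file, l_msg.text);
        exception
          when e_empty then exit;
        end;
      end loop;
      UTL_FILE.FCLOSE(l_file);
      commit;
    end;
    /

Since sp_log_writer is the only session that ever opens the file, the interleaving and the FFLUSH losses described in the question disappear, at the cost of a few seconds of latency between job runs.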
I have been facing this problem lately:
Normally, when an ILE COBOL program running in a batch job on IBM i (AS/400) triggers an exception, the batch job stops and goes from RUN to MSGW. But when it is an SQLCBLLE program and there is a problem executing an SQL statement, it simply rolls back and continues execution without putting the job into MSGW.
Is there a way to know whether an SQLCBLLE in a batch job has not executed correctly, and is there a possibility to trigger MSGW for the batch job and let the default error handler deal with it?
Every SQL statement should be followed by a test that checks SQLSTATE (or possibly SQLCODE) to see if the SQL succeeded. Depending on the SQLSTATE (or perhaps SQLCODE) value, the program needs to decide what action to take.
The action can be to send a *INQ message to put the job into MSGW status until a reply is returned.
Without seeing code that causes a problem, it's difficult to say much more. A statement such as exec sql select * from tableA already has a potentially significant problem by not specifying a column list, regardless of the existence of tableA. Embedded SQL generally will not cause an exception to be returned, but will use SQLSTATE to describe problems. It's the developer's responsibility to check for those returned conditions.
There is an interesting discussion that may be helpful here. It's about RPG rather than CBL but may be useful in solving your problem.
I have a question regarding multiple file transfers with QFtp. There is no direct way to transfer multiple files with the QFtp class. I tried it using the arbitrary FTP command "mput" with rawCommand() in QFtp, but it doesn't work for me.
Please let me know how I could do a multiple file transfer with QFtp.
Thanks,
Use a for-loop, and start a new transfer for every iteration. Then collect all the commandFinished() signals at the end.
QFtp works asynchronously. If an operation cannot be executed immediately, the operation will automatically be scheduled for later execution. The results of scheduled operations are reported via signals.