SQLCBLLE not running correctly and does not produce MSGW - debugging

I have been facing this problem lately:
Normally, when an ILE COBOL program running in a batch job on IBM iSeries (AS/400) triggers an exception, the batch job stops and goes from RUN to MSGW. But when the program is a SQLCBLLE and there is a problem executing an SQL statement, it simply rolls back and continues execution without putting the job into MSGW.
Is there a way to know whether a SQLCBLLE in a batch job has not executed correctly, and is there a way to trigger MSGW for the batch job and let the default error handler get the errors?

Every SQL statement should be followed by a test that checks SQLSTATE (or possibly SQLCODE) to see if the SQL succeeded. Depending on the SQLSTATE (or perhaps SQLCODE) value, the program needs to decide what action to take.
The action can be to send a *INQ message to put the job into MSGW status until a reply is returned.
Without seeing code that causes a problem, it's difficult to say much more. A statement such as exec sql select * from tableA already has a potentially significant problem by not specifying a column list, regardless of the existence of tableA. Embedded SQL generally will not cause an exception to be returned, but will use SQLSTATE to describe problems. It's the developer's responsibility to check for those returned conditions.

There is an interesting discussion that may be helpful here. It's about RPG rather than COBOL, but it may still help in solving your problem.

Related

Oracle PL/SQL - Running procedures in parallel

I am trying to modify the accepted solution to this question here: https://stackoverflow.com/questions/62362298/run-procedures-in-parallel-oracle-pl-sql
such that when a program (or stored procedure) run as part of a chain step finishes, it immediately restarts for the next invocation.
I am basically trying to create a way to run jobs in parallel continuously. The accepted solution works for a single execution of the parallel jobs, but I am unsure how to keep these jobs running indefinitely.
So far I have read the Scheduler documentation, and it seems a rule with evaluation_interval might be usable? Not sure.
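(Not from the original thread, but one simple way to get "finish, then immediately run again" without chain rules is to make each worker a self-repeating DBMS_SCHEDULER job; the scheduler never runs two instances of the same job at once, so a short repeat_interval effectively means "run again once the previous run has finished". The job and procedure names below are invented for illustration.)

-- Hedged sketch: one self-repeating worker job; create one such job per parallel worker.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'WORKER_JOB_1',              -- hypothetical name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MY_PKG.DO_WORK',            -- hypothetical worker procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=5',  -- re-run every few seconds
    enabled         => TRUE);
END;
/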

Need to write a single logfile from different SPs running concurrently in different sessions in PL/SQL

I want to write to a single log file (which gets created on a daily basis) from multiple SPs running in different sessions.
This is what I have done.
create or replace package body PKG_LOG as

  LF_LOG  UTL_FILE.FILE_TYPE;   -- file handle shared by the procedures below

  procedure SP_LOGFILE_OPEN is
  begin
    -- step 1) Open the logfile (LV_FILE_LOC and O_LOGFILE hold the directory and file name):
    LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
  end SP_LOGFILE_OPEN;

  procedure SP_LOGFILE_WRITE is
  begin
    -- step 1) Write the logs as the application needs:
    UTL_FILE.PUT_LINE(LF_LOG, 'whatever I want to write');
    -- step 2) Flush the content, as I want the logs to be written in real time:
    UTL_FILE.FFLUSH(LF_LOG);
  end SP_LOGFILE_WRITE;

end PKG_LOG;
Now, whenever I want to write the log from any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_WRITE (as many times as I want), as in the sketch below.
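-- Illustrative caller: open the daily logfile once, then write as often as needed.
BEGIN
  PKG_LOG.SP_LOGFILE_OPEN;
  PKG_LOG.SP_LOGFILE_WRITE;
  PKG_LOG.SP_LOGFILE_WRITE;
END;
/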
The problem is this: say there are two stored procedures, SP1 and SP2. If both of them try to open the file concurrently, it never throws an error or waits for the other to finish. Instead, the file gets opened in both of the sessions where SP1 and SP2 are executing.
The content of SP1 (if it started running first) will be completely written into the logfile, but the content from SP2 will only be partially written. SP2 starts writing only when SP1's execution stops, and the initial content SP2 was trying to write gets lost due to FFLUSH.
As per my requirement, I don't want to lose the content of SP2 while SP1 is running.
Any suggestions please. I don't want to drop the idea of FFLUSH, as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock or wait until a lock is available, then do your write, then release the lock. It has to be serialized.
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in a line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
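For what it's worth, a minimal sketch of that DBMS_LOCK approach, assuming the lock name is arbitrary and that PKG_LOG.SP_LOGFILE_WRITE from the question does the actual UTL_FILE call:

-- Sketch only: serialize writers with a named user lock around the file write.
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- Note: ALLOCATE_UNIQUE issues an implicit COMMIT.
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'LOGFILE_LOCK', lockhandle => l_handle);
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,      -- wait up to 10 seconds
                                release_on_commit => FALSE);
  IF l_status = 0 THEN                  -- 0 = lock granted
    PKG_LOG.SP_LOGFILE_WRITE;           -- write and flush while holding the lock
    l_status := DBMS_LOCK.RELEASE(l_handle);
  ELSE
    RAISE_APPLICATION_ERROR(-20001, 'Could not get log lock, status ' || l_status);
  END IF;
END;
/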
Yours is a bad idea. Instead of writing directly to a file, simply enqueue the log message into an Oracle Advanced Queue and create a job that runs very frequently (every few seconds) to dequeue from the AQ. The procedure invoked by the job is the only one that actually writes to the file, so the different SP executions that log concurrently onto the same file are effectively synchronized.
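A rough sketch of that queue-based pattern follows; every name in it (the payload type, queue, directory object, and procedures) is invented for illustration, and the producer enqueues with immediate visibility so the message does not have to wait for the caller's commit.

-- One-time setup: payload type plus queue (requires AQ privileges).
CREATE OR REPLACE TYPE log_msg_t AS OBJECT (text VARCHAR2(4000));
/
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table        => 'LOG_QT',
                                queue_payload_type => 'LOG_MSG_T');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'LOG_Q', queue_table => 'LOG_QT');
  DBMS_AQADM.START_QUEUE(queue_name => 'LOG_Q');
END;
/
-- Producers: SP1, SP2, ... call this instead of UTL_FILE.
CREATE OR REPLACE PROCEDURE enqueue_log (p_text IN VARCHAR2) IS
  l_enq   DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
BEGIN
  l_enq.visibility := DBMS_AQ.IMMEDIATE;   -- visible without the caller committing
  DBMS_AQ.ENQUEUE(queue_name         => 'LOG_Q',
                  enqueue_options    => l_enq,
                  message_properties => l_props,
                  payload            => log_msg_t(p_text),
                  msgid              => l_msgid);
END;
/
-- Consumer: run by a frequent DBMS_SCHEDULER job; the only writer of the file.
CREATE OR REPLACE PROCEDURE drain_log_queue IS
  l_deq   DBMS_AQ.DEQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msg   log_msg_t;
  l_msgid RAW(16);
  l_file  UTL_FILE.FILE_TYPE;
  e_empty EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_empty, -25228);  -- raised when nothing is left to dequeue
BEGIN
  l_deq.wait := DBMS_AQ.NO_WAIT;
  l_file := UTL_FILE.FOPEN('LOG_DIR', 'daily.log', 'A', 32760);
  BEGIN
    LOOP
      DBMS_AQ.DEQUEUE(queue_name         => 'LOG_Q',
                      dequeue_options    => l_deq,
                      message_properties => l_props,
                      payload            => l_msg,
                      msgid              => l_msgid);
      UTL_FILE.PUT_LINE(l_file, l_msg.text);
      UTL_FILE.FFLUSH(l_file);
    END LOOP;
  EXCEPTION
    WHEN e_empty THEN NULL;                -- queue drained
  END;
  COMMIT;                                  -- removes the dequeued messages
  UTL_FILE.FCLOSE(l_file);
END;
/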

DBMS_LOCK.SLEEP vs UNIX sleep

I have a shell script which triggers a PL/SQL report generation procedure after certain pre-conditions are satisfied. The logic for checking whether the pre-conditions are fulfilled is written in a PL/SQL package. The report generation needs to wait until the pre-conditions are fulfilled.
What are the pros and cons of waiting using dbms_lock.sleep inside PL/SQL procedure vs UNIX sleep?
Like a lot of design decisions, the answer is: it depends.
Database connections are expensive and relatively time consuming operations. So probably the more efficient approach would be to connect to the database once and let the PL/SQL job handle the waiting process.
Also it's probably cleaner to have a simple PL/SQL call and let the database handle the report or sleep logic rather than write an API that returns a state which the calling program must interpret and act on. This also gives you a neater path to alternative execution (say by calling from a GUI or a DBMS_SCHEDULER job).
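As a rough sketch of that in-database approach (the pre-condition check and report procedure names are stand-ins for the actual package):

-- Sketch: poll inside PL/SQL, sleeping between checks, then generate the report.
-- pkg_report.preconditions_met and pkg_report.generate are hypothetical names.
BEGIN
  WHILE NOT pkg_report.preconditions_met LOOP
    DBMS_LOCK.SLEEP(60);   -- pause 60 seconds before re-checking
  END LOOP;
  pkg_report.generate;
END;
/

(On Oracle 18c and later, DBMS_SESSION.SLEEP does the same thing and is executable by PUBLIC, which sidesteps the DBMS_LOCK grant issue mentioned below.)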
There are two specific advantages of using a shell script sleep:
You have the option of emitting a status every time the loop enters sleep mode (if this is interactive)
Execute on sys.dbms_lock is not granted to anybody by default. Some DBAs can be reluctant to grant execute on that package.

What results can I force in a cucumber scenario

Using ruby / cucumber, I know you can explicitly call a fail("message"), but what are your other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have cucumber tests that cover edge cases we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise an edge case, the test fails because the SQL statement returns an empty data set. Right now we just have those tests failing, but I would like to see something along the lines of "no_data" or similar when the SQL statement returns an empty data set, so the output would look like
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" if there is a skip("message") function.
What are my options so we can see that, with the current data, we just don't have any test data for those tests? Making these manual tests is also not an option; they need to be run every week with our automation, but somehow kept separate from the failures. Failure means a defect; no_data means it's not a testable condition. It's the difference between a warning (we have not tested this edge case) and an alert (broken code).
You can't invoke 'skipped', but you can certainly call pending, with or without an error message. I've used this in a situation similar to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing cucumber to mark it as pending because it did not match a step definition. That then got lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.

Alternative to executeBatch in jdbc with different failure handling?

executeBatch does not continue executing the rest of the commands if there's a failure in one of the executions. Is there any way, or any alternative to executeBatch, whereby even if a command fails to execute, the rest of the commands are still executed? I am not using executeUpdate since it takes a lot of time and executes the queries one by one.
The documentation says this:
An exception thrown when an error occurs during a batch update operation. In addition to the information provided by SQLException, a BatchUpdateException provides the update counts for all commands that were executed successfully during the batch update, that is, all commands that were executed before the error occurred. The order of elements in an array of update counts corresponds to the order in which commands were added to the batch.
After a command in a batch update fails to execute properly and a BatchUpdateException is thrown, the driver may or may not continue to process the remaining commands in the batch. If the driver continues processing after a failure, the array returned by the method BatchUpdateException.getUpdateCounts will have an element for every command in the batch rather than only elements for the commands that executed successfully before the error. In the case where the driver continues processing commands, the array element for any command that failed is Statement.EXECUTE_FAILED.
So, as I understand it, this depends on the JDBC driver you work with.
Probably a better solution would be to find the reason for the failing statements and fix that?
