I have a shell script which will trigger a PL/SQL report generation procedure after certain pre-conditions are satisfied. The logic for checking whether the pre-conditions are fulfilled is written in a PL/SQL package. The report generation needs to wait until the pre-conditions are fulfilled.
What are the pros and cons of waiting using dbms_lock.sleep inside a PL/SQL procedure vs. a UNIX sleep?
Like a lot of design decisions, the answer is: it depends.
Database connections are expensive, relatively time-consuming operations, so the more efficient approach is probably to connect to the database once and let the PL/SQL job handle the waiting.
It's also probably cleaner to have a simple PL/SQL call and let the database handle the report and sleep logic, rather than writing an API that returns a state which the calling program must interpret and act on. This also gives you a neater path to alternative execution (say, calling it from a GUI or a DBMS_SCHEDULER job).
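As a rough sketch of the in-database wait (pkg_report, preconditions_met, and generate are hypothetical names; this assumes execute on dbms_lock):
begin
  -- poll until the package says the pre-conditions are satisfied
  while not pkg_report.preconditions_met loop
    dbms_lock.sleep(60);  -- pause 60 seconds between checks
  end loop;
  pkg_report.generate;    -- then run the report in the same connection
end;
/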
There are two specific advantages of using a shell script sleep:
You have the option of emitting a status every time the loop enters sleep mode (if this is interactive)
Execute on sys.dbms_lock is not granted to anybody by default, and some DBAs can be reluctant to grant execute on that package.
I am trying to modify the accepted solution to this question (https://stackoverflow.com/questions/62362298/run-procedures-in-parallel-oracle-pl-sql) so that:
When a program (or stored procedure) run as part of a chain step finishes, it immediately restarts for the next invocation.
I am basically trying to create a way to run jobs continuously in parallel. The accepted solution works for a single execution of the parallel jobs, but I am unsure how to keep these jobs running indefinitely.
So far I have read the Scheduler documentation, and it seems a rule with evaluation_interval might be usable, but I am not sure.
I want multiple stored procedures running in different sessions to write to a single log file (which gets created on a daily basis).
This is what I have done.
create or replace package PKG_LOG as
  procedure SP_LOGFILE_OPEN;
  procedure SP_LOGFILE_WRITE;
end PKG_LOG;
/
create or replace package body PKG_LOG as
  LF_LOG UTL_FILE.FILE_TYPE;  -- file handle held as per-session package state
  procedure SP_LOGFILE_OPEN is
  begin
    -- step 1) Open the logfile in append mode
    -- (LV_FILE_LOC and O_LOGFILE are defined elsewhere in my code)
    LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
  end SP_LOGFILE_OPEN;
  procedure SP_LOGFILE_WRITE is
  begin
    -- step 1) Write the logs as per application need
    UTL_FILE.PUT_LINE(LF_LOG, 'whatever I want to write');
    -- step 2) Flush the content, as I want the logs written in real time
    UTL_FILE.FFLUSH(LF_LOG);
  end SP_LOGFILE_WRITE;
end PKG_LOG;
/
Now whenever I want to write the log from any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_WRITE (as many times as I want).
The problem is, if there are two stored procedures, say SP1 and SP2, and both of them try to open the file concurrently, it never throws an error or waits for the other to finish. Instead the file gets opened in both of the sessions where SP1 and SP2 are executing.
The content of SP1 (if it started running first) will be completely written into the logfile, but the content from SP2 will only be partially written: SP2 starts writing only when SP1's execution stops, and the initial content SP2 was trying to write into the logfile gets lost due to FFLUSH.
As per my requirement, I don't want to lose the content of SP2 while SP1 is running.
Any suggestions, please. I don't want to drop the idea of FFLUSH as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock or wait until a lock is available, then do your write, then release the lock. It has to be serialized.
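A minimal sketch of that idea (the lock name PKG_LOG_FILE is made up here, and this assumes execute on sys.dbms_lock):
declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- map an arbitrary lock name to a handle (note: allocate_unique commits)
  dbms_lock.allocate_unique('PKG_LOG_FILE', l_handle);
  -- queue for the exclusive lock; 0 = acquired, 4 = we already hold it
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode);
  if l_status in (0, 4) then
    PKG_LOG.SP_LOGFILE_WRITE;                 -- the serialized write
    l_status := dbms_lock.release(l_handle);  -- let the next session in
  end if;
end;
/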
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
Yours is a bad idea. Instead of writing directly to a file, simply enqueue the log message into an Oracle Advanced Queue and create a job that runs very frequently (every few seconds) to dequeue from the AQ. The procedure invoked by the job is what actually writes to the file. This way you synchronize the different SP executions that are trying to log concurrently to the same file: the actual logging is done by one single SP invoked by the job.
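A sketch of the enqueue side (the type and queue names are made up; the one-off setup requires DBMS_AQADM privileges):
-- one-off setup: payload type, queue table, and queue
create or replace type t_log_msg as object (text varchar2(4000));
/
begin
  dbms_aqadm.create_queue_table(queue_table        => 'log_msg_qt',
                                queue_payload_type => 't_log_msg');
  dbms_aqadm.create_queue('log_msg_q', 'log_msg_qt');
  dbms_aqadm.start_queue('log_msg_q');
end;
/
-- what each SP calls instead of touching the file itself
create or replace procedure sp_log_enqueue(p_text varchar2) is
  l_opts  dbms_aq.enqueue_options_t;
  l_props dbms_aq.message_properties_t;
  l_msgid raw(16);
begin
  dbms_aq.enqueue(queue_name         => 'log_msg_q',
                  enqueue_options    => l_opts,
                  message_properties => l_props,
                  payload            => t_log_msg(p_text),
                  msgid              => l_msgid);
  commit;
end;
/
The frequent job then dequeues in a loop and is the only session that ever writes to the file with UTL_FILE, so no two writers can interleave.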
I am a beginner in SAP ABAP. I am debugging an asynchronous RFC (parallel processing). I have put a break-point at the calling point of the RFC, an external break-point inside the RFC, and an external break-point in the form which is called at the end of task through PERFORM. I am able to debug the RFC FM.
Another session opens up, but I am not able to debug the perform which is called after the end of the task. After the RFC is debugged, control returns to the calling point of the FM; it doesn't go inside the form. Only when all the iterations are finished does it finally go inside the perform. Why so? Shouldn't the perform be executed in parallel?
Inside the perform I have written RECEIVE RESULTS FROM FUNCTION XXX, but the debugger control does not go inside the perform after returning from the RFC.
You have given very little information on the overall program flow, but there's a part of the documentation that might be relevant to your case:
A prerequisite for the execution of a registered callback routine is that the calling program still exists in its internal session when the remote function is terminated. It is then executed here at the next change of the work process in a roll-in. If the program was terminated or is located on the stack as part of a call sequence, the callback routine is not executed.
[...]
The time when the callback routines are executed can be programmed explicitly or be reached implicitly:
The statement WAIT FOR ASYNCHRONOUS TASKS is used for explicit programming. As specified by a condition, this statement changes the work process and hence executes the callback routines registered up to this time. It waits for as many registered routines to end until the condition is met (the maximum wait time can be restricted). Explicit programming is recommended whenever the results of the remote function are required in the current program.
If the results of the remote function are not required in the current program, the time at which the callback routines are executed can also be determined by an implicit change of the work process (for example, at the end of a dialog step). This can be a good idea, for example, in GUI scenarios in which uses of WAIT are not wanted. In this case, it must be ensured that the work process changes before the program is ended. There is also a risk that, if the work process is changed implicitly, not all callback routines are registered in time.
It is likely that the program issuing the call and registering the callback routine either is terminated or does not issue a WAIT FOR ASYNCHRONOUS TASKS, so the callback is only executed on the next roll-in.
Re-reading your question: you apparently assume that the callback routine will be executed in parallel to the program that registered it. That is not the case; ABAP is not multi-threaded.
I have been facing this problem lately:
Normally, when an ILE COBOL program running in a batch job on IBM i (AS/400) triggers an exception, the batch job stops and goes from RUN to MSGW. But when it is an SQLCBLLE program and there is a problem executing an SQL statement, it simply rolls back and continues execution without putting the job into MSGW.
Is there a way to know if an SQLCBLLE in a batch job has not executed correctly, and is it possible to trigger MSGW for the batch job and let the default error handler deal with it?
Every SQL statement should be followed by a test that checks SQLSTATE (or possibly SQLCODE) to see if the SQL succeeded. Depending on the SQLSTATE (or perhaps SQLCODE) value, the program needs to decide what action to take.
The action can be to send a *INQ message to put the job into MSGW status until a reply is returned.
Without seeing code that causes a problem, it's difficult to say much more. A statement such as exec sql select * from tableA already has a potentially significant problem by not specifying a column list, regardless of the existence of tableA. Embedded SQL generally will not cause an exception to be returned, but will use SQLSTATE to describe problems. It's the developer's responsibility to check for those returned conditions.
There is an interesting discussion that may be helpful here. It's about RPG rather than CBL but may be useful in solving your problem.
From my shell script I am killing my background function process using the kill command. This function calls an SQL procedure using sqlplus:
func_foo(){
retval=`sqlplus -s $USER_NAME/$PWD <<EOF
set pages 0 lines 120 trimout on trimspool on tab off echo off verify off feed off serverout on
exec pkg_xyz.proc_abc;
exit;
EOF`
}
func_foo &
pid_func_foo=$!
sleep 5
kill $pid_func_foo 2>/dev/null
wait $pid_func_foo 2>/dev/null
The problem with this approach is that even if my function process is killed, the Oracle process keeps on running; it is not getting killed. I am new to Oracle and not sure how to handle this scenario. Please give me a hint on how to handle it.
Killing the Oracle processes is a bad idea. Try to solve your problem in another way.
Run your procedure as a job, using dbms_scheduler. You can simply stop the job when needed by calling dbms_scheduler.stop_job('job name').
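A minimal sketch (the job and procedure names here are illustrative):
begin
  dbms_scheduler.create_job(
    job_name   => 'abc_job',             -- made-up name
    job_type   => 'STORED_PROCEDURE',
    job_action => 'pkg_xyz.proc_abc',
    enabled    => true);
end;
/
-- later, from any session, instead of a kill:
exec dbms_scheduler.stop_job('abc_job');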
Build your procedure so it can be stopped programmatically. I have built a couple of procedures that run for a very long time. Every now and then the procedure checks a table called "Status", containing only one row. If the status is "ok", it runs on. If I change the row to something else, the procedure sees this and stops.
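A sketch of that pattern (the status table and its column are assumptions for illustration):
create or replace procedure proc_abc is
  l_state varchar2(10);
begin
  loop
    select state into l_state from status;  -- one-row control table
    exit when l_state <> 'ok';              -- someone asked us to stop
    -- ... do the next slice of the long-running work here ...
  end loop;
end;
/
-- to stop it from another session:
-- update status set state = 'stop'; commit;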
Hitting control-c in an interactive SQL*Plus session terminates the running command, generates an informational ORA-01013 message, and leaves you at the SQL*Plus prompt - with the Oracle process still alive but idle (possibly oversimplifying somewhat).
You can get the equivalent effect by sending an interrupt signal, rather than default termination signal. This might vary slightly depending on your OS and shell, but is usually something like:
kill -int $pid_func_foo 2>/dev/null
This should still generate the ORA-01013, and the sqlplus process will continue. But as the next statement in your 'here document' is exit, it will still stop, and will do so more naturally than with a termination signal: the Oracle session will clear down normally, removing the Oracle process. (If your procedure is doing any inserts or updates, there may still be a delay while the transaction rolls back.)
I'm not sure this is a particularly good way to manage execution time limits; job control or resource management might be a better way to go.