Is there a way to enable procedure logging for an Oracle Scheduled Job? - oracle

I'm trying to enable logging on an Oracle Scheduled Job so that when I look at the run details of the job, I can discern what the procedure of the job worked on and what it did. Currently, the procedure logs through dbms_output.put_line(), which is convenient for procedures run from SQL*Plus since you only need set serveroutput on to see the output. However, I cannot figure out how to get this logging information to show up in the job run details. Specifically, I'm looking at:
select additional_info from dba_scheduler_job_run_details;
This also appears to be the data displayed in the run details within Enterprise Manager under the instances of the job.
So, is there a way, a setting or a different package, to have simple logging for an Oracle Scheduled Job?

You could add something at the end of the job's call that invokes dbms_output.get_lines, loads the data into a CLOB, and stores it somewhere for your perusal. You'd probably want to call DBMS_OUTPUT.ENABLE at the start too.
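For illustration, a minimal sketch of that approach with a hypothetical job_output_log table and capture procedure; only dbms_output.enable, dbms_output.get_lines and the chararr type are standard, the rest is made up for the example:

CREATE TABLE job_output_log (job_name VARCHAR2(128), logged_at TIMESTAMP, output CLOB);

CREATE OR REPLACE PROCEDURE capture_dbms_output (p_job_name IN VARCHAR2) AS
  l_lines dbms_output.chararr;
  l_count INTEGER := 1000000;  -- maximum number of buffered lines to fetch
  l_clob  CLOB;
BEGIN
  -- assumes the job's procedure called dbms_output.enable(NULL) at the start
  dbms_output.get_lines(l_lines, l_count);  -- l_count now holds the number of lines retrieved
  FOR i IN 1 .. l_count LOOP
    l_clob := l_clob || l_lines(i) || CHR(10);
  END LOOP;
  INSERT INTO job_output_log (job_name, logged_at, output)
  VALUES (p_job_name, SYSTIMESTAMP, l_clob);
END;
/

The job's procedure would call capture_dbms_output as its last statement, so the buffered output is persisted along with the rest of the job's work.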
Personally, I'd avoid DBMS_OUTPUT.PUT_LINE and write your own logging routine, either to a flat file or to a table (using autonomous transactions).

Autonomous transactions are the way to go!
You define a log table with the information you want to log and write a procedure that inserts & commits into this table as an autonomous transaction. That way, you can see what's going on, even when the job is still running.
AFAIK there is no other way to get at the information while the job is still running; DBMS_OUTPUT will only be visible after the job is done.
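A minimal sketch of such a routine, with a placeholder job_log table and log_message procedure name:

CREATE TABLE job_log (logged_at TIMESTAMP, message VARCHAR2(4000));

CREATE OR REPLACE PROCEDURE log_message (p_message IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO job_log (logged_at, message) VALUES (SYSTIMESTAMP, p_message);
  COMMIT;  -- commits only this insert, not the calling job's transaction
END;
/

The job's procedure calls log_message('...') at each step of interest, and you can query job_log from another session while the job is still running.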

Related

Rollback after coming out of the session in Oracle SQL*Plus

I am writing a wrapper shell or Perl script which opens an Oracle session using sqlplus and then executes some SQL files found by scanning a directory. As part of this, let's say we have multiple SQL files in a directory,
for example: first.sql, second.sql, third.sql.
I am planning to create a single file (AllSqlFilesInDirectory.sql) with the content below:
>cat AllSqlFilesInDirectory.sql
@first.sql
@second.sql
@third.sql
>
Now I am planning to run the file AllSqlFilesInDirectory.sql by opening an oracle sqlplus session.
After executing it, I plan to come out of the sqlplus session and search for any errors in the log file.
If there are any errors, I would like to execute a rollback. But I think that, as I am out of that sqlplus session, a rollback is not possible. I am mainly concerned about the DML statements that were executed as part of those multiple SQL files in the directory.
So I have these doubts:
Can I simply ignore this and not be concerned about rollback at all?
Can I do the rollback for a session which was already closed?
If the above is valid, then how can I do it?
Can I simply ignore this and not be concerned about rollback at all?
That's a business question you'd have to answer. If you want the changes to be rolled back if there is an error, you'd need to do a rollback.
Can I do the rollback for a session which was already closed?
As a practical matter, probably not. Technically you can, using flashback transaction backout, but that is generally far more complexity than you'd normally want to deal with.
If the above is valid, then how can I do it?
Rather than writing to a log file and parsing the log file to determine if there were any errors, it is most likely vastly easier to simply put a
whenever sqlerror exit rollback
at the top of your script. That tells SQL*Plus to roll back the transaction and exit whenever an error is encountered. You don't have to write logic to parse the log file.
Whenever sqlerror documentation
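For example, the generated wrapper could look roughly like this (EXIT FAILURE and the final COMMIT are just one reasonable arrangement, not the only one):

whenever sqlerror exit failure rollback
@first.sql
@second.sql
@third.sql
commit;
exit success

The shell or Perl wrapper can then check the exit status of sqlplus instead of parsing a log file.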

Exporting data to csv file: sqlplus vs sqlcl parallel vs pl/sql utl_file dbms_parallel_execute, which one is faster

In my last project we were working on a requirement where a huge amount of data (40 million rows) needed to be read, and for each row we needed to trigger a process. As part of the design we used multithreading, where each thread fetches the data for a given partition using a JDBC cursor with a configurable fetch size. However, when we ran the job in the Prod environment, we observed that it was slow because most of the time was spent querying the data from the database.
As we had very tight timelines for completing the job execution, we came up with a workaround where the data is exported from SQL Developer in CSV format and split into small files. These files are provided to the job. This improved the job performance significantly and helped complete the job on time.
As mentioned above, we used a manual step to export the data to the file. If we need to automate this step, for instance by executing the export step from the Java app, which one of the options below (which are suggested on the web) will be faster?
sqlplus (Java making native call to sqlplus)
sqlcl parallel spool
pl/sql procedure with utl_file and dbms_parallel_execute
The link below gives some details on the above but does not have stats:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9536328100346697722
Please note that currently I don't have access to this Oracle environment, so I could not test this from my side. Also, I am an application developer and don't have much expertise on the DB side.
So I am looking for advice from someone who has worked on a similar use case or has relevant expertise.
Thanks in advance.
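Not an answer on relative speed, but for reference, the plain sqlplus option from the list above typically boils down to a spool script along these lines (SET MARKUP CSV ON requires SQL*Plus 12.2 or later; the table and file names are placeholders):

set markup csv on
set feedback off
set termout off
set arraysize 1000
spool /tmp/big_table.csv
select * from big_table;
spool off
exit

The sqlcl and dbms_parallel_execute options discussed in the AskTom link are essentially variations on this idea that split the work into chunks so several extracts can run concurrently.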

Execute Oracle script with SqlPlus; go to state before script processing if any exceptions

I have a lot of scripts that contain all kinds of transactions including complex DML and DDL.
I need to find a way to run them fully or not at all. I'd like to see the following behavior: if any error occurs in the middle of script processing, go back to the state before the script was processed.
I thought I would just put whole script into one big transaction, put a savepoint at the beginning and make a rollback to the savepoint in case of any exception, but AFAIK that's impossible, as Oracle does not support nested transactions.
Do you mind sharing your thoughts about that case?
I don't think there is an easy solution for this, because you have DDL in your script. DDL issues an implicit commit before it is processed, so a rollback will not help.
As an alternative you could use Oracle's flashback option, but this impacts the entire database. You create a flashback restore point, run the script, and if any errors occurred you flash the database back to the restore point. This will revert all the changes in all the schemas of your database. This is good when you have a separate database for running/testing your scripts. It is rather fast. The database should be in archivelog mode.
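Roughly, with a placeholder restore point name and assuming you have the necessary privileges and configuration:

-- before running the script
create restore point before_my_script guarantee flashback database;

-- if the script failed (the database must be mounted, not open, for the flashback)
shutdown immediate
startup mount
flashback database to restore point before_my_script;
alter database open resetlogs;

-- if the script succeeded
drop restore point before_my_script;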
Another option is to use the export/import utilities (expdp/impdp). This is hard to automate in one script, so you would do the recovery manually. You take an export dump, run the script, and if any errors happened you restore your DB schemas from the dump using impdp.
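Schematically, with placeholder schema, directory and file names (note that table_exists_action=replace only covers tables, so anything beyond tables still needs manual cleanup):

# before running the script: take a schema-level export
expdp system schemas=MY_SCHEMA directory=DATA_PUMP_DIR dumpfile=before_script.dmp logfile=before_script.log

# if the script failed: restore the schema objects from the dump
impdp system schemas=MY_SCHEMA directory=DATA_PUMP_DIR dumpfile=before_script.dmp table_exists_action=replace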
Perhaps what you need is the "whenever sqlerror exit" clause.
Check it out here.

How to trace errors logs of the Stored Procedure in PROD environment?

I am not an expert in Oracle DB, but I am curious to know how we can check the logs of a particular stored procedure when it gets executed.
I checked the trace folder, but I don't know which file I have to analyse, or how.
When I checked the UNIX logs, they show a timeout error. It seems the application did not get a response from one of the procedures. After 2-3 hrs it gets processed, and sometimes it doesn't. It should have done that job in 30 minutes max. I am not sure whether the DB is the culprit or the web server (WAS).
In extreme cases I ask for a DB restart and a WAS restart, and this solves our problem.
Is it possible to trace the problem? I am in the PROD environment. The same behavior does not occur in the UAT or SIT environments.
Could the problem be on the WAS side or the DB side? Please throw some light on this.
Thanks
I think what you want is DBMS_TRACE. You'll have to enable tracing in your session and execute the procedure manually.
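A rough outline of that flow, assuming the trace tables from $ORACLE_HOME/rdbms/admin/tracetab.sql have been created and using a placeholder procedure name (the exact columns of the trace table vary by version):

alter procedure my_proc compile debug;

exec dbms_trace.set_plsql_trace(dbms_trace.trace_all_calls + dbms_trace.trace_all_exceptions)
exec my_proc
exec dbms_trace.clear_plsql_trace

select * from plsql_trace_events order by event_seq;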
If, by chance, this procedure is being executed by the Oracle Scheduler, you may find some info in the alert log. I'd suggest checking that anyway.
If the procedure used to run in 30 minutes and now takes 2 hours to complete, and there were no changes to it, then the problem is not in the procedure.
I'd suggest you check for unusable indexes, redo log switches, blocking sessions, table locks, etc.; it is hard to say exactly without knowing the procedure. You say it's a prod environment, so the DBA must surely have some performance monitoring in place. If, by chance, you have Oracle Enterprise Manager, go and take a look at what is happening while the procedure is being executed.

Oracle dbms_job with invalid owner

OK, a database at a client's site has dbms_job entries where the schema_user is invalid. (It appears to be the effect of bringing over a schema from another machine using exp/imp.)
I would like to get rid of these jobs, but standard operating procedure says that you must connect as the owner of a job in order to dbms_job.remove() it.
I thought a workaround might be, to create the user in this instance, and then use it to remove the job.
Thoughts?
Edit:
Or, alternatively, could I make direct edits to the sys.job$ table instead of going through the dbms_job interface?
There's a package owned by SYS called DBMS_IJOB. This offers pretty much the same functionality as DBMS_JOB but it allows us to manipulate jobs owned by other users.
If your rogue job is number 23 then this command should kill it:
SQL> exec dbms_ijob.remove(23)
By default privileges on this package are not granted to other users, so you need to connect as SYS in order to execute it. And remember to commit the change!
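Putting it together, a rough sequence (job 23 is just the example number from above):

-- as SYS
select job, schema_user, what from dba_jobs;  -- identify the rogue job number
exec dbms_ijob.remove(23)
commit;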
