DBMS_JOB.BROKEN in Oracle

I am unable to find a replacement for the DBMS_JOB.BROKEN call to mark a job as broken. Please assist.
The command used in my code is:
dbms_job.broken(oracle_job_num, true);
I have already made the changes for the commands below, which were used in our code:
DBMS_JOB.SUBMIT -> DBMS_SCHEDULER.CREATE_JOB
DBMS_JOB.REMOVE -> DBMS_SCHEDULER.DROP_JOB

I don't think there is a specific API in DBMS_SCHEDULER to mark a job as broken, but there is one to enable a job again:
DBMS_SCHEDULER.enable(name=>'test_job');
There is also an API to change after how many failures a job is disabled:
DBMS_SCHEDULER.set_attribute (name=>'test_job', attribute=>'max_failures',value=>3);

Use DBMS_SCHEDULER.DISABLE('JOB_NAME') and DBMS_SCHEDULER.ENABLE('JOB_NAME') instead of DBMS_JOB.BROKEN(ORACLE_JOB_NUM, TRUE) and DBMS_JOB.BROKEN(ORACLE_JOB_NUM, FALSE). The "broken" and "enabled" functionality is not exactly the same, because DBMS_JOB automatically breaks jobs after 16 failures, whereas DBMS_SCHEDULER does not. But if you're just using BROKEN to manually disable and enable, then the behavior should be close enough.
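As a minimal sketch of that mapping (assuming the job was re-created with DBMS_SCHEDULER.CREATE_JOB under a name such as 'TEST_JOB', which is illustrative here):

BEGIN
  -- dbms_job.broken(job, TRUE)  -> stop the job from running
  DBMS_SCHEDULER.DISABLE(name => 'TEST_JOB');

  -- dbms_job.broken(job, FALSE) -> allow it to run again
  DBMS_SCHEDULER.ENABLE(name => 'TEST_JOB');

  -- optional: mimic DBMS_JOB's "broken after 16 failures" behaviour
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => 'TEST_JOB',
                               attribute => 'max_failures',
                               value     => 16);
END;
/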

Related

Does anyone know how to tell if an SBMJOB has finished?

What I'm trying to do is execute two SBMJOBs, but I need program A to finish before program B can execute. Does anyone know how to do this?
Program A:
SBMJOB CMD(CALL PGM(PROGRAM1) PARM(PARM1 PARM2)) JOB(PROGRAM1)
Program B:
To run this program, I need A to finish first, but how can I validate that?
SBMJOB CMD(CALL PGM(PROGRAM2) PARM(PARM1 PARM2)) JOB(PROGRAM2)
Thanks for the help
It would be easier to submit your two jobs to a JOBQ connected to a subsystem that lets only one job run at a time.
Your second job will run naturally after the first is finished.
You can choose the JOBQ with the JOBQ parameter on the SBMJOB command.
QBATCH is, by default, a job queue with only one simultaneous job, but check first: it could have been changed on your system.
See the CHGJOBQE command to change the configuration of a job queue. A general explanation of job queues and subsystems is in the IBM documentation on jobs and job queues.
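As a rough CL sketch (the library and job queue names are only placeholders; QBATCH itself could be used instead if it is still limited to one active job):

CRTJOBQ JOBQ(MYLIB/SEQJOBQ) TEXT('One-at-a-time queue')
ADDJOBQE SBSD(QBATCH) JOBQ(MYLIB/SEQJOBQ) MAXACT(1)

SBMJOB CMD(CALL PGM(PROGRAM1) PARM(PARM1 PARM2)) JOB(PROGRAM1) JOBQ(MYLIB/SEQJOBQ)
SBMJOB CMD(CALL PGM(PROGRAM2) PARM(PARM1 PARM2)) JOB(PROGRAM2) JOBQ(MYLIB/SEQJOBQ)

With MAXACT(1) on the job queue entry, PROGRAM2 stays on the queue until PROGRAM1 ends.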

Issues with mkdbfile in a simple "read a file > Create a hashfile job"

Hello, DataStage-savvy people.
Two days in a row, the same single DataStage job has failed (it does not stop at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened, to prevent the same incident from happening on the next run (tonight).
This job has been in production for as long as anyone can remember, and we had no issue with it before today.
Using Datastage 9.1.0.1
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via a command execution stage or similar methods, the stdout of the called command is captured and added to a message in the job log. So if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time?
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it ran a scan at the same time you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the dbfile to.
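A few quick checks one might run on the DataStage server after such a failure (the paths, project and job names below are placeholders, not taken from the original post):

# Was the filesystem full at the time?
df -h /path/to/hashfile/directory

# Can the engine user still create files there?
touch /path/to/hashfile/directory/.write_test && rm /path/to/hashfile/directory/.write_test

# Pull a summary of the job log from the command line (dsjob ships with the DSEngine)
/logiciel/iis9.1/Server/DSEngine/bin/dsjob -logsum -max 50 MYPROJECT MYJOB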

How can I log the termination/any end of an SSIS package?

I've got an SSIS package (targeting SQL Server 2012) and I'm currently debugging it. What I'm after is how to log that the SSIS package has finished or stopped by any methods.
The closest ones look to be 'OnExecStatusChanged', 'OnPostExecute', and 'OnPostValidate' however NONE of these provide any log messages when I break execution in Visual Studio.
I suspect the answer may be "you can't", but I want to see if there are perhaps more exotic solutions before I give up.
There are two options available that I can think of.
One has been highlighted above: using the pre- and post-execute functions. If you were to use this solution, I would recommend using a table (Dim_Package_Log?) and inserting into it with a stored procedure on pre- and post-execute, as sketched below.
Clarification: this won't catch package breaks, just starts, ends, and errors.
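A minimal sketch of that approach (the table, procedure, and column names are illustrative, not an established schema); the procedure would be called from Execute SQL Tasks in the package's OnPreExecute/OnPostExecute event handlers, passing the System::PackageName variable:

CREATE TABLE dbo.Dim_Package_Log (
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    PackageName NVARCHAR(260) NOT NULL,
    EventName   NVARCHAR(50)  NOT NULL,
    LoggedAt    DATETIME2     NOT NULL DEFAULT SYSDATETIME()
);
GO

CREATE PROCEDURE dbo.usp_LogPackageEvent
    @PackageName NVARCHAR(260),
    @EventName   NVARCHAR(50)
AS
BEGIN
    INSERT INTO dbo.Dim_Package_Log (PackageName, EventName)
    VALUES (@PackageName, @EventName);
END;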
As you rightly identified though this would not record package breaks. To capture this I have implemented a view that utilises two tables.
SSISDB.catalog.event_messages
SSISDB.catalog.executions
If you do some "exotic" joins you can utilise the execution_status from executions and the messages from event_messages to find the information you want.
I can't remember which MSDN page I found it on, but this is what the execution_status means in catalog.executions:
The possible values are created (1), running (2), canceled (3), failed (4), pending (5), ended unexpectedly (6), succeeded (7), stopping (8), and completed (9).
Clarification:
Below is a sample line of what SSISDB.catalog.executions outputs for each package execution from a Job:
43198 FolderName ProjectName PackageName.dtsx NULL NULL NULL NULL 10405 GUID SERVICEACCOUNTNAME 0 200 2015-02-16 00:00:03.4156856 +11:00 20 18 7 2015-02-16 00:00:05.4409834 +11:00 2015-02-16 00:00:58.4567400 +11:00 GUID SERVICEACCOUNTNAME 10324 NULL NULL ID SERVER SERVER 16776756 3791028 20971060 8131948 2
In this example there is a column with a value of 7. As detailed above, this status changes based upon the end state of the package execution; in this case, it succeeded. If the package breaks midway, that will be captured in this status.
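As a sketch of the join mentioned above (the exact column choices are my assumption from the SSISDB catalog views, not from the original answer):

SELECT
    e.execution_id,
    e.package_name,
    e.status,          -- 1=created ... 9=completed, per the list above
    e.start_time,
    e.end_time,
    m.message_time,
    m.message
FROM SSISDB.catalog.executions AS e
LEFT JOIN SSISDB.catalog.event_messages AS m
    ON m.operation_id = e.execution_id
WHERE e.package_name = 'PackageName.dtsx'   -- illustrative filter
ORDER BY e.execution_id DESC, m.message_time;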
Further information regarding this SSISDB capability is located on this MSDN page.
I know this is only a partial answer. What it covers is detecting that a package has finished with an error or with success, which you can do by calling the package from another parent package.
But if the package is forced to stop then this won't have any effect.

Hive execution hook

I need to hook in a custom execution hook in Apache Hive. Please let me know if somebody knows how to do it.
The current environment I am using is given below:
Hadoop : Cloudera version 4.1.2
Operating system : Centos
Thanks,
Arun
There are several types of hooks, depending on the stage at which you want to inject your custom code:
Driver run hooks (Pre/Post)
Semantic analyzer hooks (Pre/Post)
Execution hooks (Pre/Failure/Post)
Client statistics publisher
If you run a script, the processing flow looks as follows:
Driver.run() takes the command
HiveDriverRunHook.preDriverRun()
(HiveConf.ConfVars.HIVE_DRIVER_RUN_HOOKS)
Driver.compile() starts processing the command: creates the abstract syntax tree
AbstractSemanticAnalyzerHook.preAnalyze()
(HiveConf.ConfVars.SEMANTIC_ANALYZER_HOOK)
Semantic analysis
AbstractSemanticAnalyzerHook.postAnalyze()
(HiveConf.ConfVars.SEMANTIC_ANALYZER_HOOK)
Create and validate the query plan (physical plan)
Driver.execute() : ready to run the jobs
ExecuteWithHookContext.run()
(HiveConf.ConfVars.PREEXECHOOKS)
ExecDriver.execute() runs all the jobs
For each job at every HiveConf.ConfVars.HIVECOUNTERSPULLINTERVAL interval:
ClientStatsPublisher.run() is called to publish statistics
(HiveConf.ConfVars.CLIENTSTATSPUBLISHERS)
If a task fails: ExecuteWithHookContext.run()
(HiveConf.ConfVars.ONFAILUREHOOKS)
Finish all the tasks
ExecuteWithHookContext.run() (HiveConf.ConfVars.POSTEXECHOOKS)
Before returning the result HiveDriverRunHook.postDriverRun() ( HiveConf.ConfVars.HIVE_DRIVER_RUN_HOOKS)
Return the result.
For each of the hooks I indicated the interfaces you have to implement. In brackets is the corresponding configuration property key you have to set in order to register the class at the beginning of the script.
E.g., setting the pre-execution hook (the 9th stage of the workflow):
HiveConf.ConfVars.PREEXECHOOKS -> hive.exec.pre.hooks:
set hive.exec.pre.hooks=com.example.MyPreHook;
Unfortunately these features aren't really documented, but you can always look into the Driver class to see the evaluation order of the hooks.
Remark: I assumed Hive 0.11.0 here; I don't think the Cloudera distribution differs (too much).
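As a minimal sketch of an execution hook (the package and class name are illustrative, matching the set command above; this assumes the Hive 0.11-era hook API):

package com.example;

import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;

public class MyPreHook implements ExecuteWithHookContext {
    @Override
    public void run(HookContext hookContext) throws Exception {
        // Runs before job execution when registered via hive.exec.pre.hooks.
        // HookContext exposes the query plan, configuration and user info.
        System.err.println("Pre-exec hook fired for query: "
                + hookContext.getQueryPlan().getQueryString());
    }
}

Compile it into a jar, put the jar on the Hive classpath (e.g. via HIVE_AUX_JARS_PATH), and register it with the set command shown above.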
A good start: http://dharmeshkakadia.github.io/hive-hook/
There are examples there.
Note: the Hive CLI shows the messages on the console; if you execute from Hue, add a logger and you can see the results in the HiveServer2 role log.

Why are my delayed_job jobs re-running even though I tell them not to?

I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
more info
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is
Delayed::Worker.max_attempts = 1
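For example, in an initializer (the file path is illustrative):

# config/initializers/delayed_job_config.rb
# Older wiki pages describe Delayed::Job::MAX_ATTEMPTS; current versions of the
# gem read this setting from Delayed::Worker instead.
Delayed::Worker.max_attempts = 1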
Check your database table "delayed_jobs" for records (jobs) that still exist after the job "fails". The job will be re-run if the record is still there. If a record shows a non-zero "attempts" count, then you know your constant setting isn't working right.
Another guess is that the job's "failure", for some reason, is not being caught by DelayedJob. In that case, the "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: #John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the Delayed::Job jobs runner is not a full Rails program, so it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.
