What I'm trying to do is execute two SBMJOBs, but program B must not run until program A has finished. Does anyone know how to do this?
Program A:
SBMJOB CMD(CALL PGM(PROGRAM1) PARM(PARM1 PARM2)) JOB(PROGRAM1)
Program B:
To run this program, I need A to finish first, but how can I validate that it has?
SBMJOB CMD(CALL PGM(PROGRAM2) PARM(PARM1 PARM2)) JOB(PROGRAM2)
Thanks for the help
It would be easier to submit your two jobs to a JOBQ connected to a subsystem that lets only one job run at a time.
Your second job will run naturally after the first is finished.
You can choose the JOBQ with the JOBQ parameter on the SBMJOB command.
QBATCH is, by default, a job queue with only one simultaneous job, but check first; that could have been changed on your system.
See the CHGJOBQE command to change the configuration of a job queue entry. A general explanation of job queues and subsystems is in the IBM documentation on jobs and job queues.
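For example, a rough sketch in CL, assuming the default QBATCH subsystem and job queue (library names and the MAXACT value are assumptions; adjust to your system):
/* Ensure the job queue entry allows only one active job at a time */
CHGJOBQE SBSD(QSYS/QBATCH) JOBQ(QGPL/QBATCH) MAXACT(1)
/* Submit both jobs to that queue; PROGRAM2 sits on the queue until PROGRAM1 ends */
SBMJOB CMD(CALL PGM(PROGRAM1) PARM(PARM1 PARM2)) JOB(PROGRAM1) JOBQ(QGPL/QBATCH)
SBMJOB CMD(CALL PGM(PROGRAM2) PARM(PARM1 PARM2)) JOB(PROGRAM2) JOBQ(QGPL/QBATCH)
With MAXACT(1) on the job queue entry, the subsystem starts PROGRAM2 only after PROGRAM1 has finished.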
Related
I'm using Azure Pipelines for my CI pipeline running on windows-2019. One of my cmd scripts generates output report files and then keeps running until either Ctrl+C is pressed or 'y' is typed. How can this be handled? If I add another script step after this one to send 'y', it is never reached because the previous command never terminates.
In the meantime, I added the "timeoutInMinutes" parameter to the step so it times out after a minute, but this still causes a task error, which is not ideal.
Does anyone have any ideas on how I can end the first script once it's complete? (It takes about 5 seconds to complete the necessary task)
You can't; it is impossible to enter user input during an Azure Pipelines run.
You must edit your script so that it does not finish by waiting for user input.
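If editing the script is not practical, one workaround worth trying is to pipe the expected answer into it from the step itself, so it never blocks waiting for a keypress. A rough sketch, assuming the script is named generate_report.cmd (placeholder name):
steps:
- script: echo y | generate_report.cmd
  displayName: Generate reports (auto-answer the y prompt)
  timeoutInMinutes: 5   # kept as a safety net in case the prompt is not satisfied
This only works if the script reads the confirmation from stdin; if it specifically waits for Ctrl+C, it will still need to be changed.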
Hello DataStage-savvy people.
Two days in a row, the same single DataStage job has failed (not stopping at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened so I can prevent the same incident from happening on the next run (tonight).
Before this, the job had been in production for a long time, and we had no issue with it.
Using DataStage 9.1.0.1
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via a command execution stage or similar methods, the stdout of the called command is captured and then added to a message in the job log. So if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time? (A quick check is sketched below.)
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it ran a scan at the same time you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the dbfile to.
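If disk space is the suspect, a quick check on the engine host could look like this (reusing the placeholder path from the question):
df -h /[path to hashfile]     # free space on the filesystem holding the hashfile directory
ls -ld /[path to hashfile]    # ownership and permissions of the target directory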
As you can see, when the SEQ_DIM_ACCOUNT job executes it has two conditions, Success and Failure.
I want to run execute_command_60 when it fails, and once execute_command_60 has run, I want it to continue on to SEQ_DIM_BUSINESS_PARTNER. But when I tried to link execute_command_60 to SEQ_DIM_BUSINESS_PARTNER, it gave me the error "the destination stage cannot support any more input links".
Is there a way to do that?
Yes, it is possible with the help of a Sequencer stage.
Add it after the Execute_Command and before the SEQ_DIM_BUSINESS_PARTNER. This stage can take any number of input links, and you only have to specify whether All or Any of the input links must have run before continuing.
I tried executing the following command for Storm 1.1.1:
storm rebalance [topologyName] -n [number_of_worker]
The command runs successfully, but the number of workers remains unchanged. I tried reducing the number of workers too; that also didn't work.
I have no clue what's happening. Any pointers would be helpful.
FYI: I have implemented custom scheduling. Could it be because of that?
You can always check Storm's source code behind that CLI, or code the rebalance yourself (tested against 1.0.2):
RebalanceOptions rebalanceOptions = new RebalanceOptions();
rebalanceOptions.set_num_workers(newNumWorkers);
// call rebalance on a Nimbus.Client instance (e.g. from NimbusClient.getConfiguredClient(conf).getClient())
nimbusClient.rebalance("foo", rebalanceOptions);
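For context, a self-contained sketch of that approach against the Storm 1.x client API (class name, topology name, and worker count are placeholders):
import java.util.Map;
import org.apache.storm.generated.Nimbus;
import org.apache.storm.generated.RebalanceOptions;
import org.apache.storm.utils.NimbusClient;
import org.apache.storm.utils.Utils;

public class RebalanceTopology {
    public static void main(String[] args) throws Exception {
        // read storm.yaml so the client knows how to reach Nimbus
        Map conf = Utils.readStormConfig();
        Nimbus.Client client = NimbusClient.getConfiguredClient(conf).getClient();

        RebalanceOptions options = new RebalanceOptions();
        options.set_num_workers(4);        // desired worker count (placeholder)
        options.set_wait_secs(30);         // optional delay before the rebalance takes effect

        client.rebalance("topologyName", options);   // "topologyName" is a placeholder
    }
}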
I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
More info:
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is
Delayed::Worker.max_attempts = 1
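For reference, a minimal initializer along these lines (the file name is just a convention, not required):
# config/initializers/delayed_job_config.rb
Delayed::Worker.max_attempts = 1   # do not retry failed jobs
# Other Delayed::Worker options (e.g. max_run_time) are set the same way.
Restart your delayed_job workers after changing the initializer so they pick up the new value.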
Check your DBMS table "delayed_jobs" for records (jobs) that still exist after the job "fails". The job will be re-run if the record is still there. -- If it shows that "attempts" is non-zero, then you know that your constant setting isn't working right.
Another guess is that the job's "failure", for some reason, is not being caught by DelayedJob. -- In that case, "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJob jobs runner is not a full Rails program, so it could be that your initializer is not being run. Look into the script you're using to run the jobs runner.