ControlM job calling unix script fails repeatedly - shell

I have a ControlM job which calls a shell script that takes 4 command-line parameters. The command is below:
sh /appl/Script/Script1.sh ABC /appl/Landing SV_SID_NormalisedEvent_* Y
The 3rd parameter (SV_SID_NormalisedEvent_*) is a file wildcard/pattern that the script looks for in the path provided as the 2nd parameter (/appl/Landing).
This job had been running fine until it aborted on one specific corrupt file: SV_SID_NormalisedEvent_20150810_151805.csv.gz. We handled that failure manually by ignoring the file and forcing the job OK.
Since then, whenever this job is triggered during the daily runs, it always fires the command below and fails. Somehow the 3rd parameter is passed as specific file names rather than the wildcard:
sh /appl/Script/Script1.sh ABC /appl/Landing SV_SID_NormalisedEvent_20150810_151805.csv SV_SID_NormalisedEvent_20150810_151805.csv.gz Y
The correct command output, from when the job was running fine, is as below:
sh /appl/Script/Script1.sh ABC /appl/Landing 'SV_SID_NormalisedEvent*' Y
Any pointers on this issue? The command output above is taken from the sysout file created during each run.

We handled that failure manually by ignoring the file and forcing the job OK
It sounds like perhaps the file has not been cleaned up and is being found every time. Is that possible?
The job will run with the command line that is defined in the job definition, if that still contains the proper parameter SV_SID_NormalisedEvent_*.
I can't think of any other reason you are seeing this behaviour.
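
For what it's worth, here is a minimal illustration of the quoting difference, assuming a leftover file matching the pattern is sitting in the directory the command is launched from (the cd and touch lines only set up the example):
cd /appl/Landing
touch SV_SID_NormalisedEvent_20150810_151805.csv.gz
# Unquoted: the shell expands the pattern to the matching file name(s),
# so the script receives file names instead of the literal pattern.
sh /appl/Script/Script1.sh ABC /appl/Landing SV_SID_NormalisedEvent_* Y
# Quoted (as in the working sysout): the literal pattern is passed through untouched.
sh /appl/Script/Script1.sh ABC /appl/Landing 'SV_SID_NormalisedEvent_*' Y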

Related

about Unix Shell Script

Can someone help me with this:
How do I write a UNIX shell script that takes a parameter? The parameter passed should be the name of an executable file. Validate the parameter and, if it is not valid, output an appropriate error message and make sure the script exits with an overall status of error. If the parameter is valid, execute it. If the execution fails, output an appropriate error message and make sure the script exits with an overall status of error. If the parameter is valid and executes successfully, the script should exit with an overall status of success.
Okay, that's not a problem.
Split your task into 4 simple steps:
How to pass command line arguments to bash script.
How to check whether executable file exists.
How to display message in bash.
How to execute something from bash.
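Putting the four steps together, a minimal sketch could look like this (the usage and error messages are placeholders):
#!/bin/sh
# Step 1: take the parameter; steps 2-4: validate, report, execute.
if [ $# -ne 1 ]; then
    echo "Usage: $0 <executable>" >&2
    exit 1
fi
if [ ! -f "$1" ] || [ ! -x "$1" ]; then
    echo "Error: '$1' is not an executable file" >&2
    exit 1
fi
if ! "$1"; then
    echo "Error: '$1' did not execute successfully" >&2
    exit 1
fi
exit 0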
You're welcome!

Last run time of shell script?

I need to create some sort of fail-safe in one of my scripts to prevent it from being re-executed immediately after a failure. Typically, when a script fails, our support team reruns it using a 3rd-party tool, which is usually fine, but that should not happen for this particular script.
I was going to echo a time-stamp into the log and then add a condition to check whether the current time-stamp is at least 2 hours later than the one in the log; if it is not, the script will exit. I'm sure this idea will work. However, it got me curious whether there is a way to pull the last run time of the script from the system itself, or whether there is an alternative method of preventing the script from being rerun immediately.
It's a SunOS Unix system, using the ksh shell.
Just do it as you proposed: save the date to some file and check it at the start of the script. You can:
check the last line (as a date string itself),
or check the last modification time of the file (i.e. when the last date command modified it).
Another common method is to create a dedicated lock file or PID file, such as /var/run/script.pid. Its content is usually the PID (and hostname, if needed) of the process that created it. The file's modification time tells you when it was created, and from its content you can check the running PID. If that PID no longer exists (e.g. the previous process died) and the file's modification time is older than X minutes, you can start the script again.
This method is useful mainly because you can simply use cron plus some script_starter.sh that periodically checks whether the script is running and restarts it when needed.
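A minimal ksh sketch of the first idea (the stamp-file path is an assumption, and date +%s may need a substitute such as perl -e 'print time' on older Solaris):
#!/bin/ksh
# Refuse to run again if the last run was less than 2 hours ago.
STAMP=/var/tmp/myscript.lastrun   # assumed path
MIN_GAP=7200                      # 2 hours in seconds
now=$(date +%s)                   # assumes date supports %s
if [ -f "$STAMP" ]; then
    last=$(cat "$STAMP")
    if [ $(( now - last )) -lt "$MIN_GAP" ]; then
        echo "Last run was less than 2 hours ago, exiting." >&2
        exit 1
    fi
fi
echo "$now" > "$STAMP"
# ... the real work of the script goes here ...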
If you want to use system resources (and have root access), you can use accton + lastcomm.
I don't know SunOS, but it probably has those programs. accton starts system-wide accounting of all programs (it needs root), and lastcomm command_name | tail -n 1 shows when command_name was last executed.
Check man lastcomm for the command-line switches.

script to redirect the output to a file not working with cron

I am trying to run the following script from cron every hour:
temp=`date`
date=${temp// /_}
exec 1> /home/ec2-user/benchmarks/results/cpu/$date
sysbench --test=cpu --cpu-max-prime=100 run
The problem is output is not getting redirected to the file although the file is getting created.
Could anyone tell me what might be the problem?
The problem is most probably that sysbench is not on the PATH used by cron jobs.
Instead of:
sysbench --test=cpu --cpu-max-prime=100 run
Use the absolute path of sysbench, for example:
/usr/local/bin/sysbench --test=cpu --cpu-max-prime=100 run
You can find the correct absolute path using which sysbench.
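For example, the script could end up looking like this (cpu_bench.sh and /usr/local/bin are only placeholders; use the path that which sysbench prints on your machine):
#!/bin/bash
# Example crontab entry (runs at minute 0 of every hour):
#   0 * * * * /home/ec2-user/benchmarks/cpu_bench.sh
temp=$(date)
date=${temp// /_}     # this substitution needs bash, so keep the bash shebang
exec 1> "/home/ec2-user/benchmarks/results/cpu/$date"
/usr/local/bin/sysbench --test=cpu --cpu-max-prime=100 run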

Redirect bash output called from batch file to console

I have a batch file (build.bat) which calls a bash script (makelibs.sh). The bash script contains several commands which build 20 libraries from source.
If I run makelibs.sh from MSYS, I get continuous output. If I call it from the batch file, then I see the full output only at the end of every single command.
This makes it difficult to assess the current status of the process.
Is it possible to redirect the output of makelibs.sh in order to get a continuous feedback on the execution?
I have a batch file (build.bat) which calls a bash script (makelibs.sh)
I strongly advise against doing this. You are calling a script with a script, when you could simply open up Bash and run
makelibs.sh
However, if you insist on doing this, then perhaps start would work:
start bash.exe makelibs.sh

Birt 2.5.2 report generates empty table data when run from a cron job

I've got a shell script that runs genReport.sh in order to create a .pdf formatted report, and it works perfectly when it's run from the command line. The data source for the report is a ClearQuest database.
When it's run from a cron job, the .pdf file is created, except that only the various report and column headers are displayed and the report data is missing. There are no errors reported to STDERR during the execution of the script.
This screams "environment variable" to me.
Currently, the shell script is defining the following:
CQ_HOME
BIRT_HOME
ODBCINI
ODBCINST
LD_LIBRARY_PATH
If it's an environmental thing, what part of the environment am I missing?
Without seeing the scripts, it's only guesswork. It could be a quoting issue or something having to do with a relative path to a file or executable that should be absolute. Often, the problem is that the directories listed in $PATH are different in cron's environment than they are in the user's. One thing you can do to aid in the diagnosis is add this line to your script:
env > /tmp/someoutputfilename.$$
and run the script from the command line and from cron and compare.
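For example (the /tmp file names are arbitrary):
env | sort > /tmp/env.interactive   # run this in your login shell
env | sort > /tmp/env.cron.$$       # and this inside the cron-invoked script
diff /tmp/env.interactive /tmp/env.cron.*   # then compare the two dumps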
The magic for making this run turned out to be evaluating the output of the clearquest -dumpsh command, which in turn required that the TZ variable be set. That command outputs a dozen or so variables.