How to check whether script has failed - parallel-processing

I want to run all the .sh files in a directory and check whether they failed or not. If they failed, I need to re-run only the failed scripts.
Here is the command I use:
parallel -j0 exec ::: ./*.sh   # command to run the files in parallel
Can you tell how to check whether a script has failed and how to run only the failed scripts?

--retries should work:
parallel -j0 --retries 2 ::: ./*.sh
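To also see *which* jobs failed and re-run only those later, GNU parallel offers `--joblog` combined with `--retry-failed`. The same idea can be sketched in plain shell; the `ok.sh`/`bad.sh` demo scripts below are made up for illustration:

```shell
#!/bin/sh
# Sketch: record failures in a file, then re-run only those scripts.
# (With GNU parallel: `parallel --joblog run.log ::: ./*.sh` followed
# by `parallel --retry-failed --joblog run.log`.)
dir=$(mktemp -d)
printf '#!/bin/sh\nexit 0\n' > "$dir/ok.sh"    # demo script that succeeds
printf '#!/bin/sh\nexit 1\n' > "$dir/bad.sh"   # demo script that fails
chmod +x "$dir"/*.sh
cd "$dir"

: > failed.txt
for s in ./*.sh; do
  "$s" || echo "$s" >> failed.txt    # non-zero exit status = failure
done

echo "failed:"; cat failed.txt       # lists only ./bad.sh
# Second pass over just the failed scripts:
while read -r s; do
  "$s" || echo "$s still failing"
done < failed.txt
```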

set -e and bash file termination status

I want to launch automated tests from a bash file and it's important that the bash file would exit with non-zero status if a test fails.
The issue is that I need to run an afterscript when the tests are done.
How do I do this with logical statements?
set -e
python -m pytest
python ./afterscript.py
So, I need to run ./afterscript.py even if pytest fails, yet in case of test failure, I need the script to exit with an error status after the afterscript has run.
There you go:
set -e
python -m pytest || true
python ./afterscript.py
That way, no matter what the exit status of python -m pytest is, the right-hand side true succeeds, so set -e will not terminate your script. Note, however, that this discards pytest's exit status: the script itself will now exit 0 even when tests fail, so if the caller must see the failure you still need to capture the status and re-raise it at the end.
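A variant that still runs the afterscript but preserves the failing status can be sketched as follows; here `false` stands in for `python -m pytest` and the echo for `python ./afterscript.py`:

```shell
#!/bin/sh
# Write the sketch to a child script so we can observe its exit status.
cat > /tmp/demo_after.sh <<'EOF'
set -e
status=0
false || status=$?        # capture the failure instead of aborting
echo "afterscript ran"    # always runs, even after a test failure
exit "$status"            # propagate the original failure to the caller
EOF
sh /tmp/demo_after.sh || echo "demo exited with $?"
```

Running it prints "afterscript ran" followed by "demo exited with 1": the afterscript step always executes, yet the child script still exits non-zero.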

The shell script called inside 'Execute Shell' build step fails, but Jenkins build is marked as passed

I am calling a shell script with some parameters within 'Execute Shell' build step in Jenkins.
./shell_1.sh "$Param1" "$Param2" "$Param3"
This shell script downloads puppet modules, sets up the directories and then finally calls a puppet script.
puppet apply ${WORKSPACE}/scripts/puppet/sample.pp
The problem here is that the Jenkins console shows the error from the puppet script and then finishes with a SUCCESS status.
These are the last 3 lines from the Jenkins console:
06:46:44 Error: /Stage[main]/Main/Deployit_environment[Environments/Build/Test/UAT/UAT-env]: Could not evaluate: cannot send request to deployit server 404/Not Found:Repository entity [Infrastructure/Build/Test/UAT/server_123/test-sample-service] not found
06:46:44 Notice: Applied catalog in 1.82 seconds
06:46:45 Finished: SUCCESS
I want the Jenkins job to fail if there is any error in puppet script.
I tried calling the shell script with the -xe option, but that didn't work.
Thanks in advance.
The Jenkins build step checks whether the exit code of the script is 0; any other value fails the build.
You can use the special shell variable $? to get the exit status of the previously executed command. To print it, use echo:
echo $?
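One common reason the build passes anyway is that puppet apply itself exits 0 on resource failures unless run with --detailed-exitcodes, which makes it exit 2 for applied changes, 4 for failures, and 6 for both. A sketch of mapping that status to a build outcome (the build_status helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: map a --detailed-exitcodes value to a build outcome.
# 0 = no changes, 2 = changes applied, 4 = failures, 6 = changes + failures.
build_status() {
  case "$1" in
    0|2) echo pass ;;
    *)   echo fail ;;
  esac
}

# Usage in the "Execute Shell" step would look something like:
#   puppet apply --detailed-exitcodes "${WORKSPACE}/scripts/puppet/sample.pp"
#   [ "$(build_status $?)" = pass ] || exit 1
build_status 2   # prints "pass"
build_status 4   # prints "fail"
```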

Error when using mpirun with a shell script

When I run
mpirun -np 4 mpi_script.sh
I get the error
Open MPI tried to fork a new process via the "execve" system call but failed.
...
Error: Exec format error
despite the fact that I can run the script with ./mpi_script.sh
In my case the problem was that I didn't have a shebang.
Adding #!/usr/bin/env bash to the top of my script fixed it:
#!/usr/bin/env bash
# rest of script
# ...
N.B.: make sure the file has execute permissions:
chmod +x mpi_script.sh
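A quick sanity check of both conditions before handing the script to mpirun might look like this (the demo script here is created on the spot, so the file name is illustrative):

```shell
#!/bin/sh
# Create a demo script, then verify shebang and execute permission.
dir=$(mktemp -d); cd "$dir"
printf '#!/usr/bin/env bash\necho "hello"\n' > mpi_script.sh
chmod +x mpi_script.sh

head -n1 mpi_script.sh                 # must print the shebang line
[ -x mpi_script.sh ] && echo "executable"
./mpi_script.sh                        # runs without "Exec format error"
```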

Jenkins fails with Execute shell script

I have my bash script in ${JENKINS_HOME}/scripts/convertSubt.sh
My job has build step Execute shell:
However after I run job it fails:
The error message (i.e. the 0: part) suggests that there is an error while executing the script.
You could run the script with
sh -x convertSubt.sh
For the safe side, you could also do a
ls -l convertSubt.sh
file convertSubt.sh
before you run it.
Make sure that the script exists with ls.
There is no need for sh; just run ./convertSubt.sh (make sure you have execute permissions).

How can I use the Platform LSF blaunch command to start processes simultaneously?

I'm having a hard time figuring out why I can't launch commands in parallel using the LSF blaunch command:
for num in `seq 3`; do
blaunch -u JobHost ./cmd_${num}.sh &
done
Error message:
Oct 29 13:08:55 2011 18887 3 7.04 lsb_launch(): Failed while executing tasks.
Oct 29 13:08:55 2011 18885 3 7.04 lsb_launch(): Failed while executing tasks.
Oct 29 13:08:55 2011 18884 3 7.04 lsb_launch(): Failed while executing tasks.
Removing the ampersand (&) allows the commands to execute sequentially, but I am after parallel execution.
When executed within the context of bsub, a single invocation of blaunch -u <hostfile> <cmd> will take <cmd> and run it on all the hosts specified in <hostfile> in parallel as long as those hosts are within the job's allocation.
What you're trying to do is use 3 separate invocations of blaunch to run 3 separate commands. I can't find it in the documentation, but just some testing on a recent version of LSF shows that each individually executed task in such a job has a unique task ID stored for it in an environment variable called LSF_PM_TASKID. You can verify this in your version of LSF by running something like:
bsub -I -n <num_tasks> blaunch env | grep TASKID
Now, what does this have to do with your question? You want to run ./cmd_$i.sh for i=1,2,3 in parallel through blaunch. To do this you can write a single script which I'll call cmd.sh as follows:
#!/bin/sh
./cmd_${LSF_PM_TASKID}.sh
Now you can replace your for loop with a single invocation of blaunch like so:
blaunch -u JobHost cmd.sh
This will run one instance of cmd.sh on each host listed in the file 'JobHost' in parallel; each of these instances will run the shell script cmd_X.sh, where X is the value of $LSF_PM_TASKID for that particular task.
If there are exactly 3 hostnames in 'JobHost', you will get 3 instances of cmd.sh, which in turn lead to one instance each of cmd_1.sh, cmd_2.sh, and cmd_3.sh.
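The dispatch pattern itself can be simulated without LSF. In this sketch LSF_PM_TASKID is set by hand for each run; under blaunch, LSF would provide it per task, and the cmd_N.sh scripts are created here just for the demo:

```shell
#!/bin/sh
# Simulate the per-task dispatch: cmd.sh picks which script to run
# from LSF_PM_TASKID (set manually here; LSF sets it for real tasks).
dir=$(mktemp -d); cd "$dir"
for i in 1 2 3; do
  printf '#!/bin/sh\necho "cmd_%s ran"\n' "$i" > "cmd_$i.sh"
  chmod +x "cmd_$i.sh"
done
cat > cmd.sh <<'EOF'
#!/bin/sh
./cmd_${LSF_PM_TASKID}.sh
EOF
chmod +x cmd.sh
for t in 1 2 3; do
  LSF_PM_TASKID=$t ./cmd.sh   # prints "cmd_1 ran", "cmd_2 ran", "cmd_3 ran"
done
```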
Have you tried nohup? This might work:
for num in `seq 3`; do
nohup blaunch -u JobHost ./cmd_${num}.sh &>/dev/null &
done
blaunch is not to be used outside of the job execution environment provided by bsub. I don't know how to handle running different commands for each process, but try something like:
bsub -n 3 blaunch ./cmd.sh