I am trying to run parallel commands on Azure Pipelines using a bash script.
The script works correctly when all the commands succeed.
When one of them fails, the other processes are properly killed, but the step is still marked as successful. What am I missing?
- job: main
  pool:
    vmImage: 'ubuntu-latest'
  steps:
    # some steps before
    - bash: |
        PIDS=()
        my_command_1 &
        PIDS+=($!)
        my_command_2 &
        PIDS+=($!)
        my_command_3 &
        PIDS+=($!)
        for pid in "${PIDS[@]}"; do
          wait $pid
        done
    # some steps after
The same script works in CircleCI and in GitHub Actions.
Explicitly checking exit status is likely a safer bet. Although "set -eu" may work for the immediate use case, it can have unexpected behavior:
https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
Explicit check:
for pid in "${PIDS[@]}"; do
  wait $pid
  if [ $? -ne 0 ]; then
    exit 1
  fi
done
Found the issue.
Adding set -eu at the beginning of the bash script block ensures the step returns the error code when any of the commands fail.
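For reference, a minimal sketch of the fixed step, reusing the placeholder my_command_* names from the question:

- bash: |
    set -eu
    PIDS=()
    my_command_1 &
    PIDS+=($!)
    my_command_2 &
    PIDS+=($!)
    my_command_3 &
    PIDS+=($!)
    for pid in "${PIDS[@]}"; do
      wait $pid
    done

With set -e active, the first wait that returns a non-zero status aborts the script, so the step fails as expected.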
Related
I have one bash script that runs another bash script, but the first script isn't waiting for the second one to complete before proceeding. How can I force it to wait?
For example:
#!/bin/bash
# first.sh
#call to secondary script
sh second.sh
echo "second.sh has completed"
echo "continuing with the rest of first.sh..."
The way it is now, it will run second.sh, and continue on, without waiting for second.sh to complete.
In a few of my scripts I use a scheme like this: just call the second script in the same shell instance using source.
In script-1:
source script2.sh
or:
. script2.sh
This way, no command in script-1 will proceed until script2.sh has finished all of its tasks.
A small example.
First script:
$ cat script-1.sh
#!/bin/bash
echo "I'm sccript $0."
echo "Runnig script-2..."
source script-2.sh
echo "script-2.sh finished!"
Second script:
$ cat script-2.sh
#!/bin/bash
echo "I'm script-2. Running wait operation..."
sleep 2
echo "I'm ended my task."
How it works:
$ ./script-1.sh
I'm script ./script-1.sh.
Running script-2...
I'm script-2. Running wait operation...
I've finished my task.
script-2.sh finished!
Normally it does wait; something else must be happening. Are you sure the other script isn't running something in the background instead? You can try using wait regardless.
You can simply add the command wait after you execute the second script; it will wait for all the processes you launched from your main script.
You can even capture the PID of your second script using $! directly after you call the second script, and then pass this PID as an argument to the wait command.
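A minimal sketch of that idea, reusing second.sh from the question:

#!/bin/bash
# first.sh
sh second.sh &       # launch second.sh in the background
SECOND_PID=$!        # $! expands to the PID of the last background job
# ... do other work here ...
wait "$SECOND_PID"   # block until second.sh has exited
echo "second.sh has completed"
echo "continuing with the rest of first.sh..."

Note that a bare wait with no arguments would also work here, since it waits for every background job started by the script.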
Try using bash second.sh, and check second.sh to make sure it doesn't start programs that run in the background.
Another way to do it is $(second.sh).
I'm trying to set up a GitLab pipeline so that certain exit_codes are okay for a script I'm executing.
I have tried both shell and a ruby script, but both seem to have the same behaviour.
test_job:
  stage: build
  image: ruby:3.0
  script:
    - chmod +x ci/wrap.sh
    - ./ci/wrap.sh
  allow_failure:
    exit_codes:
      - 64
As you can see, I am just executing the script and nothing more. My expectation would be that the exit status of the last script executed is used as the exit status for the job.
In the script I'm only calling exit 64, which should be an "allowed failure" in that case. The pipeline log, however, says that the job failed because of exit code 1:
How do I get GitLab to accept the exit code of this (or a ruby) script as the job exit code?
I found a way to fix this problem. Apparently GitLab Runner uses the -e flag, which means that any non-zero exit code cancels the job. This can be changed with set +e, but then you still need to capture the actual exit code for the job.
Using $? in two different lines of the configuration does not work, because GitLab runs echo calls in between them.
So the exit code needs to be captured directly, example:
script:
  - set +e
  - ruby "ci/example.rb" || EXIT_CODE=$?
  - exit $EXIT_CODE
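One caveat: if the ruby command succeeds, EXIT_CODE is never assigned, so exit $EXIT_CODE expands to a bare exit. That happens to exit 0, but initializing the variable makes the intent explicit:

script:
  - set +e
  - EXIT_CODE=0
  - ruby "ci/example.rb" || EXIT_CODE=$?
  - exit $EXIT_CODE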
Here's my trick for turning off early failure and checking the result code later:
script:
  - set +e +o pipefail
  - python missing_module.py 2>&1 | grep found; result=$?
  - set -e -o pipefail
  - "[ $result == 0 ]"
This turns off early exit and runs a command that we consider to have an acceptable failure if "found" is in the error text. It then turns early exit back on and tests whether the exit code we saved was good or not.
Similar to these questions (1) (2), I want to run a command in a background process, carry on processing, then later use the return value from that command.
I have one function in my script that takes particularly long, so I would like to run it first, before the rest of the setup, so that there is less of a delay when its return value is needed; but currently the return value doesn't get captured.
What I've tried:
if [ $LAZY_LOAD -eq 0 ]; then
  echo "INFO - Getting least loaded server in background. Can take up to 30s."
  local leastLoaded=$( getLeastLoaded ) &
fi

# Other setup stuff that doesn't use leastLoaded...
# setup setup setup....

if [ $LAZY_LOAD -eq 0 ]; then
  echo "INFO - Waiting for least loaded server to be retrieved before continuing"
  wait
fi

echo "INFO - Doing stuff with $leastLoaded."
doThingWithLeastLoaded $leastLoaded
getLeastLoaded definitely works without the &, so I'm sure this is a concurrency issue.
Thanks!
According to the bash manual:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell.
So your local command would not affect the current shell.
I'd suggest like this:
do-something > /some/file &
... ...
wait
var=$( cat /some/file )
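Adapted to the question's script, a sketch (getLeastLoaded and doThingWithLeastLoaded are the asker's functions; mktemp provides the scratch file):

#!/bin/bash
tmpfile=$(mktemp)   # scratch file for the background job's output

if [ "$LAZY_LOAD" -eq 0 ]; then
  echo "INFO - Getting least loaded server in background. Can take up to 30s."
  getLeastLoaded > "$tmpfile" &
fi

# ... other setup that doesn't use leastLoaded ...

if [ "$LAZY_LOAD" -eq 0 ]; then
  echo "INFO - Waiting for least loaded server to be retrieved before continuing"
  wait                          # block until the background job finishes
  leastLoaded=$(cat "$tmpfile")
fi
rm -f "$tmpfile"

echo "INFO - Doing stuff with $leastLoaded."
doThingWithLeastLoaded "$leastLoaded"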
I am writing a script that executes around 10 back-end processes in sequence, each one depending on whether the previous process completed without errors.
Now assume the scenario in which, say, the 5th process fails and the script exits. I want to code it so that the next time the user runs it (after removing the error that made the script exit last time), it resumes from the 5th process onwards rather than starting again from the 1st.
To be more specific, assume following is the script:
Script starts:

Process1
if [ $? -eq 0 ]; then
  Process2
  if [ $? -eq 0 ]; then
    Process3
    if [ $? -eq 0 ]; then
      ..
      ..
      ..
      ..
      if [ $? -eq 0 ]; then
        Process10
      fi
    fi
  fi
else
  exit
fi
So the script will exit any time one of the processes fails to complete with status 0. If process5 fails and the user corrects the problem and restarts the script, the script should start with process5 again and not process1; or at least the user should be given the option to resume the script or start it from the beginning, i.e. process1.
What are all the possible ways we can code this kind of script? Also, please bear in mind that I am not allowed to use a temporary db where I could store the status of each process.
I need to code in sh (shell script) in unix.
A simple solution would be to write stamp files:
#!/bin/sh
set -e # Automatically abort if any simple command fails
if ! test -f cmd1-stamp; then cmd1; fi
touch cmd1-stamp
if ! test -f cmd2-stamp; then cmd2; fi
touch cmd2-stamp
When the script executes, if cmd1-stamp exists, cmd1 is not executed; otherwise, cmd1 is executed, and the script aborts if it fails.

Note that it is very tempting to write test -f cmd1-stamp || cmd1, and this seems to work (in bash), but the shell specs state that the shell shall abort if the simple command that fails is not part of an AND or OR list, and I suspect this is (yet another) instance of bash not conforming to the spec. (Although the spec doesn't seem to say that the shell shall not abort if the failing command is part of an AND or OR list.)
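To avoid writing the stamp-check pair ten times, a hedged sketch that loops over the process names (process1 through process10 stand in for the real commands):

#!/bin/sh
set -e
for step in process1 process2 process3 process4 process5 \
            process6 process7 process8 process9 process10; do
  # Skip any step that already completed on a previous run.
  if ! test -f "$step-stamp"; then "$step"; fi
  touch "$step-stamp"
done

Deleting the stamp files resets the script to run from the beginning.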
I use the SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script on some conditions, but when I use exit, not only does the script stop, the whole client disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
  echo "Missing argument- must to get a friend list";
  exit
fi
for user in $*; do
  if [[ !(-f `echo ${user}.user`) ]]; then
    echo "The user name ${user} doesn't exist.";
    exit
  fi
done
Why is this happening?
You used source to run the script, which runs it in the current shell. That means exit terminates the current shell, and with it the ssh session.
Replace source with bash and it should work; or, better, put
#!/bin/bash
at the top of the file and make it executable.
exit exits the current shell. If you've started a script by running it directly, this will exit the shell that the script is running in.
return returns from a function or sourced file (TY Dennis Williamson). Same thing, but it doesn't terminate your current shell.
break exits a loop. Similar to return, but it can be used anywhere within a loop to stop processing more items. This is probably what you want.
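If the script has to remain safe to source, here is a sketch of the argument check using the return-then-exit idiom (the message is the one from the question):

#!/bin/bash
if [[ $# -eq 0 ]]; then
  echo "Missing argument - must get a friend list"
  # return works when the file is sourced; if the script was executed
  # directly, return fails (silently, stderr discarded) and exit runs.
  return 1 2>/dev/null || exit 1
fi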
If you are running it in the current shell (i.e. sourcing it with a . before the script name), exit will obviously exit the shell and disconnect you. Try running it in a new shell instead (./script.sh), or else use return instead of exit.