Customized progress message for tasks in bash script

I'm currently writing a bash script to do tasks automatically. In my script I want it to display a progress message while it is performing a task.
For example:
user@ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All of the progress messages should appear on the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the later lines of the script?
If so, how can I echo a message while the configure command is running?
How do I know when the command finishes and whether it succeeded? Should I use $??

Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1;
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above, use $? to determine the exit status of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
    echo "ERROR Removing directory: ${TMP_DIR}"
    exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
to delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
    echo -en "\b\b${i}"
    sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
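For the asker's specific "Configure something ... done" output, here is a minimal sketch that combines this same-line printing with a background job; ./configure > /dev/null 2>&1 stands in for the real task, and kill -0 is only used to test whether the job is still running:
echo -n "Configure something "
./configure > /dev/null 2>&1 &         # run the real task in the background
pid=$!
while kill -0 "$pid" 2>/dev/null; do   # still running?
    echo -n "."
    sleep 1
done
if wait "$pid"; then
    echo " done"
else
    echo " failed"
fi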

You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 &             # Start some command
pid=$!                # Save the pid of the command
while sleep 1; do     # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" &        # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
           while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
           done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1

Related

Run a shell script with While condition in an infinite loop based on conditions

I need to create a shell script that places indicator/flag files in a directory, say /dir1/dir2/flag_file_directory, based on the request flags received from a shell script in a directory /dir1/dir2/req_flag_file_directory and the source files present in a directory, say /dir1/dir2/source_file_directory. For this I need to run a script with a while condition in an infinite loop, as I do not know when the source files will be made available.
So my implementation plan is roughly this: let's say JOB1, which is scheduled to run at some time in the morning, will first place (touch) a request flag (e.g. touch /dir1/dir2/req_flag_file_directory/req1.req), saying that this job is running. The script should then look for source files matching the pattern file_pattern_YYYYMMDD.CSV (the file patterns are different for different jobs) in the source file directory and, if they are present, count them. If the count of the files is correct, it should first delete the request flag for this job and then touch an indicator/flag file in /dir1/dir2/flag_file_directory. This indicator/flag file will then be used as a signal that the source files are all present and the job can continue to load these files into our system.
I will have all the details related to the jobs and their flag files in a file whose structure is shown below. Based on the request flag, the script should know what other criteria it should check before placing the indicator file:
request_flags|source_name|job_name|file_pattern|file_count|indicator_flag_file
req1.req|Sourcename1|jobname1|file_pattern_1|3|ind1.ind
req2.req|Sourcename2|jobname2|file_pattern_2|6|ind2.ind
req3.req|Sourcename3|jobname3|file_pattern_3|1|ind3.ind
reqN.req|SourcenameN|jobnameN|file_pattern_N|2|indN.ind
Please let me know how this can be achieved, and share any other suggestions or solutions you may have.
Rather than having the service daemon script poll in an infinite loop (i.e. wake up periodically to check whether it needs to do work), you could use file locking and a named pipe to create an event queue.
Outline of the service daemon, daemon.sh. This script will loop infinitely, blocking by reading from the named pipe at read line until a message arrives (i.e., some other process writes to $RequestPipe).
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
while true ; do
    if read line < "$RequestPipe" ; then
        # ... commands to be executed after message received ...
        echo "$line" # for example
    fi
done
An outline of requestor.sh, the script that wakes up the service daemon when everything is ready. This script does all the preparation necessary, e.g. creating files in req_flag_file_directory and source_file_directory, then wakes the service daemon script by writing to the named pipe. It could even send a message that contains more information for the service daemon, say "Job 1 ready".
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
# ... create all the necessary files ...
(
    flock --exclusive 200
    # Unblock the service daemon/listener by sending a line of text.
    echo Wake up sleepyhead. > "$RequestPipe"
) 200>"$LockFile" # subshell exit releases lock automatically
daemon.sh fleshed out with some error handling:
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
#
if [[ ! -w "$RequestPipe" ]] ; then
    # Handle 1st time. Or fix a problem.
    mkfifo --mode=775 "$RequestPipe"
    chgrp "$SharedGroup" "$RequestPipe"
    if [[ ! -w "$RequestPipe" ]] ; then
        echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
        exit 1
    fi
fi
while true ; do
    if read line < "$RequestPipe" ; then
        # ... commands to be executed after message received ...
        echo "$line" # for example
    fi
done
requestor.sh fleshed out with some error handling:
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
# ... create all the necessary files ...
#
if [[ ! -w "$LockFile" ]] ; then
    # Handle 1st time. Or fix a problem.
    touch "$LockFile"
    chgrp "$SharedGroup" "$LockFile"
    chmod 775 "$LockFile"
    if [[ ! -w "$LockFile" ]] ; then
        echo "ERROR: write lock, can't write to $LockFile" 1>&2
        exit 1
    fi
fi
if [[ ! -w "$RequestPipe" ]] ; then
    # Handle 1st time. Or fix a problem.
    mkfifo --mode=775 "$RequestPipe"
    chgrp "$SharedGroup" "$RequestPipe"
    if [[ ! -w "$RequestPipe" ]] ; then
        echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
        exit 1
    fi
fi
(
    flock --exclusive 200 || {
        echo "ERROR: write lock, $LockFile flock failed." 1>&2
        exit 1
    }
    # Unblock the service daemon/listener by sending a line of text.
    echo Wake up sleepyhead. > "$RequestPipe"
) 200> "$LockFile" # subshell exit releases lock automatically
Still having some doubts about the contents of the requests file, but I think I've come up with a rather simple solution:
#!/bin/bash
DETAILS_FILE="details.txt"
DETAILS_LINES=$((`wc -l $DETAILS_FILE|awk '{print $1}'`-1)) # to remove banner line (first line)
DETAILS=`tail -$DETAILS_LINES $DETAILS_FILE|tr '\n\r' ' '`
PIDS=()
IFS=' '
waitall () { # PIDS...
    ## Wait for children to exit and indicate whether all exited with 0 status.
    local errors=0
    while :; do
        debug "Processes remaining: $*"
        for pid in "$@"; do
            echo "PID: $pid"
            shift
            if kill -0 "$pid" 2>/dev/null; then
                debug "$pid is still alive."
                set -- "$@" "$pid"
            elif wait "$pid"; then
                debug "$pid exited with zero exit status."
            else
                debug "$pid exited with non-zero exit status."
                ((++errors))
            fi
        done
        (("$#" > 0)) || break
        # TODO: how to interrupt this sleep when a child terminates?
        sleep ${WAITALL_DELAY:-1}
    done
    ((errors == 0))
}
debug () { echo "DEBUG: $*" >&2; }
#function to check for # of sourcefiles matching pattern in dir
#params: req3.req Sourcename3 jobname3 file_pattern_3 1 ind3.ind
check () {
    NOFILES=`find $2 -type f | egrep -c $4`
    if [ $NOFILES -eq "$5" ]; then
        echo "Touching file $6. done."
        touch $6
    else
        echo "$NOFILES matching $4 pattern. exiting"
    fi
}
echo "parsing $DETAILS_FILE file..."
read -a lines <<< "$DETAILS"
for line in "${lines[@]}"
do
    IFS='|'
    read -a ARRAY <<< "$line"
    echo "Line processed. Dispatching job ${ARRAY[2]}..."
    check "${ARRAY[@]}" &
    IFS=' '
    PIDS="$PIDS $!"
    #echo $PIDS
done
waitall ${PIDS}
wait
Although it is not exactly an infinite loop, this script is intended to run from a crontab.
First it reads the details.txt file, as per your example.
After parsing all the details, the script dispatches the check function, whose sole purpose is to count the number of files matching the file_pattern in each source_name folder; if the number of files equals file_count, it touches the indicator_flag_file.
Hope that helps!

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed at the very beginning in any case), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like that and believe there is a better solution.
The problem with the pipe syntax is that I don't know how I can get do-things' exit status from inside the pipeline. $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in this case I also don't know how I can get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax possible to achieve the same task without escaping the command to be executed as in my example.
I greatly hope for the help of the community.
UPD1: As the question seems not to be clear enough, let me paraphrase it:
I want a bash function that can be fed a command; the command will execute in parallel with the function, and the function will receive its stdout+stderr, wait for completion, and get its exit status.
Example implementation using eval:
progress_meter() {
    local output;
    local errcode;
    local cmd="$2";   # the command to run is passed as the second argument
    echo -n -e "$1";
    output=$( {
        eval "${cmd}";
    } 2>&1; );
    errcode=$?;
    if (( errcode )); then {
        echo '[FAIL]';
        echo "Output was: ${output}"
    } else {
        echo '[ OK ]';
    }; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$#" # this just executes a command made up of
# arguments 2, 3, ... of the script
# the real script should actually read its input,
# display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
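A minimal sketch of how progress-meter could then recover that status from the stream (my own illustration, not the asker's real progress-meter): the subshell appends the exit status as the last line, so the function buffers everything it reads and treats the final line as the status.
progress_meter() {
    printf '%s ' "$1"
    local line last output=""
    while IFS= read -r line; do
        [ -n "$last" ] && output+="$last"$'\n'
        last=$line                     # the final line is the exit status
    done
    if [ "$last" -eq 0 ] 2>/dev/null; then
        echo '[ OK ]'
    else
        echo '[FAIL]'
        printf '%s' "$output"
    fi
}
(do-things arg1 arg2 2>&1; echo $?) | progress_meter "Doing stuff"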
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
    If set, the return value of a pipeline is the value of the last
    (rightmost) command to exit with a non-zero status, or zero if all
    commands in the pipeline exit successfully. This option is disabled
    by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: some guy writes on the blackboard, another guy reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command line switches if you like so as not to hardcode the name of the blackboard file, for reporting the exit status of the program whose progress it is reporting.
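A minimal sketch of that blackboard idea, with a hypothetical status file and the question's do-things/progress-meter as placeholders:
status_file=/tmp/do-things.status           # the "blackboard"
{ do-things arg1 arg2 2>&1; echo $? > "$status_file"; } | progress-meter "Doing things..."
# progress-meter (or the caller, once the pipeline has finished) reads the blackboard:
if [ "$(cat "$status_file" 2>/dev/null)" = "0" ]; then
    echo '[ OK ]'
else
    echo '[FAIL]'
fi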

How to retry a command in Bash?

I have a command that should take less than 1 minute to execute, but for some reason has an extremely long built-in timeout mechanism. I want some bash that does the following:
success = False
try(my_command)
while(!(success))
wait 1 min
if my command not finished
retry(my_command)
else
success = True
end while
How can I do this in Bash?
Look at the GNU timeout command. This kills the process if it has not completed in a given time; you'd simply wrap a loop around this to wait for the timeout to complete successfully, with delays between retries as appropriate, etc.
while timeout -k 70 60 -- my_command; [ $? = 124 ]
do
    sleep 2 # Pause before retry
done
If you must do it in pure bash (which is not really feasible - bash uses lots of other commands), then you are in for a world of pain and frustration with signal handlers and all sorts of issues.
Please expand on your answer a little. -k 70 is --kill-after=70 seconds, and 124 is the exit status on timeout; what is the 60?
The linked documentation does explain the command; I don't really plan to repeat it all here. The synopsis is timeout [options] duration command [arg]...; one of the options is -k duration. The -k duration says "if the command does not die after the SIGTERM signal is sent at 60 seconds, send a SIGKILL signal at 70 seconds" (and the command should die then). There are a number of documented exit statuses; 124 indicates that the command timed out; 137 that it died after being sent the SIGKILL signal, and so on. You can't tell whether the command itself exited with one of the documented statuses.
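A small sketch of acting on those documented statuses, with my_command again as the placeholder:
timeout -k 70 60 my_command
status=$?
case $status in
    124) echo "my_command timed out (SIGTERM sent after 60 seconds)" ;;
    137) echo "my_command was killed (SIGKILL sent after 70 seconds)" ;;
    0)   echo "my_command completed successfully" ;;
    *)   echo "my_command exited on its own with status $status" ;;
esac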
I found a script from:
http://fahdshariff.blogspot.com/2014/02/retrying-commands-in-shell-scripts.html
#!/bin/bash
# Retries a command on failure.
# $1 - the max number of attempts
# $2... - the command to run
retry() {
    local -r -i max_attempts="$1"; shift
    local -i attempt_num=1
    until "$@"
    do
        if ((attempt_num==max_attempts))
        then
            echo "Attempt $attempt_num failed and there are no more attempts left!"
            return 1
        else
            echo "Attempt $attempt_num failed! Trying again in $attempt_num seconds..."
            sleep $((attempt_num++))
        fi
    done
}
# example usage:
retry 5 ls -ltr foo
I liked @Jonathan's answer, but tried to make it more straightforward for future use:
until timeout 1 sleep 2
do
    echo "Happening after 1s of sleep"
done
Adapting @Shin's answer to use kill -0 rather than jobs, so that this should work even with a classic Bourne shell and allow for other background jobs. You may have to experiment with kill and wait depending on how my_command responds to those.
while true ; do
    my_command &
    sleep 60
    if kill -0 $! 2>/dev/null; then
        # Job took too long
        kill $!
    else
        echo "Job is done"
        # Reap exit status
        wait $!
        break
    fi
done
You can run a command and retain control with the & background operator. Run your command in the background, sleep for as long as you wish in the foreground, and then, if the background job hasn't terminated, kill it and start over.
while true ; do
    my_command &
    sleep 60
    if [[ $(jobs -r) == "" ]] ; then
        echo "Job is done"
        break
    fi
    # Job took too long
    kill -9 $!
done
# Retries a given command a given number of times and stores the result in a given variable
# $1 : Command to be passed : handles both simple and piped commands
# $2 : Final output of the command (if successful)
# $3 : Number of retry attempts [Default 5]
function retry_function() {
    echo "Command to be executed : $1"
    echo "Final output variable : $2"
    echo "Total trials [Default:5] : $3"
    counter=${3:-5}
    local _my_output_=$2 # make sure the passed variable is not the same as this
    i=1
    while [ $i -le $counter ]; do
        local my_result=$(eval "$1")
        # This tests whether the output variable is populated and retries accordingly.
        # Not possible to provide error status/logs (STDOUT, STDERR) owing to subshell execution of the command.
        # If error logs are needed, execute the same code outside the function, in the same shell.
        if test -z "$my_result"
        then
            echo "Trial[$i/$counter]: Execution failed"
        else
            echo "Trial[$i/$counter]: Successful execution"
            eval $_my_output_="'$my_result'"
            break
        fi
        let i+=1
    done
}
retry_function "ping -c 4 google.com | grep \"min/avg/max\" | awk -F\"/\" '{print \$5}'" avg_rtt_time
echo $avg_rtt_time
- To pass in a lengthy command, pass a method echoing the content. Take care of method expansion accordingly, in a subshell at the appropriate place.
- Wait time can be added too - just before the increment!
- For a complex command, you'll have to take care of stringifying it (good luck).

Run bash commands in parallel, track results and count

I was wondering how, if possible, I can create simple job management in bash to process several commands in parallel. That is, I have a big list of commands to run, and I'd like to have two of them running at any given time.
I know quite a bit about bash, so here are the requirements that make it tricky:
The commands have variable running time so I can't just spawn 2, wait, and then continue with the next two. As soon as one command is done a next command must be run.
The controlling process needs to know the exit code of each command so that it can keep a total of how many failed
I'm thinking somehow I can use trap but I don't see an easy way to get the exit value of a child inside the handler.
So, any ideas on how this can be done?
Well, here is some proof of concept code that should probably work, but it breaks bash: invalid command lines generated, hanging, and sometimes a core dump.
# need monitor mode for trap CHLD to work
set -m
# store the PIDs of the children being watched
declare -a child_pids
function child_done
{
    echo "Child $1 result = $2"
}

function check_pid
{
    # check if running
    kill -s 0 $1
    if [ $? == 0 ]; then
        child_pids=("${child_pids[@]}" "$1")
    else
        wait $1
        ret=$?
        child_done $1 $ret
    fi
}

# check by copying pids, clearing list and then checking each, check_pid
# will add back to the list if it is still running
function check_done
{
    to_check=("${child_pids[@]}")
    child_pids=()
    for ((i=0;$i<${#to_check};i++)); do
        check_pid ${to_check[$i]}
    done
}

function run_command
{
    "$@" &
    pid=$!
    # check this pid now (this will add to the child_pids list if still running)
    check_pid $pid
}
# run check on all pids anytime some child exits
trap 'check_done' CHLD
# test
for ((tl=0;tl<10;tl++)); do
    run_command bash -c "echo FAIL; sleep 1; exit 1;"
    run_command bash -c "echo OKAY;"
done
# wait for all children to be done
wait
Note that this isn't what I ultimately want, but would be groundwork to getting what I want.
Followup: I've implemented a system to do this in Python. So anybody using Python for scripting can have the above functionality. Refer to shelljob
GNU Parallel is awesomesauce:
$ parallel -j2 < commands.txt
$ echo $?
It will set the exit status to the number of commands that failed. If you have more than 253 commands, check out --joblog. If you don't know all the commands up front, check out --bg.
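For example, a sketch of the --joblog route (assuming the default joblog layout, where Exitval is the seventh column):
parallel -j2 --joblog /tmp/parallel.log < commands.txt
failed=$(awk 'NR > 1 && $7 != 0' /tmp/parallel.log | wc -l)   # count non-zero exit values
echo "$failed command(s) failed"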
Can I persuade you to use make? It has the advantage that you can tell it how many commands to run in parallel (modify the -j number).
echo -e ".PHONY: c1 c2 c3 c4\nall: c1 c2 c3 c4\nc1:\n\tsleep 2; echo c1\nc2:\n\tsleep 2; echo c2\nc3:\n\tsleep 2; echo c3\nc4:\n\tsleep 2; echo c4" | make -f - -j2
Stick it in a Makefile and it will be much more readable
.PHONY: c1 c2 c3 c4
all: c1 c2 c3 c4
c1:
	sleep 2; echo c1
c2:
	sleep 2; echo c2
c3:
	sleep 2; echo c3
c4:
	sleep 2; echo c4
Beware, those are not spaces at the beginning of the lines, they're a TAB, so a cut and paste won't work here.
Put an "#" infront of each command if you don't the command echoed. e.g.:
#sleep 2; echo c1
This would stop on the first command that failed. If you need a count of the failures you'd need to engineer that in the makefile somehow. Perhaps something like
command || echo F >> failed
Then check the length of failed.
The problem you have is that you cannot wait for one of multiple background processes to complete. If you observe job status (using jobs) then finished background jobs are removed from the job list. You need another mechanism to determine whether a background job has finished.
The following example starts two background processes (sleeps). It then loops, using ps to see whether they are still running. If not, it uses wait to gather the exit code and starts a new background process.
#!/bin/bash
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
while ( true ) do
    running1=`ps -p $pid1 --no-headers | wc -l`
    if [ $running1 == 0 ]
    then
        wait $pid1
        echo process 1 finished with exit code $?
        sleep 3 &
        pid1=$!
    else
        echo process 1 running
    fi
    running2=`ps -p $pid2 --no-headers | wc -l`
    if [ $running2 == 0 ]
    then
        wait $pid2
        echo process 2 finished with exit code $?
        sleep 6 &
        pid2=$!
    else
        echo process 2 running
    fi
    sleep 1
done
Edit: Using SIGCHLD (without polling):
#!/bin/bash
set -bm
trap 'ChildFinished' SIGCHLD
function ChildFinished() {
    running1=`ps -p $pid1 --no-headers | wc -l`
    if [ $running1 == 0 ]
    then
        wait $pid1
        echo process 1 finished with exit code $?
        sleep 3 &
        pid1=$!
    else
        echo process 1 running
    fi
    running2=`ps -p $pid2 --no-headers | wc -l`
    if [ $running2 == 0 ]
    then
        wait $pid2
        echo process 2 finished with exit code $?
        sleep 6 &
        pid2=$!
    else
        echo process 2 running
    fi
    sleep 1
}
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
sleep 1000d
I think the following example answers some of your questions; I am looking into the rest of the question.
(cat list1 list2 list3 | sort | uniq > list123) &
(cat list4 list5 list6 | sort | uniq > list456) &
from:
Running parallel processes in subshells
There is another package for Debian systems named xjobs.
You might want to check it out:
http://packages.debian.org/wheezy/xjobs
If you cannot install parallel for some reason, this will work in plain shell or bash:
# String to detect failure in subprocess
FAIL_STR=failed_cmd
result=$(
    (false || echo ${FAIL_STR}1) &
    (true || echo ${FAIL_STR}2) &
    (false || echo ${FAIL_STR}3)
)
wait
if [[ ${result} == *"$FAIL_STR"* ]]; then
    failure=`echo ${result} | grep -E -o "$FAIL_STR[^[:space:]]+"`
    echo The following commands failed:
    echo "${failure}"
    echo See above output of these commands for details.
    exit 1
fi
Where true & false are placeholders for your commands. You can also echo $? along with the FAIL_STR to get the command status.
Yet another bash-only example for your interest. Of course, prefer the use of GNU parallel, which offers many more features out of the box.
This solution involves creating temporary output files for collecting the job status.
We use /tmp/${$}_ as the temporary file prefix; $$ is the parent process number, and it is the same for the whole script execution.
First, the loop that starts parallel jobs in batches. The batch size is set using max_parrallel_connection. try_connect_DB() is a slow bash function defined in the same file. Here we collect stdout + stderr (2>&1) for failure diagnostics.
nb_project=$(echo "$projects" | wc -w)
i=0
parrallel_connection=0
max_parrallel_connection=10
for p in $projects
do
    i=$((i+1))
    parrallel_connection=$((parrallel_connection+1))
    try_connect_DB $p "$USERNAME" "$pass" > /tmp/${$}_${p}.out 2>&1 &
    if [[ $parrallel_connection -ge $max_parrallel_connection ]]
    then
        echo -n " ... ($i/$nb_project)"
        wait
        parrallel_connection=0
    fi
done
if [[ $nb_project -gt $max_parrallel_connection ]]
then
    # final new line
    echo
fi
# wait for all remaining jobs
wait
After all jobs have finished running, review all the results:
SQL_connection_failed is our error convention, output by try_connect_DB(); you may filter for job success or failure in whatever way best suits your needs.
Here we decided to output only failed results, in order to reduce the amount of output on large jobs, especially if most or all of them passed successfully.
# displaying result that failed
file_with_failure=$(grep -l SQL_connection_failed /tmp/${$}_*.out)
if [[ -n $file_with_failure ]]
then
    nb_failed=$(wc -l <<< "$file_with_failure")
    # we will collect DB name from our output file naming convention, for post treatment
    db_names=""
    echo "=========== failed connections : $nb_failed/$nb_project"
    for failure in $file_with_failure
    do
        echo "============ $failure"
        cat $failure
        db_names+=" $(basename $failure | sed -e 's/^[0-9]\+_\([^.]\+\)\.out/\1/')"
    done
    echo "$db_names"
    ret=1
else
    echo "all tests passed"
    ret=0
fi
# temporary files cleanup, could be kept in case of error; adapt to suit your needs.
rm /tmp/${$}_*.out
exit $ret

bash script: how to save return value of first command in a pipeline?

Bash: I want to run a command and pipe the results through some filter, but if the command fails, I want to return the command's error value, not the boring return value of the filter:
E.g.:
if !(cool_command | output_filter); then handle_the_error; fi
Or:
set -e
cool_command | output_filter
In either case it's the return value of cool_command that I care about -- for the 'if' condition in the first case, or to exit the script in the second case.
Is there some clean idiom for doing this?
Use the PIPESTATUS array variable.
From man bash:
PIPESTATUS
    An array variable (see Arrays below) containing a list of exit status
    values from the processes in the most-recently-executed foreground
    pipeline (which may contain only a single command).
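For example, with the placeholders from the question (capture PIPESTATUS immediately after the pipeline, before any other command overwrites it):
cool_command | output_filter
status=${PIPESTATUS[0]}      # exit status of cool_command, not of output_filter
if [ "$status" -ne 0 ]; then
    handle_the_error
fi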
If you didn't need to display the error output of the command, you could do something like
if ! echo | mysql $dbcreds mysql; then
error "Could not connect to MySQL. Did you forget to add '--db-user=' or '--db-password='?"
die "Check your credentials or ensure server is running with /etc/init.d/mysqld status"
fi
In the example, error and die are functions defined elsewhere in the script. $dbcreds is also defined, though it is built from command line options. If the command generates no error, nothing is returned. If an error occurs, text will be returned by this particular command.
Correct me if I'm wrong, but I get the impression you're really looking to do something a little more convoluted than
[ `id -u` -eq '0' ] || die "Must be run as root!"
where you actually grab the user ID prior to the if statement, and then perform the test. Doing it this way, you could then display the result if you choose. This would be
uid=`id -u`   # note: $UID is read-only in bash, so use a different variable name
if [ $uid -eq '0' ]; then
    echo "User is root"
else
    echo "User is not root"
    exit 1 ## set an exit code higher than 0 if you're exiting because of an error
fi
The following script uses a fifo to filter the output in a separate process. This has the following advantages over the other answers. First, it is not bash specific. In particular it does not rely on PIPESTATUS. Second, output is not stalled until the command has completed.
$ cat >test_filter.sh <<'EOF'
#!/bin/sh
cmd()
{
    echo $1
    echo $2 >&2
    return $3
}
filter()
{
    while read line
    do
        echo "... $line"
    done
}
tmpdir=$(mktemp -d)
fifo="$tmpdir"/out
mkfifo "$fifo"
filter <"$fifo" &
pid=$!
cmd a b 10 >"$fifo" 2>&1
ret=$?
wait $pid
echo exit code: $ret
rm -f "$fifo"
rmdir "$tmpdir"
EOF
$ sh ./test_filter.sh
... a
... b
exit code: 10
