I wrote a script:
Old script:
var="$(sleep 5 && echo "Linux is...")" &
sleep 5
echo $var
New script:
var="$(cat file | grep Succeeded && kilall cat)" & killer1=$!
(sleep 60; kill $killer1) & killer2=$!
fg 1
kill $killer2
echo $var
The cat file part works every time and should return "... \n Succeeded \n ...", but echo always prints an empty string. Is there a solution? I need the result to end up in a variable.
When you terminate a command with &, the shell executes the command in a subshell. var is set in that subshell, not in the original process.
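You can see the same effect with an explicit subshell:
var=outer
( var=inner )   # the assignment happens in a child process
echo "$var"     # prints: outer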
If you run it in the background using &, it runs in a separate process, so you no longer share variables. You need to use IPC (inter-process communication) to get the value across. The easiest IPC mechanism to use is a pipe:
{ sleep 2 && echo 'Linux is ...' ; } |
{
    echo 'doing something here in the meantime...'
    sleep 1
    read var
    echo $var
}
Remove the & from the assignment of var (line 1 in your script).
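For the first script, that means, as a minimal sketch:
var="$(sleep 5 && echo "Linux is...")"   # no &, so this runs in the current shell (and blocks for 5 seconds)
echo "$var"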
Related
I would like to run a bash script with a watchdog function launched in a sub-thread that will stop my program when a given variable reaches a value. This variable is incremented in the main thread.
var=0
function watchdog()
{
    if [[ $var -ge 3 ]]; then
        echo "Error"
    fi
}
{ watchdog; } &
# main program loop
((var++))
The problem with this code is that $var stays at 0 inside the watchdog. I also tried it without the {} around the watchdog call, with the same result.
Is my code style good?
You cannot share variables between processes in bash, and it does not support multi-threading. So you need a form of Inter-Process Communication. One of the simplest is to use a named pipe, also known as a FIFO.
Here is an example:
pipe='/tmp/mypipe'
mkfifo "$pipe"
var=0

# Your definition is not strictly correct (although it will work)
watchdog()
{
    # Note the loop
    while read var
    do
        if (( var >= 3 ))    # a better way to do numeric comparisons
        then
            echo "Error $var"
        else
            echo "$var"
        fi
        sleep 2    # to prevent CPU hogging
    done
}

watchdog < "$pipe" &    # No need for a group

# main program loop - ??? I see no loop
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
echo "waiting"
wait
rm "$pipe"
Example run:
$ bash gash.sh
1
waiting
2
Error 3
However I really don't see the point in using a separate process. Why not just call a function to test the value after each change?
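For example, a minimal sketch of that approach (check_var is just an illustrative name):
var=0
check_var()
{
    if (( var >= 3 )); then
        echo "Error"
        exit 1
    fi
}
# main program loop
((var++)); check_var
((var++)); check_var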
If you run your bash script with a . (a dot) in front, it runs in the same environment and can change existing variables. Look at this:
$ cat test.sh
#!/usr/bin/env bash
a=12
echo $a
$ a=1
$ echo $a
1
$ ./test.sh
12
$ echo $a
1
$ . ./test.sh
12
$ echo $a
12
After I run . ./test.sh, the variable $a has been changed by the script.
I have some bash script where I put a block of commands in background and then want to kill them
#!/bin/bash
{ sleep 117s; echo "test"; } &
ppid=$!
# do something important
<kill the subprocess somehow>
I need to find a way to kill the subprocess so if it still sleeps then it stops sleeping and "test" won't be printed. I need to do it automatically in the script, so I can't use another shell.
What I already tried so far:
kill $ppid - doesn't kill the sleep at all (not even with -9); sleep's parent PID becomes 1, but "test" won't be printed
kill %1 - the same result as above
kill -- -$ppid - it complains kill: (-30847) - No such process (and the subprocess is still there)
pkill -P $ppid - kills the sleep, but "test" still gets printed
How can I do it?
Just change your code to:
{ sleep 117s && echo "test"; } &
From the bash man page:
command1 && command2
command2 is executed if, and only if, command1 returns an exit status
of zero.
Demo:
$ { sleep 117s; echo "test"; } &
[1] 48013
$ pkill -P $!
-bash: line 102: 48014 Terminated sleep 117s
$ test
[1]+ Done { sleep 117s; echo "test"; }
$ { sleep 117s && echo "test"; } &
[1] 50763
$ pkill -P $!
-bash: line 106: 50764 Terminated sleep 117s
Run the command group in its own sub-shell. Use set -m to run the sub-shell in its own process group. Then kill the process group:
#!/bin/bash
set -m
( sleep 117s; echo "test"; ) &
ppid=$!
# do something important
kill -- -$ppid
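For reference: the -- ends option parsing, and the negative PID tells kill to signal the entire process group; set -m is what makes the background sub-shell the leader of its own group, so -$ppid names that group.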
I have the following line in bash.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; sleep 30) | nc localhost 2222 \
| grep -m1 "baz"
This prints "baz" (if/when the other end of the TCP connection sends it) and exits after 32 seconds.
What I want it to do is to exit the sleep 30 early if it sees "baz". The -m flag exits grep, but does not kill the whole line.
How could I achieve this in bash (without using expect if possible)?
Update: the code above does quit, but only if the server sends something after baz. This does not solve the problem, as the server may not send anything for minutes.
If you like the esoteric sides of Bash, you can use coproc for that.
coproc { { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
grep -m1 baz <&${COPROC[0]}
[[ $COPROC_PID ]] && kill $COPROC_PID
Here, we're using coproc to run
{ { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
in the background. coproc takes care of redirecting the standard output and standard input of this compound command to the file descriptors stored in ${COPROC[0]} and ${COPROC[1]}. Moreover, the PID of this job is in COPROC_PID. We then feed grep with the standard output of the background job. It's then easy to kill the job when we're done.
You can catch the pid of the subshell you are opening. Then, something like this should work:
( echo "start"; sleep 1; echo $BASHPID > /tmp/subpid; echo "hello"; sleep 20; ) \
| ( sleep 1; subpid=$(cat /tmp/subpid); grep -m1 hello && kill $subpid )
That is, you store the id of the subshell in a temp file and then continue with the script.
On the other side of the pipe, you read the content of the file (the sleep 1 is there to make sure the id has already been written by the initial subshell) and, when grep finds the match, you kill the subshell.
From man bash:
BASHPID
Expands to the process ID of the current bash process. This differs
from $$ under certain circumstances, such as subshells that do not
require bash to be re-initialized.
Credits to:
Get pid of current subshell
How to get the process id of a bash subprocess on command line.
I just found a solution, based on Jidder's comment.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; for i in $(seq 1 30); do echo -n '.'; sleep 1; done) | grep -m1 "bar"
Just sleeping in a loop does not work, but after adding echo -n '.' it does. It seems that an attempt to write to a closed pipe terminates the writer (SIGPIPE). I have tested this without nc, though.
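A minimal way to observe this, without nc:
yes | head -n 1   # head exits after one line; yes is then killed by SIGPIPE on a later write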
I believe you really need to use expect (http://expect.sourceforge.net/; there are packages for most OSes and distributions).
Otherwise you'll have a hard time handling some cases and getting rid of buffering, etc. Expect does it for you (well, once you have written the right expect script that handles all, or most, cases). For a first draft, you can use autoexpect (http://linux.die.net/man/1/autoexpect), but you'll need to add variations (handling "wrong password" messages, etc.).
Expect is an old tool (it is based, iirc, on Tcl), but there is not really a better tool for the job of "sending input and waiting for outputs (and reacting differently depending on the outputs)".
I'm currently writing a bash script to do tasks automatically. In my script I want it to display progress message when it is doing a task.
For example:
user@ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress message should appear in the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the later script lines?
If so, how can I echo messages while the exec of configure something is taking place?
How do I know when exec has finished the command, and whether it succeeded? By using $??
Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above: use $? to determine the result of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
    echo "ERROR Removing directory: ${TMP_DIR}"
    exit 1
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
    echo -en "\b\b${i}"
    sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 &    # Start some command
pid=$!       # Save the pid of the command
while sleep 1; do    # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" &    # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
           while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
           done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1
I was wondering how, if possible, I can create a simple job management in BASH to process several commands in parallel. That is, I have a big list of commands to run, and I'd like to have two of them running at any given time.
I know quite a bit about bash, so here are the requirements that make it tricky:
The commands have variable running time so I can't just spawn 2, wait, and then continue with the next two. As soon as one command is done a next command must be run.
The controlling process needs to know the exit code of each command so that it can keep a total of how many failed
I'm thinking somehow I can use trap but I don't see an easy way to get the exit value of a child inside the handler.
So, any ideas on how this can be done?
Well, here is some proof-of-concept code. It should probably work, but it breaks bash: invalid command lines get generated, it hangs, and it sometimes even dumps core.
# need monitor mode for trap CHLD to work
set -m

# store the PIDs of the children being watched
declare -a child_pids

function child_done
{
    echo "Child $1 result = $2"
}

function check_pid
{
    # check if still running
    kill -s 0 $1
    if [ $? == 0 ]; then
        child_pids=("${child_pids[@]}" "$1")
    else
        wait $1
        ret=$?
        child_done $1 $ret
    fi
}

# check by copying pids, clearing the list and then checking each; check_pid
# will add a pid back to the list if it is still running
function check_done
{
    to_check=("${child_pids[@]}")
    child_pids=()
    for ((i=0; i<${#to_check[@]}; i++)); do
        check_pid ${to_check[$i]}
    done
}

function run_command
{
    "$@" &
    pid=$!
    # check this pid now (this will add it to the child_pids list if still running)
    check_pid $pid
}

# run check on all pids anytime some child exits
trap 'check_done' CHLD

# test
for ((tl=0; tl<10; tl++)); do
    run_command bash -c "echo FAIL; sleep 1; exit 1;"
    run_command bash -c "echo OKAY;"
done

# wait for all children to be done
wait
Note that this isn't what I ultimately want, but it would be groundwork toward getting what I want.
Follow-up: I've implemented a system to do this in Python, so anybody using Python for scripting can have the above functionality. Refer to shelljob.
GNU Parallel is awesomesauce:
$ parallel -j2 < commands.txt
$ echo $?
It will set the exit status to the number of commands that failed. If you have more than 253 commands, check out --joblog. If you don't know all the commands up front, check out --bg.
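As a sketch (commands.txt here is a hypothetical file with one command per line):
printf '%s\n' 'sleep 1' 'false' 'true' > commands.txt
parallel -j2 < commands.txt
echo $?   # prints 1: exactly one command (false) failed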
Can I persuade you to use make? This has the advantage that you can tell it how many commands to run in parallel (modify the -j number).
echo -e ".PHONY: c1 c2 c3 c4\nall: c1 c2 c3 c4\nc1:\n\tsleep 2; echo c1\nc2:\n\tsleep 2; echo c2\nc3:\n\tsleep 2; echo c3\nc4:\n\tsleep 2; echo c4" | make -f - -j2
Stick it in a Makefile and it will be much more readable
.PHONY: c1 c2 c3 c4
all: c1 c2 c3 c4
c1:
	sleep 2; echo c1
c2:
	sleep 2; echo c2
c3:
	sleep 2; echo c3
c4:
	sleep 2; echo c4
Beware, those are not spaces at the beginning of the lines, they're a TAB, so a cut and paste won't work here.
Put an "#" infront of each command if you don't the command echoed. e.g.:
#sleep 2; echo c1
This would stop on the first command that failed. If you need a count of the failures, you'd need to engineer that into the makefile somehow. Perhaps something like
command || echo F >> failed
Then check the length of failed.
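For instance, after make finishes (the failed file only exists if something failed):
fail_count=0
[ -f failed ] && fail_count=$(wc -l < failed)
echo "$fail_count commands failed"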
The problem you have is that you cannot wait for one of multiple background processes to complete. If you observe job status (using jobs) then finished background jobs are removed from the job list. You need another mechanism to determine whether a background job has finished.
The following example starts two background processes (sleeps). It then loops, using ps to see whether they are still running. If not, it uses wait to gather the exit code and starts a new background process.
#!/bin/bash

sleep 3 &
pid1=$!
sleep 6 &
pid2=$!

while true; do
    running1=$(ps -p $pid1 --no-headers | wc -l)
    if [ $running1 == 0 ]; then
        wait $pid1
        echo process 1 finished with exit code $?
        sleep 3 &
        pid1=$!
    else
        echo process 1 running
    fi

    running2=$(ps -p $pid2 --no-headers | wc -l)
    if [ $running2 == 0 ]; then
        wait $pid2
        echo process 2 finished with exit code $?
        sleep 6 &
        pid2=$!
    else
        echo process 2 running
    fi

    sleep 1
done
Edit: Using SIGCHLD (without polling):
#!/bin/bash

set -bm
trap 'ChildFinished' SIGCHLD

function ChildFinished() {
    running1=$(ps -p $pid1 --no-headers | wc -l)
    if [ $running1 == 0 ]; then
        wait $pid1
        echo process 1 finished with exit code $?
        sleep 3 &
        pid1=$!
    else
        echo process 1 running
    fi

    running2=$(ps -p $pid2 --no-headers | wc -l)
    if [ $running2 == 0 ]; then
        wait $pid2
        echo process 2 finished with exit code $?
        sleep 6 &
        pid2=$!
    else
        echo process 2 running
    fi

    sleep 1
}

sleep 3 &
pid1=$!
sleep 6 &
pid2=$!

sleep 1000d
I think the following example answers some of your questions; I am still looking into the rest of the question:
(cat list1 list2 list3 | sort | uniq > list123) &
(cat list4 list5 list6 | sort | uniq > list456) &
from:
Running parallel processes in subshells
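If you also need the script to block until both subshells finish, add a wait afterwards:
wait   # returns once both background subshells have exited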
There is another package for Debian systems named xjobs.
You might want to check it out:
http://packages.debian.org/wheezy/xjobs
If you cannot install parallel for some reason, this will work in plain shell or bash:
# String to detect failure in a subprocess
FAIL_STR=failed_cmd
result=$(
    (false || echo ${FAIL_STR}1) &
    (true || echo ${FAIL_STR}2) &
    (false || echo ${FAIL_STR}3)
)
wait
if [[ ${result} == *"$FAIL_STR"* ]]; then
    failure=$(echo ${result} | grep -E -o "$FAIL_STR[^[:space:]]+")
    echo The following commands failed:
    echo "${failure}"
    echo See above output of these commands for details.
    exit 1
fi
Where true & false are placeholders for your commands. You can also echo $? along with the FAIL_STR to get the command status.
Yet another bash-only example, for your interest. Of course, prefer GNU parallel, which offers many more features out of the box.
This solution involves creating temporary output files to collect the status of each job.
We use /tmp/${$}_ as the temporary file prefix; $$ is the parent process ID, which stays the same for the entire script execution.
First, the loop that starts the parallel jobs in batches. The batch size is set with max_parrallel_connection. try_connect_DB() is a slow bash function defined in the same file. Here we collect stdout + stderr (2>&1) for failure diagnostics.
nb_project=$(echo "$projects" | wc -w)
i=0
parrallel_connection=0
max_parrallel_connection=10

for p in $projects
do
    i=$((i+1))
    parrallel_connection=$((parrallel_connection+1))
    try_connect_DB $p "$USERNAME" "$pass" > /tmp/${$}_${p}.out 2>&1 &
    if [[ $parrallel_connection -ge $max_parrallel_connection ]]
    then
        echo -n " ... ($i/$nb_project)"
        wait
        parrallel_connection=0
    fi
done

if [[ $nb_project -gt $max_parrallel_connection ]]
then
    # final new line
    echo
fi

# wait for all remaining jobs
wait
After all jobs have finished, review the results:
SQL_connection_failed is our error-marker convention, output by try_connect_DB(); you can filter for job success or failure in whatever way best suits your needs.
Here we decided to output only the failed results, to reduce the amount of output for large job sets, especially when most or all of them pass.
# display the results that failed
file_with_failure=$(grep -l SQL_connection_failed /tmp/${$}_*.out)
if [[ -n $file_with_failure ]]
then
    nb_failed=$(wc -l <<< "$file_with_failure")
    # we collect the DB name from our output file naming convention, for post-treatment
    db_names=""
    echo "=========== failed connections : $nb_failed/$nb_project"
    for failure in $file_with_failure
    do
        echo "============ $failure"
        cat $failure
        db_names+=" $(basename $failure | sed -e 's/^[0-9]\+_\([^.]\+\)\.out/\1/')"
    done
    echo "$db_names"
    ret=1
else
    echo "all tests passed"
    ret=0
fi

# temporary file cleanup; files could be kept in case of error, adapt to suit your needs
rm /tmp/${$}_*.out
exit $ret