Make grep exit early when it finds a match - bash

I have the following line in bash.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; sleep 30) | nc localhost 2222 \
| grep -m1 "baz"
This prints "baz" (if/when the other end of the TCP connection sends it) and exits after 32 seconds.
What I want it to do is to exit the sleep 30 early if it sees "baz". The -m flag makes grep exit, but does not kill the rest of the pipeline.
How could I achieve this in bash (without using expect if possible)?
Update: the code above does quit, but only if the server sends something after baz. This does not solve the problem, as the server may not send anything for minutes.

If you like esoteric sides of Bash, you can use coproc for that.
coproc { { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
grep -m1 baz <&${COPROC[0]}
[[ $COPROC_PID ]] && kill $COPROC_PID
Here, we're using coproc to run
{ { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
in the background. coproc takes care of redirecting the standard output and standard input of this compound command to the file descriptors stored in ${COPROC[0]} and ${COPROC[1]}. Moreover, the PID of this job is in COPROC_PID. We then feed grep with the standard output of the background job. It's then easy to kill the job when we're done.

You can catch the pid of the subshell you are opening. Then, something like this should do:
( echo "start"; sleep 1; echo $BASHPID > /tmp/subpid; echo "hello"; sleep 20; ) \
| ( sleep 1; subpid=$(cat /tmp/subpid); grep -m1 hello && kill $subpid )
That is, you store the id of the subshell in a temp file and then continue with the script.
On the other side of the pipe, you read the content of the file (sleep 1 is to make sure it has been written in the file by the initial subshell) and, when you find the content with grep, you kill it.
From man bash:
BASHPID
Expands to the process ID of the current bash process. This differs
from $$ under certain circumstances, such as subshells that do not
require bash to be re-initialized.
Credits to:
Get pid of current subshell
How to get the process id of a bash subprocess on command line.

Suddenly found a solution based on Jidder's comment.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; for i in `seq 1 30`; do echo -n '.'; sleep 1; done) | grep -m1 "bar"
Just sleeping in a loop does not work. But after adding echo -n '.' it works. It seems that an attempt to write to a closed pipe kills the writer (SIGPIPE), which aborts the whole pipeline. Though I have only tested without nc.
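For the curious, this is easy to reproduce without nc. A minimal sketch (the tick producer is purely illustrative): grep exits at the first match, and the writer is killed by SIGPIPE on its next write, so the whole left-hand side of the pipeline dies about a second later.
(while true; do echo tick; sleep 1; done) | grep -m1 tick
# grep prints "tick" and exits; the writer's next echo hits the closed
# pipe, receives SIGPIPE, and the writing subshell terminates with it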

I believe you really need to use expect (http://expect.sourceforge.net/; there are packages for most OSes and distributions).
Otherwise you'll have a hard time handling some cases and getting rid of buffering, etc. Expect does it for you (well, once you have written the right expect script that handles all, or most, cases). For a first draft, you can use autoexpect (http://linux.die.net/man/1/autoexpect), but you'll need to add variations (handling "wrong password" messages, etc).
Expect is an old tool (and is based, iirc, on Tcl), but there is not really a better tool for the job of "sending input and waiting for outputs (and reacting differently depending on outputs)"

Related

Bash skip sleep and go to next loop iteration

I have a bash loop that looks like this:
something(){
echo "something"
}
while true; do
something
sleep 10s
done | otherCommand
When the loop is in the sleep state, I want to be able to run a function from the terminal that will skip the sleep step and go on to the next iteration of the loop.
For example, if the script has been sleeping for 5 seconds, I want the command to stop the script from sleeping, go on to the next iteration of the loop, run something, and continue sleeping again.
This is not foolproof, but may be robust enough:
To interrupt the sleep command, run:
pkill -f 'sleep 10s'
Note that the script running the sleep command prints something like the following to stderr when sleep is killed: <script-path>: line <line-number>: <pid> Terminated: 15 sleep 10s. Curiously, you cannot suppress this message with sleep 10s 2>/dev/null; to suppress it, you have to either apply 2>/dev/null to the while loop as a whole, or to the script invocation as a whole.
As @miken32 points out in the comments, this command tries to kill all commands whose invocation command line matches sleep 10s, though - unless you're running as root - killing will only succeed for matches among your own processes, due to lack of permission to kill other users' processes.
To be safer, you can explicitly restrict matches to your own processes:
pkill -u "$(id -u)" -f 'sleep 10s'
Truly safe use, however, requires that you capture your running script's PID (process ID), save it to a file, and use the PID from that file with pkill's -P option, which limits matches to child processes of the specified PID - see this answer (thanks, @miken32).
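A hedged sketch of that safer variant (the pid-file path and script name are illustrative; if the loop runs in a subshell, e.g. on the left side of a pipe, write $BASHPID from inside the loop instead of $$):
# myscript.sh - record the script's PID once, so the kill can be targeted
echo $$ > /tmp/myscript.pid
while true; do
    something
    sleep 10s
done
Then, from another terminal, kill only the sleep that is a child of that script:
pkill -P "$(cat /tmp/myscript.pid)" -f 'sleep 10s'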
If you want to skip the sleep, use something like a file you can touch:
if [ ! -f /tmp/skipsleep ]; then
sleep 10
fi
When you want to interrupt the sleep command, kill it!
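Putting the two ideas together, one possible shape (reusing the question's something function and the illustrative /tmp/skipsleep path) is to run sleep in the background and wait for it, so that killing the sleep skips only the current pause:
while true; do
    something
    if [ ! -f /tmp/skipsleep ]; then
        sleep 10 &
        wait $!   # killing that sleep makes wait return immediately
    fi
done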
You could read from a named pipe with read -t 10 instead of sleeping: props for named pipe added by "that other guy"
something()
{
echo "something"
}
somethingelse()
{
echo "something else"
}
mkfifo ~/.myfifo
while cat ~/.myfifo; do true; done |
while true
do
something
read -t 10 && somethingelse
done
Now whenever another script writes to the fifo with echo > ~/.myfifo, the loop will skip its current wait and continue to the next iteration.
This way, different users or different scripts waiting ten seconds will not interfere with each other.
The solution below works if the script is running in the foreground and you are waiting at the terminal. Of course you would need a valid loop exit condition. You can also check the value entered and act on it differently. In this case pressing any key except 'j' will iterate the loop. Pressing 'j' will pipe the output of somethingelse to awk.
something()
{
echo "something"
}
somethingelse()
{
echo "something else"
}
while true; do
something |awk '{print "piping something: " $0 }'
read -t 3 -s -n 1 answer
if [ $? == 0 ]; then
echo "you didn't want to wait!"
fi
if [ "$answer" = "j" ]; then
somethingelse | awk '{print "piping something else: " $0 }'
fi
done
If it's interactive then just use "read -t #" instead of "sleep #".
You can then just press enter to skip the timeout.
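A minimal sketch of that, reusing the question's something function; read -t behaves like the sleep on timeout, and pressing Enter ends the pause early:
while true; do
    something
    read -t 10 && echo "wait skipped early"
done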
This may be old, but a very simple, elegant solution is to iterate over a non-existent, empty, or zero-valued variable, checking first whether it has a value. In this case the &&/|| chain works like a ternary operator:
for((;;)){
# skip one iteration (do_something is a placeholder for your command)
(( i )) && do_something || i=1
# skip the first 5 iterations
(( n >= 5 )) && do_something || ((n++))
}

How to retry a command in Bash?

I have a command that should take less than 1 minute to execute, but for some reason has an extremely long built-in timeout mechanism. I want some bash that does the following:
success = False
try(my_command)
while(!(success))
wait 1 min
if my command not finished
retry(my_command)
else
success = True
end while
How can I do this in Bash?
Look at the GNU timeout command. This kills the process if it has not completed in a given time; you'd simply wrap a loop around this to wait for the timeout to complete successfully, with delays between retries as appropriate, etc.
while timeout -k 70 60 -- my_command; [ $? = 124 ]
do sleep 2 # Pause before retry
done
If you must do it in pure bash (which is not really feasible - bash uses lots of other commands), then you are in for a world of pain and frustration with signal handlers and all sorts of issues.
Please expand on your answer a little. -k 70 is --kill-after= 70 seconds, 124 exit on timeout; what is the 60?
The linked documentation does explain the command; I don't really plan to repeat it all here. The synopsis is timeout [options] duration command [arg]...; one of the options is -k duration. The -k duration says "if the command does not die after the SIGTERM signal is sent at 60 seconds, send a SIGKILL signal at 70 seconds" (and the command should die then). There are a number of documented exit statuses; 124 indicates that the command timed out; 137 that it died after being sent the SIGKILL signal, and so on. You can't tell the difference if the command itself exits with one of the documented statuses.
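So a retry loop that distinguishes a timeout from the command's own failure might look like this sketch (my_command is the question's placeholder; 137 could arguably be retried as well):
while true; do
    timeout -k 70 60 -- my_command
    status=$?
    if [ "$status" -ne 124 ]; then
        break   # the command finished (or failed) on its own; $status is its exit code
    fi
    echo "timed out; retrying..." >&2
    sleep 2     # pause before retry
done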
I found a script from:
http://fahdshariff.blogspot.com/2014/02/retrying-commands-in-shell-scripts.html
#!/bin/bash
# Retries a command on failure.
# $1 - the max number of attempts
# $2... - the command to run
retry() {
local -r -i max_attempts="$1"; shift
local -i attempt_num=1
until "$#"
do
if ((attempt_num==max_attempts))
then
echo "Attempt $attempt_num failed and there are no more attempts left!"
return 1
else
echo "Attempt $attempt_num failed! Trying again in $attempt_num seconds..."
sleep $((attempt_num++))
fi
done
}
# example usage:
retry 5 ls -ltr foo
I liked @Jonathan's answer, but tried to make it more straightforward for future use:
until timeout 1 sleep 2
do
echo "Happening after 1s of sleep"
done
Adapting @Shin's answer to use kill -0 rather than jobs so that this should work even with classic Bourne shell, and allow for other background jobs. You may have to experiment with kill and wait depending on how my_command responds to those.
while true ; do
my_command &
sleep 60
if kill -0 $! 2>/dev/null; then
# Job took too long
kill $!
else
echo "Job is done"
# Reap exit status
wait $!
break
fi
done
You can run a command and retain control with the & background operator. Run your command in the background, sleep for as long as you wish in the foreground, and then, if the background job hasn't terminated, kill it and start over.
while true ; do
my_command &
sleep 60
if [[ $(jobs -r) == "" ]] ; then
echo "Job is done"
break
fi
# Job took too long
kill -9 $!
done
# Retries a given command a given number of times and stores the output in a given variable
# $1 : Command to be passed: handles both simple and piped commands
# $2 : Final output of the command (if successful)
# $3 : Number of retrial attempts[Default 5]
function retry_function() {
echo "Command to be executed : $1"
echo "Final output variable : $2"
echo "Total trials [Default:5] : $3"
counter=${3:-5}
local _my_output_=$2 #make sure passed variable is not same as this
i=1
while [ $i -le $counter ]; do
local my_result=$(eval "$1")
# this tests if the output variable is populated and retries accordingly;
# not possible to provide error status/logs (STDIN, STDERR) owing to subshell execution of the command
# if error logs are needed, execute the same code outside the function, in the same shell
if test -z "$my_result"
then
echo "Trial[$i/$counter]: Execution failed"
else
echo "Trial[$i/$counter]: Successfull execution"
eval $_my_output_="'$my_result'"
break
fi
let i+=1
done
}
retry_function "ping -c 4 google.com | grep \"min/avg/max\" | awk -F\"/\" '{print \$5}'" avg_rtt_time
echo $avg_rtt_time
- To pass in a lengthy command, pass a function that echoes the content, and take care of function expansion in a subshell at the appropriate place.
- Wait time can be added too - just before the increment!
- For a complex command, you'll have to take care of stringifying it (good luck).

Run bash commands in parallel, track results and count

I was wondering how, if possible, I can create a simple job management in BASH to process several commands in parallel. That is, I have a big list of commands to run, and I'd like to have two of them running at any given time.
I know quite a bit about bash, so here are the requirements that make it tricky:
The commands have variable running time so I can't just spawn 2, wait, and then continue with the next two. As soon as one command is done a next command must be run.
The controlling process needs to know the exit code of each command so that it can keep a total of how many failed
I'm thinking somehow I can use trap but I don't see an easy way to get the exit value of a child inside the handler.
So, any ideas on how this can be done?
Well, here is some proof of concept code that should probably work, but it breaks bash: invalid command lines generated, hanging, and sometimes a core dump.
# need monitor mode for trap CHLD to work
set -m
# store the PIDs of the children being watched
declare -a child_pids
function child_done
{
echo "Child $1 result = $2"
}
function check_pid
{
# check if running
kill -s 0 $1
if [ $? == 0 ]; then
child_pids=("${child_pids[#]}" "$1")
else
wait $1
ret=$?
child_done $1 $ret
fi
}
# check by copying pids, clearing list and then checking each, check_pid
# will add back to the list if it is still running
function check_done
{
to_check=("${child_pids[#]}")
child_pids=()
for ((i=0;$i<${#to_check};i++)); do
check_pid ${to_check[$i]}
done
}
function run_command
{
"$#" &
pid=$!
# check this pid now (this will add to the child_pids list if still running)
check_pid $pid
}
# run check on all pids anytime some child exits
trap 'check_done' CHLD
# test
for ((tl=0;tl<10;tl++)); do
run_command bash -c "echo FAIL; sleep 1; exit 1;"
run_command bash -c "echo OKAY;"
done
# wait for all children to be done
wait
Note that this isn't what I ultimately want, but would be groundwork to getting what I want.
Followup: I've implemented a system to do this in Python. So anybody using Python for scripting can have the above functionality. Refer to shelljob
GNU Parallel is awesomesauce:
$ parallel -j2 < commands.txt
$ echo $?
It will set the exit status to the number of commands that failed. If you have more than 253 commands, check out --joblog. If you don't know all the commands up front, check out --bg.
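For example (file names illustrative), --joblog writes one row per command, with the exit status in the Exitval column (the seventh in current versions), so failures can be listed afterwards:
parallel -j2 --joblog /tmp/jobs.log < commands.txt
awk 'NR > 1 && $7 != 0' /tmp/jobs.log   # skip the header; print failed jobs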
Can I persuade you to use make? This has the advantage that you can tell it how many commands to run in parallel (modify the -j number)
echo -e ".PHONY: c1 c2 c3 c4\nall: c1 c2 c3 c4\nc1:\n\tsleep 2; echo c1\nc2:\n\tsleep 2; echo c2\nc3:\n\tsleep 2; echo c3\nc4:\n\tsleep 2; echo c4" | make -f - -j2
Stick it in a Makefile and it will be much more readable
.PHONY: c1 c2 c3 c4
all: c1 c2 c3 c4
c1:
sleep 2; echo c1
c2:
sleep 2; echo c2
c3:
sleep 2; echo c3
c4:
sleep 2; echo c4
Beware: the recipe lines must begin with a TAB, not spaces, so a straight cut and paste from here won't work.
Put an "#" infront of each command if you don't the command echoed. e.g.:
#sleep 2; echo c1
This would stop on the first command that failed. If you need a count of the failures you'd need to engineer that in the makefile somehow. Perhaps something like
command || echo F >> failed
Then check the length of failed.
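A sketch of that counting idea (false stands in for a failing command; the || also keeps make from stopping at the first failure, and recipe lines must start with a TAB):
.PHONY: all c1 c2
all: c1 c2
c1:
	@false || echo F >> failed
c2:
	@echo c2 || echo F >> failed
Then run and count:
rm -f failed; make -j2; [ -f failed ] && wc -l < failed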
The problem you have is that you cannot wait for one of multiple background processes to complete. If you observe job status (using jobs) then finished background jobs are removed from the job list. You need another mechanism to determine whether a background job has finished.
The following example starts two background processes (sleeps). It then loops using ps to see if they are still running. If not, it uses wait to gather the exit code and starts a new background process.
#!/bin/bash
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
while true; do
running1=`ps -p $pid1 --no-headers | wc -l`
if [ $running1 == 0 ]
then
wait $pid1
echo process 1 finished with exit code $?
sleep 3 &
pid1=$!
else
echo process 1 running
fi
running2=`ps -p $pid2 --no-headers | wc -l`
if [ $running2 == 0 ]
then
wait $pid2
echo process 2 finished with exit code $?
sleep 6 &
pid2=$!
else
echo process 2 running
fi
sleep 1
done
Edit: Using SIGCHLD (without polling):
#!/bin/bash
set -bm
trap 'ChildFinished' SIGCHLD
function ChildFinished() {
running1=`ps -p $pid1 --no-headers | wc -l`
if [ $running1 == 0 ]
then
wait $pid1
echo process 1 finished with exit code $?
sleep 3 &
pid1=$!
else
echo process 1 running
fi
running2=`ps -p $pid2 --no-headers | wc -l`
if [ $running2 == 0 ]
then
wait $pid2
echo process 2 finished with exit code $?
sleep 6 &
pid2=$!
else
echo process 2 running
fi
sleep 1
}
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
sleep 1000d
I think the following example answers some of your questions; I am looking into the rest of the question.
(cat list1 list2 list3 | sort | uniq > list123) &
(cat list4 list5 list6 | sort | uniq > list456) &
from:
Running parallel processes in subshells
There is another package for debian systems named xjobs.
You might want to check it out:
http://packages.debian.org/wheezy/xjobs
If you cannot install parallel for some reason, this will work in plain shell or bash:
# String to detect failure in subprocess
FAIL_STR=failed_cmd
result=$(
(false || echo ${FAIL_STR}1) &
(true || echo ${FAIL_STR}2) &
(false || echo ${FAIL_STR}3)
)
wait
if [[ ${result} == *"$FAIL_STR"* ]]; then
failure=`echo ${result} | grep -E -o "$FAIL_STR[^[:space:]]+"`
echo The following commands failed:
echo "${failure}"
echo See above output of these commands for details.
exit 1
fi
Where true & false are placeholders for your commands. You can also echo $? along with the FAIL_STR to get the command status.
Yet another bash-only example, for your interest. Of course, prefer the use of GNU parallel, which offers many more features out of the box.
This solution involves creating temporary output files to collect each job's status.
We use /tmp/${$}_ as the temporary file prefix; $$ is the parent process PID and stays the same for the whole script execution.
First, the loop that starts the parallel jobs in batches. The batch size is set with max_parrallel_connection. try_connect_DB() is a slow bash function defined in the same file. Here we collect stdout + stderr (2>&1) for failure diagnostics.
nb_project=$(echo "$projects" | wc -w)
i=0
parrallel_connection=0
max_parrallel_connection=10
for p in $projects
do
i=$((i+1))
parrallel_connection=$((parrallel_connection+1))
try_connect_DB $p "$USERNAME" "$pass" > /tmp/${$}_${p}.out 2>&1 &
if [[ $parrallel_connection -ge $max_parrallel_connection ]]
then
echo -n " ... ($i/$nb_project)"
wait
parrallel_connection=0
fi
done
if [[ $nb_project -gt $max_parrallel_connection ]]
then
# final new line
echo
fi
# wait for all remaining jobs
wait
After all jobs have finished, review all results:
SQL_connection_failed is our error convention, output by try_connect_DB(); you may filter job success or failure in whatever way best suits your needs.
Here we decided to only output failed results in order to reduce the amount of output on large sized jobs. Especially if most of them, or all, passed successfully.
# displaying result that failed
file_with_failure=$(grep -l SQL_connection_failed /tmp/${$}_*.out)
if [[ -n $file_with_failure ]]
then
nb_failed=$(wc -l <<< "$file_with_failure")
# we will collect DB name from our output file naming convention, for post treatment
db_names=""
echo "=========== failed connections : $nb_failed/$nb_project"
for failure in $file_with_failure
do
echo "============ $failure"
cat $failure
db_names+=" $(basename $failure | sed -e 's/^[0-9]\+_\([^.]\+\)\.out/\1/')"
done
echo "$db_names"
ret=1
else
echo "all tests passed"
ret=0
fi
# temporary files cleanup; could be kept in case of error, adapt to suit your needs.
rm /tmp/${$}_*.out
exit $ret

How do you run multiple programs in parallel from a bash script?

I am trying to write a .sh file that runs many programs simultaneously
I tried this
prog1
prog2
But that runs prog1 then waits until prog1 ends and then starts prog2...
So how can I run them in parallel?
How about:
prog1 & prog2 && fg
This will:
Start prog1.
Send it to background, but keep printing its output.
Start prog2, and keep it in foreground, so you can close it with ctrl-c.
When you close prog2, you'll return to prog1's foreground, so you can also close it with ctrl-c.
To run multiple programs in parallel:
prog1 &
prog2 &
If you need your script to wait for the programs to finish, you can add:
wait
at the point where you want the script to wait for them.
If you want to be able to easily run and kill multiple process with ctrl-c, this is my favorite method: spawn multiple background processes in a (…) subshell, and trap SIGINT to execute kill 0, which will kill everything spawned in the subshell group:
(trap 'kill 0' SIGINT; prog1 & prog2 & prog3)
You can have complex process execution structures, and everything will close with a single ctrl-c (just make sure the last process is run in the foreground, i.e., don't include a & after prog1.3):
(trap 'kill 0' SIGINT; prog1.1 && prog1.2 & (prog2.1 | prog2.2 || prog2.3) & prog1.3)
If there is a chance the last command might exit early and you want to keep everything else running, add wait as the last command. In the following example, sleep 2 would have exited first, killing sleep 4 before it finished; adding wait allows both to run to completion:
(trap 'kill 0' SIGINT; sleep 4 & sleep 2 & wait)
You can use wait:
some_command &
P1=$!
other_command &
P2=$!
wait $P1 $P2
It assigns the background program PIDs to variables ($! is the last launched process' PID), then the wait command waits for them. It is nice because if you kill the script, it kills the processes too!
With GNU Parallel http://www.gnu.org/software/parallel/ it is as easy as:
(echo prog1; echo prog2) | parallel
Or if you prefer:
parallel ::: prog1 prog2
Learn more:
Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line
will love you for it.
Read: Ole Tange, GNU Parallel 2018 (Ole Tange, 2018).
xargs -P <n> allows you to run <n> commands in parallel.
While -P is a nonstandard option, both the GNU (Linux) and macOS/BSD implementations support it.
The following example:
runs at most 3 commands in parallel at a time,
with additional commands starting only when a previously launched process terminates.
time xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF
The output looks something like:
1 # output from 1st command
4 # output from *last* command, which started as soon as the count dropped below 3
2 # output from 2nd command
3 # output from 3rd command
real 0m3.012s
user 0m0.011s
sys 0m0.008s
The timing shows that the commands were run in parallel (the last command was launched only after the first of the original 3 terminated, but executed very quickly).
The xargs command itself won't return until all commands have finished, but you can execute it in the background by terminating it with control operator & and then using the wait builtin to wait for the entire xargs command to finish.
{
xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF
} &
# Script execution continues here while `xargs` is running
# in the background.
echo "Waiting for commands to finish..."
# Wait for `xargs` to finish, via special variable $!, which contains
# the PID of the most recently started background process.
wait $!
Note:
BSD/macOS xargs requires you to specify the count of commands to run in parallel explicitly, whereas GNU xargs allows you to specify -P 0 to run as many as possible in parallel.
Output from the processes run in parallel arrives as it is being generated, so it will be unpredictably interleaved.
GNU parallel, as mentioned in Ole's answer (does not come standard with most platforms), conveniently serializes (groups) the output on a per-process basis and offers many more advanced features.
#!/bin/bash
prog1 & 2> .errorprog1.log; prog2 & 2> .errorprog2.log
Redirect errors to separate logs.
Here is a function I use in order to run at max n process in parallel (n=4 in the example):
max_children=4
function parallel {
local time1=$(date +"%H:%M:%S")
local time2=""
# for the sake of the example, I'm using $2 as a description, you may be interested in other description
echo "starting $2 ($time1)..."
"$#" && time2=$(date +"%H:%M:%S") && echo "finishing $2 ($time1 -- $time2)..." &
local my_pid=$$
local children=$(ps -eo ppid | grep -w $my_pid | wc -w)
children=$((children-1))
if [[ $children -ge $max_children ]]; then
wait -n
fi
}
parallel sleep 5
parallel sleep 6
parallel sleep 7
parallel sleep 8
parallel sleep 9
wait
If max_children is set to the number of cores, this function will try to avoid idle cores.
There is a very useful program called nohup.
nohup - run a command immune to hangups, with output to a non-tty
This works beautifully for me (found here):
sh -c 'command1 & command2 & command3 & wait'
It outputs all the logs of each command intermingled (which is what I wanted), and all are killed with ctrl+c.
I had a similar situation recently where I needed to run multiple programs at the same time, redirect their outputs to separated log files and wait for them to finish and I ended up with something like that:
#!/bin/bash
# Add the full path processes to run to the array
PROCESSES_TO_RUN=("/home/joao/Code/test/prog_1/prog1" \
"/home/joao/Code/test/prog_2/prog2")
# You can keep adding processes to the array...
for i in ${PROCESSES_TO_RUN[@]}; do
${i%/*}/./${i##*/} > ${i}.log 2>&1 &
# ${i%/*} -> Get folder name until the /
# ${i##*/} -> Get the filename after the /
done
# Wait for the processes to finish
wait
Source: http://joaoperibeiro.com/execute-multiple-programs-and-redirect-their-outputs-linux/
You can try ppss (abandoned). ppss is rather powerful - you can even create a mini-cluster.
xargs -P can also be useful if you've got a batch of embarrassingly parallel processing to do.
Process Spawning Manager
Sure, technically these are processes, and this program should really be called a process spawning manager, but this is only due to the way that BASH works when it forks using the ampersand: it uses the fork() or perhaps clone() system call, which clones into a separate memory space, rather than something like pthread_create(), which would share memory. If BASH supported the latter, each "sequence of execution" would operate just the same and could be termed traditional threads whilst gaining a more efficient memory footprint. Functionally, however, it works the same, though a bit more awkwardly, since GLOBAL variables are not available in each worker clone; hence the use of the inter-process communication file and the rudimentary flock semaphore to manage critical sections.
Forking from BASH is of course the basic answer here, but I feel as if people know that and are really looking to manage what is spawned rather than just fork it and forget it. This demonstrates a way to manage up to 200 instances of forked processes all accessing a single resource. Clearly this is overkill, but I enjoyed writing it so I kept on. Increase the size of your terminal accordingly. I hope you find this useful.
ME=$(basename $0)
IPC="/tmp/$ME.ipc" #interprocess communication file (global thread accounting stats)
DBG=/tmp/$ME.log
echo 0 > $IPC #initalize counter
F1=thread
SPAWNED=0
COMPLETE=0
SPAWN=1000 #number of jobs to process
SPEEDFACTOR=1 #dynamically compensates for execution time
THREADLIMIT=50 #maximum concurrent threads
TPS=1 #threads per second delay
THREADCOUNT=0 #number of running threads
SCALE="scale=5" #controls bc's precision
START=$(date +%s) #whence we began
MAXTHREADDUR=6 #maximum thread life span - demo mode
LOWER=$[$THREADLIMIT*100*90/10000] #90% worker utilization threshold
UPPER=$[$THREADLIMIT*100*95/10000] #95% worker utilization threshold
DELTA=10 #initial percent speed change
threadspeed() #dynamically adjust spawn rate based on worker utilization
{
#vaguely assumes thread execution average will be consistent
THREADCOUNT=$(threadcount)
if [ $THREADCOUNT -ge $LOWER ] && [ $THREADCOUNT -le $UPPER ] ;then
echo SPEED HOLD >> $DBG
return
elif [ $THREADCOUNT -lt $LOWER ] ;then
#if maxthread is free speed up
SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1-($DELTA/100))"|bc)
echo SPEED UP $DELTA%>> $DBG
elif [ $THREADCOUNT -gt $UPPER ];then
#if maxthread is active then slow down
SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1+($DELTA/100))"|bc)
DELTA=1 #begin fine grain control
echo SLOW DOWN $DELTA%>> $DBG
fi
echo SPEEDFACTOR $SPEEDFACTOR >> $DBG
#average thread duration (total elapsed time / number of threads completed)
#if threads completed is zero (less than 100), default to maxdelay/2 maxthreads
COMPLETE=$(cat $IPC)
if [ -z $COMPLETE ];then
echo BAD IPC READ ============================================== >> $DBG
return
fi
#echo Threads COMPLETE $COMPLETE >> $DBG
if [ $COMPLETE -lt 100 ];then
AVGTHREAD=$(echo "$SCALE;$MAXTHREADDUR/2"|bc)
else
ELAPSED=$[$(date +%s)-$START]
#echo Elapsed Time $ELAPSED >> $DBG
AVGTHREAD=$(echo "$SCALE;$ELAPSED/$COMPLETE*$THREADLIMIT"|bc)
fi
echo AVGTHREAD Duration is $AVGTHREAD >> $DBG
#calculate timing to achieve spawning each workers fast enough
# to utilize threadlimit - average time it takes to complete one thread / max number of threads
TPS=$(echo "$SCALE;($AVGTHREAD/$THREADLIMIT)*$SPEEDFACTOR"|bc)
#TPS=$(echo "$SCALE;$AVGTHREAD/$THREADLIMIT"|bc) # maintains pretty good
#echo TPS $TPS >> $DBG
}
function plot()
{
echo -en \\033[${2}\;${1}H
if [ -n "$3" ];then
if [[ $4 = "good" ]];then
echo -en "\\033[1;32m"
elif [[ $4 = "warn" ]];then
echo -en "\\033[1;33m"
elif [[ $4 = "fail" ]];then
echo -en "\\033[1;31m"
elif [[ $4 = "crit" ]];then
echo -en "\\033[1;31;4m"
fi
fi
echo -n "$3"
echo -en "\\033[0;39m"
}
trackthread() #displays thread status
{
WORKERID=$1
THREADID=$2
ACTION=$3 #setactive | setfree | update
AGE=$4
TS=$(date +%s)
COL=$[(($WORKERID-1)/50)*40]
ROW=$[(($WORKERID-1)%50)+1]
case $ACTION in
"setactive" )
touch /tmp/$ME.$F1$WORKERID #redundant - see main loop
#echo created file $ME.$F1$WORKERID >> $DBG
plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID INIT " good
;;
"update" )
plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID AGE:$AGE" warn
;;
"setfree" )
plot $COL $ROW "Worker$WORKERID: FREE " fail
rm /tmp/$ME.$F1$WORKERID
;;
* )
;;
esac
}
getfreeworkerid()
{
for i in $(seq 1 $[$THREADLIMIT+1])
do
if [ ! -e /tmp/$ME.$F1$i ];then
#echo "getfreeworkerid returned $i" >> $DBG
break
fi
done
if [ $i -eq $[$THREADLIMIT+1] ];then
#echo "no free threads" >> $DBG
echo 0
#exit
else
echo $i
fi
}
updateIPC()
{
COMPLETE=$(cat $IPC) #read IPC
COMPLETE=$[$COMPLETE+1] #increment IPC
echo $COMPLETE > $IPC #write back to IPC
}
worker()
{
WORKERID=$1
THREADID=$2
#echo "new worker WORKERID:$WORKERID THREADID:$THREADID" >> $DBG
#accessing common terminal requires critical blocking section
(flock -x -w 10 201
trackthread $WORKERID $THREADID setactive
)201>/tmp/$ME.lock
let "RND = $RANDOM % $MAXTHREADDUR +1"
for s in $(seq 1 $RND) #simulate random lifespan
do
sleep 1;
(flock -x -w 10 201
trackthread $WORKERID $THREADID update $s
)201>/tmp/$ME.lock
done
(flock -x -w 10 201
trackthread $WORKERID $THREADID setfree
)201>/tmp/$ME.lock
(flock -x -w 10 201
updateIPC
)201>/tmp/$ME.lock
}
threadcount()
{
TC=$(ls /tmp/$ME.$F1* 2> /dev/null | wc -l)
#echo threadcount is $TC >> $DBG
THREADCOUNT=$TC
echo $TC
}
status()
{
#summary status line
COMPLETE=$(cat $IPC)
plot 1 $[$THREADLIMIT+2] "WORKERS $(threadcount)/$THREADLIMIT SPAWNED $SPAWNED/$SPAWN COMPLETE $COMPLETE/$SPAWN SF=$SPEEDFACTOR TIMING=$TPS"
echo -en '\033[K' #clear to end of line
}
function main()
{
while [ $SPAWNED -lt $SPAWN ]
do
while [ $(threadcount) -lt $THREADLIMIT ] && [ $SPAWNED -lt $SPAWN ]
do
WID=$(getfreeworkerid)
worker $WID $SPAWNED &
touch /tmp/$ME.$F1$WID #if this loops faster than file creation in the worker thread it steps on itself, thread tracking is best in main loop
SPAWNED=$[$SPAWNED+1]
(flock -x -w 10 201
status
)201>/tmp/$ME.lock
sleep $TPS
if ((! $[$SPAWNED%100]));then
#rethink thread timing every 100 threads
threadspeed
fi
done
sleep $TPS
done
while [ "$(threadcount)" -gt 0 ]
do
(flock -x -w 10 201
status
)201>/tmp/$ME.lock
sleep 1;
done
status
}
clear
threadspeed
main
wait
status
echo
Since for some reason I can't use wait, I came up with this solution:
# create a hashmap of the tasks name -> its command
declare -A tasks=(
["Sleep 3 seconds"]="sleep 3"
["Check network"]="ping imdb.com"
["List dir"]="ls -la"
)
# execute each task in the background, redirecting their output to a custom file descriptor
fd=10
for task in "${!tasks[#]}"; do
script="${tasks[${task}]}"
eval "exec $fd< <(${script} 2>&1 || (echo $task failed with exit code \${?}! && touch tasks_failed))"
((fd+=1))
done
# print the outputs of the tasks and wait for them to finish
fd=10
for task in "${!tasks[#]}"; do
cat <&$fd
((fd+=1))
done
# determine the exit status
# by checking whether the file "tasks_failed" has been created
if [ -e tasks_failed ]; then
echo "Task(s) failed!"
exit 1
else
echo "All tasks finished without an error!"
exit 0
fi
Your script should look like:
prog1 &
prog2 &
.
.
progn &
wait
progn+1 &
progn+2 &
.
.
Assuming your system can take n jobs at a time, use wait to run only n jobs at a time.
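A sketch of that batching idea (n=4 and the prog names are placeholders):
n=4
i=0
for prog in prog1 prog2 prog3 prog4 prog5 prog6; do
    "$prog" &
    if (( ++i % n == 0 )); then
        wait   # block until the current batch of n jobs finishes
    fi
done
wait   # collect the final partial batch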
If you're:
On Mac and have iTerm
Want to start various processes that stay open long-term / until Ctrl+C
Want to be able to easily see the output from each process
Want to be able to easily stop a specific process with Ctrl+C
One option is scripting the terminal itself if your use case is more app monitoring / management.
For example I recently did the following. Granted it's Mac specific, iTerm specific, and relies on a deprecated Apple Script API (iTerm has a newer Python option). It doesn't win any elegance awards but gets the job done.
#!/bin/sh
root_path="~/root-path"
auth_api_script="$root_path/auth-path/auth-script.sh"
admin_api_proj="$root_path/admin-path/admin.csproj"
agent_proj="$root_path/agent-path/agent.csproj"
dashboard_path="$root_path/dashboard-web"
osascript <<THEEND
tell application "iTerm"
set newWindow to (create window with default profile)
tell current session of newWindow
set name to "Auth API"
write text "pushd $root_path && $auth_api_script"
end tell
tell newWindow
set newTab to (create tab with default profile)
tell current session of newTab
set name to "Admin API"
write text "dotnet run --debug -p $admin_api_proj"
end tell
end tell
tell newWindow
set newTab to (create tab with default profile)
tell current session of newTab
set name to "Agent"
write text "dotnet run --debug -p $agent_proj"
end tell
end tell
tell newWindow
set newTab to (create tab with default profile)
tell current session of newTab
set name to "Dashboard"
write text "pushd $dashboard_path; ng serve -o"
end tell
end tell
end tell
THEEND
If you have a GUI terminal, you could spawn a new tabbed terminal instance for each process you want to run in parallel.
This has the benefit that each program runs in its own tab where it can be interacted with and managed independently of the other running programs.
For example, on Ubuntu 20.04:
gnome-terminal --tab -- bash -c 'prog1'
gnome-terminal --tab -- bash -c 'prog2'
To run certain programs or other commands sequentially, you can add ;
gnome-terminal --tab -- bash -c 'prog1_1; prog1_2'
gnome-terminal --tab -- bash -c 'prog2'
I've found that for some programs, the terminal closes before they start up. For these programs I append the terminal command with ; wait or ; sleep 1
gnome-terminal --tab -- bash -c 'prog1; wait'
For Mac OS, you would have to find an equivalent command for the terminal you are using - I haven't tested on Mac OS since I don't own a Mac.
There're a lot of interesting answers here, but I took inspiration from this answer and put together a simple script that runs multiple processes in parallel and handles the results once they're done. You can find it in this gist, or below:
#!/usr/bin/env bash
# inspired by https://stackoverflow.com/a/29535256/2860309
pids=""
failures=0
function my_process() {
seconds_to_sleep=$1
exit_code=$2
sleep "$seconds_to_sleep"
return "$exit_code"
}
(my_process 1 0) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 1 second to success"
(my_process 1 1) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 1 second to failure"
(my_process 2 0) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 2 seconds to success"
(my_process 2 1) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 2 seconds to failure"
echo "..."
for pid in $pids; do
if wait "$pid"; then
echo "Process $pid succeeded"
else
echo "Process $pid failed"
failures=$((failures+1))
fi
done
echo
echo "${failures} failures detected"
This results in:
86400: 1 second to success
86401: 1 second to failure
86402: 2 seconds to success
86404: 2 seconds to failure
...
Process 86400 succeeded
Process 86401 failed
Process 86402 succeeded
Process 86404 failed
2 failures detected
With bashj (https://sourceforge.net/projects/bashj/), you should be able to run not only multiple processes (the way others suggested) but also multiple threads in one JVM controlled from your script. But of course this requires a Java JDK. Threads consume fewer resources than processes.
Here is a working code:
#!/usr/bin/bashj
#!java
public static int cnt=0;
private static void loop() {u.p("java says cnt= "+(cnt++));u.sleep(1.0);}
public static void startThread()
{(new Thread(() -> {while (true) {loop();}})).start();}
#!bashj
j.startThread()
while [ j.cnt -lt 4 ]
do
echo "bash views cnt=" j.cnt
sleep 0.5
done

How to include a timer in Bash Scripting?

Good day! Is there any way to include a timer (timestamp? or whatever term it is) in a script using bash? For instance: every 60 seconds, a specific function checks if the internet is down; if it is, it connects to the wifi device instead, and vice versa. In short, the program checks the internet connection from time to time.
Any suggestions/answers will be much appreciated. =)
Blunt version
while sleep 60; do
if ! check_internet; then
if is_wifi; then
set_wired
else
set_wifi
fi
fi
done
Using the sleep itself as loop condition allows you to break out of the loop by killing the sleep (i.e. if it's a foreground process, ctrl-c will do).
If we're talking about intervals of minutes or hours, cron will probably do a better job, as Montecristo pointed out.
You may want to do a man cron.
Or if you have to stick to bash, just put the function call inside a loop, with a sleep 60 inside the iteration.
Please find here a script that you can use; first, add an entry to your crontab like this:
$ sudo crontab -e
* * * * * /path/to/your/switcher
This is a simple method that relies on pinging a live server every minute; if the server is not reachable, it will switch to the second router defined below.
Surely there are better ways to handle this.
$ cat > switcher
#!/bin/sh
route=`which route`
ip=`which ip`
# define your email here
mail="user#domain.tld"
# We define our pingable target like 'yahoo' or whatever; note that the host has to be
# reachable at all times
target="www.yahoo.com"
# log file
file="/var/log/updown.log"
# your routers here
router1="192.168.0.1"
router2="192.168.0.254"
# default router
default=$($ip route | awk '/default/ { print $3 }')
# ping command
ping -c 2 ${target}
if [ $? -eq 0 ]; then
echo "`date +%Y%m%d-%H:%M:%S`: up" >> ${file}
else
echo "`date +%Y%m%d-%H:%M:%S`: down" >> ${file}
if [ "${default}" = "${router1}" ]; then
${route} del default gw ${router1}
${route} add default gw ${router2}
elif [ "${default}" = "${router2}" ]; then
${route} del default gw ${router2}
${route} add default gw ${router1}
fi
# sending a notification by mail or may be by sms
echo "Connection problem" |mail -s "Changing Routing table" ${mail}
fi
I liked William's answer, because it does not need polling. So I implemented the following script based on his idea. It works around the problem that control has to return to the shell.
#!/bin/sh
someproc()
{
sleep $1
return $2
}
run_or_timeout()
{
timeout=$1
shift
{
trap 'exit 0' 15
"$#"
} &
proc=$!
trap "kill $proc" ALRM
{
trap 'exit 0' 15
sleep $timeout
kill -ALRM $$
} &
alarm=$!
wait $proc
ret=$?
# cleanup
kill $alarm
trap - ALRM
return $ret
}
run_or_timeout 0 someproc 1 0
echo "exit: $? (expected: 142)"
run_or_timeout 1 someproc 0 0
echo "exit: $? (expected: 0)"
run_or_timeout 1 someproc 0 1
echo "exit: $? (expected: 1)"
You can do something like the following, but it is not reliable:
#!/bin/sh
trap handle_timer USR1
set_timer() { (sleep 2; kill -USR1 $$)& }
handle_timer() {
printf "%s:%s\n" "timer expired" "$(date)";
set_timer
}
set_timer
while true; do sleep 1; date; done
One problem with this technique is that the trap will not take effect until the current task returns to the shell (e.g., replace the sleep 1 with sleep 10). If the shell is in control most of the time (e.g. if all the commands it calls terminate quickly), this can work. One option, of course, is to run everything in the background.
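One way to act on that last suggestion is to keep the shell idle in the interruptible wait builtin rather than in a foreground command (a sketch on top of the same USR1 scheme):
while true; do
    sleep 10 &   # the long-running work, in the background
    wait $!      # returns as soon as the USR1 trap fires
    date
done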
Create a bash script that checks once if internet connection is down and add the script in a crontab task that runs every 60 seconds.
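A minimal sketch of that approach (path, target host, and the switching action are all illustrative):
#!/bin/sh
# /usr/local/bin/checknet - run once a minute by cron
if ! ping -c 2 www.yahoo.com > /dev/null 2>&1; then
    # connection is down: switch the route/interface here, as in the switcher script above
    logger "internet down, switching"
fi
And the crontab entry:
* * * * * /usr/local/bin/checknet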
