Syntax for a single-line while loop in Bash

I am having trouble coming up with the right combination of semicolons and/or braces. I'd like to do this, but as a one-liner from the command line:
while [ 1 ]
do
foo
sleep 2
done

while true; do foo; sleep 2; done
By the way, if you type it as a multi-line command (as you are showing) at the command prompt and then recall it from history with the up arrow, you will get it on a single line, correctly punctuated.
$ while true
> do
> echo "hello"
> sleep 2
> done
hello
hello
hello
^C
$ <arrow up> while true; do echo "hello"; sleep 2; done

It's also possible to use the sleep command in the while condition itself, which makes the one-liner look cleaner, IMHO.
while sleep 2; do echo thinking; done

Colon is always "true":
while :; do foo; sleep 2; done

You can use semicolons to separate statements:
$ while [ 1 ]; do foo; sleep 2; done

You can also make use of the until command:
until ((0)); do foo; sleep 2; done
Note that, in contrast to while, until executes the commands inside the loop as long as the test condition has an exit status which is not zero.
Using a while loop:
while read i; do foo; sleep 2; done < /dev/urandom
Using a for loop:
for ((;;)); do foo; sleep 2; done
Another way using until:
until [ ]; do foo; sleep 2; done

Using while:
while true; do echo 'while'; sleep 2s; done
Using for Loop:
for ((;;)); do echo 'forloop'; sleep 2; done
Using recursion (a little different from the above; a keyboard interrupt won't stop it):
list(){ echo 'recursion'; sleep 2; list; } && list;

A very simple infinite loop.. :)
while true ; do continue ; done
For your question it would be:
while true; do foo ; sleep 2 ; done

For simple process watching, use watch instead.
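For example, to rerun the (placeholder) foo command every 2 seconds and refresh its output in place:
watch -n 2 foo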

I like to use the semicolons only for the while statement,
and the && operator to make the loop do more than one thing...
So I always do it like this:
while true ; do echo Launching Spaceship into orbit && sleep 5s && /usr/bin/launch-mechanism && echo Launching in T-5 && sleep 1s && echo T-4 && sleep 1s && echo T-3 && sleep 1s && echo T-2 && sleep 1s && echo T-1 && sleep 1s && echo liftoff ; done

If you want the while loop to stop after some condition, and your foo command returns non-zero when this condition is met, then you can get the loop to break like this:
while foo; do echo 'sleeping...'; sleep 5; done;
For example, if the foo command is deleting things in batches, and it returns 1 when there is nothing left to delete.
This works well if you have a custom script that needs to run a command many times until some condition is met. You write the script to exit with 1 when the condition is met and exit with 0 when it should be run again.
For example, say you have a python script batch_update.py which updates 100 rows in a database and returns 0 if there are more to update and 1 if there are no more. The following command will allow you to update rows 100 at a time, sleeping for 5 seconds between updates:
while batch_update.py; do echo 'sleeping...'; sleep 5; done;

You don't even need to use do and done. For infinite loops I find it more readable to use for with curly brackets. For example:
for ((;;)) { date ; sleep 1 ; }
This works in bash and zsh. Doesn't work in sh.

If I may give two practical examples (with a bit of "emotion"):
This writes the name of all files ending with ".jpg" in the folder "img":
for f in *; do if [ "${f#*.}" == 'jpg' ]; then echo "$f"; fi; done
This deletes them:
for f in *; do if [ "${f#*.}" == 'jpg' ]; then rm -r "$f"; fi; done
Just trying to contribute.

You can try this too.
WARNING: you should not do this, but since the question asks for an infinite loop with no end... this is how you could do it.
while [[ 0 -ne 1 ]]; do echo "it's looping"; sleep 2; done

You can also put that loop in the background (e.g. when you need to disconnect from a remote machine):
nohup bash -c "while true; do aws s3 sync xml s3://bucket-name/xml --profile=s3-profile-name; sleep 3600; done &"

Related

How to share variable with sub-thread?

I would like to run a bash script with a watchdog function launched in a sub-thread that will stop my program when a given variable reaches a value. This variable is incremented in the main thread.
var=0
function watchdog()
{
    if [[ $var -ge 3 ]]; then
        echo "Error"
    fi
}
{ watchdog;} &
# main program loop
((var++))
The problem in this code is that $var stays at 0. I also tried without {} around the watchdog call, same result.
Is my code style good?
You cannot share variables between processes in bash, and it does not support multi-threading. So you need a form of Inter-Process Communication. One of the simplest is to use a named pipe, also known as a FIFO.
Here is an example:
pipe='/tmp/mypipe'
mkfifo "$pipe"
var=0
# Your definition is not strictly correct (although it will work)
watchdog()
{
    # Note the loop
    while read var
    do
        if (( var >= 3 )) # a better way to do numeric comparisons
        then
            echo "Error $var"
        else
            echo "$var"
        fi
        sleep 2 # to prevent CPU hogging
    done
}
watchdog < "$pipe" & # No need for a group
# main program loop - ??? I see no loop
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
echo "waiting"
wait
rm "$pipe"
Example run:
$ bash gash.sh
1
waiting
2
Error 3
However I really don't see the point in using a separate process. Why not just call a function to test the value after each change?
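For example, a minimal sketch of that simpler approach (check_var is a hypothetical helper name):
var=0

check_var() {
    if (( var >= 3 )); then   # same threshold as the watchdog
        echo "Error" >&2
        exit 1
    fi
}

# main program loop
for _ in 1 2 3 4; do
    (( var++ ))
    check_var
done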
If you run your bash script with a . (dot) in front of it, it will use the same environment and can change existing variables. Look at this:
$ cat test.sh
#!/usr/bin/env bash
a=12
echo $a
$ a=1
$ echo $a
1
$ ./test.sh
12
$ echo $a
1
$ . ./test.sh
12
$ echo $a
12
After I run . ./test.sh, the variable $a has been changed by the script.

Nesting for loop inside a while loop on one line

I am trying to run a script that loops forever, every ten seconds running a command five times, one second apart. How can I do this from the command line instead of a script?
This does not work:
while true; do; sleep 10 && for i in `seq 3` do; sleep 1 && date; done; done
This works in a script:
#!/bin/ash
while true; do
sleep 10
for i in `seq 3`; do
sleep 1 && date
done
done
If it's relevant, this is to blink an LED in a specific pattern on a Raspberry Pi, not to print the date; the date command is just to see what's happening.
Try this:
while true; do sleep 10 ; for i in `seq 3`; do sleep 1 && date ; done ; done
No semicolon should be used after the do keyword. There is also no need to use && before the for loop in your case. Using && before the for loop means: execute the loop only if the sleep 10 command succeeds.
In your case, whether you use && before the for loop or not, the behavior would be the same.

Multiprocess with shared variable in bash

I'm trying to achieve a dynamic progress bar in bash script, the kind we see when installing new packages. In order to do this, a randomtask would call a progressbar script as a background task and feed it with some integer values.
The first script uses a pipe to feed the second.
#!/bin/bash
# randomtask
pbar_x=0 # percentage of progress
pbar_xmax=100
while [[ $pbar_x != $pbar_xmax ]]; do
    echo "$pbar_x"
    sleep 1
done | ./progressbar &
# do things
(( pbar_x++ ))
# when task is done
(( pbar_x = pbar_xmax ))
Hence, the second script needs to constantly receive the integer, and print it.
#!/bin/bash
# progressbar
while [ 1 ]; do
    read x
    echo "progress: $x%"
done
But here, the second script doesn't receive the values as they are updated. What did I do wrong?
That can't work, the while loop is running in a subprocess, changes in the main program will not affect it in any way.
There are several IPC mechanisms, here I use a named pipe (FIFO):
pbar_x=0 # percentage of progress
pbar_xmax=100
pipename="mypipe"
# Create the pipe
mkfifo "$pipename"
# progressbar will block waiting on input
./progressbar < "$pipename" &
while (( pbar_x != pbar_xmax )); do
    #do things
    (( pbar_x++ ))
    echo "$pbar_x"
    sleep 1
    # when task is done
    #(( pbar_x = pbar_xmax ))
done > "$pipename"
rm "$pipename"
I also modified progressbar:
# This exits the loop when the pipe is closed
while read x
do
echo "progress: $x%"
done
With a third script you could use process substitution instead.
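For instance, a rough sketch using process substitution, assuming progressbar reads numbers from stdin as above:
pbar_x=0
pbar_xmax=100
exec 3> >(./progressbar)   # fd 3 now feeds progressbar's stdin
while (( pbar_x < pbar_xmax )); do
    # do things
    (( pbar_x++ ))
    echo "$pbar_x" >&3
    sleep 1
done
exec 3>&-                  # closing fd 3 ends progressbar's read loop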
I'm on WSL, which means I can't use mkfifo. coproc seemed to answer my need perfectly, so I searched and eventually found this:
coproc usage with examples [bash-hackers wiki].
We start the process with coproc and redirect its output to stdout:
{ coproc PBAR { ./progressbar; } >&3; } 3>&1
Then we can access its output and input via the file descriptors ${PBAR[0]} (output) and ${PBAR[1]} (input):
echo "$pbar_x" >&"${PBAR[1]}"
randomtask
#!/bin/bash
pbar_x=0 # percentage of progress
pbar_xmax=100
{ coproc PBAR { ./progressbar; } >&3; } 3>&1
while (( pbar_x <= 10 )); do
    echo $(( pbar_x++ )) >&"${PBAR[1]}"
    sleep 1
done
# do things
echo $(( pbar_x++ )) >&"${PBAR[1]}"
# when task is done
echo $(( pbar_x = pbar_xmax )) >&"${PBAR[1]}"
progressbar
#!/bin/bash
while read x; do
echo "progress: $x%"
done
Please note that:
The coproc keyword is not specified by POSIX(R).
The coproc keyword appeared in Bash version 4.0-alpha

Does pushing a block of code to background in Bash result in parallelization? [duplicate]

Let's say I have a loop in Bash:
for foo in `some-command`
do
do-something $foo
done
do-something is cpu bound and I have a nice shiny 4 core processor. I'd like to be able to run up to 4 do-something's at once.
The naive approach seems to be:
for foo in `some-command`
do
do-something $foo &
done
This will run all the do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O, and performing it all at once might slow things down a bit. The other problem is that this code block returns immediately, so there is no way to do other work when all the do-somethings are finished.
How would you write this loop so there are always X do-somethings running at once?
Depending on what you want to do, xargs can also help (here: converting documents with pdf2ps):
cpus=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w )
find . -name \*.pdf | xargs --max-args=1 --max-procs=$cpus pdf2ps
From the docs:
--max-procs=max-procs
-P max-procs
Run up to max-procs processes at a time; the default is 1.
If max-procs is 0, xargs will run as many processes as possible at a
time. Use the -n option with -P; otherwise chances are that only one
exec will be done.
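As an aside (not from the original answer), on systems with GNU coreutils the CPU count can also be obtained more simply with nproc:
cpus=$(nproc)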
With GNU Parallel http://www.gnu.org/software/parallel/ you can write:
some-command | parallel do-something
GNU Parallel also supports running jobs on remote computers. This will run one per CPU core on the remote computers - even if they have a different number of cores:
some-command | parallel -S server1,server2 do-something
A more advanced example: here we have a list of files that we want my_script to run on. The files have an extension (maybe .jpeg). We want the output of my_script to be put next to the files in basename.out (e.g. foo.jpeg -> foo.out). We want to run my_script once for each core the computer has, and we want to run it on the local computer, too. For the remote computers, we want each file to be processed to be transferred to the given computer. When my_script finishes, we want foo.out transferred back, and we then want foo.jpeg and foo.out removed from the remote computer:
cat list_of_files | \
parallel --trc {.}.out -S server1,server2,: \
"my_script {} > {.}.out"
GNU Parallel makes sure the output from each job does not mix, so you can use the output as input for another program:
some-command | parallel do-something | postprocess
See the videos for more examples: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
maxjobs=4
parallelize () {
    while [ $# -gt 0 ] ; do
        jobcnt=(`jobs -p`)
        if [ ${#jobcnt[@]} -lt $maxjobs ] ; then
            do-something $1 &
            shift
        else
            sleep 1
        fi
    done
    wait
}
parallelize arg1 arg2 "5 args to third job" arg4 ...
Here is an alternative solution that can be inserted into .bashrc and used for everyday one-liners:
function pwait() {
    while [ $(jobs -p | wc -l) -ge $1 ]; do
        sleep 1
    done
}
To use it, all one has to do is put & after the jobs and add a pwait call; the parameter gives the number of parallel processes:
for i in *; do
    do_something $i &
    pwait 10
done
It would be nicer to use wait instead of busy waiting on the output of jobs -p, but there doesn't seem to be an obvious solution to wait till any one of the given jobs is finished instead of all of them.
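As an aside, if your bash is 4.3 or newer, the builtin wait -n blocks until any single background job finishes, which avoids the busy wait; a minimal sketch of the same helper using it:
function pwait() {
    # bash 4.3+ only: wait -n returns as soon as any one background job exits
    while [ $(jobs -pr | wc -l) -ge $1 ]; do
        wait -n
    done
}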
Instead of plain bash, use a Makefile, then specify the number of simultaneous jobs with make -jX, where X is the number of jobs to run at once.
Or you can use wait ("man wait"): launch several child processes, call wait - it will exit when the child processes finish.
maxjobs=10

job() {
    ...
}

while read -r line; do
    jobsrunning=0
    while (( jobsrunning < maxjobs )); do
        job "$line" &
        (( jobsrunning++ ))
    done
    wait
done < file.txt
If you need to store the jobs' results, then assign each result to a variable. After wait you just check what the variable contains.
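For example, a minimal sketch of launching children and checking their exit statuses after wait (some_long_task is a hypothetical command):
some_long_task input1 &
pid1=$!
some_long_task input2 &
pid2=$!

wait "$pid1"; status1=$?   # waiting on a specific pid returns that child's exit status
wait "$pid2"; status2=$?
echo "job 1 exited with $status1, job 2 exited with $status2"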
If you're familiar with the make command, most of the time you can express the list of commands you want to run as a makefile. For example, if you need to run $SOME_COMMAND on files *.input, each of which produces *.output, you can use the makefile
INPUT = a.input b.input
OUTPUT = $(INPUT:.input=.output)

%.output : %.input
	$(SOME_COMMAND) $< $@

all: $(OUTPUT)
and then just run
make -j<NUMBER>
to run at most NUMBER commands in parallel.
While doing this right in bash is probably impossible, you can do a semi-right version fairly easily. bstark gave a fair approximation of right, but his version has the following flaws:
Word splitting: You can't pass any jobs to it that use any of the following characters in their arguments: spaces, tabs, newlines, stars, question marks. If you do, things will break, possibly unexpectedly.
It relies on the rest of your script to not background anything. If you do, or later you add something to the script that gets sent in the background because you forgot you weren't allowed to use backgrounded jobs because of his snippet, things will break.
Another approximation which doesn't have these flaws is the following:
scheduleAll() {
    local job i=0 max=4 pids=()

    for job; do
        (( ++i % max == 0 )) && {
            wait "${pids[@]}"
            pids=()
        }

        bash -c "$job" & pids+=("$!")
    done

    wait "${pids[@]}"
}
Note that this one is easily adaptable to also check the exit code of each job as it ends, so you can warn the user if a job fails, or set an exit code for scheduleAll according to the number of jobs that failed, or something.
The problem with this code is just that:
It schedules four (in this case) jobs at a time and then waits for all four to end. Some might be done sooner than others which will cause the next batch of four jobs to wait until the longest of the previous batch is done.
A solution that takes care of this last issue would have to use kill -0 to poll whether any of the processes have disappeared instead of the wait and schedule the next job. However, that introduces a small new problem: you have a race condition between a job ending, and the kill -0 checking whether it's ended. If the job ended and another process on your system starts up at the same time, taking a random PID which happens to be that of the job that just finished, the kill -0 won't notice your job having finished and things will break again.
A perfect solution isn't possible in bash.
Maybe try a parallelizing utility instead of rewriting the loop? I'm a big fan of xjobs. I use xjobs all the time to mass copy files across our network, usually when setting up a new database server.
http://www.maier-komor.de/xjobs.html
function for bash:
parallel ()
{
    awk "BEGIN{print \"all: ALL_TARGETS\\n\"}{print \"TARGET_\"NR\":\\n\\t@-\"\$0\"\\n\"}END{printf \"ALL_TARGETS:\";for(i=1;i<=NR;i++){printf \" TARGET_%d\",i};print\"\\n\"}" | make $@ -f - all
}
using:
cat my_commands | parallel -j 4
Really late to the party here, but here's another solution.
A lot of solutions don't handle spaces/special characters in the commands, don't keep N jobs running at all times, eat CPU in busy loops, or rely on external dependencies (e.g. GNU parallel).
Taking inspiration from dead/zombie process handling, here's a pure bash solution:
function run_parallel_jobs {
    local concurrent_max=$1
    local callback=$2
    local cmds=("${@:3}")
    local jobs=( )

    while [[ "${#cmds[@]}" -gt 0 ]] || [[ "${#jobs[@]}" -gt 0 ]]; do
        while [[ "${#jobs[@]}" -lt $concurrent_max ]] && [[ "${#cmds[@]}" -gt 0 ]]; do
            local cmd="${cmds[0]}"
            cmds=("${cmds[@]:1}")

            bash -c "$cmd" &
            jobs+=($!)
        done

        local job="${jobs[0]}"
        jobs=("${jobs[@]:1}")

        local state="$(ps -p $job -o state= 2>/dev/null)"

        if [[ "$state" == "D" ]] || [[ "$state" == "Z" ]]; then
            $callback $job
        else
            wait $job
            $callback $job $?
        fi
    done
}
And sample usage:
function job_done {
    if [[ $# -lt 2 ]]; then
        echo "PID $1 died unexpectedly"
    else
        echo "PID $1 exited $2"
    fi
}
cmds=( \
"echo 1; sleep 1; exit 1" \
"echo 2; sleep 2; exit 2" \
"echo 3; sleep 3; exit 3" \
"echo 4; sleep 4; exit 4" \
"echo 5; sleep 5; exit 5" \
)
# cpus="$(getconf _NPROCESSORS_ONLN)"
cpus=3
run_parallel_jobs $cpus "job_done" "${cmds[@]}"
The output:
1
2
3
PID 56712 exited 1
4
PID 56713 exited 2
5
PID 56714 exited 3
PID 56720 exited 4
PID 56724 exited 5
For per-process output handling $$ could be used to log to a file, for example:
function job_done {
    cat "$1.log"
}
cmds=( \
"echo 1 \$\$ >\$\$.log" \
"echo 2 \$\$ >\$\$.log" \
)
run_parallel_jobs 2 "job_done" "${cmds[@]}"
Output:
1 56871
2 56872
The project I work on uses the wait command to control parallel shell (ksh actually) processes. To address your concerns about IO, on a modern OS, it's possible parallel execution will actually increase efficiency. If all processes are reading the same blocks on disk, only the first process will have to hit the physical hardware. The other processes will often be able to retrieve the block from OS's disk cache in memory. Obviously, reading from memory is several orders of magnitude quicker than reading from disk. Also, the benefit requires no coding changes.
This might be good enough for most purposes, but is not optimal.
#!/bin/bash
n=0
maxjobs=10
for i in *.m4a ; do
    # ( DO SOMETHING ) &
    # limit jobs
    if (( $(($((++n)) % $maxjobs)) == 0 )) ; then
        wait # wait until all have finished (not optimal, but most times good enough)
        echo $n wait
    fi
done
Here is how I managed to solve this issue in a bash script:
#! /bin/bash
MAX_JOBS=32
FILE_LIST=($(cat ${1}))
echo Length ${#FILE_LIST[@]}
for ((INDEX=0; INDEX < ${#FILE_LIST[@]}; INDEX=$((${INDEX}+${MAX_JOBS})) ));
do
    JOBS_RUNNING=0
    while ((JOBS_RUNNING < MAX_JOBS))
    do
        I=$((${INDEX}+${JOBS_RUNNING}))
        FILE=${FILE_LIST[${I}]}
        if [ "$FILE" != "" ];then
            echo $JOBS_RUNNING $FILE
            ./M22Checker ${FILE} &
        else
            echo $JOBS_RUNNING NULL &
        fi
        JOBS_RUNNING=$((JOBS_RUNNING+1))
    done
    wait
done
You can use a simple nested for loop (substitute appropriate integers for N and M below):
for i in {1..N}; do
(for j in {1..M}; do do_something; done & );
done
This will execute do_something N*M times in M rounds, each round executing N jobs in parallel. You can make N equal the number of CPUs you have.
My solution to always keep a given number of processes running, keep track of errors, and handle uninterruptible / zombie processes:
function log {
    echo "$1"
}

# Takes a semicolon-separated list of commands and runs them, keeping
# numberOfProcesses commands running simultaneously.
# Returns the number of non-zero exit codes from the commands.
function ParallelExec {
    local numberOfProcesses="${1}" # Number of simultaneous commands to run
    local commandsArg="${2}"       # Semicolon-separated list of commands

    local pid
    local runningPids=0
    local counter=0
    local commandsArray
    local pidsArray
    local newPidsArray
    local retval
    local retvalAll=0
    local pidState
    local commandsArrayPid

    IFS=';' read -r -a commandsArray <<< "$commandsArg"

    log "Running ${#commandsArray[@]} commands in $numberOfProcesses simultaneous processes."

    while [ $counter -lt "${#commandsArray[@]}" ] || [ ${#pidsArray[@]} -gt 0 ]; do

        while [ $counter -lt "${#commandsArray[@]}" ] && [ ${#pidsArray[@]} -lt $numberOfProcesses ]; do
            log "Running command [${commandsArray[$counter]}]."
            eval "${commandsArray[$counter]}" &
            pid=$!
            pidsArray+=($pid)
            commandsArrayPid[$pid]="${commandsArray[$counter]}"
            counter=$((counter+1))
        done

        newPidsArray=()
        for pid in "${pidsArray[@]}"; do
            # Handle uninterruptible sleep state or zombies by omitting them from the running process array (how do you kill what is already dead? :)
            if kill -0 $pid > /dev/null 2>&1; then
                pidState=$(ps -p$pid -o state= 2>/dev/null)
                if [ "$pidState" != "D" ] && [ "$pidState" != "Z" ]; then
                    newPidsArray+=($pid)
                fi
            else
                # pid is dead, get its exit code from the wait command
                wait $pid
                retval=$?
                if [ $retval -ne 0 ]; then
                    log "Command [${commandsArrayPid[$pid]}] failed with exit code [$retval]."
                    retvalAll=$((retvalAll+1))
                fi
            fi
        done
        pidsArray=("${newPidsArray[@]}")

        # Add a trivial sleep time so bash won't eat all CPU
        sleep .05
    done

    return $retvalAll
}
Usage:
cmds="du -csh /var;du -csh /tmp;sleep 3;du -csh /root;sleep 10; du -csh /home"
# Execute 2 processes at a time
ParallelExec 2 "$cmds"
# Execute 4 processes at a time
ParallelExec 4 "$cmds"
$DOMAINS = "list of some domain in commands"
for foo in some-command
do
eval `some-command for $DOMAINS` &
job[$i]=$!
i=$(( i + 1))
done
Ndomains=echo $DOMAINS |wc -w
for i in $(seq 1 1 $Ndomains)
do
echo "wait for ${job[$i]}"
wait "${job[$i]}"
done
in this concept will work for the parallelize. important thing is last line of eval is '&'
which will put the commands to backgrounds.

Customized progress message for tasks in bash script

I'm currently writing a bash script to do tasks automatically. In my script I want it to display a progress message while it is doing a task.
For example:
user#ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress messages should appear on the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the script lines later?
If so, then how can I echo a message at the same time the exec of configure something is taking place?
How do I know when exec finishes the previous command and returns true? Use $??
Just to put the editorial hat on, what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script just execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1;
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyways, yes as you probably noticed above, use $? to determine what the result of the last command was. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
    echo "ERROR Removing directory: ${TMP_DIR}"
    exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
    echo -en "\b\b${i}"
    sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
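As an aside (not part of the original answer), if you don't want to count backspaces you can return to the start of the line with \r and clear it with the ANSI erase-to-end-of-line escape:
for i in $(seq 1 10); do
    printf '\r\033[K%d' "$i"   # \r goes to column 0, ESC[K clears the rest of the line
    sleep .25
done
echo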
You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 & # Start some command
pid=$! # Save the pid of the command
while sleep 1; do # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" &    # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
        while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
        done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1
