I'm trying to achieve a dynamic progress bar in a bash script, the kind we see when installing new packages. To do this, a randomtask script would call a progressbar script as a background task and feed it integer values.
The first script uses a pipe to feed the second.
#!/bin/bash
# randomtask
pbar_x=0 # percentage of progress
pbar_xmax=100
while [[ $pbar_x != $pbar_xmax ]]; do
    echo "$pbar_x"
    sleep 1
done | ./progressbar &
# do things
(( pbar_x++ ))
# when task is done
(( pbar_x = pbar_xmax ))
Hence, the second script needs to constantly receive the integer, and print it.
#!/bin/bash
# progressbar
while [ 1 ]; do
    read x
    echo "progress: $x%"
done
But here, the second script doesn't receive the values as they are updated. What did I do wrong?
That can't work: the while loop runs in a subprocess, and changes in the main program will not affect it in any way.
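A quick demonstration of the subshell behavior (a minimal sketch):
x=1
{ x=2; } &   # the group runs in a child process
wait         # wait for the background job to finish
echo "$x"    # prints 1: the child's assignment never reaches the parent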
There are several IPC mechanisms, here I use a named pipe (FIFO):
pbar_x=0 # percentage of progress
pbar_xmax=100
pipename="mypipe"
# Create the pipe
mkfifo "$pipename"
# progressbar will block waiting on input
./progressbar < "$pipename" &
while (( pbar_x != pbar_xmax )); do
    # do things
    (( pbar_x++ ))
    echo "$pbar_x"
    sleep 1
    # when task is done
    #(( pbar_x = pbar_xmax ))
done > "$pipename"
rm "$pipename"
I also modified progressbar:
# This exits the loop when the pipe is closed
while read x
do
    echo "progress: $x%"
done
With a third script you could use process substitution instead.
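For instance, a minimal sketch of that idea, assuming the same ./progressbar script as above:
exec 3> >(./progressbar)   # fd 3 now feeds progressbar's stdin
for ((pbar_x = 0; pbar_x <= 100; pbar_x += 10)); do
    echo "$pbar_x" >&3     # send a progress update
    sleep 1
done
exec 3>&-                  # closing fd 3 sends EOF, so progressbar's read loop ends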
I'm on WSL, which means I can't use mkfifo. And coproc seemed to answer my need perfectly, so I searched and eventually found this:
coproc usage with examples [bash-hackers wiki].
We start the process with coproc and redirect its output to stdout:
{ coproc PBAR { ./progressbar; } >&3; } 3>&1
Then we can access its input and output via the file descriptors ${PBAR[0]} (output) and ${PBAR[1]} (input):
echo "$pbar_x" >&"${PBAR[1]}"
randomtask
#!/bin/bash
pbar_x=0 # percentage of progress
pbar_xmax=100
{ coproc PBAR { ./progressbar; } >&3; } 3>&1
while (( pbar_x <= 10 )); do
    echo $(( pbar_x++ )) >&"${PBAR[1]}"
    sleep 1
done
# do things
echo $(( pbar_x++ )) >&"${PBAR[1]}"
# when task is done
echo $(( pbar_x = pbar_xmax )) >&"${PBAR[1]}"
progressbar
#!/bin/bash
while read x; do
    echo "progress: $x%"
done
Please note that:
The coproc keyword is not specified by POSIX.
The coproc keyword appeared in Bash version 4.0-alpha
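To check which bash you are running:
echo "$BASH_VERSION"   # coproc requires 4.0 or later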
I would like to run a bash script with a watchdog function launched in a sub thread that will stop my program when a given variable reaches a value. This variable is incremented in the main thread.
var=0
function watchdog()
{
    if [[ $var -ge 3 ]]; then
        echo "Error"
    fi
}
{ watchdog; } &
# main program loop
((var++))
The problem in this code is that $var stays at 0. I also tried without {} around the watchdog call, same result.
Is my code style good?
You cannot share variables between processes in bash, and it does not support multi-threading. So you need a form of Inter-Process Communication. One of the simplest is to use a named pipe, also known as a FIFO.
Here is an example:
pipe='/tmp/mypipe'
mkfifo "$pipe"
var=0
# Your definition is not strictly correct (although it will work)
watchdog()
{
    # Note the loop
    while read var
    do
        if (( var >= 3 )) # a better way to do numeric comparisons
        then
            echo "Error $var"
        else
            echo "$var"
        fi
        sleep 2 # to prevent CPU hogging
    done
}
watchdog < "$pipe" & # No need for a group
# main program loop - ??? I see no loop
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
((var++))
echo "$var" > "$pipe"
echo "waiting"
wait
rm "$pipe"
Example run:
$ bash gash.sh
1
waiting
2
Error 3
However, I really don't see the point of using a separate process. Why not just call a function to test the value after each change?
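For example, a minimal sketch of that simpler approach:
var=0
check_var() {
    if (( var >= 3 )); then
        echo "Error $var"
        exit 1
    fi
}
# main program loop
((var++)); check_var
((var++)); check_var
((var++)); check_var   # prints "Error 3" and exits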
If you run your bash script with a . (dot) before it, it will use the same environment and can change existing variables. Look at this:
$ cat test.sh
#!/usr/bin/env bash
a=12
echo $a
$ a=1
$ echo $a
1
$ ./test.sh
12
$ echo $a
1
$ . ./test.sh
12
$ echo $a
12
After I run . ./test.sh, the variable $a has been changed by the script.
I want to build a stopwatch in Bash, with a pause feature. It should display an incrementing counter, like this one does, but pause it when I hit the "p" key.
How should I implement that? If I wait for user input with read I can't refresh the counter on the screen at the same time. Putting the read inside a loop, with a timeout, is my best plan so far, but it's non-trivial to use a timeout less than one second, which is what I would need here. (It's not supported by read or GNU timeout.) Interrupts would work, but I'd like to support arbitrary keys like "p" and "x".
Is there a reasonably simple way to achieve this?
Print to console while waiting for user input
Write one function that creates the output (examples below: counter, or if you like, spin).
Write one function to read in user commands (readCommand).
Call both functions in a loop.
Set the timeouts so that key presses are read soon enough (sleep .1 and read -t.1).
function readCommand(){
    lastCommand=$1
    read -t.1 -n1 c
    if [ "$c" = "p" ]
    then
        printf "\n\r"
        return 0
    fi
    if [ "$c" = "g" ]
    then
        printf "\n\r"
        return 1
    fi
    return $lastCommand
}
function spin(){
    for i in / - \\ \| ;
    do
        printf "\r$i"
        sleep .1
    done
}
function countUp(){
    currentCount=$1
    return $(( currentCount + 1 ))
}
function counter(){
    countUp $count
    count=$?
    printf "\r$count"
    sleep .1
}
command=1
count=0
while :
do
    if [[ $command == 1 ]]
    then
        counter
    fi
    readCommand $command
    command=$?
done
The counter will stop if the user presses 'p' and resume if the user presses 'g'.
A simple script with a file descriptor and simple input redirection, leaving no temporary files to clean up. The waiting is done using read's -t parameter.
counter() {
    while ! read -t 0.05 -n1 _; do
        printf '\r\t%s' "$(date +%T.%N)"
    done
}
{
    IFS= read -p "Your name, Sir?"$'\n' -r name
    echo >&3
} 3> >(counter)
echo "Sir $name, we exit"
Example output:
Your name, Sir?
2:12:17.153951623l
Sir Kamil, we exit
I have made a change in the code you referred to.
...
while [ true ]; do
    if [ -z "$(cat /tmp/pause)" ]; then
        STOPWATCH=$(TZ=UTC date $DATE_INPUT $DATE_FORMAT | ( [[ "$NANOS_SUPPORTED" ]] && sed 's/.\{7\}$//' || cat ) )
        printf "\r\e%s" "$STOPWATCH"
        sleep 0.03
    fi
done
So what you need now is a shell script that waits for the "p" char from stdin and writes 1 > /tmp/pause, or clears /tmp/pause, to get the stopwatch paused or working.
Something like:
while read char
do
    if [ "$char" == "p" ]; then
        if [ -z "$(cat /tmp/pause)" ]; then
            echo 1 > /tmp/pause
        else
            echo > /tmp/pause
        fi
        char=0
    fi
done < /dev/stdin
I'm trying to write a bash script that will get the output of a command that runs in the background. Unfortunately I can't get it to work, the variable I assign the output to is empty - if I replace the assignment with an echo command everything works as expected though.
#!/bin/bash
function test {
    echo "$1"
}
echo $(test "echo") &
wait
a=$(test "assignment") &
wait
echo $a
echo done
This code produces the output:
echo
done
Changing the assignment to
a=`echo $(test "assignment") &`
works, but it seems like there should be a better way of doing this.
Bash has indeed a feature called Process Substitution to accomplish this.
$ echo <(yes)
/dev/fd/63
Here, the expression <(yes) is replaced with a pathname of a (pseudo device) file that is connected to the standard output of an asynchronous job yes (which prints the string y in an endless loop).
Now let's try to read from it:
$ cat /dev/fd/63
cat: /dev/fd/63: No such file or directory
The problem here is that the yes process terminated in the meantime because it received a SIGPIPE (it had no readers on stdout).
The solution is the following construct
$ exec 3< <(yes) # Save stdout of the 'yes' job as (input) fd 3.
This opens the file as input fd 3 before the background job is started.
You can now read from the background job whenever you prefer. For a stupid example
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Note that this has slightly different semantics than having the background job write to a drive-backed file: the background job will be blocked when the buffer is full (you empty the buffer by reading from the fd). By contrast, writing to a drive-backed file only blocks when the hard drive doesn't respond.
Process substitution is not a POSIX sh feature.
Here's a quick hack to give an asynchronous job drive backing (almost) without assigning a filename to it:
$ yes > backingfile & # Start job in background writing to a new file. Do also look at `mktemp(3)` and the `sh` option `set -o noclobber`
$ exec 3< backingfile # open the file for reading in the current shell, as fd 3
$ rm backingfile # remove the file. It will disappear from the filesystem, but there is still a reader and a writer attached to it which both can use it.
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Linux also recently added the O_TMPFILE open flag, which makes such hacks possible without the file ever being visible at all. I don't know if bash already supports it.
UPDATE:
@rthur, if you want to capture the whole output from fd 3, then use
output=$(cat <&3)
But note that you can't capture binary data in general: It's only a defined operation if the output is text in the POSIX sense. The implementations I know simply filter out all NUL bytes. Furthermore POSIX specifies that all trailing newlines must be removed.
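The trailing-newline removal is easy to demonstrate:
out=$(printf 'data\n\n\n')   # three trailing newlines written
printf '%s' "$out" | od -c   # od shows only: d a t a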
(Please note also that capturing the output will result in OOM if the writer never stops; yes never stops. Naturally, the same problem holds even for read if the line separator is never written.)
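If the writer may never terminate, one way to stay safe is to bound the capture, e.g. (a sketch):
output=$(head -n 3 <&3)   # capture at most 3 lines from fd 3, then stop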
One very robust way to deal with coprocesses in Bash is to use... the coproc builtin.
Suppose you have a script or function called banana you wish to run in the background, capture all its output while doing some stuff, and wait until it's done. I'll simulate it with this:
banana() {
    for i in {1..4}; do
        echo "gorilla eats banana $i"
        sleep 1
    done
    echo "gorilla says thank you for the delicious bananas"
}
stuff() {
    echo "I'm doing this stuff"
    sleep 1
    echo "I'm doing that stuff"
    sleep 1
    echo "I'm done doing my stuff."
}
You will then run banana with coproc like so:
coproc bananafd { banana; }
This is like running banana &, but with the following extras: it creates two file descriptors, stored in the array bananafd (index 0 for output and index 1 for input). You'll capture the output of banana with the read builtin:
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
Try it:
#!/bin/bash
banana() {
    for i in {1..4}; do
        echo "gorilla eats banana $i"
        sleep 1
    done
    echo "gorilla says thank you for the delicious bananas"
}
stuff() {
    echo "I'm doing this stuff"
    sleep 1
    echo "I'm doing that stuff"
    sleep 1
    echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
Caveat: you must be done with stuff before banana ends! If the gorilla is quicker than you:
#!/bin/bash
banana() {
    for i in {1..4}; do
        echo "gorilla eats banana $i"
    done
    echo "gorilla says thank you for the delicious bananas"
}
stuff() {
    echo "I'm doing this stuff"
    sleep 1
    echo "I'm doing that stuff"
    sleep 1
    echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
In this case, you'll obtain an error like this one:
./banana: line 22: read: : invalid file descriptor specification
You can check whether it's too late (i.e., whether you've taken too long doing your stuff) because after the coproc is done, bash removes the values in the array bananafd, and that's why we obtained the previous error.
#!/bin/bash
banana() {
    for i in {1..4}; do
        echo "gorilla eats banana $i"
    done
    echo "gorilla says thank you for the delicious bananas"
}
stuff() {
    echo "I'm doing this stuff"
    sleep 1
    echo "I'm doing that stuff"
    sleep 1
    echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
if [[ -n ${bananafd[0]} ]]; then
    IFS= read -r -d '' -u "${bananafd[0]}" banana_output
    echo "$banana_output"
else
    echo "oh no, I took too long doing my stuff..."
fi
Finally, if you really don't want to miss any of gorilla's moves, even if you take too long for your stuff, you could copy banana's file descriptor to another fd, 3 for example, do your stuff and then read from 3:
#!/bin/bash
banana() {
    for i in {1..4}; do
        echo "gorilla eats banana $i"
        sleep 1
    done
    echo "gorilla says thank you for the delicious bananas"
}
stuff() {
    echo "I'm doing this stuff"
    sleep 1
    echo "I'm doing that stuff"
    sleep 1
    echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
# Copy file descriptor bananafd[0] to 3
exec 3>&${bananafd[0]}
stuff
IFS= read -d '' -u 3 output
echo "$output"
This will work very well! The last read will also play the role of wait, so output will contain the complete output of banana.
That was great: no temp files to deal with (bash handles everything silently) and 100% pure bash!
Hope this helps!
One way to capture a background command's output is to redirect it to a file and capture the output from the file after the background process has ended:
test "assignment" > /tmp/_out &
wait
a=$(</tmp/_out)
I also use file redirections. Like:
exec 3< <({ sleep 2; echo 12; }) # Launch as a job, stdout -> fd 3
cat <&3 # Blocking read from fd 3
A more realistic case: if I want the output of 4 parallel workers (toto, titi, tata and tutu), I redirect each one to a different file descriptor (in the fd variable). Reading these file descriptors will then block until EOF, i.e. until the pipe is broken, i.e. until the command has completed:
#!/usr/bin/env bash
# Declare data to be forked
a_value=(toto titi tata tutu)
msg=""
# Spawn child sub-processes
for i in {0..3}; do
    ((fd=50+i))
    cmd="echo ${a_value[$i]}"
    echo -e "1/ Launching command: $cmd with file descriptor: $fd!"
    eval "exec $fd< <({ sleep $((i)); echo ${a_value[$i]}; })"
    a_pid+=($!) # Store pid
done
# Join children: wait for them all and collect stdout
for i in {0..3}; do
    ((fd=50+i))
    cmd="echo ${a_value[$i]}"
    echo -e "2/ Getting result of: $cmd with file descriptor: $fd!"
    msg+="$(cat <&$fd)\n"
done
# Print result
echo -e "===========================\nResult:"
echo -e "$msg"
Should output:
1/ Launching command: echo toto with file descriptor: 50!
1/ Launching command: echo titi with file descriptor: 51!
1/ Launching command: echo tata with file descriptor: 52!
1/ Launching command: echo tutu with file descriptor: 53!
2/ Getting result of: echo toto with file descriptor: 50!
2/ Getting result of: echo titi with file descriptor: 51!
2/ Getting result of: echo tata with file descriptor: 52!
2/ Getting result of: echo tutu with file descriptor: 53!
===========================
Result:
toto
titi
tata
tutu
Note 1: coproc supports only one coprocess, not multiple.
Note 2: The wait command is buggy in old bash versions (4.2) and cannot retrieve the status of the jobs I launched. It works well in bash 5, but file redirection works with all versions.
Just group the commands when you run them in the background, and wait for both.
{ echo a & echo b & wait; } | nl
Output will be:
1 a
2 b
But notice that the output can be out of order if the second task runs faster than the first.
{ { sleep 1; echo a; } & echo b & wait; } | nl
Reverse output:
1 b
2 a
If it is necessary to separate the output of both background jobs, it is necessary to buffer the output somewhere, typically in a file. Example:
#! /bin/bash
t0=$(date +%s) # Get start time
trap 'rm -f "$ta" "$tb"' EXIT # Remove temp files on exit.
ta=$(mktemp) # Create temp file for job a.
tb=$(mktemp) # Create temp file for job b.
{ exec >$ta; echo a1; sleep 2; echo a2; } & # Run job a.
{ exec >$tb; echo b1; sleep 3; echo b2; } & # Run job b.
wait # Wait for the jobs to finish.
cat "$ta" # Print output of job a.
cat "$tb" # Print output of job b.
t1=$(date +%s) # Get end time
echo "t1 - t0: $((t1-t0))" # Display execution time.
The overall runtime of the script is three seconds, although the combined sleeping time of both background jobs is five seconds. And the output of the background jobs is in order.
a1
a2
b1
b2
t1 - t0: 3
You can also use a memory buffer to store the output of your jobs. But this works only if your buffer is big enough to store the whole output of your jobs.
#! /bin/bash
t0=$(date +%s)
trap 'rm -f /tmp/{a,b}' EXIT
mkfifo /tmp/{a,b}
buffer() { dd of="$1" status=none iflag=fullblock bs=1K; }
pids=()
{ echo a1; sleep 2; echo a2; } > >(buffer /tmp/a) &
pids+=($!)
{ echo b1; sleep 3; echo b2; } > >(buffer /tmp/b) &
pids+=($!)
# Wait only for the jobs but not for the buffering `dd`.
wait "${pids[#]}"
# This will wait for `dd`.
cat /tmp/{a,b}
t1=$(date +%s)
echo "t1 - t0: $((t1-t0))"
The above will also work with cat instead of dd, but then you cannot control the buffer size.
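That variant would simply be:
buffer() { cat > "$1"; }   # no equivalent of dd's bs/fullblock control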
If you have GNU Parallel you can probably use parset:
myfunc() {
    sleep 3
    echo "The input was"
    echo "$@"
}
export -f myfunc
parset a,b,c myfunc ::: myarg-a "myarg b" myarg-c
echo "$a"
echo "$b"
echo "$c"
See: https://www.gnu.org/software/parallel/parset.html
I have a huge bash script and I want to log specific blocks of code to specific small log files (instead of just one huge log file).
I have the following two methods:
# in this case, 'log' is a bash function
# Using code block & piping
{
    # ... bash code ...
} | log "file name"
# Using Process Substitution
log "file name" < <(
    # ... bash code ...
)
Both methods may interfere with the proper execution of the bash script, e.g. when assigning values to a variable (like the problem presented here).
How do you suggest logging the output of commands to log files?
Edit:
This is what I tried to do (among many other variations), but it doesn't work as expected:
function log()
{
    if [ -z "$counter" ]; then
        counter=1
        echo "" >> "./General_Log_File" # Create the summary log file
    else
        (( ++counter ))
    fi
    echo "" > "./${counter}_log_file" # Create specific log file
    # Display text-to-be-logged on screen & add it to the summary log file
    # & write text-to-be-logged to its corresponding log file
    exec 1> >(tee "./${counter}_log_file" | tee -a "./General_Log_File") 2>&1
}
log # Logs the following code block
{
    # ... Many bash commands ...
}
log # Logs the following code block
{
    # ... Many bash commands ...
}
The results of execution vary: sometimes the log files are created and sometimes they aren't (which raises an error).
You could try something like this:
function log()
{
    local logfile=$1
    local errfile=$2
    exec > "$logfile"
    exec 2> "$errfile" # if $errfile is not an empty string
}
log $fileA $errfileA
echo stuff
log $fileB $errfileB
echo more stuff
This would redirect all stdout/stderr from current process to a file without any subprocesses.
Edit: The below might then be a good solution, but it is not tested:
pipe=$(mktemp -u) # -u: pick an unused name without creating the file (mknod fails on an existing file)
mknod "$pipe" p
exec 1>"$pipe"
function log()
{
    if ! [[ -z "$teepid2" ]]; then
        kill $teepid2
    else
        tee < "$pipe" general_log_file &
        teepid1=$!
        count=1
    fi
    tee < "$pipe" "${count}_logfile" &
    teepid2=$!
    (( ++count ))
}
log
echo stuff
log
echo stuff2
if ! [[ -z "$teepid1" ]]; then kill $teepid1; fi
Thanks to Sahas, I managed to achieve the following solution:
function log()
{
    [ -z "$counter" ] && counter=1 || (( ++counter ))
    if [ -n "$teepid" ]; then
        exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
                       # command in the bg process
        wait $teepid   # wait for bg process to exit
    fi
    # Display text-to-be-logged on screen and
    # write it to the summary log & to its corresponding log file
    ( tee "${counter}.log" < "$pipe" | tee -a "Summary.log" 1>&4 ) &
    teepid=$!
    exec 1>"$pipe" 2>&1 # redirect stdout & stderr to the pipe
}
# Create temporary FIFO/pipe
pipe_dir=$(mktemp -d)
pipe="${pipe_dir}/cmds_output"
mkfifo "$pipe"
exec 4<&1 # save value of FD1 to FD4
log # Logs the following code block
{
    # ... Many bash commands ...
}
log # Logs the following code block
{
    # ... Many bash commands ...
}
if [ -n "$teepid" ]; then
exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
# command in the bg process
wait $teepid # wait for bg process to exit
fi
It works - I tested it.
References:
Force bash script to use tee without piping from the command line (superuser.com) - helped a lot
I/O Redirection (tldp.org)
$! - PID Variable (tldp.org)
TEST Operators: Binary Comparison (tldp.org)
For simple redirection of bash code block, without using a dedicated function, do:
(
    echo "log this block of code"
    # commands ...
    # ...
    # ...
) &> output.log
I am having trouble coming up with the right combination of semicolons and/or braces. I'd like to do this, but as a one-liner from the command line:
while [ 1 ]
do
    foo
    sleep 2
done
while true; do foo; sleep 2; done
By the way, if you type it as a multiline (as you are showing) at the command prompt and then call the history with arrow up, you will get it on a single line, correctly punctuated.
$ while true
> do
> echo "hello"
> sleep 2
> done
hello
hello
hello
^C
$ <arrow up> while true; do echo "hello"; sleep 2; done
It's also possible to use the sleep command in while's condition, which makes the one-liner look cleaner, imho.
while sleep 2; do echo thinking; done
Colon is always "true":
while :; do foo; sleep 2; done
You can use semicolons to separate statements:
$ while [ 1 ]; do foo; sleep 2; done
You can also make use of the until command:
until ((0)); do foo; sleep 2; done
Note that in contrast to while, until would execute the commands inside the loop as long as the test condition has an exit status which is not zero.
Using a while loop:
while read i; do foo; sleep 2; done < /dev/urandom
Using a for loop:
for ((;;)); do foo; sleep 2; done
Another way using until:
until [ ]; do foo; sleep 2; done
Using while:
while true; do echo 'while'; sleep 2s; done
Using for Loop:
for ((;;)); do echo 'forloop'; sleep 2; done
Using recursion (a little different from the above; a keyboard interrupt won't stop it):
list(){ echo 'recursion'; sleep 2; list; } && list;
A very simple infinite loop.. :)
while true ; do continue ; done
For your question it would be:
while true; do foo ; sleep 2 ; done
For simple process watching, use watch instead.
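For example:
watch -n 2 foo   # re-runs foo every 2 seconds and redraws its output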
I like to use semicolons only for the while statement,
and the && operator to make the loop do more than one thing...
So I always do it like this:
while true ; do echo Launching Spaceship into orbit && sleep 5s && /usr/bin/launch-mechanism && echo Launching in T-5 && sleep 1s && echo T-4 && sleep 1s && echo T-3 && sleep 1s && echo T-2 && sleep 1s && echo T-1 && sleep 1s && echo liftoff ; done
If you want the while loop to stop after some condition, and your foo command returns non-zero when this condition is met then you can get the loop to break like this:
while foo; do echo 'sleeping...'; sleep 5; done;
For example, if the foo command is deleting things in batches, and it returns 1 when there is nothing left to delete.
This works well if you have a custom script that needs to run a command many times until some condition. You write the script to exit with 1 when the condition is met and exit with 0 when it should be run again.
For example, say you have a python script batch_update.py which updates 100 rows in a database and returns 0 if there are more to update and 1 if there are no more. The following command will let you update rows 100 at a time, sleeping for 5 seconds between updates:
while batch_update.py; do echo 'sleeping...'; sleep 5; done;
You don't even need to use do and done. For infinite loops I find it more readable to use for with curly brackets. For example:
for ((;;)) { date ; sleep 1 ; }
This works in bash and zsh. Doesn't work in sh.
If I can give two practical examples (with a bit of "emotion").
This writes the names of all files ending with ".jpg" in the folder "img":
for f in *; do if [ "${f#*.}" == 'jpg' ]; then echo "$f"; fi; done
This deletes them:
for f in *; do if [ "${f#*.}" == 'jpg' ]; then rm -r "$f"; fi; done
Just trying to contribute.
You can try this too.
WARNING: you should not do this, but since the question asks for an infinite loop with no end... this is how you could do it:
while [[ 0 -ne 1 ]]; do echo "it's looping"; sleep 2; done
You can also put that loop in the background (e.g. when you need to disconnect from a remote machine):
nohup bash -c "while true; do aws s3 sync xml s3://bucket-name/xml --profile=s3-profile-name; sleep 3600; done &"