## bash script
foo(){
sleep 0.0001
read -s -t 0.0002 var ## grab the pipeline input, if any
if [ -n "$var" ];then
echo "has pipe input"
fi
if [ $# -gt 0 ];then
echo "has std input"
fi
read -s -t 30 var1 #1 # expected to wait 30 seconds, but actually doesn't
echo "failed"
read -s -t 30 var2 #2
echo "failed again"
}
echo "ok1"| foo "ok"
## output:
has pipe input
has arguments
failed
failed again
If foo has pipeline input, the read commands at #1 and #2 return immediately instead of waiting for the TIMEOUT.
In my real script there are three requirements:
1. The function must accept pipeline input and parameters at the same time (I want my function to take its configuration from pipeline input, which I think will be convenient; for example:)
foo(){
local config
sleep 0.0001
read -t 0.0002 config
eval "$config"
}
Then I can pass configuration like this:
foo para1 para2 <<EOF
G_TIMEOUT=300
G_ROW_SPACE=2
G_LINE_NUM=10
EOF
2. In my function I need to read user input from the keyboard (I need to interact with the user using read).
3. Waiting for user input should have a timeout (I want to implement a screensaver: if there is no user action for TIMEOUT seconds, a screensaver script is called; after any keypress the screensaver returns and we go back to waiting for user input).
Is there a way to redirect the pipeline input to fd 3 after I have read it, close fd 3 to break the pipe, then reopen fd 0 on the keyboard and wait for user input?
It doesn't wait for input because it has reached the end of the pipe.
echo "ok1" | ... writes a single line to the pipe, then closes it. The first read in the function reads ok1 into var. All other read calls return immediately because there is no more data to read and no chance of more data appearing later because the write end of the pipe has already been closed.
If you want the pipe to stay open, you have to do something like
{ echo ok1; sleep 40; echo ok2; } | foo
Because foo has pipeline input, stdin of the child process is redirected to the pipe automatically; redirecting standard input back to the keyboard (/dev/tty) after reading the pipeline input solves the problem.
Thanks to the folks who gave me that clue; it is not a problem with the read command, it is actually an fd problem.
foo(){
local config
sleep 0.0001
read -t 0.0002 config
if [ -n "$config" ];then
config=$(cat -)
fi
echo "$config"
exec 3</dev/tty ## open the terminal on fd 3
read -t 10 -u 3 input ## read from the terminal instead of the exhausted pipe
echo "success!"
}
A better approach:
foo(){
local config
sleep 0.0001
read -t 0.0002 config
if [ -n "$config" ];then
config=$(cat -)
fi
exec 0<&- ## close the current pipeline input
exec 0</dev/tty ## reopen fd 0 on the terminal
read -t 10 input ## read now waits for keyboard input :) good!
echo "success!"
}
Furthermore, if I could detect whether the current input is a pipe or the terminal, I wouldn't need the read config trick to test for pipeline input. But how to accomplish that? [ -t 0 ] is a good idea.
An even better approach:
foo(){
local config
if [ ! -t 0 ];then
config=$(cat -)
exec 0<&- ## close the current pipeline input
exec 0</dev/tty ## reopen fd 0 on the terminal
fi
read -t 10 input ## read now waits for keyboard input :) great!
echo "success!"
}
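For completeness, here is how this last version might be invoked in both modes (a sketch; the parameter and variable names are reused from the examples above, and the heredoc stands in for any non-terminal input):
## with redirected input: [ ! -t 0 ] is true, config is slurped, then fd 0 is reattached to the terminal
foo para1 para2 <<EOF
G_TIMEOUT=300
G_ROW_SPACE=2
EOF
## without redirected input: fd 0 is already the terminal, so read waits on the keyboard directly
foo para1 para2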
As a supplement to melpomene's answer, you can see this by executing the following lines:
$ echo foo | { read -t 10 var1; echo $?; read -t 10 var2; echo $?; }
0
1
$ read -t 1 var; echo $?
142
These commands print the return codes of read, and the manual states:
The return code is zero, unless end-of-file is encountered, read times out (in which case the return code is greater than 128), or an invalid
file descriptor is supplied as the argument to -u
source: man bash
From this we see that the second read in the first command fails because EOF is reached.
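A minimal sketch of how those return codes can be told apart in a script (the 5-second timeout is arbitrary):
read -t 5 line
status=$?
if [ "$status" -eq 0 ]; then
echo "read a line: $line"
elif [ "$status" -gt 128 ]; then
echo "read timed out"
else
echo "end of file or other error (status $status)"
fi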
Related
I'm trying to write a bash script that will get the output of a command that runs in the background. Unfortunately I can't get it to work: the variable I assign the output to is empty. If I replace the assignment with an echo command, everything works as expected, though.
#!/bin/bash
function test {
echo "$1"
}
echo $(test "echo") &
wait
a=$(test "assignment") &
wait
echo $a
echo done
This code produces the output:
echo
done
Changing the assignment to
a=`echo $(test "assignment") &`
works, but it seems like there should be a better way of doing this.
Bash has indeed a feature called Process Substitution to accomplish this.
$ echo <(yes)
/dev/fd/63
Here, the expression <(yes) is replaced with a pathname of a (pseudo device) file that is connected to the standard output of an asynchronous job yes (which prints the string y in an endless loop).
Now let's try to read from it:
$ cat /dev/fd/63
cat: /dev/fd/63: No such file or directory
The problem here is that the yes process terminated in the meantime because it received a SIGPIPE (it had no readers on stdout).
The solution is the following construct
$ exec 3< <(yes) # Save stdout of the 'yes' job as (input) fd 3.
This opens the file as input fd 3 before the background job is started.
You can now read from the background job whenever you prefer. For a stupid example
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Note that this has slightly different semantics than having the background job write to a drive-backed file: the background job will be blocked when the buffer is full (you empty the buffer by reading from the fd). By contrast, writing to a drive-backed file only blocks when the hard drive doesn't respond.
Process substitution is not a POSIX sh feature.
Here's a quick hack to give an asynchronous job drive backing (almost) without assigning a filename to it:
$ yes > backingfile & # Start job in background writing to a new file. Do also look at `mktemp(3)` and the `sh` option `set -o noclobber`
$ exec 3< backingfile # open the file for reading in the current shell, as fd 3
$ rm backingfile # remove the file. It will disappear from the filesystem, but there is still a reader and a writer attached to it which both can use it.
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Linux also recently gained the O_TMPFILE option, which makes such hacks possible without the file ever being visible at all. I don't know if bash supports it yet.
UPDATE:
@rthur, if you want to capture the whole output from fd 3, then use
output=$(cat <&3)
But note that you can't capture binary data in general: It's only a defined operation if the output is text in the POSIX sense. The implementations I know simply filter out all NUL bytes. Furthermore POSIX specifies that all trailing newlines must be removed.
(Note also that capturing the whole output will eventually exhaust memory if the writer never stops, and yes never stops. The same problem holds even for read if the line separator is never written.)
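If the writer never terminates, one way to keep the capture bounded is to read a fixed amount and then close the fd (a sketch; the 100-line limit is arbitrary):
exec 3< <(yes)
output=$(head -n 100 <&3) # take at most 100 lines instead of capturing forever
exec 3<&- # close our end; the writer receives SIGPIPE on its next write and exits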
One very robust way to deal with coprocesses in Bash is to use... the coproc builtin.
Suppose you have a script or function called banana that you wish to run in the background, capturing all its output while doing some stuff, and then wait until it's done. I'll simulate it with this:
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
You will then run banana with coproc like so:
coproc bananafd { banana; }
This is like running banana & but with the following extras: it creates two file descriptors, stored in the array bananafd (index 0 for output, index 1 for input). You capture the output of banana with the read builtin:
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
Try it:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
Caveat: you must be done with stuff before banana ends! If the gorilla is quicker than you:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
In this case, you'll obtain an error like this one:
./banana: line 22: read: : invalid file descriptor specification
You can check whether it's too late (i.e., whether you've taken too long doing your stuff): after the coproc is done, bash unsets the array bananafd, and that's why we obtained the previous error.
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
if [[ -n ${bananafd[0]} ]]; then
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
else
echo "oh no, I took too long doing my stuff..."
fi
Finally, if you really don't want to miss any of gorilla's moves, even if you take too long for your stuff, you could copy banana's file descriptor to another fd, 3 for example, do your stuff and then read from 3:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
# Copy file descriptor bananafd[0] to fd 3
exec 3<&"${bananafd[0]}"
stuff
IFS= read -d '' -u 3 output
echo "$output"
This works very well! The last read also plays the role of wait, so output will contain the complete output of banana.
That was great: no temp files to deal with (bash handles everything silently) and 100% pure bash!
Hope this helps!
One way to capture a background command's output is to redirect it to a file, and to read the file after the background process has ended:
test "assignment" > /tmp/_out &
wait
a=$(</tmp/_out)
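A slightly safer variant of the same idea uses mktemp so that concurrent runs don't clobber each other (a sketch, reusing the test function from the question):
out=$(mktemp)
test "assignment" > "$out" &
wait
a=$(<"$out")
rm -f "$out"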
I also use file redirections. Like:
exec 3< <({ sleep 2; echo 12; }) # Launch as a job, stdout -> fd 3
cat <&3 # Blocking read from fd 3
A more realistic case:
If I want the output of 4 parallel workers: toto, titi, tata and tutu,
I redirect each one to a different file descriptor (computed into the fd variable).
Reading those file descriptors then blocks until EOF, i.e. until the pipe is closed because the command has completed.
#!/usr/bin/env bash
# Declare data to be forked
a_value=(toto titi tata tutu)
msg=""
# Spawn child sub-processes
for i in {0..3}; do
((fd=50+i))
echo "1/ Launching command '${a_value[$i]}' with file descriptor: $fd!"
eval "exec $fd< <({ sleep $i; echo ${a_value[$i]}; })"
done
# Join the children: wait for them all and collect their output
for i in {0..3}; do
((fd=50+i))
echo "2/ Getting result of '${a_value[$i]}' with file descriptor: $fd!"
msg+="$(cat <&$fd)\n"
done
# Print result
echo -e "===========================\nResult:"
echo -e "$msg"
Should output:
1/ Launching command 'toto' with file descriptor: 50!
1/ Launching command 'titi' with file descriptor: 51!
1/ Launching command 'tata' with file descriptor: 52!
1/ Launching command 'tutu' with file descriptor: 53!
2/ Getting result of 'toto' with file descriptor: 50!
2/ Getting result of 'titi' with file descriptor: 51!
2/ Getting result of 'tata' with file descriptor: 52!
2/ Getting result of 'tutu' with file descriptor: 53!
===========================
Result:
toto
titi
tata
tutu
Note 1: coproc supports only a single coprocess, not multiple ones.
Note 2: the wait builtin is buggy in old bash versions (4.2) and cannot retrieve the status of the jobs I launched. It works well in bash 5, but the file-descriptor redirection works for all versions.
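If you need to guard against that older behaviour, a quick version check is one option (a minimal sketch):
if (( BASH_VERSINFO[0] < 5 )); then
echo "warning: 'wait' may not report job status reliably in bash $BASH_VERSION" >&2
fi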
Just group the commands when you run them in the background, and wait for both.
{ echo a & echo b & wait; } | nl
Output will be:
1 a
2 b
But notice that the output can be out of order, if the second task runs faster than the first.
{ { sleep 1; echo a; } & echo b & wait; } | nl
Reverse output:
1 b
2 a
If you need to keep the output of the two background jobs separate, you have to buffer the output somewhere, typically in a file. Example:
#! /bin/bash
t0=$(date +%s) # Get start time
trap 'rm -f "$ta" "$tb"' EXIT # Remove temp files on exit.
ta=$(mktemp) # Create temp file for job a.
tb=$(mktemp) # Create temp file for job b.
{ exec >$ta; echo a1; sleep 2; echo a2; } & # Run job a.
{ exec >$tb; echo b1; sleep 3; echo b2; } & # Run job b.
wait # Wait for the jobs to finish.
cat "$ta" # Print output of job a.
cat "$tb" # Print output of job b.
t1=$(date +%s) # Get end time
echo "t1 - t0: $((t1-t0))" # Display execution time.
The overall runtime of the script is three seconds, although the combined sleeping time of both background jobs is five seconds. And the output of the background jobs is in order.
a1
a2
b1
b2
t1 - t0: 3
You can also use a memory buffer to store the output of your jobs. But this only works if your buffer is big enough to store the whole output of your jobs.
#! /bin/bash
t0=$(date +%s)
trap 'rm -f /tmp/{a,b}' EXIT
mkfifo /tmp/{a,b}
buffer() { dd of="$1" status=none iflag=fullblock bs=1K; }
pids=()
{ echo a1; sleep 2; echo a2; } > >(buffer /tmp/a) &
pids+=($!)
{ echo b1; sleep 3; echo b2; } > >(buffer /tmp/b) &
pids+=($!)
# Wait only for the jobs but not for the buffering `dd`.
wait "${pids[#]}"
# This will wait for `dd`.
cat /tmp/{a,b}
t1=$(date +%s)
echo "t1 - t0: $((t1-t0))"
The above also works with cat instead of dd, but then you cannot control the buffer size.
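For reference, the cat variant of the buffer helper would simply be:
buffer() { cat > "$1"; }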
If you have GNU Parallel you can probably use parset:
myfunc() {
sleep 3
echo "The input was"
echo "$#"
}
export -f myfunc
parset a,b,c myfunc ::: myarg-a "myarg b" myarg-c
echo "$a"
echo "$b"
echo "$c"
See: https://www.gnu.org/software/parallel/parset.html
My professor gave me this confusing assignment:
write a program/script which asks the user for a filename to get input from, a filename to send errors to, and a filename to write output to; if the filenames for output and errors are the same, use the proper redirects
the input file will have instructions for your program, such as:
CHANGE: this command will tell your program to change the redirects, like this:
CHANGE STDIN newfilename
CHANGE STDOUT newoutfile
CHANGE STDERR newerrorfile
STOP: stop the program and exit
STOP
any other input should be written to the output file; write the number of lines copied to standard error
your error file might look like this:
1
2
ERROR: Input file not found
filename
CHANGE: redirecting output to filename
3
4
CHANGE: redirecting errors to
at this point the new error file would start with
5
5
6
STOP requested
here are some sample files:
> fileio.test.1
line 1 of first input file
line 2 of first input file
CHANGE STDIN stdin.1
since input has changed this line should never get read
EXIT
> stdin.1
line 1 of stdin.1
line 2 of stdin.1
CHANGE STDERR stderr.1
line 3 of stdin.1
CHANGE STDOUT stdout.1
line 4 of stdin.1
CHANGE STDOUT stdout.2
line 5 of stdin.1
CHANGE STDIN stdin.2
since input has changed this line should never get read
EXIT
> stdin.2
this should go wherever it's supposed to
EXIT
this line shouldn't be read since our program should have exited by now
However, I am not quite sure how to interpret these directions. As a stab in the dark, I've done:
#!/bin/bash
read -p "Provide filename to receive input from: " input
read -p "Provide filename to send errors to: " errorFile
read -p "Provide filename to write output to: " writeFile
#set -x
if [ -z "$input" ]; then
echo "Input file needed."
exit 1
elif [ -z "$errorFile" ] || [ -z "$writeFile" ]; then
echo "Supply the correct number of filenames."
exit 1
elif [ "$errorFile" == "$writeFile" ]; then
exec 2> $errorFile
exec >&2
else
exec 2> $errorFile
exec 1> $writeFile
fi
while read -r line
do
eval $(echo "$line")
done< $input
Edit: For a function to interpret the input, I was thinking I could match for the specific commands in the testfiles and then trigger the corresponding bash command. Anything that doesn't match would be echo'd and directed to its appropriate file.
ChangeFD () {
if [[ "CHANGE STDE" == ${line:0:11} ]]; then
exec $(echo "2>${line:14}")
elif [[ "CHANGE STDO" == ${line:0:11} ]]; then
exec $(echo "1>${line:14}")
elif [[ "CHANGE STDI" == ${line:0:11} ]]; then
exec $(echo "0<${line:13}")
else
echo $1
fi
}
To count the number of lines (and output that count at regular intervals) you will need to create another variable.
Before your while loop
count=0
and within the while loop (probably just before the 'done < $input' line)
# increment the counter
count=$(( count + 1 ))
# output the counter var to stderr (wherever it is pointing)
echo $count >&2
exec $(echo "0<${line:13}")
This doesn't work, because redirection operators are not recognized in the replacement of $(…); just use
exec <${line:13}
instead.
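Putting that correction together, the dispatch helper might look like this (a sketch using case and prefix stripping instead of hard-coded offsets; it assumes the line variable set by the surrounding read loop):
ChangeFD () {
case $line in
"CHANGE STDERR "*) exec 2> "${line#CHANGE STDERR }" ;;
"CHANGE STDOUT "*) exec 1> "${line#CHANGE STDOUT }" ;;
"CHANGE STDIN "*) exec 0< "${line#CHANGE STDIN }" ;;
*) echo "$line" ;;
esac
}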
I've been trying to send some AT commands to my modem and want to capture the response into a variable. Here's my code:
exec 3<>/dev/ttyUSB3
echo -e "AT+CGSN\n" >&3
cat <&3
#read -r RESPONSE <&3
#echo "Response was $RESPONSE"
exec 3<&-
exec 3>&-
Results:
$ ./imei_checker.sh
AT+CGSN
356538041935676
OK
AT+CGSN
356538041935676
OK
But if I change cat to read, it doesn't work:
$ ./imei_checker.sh
Response was AT+CGSN
2 more questions:
Why does it show the duplicate output?
How do I close the file handle properly? exec 3<&- and exec 3>&- don't seem to work; I have to press Ctrl+C to get control of the terminal back.
read will only read a single line, unlike cat which will basically read and echo until end of file.
For a read version, you're best off reading with a timeout up until the point you get the OK (and storing any line that contains a lot of digits).
I think you'll find that it's not the closing of the number 3 file handle that's stopping things - it's more likely to be the cat which will continue to read/echo until an end-of-file event that isn't happening.
You can be certain of this if you just put:
echo XYZZY
immediately before the closing exec statements. If it's still in the cat, you'll never see it.
So, using a looping read version will probably fix that as well.
By way of example, here's how you can use read to do this with standard input:
#!/bin/bash
NUM=
while true ; do
read -p "> " -t 10 -r RESP <&0
if [[ $? -ge 128 ]] ; then RESP=OK ; fi
echo "Entered: $RESP"
if [[ $RESP = OK ]] ; then break ; fi
if [[ $RESP =~ ^[0-9] ]] ; then NUM=$RESP ; fi
done
echo "Finished, numerics were: '$NUM'"
It uses the timeout feature of read to detect if there's no more input (setting the input to OK so as to force loop exit). If you do get an OK before then, it exits normally anyway; the timeout simply caters for the possibility that the modem doesn't answer as expected.
The number is set initially to nothing but overwritten by any line from the "modem" that starts with a number.
Two sample runs, with and without an OK response from the "modem":
pax> ./testprog.sh
> hello
Entered: hello
> 12345
Entered: 12345
> OK
Entered: OK
Finished, numerics were: '12345'
pax> ./testprog.sh
> hello
Entered: hello
> now we wait 10 secs
Entered: now we wait 10 secs
> Entered: OK
Finished, numerics were: ''
It wouldn't be too hard to convert that to something similar with you modem device (either read <&3 or read -u3 will work just fine).
That would basically translate to your environment as:
exec 3<>/dev/ttyUSB3
echo -e "AT+CGSN\n" >&3
NUM=
while true ; do
read -t 10 -r RESP <&3
if [[ $? -ge 128 ]] ; then RESP=OK ; fi
echo "Entered: $RESP"
if [[ $RESP = OK ]] ; then break ; fi
if [[ $RESP =~ ^[0-9] ]] ; then NUM=$RESP ; fi
done
echo "Finished, numerics were: '$NUM'"
exec 3<&-
exec 3>&-
Now I haven't tested that since I don't have a modem hooked up (been on broadband for quite a while now) but it should be close to what you need, if not exact.
read takes the descriptor to read from as the argument following -u. See help read for details.
If you want to get the individual lines into variables I'd suggest to wrap read into a while:
while read -r RESPONSE <&3; do
echo "Response was $RESPONSE"
## e.g.:
[ "$RESPONSE" = "OK" ] && break
done
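Equivalently, read can take the descriptor via -u instead of a redirection:
while read -r -u 3 RESPONSE; do
echo "Response was $RESPONSE"
[ "$RESPONSE" = "OK" ] && break
done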
However, if you want "everything" that is sent back to you to reside in $RESPONSE you could do it like this:
RESPONSE="$(cat <&3)"
I need to write an infinite loop that stops when any key is pressed.
Unfortunately this one loops only when a key is pressed.
Ideas please?
#!/bin/bash
a=0
while : ; do
# dummy action
echo -n "$a "
let "a+=1"
# detect any key press
read -n 1 keypress
echo $keypress
done
echo "Thanks for using this script."
exit 0
You need to put the standard input in non-blocking mode. Here is an example that works:
#!/bin/bash
if [ -t 0 ]; then
SAVED_STTY="`stty --save`"
stty -echo -icanon -icrnl time 0 min 0
fi
count=0
keypress=''
while [ "x$keypress" = "x" ]; do
let count+=1
echo -ne $count'\r'
keypress="`cat -v`"
done
if [ -t 0 ]; then stty "$SAVED_STTY"; fi
echo "You pressed '$keypress' after $count loop iterations"
echo "Thanks for using this script."
exit 0
Edit 2014/12/09: Add the -icrnl flag to stty to properly catch the Return key, use cat -v instead of read in order to catch Space.
It is possible that cat reads more than one character if it is fed data fast enough; if not the desired behaviour, replace cat -v with dd bs=1 count=1 status=none | cat -v.
Edit 2019/09/05: Use stty --save to restore the TTY settings.
read has a number of characters parameter -n and a timeout parameter -t which could be used.
From bash manual:
-n nchars
read returns after reading nchars characters rather than waiting for a complete line of input, but honors a delimiter if fewer than nchars characters are read before the delimiter.
-t timeout
Cause read to time out and return failure if a complete line of input (or a specified number of characters) is not read within timeout seconds. timeout may be a decimal number with a fractional portion following the decimal point. This option is only effective if read is reading input from a terminal, pipe, or other special file; it has no effect when reading from regular files. If read times out, read saves any partial input read into the specified variable name. If timeout is 0, read returns immediately, without trying to read any data. The exit status is 0 if input is available on the specified file descriptor, non-zero otherwise. The exit status is greater than 128 if the timeout is exceeded.
However, the read builtin uses the terminal which has its own settings. So as other answers have pointed out we need to set the flags for the terminal using stty.
#!/bin/bash
old_tty=$(stty --save)
# Minimum required changes to terminal. Add -echo to avoid output to screen.
stty -icanon min 0;
while true ; do
if read -t 0; then # Input ready
read -n 1 char
echo -e "\nRead: ${char}\n"
break
else # No input
echo -n '.'
sleep 1
fi
done
stty $old_tty
Usually I don't mind breaking a bash infinite loop with a simple CTRL-C. This is the traditional way for terminating a tail -f for instance.
Pure bash: unattended user input over loop
I've done this without having to play with stty:
loop=true
while $loop; do
trapKey=
if IFS= read -d '' -rsn 1 -t .002 str; then
while IFS= read -d '' -rsn 1 -t .002 chr; do
str+="$chr"
done
case $str in
$'\E[A') trapKey=UP ;;
$'\E[B') trapKey=DOWN ;;
$'\E[C') trapKey=RIGHT ;;
$'\E[D') trapKey=LEFT ;;
q | $'\E') loop=false ;;
esac
fi
if [ "$trapKey" ] ;then
printf "\nDoing something with '%s'.\n" $trapKey
fi
echo -n .
done
This will
loop with a very small footprint (max 2 milliseconds)
react to keys cursor left, cursor right, cursor up and cursor down
exit loop with key Escape or q.
Here is another solution. It works for any key pressed, including space, enter, arrows, etc.
The original solution tested in bash:
IFS=''
if [ -t 0 ]; then stty -echo -icanon raw time 0 min 0; fi
while [ -z "$key" ]; do
read key
done
if [ -t 0 ]; then stty sane; fi
An improved solution tested in bash and dash:
if [ -t 0 ]; then
old_tty=$(stty --save)
stty raw -echo min 0
fi
while
IFS= read -r REPLY
[ -z "$REPLY" ]
do :; done
if [ -t 0 ]; then stty "$old_tty"; fi
In bash you could even leave out REPLY variable for the read command, because it is the default variable there.
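That bash-only shortening of the improved solution would look like:
if [ -t 0 ]; then
old_tty=$(stty --save)
stty raw -echo min 0
fi
while
IFS= read -r
[ -z "$REPLY" ]
do :; done
if [ -t 0 ]; then stty "$old_tty"; fi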
I found this forum post and rewrote era's post into this pretty general-use format:
# stuff before main function
printf "INIT\n\n"; sleep 2
INIT(){
starting="MAIN loop starting"; ending="MAIN loop success"
runMAIN=1; i=1; echo "0"
}; INIT
# exit script when MAIN is done, if ever (in this case counting out 4 seconds)
exitScript(){
trap - SIGINT SIGTERM # clear the trap
kill -- -$$ # Send SIGTERM to child/sub processes
kill $( jobs -p ) # kill any remaining processes
}; trap exitScript SIGINT SIGTERM # set trap
MAIN(){
echo "$starting"
sleep 1
echo "$i"; let "i++"
if (($i > 4)); then printf "\nexiting\n"; exitScript; fi
echo "$ending"; echo
}
# main loop running in a subshell due to the '&' after 'done'
{ while ((runMAIN)); do
if ! MAIN; then runMAIN=0; fi
done; } &
# --------------------------------------------------
tput smso
# echo "Press any key to return \c"
tput rmso
oldstty=`stty -g`
stty -icanon -echo min 1 time 0
dd bs=1 count=1 >/dev/null 2>&1
stty "$oldstty"
# --------------------------------------------------
# everything after this point will occur after user inputs any key
printf "\nYou pressed a key!\n\nGoodbye!\n"
Run this script to see it in action.