bash 4.2, CentOS 7
test.sh:
LOG_FILE=./logs/result.log
exec > >(tee -a ${LOG_FILE} )
exec 2> >(tee -a ${LOG_FILE} >&2)
echo
for x in {1..3}
do
echo $x
done
# I want the second part to be displayed but not logged.
for x in {4..6}
do
echo $x
done
# Recommence logging
for x in {7..9}
do
echo $x
done
desired output
console
1
2
3
4
5
6
7
8
9
result.log
1
2
3
7
8
9
question
Can I stop or revert the redirection of exec > >(tee -a ${LOG_FILE} ) inside the same shell?
Or should I use a different method than tee?
EDIT2
I'm struggling to get this function out of the log.
spinner()
{
trap "kill 0" SIGINT
spin='-\|/'
i=0
while kill -0 $1 2>/dev/null
do
i=$(( (i+1) %4 ))
printf "\e[1;33m" >&3
printf "\r${spin:$i:1}" >&3
printf "\e[m" >&3
sleep .1
done
}
I put >&3 in the three printf lines, but it results in a "Bad file descriptor" error. The function gets a PID parameter and spins the wheel until the process with that PID is done.
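For the record, "Bad file descriptor" means fd 3 was never opened in the shell running spinner. A minimal sketch of the fix, assuming the exec 3>&1 setup from the answer below:
exec 3>&1                      # open fd 3 as a copy of the original stdout, before spinner runs
exec > >(tee -a "$LOG_FILE")   # logging starts here
sleep 10 &
spinner $!                     # the >&3 writes now bypass tee and go straight to the terminal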
You need to save a copy of the original standard output so that you can send the middle section to it once more, like this:
#!/bin/bash
LOG_DIR="./logs"
LOG_FILE="$LOG_DIR/result.log"
mkdir -p "$LOG_DIR"
exec 3>&1 # Save a copy of standard output as file descriptor 3
exec > >(tee -a ${LOG_FILE} )
exec 2> >(tee -a ${LOG_FILE} >&2)
echo
for x in {1..3}
do
echo $x
done
# I want the second part to be displayed but not logged.
{
for x in {4..6}
do
echo $x
done
} 1>&3
# Recommence logging
for x in {7..9}
do
echo $x
done
There are multiple ways of restoring the middle section to write just to the original standard output, but I think the { … } 1>&3 notation is probably the simplest.
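For completeness, a sketch of one such alternative: switch stdout back with exec instead of scoping the redirection to a group. This is more stateful, since logging stays off until you redirect again:
exec 1>&3                      # stop logging: stdout goes back to the saved terminal fd
for x in {4..6}; do echo $x; done
exec > >(tee -a ${LOG_FILE})   # recommence logging through a fresh tee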
Beware: you have two copies of tee, both writing to the same file. If you're lucky (careful), the one processing standard error won't have anything to write to the log file. You might be better off using 2>&1, as in exec > >(tee …) 2>&1, and omitting the exec 2> >(tee …) line. Also note that the prompt may appear before the last of the logging output.
$ bash script.sh
4
5
6
$
1
2
3
7
8
9
echo Hi
Hi
$
I typed the echo Hi at the $ prompt that appeared on the line after the 6.
The file logs/result.log contained one copy of the sequence "blank line, numbers 1-3, numbers 7-9" for each time I tested the script.
See also the Bash manual on:
I/O Redirection
Duplicating file descriptors
Command grouping for an explanation of { … }.
Related
I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B ?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)   # 'lt' is deliberately misspelled so the command fails
echo $var
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
# A more classic way of executing a command, which I always follow, is shown below.
# This way I am always in control of what is going on and can act accordingly
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
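A sketch of a small variation on that classic pattern, if you also want the combined output for later inspection; the exit status of the assignment is the command's own status:
if output=$(somecommand 2>&1); then
echo "command succeeded"
else
echo "command failed: $output"
fi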
If you have to capture the output and stderr in different variables, then the following might help as well
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
A=$($1 /tmp 2>&4 >&3)  # run the command passed as $1 with /tmp as its argument
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it with the following
╰─ bash test.sh pwd
stdout: /tmp/somedir
stderr:
╰─ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
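If you only need stderr in a variable while stdout keeps flowing to the terminal, here is a sketch of the classic descriptor swap; it is the same idea as the exec 3/exec 4 script above, but without temp files (some_command stands for whatever you run):
exec 3>&1                     # save the current stdout
B=$(some_command 2>&1 1>&3)   # capture stderr; stdout escapes through fd 3
exec 3>&-                     # close the saved descriptor
echo "stderr was: $B"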
As noted in a comment, your use case may be better served by other scripting languages. An example: in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in python, ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date ‘b’
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible though. I just couldn't figure it out.
But this gives you at least access to stdout and stderr as a variable. This means you can do whatever processing you want on them as variables. It depends on your use case if this is of any help to you. Good luck :-)
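One hedged way to make the variables survive: in bash >= 4.2 the lastpipe option runs the last element of a pipeline in the current shell (in non-interactive scripts), so read can assign to a variable that persists after the pipeline:
#!/bin/bash
shopt -s lastpipe               # bash 4.2+, scripts only (job control must be off)
date b 2>&1 | IFS= read -r B    # 'date b' fails; its stderr lands in B
echo "B=$B"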
I'm trying to write a bash script that will get the output of a command that runs in the background. Unfortunately I can't get it to work, the variable I assign the output to is empty - if I replace the assignment with an echo command everything works as expected though.
#!/bin/bash
function test {
echo "$1"
}
echo $(test "echo") &
wait
a=$(test "assignment") &
wait
echo $a
echo done
This code produces the output:
echo
done
Changing the assignment to
a=`echo $(test "assignment") &`
works, but it seems like there should be a better way of doing this.
Bash indeed has a feature called Process Substitution to accomplish this.
$ echo <(yes)
/dev/fd/63
Here, the expression <(yes) is replaced with a pathname of a (pseudo device) file that is connected to the standard output of an asynchronous job yes (which prints the string y in an endless loop).
Now let's try to read from it:
$ cat /dev/fd/63
cat: /dev/fd/63: No such file or directory
The problem here is that the yes process terminated in the meantime because it received a SIGPIPE (it had no readers on stdout).
The solution is the following construct
$ exec 3< <(yes) # Save stdout of the 'yes' job as (input) fd 3.
This opens the file as input fd 3 before the background job is started.
You can now read from the background job whenever you prefer. For a stupid example
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Note that this has slightly different semantics than having the background job write to a drive backed file: the background job will be blocked when the buffer is full (you empty the buffer by reading from the fd). By contrast, writing to a drive-backed file is only blocking when the hard drive doesn't respond.
Process substitution is not a POSIX sh feature.
Here's a quick hack to give an asynchronous job drive backing (almost) without assigning a filename to it:
$ yes > backingfile & # Start job in background writing to a new file. Do also look at `mktemp(3)` and the `sh` option `set -o noclobber`
$ exec 3< backingfile # open the file for reading in the current shell, as fd 3
$ rm backingfile # remove the file. It will disappear from the filesystem, but there is still a reader and a writer attached to it, and both can keep using it.
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Linux also recently gained the O_TMPFILE open flag, which makes such hacks possible without the file ever being visible at all. I don't know whether bash supports it yet.
UPDATE:
@rthur, if you want to capture the whole output from fd 3, then use
output=$(cat <&3)
But note that you can't capture binary data in general: It's only a defined operation if the output is text in the POSIX sense. The implementations I know simply filter out all NUL bytes. Furthermore POSIX specifies that all trailing newlines must be removed.
(Please also note that capturing the whole output will run out of memory if the writer never stops, and yes never stops. Naturally the same problem holds even for read if the line separator is never written.)
One very robust way to deal with coprocesses in Bash is to use... the coproc builtin.
Suppose you have a script or function called banana you wish to run in background, capture all its output while doing some stuff and wait until it's done. I'll do the simulation with this:
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
You will then run banana with coproc like so:
coproc bananafd { banana; }
this is like running banana & but with the following extras: it creates two file descriptors that are in the array bananafd (at index 0 for output and index 1 for input). You'll capture the output of banana with the read builtin:
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
Try it:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
Caveat: you must be done with stuff before banana ends! If the gorilla is quicker than you:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
In this case, you'll obtain an error like this one:
./banana: line 22: read: : invalid file descriptor specification
You can check whether it's too late (i.e., whether you've taken too long doing your stuff): after the coproc is done, bash removes the values from the array bananafd, which is why we obtained the previous error.
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
if [[ -n ${bananafd[@]} ]]; then
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
else
echo "oh no, I took too long doing my stuff..."
fi
Finally, if you really don't want to miss any of gorilla's moves, even if you take too long for your stuff, you could copy banana's file descriptor to another fd, 3 for example, do your stuff and then read from 3:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
# Copy file descriptor bananafd[0] to fd 3
exec 3>&${bananafd[0]}
stuff
IFS= read -d '' -u 3 output
echo "$output"
This will work very well! The last read also plays the role of wait, so output will contain the complete output of banana.
That was great: no temp files to deal with (bash handles everything silently) and 100% pure bash!
Hope this helps!
One way to capture a background command's output is to redirect its output to a file, and read the file back after the background process has ended:
test "assignment" > /tmp/_out &
wait
a=$(</tmp/_out)
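A slightly safer sketch of the same idea, assuming mktemp is available; it avoids clobbering a fixed /tmp path and cleans up after itself:
out=$(mktemp)                 # unique temp file instead of /tmp/_out
trap 'rm -f "$out"' EXIT      # remove it when the script exits
test "assignment" > "$out" &
wait
a=$(<"$out")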
I also use file redirections, like this:
exec 3< <({ sleep 2; echo 12; }) # Launch as a job stdout -> fd3
cat <&3 # Lock read fd3
A more realistic case:
Say I want the output of 4 parallel workers: toto, titi, tata and tutu.
I redirect each one to a different file descriptor (computed in the fd variable).
Reading those file descriptors then blocks until EOF, that is, until the pipe is broken because the command completed:
#!/usr/bin/env bash
# Declare data to be forked
a_value=(toto titi tata tutu)
msg=""
# Spawn child sub-processes
for i in {0..3}; do
((fd=50+i))
echo -e "1/ Launching command: $cmd with file descriptor: $fd!"
eval "exec $fd< <({ sleep $((i)); echo ${a_value[$i]}; })"
a_pid+=($!) # Store pid
done
# Join child: wait them all and collect std-output
for i in {0..3}; do
((fd=50+i));
echo -e "2/ Getting result of: $cmd with file descriptor: $fd!"
msg+="$(cat <&$fd)\n"
done
# Print result
echo -e "===========================\nResult:"
echo -e "$msg"
Should output:
1/ Launching command: with file descriptor: 50!
1/ Launching command: with file descriptor: 51!
1/ Launching command: with file descriptor: 52!
1/ Launching command: with file descriptor: 53!
2/ Getting result of: with file descriptor: 50!
2/ Getting result of: with file descriptor: 51!
2/ Getting result of: with file descriptor: 52!
2/ Getting result of: with file descriptor: 53!
===========================
Result:
toto
titi
tata
tutu
Note 1: coproc supports only a single coprocess, not multiple ones.
Note 2: the wait builtin is buggy in old bash versions (4.2) and cannot retrieve the status of the jobs I launched. It works fine in bash 5, but the file-redirection approach works in all versions.
Just group the commands when you run them in the background, and wait for both.
{ echo a & echo b & wait; } | nl
Output will be:
1 a
2 b
But notice that the output can be out of order if the second task runs faster than the first.
{ { sleep 1; echo a; } & echo b & wait; } | nl
Reverse output:
1 b
2 a
If the output of the two background jobs must be kept separate, it has to be buffered somewhere, typically in a file. Example:
#! /bin/bash
t0=$(date +%s) # Get start time
trap 'rm -f "$ta" "$tb"' EXIT # Remove temp files on exit.
ta=$(mktemp) # Create temp file for job a.
tb=$(mktemp) # Create temp file for job b.
{ exec >$ta; echo a1; sleep 2; echo a2; } & # Run job a.
{ exec >$tb; echo b1; sleep 3; echo b2; } & # Run job b.
wait # Wait for the jobs to finish.
cat "$ta" # Print output of job a.
cat "$tb" # Print output of job b.
t1=$(date +%s) # Get end time
echo "t1 - t0: $((t1-t0))" # Display execution time.
The overall runtime of the script is three seconds, although the combined sleeping time of both background jobs is five seconds. And the output of the background jobs is in order.
a1
a2
b1
b2
t1 - t0: 3
You can also use a memory buffer (here a FIFO drained by dd) to store the output of your jobs. But this only works if your buffer is big enough to hold the whole output of your jobs.
#! /bin/bash
t0=$(date +%s)
trap 'rm -f /tmp/{a,b}' EXIT
mkfifo /tmp/{a,b}
buffer() { dd of="$1" status=none iflag=fullblock bs=1K; }
pids=()
{ echo a1; sleep 2; echo a2; } > >(buffer /tmp/a) &
pids+=($!)
{ echo b1; sleep 3; echo b2; } > >(buffer /tmp/b) &
pids+=($!)
# Wait only for the jobs but not for the buffering `dd`.
wait "${pids[#]}"
# This will wait for `dd`.
cat /tmp/{a,b}
t1=$(date +%s)
echo "t1 - t0: $((t1-t0))"
The above also works with cat instead of dd, but then you cannot control the buffer size.
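For reference, the cat variant mentioned above would just be the following; the buffering is then left to the pipe defaults:
buffer() { cat > "$1"; }   # same interface as the dd version, no size control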
If you have GNU Parallel you can probably use parset:
myfunc() {
sleep 3
echo "The input was"
echo "$#"
}
export -f myfunc
parset a,b,c myfunc ::: myarg-a "myarg b" myarg-c
echo "$a"
echo "$b"
echo "$c"
See: https://www.gnu.org/software/parallel/parset.html
I'm trying to use bash to open a new descriptor for writing extra diagnostic messages. I don't want to use stderr, because stderr should only contain output from the programs called by bash. I also want my custom descriptor to be redirectable by the user.
I tried this:
exec 3>/dev/tty
echo foo1
echo foo2 >&2
echo foo3 >&3
But when I try to redirect fd 3, the output still writes to the terminal.
$ ./test.sh >/dev/null 2>/dev/null 3>/dev/null
foo3
First the parent shell sets file descriptor 3 to /dev/null
Then your program sets file descriptor 3 to /dev/tty
So your symptoms are not really surprising.
Edit: You could check to see if fd 3 has been set:
if [[ ! -e /proc/$$/fd/3 ]]
then
exec 3>/dev/tty
fi
Simple enough: If the parent shell is not redirecting fd 3, then test.sh will be redirecting fd 3 to /dev/tty.
if ! { exec 0>&3; } 1>/dev/null 2>&1; then
exec 3>/dev/tty
fi
echo foo1
echo foo2 >&2
echo foo3 >&3
Here's a way to check if a file descriptor has already been set using the (Bash) shell only.
(
# cf. "How to check if file descriptor exists?",
# http://www.linuxmisc.com/12-unix-shell/b451b17da3906edb.htm
exec 3<<<hello
# open file descriptors get inherited by child processes,
# so we can use a subshell to test for existence of fd 3
(exec 0>&3) 1>/dev/null 2>&1 &&
{ echo bash: fd exists; fdexists=true; } ||
{ echo bash: fd does NOT exists; fdexists=false; }
perl -e 'open(TMPOUT, ">&3") or die' 1>/dev/null 2>&1 &&
echo perl: fd exists || echo perl: fd does NOT exist
${fdexists} && cat <&3
)
@Kelvin: here's the modified script you asked for (plus some tests).
echo '
#!/bin/bash
# If test.sh is redirecting fd 3 to somewhere, fd 3 gets redirected to /dev/null;
# otherwise fd 3 gets redirected to /dev/tty.
#{ exec 0>&3; } 1>/dev/null 2>&1 && exec 3>&- || exec 3>/dev/tty
{ exec 0>&3; } 1>/dev/null 2>&1 && exec 3>/dev/null || exec 3>/dev/tty
echo foo1
echo foo2 >&2
echo foo3 >&3
' > test.sh
chmod +x test.sh
./test.sh
./test.sh 1>/dev/null
./test.sh 2>/dev/null
./test.sh 3>/dev/null
./test.sh 1>/dev/null 2>/dev/null
./test.sh 1>/dev/null 2>/dev/null 3>&-
./test.sh 1>/dev/null 2>/dev/null 3>/dev/null
./test.sh 1>/dev/null 2>/dev/null 3>/dev/tty
# fd 3 is opened for reading the Here String 'hello'
# test.sh should see that fd 3 has already been set by the environment
# man bash | less -Ip 'here string'
exec 3<<<hello
cat <&3
# If fd 3 is not explicitly closed, test.sh will determine fd 3 to be set.
#exec 3>&-
./test.sh
exec 3<<<hello
./test.sh 3>&-
Update
This can be done. See kaluy's answer for the simplest way.
Original Answer
It seems the answer is "you can't". Any descriptors created in a script don't apply to the shell which called the script.
I figured out how to do it using ruby though, if anyone is interested. See also the update using perl.
begin
out = IO.new(3, 'w')
rescue Errno::EBADF, ArgumentError
out = File.open('/dev/tty', 'w')
end
p out.fileno
out.puts "hello world"
Note that this obviously won't work in a daemon - it's not connected to a terminal.
UPDATE
If ruby isn't your thing, you can simply call a bash script from the ruby script. You'll need the open4 gem/library for reliable piping of output:
require 'open4'
# ... insert begin/rescue/end block from above
Open4.spawn('./out.sh', :out => out)
UPDATE 2
Here's a way using a bit of perl and mostly bash. You must make sure perl is working properly on your system, because a missing perl executable will also return a non-zero exit code.
perl -e 'open(TMPOUT, ">&3") or die' 2>/dev/null
if [[ $? != 0 ]]; then
echo "fd 3 wasn't open"
exec 3>/dev/tty
else
echo "fd 3 was open"
fi
echo foo1
echo foo2 >&2
echo foo3 >&3
In bash/ksh, can we add a timestamp to an STDERR redirection?
E.g. myscript.sh 2> error.log
I want to get a timestamp written to the log too.
If you're talking about an up-to-date timestamp on each line, that's something you'd probably want to do in your actual script (but see below for a nifty solution if you have no power to change it). If you just want a marker date on its own line before your script starts writing, I'd use:
( date 1>&2 ; myscript.sh ) 2>error.log
What you need is a trick to pipe stderr through another program that can add timestamps to each line. You could do this with a C program but there's a far more devious way using just bash.
First, create a script which will add the timestamp to each line (called predate.sh):
#!/bin/bash
while read line ; do
echo "$(date): ${line}"
done
For example:
( echo a ; sleep 5 ; echo b ; sleep 2 ; echo c ) | ./predate.sh
produces:
Fri Oct 2 12:31:39 WAST 2009: a
Fri Oct 2 12:31:44 WAST 2009: b
Fri Oct 2 12:31:46 WAST 2009: c
Then you need another trick that can swap stdout and stderr, this little monstrosity here:
( myscript.sh 3>&1 1>&2- 2>&3- )
Then it's simple to combine the two tricks by timestamping stdout and redirecting it to your file:
( myscript.sh 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
The following transcript shows this in action:
pax> cat predate.sh
#!/bin/bash
while read line ; do
echo "$(date): ${line}"
done
pax> cat tstdate.sh
#!/bin/bash
echo a to stderr then wait five seconds 1>&2
sleep 5
echo b to stderr then wait two seconds 1>&2
sleep 2
echo c to stderr 1>&2
echo d to stdout
pax> ( ( ./tstdate.sh ) 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
d to stdout
pax> cat error.log
Fri Oct 2 12:49:40 WAST 2009: a to stderr then wait five seconds
Fri Oct 2 12:49:45 WAST 2009: b to stderr then wait two seconds
Fri Oct 2 12:49:47 WAST 2009: c to stderr
As already mentioned, predate.sh will prefix each line with a timestamp and the tstdate.sh is simply a test program to write to stdout and stderr with specific time gaps.
When you run the command, you actually get "d to stdout" written to stderr (but that's your TTY device or whatever else stdout may have been when you started). The timestamped stderr lines are written to your desired file.
The devscripts package in Debian/Ubuntu contains a script called annotate-output which does that (for both stdout and stderr).
$ annotate-output make
21:41:21 I: Started make
21:41:21 O: gcc -Wall program.c
21:43:18 E: program.c: Couldn't compile, and took me ages to find out
21:43:19 E: collect2: ld returned 1 exit status
21:43:19 E: make: *** [all] Error 1
21:43:19 I: Finished with exitcode 2
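Since annotate-output merges both streams onto stdout with O:/E: markers, you could keep only the stderr lines with a simple filter (a sketch, assuming the default format shown above):
annotate-output ./myscript.sh | grep ' E: ' > error.log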
Here's a version that uses a while read loop like pax's, but doesn't require extra file descriptors or a separate script (although you could use one). It uses process substitution:
myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > error.log )
Using pax's predate.sh:
myscript.sh 2> >( predate.sh > error.log )
The program ts from the moreutils package pipes standard input to standard output, and prefixes each line with a timestamp.
To prefix stdout lines: command | ts
To prefix both stdout and stderr: command 2>&1 | ts
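Combined with the process-substitution trick from the earlier answers, ts can timestamp just stderr into a file; a sketch, assuming moreutils is installed:
myscript.sh 2> >(ts > error.log)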
I like those portable shell scripts but a little disturbed that they fork/exec date(1) for every line. Here is a quick perl one-liner to do the same more efficiently:
perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
To use it, just feed this command input through its stdin:
(echo hi; sleep 1; echo lo) | perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
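The same one-liner can be attached to stderr only, via process substitution (a sketch):
myscript.sh 2> >(perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_' > error.log)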
Rather than writing a script to pipe to, I prefer to write the logger as a function inside the script, and then send the entirety of the process into it with braces, like so:
# Vars
logfile=/path/to/scriptoutput.log
# Defined functions
teelogger(){
log=$1
while read line ; do
print "$(date +"%x %T") :: $line" | tee -a $log
done
}
# Start process
{
echo 'well'
sleep 3
echo 'hi'
sleep 3
echo 'there'
sleep 3
echo 'sailor'
} | teelogger $logfile
If you want to redirect back to stdout I found you have to do this:
myscript.sh >> >( while read line; do echo "$(date): ${line}"; done )
Not sure why I need the extra > in front of the >(, but that's what works.
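The reason for the extra >: the >(cmd) process substitution merely expands to a pathname like /dev/fd/63, so a redirection operator is still needed in front of it; without one, the pathname is passed as an ordinary argument. A sketch of the difference:
echo hi > >(cat)    # redirects stdout into the process substitution
echo hi >(cat)      # prints "hi /dev/fd/63" since the path is just an argument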
I was too lazy for all the current solutions, so I figured out a new one (it works for stdout and could be adjusted for stderr as well):
echo "output" | xargs -L1 -I{} bash -c "echo \$(date +'%x %T') '{}'" | tee error.log
would save to file and print something like that:
11/3/16 16:07:52 output
Details:
-L1 means "for each new line"
-I{} means "replace {} by input"
bash -c is used to update $(date) each time for new call
%x %T formats timestamp to minimal form.
It should work like a charm if stdout and stderr don't contain quotes (" or `). If they do (or might), it's better to use:
echo "output" | awk '{ cmd="date +%H:%M:%S"; cmd | getline d; print d, $0; close(cmd) }' | tee error.log
(Took from my answer in another topic: https://stackoverflow.com/a/41138870/868947)
How about timestamping the remaining output, redirecting all to stdout?
This answer combines some techniques from above, as well as from unix stackexchange here and here. bash >= 4.2 is assumed, but other advanced shells may work. For < 4.2, replace printf with a (slower) call to date.
: ${TIMESTAMP_FORMAT:="%F %T"} # override via environment
_loglines() {
while IFS= read -r _line ; do
printf "%(${TIMESTAMP_FORMAT})T#%s\n" '-1' "$_line";
done;
}
exec 7<&2 6<&1
exec &> >( _loglines )
# Logit
To restore stdout/stderr:
exec 1>&6 2>&7
You can then use tee to send the timestamps to stdout and a logfile.
_tmpfile=$(mktemp)
exec &> >( _loglines | tee $_tmpfile )
It's not a bad idea to have cleanup code that restores the descriptors on exit and dumps the log if the process exited with an error:
trap "_cleanup \$?" 0 SIGHUP SIGINT SIGABRT SIGBUS SIGQUIT SIGTRAP SIGUSR1 SIGUSR2 SIGTERM
_cleanup() {
exec >&6 2>&7
[[ "$1" != 0 ]] && cat "$_logtemp"
rm -f "$_logtemp"
exit "${1:-0}"
}
#!/bin/bash
DEBUG=1
LOG=$HOME/script_${0##*/}_$(date +%Y.%m.%d-%H.%M.%S-%N).log
ERROR=$HOME/error.log
exec 2> $ERROR
exec 1> >(tee -ai $LOG)
if [ "$DEBUG" = 1 ] ; then   # debug messages on fd 4 when DEBUG=1
exec 4> >(xargs -I{} echo -e "[ debug ] {}")
else
exec 4> /dev/null
fi
# test
echo " debug sth " >&4
echo " log sth normal "
type -f this_is_error
echo " errot sth ..." >&2
echo " finish ..." >&2>&4
# close descriptor 4
exec 4>&-
This thing: nohup myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > mystd.err ) < /dev/null &
works as such, but when I log out and log back in to the server it stops working: mystd.err stops getting populated with the stderr stream even though my process (myscript.sh here) still runs.
Does someone know how to get the lost stderr back into the mystd.err file?
Thought I would add my 2 cents worth..
#!/bin/sh
timestamp(){
name=$(printf "$1%*s" `expr 15 - ${#1}`)
awk "{ print strftime(\"%b %d %H:%M:%S\"), \"- $name -\", $$, \"- INFO -\", \$0; fflush() }";
}
echo "hi" | timestamp "process name" >> /tmp/proccess.log
printf "$1%*s" `expr 15 - ${#1}`
Spaces the name out so it looks nice; 15 is the maximum width, increase it if desired.
Output format: Date - Process name - Process ID - INFO - Message
Jun 27 13:57:20 - process name - 18866 - INFO - hi
Redirections are taken in order. Try this:
Given a script -
$: cat tst
echo a
sleep 2
echo 1 >&2
echo b
sleep 2
echo 2 >&2
echo c
sleep 2
echo 3 >&2
echo d
I get the following
$: ./tst 2>&1 1>stdout | sed 's/^/echo $(date +%Y%m%dT%H%M%S) /; e'
20180925T084008 1
20180925T084010 2
20180925T084012 3
And as much as I dislike awk, it does avoid the redundant subcalls to date.
$: ./tst 2>&1 1>stdout | awk "{ print strftime(\"%Y%m%dT%H%M%S \") \$0; fflush() }" >stderr
$: cat stderr
20180925T084414 1
20180925T084416 2
20180925T084418 3
$: cat stdout
a
b
c
d