Bash string interpolation without subshell - bash

I have a function like this
print_stuff_and_set_vars() {
IMPORTANT_VAL_1=""
IMPORTANT_VAL_2=""
echo -n "some stuff"
echo -n "some more stuff"
echo -n "result"
}
and I call it like this:
my_main_func() {
print_stuff_and_set_vars
print_stuff_and_set_vars
print_stuff_and_set_vars
echo "IMPORTANT_VAL_1 was $IMPORTANT_VAL_1"
}
Instead, I want to save all the echoed results to a string
my_main_func() {
# doesn't work -- result is empty
result="${print_stuff_and_set_vars}${print_stuff_and_set_vars}${print_stuff_and_set_vars}"
echo "the result length was ${#result}"
echo "$result"
echo "IMPORTANT_VAL_1 was $IMPORTANT_VAL_1"
}
This does work if I use $() instead of ${} to start a subshell, but then the global variables are not set.
Is there any way to save the result of a function to a string without starting a subshell? I know the obvious answer in this example would be to save "result" to a global variable instead of echoing it but in my actual script that would require a lot of changes and I want to avoid it if possible.
Actually, I only need to know the length so if there is a way to keep track of how much has been printed since the start of the function that would work fine too. I'm actually using zsh if that makes a difference, too.

Assuming the output from the function print_stuff_and_set_vars does not contain newline characters, how about:
mkfifo p # create a named pipe "p"
exec 3<>p # open fd 3 for both reading and writing
rm p # now "p" can be closed
print_stuff_and_set_vars() {
IMPORTANT_VAL_1="foo"
IMPORTANT_VAL_2="bar"
echo -n "some stuff "
echo -n "some more stuff "
echo -n "result "
}
my_main_func() {
print_stuff_and_set_vars 1>&3 # redirect to fd 3
print_stuff_and_set_vars 1>&3 # same as above
print_stuff_and_set_vars 1>&3 # same as above
echo 1>&3 # send newline as an end of input
IFS= read -r -u 3 result # read a line from fd 3
echo "the result length was ${#result}"
echo "$result"
echo "IMPORTANT_VAL_1 was $IMPORTANT_VAL_1"
}
my_main_func
exec 3>&- # close fd 3
Output:
the result length was 102
some stuff some more stuff result some stuff some more stuff result some stuff some more stuff result
IMPORTANT_VAL_1 was foo

This is very easy to do with a temp file. Example:
print_stuff_and_set_vars() {
IMPORTANT_VAL_1="x"
IMPORTANT_VAL_2="y"
echo -n "some stuff"
echo -n "some more stuff"
echo -n "result"
}
print_stuff_and_set_vars > /tmp/$$
myVar=$(< /tmp/$$)
echo $myVar
echo $IMPORTANT_VAL_1
echo $IMPORTANT_VAL_2
It is possible to implement this using a named pipe or a file descriptor as well.
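For completeness, here is a sketch of the file-descriptor variant (my own combination of the two answers above, not taken verbatim from either): the temp file is opened on two descriptors, one for writing and one for reading, and unlinked immediately, so nothing is left to clean up and the function still runs in the current shell:
#!/bin/bash
print_stuff_and_set_vars() {
IMPORTANT_VAL_1="x"
echo -n "some stuff "
}
tmp=$(mktemp)
exec 3> "$tmp" 4< "$tmp"  # fd 3 writes to the temp file, fd 4 reads it from the start
rm "$tmp"                 # unlink it now; the open descriptors keep the data alive
print_stuff_and_set_vars >&3  # runs in the current shell, so the globals stick
print_stuff_and_set_vars >&3
echo >&3                      # terminating newline so read sees a complete line
IFS= read -r -u 4 result
exec 3>&- 4<&-                # close both descriptors
echo "the result length was ${#result}"
echo "IMPORTANT_VAL_1 was $IMPORTANT_VAL_1"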

Related

Capturing the output of a detached command makes the execution sequential [duplicate]


Force Bash-Script to wait for a Perl-Script that awaits input

I have a Bash script that sequentially runs some Perl scripts which are read from a file. These scripts require a press of Enter to continue.
Strangely, when I run the script it never waits for the input but just continues. I assume something in the Bash script is interpreted as an Enter or some other key press and makes the Perl script continue.
I'm sure there is a solution out there, but I don't really know what to look for.
My Bash script has this while loop, which iterates through the list of Perl scripts (listed in seqfile):
while read zeile; do
if [[ ${datei:0:1} -ne 'p' ]]; then
datei=${zeile:8}
else
datei=$zeile
fi
case ${zeile: -3} in
".pl")
perl $datei #Here it just goes on...
#echo "Test 1"
#echo "Test 2"
;;
".pm")
echo $datei "is a Perl Module"
;;
*)
echo "Something elso"
;;
esac
done <<< $seqfile;
Notice the two commented lines with echo "Test 1/2". I wanted to know how they would be displayed.
Actually, they are written under each other, as if Enter had been pressed:
Test 1
Test 2
The output of the Perl scripts is correct; I just have to figure out how to force the input to be read from the user and not from the script.
Have the perl script redirect input from /dev/tty.
Proof of concept:
while read line ; do
export line
perl -e 'print "Enter $ENV{line}: ";$y=<STDIN>;print "$ENV{line} is $y\n"' </dev/tty
done <<EOF
foo
bar
EOF
Program output (the user's input appears after each prompt):
Enter foo: 123
foo is 123
Enter bar: 456
bar is 456
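Applied to the loop from the question, this is a one-line change (a sketch showing only the relevant case branch):
case ${zeile: -3} in
".pl")
perl "$datei" < /dev/tty  # Perl's <STDIN> now reads from the terminal,
;;                        # not from the here-string feeding the while loop
esac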
@mob's answer is interesting, but I'd like to propose an alternative solution for your use case that will also work if your overall bash script is run with a specific input redirection (i.e. not /dev/tty).
Minimal working example:
script.perl
#!/usr/bin/env perl
use strict;
use warnings;
{
local( $| ) = ( 1 );
print "Press ENTER to continue: ";
my $resp = <STDIN>;
}
print "OK\n";
script.bash
#!/bin/bash
exec 3>&0 # backup STDIN to fd 3
while read line; do
echo "$line"
perl "script.perl" <&3 # redirect fd 3 to perl's input
done <<EOF
First
Second
EOF
exec 3>&- # close fd 3
So this will work with both: ./script.bash in a terminal and yes | ./script.bash for example...
For more info on redirections, see e.g. this article or this cheat sheet.
Hoping this helps

Bash: Capture output of command run in background

I'm trying to write a bash script that will get the output of a command that runs in the background. Unfortunately I can't get it to work, the variable I assign the output to is empty - if I replace the assignment with an echo command everything works as expected though.
#!/bin/bash
function test {
echo "$1"
}
echo $(test "echo") &
wait
a=$(test "assignment") &
wait
echo $a
echo done
This code produces the output:
echo
done
Changing the assignment to
a=`echo $(test "assignment") &`
works, but it seems like there should be a better way of doing this.
Bash has indeed a feature called Process Substitution to accomplish this.
$ echo <(yes)
/dev/fd/63
Here, the expression <(yes) is replaced with a pathname of a (pseudo device) file that is connected to the standard output of an asynchronous job yes (which prints the string y in an endless loop).
Now let's try to read from it:
$ cat /dev/fd/63
cat: /dev/fd/63: No such file or directory
The problem here is that the yes process terminated in the meantime because it received a SIGPIPE (it had no readers on stdout).
The solution is the following construct
$ exec 3< <(yes) # Save stdout of the 'yes' job as (input) fd 3.
This opens the file as input fd 3 before the background job is started.
You can now read from the background job whenever you prefer. For a stupid example
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Note that this has slightly different semantics than having the background job write to a drive backed file: the background job will be blocked when the buffer is full (you empty the buffer by reading from the fd). By contrast, writing to a drive-backed file is only blocking when the hard drive doesn't respond.
Process substitution is not a POSIX sh feature.
Here's a quick hack to give an asynchronous job drive backing (almost) without assigning a filename to it:
$ yes > backingfile & # Start job in background writing to a new file. Do also look at `mktemp(3)` and the `sh` option `set -o noclobber`
$ exec 3< backingfile # open the file for reading in the current shell, as fd 3
$ rm backingfile # remove the file. It will disappear from the filesystem, but there is still a reader and a writer attached to it which both can use it.
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Linux also recently added the O_TMPFILE open flag, which makes such hacks possible without the file ever being visible at all. I don't know whether bash already supports it.
UPDATE:
@rthur, if you want to capture the whole output from fd 3, then use
output=$(cat <&3)
But note that you can't capture binary data in general: It's only a defined operation if the output is text in the POSIX sense. The implementations I know simply filter out all NUL bytes. Furthermore POSIX specifies that all trailing newlines must be removed.
(Please also note that capturing the output will run out of memory if the writer never stops (yes never stops). Naturally that problem also holds for read if the line separator is never written.)
One very robust way to deal with coprocesses in Bash is to use... the coproc builtin.
Suppose you have a script or function called banana you wish to run in background, capture all its output while doing some stuff and wait until it's done. I'll do the simulation with this:
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
You will then run banana with coproc like so:
coproc bananafd { banana; }
this is like running banana & but with the following extras: it creates two file descriptors that are in the array bananafd (at index 0 for output and index 1 for input). You'll capture the output of banana with the read builtin:
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
Try it:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
Caveat: you must be done with stuff before banana ends! If the gorilla is quicker than you:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
In this case, you'll obtain an error like this one:
./banana: line 22: read: : invalid file descriptor specification
You can check whether it's too late (i.e., whether you've taken too long doing your stuff) because after the coproc is done, bash removes the values in the array bananafd, and that's why we obtained the previous error.
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
stuff
if [[ -n ${bananafd[@]} ]]; then
IFS= read -r -d '' -u "${bananafd[0]}" banana_output
echo "$banana_output"
else
echo "oh no, I took too long doing my stuff..."
fi
Finally, if you really don't want to miss any of gorilla's moves, even if you take too long for your stuff, you could copy banana's file descriptor to another fd, 3 for example, do your stuff and then read from 3:
#!/bin/bash
banana() {
for i in {1..4}; do
echo "gorilla eats banana $i"
sleep 1
done
echo "gorilla says thank you for the delicious bananas"
}
stuff() {
echo "I'm doing this stuff"
sleep 1
echo "I'm doing that stuff"
sleep 1
echo "I'm done doing my stuff."
}
coproc bananafd { banana; }
# Copy file descriptor bananafd[0] to fd 3
exec 3>&${bananafd[0]}
stuff
IFS= read -d '' -u 3 output
echo "$output"
This will work very well! The last read will also play the role of wait, so that output will contain the complete output of banana.
That was great: no temp files to deal with (bash handles everything silently) and 100% pure bash!
Hope this helps!
One way to capture a background command's output is to redirect its output to a file and read the output from the file after the background process has ended:
test "assignment" > /tmp/_out &
wait
a=$(</tmp/_out)
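A minimal variation of the same idea (my tweak, not part of the original answer), using mktemp instead of a fixed path and cleaning up afterwards:
out=$(mktemp)                 # unique temp file instead of a hard-coded /tmp/_out
test "assignment" > "$out" &  # same background call; test is the function from the question
wait                          # wait for the background job to finish
a=$(<"$out")                  # slurp the captured output into the variable
rm -f "$out"                  # remove the temp file
echo "$a"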
I also use file redirections. Like:
exec 3< <({ sleep 2; echo 12; }) # Launch as a job stdout -> fd3
cat <&3 # Blocking read of fd 3
A more realistic case:
Suppose I want the output of 4 parallel workers: toto, titi, tata and tutu.
I redirect each one to a different file descriptor (stored in the fd variable).
Reading those file descriptors will then block until EOF, i.e. until the pipe is broken because the command completed.
#!/usr/bin/env bash
# Declare data to be forked
a_value=(toto titi tata tutu)
msg=""
# Spawn child sub-processes
for i in {0..3}; do
((fd=50+i))
echo -e "1/ Launching command: $cmd with file descriptor: $fd!"
eval "exec $fd< <({ sleep $((i)); echo ${a_value[$i]}; })"
a_pid+=($!) # Store pid
done
# Join child: wait them all and collect std-output
for i in {0..3}; do
((fd=50+i));
echo -e "2/ Getting result of: $cmd with file descriptor: $fd!"
msg+="$(cat <&$fd)\n"
((i_fd--))
done
# Print result
echo -e "===========================\nResult:"
echo -e "$msg"
Should output:
1/ Launching command: with file descriptor: 50!
1/ Launching command: with file descriptor: 51!
1/ Launching command: with file descriptor: 52!
1/ Launching command: with file descriptor: 53!
2/ Getting result of: with file descriptor: 50!
2/ Getting result of: with file descriptor: 51!
2/ Getting result of: with file descriptor: 52!
2/ Getting result of: with file descriptor: 53!
===========================
Result:
toto
titi
tata
tutu
Note 1: coproc supports only one coprocess, not multiple.
Note 2: the wait command is buggy in old bash versions (4.2) and cannot retrieve the status of the jobs I launched. It works well in bash 5, but file redirection works for all versions.
Just group the commands when you run them in the background, and wait for both.
{ echo a & echo b & wait; } | nl
Output will be:
1 a
2 b
But notice that the output can be out of order, if the second task runs faster than the first.
{ { sleep 1; echo a; } & echo b & wait; } | nl
Reverse output:
1 b
2 a
If the output of the two background jobs needs to be kept separate, it has to be buffered somewhere, typically in a file. Example:
#! /bin/bash
t0=$(date +%s) # Get start time
trap 'rm -f "$ta" "$tb"' EXIT # Remove temp files on exit.
ta=$(mktemp) # Create temp file for job a.
tb=$(mktemp) # Create temp file for job b.
{ exec >$ta; echo a1; sleep 2; echo a2; } & # Run job a.
{ exec >$tb; echo b1; sleep 3; echo b2; } & # Run job b.
wait # Wait for the jobs to finish.
cat "$ta" # Print output of job a.
cat "$tb" # Print output of job b.
t1=$(date +%s) # Get end time
echo "t1 - t0: $((t1-t0))" # Display execution time.
The overall runtime of the script is three seconds, although the combined sleeping time of both background jobs is five seconds. And the output of the background jobs is in order.
a1
a2
b1
b2
t1 - t0: 3
You can also use a memory buffer to store the output of your jobs. But this works only if your buffer is big enough to hold the whole output of your jobs.
#! /bin/bash
t0=$(date +%s)
trap 'rm -f /tmp/{a,b}' EXIT
mkfifo /tmp/{a,b}
buffer() { dd of="$1" status=none iflag=fullblock bs=1K; }
pids=()
{ echo a1; sleep 2; echo a2; } > >(buffer /tmp/a) &
pids+=($!)
{ echo b1; sleep 3; echo b2; } > >(buffer /tmp/b) &
pids+=($!)
# Wait only for the jobs but not for the buffering `dd`.
wait "${pids[#]}"
# This will wait for `dd`.
cat /tmp/{a,b}
t1=$(date +%s)
echo "t1 - t0: $((t1-t0))"
The above will also work with cat instead of dd, but then you cannot control the buffer size.
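For illustration, the cat variant mentioned above is just a one-line change to the buffer helper (the rest of the script stays the same):
buffer() { cat > "$1"; }  # drain stdin into the fifo; simpler than dd, but no control over block size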
If you have GNU Parallel you can probably use parset:
myfunc() {
sleep 3
echo "The input was"
echo "$#"
}
export -f myfunc
parset a,b,c myfunc ::: myarg-a "myarg b" myarg-c
echo "$a"
echo "$b"
echo "$c"
See: https://www.gnu.org/software/parallel/parset.html

bash/shell how to access to memory/buffer to use what I have echoed before

I have 2 scripts.
One of the lines in the first script is
"...
./second_script >> $outputfile
..."
The second script has a lot of calculations and variables. Now at some point I need to use everything I have echoed to the output file:
".....
echo $var1
echo $var2
.....
echo $var3
echo What I have echoed | script3
..."
"What I have echoed" is $var1, $var2 and $var3.
How can I do it?
It's a big script, so I cannot do something like this for each line:
echo $var
echo $var >> tmp
I also cannot do this, because I have around 2000 $var ($var isn't really a variable; it's more like "grep......"):
echo $var1 $var2 | script3
I somehow need access, from memory or a buffer, to what I have echoed.
Try something like this:
{ echo $var1
echo $var2
echo $var3
...
} | script3
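For illustration, a self-contained sketch of that grouping (nl stands in for script3, which is just a placeholder name from the question; the variable values are hypothetical):
#!/bin/bash
var1="first"; var2="second"; var3="third"  # hypothetical values
{ echo "$var1"
echo "$var2"
echo "$var3"
} | nl  # the whole group writes into one pipe, so the consumer (here: nl) sees every line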
Add this to the beginning of your script:
exec > >( tee tmp )
Everything you write to standard output will also be added to the file "tmp".
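A minimal sketch of how that looks in practice (script3 remains the placeholder name from the question; note that tee runs asynchronously, so the file can lag slightly behind the terminal output):
#!/bin/bash
exec > >(tee tmp)  # duplicate all further stdout into the file "tmp"
echo "some computed value"     # goes to the terminal and into tmp
echo "another computed value"
# Later, everything echoed so far can be fed to another program, e.g.:
#   script3 < tmp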
Using /bin/sh, you'll need to simulate process substitution. No guarantees that this is correct:
# Create a named pipe to act as a buffer, and set up a background job
# that continuously duplicates whatever is written to it to both
# a regular file and standard output
mkfifo buffer
( tail -f buffer | tee tmp ) &
# Now, redirect standard output to the named pipe
exec > buffer

Test whether stdout has been written to

I have a script that prints in a loop. I want the loop to print differently the first time from all other times (i.e., it should print differently if anything has been printed at all). I am thinking a simple way would be to check whether anything has been printed yet (i.e., stdout has been written to). Is there any way to determine that?
I know I could also write to a variable and test whether it's empty, but I'd like to avoid a variable if I can.
I think this will do what you need. If you echo something between # THE SCRIPT ITSELF and # END, then "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT" is printed, followed by that data; otherwise "STDOUT HAS NOT BEEN TOUCHED" is printed.
#!/bin/bash
readonly TMP=$(mktemp /tmp/test_XXXXXX)
exec 3<> "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
echo Hello World
# END
exec >&4 # restore save stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
echo
cat "$TMP"
else
echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
So, output of the script as is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
Hello World
and if you remove the echo Hello World line:
STDOUT HAS NOT BEEN TOUCHED
And if you really want to test that while running the script itself, you can do that, too :-)
#!/bin/bash
#FIRST ELSE
function echo_fl() {
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo $2
else
echo $1
fi
}
TMP=$(mktemp /tmp/test_XXXXXX)
exec 3 "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
for f in fst snd trd; do
echo_fl "$(echo $f | tr a-z A-Z)" "$f"
done
# END
exec >&4 # restore save stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
echo
cat "$TMP"
else
echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
output is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
FST
snd
trd
As you can see, only the first line (FST) is in caps. That's what the echo_fl function does for you: if it's the first line of output, it echoes the first argument; if it's not, it echoes the second argument :-)
It's hard to tell what you are trying to do here, but if your script is printing to stdout, you could simply pipe it to perl:
yourcommand | perl -pe 'if ($. == 1) { print "First line is: $_" }'
It all depends on what kind of changes you are attempting to do.
Note that on GNU/Linux you cannot use stat's -f option with %z (stat -f %z is BSD/macOS syntax; on GNU stat, -f means file-system status). The line TMP_SIZE=$(stat -f %z "$TMP") then produces a long string that fails the test in if [ $TMP_SIZE -gt 0 ].
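As a portable alternative (my suggestion, not taken from the answers above), the size check can avoid stat entirely: test's -s operator is true when a file exists and is non-empty.
if [ -s "$TMP" ]; then
echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
echo
cat "$TMP"
else
echo "STDOUT HAS NOT BEEN TOUCHED"
fi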
