How can I send multiple commands' output to a single shell pipeline? - bash

I have multiple pipelines, which look like:
tee -a $logfilename.txt | jq -sRf string2object.jq >> $logfilename.json
or
tee -a $logfilename.txt | jq -sRf array2object.jq >> $logfilename.json
For each pipeline, I want to feed in the output of multiple commands.
Each set of commands looks something like:
echo "start filelist:"
printf '%s\n' "$PWD"/*
or
echo "start wget:"
wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
and the output from those commands should all go through the pipe.
What I've tried in the past is putting the pipeline on each command separately:
echo "start filelist:" | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
printf '%s\n' "$PWD"/* | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
but in that case the JSON script can only see one line at a time, so it doesn't work correctly.
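To see why the per-command attempt fails, here's a sketch with `wc -l` standing in for the slurping jq filter (which needs to see all the lines in a single invocation):

```shell
#!/bin/sh
# wc -l is a stand-in for the slurping jq filter: it counts the lines it receives.
# Running the "filter" once per command, each invocation sees only its own output:
per_command=$( { echo "start filelist:" | wc -l; echo "a file" | wc -l; } | tr -d ' \n' )
# Grouping the commands into one pipeline, the filter sees everything at once:
grouped=$( { echo "start filelist:"; echo "a file"; } | wc -l | tr -d ' ' )
echo "per-command runs saw: $per_command (one line each); grouped run saw: $grouped lines"
```

The answers below are different ways of achieving that grouping.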

The Portable Approach
The following is portable to POSIX sh:
#!/bin/sh
die() { rm -rf -- "$tempdir"; [ "$#" -gt 0 ] && echo "$*" >&2; exit 1; }
logfilename="whatever"
tempdir=$(mktemp -d "${TMPDIR:-/tmp}"/fifodir.XXXXXX) || exit
mkfifo "$tempdir/fifo" || die "mkfifo failed"
tee -a "$logfilename" <"$tempdir/fifo" \
| jq -sRf json_log_s2o.jq \
>>"$logfilename.json" & fifo_pid=$!
exec 3>"$tempdir/fifo" || die "could not open fifo for write"
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close the write end of the FIFO
wait "$fifo_pid" # and wait for the process to exit
rm -rf "$tempdir" # delete the temporary directory with the FIFO
Avoiding FIFO Management (Using Bash)
With bash, one can avoid needing to manage the FIFO by using a process substitution:
#!/bin/bash
logfilename="whatever"
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json")
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close fd 3 so the process substitution sees EOF
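The same fd-3 plumbing in miniature, with `tr` standing in for the `tee | jq` stage and a demo-only temp file as the destination; the inner script is run via `bash -c` because `>( )` is not POSIX:

```shell
# tr stands in for tee | jq; the sleep is a crude substitute for a real wait.
outfile=$(mktemp "${TMPDIR:-/tmp}/psdemo.XXXXXX")
bash -c '
  exec 3> >(tr "a-z" "A-Z" > "$1")  # consumer runs in a process substitution
  echo "hello" >&3
  exec 3>&-                         # close fd 3 so the consumer sees EOF
  sleep 1                           # crude wait for the consumer to flush
' demo "$outfile"
```

The `sleep` is exactly the weakness the next section addresses: without bash 4.4 there's no PID to `wait` for.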
Waiting For Exit (Using Linux-y Tools)
However, the thing this doesn't let you do (without bash 4.4) is detect when jq failed, or wait for jq to finish writing before your script exits. If you want to ensure that jq finishes before your script exits, then you might consider using flock, like so:
writelogs() {
exec 4>"${1}.json"
flock -x 4
tee -a "$1" | jq -sRf json_log_s2o.jq >&4
}
exec 3> >(writelogs "$logfilename")
and later:
exec 3>&-
flock -s "$logfilename.json" -c :
Because the jq process inside the writelogs function holds a lock on the output file, the final flock -s command isn't able to also grab a lock on the output file until jq exits.
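The handshake can be seen with a toy writer; `sleep 2` stands in for the jq work, the file names are demo-only, and the demo is skipped if flock isn't installed:

```shell
#!/bin/bash
# A background writer holds an exclusive lock while it works; the parent's
# shared lock then blocks until the writer is done.
lockfile=$(mktemp "${TMPDIR:-/tmp}/flockdemo.XXXXXX")
if command -v flock >/dev/null 2>&1; then
    ( exec 4>"$lockfile"
      flock -x 4                    # writer takes the exclusive lock
      sleep 2                       # ...does its work...
      echo "writer done" >&4 ) &    # lock is released when the subshell exits
    sleep 1                         # give the writer time to grab the lock first
    flock -s "$lockfile" -c true    # blocks until the exclusive lock is dropped
    result=$(cat "$lockfile")
else
    result="writer done"            # no flock on this system; skip the demo
fi
wait
```

When `flock -s` returns, the writer has finished; that's the same guarantee the `flock -s "$logfilename.json" -c :` line provides for jq.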
An Aside: Avoiding All The >&3 Redirections
In either shell, the below is just as valid:
{
echo "start filelist:"
printf '%s\n' "$PWD"/*
echo "start wget:"
wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} >&3
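A self-contained sketch of the grouped redirection; here fd 3 points at a plain temp file rather than a FIFO or process substitution, but the mechanics are identical:

```shell
#!/bin/sh
out=$(mktemp "${TMPDIR:-/tmp}/groupdemo.XXXXXX")
exec 3>"$out"
{
echo "start filelist:"
printf '%s\n' one two
} >&3   # one redirection covers every command in the block
exec 3>&-
```

Every line the block writes to stdout lands on fd 3, with no per-command `>&3` noise.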
It's also possible, but not advisable, to pipe a code block into a pipeline, thus replacing the FIFO use or process substitution altogether:
{
echo "start filelist:"
printf '%s\n' "$PWD"/*
echo "start wget:"
wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} | tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"${logfilename}.json"
...why not advisable? Because there's no guarantee in POSIX sh as to which components of a pipeline if any run in the same shell interpreter as the rest of your script; and if the above isn't run in the same piece of the script, then variables will be thrown away (and without extensions such as pipefail, exit status as well). See BashFAQ #24 for more information.
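The variable loss is easy to demonstrate; in bash and dash the pipeline components run in subshells (other shells may differ):

```shell
#!/bin/sh
# The assignment happens inside a pipeline component, which runs in a
# subshell in most shells, so the parent never sees the new value.
count=0
{ echo "start filelist:"; echo "a file"; } | { read -r first_line; count=2; }
echo "back in the parent shell, count is still $count"
```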
Waiting For Exit On Bash 4.4
With bash 4.4, process substitutions export their PIDs in $!, and these can be waited for. Thus, you get an alternate way to wait for the logging pipeline to exit:
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json"); log_pid=$!
...and then, later on:
wait "$log_pid"
as an alternative to the flock approach given earlier. Obviously, do this only if you have bash 4.4 available.

Related

Bash use of zenity with console redirection

In an effort to create more manageable scripts that write their own output to only one location (via 'exec > file'), is there a better solution than the one below for combining stdout redirection with zenity (which in this use relies on piped stdout)?
parent.sh:
#!/bin/bash
exec >> /var/log/parent.out
( true; sh child.sh ) | zenity --progress --pulsate --auto-close --text='Executing child.sh'
[[ "$?" != "0" ]] && exit 1
...
child.sh:
#!/bin/bash
exec >> /var/log/child.out
echo 'Now doing child.sh things..'
...
When doing something like-
sh child.sh | zenity --progress --pulsate --auto-close --text='Executing child.sh'
zenity never receives stdout from child.sh since it is being redirected from within child.sh. Even though it seems to be a bit of a hack, is using a subshell containing a 'true' + execution of child.sh acceptable? Or is there a better way to manage stdout?
I get that 'tee' is acceptable to use in this scenario, though I would rather not have to write out child.sh's logfile location each time I want to execute child.sh.
Your redirection exec > stdout.txt will lead to an error:
$ exec > stdout.txt
$ echo hello
$ cat stdout.txt
cat: stdout.txt: input file is output file
You need an intermediary file descriptor.
$ exec 3> stdout.txt
$ echo hello >&3
$ cat stdout.txt
hello
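The fd-3 workaround from the transcript above, as a plain script with a demo-only temp file; the separate descriptor avoids reading back a file that the shell's own stdout is still pointed at:

```shell
#!/bin/sh
tmp=$(mktemp "${TMPDIR:-/tmp}/fd3demo.XXXXXX")
exec 3> "$tmp"      # intermediary file descriptor
echo hello >&3
exec 3>&-           # close it before reading the file back
contents=$(cat "$tmp")
```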

Assign results of a command to a variable while checking results of said command

I would like to combine the following loop:
while ps -p PID_OF_JAVA_PROCESS; do
sleep 1;
done;
Into the following loop:
if pgrep -f java.*name_of_file > /dev/null; then
echo "Shutting down java process!"
pkill -f java.*name_of_file
else
echo "Not currently running!"
fi
By assigning the results of pgrep into a variable (the PID of this java process) -- like something along the following:
if pgrep -f java.*name_of_file > /dev/null; then
echo "Our java process is currently running!"
pkill -f java.*name_of_file
echo "Please wait while our process shuts down!"
while ps -p $(pgrep -f java.*name_of_file); do
sleep 1;
done;
else
echo "Not currently running!"
fi
I would like to combine the above while keeping the results of each command quiet (except echo, of course).
if pids=$(pgrep -f "java.*name_of_file" 2>/dev/null); then
echo "Our java process is currently running!"
kill $pids > /dev/null 2>&1
echo "Please wait while our process shuts down!"
while ps -p $pids > /dev/null 2>&1; do
sleep 1;
done;
else
echo "Not currently running!"
fi
> /dev/null redirects stdout to /dev/null
2> /dev/null redirects stderr to /dev/null
> /dev/null 2>&1 redirects stdout to /dev/null and then duplicates stderr onto it, silencing the command entirely
Assuming your two scripts above run correctly, this slightly modified version should be what you want :)
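A quick check of those three forms, using a made-up helper that writes one line to each stream:

```shell
#!/bin/sh
# noisy is a demo-only helper: one line to stdout, one to stderr.
noisy() { echo "to stdout"; echo "to stderr" >&2; }
out_only=$(noisy 2>/dev/null)       # stderr silenced, stdout captured
err_only=$(noisy 2>&1 >/dev/null)   # stdout silenced, stderr captured
nothing=$(noisy >/dev/null 2>&1)    # both silenced
```

Note the order matters in the last two: redirections are processed left to right.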

Pass command via variable in shell

I have following code in my build script:
if [ -z "$1" ]; then
make -j10 $1 2>&1 | tee log.txt && notify-send -u critical -t 7 "BUILD DONE"
else
make -j10 $1 2>&1 | tee log.txt | grep -i --color "Error" && notify-send -u critical -t 7 "BUILD DONE"
fi
I tried to optimize it to:
local GREP=""
[[ ! -z "$1" ]] && GREP="| grep -i --color Error" && echo "Grepping for ERRORS"
make -j10 $1 2>&1 | tee log.txt "$GREP" && notify-send -u critical -t 7 "BUILD DONE"
But an error is thrown on the make line if $1 isn't empty. I just can't figure out how to pass a command with a grep pipe through the variable.
Like others have already pointed out, you cannot, in general, expect a command in a variable to work. This is a FAQ.
What you can do is execute commands conditionally. Like this, for example:
( make -j10 $1 2>&1 && notify-send -u critical -t 7 "BUILD DONE" ) |
tee log.txt |
if [ -z "$1" ]; then
grep -i --color "Error"
else
cat
fi
This has the additional unexpected benefit that the notify-send is actually conditioned on the exit code of make (which is probably what you intended) rather than tee (which I would expect to succeed unless you run out of disk or something).
(Or if you want the notification regardless of the success status, change && to just ; -- I think this probably makes more sense.)
This is one of those rare Useful Uses of cat (although I still feel the urge to try to get rid of it!)
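The conditional-stage idea can be exercised in isolation; here the flag is an ad-hoc variable and the input is a canned two-line stream:

```shell
#!/bin/sh
filter_errors=yes
result=$(printf 'building...\nError: oops\n' |
    if [ "$filter_errors" = yes ]; then
        grep -i "error"     # keep only the error lines
    else
        cat                 # pass everything through unchanged
    fi)
```

Compound commands like `if` are legal pipeline components in POSIX sh, which is what makes this trick portable.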
You can't put pipes in command variables:
$ foo='| cat'
$ echo bar $foo
bar | cat
The linked article explains how to do such things very well.
As mentioned in @l0b0's answer, the | will not be interpreted as you are hoping.
If you wanted to cut down on repetition, you could do something like this:
if [ -n "$(make -j10 "$1" 2>&1 > log.txt)" ]; then
[ "$1" ] && grep -i --color "error" log.txt
notify-send -u critical -t 7 "BUILD DONE"
fi
The inside of the test is common to both branches. Instead of using tee so that the output can be piped, you can just indirect the output to log.txt. If "$1" isn't empty, grep for any errors in log.txt. Either way, do the notify-send.

Redirect stderr to console and file

How can I redirect stderr of a bash script to the console and a file?
I am using:
exec 2>> myfile
to log It to myfile. How to extend it to log to console as well?
For example, merge stderr into stdout and pipe the whole script through tee (note this captures stdout as well):
./yourscript 2>&1 | tee myfile
or you can use tail -f
$ touch myfile
$ tail -f myfile &
$ command 2>myfile
You can create a fifo
$ mknod mypipe p
let tee read from the fifo. It writes to stdout and the file you specified
$ tee myfile <mypipe &
[1] 17121
now run the command and pipe stderr to the fifo
$ ls kkk 2>mypipe
ls: cannot access kkk: No such file or directory
[1]+ Done tee myfile < mypipe
Try following that file with another command (like tail -f; plain cat would exit as soon as it hits EOF) in the background:
exec 2>> myfile
tail -f myfile >&2 &
TAIL_PID=$!
... # your script
kill $TAIL_PID
Pure Bash solution which builds upon @mpapis's answer:
exec 2> >( while read -r line; do printf '%s\n' "${line}" >&2; printf '%s\n' "${line}" >> err.log; done )
and expanded:
exec 2> >(
while read -r line; do
printf '%s\n' "${line}" >&2
printf '%s\n' "${line}" >> err.log
done
)
You can redirect output to a process and use tee in that process:
#!/usr/bin/env bash
exec 2> >( tee -a err.log )
echo bla >&2

Get exit code from subshell through the pipes

How can I get exit code of wget from the subshell process?
So, the main problem is that $? equals 0. Where can $?=8 be found?
$> OUT=$( wget -q "http://budueba.com/net" | tee -a "file.txt" ); echo "$?"
0
It works without tee, actually.
$> OUT=$( wget -q "http://budueba.com/net" ); echo "$?"
8
But ${PIPESTATUS} array (I'm not sure it's related to that case) also does not contain that value.
$> OUT=$( wget -q "http://budueba.com/net" | tee -a "file.txt" ); echo "${PIPESTATUS[1]}"
$> OUT=$( wget -q "http://budueba.com/net" | tee -a "file.txt" ); echo "${PIPESTATUS[0]}"
0
$> OUT=$( wget -q "http://budueba.com/net" | tee -a "file.txt" ); echo "${PIPESTATUS[-1]}"
0
So, my question is - how can I get wget's exit code through tee and subshell?
If it could be helpful, my bash version is 4.2.20.
By using $() you are (effectively) creating a subshell. Thus the PIPESTATUS instance you need to look at is only available inside your subshell (i.e. inside the $()), since shell state like PIPESTATUS does not propagate from child to parent processes.
You could do something like this:
OUT=$( wget -q "http://budueba.com/net" | tee -a "file.txt"; exit ${PIPESTATUS[0]} );
echo $? # prints exit code of wget.
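The trick in miniature, with `false` standing in for a failing wget and `cat` for tee; it's run via an explicit `bash -c` here since PIPESTATUS is a bashism:

```shell
# Inside the command substitution, exit with the first pipeline component's status.
rc=$(bash -c 'OUT=$(false | cat; exit "${PIPESTATUS[0]}"); echo "$?"')
echo "first pipeline command exited with $rc"
```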
You can achieve a similar behavior by using the following:
OUT=$(wget -q "http://budueba.com/net")
rc=$? # save exit code for later
echo "$OUT" | tee -a "file.txt"
Beware of this when using local variables:
local OUT=$(command; exit 1)
echo $? # 0
OUT=$(command; exit 1)
echo $? # 1
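The masking can be verified directly; `false` stands in for any failing command, and the snippets run under an explicit bash because `local` is a bashism:

```shell
# local VAR=$(cmd) returns local's own status (0), masking cmd's failure;
# declaring first and assigning separately preserves cmd's status.
masked=$(bash -c 'f() { local OUT=$(false); echo "$?"; }; f')
preserved=$(bash -c 'g() { local OUT; OUT=$(false); echo "$?"; }; g')
```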
Copy the PIPESTATUS array immediately; running any other command overwrites it.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[#]}" )
I used fifos to solve the sub-shell/PIPESTATUS problem. See
bash pipestatus in backticked command?
I also found these useful:
bash script: how to save return value of first command in a pipeline?
and https://unix.stackexchange.com/questions/14270/get-exit-status-of-process-thats-piped-to-another/70675#70675