bash: want errors from piped commands going to stderr, not to screen

In my script, if I want to set a variable to the output of a command and keep any errors from a failing command off the screen, I can do something like:
var=$(command 2>/dev/null)
If I have commands piped together, i.e.
var=$(command1 | command2 | command3 2>/dev/null)
what's an elegant way to suppress any errors coming from any of the commands? I don't mind if var doesn't get set; I just don't want the user to see the errors from these "lower level commands" on the screen. I want to test var separately afterwards.
Here's an example with two, but I've got a chain of commands, so I don't want to echo the variable results into the next command every time.
res=$(ls bogusfile | grep morebogus 2>/dev/null)

Put the whole pipeline in a group:
res=$( { ls bogusfile | grep morebogus; } 2>/dev/null)

You need to redirect stderr for each command in the pipeline:
res=$(ls bogusfile 2>/dev/null | grep morebogus 2>/dev/null)
Or you could wrap everything in a subshell whose stderr is redirected:
res=$( (ls bogusfile | grep morebogus) 2>/dev/null)

You should be able to use {} to group multiple commands:
var=$( { command1 | command2 | command3; } 2>/dev/null)
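For completeness, a hedged sketch that combines the grouping above with the separate test the question asks for, reusing the bogus file names from the question:
#!/bin/bash
# errors from every stage of the pipeline are silenced;
# only the final value of res is inspected afterwards
res=$( { ls bogusfile | grep morebogus; } 2>/dev/null )
if [ -z "$res" ]; then
    echo "no match (or one of the commands failed)"
fi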

You can also just redirect stderr for the entire script, using exec 2>/dev/null, e.g.
#!/bin/bash
return 2>/dev/null # prevent sourcing
exec 3>&2 2>/dev/null
# file descriptor 2 is directed to /dev/null for any commands here
exec 2>&3
# fd 2 is directed back to where it was originally for any commands here
Note: this will suppress interactive output and the prompt, so you can execute the script, but you shouldn't run these commands in an interactive shell or source the script without the initial return line. You also won't be able to use read normally without redirecting the file descriptor back.
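A hedged sketch of the same save/restore pattern applied to the pipeline from the question:
#!/bin/bash
exec 3>&2 2>/dev/null        # save stderr on fd 3, silence fd 2
res=$(ls bogusfile | grep morebogus)
exec 2>&3 3>&-               # restore stderr and close the spare fd
[ -z "$res" ] && echo "pipeline produced nothing" >&2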

Related

how to execute/terminate command in the pipe conditionally

I'm working on a script which needs to detect the first call to FFMPEG in a program and run a script from then on.
The core code looks like:
strace -f -etrace=execve <program> 2>&1 | grep <some_pattern> | <run_some_script>
The desired behaviour is: when the first grepped result comes out, the script should start, and if nothing matched before <program> terminates, the script should be ignored.
The main problem is how to conditionally execute the script based on the grep's output and how to terminate the script after the program terminates.
I think the first part could be solved using read, since the grepped text is only used as a signal and its contents are irrelevant:
... | read -N 1 && <run_some_script>
and the second could be solved using the broken-pipe mechanism:
<run_some_script> > >(...)
but I don't know how to make them work together. Or is there a better solution?
You could ask grep to match the pattern just once and return a success exit code. Putting this together in an if conditional:
if strace -f -etrace=execve <program> 2>&1 | grep -q <some_pattern>; then
echo 'run a program'
fi
The -q flag suppresses the usual stdout content returned by grep, since, as you've mentioned, you only want to use the match as a trigger and not the matched text itself.
Or maybe you need coproc, running the command in the background and checking every line of the output produced. Just write a wrapper around the command you want to run, as below. The function is not needed for a single command, but for multiple commands a wrapper function is more convenient.
wrapper() { strace -f -etrace=execve <program> 2>&1 ; }
Using coproc is similar to running the command in the background, but it provides an easy way to capture the output of the command:
coproc outputfd { wrapper; }
Now watch the output of the commands run inside the wrapper by reading from the file descriptor provided by coproc. The code below watches the output, and on the first match of the pattern it starts the command as a background job, storing its process id in pid.
flag=1
while IFS= read -r -u "${outputfd[0]}" output; do
    if [[ $output == *"pattern"* && $flag -eq 1 ]]; then
        flag=0
        command_to_run & pid=$!
    fi
done
When the loop terminates, the background job started by coproc has completed. At that point, kill the script you started. For safety, suppress any error from kill in case it is no longer alive:
kill "$pid" >/dev/null 2>&1
Using the ifne util (from moreutils; it runs the given command only if standard input is not empty):
strace -f -etrace=execve <program> 2>&1 |
grep <some_pattern> | ifne <some_script>

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
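A hedged sketch of that two-log-file idea, using ts from moreutils for the timestamps (file names are just examples; here the streams go only to the log files, so add tee as in the answers above if you also want them on the terminal):
#!/bin/bash
# stdout and stderr each get their own log with a timestamp prefix,
# so the two files can be merged chronologically later
exec  > >(ts '%Y-%m-%dT%H:%M:%S' >> stdout.log)
exec 2> >(ts '%Y-%m-%dT%H:%M:%S' >> stderr.log)
echo "foo"
echo "bar" >&2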
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note that the use of $* below is not necessarily safe; arguments containing whitespace will be re-split.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
    # The parent process will enter this branch and set up logging
    # Create a named pipe for logging the child's output
    PIPE=tmp.fifo
    mkfifo $PIPE
    # Launch the child process with stdout redirected to the named pipe
    SELF_LOGGING=1 sh $0 $* >$PIPE &
    # Save PID of child process
    PID=$!
    # Launch tee in a separate process
    tee logfile <$PIPE &
    # Unlink $PIPE because the parent process no longer needs it
    rm $PIPE
    # Wait for child process, which is running the rest of this script
    wait $PID
    # Return the error code from the child process
    exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
An easy way to make a bash script log to syslog. The script's output is available both through /var/log/syslog and on stderr, and syslog adds useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
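For instance, a hedged usage sketch (the tag myscript is arbitrary):
#!/bin/bash
# everything the script prints is tagged "myscript" in syslog;
# -s echoes a copy to stderr so it still shows on the terminal
exec &> >(logger -t myscript -s)
echo "starting run"
ls /nonexistent    # the error message is logged too
# the entries can later be found with, e.g.:
#   grep myscript /var/log/syslog    (traditional syslog)
#   journalctl -t myscript           (systemd journal)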
Using the accepted answer, my script kept returning early (right after exec > >(tee ...)), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the script's output go from the process, through the named pipe, into the backgrounded tee process, which logs everything to disk as well as to the script's original stdout.
Note that exec &> redirects both stdout and stderr; we could redirect them separately if we like, or change to exec > if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a pipe to a command and allows you to communicate through it.
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # copy (append) stdout and stderr to log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit $?
    fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler; mine may have limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
exec >$PIPE
tee foo.log <$PIPE &
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).

How do I automatically save the output of the last command I've run (every time)?

If I wanted to have the output of the last command stored in a file such as ~/.last_command.txt (overwriting output of previous command), how would I go about doing so in bash so that the output goes to both stdout and that file? I imagine it would involve piping to tee ~/.last_command.txt but I don't know what to pipe to that, and I definitely don't want to add that to every command I run manually.
Also, how could I extend this to save the output of the last n commands?
Under bash this seems to have the desired effect.
bind 'RETURN: "|tee ~/.last_command.txt\n"'
You can add it to your bashrc file to make it permanent.
I should point out it's not perfect. Just hitting the enter key gives you:
matt@devpc:$ |tee ~/.last_command.txt
bash: syntax error near unexpected token `|'
So I think it needs a little more work.
This will break programs/features expecting a TTY, but...
exec 4>&1
PROMPT_COMMAND="exec 1>&4; exec > >(mv ~/.last_command{_tmp,}; tee ~/.last_command_tmp)"
If it is acceptable to record all output, this can be simplified:
exec > >(tee ~/.commands)
Overwrite for 1 command:
script -c ls ~/.last_command.txt
If you want more than 1 command:
$ script ~/.last_command.txt
$ command1
$ command2
$ command3
$ exit
If you want to save output for an entire login session, append "script" to your .bashrc
When starting a new session (after login, or after opening the terminal), you can start another "nested" shell, and redirect its output:
<...login...>
% bash | tee -a ~/.bash_output
% ls # this is the nested shell
% exit
% cat ~/.bash_output
% exit
Actually, you don't even have to enter a nested shell every time. You can point your login shell in /etc/passwd at a small wrapper around bash | tee -a ~USERNAME/.bash_output (the shell field has to name a single executable, so the pipeline itself can't go there directly).
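A hedged sketch of such a wrapper (the install path is hypothetical; it would also need an entry in /etc/shells):
#!/bin/sh
# hypothetical login-shell wrapper, e.g. /usr/local/bin/logging-bash:
# run an interactive bash and append a copy of everything it prints
# to ~/.bash_output
bash "$@" | tee -a "$HOME/.bash_output"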

Bash process substitution and exit codes

I'd like to turn the following:
git status --short && (git status --short | xargs -Istr test -z str)
which gets me the desired result of mirroring the output to stdout and doing a zero length check on the result into something closer to:
git status --short | tee >(xargs -Istr test -z str)
which unfortunately returns the exit code of tee (always zero).
Is there any way to get at the exit code of the substituted process elegantly?
[EDIT]
I'm going with the following for now; it prevents running the same command twice but seems to beg for something better:
OUT=$(git status --short) && echo "${OUT}" && test -z "${OUT}"
Look here:
$ echo xxx | tee >(xargs test -n); echo $?
xxx
0
$ echo xxx | tee >(xargs test -z); echo $?
xxx
0
and look here:
$ echo xxx | tee >(xargs test -z; echo "${PIPESTATUS[*]}")
xxx
123
$ echo xxx | tee >(xargs test -n; echo "${PIPESTATUS[*]}")
xxx
0
Is it?
See also Pipe status after command substitution
I've been working on this for a while, and it seems that there is no way to do that with process substitution, except for resorting to inline signalling, and that can really be used only for input pipes, so I'm not going to expand on it.
However, bash-4.0 provides coprocesses which can be used to replace process substitution in this context and provide clean reaping.
The following snippet provided by you:
git status --short | tee >(xargs -Istr test -z str)
can be replaced by something like:
coproc GIT_XARGS { xargs -Istr test -z str; }
{ git status --short | tee; } >&${GIT_XARGS[1]}
exec {GIT_XARGS[1]}>&-
wait ${GIT_XARGS_PID}
Now, for some explanation:
The coproc call creates a new coprocess, naming it GIT_XARGS (you can use any name you like), and running the command in braces. A pair of pipes is created for the coprocess, redirecting its stdin and stdout.
The coproc call sets two variables:
${GIT_XARGS[@]} containing the file descriptors for the pipes to the process' stdin and stdout ([0] to read from its stdout, [1] to write to its stdin),
${GIT_XARGS_PID} containing the coprocess' PID.
Afterwards, your command is run and its output is directed to the second pipe (i.e. the coprocess' stdin). The cryptic-looking >&${GIT_XARGS[1]} part expands to something like >&60, which is regular output-to-fd redirection.
Please note that I needed to put your command in braces. This is because a pipeline spawns subshells, and the coprocess file descriptors are not available in them. In other words, the following:
git status --short | tee >&${GIT_XARGS[1]}
would fail with an invalid file descriptor error, since the relevant fd exists in the parent process and not in the spawned tee process. Putting it in braces causes bash to apply the redirection to the whole pipeline.
The exec call is used to close the pipe to your coprocess. When you used process substitution, the process was spawned as part of output redirection and the pipe to it was closed immediately after the redirection no longer had effect. Since coprocess' pipe's lifetime extends beyond a single redirection, we need to close it explicitly.
Closing the output pipe should cause the process to get EOF condition on stdin and terminate gracefully. We use wait to wait for its termination and reap it. wait returns the coprocess' exit status.
As a last note, please note that in this case, you can't use kill to terminate the coprocess since that would alter its exit status.
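For illustration, a hedged sketch (assuming bash >= 4) that wires these pieces into a conditional on the reaped status; saving the PID right away is a precaution, since bash clears the coproc variables once the coprocess has exited:
#!/bin/bash
coproc GIT_XARGS { xargs -Istr test -z str; }
pid=$GIT_XARGS_PID                 # save the PID before bash forgets it

{ git status --short | tee; } >&${GIT_XARGS[1]}
exec {GIT_XARGS[1]}>&-             # close our write end so xargs sees EOF

# wait reaps the coprocess; with an empty `git status --short` it exits 0
if wait "$pid"; then
    echo "working tree clean"
else
    echo "working tree has changes"
fi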
#!/bin/bash
if read q < <(git status -s)
then
    echo $q
    exit
fi

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to log file.
The stderr will then get converted to stdout, which is piped to tee; tee appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it to its own stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, it is possible that stderr and stdout get interleaved in an unexpected way.
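Applied to the build example from the question, a hedged sketch; set -o pipefail is an addition here so the build's own failure (not tee's status) triggers the mail:
#!/bin/bash
set -o pipefail
# stdout is appended straight to build.log; stderr flows through tee, which
# appends it to build.log as well and also shows it on the terminal
if ! ( { ./configure && make --keep-going && make install; } >> build.log ) 2>&1 |
        tee -a build.log
then
    mailx -s "There was an error" stefanl@example.org < build.log
fi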
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 on your logfile, executes the commands with standard error directed to standard out and standard out directed to fd 5, and pipes stdout (which now contains only stderr) to tee, which duplicates the output to fd 5, i.e. the log file.
Here is how to run one or more commands, capturing the standard output and error, in the order in which they are generated, to a logfile, and displaying only the standard error on any terminal screen you like. Works in bash on linux. Probably works in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
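A hedged sketch of that combination, with stand-ins so it can be run directly (ls . /nonexistent produces both streams, and cat -n plays the role of program_for_stderr):
# stdout goes to a file; stderr goes both to a file and, via the pipe,
# to another program
ls . /nonexistent > stdout.txt 2> >(tee stderr.txt | cat -n)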
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)
exec 2> >(tee -a $outfile >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console
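A quick hedged demonstration of that behaviour, using the same file name as the answer:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)
exec 2> >(tee -a $outfile >&2)
echo "this line goes only to $outfile"
echo "this line goes to $outfile and to the console" >&2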
