Capture output from ssh-add launched by another command - bash

Here's the full version of my question. I'm including all this detail in case my hunch is wrong, but you may want to skip to the tl;dr below.
I'm trying to write a function that runs an arbitrary command and also captures whether any output was printed to the terminal. I don't want to interfere with the output being printed. In case it's a relevant complication (probably not), I also want to branch on the exit code of the command.
Here's what I have:
function run_and_inspect {
    # this subshell ensures the stdout of "$@" is printed and captured
    if output=$(
        set -o pipefail
        "$@" | tee /dev/tty
    ); then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then
        was_there_output="no"
    else
        was_there_output="yes"
    fi
    echo "output?" $was_there_output
}
Generally this works fine:
$ run_and_inspect true
output? no
$ run_and_inspect "echo hello"
hello
output? yes
But I've found one problem command:
git pull | grep -v 'Already up to date.'
If there is nothing to pull, this pipeline usually produces no output. But, if ssh-add needs to prompt for a passphrase, there is output. It just doesn't get noticed by run_and_inspect:
function git_pull_quiet {
git pull | grep -v 'Already up to date.'
}
$ run_and_inspect git_pull_quiet
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
output? no
There was output to my terminal. I assume the problem is it didn't come from stdout of the git pull pipeline, which is all run_and_inspect knows about. Where did it come from? How do I fix this? I've tried redirecting stderr too (i.e. git pull 2>&1), but with no luck. Is there some way to monitor /dev/tty directly?
tl;dr (I think!)
I think this question boils down to: why isn't the passphrase prompt in log.txt?
$ git pull 2>&1 | tee log.txt
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
Already up to date.
$ cat log.txt
Already up to date.

why isn't the passphrase prompt in log.txt?
The prompt is printed by OpenSSH's load_identity_file via read_passphrase() in readpass.c. That function does open(_PATH_TTY, ...), where _PATH_TTY is "/dev/tty", and then write()s the prompt to it.
The output is displayed directly on the terminal /dev/tty, not via the standard streams.
This is just like your own tee /dev/tty, which means output from your function itself can't be captured either. Prefer to preserve the stdout-ness of the child program and preserve buffering:
if { tmp=$("$@" > >(tee >(cat >&3))); } 3>&1; then
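Fleshed out, run_and_inspect along those lines might look like this (a sketch: fd 3 is an arbitrary spare descriptor, and pipefail is no longer needed because nothing is piped inside the command substitution; note the tee/cat copies may still be flushing when the function returns):
function run_and_inspect {
    # fd 3 is a copy of our real stdout; the inner cat writes the live
    # output there while the command substitution captures the same bytes
    if { output=$("$@" > >(tee >(cat >&3))); } 3>&1; then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then
        was_there_output="no"
    else
        was_there_output="yes"
    fi
    echo "output?" $was_there_output
}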
Is there some way to monitor /dev/tty directly?
Write your own terminal emulator, spawn your process inside it, and parse everything that passes through that terminal. Programs like screen or tmux may be of use.
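Short of writing one, the util-linux script(1) utility already does this: it runs the command under a pseudo-terminal, so even writes made directly to /dev/tty land in its typescript file. A sketch (the flags assume the util-linux version of script, not the BSD/macOS one):
# -q: quiet, -e: propagate the command's exit code, -c: command to run;
# everything shown on screen, prompts included, lands in the log file
script -qec "git pull" /tmp/git-pull.log
cat /tmp/git-pull.log   # the passphrase prompt, if any, is in here too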
A workaround could be to download proot, open a file descriptor to some file, then create a chroot with just a /dev/tty file symlinked to /proc/self/fd/<that file descriptor>, and run the process with proot inside that chroot. The idea is that the process will see the chroot's replaced /dev/tty and write to your file descriptor instead of to the terminal.
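proot's -b (bind) option can sketch the same idea without assembling a chroot by hand (the file names here are hypothetical, and note the caveat: the process would also read from the bound file, so an interactive passphrase prompt could no longer actually be answered):
touch /tmp/ttylog
# bind the regular file over /dev/tty in proot's private view of the
# filesystem, so writes to /dev/tty land in /tmp/ttylog instead
proot -b /tmp/ttylog:/dev/tty git pull
cat /tmp/ttylog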

Related

How to capture the output of a bash command which prompts for a user's confirmation without blocking the output nor the command

I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can tail a temporary file, potentially with tee to keep the output flowing if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#! /bin/bash
log=/path/to/log-file
echo > $log
(
    # watch the log until the prompt string appears
    while ! grep -q -F 'continue?' $log ; do sleep 2 ; done
    output=$(<$log)
    echo do-something "$output"
) &
# Run command with output to terminal
apt purge some_package 2>&1 | tee -a $log
# If output to terminal is not needed, replace the above command with
apt purge some_package > $log 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to do declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
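A sketch of that two-log-file variant (the stamp helper and file names are arbitrary choices; timestamps here have one-second resolution):
#!/bin/bash
stamp() { while IFS= read -r line; do printf '%s %s\n' "$(date '+%F %T')" "$line"; done; }
exec > >(stamp >> stdout.log)
exec 2> >(stamp >> stderr.log)
echo "foo"
echo "bar" >&2
# later: sort -m stdout.log stderr.log reassembles the combined history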
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
    # The parent process will enter this branch and set up logging.
    # Create a named pipe for logging the child's output.
    PIPE=tmp.fifo
    mkfifo $PIPE
    # Launch the child process with stdout redirected to the named pipe
    SELF_LOGGING=1 sh $0 $* >$PIPE &
    # Save the PID of the child process
    PID=$!
    # Launch tee in a separate process
    tee logfile <$PIPE &
    # Unlink $PIPE because the parent process no longer needs it
    rm $PIPE
    # Wait for the child process, which is running the rest of this script
    wait $PID
    # Return the exit code from the child process
    exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts | tee -a /tmp/myscript.output >&2)
This requires moreutils (for the ts command, which adds timestamps).
Using the accepted answer, my script kept returning early (right after exec > >(tee ...)), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the output from the script go from the process, through the pipe, into the background tee process, which logs everything to disk and to the original stdout of the script.
Note that exec &> redirects both stdout and stderr; we could redirect them separately if we like, or change to exec > if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
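A minimal sketch of that coproc variant (bash 4+; the LOGGER name and log file are arbitrary):
#!/bin/bash
# tee runs as a coprocess; ${LOGGER[1]} is a fd writing to its stdin
coproc LOGGER { tee -a logfile.txt; }
# point the script's stdout at the coprocess
exec >&"${LOGGER[1]}"
echo "this line goes to the terminal and to logfile.txt"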
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # copy (append) stdout and stderr to the log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit $?
    fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &   # start the reader first, or the exec below would block
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).

Assign output to variable for command run under different user on OSX

I run a python command from bash as the current user, by doing this:
su $USER -c 'python3 -m site --user-site'
This works properly and prints the following:
/Users/standarduser7/Library/Python/3.6/lib/python/site-packages
I want to assign this output to a variable, so I'm using "$(command)":
target="$(su $USER -c 'python3 -m site --user-site')"
At this point, the OSX terminal hangs and has to be killed. Using backticks instead of "$(command)" leads to the same result.
However, if I run the command without user, everything works as it should:
target="$(python3 -m site --user-site)"
echo "$target"
output: /Users/standarduser7/Library/Python/3.6/lib/python/site-packages
How can I assign the output from a command run as the current user to a variable?
I don’t think it’s hanging; I think it’s showing a blank (prompt-less) command-line and is waiting for input. When I key in the user password, it returns this result:
Password:/Users/CK/Library/Python/2.7/lib/python/site-packages
and this is what ends up being stored in the target variable. A quick parameter substitution can rectify this anomalous output:
target=${target#*:}
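For example, with the value captured above:
target='Password:/Users/CK/Library/Python/2.7/lib/python/site-packages'
target=${target#*:}   # strip everything up to and including the first ':'
echo "$target"        # /Users/CK/Library/Python/2.7/lib/python/site-packages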
The other solution (credit given to this answer) is to create a file descriptor as a copy of stdout, then tee the command's output to that copy, which then allows stdout to be piped to grep in order to process the output:
exec 3>&1 # create a copy of stdout
target=$(su $USER -c "python -m site --user-site" | tee /dev/fd/3 | grep -o '\/.*')
exec 3>&- # close copy

How to undo exec > /dev/null in bash?

I used
exec > /dev/null
to suppress output.
Is there a command to undo this? (Without restarting the script.)
To do it right, you need to copy the original FD 1 somewhere else before repointing it to /dev/null. In this case, I store a backup on FD 5:
exec 5>&1 >/dev/null
...
exec 1>&5
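Put together, a minimal sketch:
exec 5>&1 >/dev/null    # back up stdout on fd 5, then silence it
echo "this is discarded"
exec 1>&5 5>&-          # restore stdout and close the backup fd
echo "visible again"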
Another option is to redirect stdout within a block rather than using exec:
{
...
} >/dev/null
If you just want to get output again at the command prompt, you can do this:
exec >/dev/tty
If you are creating a script, and you want to have the output of a certain group of commands redirected, put those commands in braces:
{
command
command
} >/dev/null
Save the original output targets beforehand.
# $$ = the PID of the running script instance
STDOUT=`readlink -f /proc/$$/fd/1`
STDERR=`readlink -f /proc/$$/fd/2`
And restore them again using exec.
exec 1>$STDOUT 2>$STDERR
If you restore using /dev/tty as in the answers above, then, unlike with this approach, call-level redirections won't be honored; e.g. bash script.sh &>/dev/null would no longer silence the output.
Not really, as that would require changing the state of a running process. Even assuming you could, whatever you wrote before resetting standard output is truly, completely gone, as it was sent to the bit bucket.
To restore stdout I duplicate the saved descriptor back:
exec 1>&5
(assuming stdout was saved beforehand with exec 5>&1, as in the first answer).

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands go to a logfile, in case I need it later; I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses its identity during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view the errors separately; but if I use 2> error.log instead, the errors no longer appear in the combined logfile.
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
The stderr will then get converted to stdout, which is piped to tee; tee appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it on to its own stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, stderr and stdout may be interleaved in an unexpected way.
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
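The same pipeline, unrolled with comments (behavior unchanged):
(
    # fd 5 points at the log file for the whole subshell
    exec 5>logfile.txt
    # stdout goes to fd 5 (the log only); stderr goes to the pipe; tee
    # then copies the pipe contents (stderr only) to the terminal and
    # duplicates them into fd 5, the log file
    { command1 && command2 && command3 ; } 2>&1 >&5 | tee /dev/fd/5
)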
Here is how to run one or more commands, capturing the standard output and error, in the order in which they are generated, to a logfile, while displaying only the standard error on any terminal screen you like. Works in bash on Linux. Probably works in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
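Rather than hardcoding /dev/pts/2, window2 can discover its own terminal with tty(1):
ERRTTY=$(tty)   # e.g. /dev/pts/2
(ls -l /tmp/foo /tmp/nofile; echo successful test) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>"$ERRTTY"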
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command > stdout.txt |& program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
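For instance, one such combination might look like this (a sketch; cat stands in for whatever program should receive the stderr copy):
# stdout to a file; stderr to a file and on to another program
command > stdout.txt 2> >(tee stderr.txt | cat >&2)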
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)
exec 2> >(tee -a $outfile >&2)
# write your code here
STDOUT and STDERR will be written to $outfile; only STDERR will be seen on the console.
