How to capture the output of a bash command which prompts for a user's confirmation without blocking the output or the command - bash

I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
Edit 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.

To capture partial output from a command that is waiting for a prompt, you can poll (or tail) a temporary file, potentially combined with 'tee' to keep the output flowing if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#! /bin/bash
log=/path/to/log-file
echo > "$log"
(
    # Poll the log until the prompt string appears, then act on what has
    # been captured so far.
    while ! grep -q -F 'continue?' "$log" ; do sleep 2 ; done
    output=$(<"$log")
    echo do-something "$output"
) &
# Run command with output to terminal
apt purge some_package 2>&1 | tee -a "$log"
# If output to the terminal is not needed, replace the above command with
apt purge some_package > "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.
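For apt specifically, the confirmation line is normally 'Do you want to continue? [Y/n]' (the exact wording can vary between apt versions and locales), so the grep in the watcher above could look for that string instead:
while ! grep -q -F 'Do you want to continue?' "$log" ; do sleep 2 ; done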

Capture output from ssh-add launched by another command

Here's the full version of my question. I'm including all this detail in case my hunch is wrong, but you may want to skip to the tl;dr below.
I'm trying to write a function that runs an arbitrary command and also captures whether any output was printed to the terminal. I don't want to interfere with the output being printed. In case it's a relevant complication (probably not), I also want to branch on the exit code of the command.
Here's what I have:
function run_and_inspect {
    # this subshell ensures the stdout of "${@}" is printed and captured
    if output=$(
        set -o pipefail
        "${@}" | tee /dev/tty
    ); then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then
        was_there_output="no"
    else
        was_there_output="yes"
    fi
    echo "output?" $was_there_output
}
Generally this works fine:
$ run_and_inspect true
output? no
$ run_and_inspect "echo hello"
hello
output? yes
But I've found one problem command:
git pull | grep -v 'Already up to date.'
If there is nothing to pull, this pipeline usually produces no output. But, if ssh-add needs to prompt for a passphrase, there is output. It just doesn't get noticed by run_and_inspect:
function git_pull_quiet {
git pull | grep -v 'Already up to date.'
}
$ run_and_inspect git_pull_quiet
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
output? no
There was output to my terminal. I assume the problem is it didn't come from stdout of the git pull pipeline, which is all run_and_inspect knows about. Where did it come from? How do I fix this? I've tried redirecting stderr too (i.e. git pull 2>&1), but with no luck. Is there some way to monitor /dev/tty directly?
tl;dr (I think!)
I think this question boils down to: why isn't the passphrase prompt in log.txt?
$ git pull 2>&1 | tee log.txt
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
Already up to date.
$ cat log.txt
Already up to date.
why isn't the passphrase prompt in log.txt?
The prompt is printed by OpenSSH's read_passphrase() in readpass.c (called from load_identity_file()). That function does open(_PATH_TTY, ...), where _PATH_TTY is "/dev/tty", and then write()s the prompt to it.
So the prompt is written directly to the terminal /dev/tty, not to the standard streams.
Just like you do with your own tee /dev/tty, which means that your function's output will not be capturable by an outer command either. Prefer to preserve the stdout-ness of the child program and preserve its buffering:
if { tmp=$("$@" > >(tee >(cat >&3))); } 3>&1; then
Is there some way to monitor /dev/tty directly?
Write your own terminal emulator and spawn your process in it, and then parse all input inside that terminal emulator. Programs like screen or tmux may be of use.
A workaround could be to download proot and open a file descriptor to some file, then create a chroot with just a /dev/tty file symlinked to /proc/self/fd/<that file descriptor>, and run the process with proot inside that chroot. The idea is that the process will see the chroot's replaced /dev/tty and will write to your file descriptor instead of to the terminal.
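Another practical option (a sketch, not part of the answer above): the script(1) utility runs the command under a pseudo-terminal, so even text written straight to /dev/tty, such as the passphrase prompt, ends up in the typescript file. The file name is a placeholder, and the -c form shown is the util-linux variant:
script -q -c 'git pull' /tmp/full-output.txt
grep -q 'Enter passphrase' /tmp/full-output.txt && echo "a passphrase prompt was shown"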

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
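If the cleanly-separated variant suggested above is wanted, a sketch could look like this (it assumes the ts utility from moreutils, which also appears in a later answer here, is installed); a roughly chronological merged view can be recovered later from the timestamp prefixes:
exec > >(ts '%Y-%m-%d %H:%M:%.S' | tee -ia stdout.log)
exec 2> >(ts '%Y-%m-%d %H:%M:%.S' | tee -ia stderr.log >&2)
echo "foo"
echo "bar" >&2
# later, to interleave the two logs by timestamp: sort -m stdout.log stderr.log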
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
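If standard error should go into foo.log as well, a small variation (an assumption, not part of the original answer) is to redirect it into the pipeline:
(
echo start
ls -l
echo end
) 2>&1 | tee foo.log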
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
Using the accepted answer, my script kept returning unexpectedly early (right after 'exec > >(tee ...)'), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the script's output go from its process, through the named pipe, into the backgrounded 'tee' process, which logs everything to disk and to the original stdout of the script.
Note that 'exec &>' redirects both stdout and stderr; we could redirect them separately if we like, or change it to 'exec >' if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by its file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
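A minimal coproc sketch (bash 4+; the coprocess name and log path are placeholders, not from the answer above): start tee as a coprocess, keep a copy of the terminal on fd 3 so tee can still write to it, and point the script's stdout and stderr at the coprocess:
exec 3>&1                                    # keep a copy of the terminal on fd 3
coproc LOGGER { tee -a /tmp/myscript.log >&3; }
exec 1>&${LOGGER[1]} 2>&1                    # send this shell's stdout and stderr into tee
echo "this line goes both to the terminal and to the log"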
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # copy (append) stdout and stderr to log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit $?
    fi
}
check_tee_output "$@"
# rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # don't tee
You can customize this, e.g. make tee=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler, maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
exec >$PIPE
tee foo.log <$PIPE &
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
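A hedged variant of the second snippet that cleans the pipe up even if the script is interrupted, using an EXIT trap:
PIPE=tmp.fifo
mkfifo $PIPE
trap 'rm -f $PIPE' EXIT
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script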

Run Shell scripts without having messages [duplicate]

I want to make my Bash scripts more elegant for the end user. How do I hide the output when Bash is executing commands?
For example, when Bash executes
yum install nano
The following will show up to the user who executed the Bash:
Loaded plugins: fastestmirror
base | 3.7 kB 00:00
base/primary_db | 4.4 MB 00:03
extras | 3.4 kB 00:00
extras/primary_db | 18 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 3.8 MB 00:02
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nano.x86_64 0:2.0.9-7.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
nano x86_64 2.0.9-7.el6 base 436 k
Transaction Summary
================================================================================
Install 1 Package(s)
Total download size: 436 k
Installed size: 1.5 M
Downloading Packages:
nano-2.0.9-7.el6.x86_64.rpm | 436 kB 00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
Package: centos-release-6-4.el6.centos.10.x86_64 (@anaconda-CentOS-201303020151.x86_64/6.4)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : nano-2.0.9-7.el6.x86_64 1/1
Verifying : nano-2.0.9-7.el6.x86_64 1/1
Installed:
nano.x86_64 0:2.0.9-7.el6
Complete!
Now I want to hide this from the user and instead show:
Installing nano ......
How can I accomplish this task? It will definitely help to make the script more user-friendly. In case an error occurs, it should be shown to the user.
I would also like to know how to show the same message while a set of commands is being executed.
Use this.
{
/your/first/command
/your/second/command
} &> /dev/null
Explanation
To eliminate output from commands, you have two options:
Close the output file descriptor, which keeps it from accepting any more output. That looks like this:
your_command "Is anybody listening?" >&-
Usually, output goes either to file descriptor 1 (stdout) or 2 (stderr). If you close a file descriptor, you'll have to do so for every numbered descriptor, as &> (below) is a special BASH syntax incompatible with >&-:
/your/first/command >&- 2>&-
Be careful to note the order: >&- closes stdout, which is what you want to do; &>- redirects stdout and stderr to a file named - (hyphen), which is not what you want to do. It'll look the same at first, but the latter creates a stray file in your working directory. It's easy to remember: >&2 redirects stdout to descriptor 2 (stderr), >&3 redirects stdout to descriptor 3, and >&- redirects stdout to a dead end (i.e. it closes stdout).
Also beware that some commands may not handle a closed file descriptor particularly well ("write error: Bad file descriptor"), which is why the better solution may be to...
Redirect output to /dev/null, which accepts all output and does nothing with it. It looks like this:
your_command "Hello?" > /dev/null
For output redirection to a file, you can direct both stdout and stderr to the same place very concisely, but only in bash:
/your/first/command &> /dev/null
Finally, to do the same for a number of commands at once, surround the whole thing in curly braces. Bash treats this as a group of commands, aggregating the output file descriptors so you can redirect all of them at once. If you're familiar with subshells using the ( command1; command2; ) syntax, you'll find the braces behave almost exactly the same way, except that, unless you involve them in a pipe, the braces do not create a subshell and therefore allow you to set variables inside.
{
/your/first/command
/your/second/command
} &> /dev/null
See the bash manual on redirections for more details, options, and syntax.
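A quick illustration of that last point (the variable names are just examples): a brace group runs in the current shell, so assignments survive, while a subshell's do not:
{ x=1; } &> /dev/null ; echo "x is '$x'"    # prints: x is '1'
( y=1 ) &> /dev/null ; echo "y is '$y'"     # prints: y is ''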
You can redirect stdout to /dev/null.
yum install nano > /dev/null
Or you can redirect both stdout and stderr:
yum install nano &> /dev/null
But if the program has a quiet option, that's even better.
A process normally has two outputs to screen: stdout (standard out), and stderr (standard error).
Normally informational messages go to stdout, and errors and alerts go to stderr.
You can turn off stdout for a command by doing
MyCommand >/dev/null
and turn off stderr by doing:
MyCommand 2>/dev/null
If you want both off, you can do:
MyCommand >/dev/null 2>&1
The 2>&1 says send stderr to the same place as stdout.
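One detail worth an example: redirections are processed left to right, so the order of >/dev/null and 2>&1 matters:
MyCommand >/dev/null 2>&1   # both streams are discarded
MyCommand 2>&1 >/dev/null   # stderr still reaches the terminal, because fd 2 was
                            # copied from fd 1 before fd 1 was redirected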
You can redirect the output to /dev/null. For more info regarding /dev/null read this link.
You can hide the output of a command in the following ways:
echo -n "Installing nano ......"; yum install nano > /dev/null; echo " done.";
Redirect the standard output to /dev/null, but not the standard error. This will show the errors occurring during the installation, for example if yum cannot find a package.
echo -n "Installing nano ......"; yum install nano &> /dev/null; echo " done.";
This code will not show anything in the terminal, since both standard output and standard error are redirected to /dev/null.
>/dev/null 2>&1 will mute both stdout and stderr
yum install nano >/dev/null 2>&1
You should not use bash in this case to get rid of the output. Yum does have an option -q which suppresses the output.
You'll most certainly also want to use -y
echo "Installing nano..."
yum -y -q install nano
To see all the options for yum, use man yum.
You can also do it by assigning the output to a variable; this is particularly useful when you don't have /dev/null.
Yes, I came across a situation where I couldn't use /dev/null.
The solution I found was to assign the output to a variable which I will never use thereafter:
hide_output=$([[ -d /proc ]] && mountpoint -q /proc && umount -l /proc)
This:
command > /dev/null
Or this: (to suppress errors as well)
command > /dev/null 2>&1
Similar to lots of other answers, but they didn't work for me with 2> being in front of /dev/null.
.SILENT:
Type " .SILENT: " in the beginning of your script without colons.

Print all script output to file from within another script

English is not my native language, please accept my apologies for any language issues.
I want to execute a script (bash / sh) through CRON, which will perform various maintenance actions, including backup. This script will execute other scripts, one for each function. And I want the entirety of what is printed to be saved in a separate file for each script executed.
The problem is that each of these other scripts executes commands like "duplicity", "certbot", and "maldet", among others. The "ECHO" commands in each script are printed to the file, but the outputs of the "duplicity", "certbot" and "maldet" commands are not!
I want to avoid having to put "| tee --append" or another command on each line. But even doing this on each line, the sub-scripts do not save to the log file. That is, ideally, the parent script would specify the file into which each script prints.
Does not work:
sudo bash /duplicityscript > /path/log
or
sudo bash /duplicityscript >> /path/log
sudo bash /duplicityscript | sudo tee --append /path/log > /dev/null
or
sudo bash /duplicityscript | sudo tee --append /path/log
Using exec (like this):
exec > >(tee -i /path/log)
sudo bash /duplicityscript
exec > >(tee -i /dev/null)
Example:
./maincron:
sudo ./duplicityscript > /myduplicity.log
sudo ./maldetscript > /mymaldet.log
sudo ./certbotscript > /mycertbot.log
./duplicityscript:
echo "Exporting Mysql/MariaDB..."
{dump command}
echo "Exporting postgres..."
{dump command}
echo "Start duplicity data backup to server 1..."
{duplicity command}
echo "Start duplicity data backup to server 2..."
{duplicity command}
In the log file, this will print:
Exporting Mysql/MariaDB...
Exporting postgres...
Start duplicity data backup to server 1...
Start duplicity data backup to server 2...
In the example above, the "ECHO" commands in each script will be saved in the log file, but the output of the duplicity and dump commands will be printed on the screen and not on the log file.
I did some googling, and even saw this topic, but I could not adapt it to my needs.
There is no problem in that the output is also printed on the screen, as long as it is in its entirety, printed on the file.
Try adding 2>&1 at the end of the line; it should help. Or run the script with sh -x to see what is causing the issue.
Hope this helps
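Applied to the commands in the question, that suggestion looks like this (a sketch; it assumes the missing output is being written to stderr):
sudo bash /duplicityscript > /path/log 2>&1
# or, to keep the output on screen as well:
sudo bash /duplicityscript 2>&1 | sudo tee --append /path/log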

How do I automatically save the output of the last command I've run (every time)?

If I wanted to have the output of the last command stored in a file such as ~/.last_command.txt (overwriting output of previous command), how would I go about doing so in bash so that the output goes to both stdout and that file? I imagine it would involve piping to tee ~/.last_command.txt but I don't know what to pipe to that, and I definitely don't want to add that to every command I run manually.
Also, how could I extend this to save the output of the last n commands?
Under bash this seems to have the desired effect.
bind 'RETURN: "|tee ~/.last_command.txt\n"'
You can add it to your bashrc file to make it permanent.
I should point out it's not perfect. Just hitting the enter key on an empty line gives you:
matt@devpc:$ |tee ~/.last_command.txt
bash: syntax error near unexpected token `|'
So I think it needs a little more work.
This will break programs/features expecting a TTY, but...
exec 4>&1
PROMPT_COMMAND="exec 1>&4; exec > >(mv ~/.last_command{_tmp,}; tee ~/.last_command_tmp)"
If it is acceptable to record all output, this can be simplified:
exec > >(tee ~/.commands)
Overwrite for 1 command:
script -c ls ~/.last_command.txt
If you want more than 1 command:
$ script ~/.last_command.txt
$ command1
$ command2
$ command3
$ exit
If you want to save during 1 login session, append "script" to .bashrc
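A sketch of that .bashrc approach with a guard so the shell started by script does not launch script again recursively (the variable and file names are made up):
if [ -z "$UNDER_SCRIPT" ]; then
    export UNDER_SCRIPT=1
    exec script -a ~/.terminal_output.log
fi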
When starting a new session (after login, or after opening the terminal), you can start another "nested" shell, and redirect its output:
<...login...>
% bash | tee -a ~/.bash_output
% ls # this is the nested shell
% exit
% cat ~/.bash_output
% exit
Actually, you don't even have to enter a nested shell every time. You can simply replace your shell-command in /etc/passwd from bash to bash | tee -a ~USERNAME/.bash_output.
