Assign output to variable for command run under different user on OSX - bash

I run a python command from bash as the current user, by doing this:
su $USER -c 'python3 -m site --user-site'
This works properly and prints the following:
/Users/standarduser7/Library/Python/3.6/lib/python/site-packages
I want to assign this output to a variable, so I'm using "$(command)":
target="$(su $USER -c 'python3 -m site --user-site')"
At this point, the OSX terminal hangs and has to be killed. Using backticks instead of "$(command)" leads to the same result.
However, if I run the command without su, everything works as it should:
target="$(python3 -m site --user-site)"
echo "$target"
output: /Users/standarduser7/Library/Python/3.6/lib/python/site-packages
How can I assign the output from a command run as the current user to a variable?

I don’t think it’s hanging; I think it’s showing a blank (prompt-less) command-line and is waiting for input. When I key in the user password, it returns this result:
Password:/Users/CK/Library/Python/2.7/lib/python/site-packages
and this is what ends up being stored in the target variable. A quick parameter substitution can rectify this anomalous output:
target=${target#*:}
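For example, applied to the value captured above (a quick illustration):
target="Password:/Users/CK/Library/Python/2.7/lib/python/site-packages"
target=${target#*:}   # strip everything up to and including the first ':'
echo "$target"        # /Users/CK/Library/Python/2.7/lib/python/site-packages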
The other solution (credit given to this answer) is to create a file descriptor as a copy of stdout, then tee the command's output to that copy, which leaves stdout free to be piped to grep for processing:
exec 3>&1 # create a copy of stdout
target=$(su $USER -c "python -m site --user-site" | tee /dev/fd/3 | grep -o '\/.*')
exec 3>&- # close copy
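Note that the grep -o '\/.*' serves the same purpose as the parameter substitution above: it keeps only the part of the line from the first '/' onward, which drops the Password: prefix from the captured value.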

Related

script doesn't show prompt message if called from another script

I have the following example:
run_docker_script
#!/bin/bash
argument=$1
if [ argument==c1 ]; then
    DOCKERNAME=container1
else
    DOCKERNAME=container2
fi
docker run -it --rm --entrypoint /bin/bash $DOCKERNAME -c 'read -rp "username:" user'
This works fine if I call it like ./run_docker_script.sh (meaning I am asked to give a username).
If I call this script from another one and redirect the output to a file, nothing is prompted on the console! The script sits there waiting for input, but the user doesn't see anything:
#!/bin/bash
LOG_DIR=results
mkdir -p $LOG_DIR
./run_docker_script.sh c1 >"$LOG_DIR"/result.txt
Any hints?
You are redirecting the prompt to the log file. Probably use tee instead of a plain redirection.
#!/bin/bash
LOG_DIR=results
mkdir -p "$LOG_DIR" # notice quoting
./run_docker_script.sh arg1 arg2 | tee "$LOG_DIR"/result.txt
You will still probably have some issues with buffering. I'm thinking passing the input as an argument to the Docker container would be a better design.
#!/bin/bash
# ^ notice fixed spacing
argument=$1
if [ "$argument" = c1 ]; then
#   ^^^^^^^^^^^^ ^ ^ notice the quoted expansion and fixed spacing
    DOCKERNAME=debian
else
    DOCKERNAME=ubuntu
fi
read -r -p "username: " username
docker run -it --rm --entrypoint /bin/bash $DOCKERNAME -c "user=$username"
It's slightly weird that Docker sends the standard error from the shell within the container to standard output, too (a side effect of the pseudo-terminal allocated by -t), but that's what it does. I don't think there is an easy way to change that.
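A quick way to see that behavior (a hypothetical check, using a stock image):
# with -t, the container's stderr is merged into the host's stdout,
# so "oops" lands in out.txt instead of on the terminal
docker run --rm -t ubuntu sh -c 'echo oops >&2' > out.txt
cat out.txt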
As I said, the script works well if I don't redirect the output to a file, meaning the user is asked to provide some input in the console.
But if I redirect the output to a file, the text "username:" is redirected to the file as well, and the user doesn't see anything.
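Note that bash's read -p writes its prompt to stderr, so outside of Docker a stdout-only redirection would leave the prompt visible; it is the -t pseudo-terminal that merges the streams. A quick illustration (sketch, run in a plain shell):
read -rp "username: " user > /dev/null   # prompt still appears; > redirects only stdout
echo "$user"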

Capture output from ssh-add launched by another command

Here's the full version of my question. I'm including all this detail in case my hunch is wrong, but you may want to skip to the tl;dr below.
I'm trying to write a function that runs an arbitrary command and also captures whether any output was printed to the terminal. I don't want to interfere with the output being printed. In case it's a relevant complication (probably not), I also want to branch on the exit code of the command.
Here's what I have:
function run_and_inspect {
    # this subshell ensures the stdout of "${@}" is both printed and captured
    if output=$(
        set -o pipefail
        "${@}" | tee /dev/tty
    ); then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then
        was_there_output="no"
    else
        was_there_output="yes"
    fi
    echo "output?" $was_there_output
}
Generally this works fine:
$ run_and_inspect true
output? no
$ run_and_inspect "echo hello"
hello
output? yes
But I've found one problem command:
git pull | grep -v 'Already up to date.'
If there is nothing to pull, this pipeline usually produces no output. But, if ssh-add needs to prompt for a passphrase, there is output. It just doesn't get noticed by run_and_inspect:
function git_pull_quiet {
    git pull | grep -v 'Already up to date.'
}
$ run_and_inspect git_pull_quiet
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
output? no
There was output to my terminal. I assume the problem is it didn't come from stdout of the git pull pipeline, which is all run_and_inspect knows about. Where did it come from? How do I fix this? I've tried redirecting stderr too (i.e. git pull 2>&1), but with no luck. Is there some way to monitor /dev/tty directly?
tl;dr (I think!)
I think this question boils down to: why isn't the passphrase prompt in log.txt?
$ git pull 2>&1 | tee log.txt
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
Already up to date.
$ cat log.txt
Already up to date.
why isn't the passphrase prompt in log.txt?
The prompt is printed by OpenSSH's read_passphrase() in readpass.c, reached via load_identity_file. The function does open(_PATH_TTY, ...), where _PATH_TTY is defined as "/dev/tty", and then write()s the prompt to it.
The prompt is written directly to the terminal /dev/tty, not via the standard streams.
This is just like your own tee /dev/tty, which means a caller cannot capture your function's output either. Prefer to preserve the stdout-ness of the child program, and its buffering:
if { tmp=$("$@" > >(tee >(cat >&3))); } 3>&1; then
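Applied to the original function, that might look like this (a sketch under the same assumptions as the one-liner above; untested):
function run_and_inspect {
    # fd 3 is a copy of the function's original stdout; tee forwards the
    # command's output there while the command substitution captures it
    if { output=$("${@}" > >(tee >(cat >&3))); } 3>&1; then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then was_there_output="no"; else was_there_output="yes"; fi
    echo "output?" $was_there_output
}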
Is there some way to monitor /dev/tty directly?
Write your own terminal emulator and spawn your process in it, and then parse all input inside that terminal emulator. Programs like screen or tmux may be of use.
A workaround could be to download proot, open a file descriptor to some file, then create a chroot with just a /dev/tty file symlinked to /proc/self/fd/<that file descriptor>, and run the process with proot inside that chroot. The idea is that the process will see that chroot with the /dev/tty file replaced, and will write to your file descriptor instead of to the terminal.
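A rough, untested sketch of that idea, using proot's -b bind option to place a regular file over /dev/tty (file names hypothetical):
log=$PWD/tty-capture.log
: > "$log"                          # empty the capture file
proot -b "$log":/dev/tty git pull   # writes to /dev/tty land in the file
cat "$log"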

How to capture the output of a bash command which prompts for a user's confirmation without blocking the output nor the command

I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can poll a temporary file, potentially with tee to keep the output flowing if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#! /bin/bash
log=/path/to/log-file
echo > "$log"
(
    # Wait for the prompt string to show up in the log, then act on the
    # partial output captured so far.
    while ! grep -q -F 'continue?' "$log" ; do sleep 2 ; done
    output=$(<"$log")
    echo do-something "$output"
) &
# Run command with output to terminal
apt purge some_package 2>&1 | tee -a "$log"
# If output to terminal is not needed, replace the above command with:
# apt purge some_package > "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.
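For instance, apt's confirmation prompt is "Do you want to continue? [Y/n]", which is why grepping for 'continue?' works here; for another command, substitute that command's own prompt text.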

How do I automatically save the output of the last command I've run (every time)?

If I wanted to have the output of the last command stored in a file such as ~/.last_command.txt (overwriting output of previous command), how would I go about doing so in bash so that the output goes to both stdout and that file? I imagine it would involve piping to tee ~/.last_command.txt but I don't know what to pipe to that, and I definitely don't want to add that to every command I run manually.
Also, how could I extend this to save the output of the last n commands?
Under bash this seems to have the desired effect.
bind 'RETURN: "|tee ~/.last_command.txt\n"'
You can add it to your bashrc file to make it permanent.
I should point out it's not perfect: just hitting the enter key on an empty line gives you:
matt@devpc:$ |tee ~/.last_command.txt
bash: syntax error near unexpected token `|'
So I think it needs a little more work.
This will break programs/features expecting a TTY, but...
exec 4>&1   # keep a copy of the shell's original stdout on fd 4
# Before each prompt: restore the real stdout, rotate the previous command's
# capture into place, and start capturing the next command's output.
PROMPT_COMMAND="exec 1>&4; exec > >(mv ~/.last_command{_tmp,}; tee ~/.last_command_tmp)"
If it is acceptable to record all output, this can be simplified:
exec > >(tee ~/.commands)
To capture (and overwrite) the output of a single command:
script -c ls ~/.last_command.txt
If you want more than 1 command:
$ script ~/.last_command.txt
$ command1
$ command2
$ command3
$ exit
If you want to save everything during a login session, append "script" to your .bashrc.
When starting a new session (after login, or after opening the terminal), you can start another "nested" shell, and redirect its output:
<...login...>
% bash | tee -a ~/.bash_output
% ls # this is the nested shell
% exit
% cat ~/.bash_output
% exit
Actually, you don't even have to enter a nested shell manually every time. You can change your login shell in /etc/passwd from bash to a small wrapper script that runs bash | tee -a ~USERNAME/.bash_output (the shell field itself cannot contain a pipeline).
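A minimal sketch of such a wrapper (path and name hypothetical; it must also be listed in /etc/shells to be accepted as a login shell):
#!/bin/sh
# /usr/local/bin/logged-bash: run an interactive bash and append
# everything it prints to ~/.bash_output
bash -i 2>&1 | tee -a "$HOME/.bash_output"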

How to undo exec > /dev/null in bash?

I used
exec > /dev/null
to suppress output.
Is there a command to undo this? (Without restarting the script.)
To do it right, you need to copy the original FD 1 somewhere else before repointing it to /dev/null. In this case, I store a backup on FD 5:
exec 5>&1 >/dev/null   # save a copy of stdout on FD 5, then silence stdout
...
exec 1>&5              # restore stdout from the backup
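A minimal end-to-end demo of the pattern (sketch):
exec 5>&1 >/dev/null
echo "this goes nowhere"
exec 1>&5 5>&-         # restore stdout and close the backup descriptor
echo "stdout is back"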
Another option is to redirect stdout within a block rather than using exec:
{
...
} >/dev/null
If you just want to get output again at the command prompt, you can do this:
exec >/dev/tty
If you are creating a script, and you want to have the output of a certain group of commands redirected, put those commands in braces:
{
command
command
} >/dev/null
Save the original output targets beforehand (Linux-specific: this relies on /proc).
# $$ = the PID of the running script instance
STDOUT=$(readlink -f /proc/$$/fd/1)
STDERR=$(readlink -f /proc/$$/fd/2)
And restore them again using exec.
exec 1>"$STDOUT" 2>"$STDERR"
Unlike this approach, if you use /dev/tty for restoration as in the answers above, call-level redirections are lost: e.g. bash script.sh &>/dev/null would still write to the terminal after the restore.
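A quick end-to-end illustration (sketch; note that if the original stdout was a regular file, re-opening it with > truncates it, so >> may be safer there):
STDOUT=$(readlink -f /proc/$$/fd/1)
exec >/dev/null       # silence stdout
echo "this is discarded"
exec 1>"$STDOUT"      # re-open the original target by path
echo "stdout is back"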
Not really, as that would require changing the state of a running process. Even assuming you could, whatever you wrote before resetting standard output is truly, completely gone, as it was sent to the bit bucket.
To restore stdout I use exec 1>&5, which only works if a copy was saved beforehand with exec 5>&1; there is no way to "unset" a redirection outright.
