Sending a small bit of input to a remote heroku run, I see the stdin echoed back, like so:
$ echo 'foobar' | heroku run wc
Running `wc` attached to terminal... up, run.2758
foobar
1 1 7
I'd rather not have the process's stdin echoed back; I'd like it to work like a local run:
$ echo foobar | wc
1 1 7
(My real command is sending hundreds of megabytes up to the remote command.)
Is there any way to invoke heroku run, piping it local data, but suppressing its echo back of the data?
heroku run --no-tty will prevent stdin from being echoed back, so you can pipe local data to the command:
$ echo 'foobar' | heroku run --no-tty wc
Here's the full version of my question. I'm including all this detail in case my hunch is wrong, but you may want to skip to the tl;dr below.
I'm trying to write a function that runs an arbitrary command and also captures whether any output was printed to the terminal. I don't want to interfere with the output being printed. In case it's a relevant complication (probably not), I also want to branch on the exit code of the command.
Here's what I have:
function run_and_inspect {
    # this subshell ensures the stdout of "$@" is both printed and captured
    if output=$(
        set -o pipefail
        "$@" | tee /dev/tty
    ); then
        did_cmd_work="yes"
    else
        did_cmd_work="no"
    fi
    if [ -z "$output" ]; then
        was_there_output="no"
    else
        was_there_output="yes"
    fi
    echo "output?" $was_there_output
}
Generally this works fine:
$ run_and_inspect true
output? no
$ run_and_inspect echo hello
hello
output? yes
But I've found one problem command:
git pull | grep -v 'Already up to date.'
If there is nothing to pull, this pipeline usually produces no output. But, if ssh-add needs to prompt for a passphrase, there is output. It just doesn't get noticed by run_and_inspect:
function git_pull_quiet {
    git pull | grep -v 'Already up to date.'
}
$ run_and_inspect git_pull_quiet
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
output? no
There was output to my terminal. I assume the problem is it didn't come from stdout of the git pull pipeline, which is all run_and_inspect knows about. Where did it come from? How do I fix this? I've tried redirecting stderr too (i.e. git pull 2>&1), but with no luck. Is there some way to monitor /dev/tty directly?
tl;dr (I think!)
I think this question boils down to: why isn't the passphrase prompt in log.txt?
$ git pull 2>&1 | tee log.txt
Enter passphrase for key '/home/foo/.ssh/id_ed25519':
Already up to date.
$ cat log.txt
Already up to date.
why isn't the passphrase prompt in log.txt?
The prompt is printed by OpenSSH's read_passphrase() in readpass.c (reached when loading the identity file). That function open()s _PATH_TTY, which is defined as "/dev/tty", and then write()s the prompt to it.
In other words, the output goes directly to the terminal /dev/tty, not through the standard streams.
That is exactly what you do with your own tee /dev/tty, which means anything wrapping your function will fail to see its output in the same way. Prefer to preserve the stdout-ness of the child program and preserve buffering:
if { tmp=$("$@" > >(tee >(cat >&3))); } 3>&1; then
Is there some way to monitor /dev/tty directly?
Write your own terminal emulator and spawn your process in it, and then parse all input inside that terminal emulator. Programs like screen or tmux may be of use.
A workaround could be to install proot, open a file descriptor to some file, then create a chroot with just a /dev/tty file symlinked to /proc/self/fd/<that file descriptor>, and run the process inside that chroot with proot. The idea is that the process will see the chroot's replacement /dev/tty and will write to your file descriptor instead of the terminal.
I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge some_package 2>&1)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can tail a temporary file, potentially with tee to keep the output flowing to the terminal if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#! /bin/bash
log=/path/to/log-file
: > "$log"
(
    while ! grep -q -F 'continue?' "$log"; do sleep 2; done
    output=$(<"$log")
    echo do-something "$output"
) &

# Run command with output to terminal
apt purge some_package 2>&1 | tee -a "$log"

# If output to terminal not needed, replace above command with
apt purge some_package > "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.
I run a python command from bash as the current user, by doing this:
su $USER -c 'python3 -m site --user-site'
This works properly and prints the following:
/Users/standarduser7/Library/Python/3.6/lib/python/site-packages
I want to assign this output to a variable, so I'm using "$(command)":
target="$(su $USER -c 'python3 -m site --user-site')"
At this point, the OSX terminal hangs and has to be killed. Using backticks instead of "$(command)" leads to same result.
However, if I run the command without user, everything works as it should:
target="$(python3 -m site --user-site)"
echo "$target"
output: /Users/standarduser7/Library/Python/3.6/lib/python/site-packages
How can I assign the output from a command run as the current user to a variable?
I don’t think it’s hanging; I think it’s showing a blank (prompt-less) command-line and is waiting for input. When I key in the user password, it returns this result:
Password:/Users/CK/Library/Python/2.7/lib/python/site-packages
and this is what ends up being stored in the target variable. A quick parameter substitution can rectify this anomalous output:
target=${target#*:}
The other solution (credit given to this answer) is to create a file descriptor as a copy of stdout, then tee the command to the copy, which then allows stdout to be piped to grep in order to process the output:
exec 3>&1 # create a copy of stdout
target=$(su $USER -c "python -m site --user-site" | tee /dev/fd/3 | grep -o '\/.*')
exec 3>&- # close copy
If I wanted to store the output of the last command in a file such as ~/.last_command.txt (overwriting the output of the previous command), how would I do that in bash so that the output goes to both stdout and that file? I imagine it would involve piping to tee ~/.last_command.txt, but I don't know what to pipe to it, and I definitely don't want to add that to every command I run manually.
Also, how could I extend this to save the output of the last n commands?
Under bash this seems to have the desired effect.
bind 'RETURN: "|tee ~/.last_command.txt\n"'
You can add it to your bashrc file to make it permanent.
I should point out it's not perfect. Just hitting the enter key and you get:
matt#devpc:$ |tee ~/.last_command.txt
bash: syntax error near unexpected token `|'
So I think it needs a little more work.
This will break programs/features that expect a TTY, but...
exec 4>&1
PROMPT_COMMAND="exec 1>&4; exec > >(mv ~/.last_command{_tmp,}; tee ~/.last_command_tmp)"
If it is acceptable to record all output, this can be simplified:
exec > >(tee ~/.commands)
Overwrite for 1 command:
script -c ls ~/.last_command.txt
If you want more than 1 command:
$ script ~/.last_command.txt
$ command1
$ command2
$ command3
$ exit
If you want to save during 1 login session, append "script" to .bashrc
When starting a new session (after login, or after opening the terminal), you can start another "nested" shell, and redirect its output:
<...login...>
% bash | tee -a ~/.bash_output
% ls # this is the nested shell
% exit
% cat ~/.bash_output
% exit
Actually, you don't even have to enter a nested shell every time. You can simply replace your shell in /etc/passwd with a small wrapper script that runs bash | tee -a ~USERNAME/.bash_output.
In my case I have to run openvpn before ssh'ing into a server, and the openvpn command echos out "Initialization Sequence Completed".
So, I want my script to setup the openvpn and then ssh in.
My question is: How do you execute a command in bash in the background and await it to echo "completed" before running another program?
My current way of doing this is having 2 terminal panes open, one running:
sudo openvpn --config FILE
and in the other I run:
ssh SERVER
once the first terminal pane has shown me the "Initialization Sequence Completed" text.
It seems like you want to run openvpn as a process in the background while processing its stdout in the foreground.
exec 3< <(sudo openvpn --config FILE)
sed '/Initialization Sequence Completed$/q' <&3 ; cat <&3 &
# VPN initialization is now complete and running in the background
ssh SERVER
Explanation
Let's break it into pieces:
- echo <(sudo openvpn --config FILE) would print out something like /dev/fd/63
  - the <(..) runs openvpn in the background, and...
  - attaches its stdout to a file descriptor, which is what echo prints
- exec 3< /dev/fd/63 (where /dev/fd/63 is the file descriptor printed in step 1)
  - this tells the shell to open that file descriptor for reading, and...
  - make it available as file descriptor 3
- sed '/Initialization Sequence Completed$/q' <&3
  - now we run sed in the foreground, but make it read from the file descriptor 3 we just opened
  - as soon as sed sees that the current line ends with "Initialization Sequence Completed", it quits (the /q part)
- cat <&3 &
  - openvpn will keep writing to the file descriptor and eventually block if nothing reads from it
  - to prevent that, we run cat in the background to read the rest of the output
The basic idea is to run openvpn in the background, but capture its output somewhere so that we can run a command in the foreground that will block until it reads the magic words, "Initialization Sequence Completed". The above code tries to do it without creating messy temporary files, but a simpler way might be just to use a temporary file.
Use -m 1 together with --line-buffered in grep to terminate the grep after the first match in a continuous stream. This should work:
sudo openvpn --config FILE | grep -m 1 --line-buffered "Initialization Sequence Completed" && ssh SERVER