bash read builtin does not echo input if script is piped to less

I stumbled upon this strange behavior of the bash builtin read.
I have an interactive script which has the potential to generate large output, so naturally you append | less to it.
The script will still ask you for your input but it will not echo what you typed.
Here is a small sample.sh:
#!/bin/bash
echo "Type:"
read -r input
echo "Typed: ${input}"
sample.sh | less
I noticed that this is not a general issue with pipes (e.g. |cat works).
Any clue would be appreciated.
A SOLUTION which works for me and did not show any side effects.
Basically, just tweak and restore the terminal settings:
#!/bin/bash
STTY_ORIG="$(stty -g)" # save stty settings
stty echo # enable echo
echo "Type:"
read -e -r input # use readline (backspace will not work otherwise)
echo "Typed: ${input}"
stty "${STTY_ORIG}" # restore stty settings

It actually works for me, with the same script:
martus@makus-pc:/tmp/src$ dpkg -l | grep bash
ii bash 4.4-5 amd64 GNU Bourne Again SHell
martus@makus-pc:/tmp/src$ uname -a
Linux makus-pc 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3+deb9u1 (2017-12-23) x86_64 GNU/Linux
Edit: Does the script work without piping to less? less won't show anything typed until you hit Enter.

Related

echo flag -n is printing out when run from script [duplicate]

How come sh UsersInput.sh gives a different output compared to bash UsersInput.sh?
My script is below:
#!/bin/bash
echo -n "Enter: ";
read usersinput;
echo "You entered, \"$usersinput\"";
bash
localhost:Bash henry$ bash UsersInput.sh
Enter: input
You entered, "input"
sh
localhost:Bash henry$ sh UsersInput.sh
-n Enter:
input
You entered, "input"
How come -n behaves properly with the first, but not with the second? What's the reason for this and is there a workaround?
From man echo:
Some shells may provide a builtin echo command which is similar or identical to this utility. Most notably, the builtin echo in sh(1) does not accept the -n option. Consult the builtin(1) manual page.
In bash, the Bourne-again shell, echo accepts the -n option, whereas in sh, the Bourne shell, echo does not, so it simply echoes everything you wrote, including the -n.
/bin/sh is a version of bash (not a Bourne shell) on OS X. It has POSIX mode enabled and has a few other changes as well. One of them is that the xpg_echo shell option is enabled by default so that the builtin echo conforms to POSIX.
http://pubs.opengroup.org/onlinepubs/009696799/utilities/echo.html:
Implementations shall not support any options
http://www.gnu.org/software/bash/manual/bash.html#Bash-POSIX-Mode:
44. When the xpg_echo option is enabled, Bash does not attempt to interpret any arguments to echo as options. Each argument is displayed, after escape characters are converted.
[...]
As noted above, Bash requires the xpg_echo option to be enabled for the echo builtin to be fully conformant.
You can unset xpg_echo, use /bin/echo, or preferably just use printf:
sh -c 'shopt -u xpg_echo; echo -n aa'
sh -c '/bin/echo -n aa'
sh -c 'printf %s aa'
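If the goal is simply a portable prompt, a printf-based version of UsersInput.sh might look like this (a sketch; printf is specified by POSIX, so it behaves the same under sh and bash):
#!/bin/sh
printf 'Enter: '                            # no trailing newline, no -n needed
read -r usersinput
printf 'You entered, "%s"\n' "$usersinput"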

Why does tmux not insert newlines when pasting a multi-line command?

I often use a REPL-style coding method when writing shell scripts (or other relevant languages), and recently noticed the following issue. I run tmux so I can have my script open in vim in a pane side-by-side with a terminal prompt.
Tmux
If I try to paste multiple lines of commands at once using CMD-v on a Mac, i.e.
a=hello
b=World
echo $a $b
tmux does not process the newlines properly, but instead gives the following output:
[user@host: ~]$ a=hello
b='World'
echo $a $b
[user@host: ~]$ b='World'echo $a $b
If I clear the prompt, and run echo $a, I get hello echo'ed to the screen, but echo $b produces an empty line, and obviously the echo $a $b line does not get run.
I get the same output using a REPL like gnuplot, or when using rlwrap.
Alternate tmux attempt
The same issue occurs when using vim-slime, or using the relevant vim-slime system calls manually:
[user@host: ~]$ tmux set-buffer 'a=hello
> b=World
> echo $a $b
> '
[user@host: ~]$ tmux paste-buffer -p
a=hello
b=World
echo $a $b
[user@host: ~]$ a=hellob=Worldecho $a $b
I have tried tmux paste-buffer with and without the -p flag for bracketed paste mode.
Plain bash shell, or GNU screen
If I perform the same CMD-v paste action in a normal bash shell (not in tmux), I get:
[user@host: ~]$ a=hello
[user@host: ~]$ b=World
[user@host: ~]$ echo $a $b
hello World
[user@host: ~]$
as expected. I get the same output when pasting in GNU screen (v4.04.00).
Question
Why does tmux not process the pasted commands line-by-line, as bash/gnu screen do? How do we fix this problem?
Already asked?
The same issue appears to have been asked at this stackoverflow question, and this other stackoverflow question, but not yet answered satisfactorily.
This answer offers a solution that adds a sleep between commands, which does the trick, but it's a bit of a hack to assume how long each command will take to process before sending the next line of text. There must be a better way.
Versions
I am running Mac OS X El Capitan (v10.11.6), iTerm2 (v3.0.10), tmux (v2.2), GNU bash (v4.4.0).
The same results can be reproduced using Terminal.app (v2.6).
I solved the problem. I had been using reattach-to-user-namespace to interact with the OS X clipboard; however, according to the reattach-to-user-namespace github page:
Note: Under Yosemite (and later) pasteboard access seems to work fine
without the program from this repository.
I removed the set-option -g default-command "reattach-to-user-namespace -l bash" line from my .tmux.conf file. I also changed my tmux mapping to
bind -t vi-copy y copy-pipe "pbcopy"
and it copies text to the OS X clipboard from vi-copy mode as expected. Pasting text using the OS X default Cmd-v produces the expected behavior (like in screen or plain bash shell as described in the question). Thanks to @Alex Torok for prompting my config file debugging.
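For reference, tmux 2.4 replaced the vi-copy key table with copy-mode-vi, so on newer tmux versions the equivalent binding would look roughly like this (an assumption based on the newer syntax, not part of the original setup):
# .tmux.conf for tmux >= 2.4, where vi-copy was renamed to copy-mode-vi
bind -T copy-mode-vi y send-keys -X copy-pipe-and-cancel "pbcopy"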

How can I save and restore TTY settings from a subshell? (Or: understanding pty/tty within subshells)

I am trying to understand stty's operation within a subshell. Here is a small Ruby script I am working with to wrap my head around the behavior. I'm just trying to save and restore the tty state after we exit an infinite loop via ^C.
original_tty = `stty -f /dev/tty -g`
begin
  loop do end
rescue Interrupt
  # Suppress exception
ensure
  $stderr.puts `stty -f /dev/tty #{original_tty}`
  $stderr.puts "stty exits with: #{$?}, i.e. success == #{$?.success?}"
end
Now, put this into a file wut.rb. When I run it at my command line with bash -c 'echo | ruby wut.rb', it works; i.e. stty returns 0. But if I run it in a subshell via command substitution, bash -c 'echo $(echo | ruby wut.rb)', it does not; i.e. stty returns 1.
Running these directly from the command line works as I might expect:
$ foo=$(stty -f /dev/tty -g)
$ echo $foo
gfmt1:cflag=4b00:iflag=6b02:lflag=200005cf:oflag=3:discard=f:dsusp=19:eof=4:eol=ff:eol2=ff:erase=7f:intr=3:kill=15:lnext=16:min=1:quit=1c:reprint=12:start=11:status=14:stop=13:susp=1a:time=0:werase=17:ispeed=38400:ospeed=38400
$ echo | stty -f /dev/tty $foo
$ echo $?
0
$ echo $(echo | stty -f /dev/tty $foo)
$ echo $?
0
I'm on Mac OS X 10.10.2 using bash 3.2 and ruby 2.2.0p0 (2014-12-25 revision 49005) [x86_64-darwin14] if it matters…
Why is this? I think it has to do with the subshell created by command substitution not being/having a tty, but I'm not sure. I would very much appreciate:
any explanation of what's going on (including good readings on PTY/TTY—so far "A Brief Introduction to Termios" and "The TTY demystified" have been the most helpful)
and suggestions as to how I might successfully restore the original tty settings within a command substitution.
Thanks!
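One quick way to see what changes inside a command substitution is to check which descriptors still point at the terminal; here is a minimal bash sketch (not from the question, and it assumes you run it from an interactive terminal):
result=$(
    [ -t 0 ] && echo "stdin is still a tty" >&2
    [ -t 1 ] || echo "stdout is a pipe, not a tty" >&2
    stty -g < /dev/tty               # query the controlling tty directly
)
echo "saved settings: $result"
Inside the $( ) the subshell's stdout is the pipe feeding the substitution, which is why tests and tools that look at fd 1 behave differently there.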

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, run a few commands inside it, and then give control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. The question has also been asked on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
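Since the question asks for an alias implementation, this one-liner can be wrapped directly in .bashrc; a sketch (the alias name and some_command are placeholders):
# ~/.bashrc sketch: open a subshell that sources .bashrc and runs a command first
alias newsh='bash --rcfile <(echo ". ~/.bashrc; some_command")'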
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and obvious attempt around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
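The here-document idea from that parenthetical could be sketched roughly like this (the function and file names are made up; the generated rcfile deletes itself after the new shell has read it):
more() {
    local rc
    rc=$(mktemp) || return
    cat > "$rc" <<EOF
. ~/.bashrc
echo "running extra startup commands"
rm -f -- "$rc"
EOF
    bash --rcfile "$rc"
}
Because the here-document delimiter is unquoted, "$rc" is expanded to the real temporary path when the file is written, so the last line of the generated file removes the file itself.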
bash -c '<some command> ; exec /bin/bash'
will avoid an additional shell sublayer.
You can get the functionality you want by sourcing the script instead of running it. eg:
$ cat script
cmd1
cmd2
$ . script
$ # at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use here-document for dash but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
P.S.: I do not have enough reputation to post comments, moderators please feel free to move it to comments or remove it if not helpful with this question.
In accordance with the answer by daveraja, here is a bash script which will serve the purpose.
Consider a situation where you are using the C shell and you want to execute a command
without leaving the C-shell context/window, as follows.
Command to be executed: search for the exact word 'Testing' recursively in the current directory, only in *.h and *.c files:
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# printed as they execute. This will help with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
    echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
    echo "Usage: `basename $0` find . -name \"*.txt\""
    exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate over each argument
for arg in "$@"
do
    #if an argument contains a white space, enclose it in double quotes and append it to the list
    #otherwise simply append the argument to the list
    if echo "$arg" | grep -q " "; then
        argList="$argList \"$arg\""
    else
        argList="$argList $arg"
    fi
done
#remove a possible leading space at the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
Case in point: I'd like to add escape codes to colorize output, but only when running interactively, not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
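Tying this back to the question's use case, a colorize-only-on-a-terminal wrapper might look roughly like this (a sketch, not taken from the answer above):
#!/bin/sh
# Emit ANSI color codes only when standard output is a terminal.
if [ -t 1 ]; then
    red='\033[31m'; reset='\033[0m'
else
    red=''; reset=''
fi
printf "${red}warning:${reset} color appears only on a tty\n"
When the script's output is piped (for example through cat or less), the test fails and plain text is printed.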
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the STD streams always look like they are being piped. To demonstrate this, I'll use STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
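For the /proc idea just mentioned, a Linux-only sketch could look like this (an assumption about the procfs layout; it is neither POSIX nor portable to BSD):
# Linux-only: see what fd 1 really points at, e.g. "pipe:[12345]" or "/dev/pts/0"
target=$(readlink /proc/$$/fd/1)
case $target in
    pipe:*)               echo "stdout is a pipe" ;;
    /dev/pts/*|/dev/tty*) echo "stdout is a terminal" ;;
    *)                    echo "stdout is something else: $target" ;;
esac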
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The command test (a builtin in Bash) has an option to check if a file descriptor is a tty.
if [ -t 1 ]; then
# Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
# stdout is a terminal
else
# stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works. The -p test does not respond as desired.
File bash_redir_test.sh looks like:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable nor recommended, but for the sake of completeness here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
