Why doesn't "which myscript | xargs vim" work well? - bash

Why doesn't
which myscript | xargs vim
work nicely? My terminal (ubuntu 14.04) freezes when I exit vim.
Or, is there a nice, clean alternative?

Why The Original Doesn't Work
You can't meaningfully pipe anything into vim if you're going to use it as an interactive editor: a pipeline overrides stdin, and an interactive editor needs access to your stdin (unless it's, say, interacting via X11 -- but that would be gvim).
To go into a little more detail: foo | bar runs both foo and bar at the same time, with the stdout of foo connected to the stdin of bar. Thus, which myscript | xargs vim has the shell originally starting two processes -- which myscript and xargs vim -- with the stdout of which myscript connected to the stdin of xargs vim.
However, this means that xargs vim is getting its input from which, and not from the terminal/console that the user was typing at. Thus, when xargs vim starts vim, the stdin which vim inherits isn't connected to the terminal either -- and vim, being an interactive editor built to get input from the user at a terminal, fails (perhaps spectacularly or entertainingly).
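On Linux you can observe this directly, because /proc lets a process inspect its own file descriptors: in a pipeline, the right-hand command's fd 0 points at the pipe, not at your terminal (a quick check; Linux-only since it relies on /proc):
readlink /proc/self/fd/0          # at a bare prompt: prints your tty, e.g. /dev/pts/0
true | readlink /proc/self/fd/0   # prints something like pipe:[123456]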
What To Do Instead
vim "$(which myscript)"
The $() syntax above is a command substitution, which is replaced with the output of the command it runs. As such, while this overrides the stdout of which (directing it into a pipe the shell reads from for purposes of that substitution), it does not in any respect redirect the input and output handed to vim.
Alternately, if you really want to use xargs (note the following uses -d, a GNUism, to ensure that it works correctly when passed filenames with spaces -- though not filenames with newlines):
which myscript | xargs -d $'\n' sh -c 'exec vim "$@" <&2' _
The above has xargs, instead of running vim directly, start a shell which copies stderr (file descriptor 2) to stdin (file descriptor 0, the default target of the < redirection) and then starts vim, so that vim gets a stdin that's attached to your terminal. The trailing _ fills in $0 for the inner shell, so all of the filenames land in "$@". If your stderr isn't open to your TTY, replace <&2 with </dev/tty instead.
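If you do this often, you could wrap the command-substitution form in a small function for your ~/.bashrc (the name vw is just an illustration; command -v is the shell-builtin equivalent of which):
# edit the script a command name resolves to; usage: vw myscript
vw() {
    local target
    target=$(command -v "$1") || { echo "vw: $1 not found" >&2; return 1; }
    vim "$target"
}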


How to make tee in Linux provide screen output line by line, not at the end of execution? [duplicate]

Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee.
I have a C++ program, a, that outputs strings, always \n-terminated, to stdout.
When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee.
I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?
You can try stdbuf:
$ stdbuf --output=L ./a | tee output.txt
(big) part of the man page:
-i, --input=MODE adjust standard input stream buffering
-o, --output=MODE adjust standard output stream buffering
-e, --error=MODE adjust standard error stream buffering
If MODE is 'L' the corresponding stream will be line buffered.
This option is invalid with standard input.
If MODE is '0' the corresponding stream will be unbuffered.
Otherwise MODE is a number which may be followed by one of the following:
KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
In this case the corresponding stream will be fully buffered with the buffer
size set to MODE bytes.
keep this in mind, though:
NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does
for example) then that will override corresponding settings changed by 'stdbuf'.
Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O,
and are thus unaffected by 'stdbuf' settings.
you are not running stdbuf on tee, you're running it on a, so this shouldn't affect you, unless you set the buffering of a's streams in a's source.
Also, stdbuf is not POSIX, but part of GNU coreutils.
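A quick way to see the effect (a toy demo, assuming GNU coreutils; Ctrl-C to stop): without stdbuf, grep's output sits in its block buffer because its stdout is a pipe; with stdbuf -oL each line appears after a second.
while sleep 1; do echo tick; done | grep tick | cat               # nothing shows for a long time
while sleep 1; do echo tick; done | stdbuf -oL grep tick | cat   # one line per second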
Try unbuffer, which is part of the expect package. You may already have it on your system.
In your case you would use it like this:
unbuffer ./a | tee output.txt
The -p option is for pipeline mode, where unbuffer reads from stdin and passes it on to the command given in the remaining arguments (it is not needed in the example above).
You can use setlinebuf from stdio.h.
setlinebuf(stdout);
This should change the buffering to "line buffered".
If you need more flexibility you can use setvbuf.
You may also try to execute your command in a pseudo-terminal using the script command (which should enforce line-buffered output to the pipe)!
script -q /dev/null ./a | tee output.txt # Mac OS X, FreeBSD
script -c "./a" /dev/null | tee output.txt # Linux
Be aware the script command does not propagate back the exit status of the wrapped command.
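On Linux, recent util-linux versions of script do have a -e (--return) option that propagates the child's exit status; a sketch, assuming a script version that supports the flag (BSD/macOS script does not):
script -qec "./a" /dev/null | tee output.txt
Note that the pipeline's overall status is still tee's; combine this with set -o pipefail if you want to observe script's status.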
The unbuffer command from the expect package, shown in the answer above, did not work for me the way it was presented.
Instead of using:
./a | unbuffer -p tee output.txt
I had to use:
unbuffer -p ./a | tee output.txt
(-p is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments)
The expect package can be installed on:
MSYS2 with pacman -S expect
Mac OS with brew install expect
Update
I recently had buffering problems with python inside a shell script (when trying to append a timestamp to its output). The fix was to pass the -u flag to python, this way:
run.sh with python -u script.py
unbuffer -p /bin/bash run.sh 2>&1 | tee /dev/tty | ts '[%Y-%m-%d %H:%M:%S]' >> somefile.txt
This command will put a timestamp on the output and send it to a file and stdout at the same time.
The ts program (timestamp) can be installed with the moreutils package.
Update 2
Recently, I also had problems with grep buffering its output; passing the --line-buffered argument to grep made it stop buffering.
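For example (file names are hypothetical), without the flag grep's matches can lag far behind the writer whenever its output goes to a pipe:
tail -f app.log | grep --line-buffered ERROR | tee errors.log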
If you use the C++ stream classes instead, every std::endl is an implicit flush. Using C-style printing, I think the method you suggested (fflush()) is the only way.
The best answer IMO is grep's --line-buffered option as stated here:
https://unix.stackexchange.com/a/53445/40003

Use pipe ("|") as first symbol in bash or zsh command

I have two simple scripts:
./cpthat
BlueM/cliclick types keystrokes -- Shift+Cmd+A, then Cmd+C -- into the active iTerm terminal:
#!/bin/zsh
cliclick kd:shift,cmd t:a ku:shift t:c ku:cmd
pbpaste>$THATF
Shift+Cmd+A selects the output from the previous command, and
Cmd+C copies "that" to the clipboard.
pbpaste then writes that to the file $THATF defined system-wide.
./that
#!/bin/zsh
cat $THATF
This prints out the output of the last command as stored by cpthat.
(I know I can run $ command > $THATF directly but for other reasons I need to act retroactively on the command output. Also, not thread safe.)
The challenge:
I'm trying to get to where I can start a zsh or bash command with a pipe:
$ |grep -i sometext
Where, in effect, this happens:
$ that|grep -i sometext
Would this be possible somehow?
Overriding the pipe operator?
zsh config magic?
I'm using zsh heavily but am open for any solution.
You don't need to start with a |. The grep utility naturally reads STDIN.
Here's a contrived example:
#!/bin/sh
# count_matches
grep "$1" | wc -l
$ cat file | count_matches thing
You can see that the | you're looking for is on the command line itself, not within the script.
Similarly this works:
$ count_matches thing < file
In the first example, the STDIN is connected (via the pipe) to the output of the first command (trivially cat). In the second, it's from the actual file via redirection.
So, just get rid of the | and you should be good to go.
(Animation omitted: it demonstrated a | alone at the beginning of a command being replaced automatically by the output of the previous command.)
Edit ~/.zshrc to override zsh's zle accept-line widget:
readonly THATF="path/to/your/temporary/file"
my-accept-line () {
  if [[ "${BUFFER:0:1}" == '|' ]]; then
    /usr/local/bin/cliclick kd:shift,cmd t:a ku:shift w:100 t:c ku:cmd
    pbpaste > "${THATF}"
    BUFFER='cat ${THATF} '${BUFFER}
  fi
  zle .accept-line
}
zle -N accept-line my-accept-line
Explanation
When you hit enter after entering a command, zsh runs the accept-line widget.
We override that widget, but before exiting we remember to call the original widget with zle .accept-line. With the dot prefix, the factory widget is run.
In iTerm2, shift+cmd+a selects all the output from the previous command, and cmd+c copies that to the system pasteboard.
We paste the contents of the pasteboard and redirect that to the temporary file declared earlier, pointed to by ${THATF}.
We prepend $BUFFER, the zsh special variable available within zle widget code, with the output of the previous command.
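After sourcing the updated ~/.zshrc, a line that begins with a pipe is rewritten before it runs. For example, typing:
|grep -i sometext
is executed as:
cat ${THATF} |grep -i sometext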
Dependencies, caveats:
This particular solution depends on:
cliclick for dispatching macOS keyboard events. Perhaps a native solution exists, e.g. an ANSI escape sequence.
iTerm to handle the keybind for copying the last command's output.
zsh for the zle widget.
The code snippet above is a proof of concept only and is wildly insecure.

Remove color and redirect output of a bash script from within

My question is simple: how can I redirect all output of a bash script to both a file and the terminal, and remove color characters, from within the script itself?
I can't find an answer which fits all my needs.
So far I tried tee to output to file and terminal, combined with 2>&1 to get stderr and stdout, sed to remove color characters, and exec to do all of this from within my script, but it doesn't work: I only get colored logs in the terminal and nothing in the file.
#!/usr/bin/env bash
exec 2>&1 | sed -r 's/\x1b\[[0-9;]*m//g' | tee script.logs
python somepython.py
python someotherpython.py
Here the python scripts produce outputs which are colored.
I want to log those to terminal (untouched) and to file (without the color). In reality there is a lot more going on in my bash script than just those two python scripts; that's why I want to globally redirect the output of my bash script and not just pipe after each python script.
Thus I used exec, because I thought it allowed redirecting all output produced by a script.
Thanks in advance for any advice and help,
PS: I don't want colored logs in the file, but I don't mind losing colors in the terminal if that's what it takes to keep the file color-free.
You may put all your calls in a curly-braced group and redirect the whole lot, e.g.:
#!/usr/bin/env bash
{
    python somepython.py
    python someotherpython.py
} 2>&1 | sed -r 's/\x1b\[[0-9;]*m//g' | tee script.logs
This way, all stdout and stderr outputs will be passed along the filter.
Color the terminal, not the file
If you want to write the colors to the terminal and write the uncolored text to the file, you may apply the sed filter to the file written by tee, e.g. your script would look something like:
#!/usr/bin/env bash
{
    python somepython.py
    python someotherpython.py
} 2>&1 | tee >(sed -r 's/\x1b\[[0-9;]*m//g' > script.logs)
This uses process substitution which is a very powerful tool in bash, albeit a bit difficult at first.
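To see process substitution in isolation: >(cmd) expands to a path such as /dev/fd/63, and whatever is written to that path becomes cmd's stdin. A minimal illustration (upper.txt is just an example name):
echo hello | tee >(tr a-z A-Z > upper.txt)
# prints "hello" on the terminal and writes "HELLO" to upper.txt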
Remove buffering
Assuming you would like to read the contents as soon as possible, you may want to deactivate python’s block buffering. This can be done using the -u option:
#!/usr/bin/env bash
{
    python -u somepython.py
    python -u someotherpython.py
} 2>&1 | tee >(sed -r 's/\x1b\[[0-9;]*m//g' > script.logs)
Side Note: Beware of CSI
Not all special characters are based on the Control Sequence Initiator ESC [.
If text is colored red with tput setaf 1, the program writes the control sequence ESC [31m; but if the color is reset with tput sgr0, the control sequence may be ESC (B ESC [m (note the ESC (B part). So if you filter only the ESC [ sequences, you may still have control-sequence waste in your log file.
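If you do want the filter to also catch that reset sequence, you can extend the sed expression; a sketch, which still won't cover every control sequence a program may emit:
sed -r 's/\x1b\[[0-9;]*m//g; s/\x1b\(B//g'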
Things get even worse if the program uses other types of control characters such as cursor commands.
For those reasons, the best way to avoid problems is simply not to write control sequences from your python scripts in the first place. Most well-behaved programs protect against this by checking whether the output is a terminal before choosing to display colors; when the output is not a terminal, they assume colors might cause issues.
With that said I don’t know if you have control over the python scripts (or other calls you might have), but if you do you might want to test if the output is a terminal. In Bash you check this way:
if [ -t 1 ] # does stdout end up on a terminal?
then
    # Display fancy colors
else
    # Minimalist display
fi
In Python it would be:
import sys

if sys.stdout.isatty():  # does stdout end up on a terminal?
    pass  # Display fancy colors
else:
    pass  # Minimalist display

How can I duplicate standard input (stdin) to multiple subprocesses in a bash script?

I want to redirect stdin to multiple scripts, in order to test an in-development git hook while leaving the old one in place. I know I should use tee somehow, but I don't see how I can use the basic >, < and pipe | redirection features of bash to do this. Furthermore, how can I redirect the stdin of a script? I don't want to use read, because that only reads one line at a time and I'd have to re-execute all subprocesses for each line.
You could use tee with normal files (possibly temp files via mktemp), then cat those files to your various scripts. More directly, you could replace those normal files with Named Pipes created with mkfifo. But you can do it in one pipe using Bash's powerful Process Substitution >( cmd ) and <( cmd ) features to replace the file tee expects with your subprocesses.
No explicit <&0 redirection is needed on the first tee to get the script's stdin: as chepner pointed out, tee inherits the shell's stdin by default.
The final result is this wrapper script:
#!/bin/bash
set +o pipefail
tee >(testscript >> testscript.out.log 2>> testscript.err.log) | oldscript
Some notes:
use set +o pipefail to disable Bash's pipefail feature if it was previously enabled. When enabled, Bash reports errors from within the pipe; when disabled, it only reports errors of the last command, which is what we want here to keep our testscript invisible to the wrapper (we want it to behave as if it were just calling oldscript, to avoid disruption).
redirect the stdout of testscript, otherwise it'll be forwarded to the next command in the pipeline, which is probably not what you want. Redirect stderr too while you're at it.
Any number of tees can be chained in the pipeline like this to duplicate your input, as in the sketch below.
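For example, to fan stdin out to two candidate scripts while still feeding the old one (otherscript is a hypothetical second test script):
tee >(testscript >> testscript.out.log 2>> testscript.err.log) \
  | tee >(otherscript >> otherscript.out.log 2>> otherscript.err.log) \
  | oldscript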

bash redirection via xterm and mpg123

Here is a part of my .fluxbox/startup file
(a=($(grep "^1 " $HOME/Documents/4.n3u|awk '{print "/home/g" $2}'|sort -R|head -20)); \
xterm -e mpg123 -C ${a[@]} &>$HOME/Documents/mpg123.dat &)
As written, the redirection fails, all such output appearing in the xterm instead. The man page for xterm reads, in part,
-e program [ arguments ... ]
This option specifies the program (and its command line argu‐
ments) to be run in the xterm window. It also sets the window
title and icon name to be the basename of the program being
executed if neither -T nor -n are given on the command line.
This must be the last option on the command line.
mpg123 plays the content of array a as desired, and can be controlled through the keyboard as option -C specifies, but xterm seems to frustrate the redirect to file. Is that redirection possible in this context?
Alternatively, I can run it without the xterm to contain mpg123, in which case I get the redirect, but cannot control mpg123 through the keyboard because it is running in some background subshell with no connection to the keyboard. Is there any way to establish that connection?
You have redirected the stdout and stderr of the xterm process, but xterm does not normally print anything on its own stdout and stderr. The only things that would show up there would be errors related to xterm itself (like if it unexpectedly lost its connection to the X server).
xterm creates a tty and runs the child process (-e command or a shell) with stdin, stdout, and stderr attached to that tty. You need to put the redirection inside the -e to have it apply in the child process, like this:
xterm -e 'your command > whatever'
SECOND ATTEMPT
To keep the ${a[@]} argument list intact but also use the shell redirection operator, you're going to have to explicitly invoke a shell with -c. Like this:
xterm -e sh -c 'your command "$@" > whatever' dummy "${a[@]}"
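Applied to the startup line from the question, that would look something like this (a sketch; > file 2>&1 is used instead of bash's &> because sh may not be bash, and exec just avoids leaving an extra sh around):
(a=($(grep "^1 " $HOME/Documents/4.n3u|awk '{print "/home/g" $2}'|sort -R|head -20)); \
xterm -e sh -c 'exec mpg123 -C "$@" > "$HOME/Documents/mpg123.dat" 2>&1' _ "${a[@]}" &)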
