Why does `ls | cat` != `ls`? - bash

Why is there a difference in output between these two commands:
ls | cat
ls
The first one seems to separate filenames with a newline.
This also applies to commands such as ls > outfile, and similar things.
I'm on Mac OS X, if that makes any difference.

The ls command checks the file type of its standard output. If it detects a “tty”, it emits multicolumn output. If it detects any other file type (like a disk file or a pipe), it emits single column output.
Tty is short for teletype. In the old days, you would use an actual teletype to interact with a Unix system. So if standard output was a teletype, ls would produce output optimized for humans. Otherwise, it would produce output optimized for programs.
With the advent of other ways to run an interactive shell, like telnet sessions and window systems, the Unix authors created the pseudo-teletype, or “pty”, a software-only “device” which pretends to be a teletype. One program (like the telnet server, the ssh server, or the terminal window) can use a pty to make another program (the shell) think it is talking to a teletype.
You can use the -C flag to force ls to emit multicolumn output. You can use the -1 (digit one) flag to force single column output.
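For example (a quick sketch; -1 and -C are the flags described above, and [ -t 1 ] is the standard shell test for "is standard output a terminal"):
ls -1            # one name per line, even on a terminal
ls -C | cat      # multicolumn layout, even through a pipe
# the same check ls performs, usable in your own scripts:
if [ -t 1 ]; then echo "stdout is a tty"; else echo "stdout is a pipe or file"; fi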

Related

tee output not appearing until cmd finishes

Usually if I want to print the output of a command and also capture that output in a file, tee is the solution. But I'm writing a script around a utility which seems to have a special behaviour: the WPS wireless assessment tool bully.
If I run a bully command as normal (without tee), the output is shown in the standard way, step by step. But if I pipe it into tee at the end, like | tee "/path/to/my/logfile", the output on the screen freezes. It shows nothing until the command ends, and only then shows everything in one shot (not step by step); it does, of course, also put the output in the log file.
An example of bully command: bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
Why? Not sure if it only happens with bully or if there are other programs with the same behaviour.
Is there another way to capture the output into a file having the output on screen in real time?
What you're seeing is full buffering vs line buffering. By default, stdout is line-buffered when it is connected to a tty (i.e. interactive) and fully buffered otherwise (for example when writing to a pipe or a file). See the setvbuf(3) man page for a more detailed explanation.
Some commands offer an option to force line buffering (e.g. GNU grep has --line-buffered). But that sort of option is not widely available.
Another option is to use something like expect's unbuffer command if you want to be able to see the output more interactively (at the cost of depending on expect, of course).
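For example, either of the following should keep the log updating line by line (a sketch reusing the bully command from the question; unbuffer comes from the expect package and stdbuf from GNU coreutils, and neither helps if the program does its own internal buffering):
unbuffer bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
stdbuf -oL bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt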

Do some programs not accept process substitution for input files?

I'm trying to use process substitution for an input file to a program, and it isn't working. Is it because some programs don't allow process substitution for input files?
The following doesn't work:
bash -c "cat meaningless_name"
>sequence1
gattacagattacagattacagattacagattacagattacagattacagattaca
>sequence2
gattacagattacagattacagattacagattacagattacagattacagattaca
bash -c "clustalw -align -infile=<(cat meaningless_name) -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Less verbose output, finishing with: No sequences in file. No alignment!)
But the following controls do work:
bash -c "clustalw -align -infile=meaningless_name -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with: CLUSTAL-Alignment file created [output_alignment.aln])
bash -c "cat <(cat meaningless_name) > meaningless_name2"
diff meaningless_name meaningless_name2
(No output: the two files are the same)
bash -c "clustalw -align -infile=meaningless_name2 -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with: CLUSTAL-Alignment file created [output_alignment.aln])
Which suggests that process substitution itself works, but that the clustalw program doesn't like it - perhaps because it creates a non-standard file, or a file with an unusual name.
Is it common for programs to not accept process substitution? How would I check whether this is the issue?
I'm running GNU bash version 4.0.33(1)-release (x86_64-pc-linux-gnu) on Ubuntu 9.10. Clustalw is version 2.0.10.
Process substitution hands the program a pipe (a /dev/fd/N path on Linux, or a named FIFO on systems without /dev/fd). You can't seek in a pipe, so a program that seeks around in, or re-reads, its input file will fail.
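You can see what the program is actually handed (the exact descriptor number and listing format vary; this is what it typically looks like on Linux):
ls -l <(echo test)
# lr-x------ 1 user user 64 ... /dev/fd/63 -> 'pipe:[123456]'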
Yes. I've noticed the same thing in other programs. For instance, it doesn't work in emacs either: it gives "File exists but can not be read". And it's definitely a special file, for me /proc/self/fd/some_number. It also doesn't work reliably in either less or most with default settings.
For most:
most <(/bin/echo 'abcdef')
and anything shorter displays nothing; longer input has its beginning truncated. less apparently works, but only if you specify -f.
I find zsh's =(...) much more useful in practice. It's syntactically the same, except = instead of <, but it just creates a temporary file, so support doesn't depend on the program.
EDIT:
I found that zsh uses TMPPREFIX to choose the temporary filename. So even if you don't want your real /tmp to be a tmpfs, you can mount one for zsh and point TMPPREFIX at it.
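A quick illustration, run inside zsh (the /dev/shm path is just an example of a RAM-backed location):
ls -l =(echo test)        # a regular file such as /tmp/zshXXXXXX, not a pipe
TMPPREFIX=/dev/shm/zsh    # put the temporary files somewhere else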

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash sends the inner command's output through a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
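You can see the substituted file name directly (the exact descriptor number varies):
echo <(true)
# /dev/fd/63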
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
rm a_fifo b_fifo    # clean up the FIFO nodes afterwards
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution using the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The command docker outputs its help to stderr (i.e. file descriptor 2)
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means "run the command one before this one" - the last command
!-2 means "run the command two before this one"
2>&1 sends the output of file descriptor 2 (stderr) to the same place as the output of file descriptor 1 (stdout)
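Spelled out without history expansion, the comparison is (using the two commands typed above):
diff <(docker attach 2>&1) <(docker attach --help 2>&1)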
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. With command substitution, $(command) is replaced with the output of the command.
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare), since bcomp launches the comparison and waits for it to complete. If you use bcompare, it launches the comparison and exits immediately, so the temporary files created to store the output of the commands disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following, which is the difference between <(...) and =(...):
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
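To make the quoted distinction concrete (a sketch; the word list and log file here are made up, and =(...) is zsh-only):
fgrep -f <(cut -d: -f1 /etc/passwd) server.log   # fine: fgrep reads the word list once, sequentially
vim =(ls -l)                                     # an editor needs a real, seekable file, so use =(...)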

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use -- fancy diffs, build scripts, etc, produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone - presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
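A minimal session looks like this (script with no arguments writes to a file called typescript in the current directory):
script              # start capturing everything, color codes included
# ...run your colorful commands here...
exit                # end the capture session
less -R typescript  # view the capture with the colors rendered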
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
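For example, with GNU ls (the file name is arbitrary):
ls --color=always > listing.txt   # force the color codes into the file
less -R listing.txt               # codes are interpreted, output is colored
less listing.txt                  # without -r/-R the codes show up as literal escape sequences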
Inspired by the other answers, I started using script. I had to use -c to get it working, though. The other answers, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting a shell command during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output is not well live-observable, coming in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute, piping from my command to script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
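A quick way to see --return in action (a small sketch with util-linux script, as shipped on Ubuntu):
script --return --quiet --command "false" /dev/null
echo $?   # prints 1, the exit status of the wrapped command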
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is all messed up too) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they detect that the output is not a TTY (i.e. when you redirect them into another program). You can tell some of those to use color forcefully, and tell the pager to turn on colorization, for example with less -R.
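For example, with GNU grep (the pattern and file name are arbitrary; other tools have their own force-color flags):
grep --color=always 'error' logfile | less -R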
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time) then just send the output of tee to /dev/null:
command | tee filename > /dev/null

How do I jump to the first line of shell output? (shell equivalent of emacs comint-show-output)

I recently discovered 'comint-show-output' in emacs shell mode, which jumps to the first line of shell output - incredibly handy when looking at output that exceeds a screen length. The advantages of this command over scrolling with 'page up' are A) you don't have to scan with your eyes for the first line of the output, and B) you only have to hit the key combo once (instead of hitting 'page up' a number of times that probably isn't known beforehand).
I thought about ending all my commands with '| more' but actually this is not what I want since most of the time, I want to retain all output in the terminal buffer, and I usually want to see the end of the shell output first.
I use OS X. Is there an equivalent terminal app (on OS X) and shell (on remote Linux) combination (so I can do something similar without using emacs all the time - I know, crazy talk)? I normally use bash, but would be fine with switching shells just for this feature.
The way I do this sort of thing is by sending my output to a file and then watching the file as it is written. You still get the results of the command dumped to terminal history in real time and can still inspect the output's actual contents further after the fact (or in another terminal, etc...)
command > output &   # run the command in the background, writing to the file
tail -f output       # follow the output live as it is written
head output          # afterwards, jump straight to the first lines of the output
You could always do something in bash like this:
alias foo='!! | more'
which would make foo run the previous command with more. I'm not sure of any way to do exactly what you are suggesting.
If you're expecting a lot of output and don't want to run your command twice, you can use tee(1) to fork the output:
my-command | tee /tmp/my-command.log | less
This will pipe the output to a paginator (less), while simultaneously logging the output to a file (in this case, a file named /tmp/my-command.log). If you need to review the output after you've quit from less, you can just cat the log file instead of re-running the command.
