Do some programs not accept process substitution for input files? - bash

I'm trying to use process substitution for an input file to a program, and it isn't working. Is it because some programs don't allow process substitution for input files?
The following doesn't work (the first command just shows the contents of the input file):
bash -c "cat meaningless_name"
>sequence1
gattacagattacagattacagattacagattacagattacagattacagattaca
>sequence2
gattacagattacagattacagattacagattacagattacagattacagattaca
bash -c "clustalw -align -infile=<(cat meaningless_name) -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Less verbose output, finishing with:
No sequences in file. No alignment!
)
But the following controls do work:
bash -c "clustalw -align -infile=meaningless_name -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
)
bash -c "cat <(cat meaningless_name) > meaningless_name2"
diff meaningless_name meaningless_name2
(No output: the two files are the same)
bash -c "clustalw -align -infile=meaningless_name2 -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
)
Which suggests that process substitution itself works, but that the clustalw program doesn't like it - perhaps because process substitution creates a non-standard file, or creates a file with an unusual filename.
Is it common for programs to not accept process substitution? How would I check whether this is the issue?
I'm running GNU bash version 4.0.33(1)-release (x86_64-pc-linux-gnu) on Ubuntu 9.10. Clustalw is version 2.0.10.

Process substitution creates a named pipe. You can't seek into a named pipe.
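A quick way to check whether that is the problem (a sketch, assuming Linux with strace installed; the clustalw flags are the ones from the question):
ls -l <(cat meaningless_name)    # the "file name" is a /dev/fd/NN entry backed by a pipe
strace -f -e trace=lseek clustalw -align -infile=<(cat meaningless_name) 2>&1 | grep lseek
# lseek calls failing with ESPIPE ("Illegal seek") show the program is trying to
# seek on its input, which a pipe cannot do.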

Yes. I've noticed the same thing in other programs. For instance, it doesn't work in emacs either. It gives "File exists but can not be read". And it's definitely a special file, for me /proc/self/fd/some_number. And it doesn't work reliably in either less or most, with default settings.
For most:
most <(/bin/echo 'abcdef')
displays nothing for an input this short; longer inputs are shown with the beginning truncated. less apparently works, but only if you specify -f.
I find zsh's =(...) much more useful in practice. It's syntactically the same, except = instead of <. But it creates a temporary file instead of a pipe, so support doesn't depend on the program.
EDIT:
I found zsh uses TMPPREFIX to choose the temporary filename. So even if you don't want your real /tmp to be tmpfs, you can mount one for zsh.
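For example, a sketch in zsh (the mount point is an arbitrary name and the tmpfs size is just an example):
mkdir -p ~/ztmp                                 # arbitrary mount point
sudo mount -t tmpfs -o size=64m tmpfs ~/ztmp    # a small tmpfs just for zsh's temporary files
TMPPREFIX=~/ztmp/zsh                            # zsh names =(...) files ${TMPPREFIX}XXXXXXX
ls -l =(echo test)                              # the temporary file now lives under ~/ztmp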

Related

Is it possible to make a .bat Bash hybrid?

In cmd, it is possible to use Linux commands with the ubuntu or bash commands, but they are very fickle. In batch, it is also possible to make a VBScript-batch hybrid, which got me thinking, is it possible to make a Bash-batch hybrid? Besides being a tongue-twister, I feel that Bash-batch scripts may be really useful.
What I have tried so far
So far I have tried using the bare bash and ubuntu commands on their own, since they switch the normal command prompt over to the Ubuntu/Bash shell, but even if you put commands after ubuntu or bash, they don't show or do anything.
After that, I tried the ubuntu -run command, but as I said earlier, it's really fickle and inconsistent about what works and what doesn't. It is less inconsistent when you pipe things into it, but it still usually doesn't work.
I looked here since it seemed like it would answer my question, and I tried it, but it didn't work, since it required another program (I think).
I also looked at this; I guess it failed miserably, but it's an interesting concept.
What I've gathered from all of my research is that when this comes up, most people mean a file that can be run either as a .bat batch file or as a .sh shell script, rather than my goal: a single file that runs both batch and Bash commands in the same instance.
What I want this for relates to my other question where I am trying to hash a string instead of a file in cmd, and you could do it with a Bash command, but I would still like to keep the file as a batch file.
Sure you can use Bash in batch, assuming it is available. Just use the command bash -c 'cmd', where cmd is the command that you want to run in Bash.
The following batch line pipes Hello to the cat -A command, which prints it including the invisible characters:
echo Hello | bash -c "cat -A"
Compare the output with the result of the version completely written in Bash:
bash -c "echo Hello | cat -A"
They will differ slightly: cmd's echo sends a Windows-style carriage return (and the space before the pipe), so the first version shows a trailing ^M that the all-Bash version does not.
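If you want to inspect that difference from the Bash side, cat -A shows a carriage return as ^M and marks each line end with $; a small sketch:
printf 'Hello \r\n' | cat -A    # roughly what cmd's echo sends:  Hello ^M$
printf 'Hello\n'    | cat -A    # what Bash's echo sends:         Hello$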

Why does `ls | cat` != `ls`?

Why is there a difference in output between these two commands:
ls | cat
ls
The first one seems to separate filenames with a newline.
This also applies to commands such as ls > outfile, and similar things.
I'm on Mac OSX, if that makes any difference.
The ls command checks the file type of its standard output. If it detects a “tty”, it emits multicolumn output. If it detects any other file type (like a disk file or a pipe), it emits single column output.
Tty is short for teletype. In the old days, you would use an actual teletype to interact with a Unix system. So if standard output was a teletype, ls would produce output optimized for humans. Otherwise, it would produce output optimized for programs.
With the advent of other ways to run an interactive shell, like telnet sessions and window systems, the Unix authors created the pseudo-teletype, or “pty”, a software-only “device” which pretends to be a teletype. One program (like the telnet server, the ssh server, or the terminal window) can use a pty to make another program (the shell) think it is talking to a teletype.
You can use the -C flag to force ls to emit multicolumn output. You can use the -1 (digit one) flag to force single column output.
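You can watch the same terminal test from the shell; this is a sketch of the check ls effectively performs (via isatty), together with the overriding flags:
[ -t 1 ] && echo "stdout is a terminal" || echo "stdout is a pipe or file"
ls -C | cat     # force multicolumn output even though stdout is a pipe
ls -1           # force one filename per line even when stdout is a terminal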

shell >& operator?

I have a question about what I think is an operator or argument passer but google hasn't turned up anything. The script this is contained in is
#!/bin/sh
ln mopac.in FOR005
mopac >& FOR006
mv FOR006 mopac.out
When I call "mopac mopac.in", the program runs fine. For my needs, though, mopac is called from within another program by way of this script, and it seems the input file is failing to be passed, so mopac is not running. I don't understand what ">&" is supposed to do, so I am having trouble troubleshooting.
Thanks.
>& FILE is csh-style shorthand, also accepted by bash, for > FILE 2>&1 - that is, redirect both standard output and standard error to FILE. (If /bin/sh is not bash, as is true on a number of Linux distributions, this line will produce an error.) Bash still understands the old form, but in a script that starts with #!/bin/sh the portable spelling > FILE 2>&1 is the safe one to use.
Your script there is not passing mopac.in at all, but appears to be assuming that mopac will read its input from FOR005, so uses ln to make it available there. Perhaps you should change the script to read mopac.in as a parameter, just as you're running it directly.
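A sketch of that change, assuming mopac accepts the input file as an argument (as it does when you run mopac mopac.in by hand), with the redirection written in the portable form:
#!/bin/sh
# Pass the input file explicitly and capture both stdout and stderr
mopac mopac.in > mopac.out 2>&1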
Explanation here: http://tldp.org/LDP/abs/html/io-redirection.html
>&j
# Redirects, by default, file descriptor 1 (stdout) to j.
# All stdout gets sent to file pointed to by j.

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash connects the command's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
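The output form, >(command), hands the program a writable file name instead; a small sketch (the file names are arbitrary):
./a | tee >(gzip > a_output.gz) > a_output.txt   # write the stream to a compressed and a plain copy at once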
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution using the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The command docker outputs its help to STD_ERR (i.e. file descriptor 2)
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1 which means run the command 1 before this one - the last command
!-2 means run the command two before this one
2>&1 means send file_descriptor 2 output (STD_ERR) to the same place as file_descriptor 1 output (STD_OUT)
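Written out without the history expansion, that line is equivalent to:
diff <(docker attach 2>&1) <(docker attach --help 2>&1)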
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of that file. (By contrast, <(command) process substitution substitutes a pipe-backed path, and command substitution, $(command), is replaced with the output of the command itself.)
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare) since bcomp launches the comparison and waits for it to complete. If you use bcompare, that launches comparison and immediately exits due to which the temporary files created to store the output of the commands disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following which is the difference between $(...) and =(...) :
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
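A small way to see the difference between the two forms in zsh (the echoed text is arbitrary):
ls -l <(echo hi)    # a pipe-backed path (a FIFO or /dev/fd entry)
ls -l =(echo hi)    # a regular temporary file, e.g. /tmp/zshXXXXXXX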

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers at: How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use -- fancy diffs, build scripts, etc, produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if they aren't. For example, on Linux ls --color=auto (which plain ls is aliased to in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
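Putting the two pieces together, a sketch with GNU tools that have such a force flag (the file names are arbitrary):
ls --color=always > listing.txt          # keep the escape codes even though stdout is a file
grep --color=always main *.c > hits.txt  # same idea for grep
less -R listing.txt                      # -R makes less interpret the color codes when viewing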
Inspired by the other answers, I started using script. I had to use -c to get it working, though. All the other answers, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output is not well live-observable, coming in big chunks
--quiet suppresses the script tool's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work on its own (less is all messed up, and on exit bash is all messed up too), because less is interactive; script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs drop colorization when they detect that their output is not a TTY (i.e. when you redirect it into another program or a file). You can tell some of those to force color output anyway, and tell the pager to interpret the color codes, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it is writing to a terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
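A sketch of that approach (unbuffer ships with the expect package; "mycommand" is a placeholder for whatever color-producing command you are redirecting):
unbuffer mycommand | tee output.txt   # "mycommand" is a placeholder; it now sees a pseudo-terminal, so it keeps its colors
less -R output.txt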
I use tee: pipe the command's output to tee filename and it'll keep the colour (assuming the command still emits colour codes when its output is piped). And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null
