How to pipe output of a command that expects a file argument - shell

I know you can pipe the output of one command to another - for example
ls -la | less
to see the output of ls -la inside less instead of in the terminal.
But if you use a command with a parameter that saves the output to a file
command --save-to-file file.txt
Then how do I pipe that output to another command?
This will not work:
command --save-to-file | less
because command will complain that --save-to-file was used without an argument (the filename).
If I remember correctly, there was something like a buffer or a temp file in RAM that you could put in place of file.txt, so you could do something like:
command --save-to-file ram-buffer.txt && cat ram-buffer.txt
without ever creating a file on disk. Is that right?
Why do I need that?
Some commands only print basic output to stdout; their more useful output cannot be printed at all, only saved to a file. The thing is, I am not interested in saving that more useful output to any file. I just want to print it in the terminal, or pipe it into a chain of other commands that do the filtering etc. and eventually print the processed result.
I would not like to be responsible for creating a tmp file and then deleting it, etc. Ideally I would just use some kind of magic file (or redirection) in place of file.txt that I could pipe to another command.
It is important to me not to write any of the output to disk if possible - just print it in the terminal or pipe it to other command(s).
At the moment I'm trying to capture the output of PHPUnit:
phpunit --log-junit log.xml
which is not a shell command but a PHP script that uses:
#!/usr/bin/env php
But I remember I once had an example with a Linux command whose output I wanted, where the format I needed was only available via a parameter like --save-to-file outputfile.txt.
Perhaps that is because piping/redirecting output designed to be saved to a file is not binary safe, so such output could get corrupted when piped/redirected - can that be the case?

Some programs have special handling for -. For example, you can tell tar to write to stdout so it can be used in a pipeline. This would create a tarball locally and untar it remotely without the tarball ever being written to disk:
tar -cf - *.txt | ssh user@host tar -C /dir/ -xf -
You can use /dev/stdout with nearly all programs, as long as they don't need a seekable file.
command --save-to-file /dev/stdout
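If your shell is bash, process substitution can achieve much the same thing: bash replaces >(cmd) with a /dev/fd path, so the program believes it is writing to a file while the data actually flows into cmd's stdin. A minimal sketch, reusing the question's placeholder command and option (with the same caveat that the target must not need a seekable file):
command --save-to-file >(grep -i error)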

As @Benjamin W. pointed out in the comments, you can save it to /dev/stdout, which is the standard output, and then pipe the output to whatever you want (e.g. less):
command --save-to-file /dev/stdout | less
Take care, because there may be additional output on stdout. In that case you can throw the normal stdout away and let the file be written to stderr, which is redirected into the pipe. Note that the order of the redirections matters: 2>&1 must come before >/dev/null, otherwise stderr ends up in /dev/null as well:
command --save-to-file /dev/stderr 2>&1 >/dev/null | less
If both stderr and stdout are already used, you may have to write your own driver for this or manipulate /proc/pid/mem or something like that.
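Applied to the PHPUnit case from the question, and assuming phpunit writes its progress output to stdout, the same trick would look like this (note that anything phpunit itself prints to stderr will be mixed into the piped output):
phpunit --log-junit /dev/stderr 2>&1 >/dev/null | less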

Related

Pipe command in Bash

The pipe command shows its results properly. When I try to use cat or > with it, it doesn't show the output.
I have tried running the command with different spacing but it didn't help:
sort spiderman.txt | cat > superman.txt
sort spiderman.txt | > superman.txt
In the first command above, cat is not showing its output (the cat command is not showing the contents of superman.txt); however, if I run the cat command separately afterwards, it shows the contents.
In the second command nothing happens to superman.txt.
Ideally it should have replaced the contents of superman.txt with the sorted contents of spiderman.txt, but nothing happens.
If you're trying simple output redirection you shouldn't pipe (|), just redirect (>):
sort spiderman.txt > superman.txt
If you want to show the content as well as redirect to a file - perhaps what you're looking for is tee?
sort spiderman.txt | tee superman.txt
Description:
The tee utility copies standard input to standard output, making a copy in zero or more files. The output is unbuffered.
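Since tee accepts zero or more file operands, it can keep several copies while still passing the data down the pipeline. A small sketch with hypothetical filenames:
sort spiderman.txt | tee sorted.txt sorted-backup.txt | less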
> superman.txt (with no command) is processed as follows:
1. superman.txt is opened for writing and truncated.
2. The output redirection is removed from the current command.
3. Since there is nothing left, the empty command is treated as having run and exited successfully. Nothing actually reads from the pipe or writes to superman.txt.
cat is necessary as a command which does read from standard input and writes to standard output.
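You can verify the truncating behaviour of the empty command yourself; a quick sketch (file contents hypothetical):
echo hello > superman.txt
sort spiderman.txt | > superman.txt
wc -c superman.txt    # prints 0: the file was truncated and nothing was written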
It sometimes seems a little odd to me that more shells don't provide a minimal built-in that simply copies input to output with no frills, to avoid otherwise having to fork and exec cat. (I should say "no" rather than "more", as I'm not aware of any shell that does. zsh might, if I bothered to search through the documentation to find it.)
(Some shells will optimize away an extra fork when processing a command line; bash is not one of them, though. It forks once to create a process for the write end of the pipe, then forks again to run cat. I believe ksh would simply exec cat directly instead of unnecessarily forking, in which case a built-in cat is less necessary.)
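For the curious, the closest pure-shell stand-in for cat is a read/printf loop. A sketch that works in bash and ksh, with the caveat that it is line-oriented, so it is not byte-exact for binary data or input lacking a final newline:
while IFS= read -r line; do
  printf '%s\n' "$line"    # copy each input line to stdout unmodified
done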

How to read a file when I am redirecting the script to a text file in shell

I am executing a script and redirecting its output to a text file with the command sample.sh -base BUG2 1> output.txt 2>&1.
Now, in the script, I want to read the contents of that text file to grep for some words. How can I read the text file while the script is running?
If I understand correctly, you want to execute a script and send its output both to a file and to standard output: tee is what you are looking for.
This command copies its standard input to a file and to its standard output (http://ss64.com/bash/tee.html):
sample.sh -base BUG2 2>&1 | tee output.txt | more_script
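If you also want to watch and filter the output while the script is still running, one sketch (assuming "error" is the word you are grepping for) is to follow the growing file from a second terminal:
# terminal 1: run the script, keeping a copy of all output in output.txt
sample.sh -base BUG2 2>&1 | tee output.txt
# terminal 2: follow the file as it grows and filter it
tail -f output.txt | grep -i error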
I'm not sure I understand your question correctly, but from what I can guess and assume, it appears that you're trying to read from and write to the same file.
You will be able to do so, but you won't be able to rewind, as Bash cannot seek.
Read more on redirections.
If you're trying to do something like this: cat file | sed s/foo/bar/ > file, i.e. reading from a file and writing to it in the same pipeline -
that would be impossible.
You cannot read from a file and write to it in the same pipeline. Depending on what your pipeline does, the file may be clobbered (to 0 bytes, or possibly to a number of bytes equal to the size of your operating system's pipeline buffer), or it may grow until it fills the available disk space, or reaches your operating system's file size limitation, or your quota, etc.
( Quoted from Bash Pitfalls 13 )
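The usual workaround is to write to a second file and rename it over the original once the pipeline has finished, for example:
sed 's/foo/bar/' file > file.tmp && mv file.tmp file
# or, if moreutils is installed, sponge soaks up all input before writing:
sed 's/foo/bar/' file | sponge file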

How to get error output and store it in a variable or file

I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as the question asks), as in bash and other sh derivatives, you can capture all of cp's stderr output using redirection and grab it in a variable (using $(), which is better than backticks in any vaguely recent shell):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally print anything on success, though this captures both stdout and stderr; to capture just stderr this way, add 1>/dev/null as well. The other solutions that redirect to a file could use cat or various other commands to output/process the logfile.
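A sketch of that stderr-only capture; note that the order of the redirections matters, as 2>&1 must come before 1>/dev/null:
err=$(cp -p source.file destination 2>&1 1>/dev/null)    # $err now holds only the stderr output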
Reasons why I don't suggest writing the output to temporary files:
Redirecting to a file and then reading it back in (via the read command, or more inefficiently via $(cat file)), particularly for just a single line, is slower and less efficient; though it's not so bad if you want to append to the file across multiple operations before displaying the errors. You'll also leave the temporary file around unless you always clean it up - don't forget that people interrupt (i.e. Ctrl-C) or kill scripts.
Temporary files can also be a problem if the script is run multiple times at once (e.g. via cron, if filesystem or other delays cause massive overruns, or simply from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file's data is processed again or its contents could be rewritten before display by something else to confuse/phish the user or break the script. Don't get into the habit of doing it casually; read up on temporary files (e.g. mktemp) first via other questions here/Google.
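If you do need a temporary file after all, a commonly used safe pattern is mktemp plus a trap, so the file is removed even when the script is interrupted:
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT INT TERM    # clean up on exit or interruption
cp -p source.file destination 2> "$tmpfile"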
You can redirect STDERR to a file by doing:
command 2> /path/to/file.txt

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash connects the program's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
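You can see the substituted file name for yourself by echoing it; the exact fd number varies:
echo <(true)    # prints something like /dev/fd/63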
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs), something like this:
mkfifo a_fifo b_fifo       # create the two named pipes
./a > a_fifo &             # the writers run in the background...
./b > b_fifo &
diff a_fifo b_fifo         # ...while diff reads from both pipes
rm a_fifo b_fifo           # remove the FIFO nodes afterwards
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution in the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The docker command prints its help to stderr (i.e. file descriptor 2).
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means "run the command one before this one" - i.e. the last command.
!-2 means "run the command two before this one".
2>&1 sends file descriptor 2's output (stderr) to the same place as file descriptor 1's output (stdout).
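Without relying on history expansion, the equivalent explicit command is:
diff <(docker attach 2>&1) <(docker attach --help 2>&1)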
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. (This is unlike command substitution, where $(command) is replaced with the output of the command.)
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare) since bcomp launches the comparison and waits for it to complete. If you use bcompare, that launches comparison and immediately exits due to which the temporary files created to store the output of the commands disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following, which explains the difference between <(...) and =(...):
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
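A quick way to see the difference in zsh is to list what each form produces; =(...) yields a regular temporary file, while <(...) yields a pipe:
ls -l =(echo hi)    # a regular file under $TMPDIR
ls -l <(echo hi)    # a pipe, e.g. /proc/self/fd/12 on Linux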

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dumping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them (by default it starts from the last ten lines; tail -n +1 -f commandfile starts from the beginning).
