Bash: store result of a command in a variable and print it to the console while using here documents

I'm running rman commands from Bash scripts. I pass my commands to rman using here documents. I want to capture the output but also print it to the console at the same time (in real time).
I found this solution, but I don't know how to make it work with here-docs.
VAR=$(ls | tee /dev/tty)
What I currently run is:
output=$(rman <<RMAN
$rman_script
RMAN
)
Do you know how in this RMAN example I could also print stdout to the console apart from storing it in the output variable? Any help is appreciated.
Cheers.

The here document is no different from other redirections, although the syntax is of course slightly different.
var=$(rman <<... | tee /dev/stderr
$rman_script
...
)
(The delimiter after << is left unquoted here so that $rman_script is still expanded inside the here document; a quoted delimiter like <<\... would pass the text literally.)
If this is a representative snippet of your code, you might as well use a here string:
var=$(rman <<<"$rman_script" | tee /dev/stderr)
By the by, if you genuinely need the script multiple times (why else keep it in a variable?), maybe refactor it into a function:
rman_script () {
rman <<\____HERE
Actual script
Probably multiple lines
____HERE
}
var=$(rman_script | tee /dev/stderr)
You'll notice that I use /dev/stderr instead of /dev/tty. Having a script require, and muck with, your tty should probably be avoided unless your script is really short and simple and only makes sense to use interactively (password manipulation comes to mind as one scenario where it's sometimes hard to avoid).
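A quick way to see the difference for yourself (a minimal sketch, with ls standing in for rman):
var=$(ls | tee /dev/stderr)   # the copy goes wherever stderr currently points
var=$(ls | tee /dev/tty)      # the copy insists on a terminal; fails if there is none
Run either under cron or with stderr redirected and the /dev/stderr version keeps behaving sensibly, while the /dev/tty version needs an actual terminal attached.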

output=$(rman <<RMAN)
$rman_script
RMAN
Note that a HERE-document looks syntactically like an input redirection, except that you have << instead of <. The input will be taken from the subsequent lines.
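If the placement of the closing parenthesis looks suspicious, here is a sketch you can run, with cat standing in for rman; bash collects the here-document body from the lines following the line that contains <<, even though the ) comes first:
output=$(cat <<EOF)
hello from the here document
EOF
echo "$output"
This prints the here-document body. It works in bash, although keeping the ) on its own line after the delimiter, as in the question, is arguably easier to read.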

Related

Capture stdout for a long time in bash

I'm using a script which calls another, like this:
# stuff...
OUT="$(./scriptB)"
# do stuff with the variable OUT
Basically, the scriptB script displays text over time: it displays a line, 2s later another, 3s later another, and so on.
With the snippet I use, I only get the first output of my command; I miss a lot.
How can I get the whole output, by capturing stdout for a given time? Something like:
begin capture
./scriptB
stop capture
I don't mind if the output is not shown on screen.
Thanks.
If I understand your question, then I believe you can use the tee command, like
./scriptB | tee $HOME/scriptB.log
It will display the stdout from scriptB and write stdout to the log file at the same time.
Some of your output seems to be coming on the STDERR stream. So we have to redirect that as needed. As in my comment, you can do
{ ./scriptB ; } > /tmp/scriptB.log 2>&1
Which can almost certainly be reduced to
./scriptB > /tmp/scriptB.log 2>&1
And in newer versions of bash, can further be reduced to
./scriptB >& /tmp/scriptB.log
And finally, as your original question involved storing the output in a variable, you can do
OUT=$(./scriptB 2>&1 | tee /tmp/scriptB.log)
(The variant OUT=$(./scriptB > /tmp/scriptB.log 2>&1) would leave OUT empty, since all of the output goes to the log file; piping through tee fills the variable and writes the log at the same time.)
The notation 2>&1 says: take file descriptor 2 of this process (STDERR) and tie it (&) to file descriptor 1 of the process (STDOUT).
The alternate notation shown (... >& file) is shorthand for > file 2>&1.
Personally, I'd recommend using the 2>&1 syntax, as this is understood by all Bourne-derived shells (not [t]csh).
As an aside, every process gets 3 file descriptors by default when it is created: 0=STDIN, 1=STDOUT, 2=STDERR. Manipulation of those streams is usually as simple as illustrated here. More advanced (rare) manipulations are possible. Post a separate question if you need to know more.
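As a small illustration of the descriptor plumbing (a sketch; redirections take effect left to right, which is why the order matters):
./scriptB > /tmp/scriptB.log 2>&1   # 1 points at the file first, then 2 is tied to 1: both land in the file
./scriptB 2>&1 > /tmp/scriptB.log   # 2 is tied to where 1 pointed (the console), then 1 moves: stderr stays on screen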
IHTH

Pipe with `tee` in a `for` loop

This is probably a newbie's escaping problem. I'm trying to run a command in a for loop like this
$ for SET in `ls ../../mybook/WS/wsc_production/`; do ~/sandbox/scripts/ftype-switch/typesort.pl /media/mybook/WS/wsc_production/$SET ./wsc_sorter/$SET | tee -a sorter.log; done;
but I end up with sorter.log being empty. (I'm sure there is some output.) If I escape the pipe symbol (\|), I end up with no sorter.log at all.
What am I doing wrong?
$ bash --version
GNU bash, version 4.1.5(1)-release (i486-pc-linux-gnu)
Edit: Oops, /media/mybook/ fell asleep, so there actually was no output. The code was correct in the first place. Thanks to all for comments, though.
Glenn said it well. I would like to offer a different angle: you can move the tee command outside of the for loop. The advantage of this approach is that tee is invoked only once:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for SET in "$dir2"/*; do
$dir1/typesort.pl "$SET" ./wsc_sorter/"${SET##*/}" 2>&1
done | tee -a sorter.log
(Since the glob yields full paths, ${SET##*/} strips the directory part to recover the bare set name.)
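One caveat with moving tee outside the loop: everything before the | now runs in a subshell, so variables you assign inside the loop will not survive past the done. A quick sketch:
count=0
for SET in "$dir2"/*; do
count=$((count + 1))
done | tee -a sorter.log
echo $count   # still 0 here: the loop body ran in a subshell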
You're using tee, so if there is output, you'd see it on your terminal. What do you see?
If you see output, it's probably stderr you're seeing, so you might want to redirect it:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for SET in "$dir2"/*; do
$dir1/typesort.pl "$SET" ./wsc_sorter/"${SET##*/}" 2>&1 | tee -a sorter.log
done
My deepest apologies, the problem was somewhere else and my script actually did not output anything at all. Now it works.
Two reasons why I got the illusion that the problem was in escaping:
of course, lack of confidence in bash scripting, which is an effect of lack of knowledge and experience
and also, lack of attention: it did not occur to me that the USB disk had fallen asleep, so when I tried the loop there actually was no output
Well, that's some stumbling on my way to knowledge... :)

why does redirect (<) not create a subshell

I wrote the following code
var=0
cat $file | while read line; do
var=$line
done
echo $var
Now as I understand it the pipe (|) will cause a subshell to be created, and therefore the variable var will still have the value it was given on line 1 when it reaches the last line.
However this will solve it:
var=0
while read line; do
var=$line
done < $file
echo $var
My question is why does the redirect not cause a subshell to be created, or if you like why does pipe cause one to be created?
Thanks
The cat command is an external command, which means it needs its own process and has its own STDIN and STDOUT. You're basically taking the STDOUT produced by the cat command and redirecting it into the process of the while loop.
When you use redirection, you're not using a separate process. Instead, you're merely redirecting the STDIN of the while loop from the console to the lines of the file.
Needless to say, the second way is more efficient. In the old Usenet days, before all of you little whippersnappers got ahold of our Internet (Hey you kids! Get off of my Internet!) and destroyed it with your fancy graphics and all them web pages, some people used to give out the Useless Use of Cat award to people who contributed to the comp.unix.shell group with a spurious cat command, because the use of cat is almost never necessary and is usually more inefficient.
If you're using a cat in your code, you probably don't need it. The cat command comes from concatenate and is supposed to be used only to concatenate files together. For example, back when we used SneakerNet on 800K floppies, we would have to split up long files with the Unix split command and then use cat to merge them back together.
A pipe is there to hook the stdout of one program to the stdin of another one. Two processes, possibly two shells. When you do redirection (> and <), all you're doing is remapping stdin (or stdout) to a file. Reading/writing a file can be done without another process or shell.
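As an aside, in bash 4.2 and later there is a third option (a sketch, assuming $file is the file from the question): the lastpipe option runs the last command of a pipeline in the current shell, so the piped version keeps its variable too:
shopt -s lastpipe   # bash >= 4.2; only takes effect when job control is off, the default in scripts
var=0
cat $file | while read line; do var=$line; done
echo $var   # now prints the last line of the file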

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash connects the command's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
... but John Kugelman's solution is much cleaner.
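If you do go the FIFO route, note that the named pipes persist as filesystem entries after diff finishes, so clean them up afterwards:
rm a_fifo b_fifo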
For anyone curious, this is how you perform process substitution using the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The command docker prints its help to STDERR (i.e. file descriptor 2)
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means rerun the command one before this one, i.e. the last command
!-2 means rerun the command two before this one
2>&1 means send file descriptor 2 output (STDERR) to the same place as file descriptor 1 output (STDOUT)
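Spelled out without the history expansion, that is (assuming, as above, that docker prints its usage to stderr):
diff <(docker attach 2>&1) <(docker attach --help 2>&1)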
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. (Plain command substitution, $(command), is by contrast replaced with the output of the command.)
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare) since bcomp launches the comparison and waits for it to complete. If you use bcompare, that launches comparison and immediately exits due to which the temporary files created to store the output of the commands disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following which is the difference between $(...) and =(...) :
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read in the next command dropping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
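Spelled out, that suggestion would look something like this (commandfile and command being the placeholders from above); -n +1 makes tail start from the first line instead of the last ten, and -f keeps the pipe open so the program never sees EOF:
tail -n +1 -f commandfile | command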
