This is probably a newbie's escaping problem. I'm trying to run a command in a for loop like this:
$ for SET in `ls ../../mybook/WS/wsc_production/`; do ~/sandbox/scripts/ftype-switch/typesort.pl /media/mybook/WS/wsc_production/$SET ./wsc_sorter/$SET | tee -a sorter.log; done;
but I end up with sorter.log being empty. (I'm sure there is some output.) If I escape the pipe symbol (\|), I end up with no sorter.log at all.
What am I doing wrong?
$ bash --version
GNU bash, version 4.1.5(1)-release (i486-pc-linux-gnu)
Edit: Oops, /media/mybook/ fell asleep, so there actually was no output. The code was correct in the first place. Thanks to all for comments, though.
Glenn said it well. I would like to offer a different angle: you can move the tee command outside of the for loop. The advantage of this approach is that tee is invoked only once:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for path in "$dir2"/*; do
  SET=${path##*/}  # the glob yields full paths; keep just the filename
  "$dir1"/typesort.pl "$dir2/$SET" "./wsc_sorter/$SET" 2>&1
done | tee -a sorter.log
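As a quick self-contained check of the pattern (using echo as a stand-in for the typesort.pl script, which isn't available here), every iteration's output flows through the single tee at the end:

```shell
# echo stands in for the real typesort.pl invocation
for SET in a b c; do
    echo "processing $SET"
done | tee -a sorter.log    # one tee for the whole loop
```

After this runs, sorter.log contains the three processing lines, and they were also printed to the terminal.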
You're using tee, so if there is output, you'd see it on your terminal. What do you see?
If you do see output, it's probably stderr you're seeing, so you might want to redirect it:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for path in "$dir2"/*; do
  SET=${path##*/}  # the glob yields full paths; keep just the filename
  "$dir1"/typesort.pl "$dir2/$SET" "./wsc_sorter/$SET" 2>&1 | tee -a sorter.log
done
My deepest apologies: the problem was somewhere else, and my script actually did not output anything at all. Now it works.
Two reasons why I got the illusion that the problem was in the escaping:
of course, a lack of confidence in bash scripting, which is an effect of a lack of knowledge and experience;
and also a lack of attention: it did not occur to me that the disk on USB had fallen asleep, so when I tried the loop there actually was no output.
Well, that's just some stumbling on my way to knowledge... :)
Related
I'm running rman commands from Bash scripts. I pass my commands to rman using here documents. I want to capture the output but also print it to the console at the same time (in real time).
I found this solution, but I don't know how to make it work with here-docs.
VAR=$(ls | tee /dev/tty)
What I currently run is:
output=$(rman <<RMAN
$rman_script
RMAN
)
Do you know how in this RMAN example I could also print stdout to the console apart from storing it in the output variable? Any help is appreciated.
Cheers.
The here document is no different from other redirections, although the syntax is of course slightly different.
var=$(rman <<... | tee /dev/stderr
$rman_script
...
)
If this is a representative snippet of your code, you might as well
var=$(rman <<<"$rman_script" | tee /dev/stderr)
By the by, if you genuinely need the script multiple times (why else keep it in a variable?), maybe refactor it into a function:
rman_script () {
rman <<\____HERE
Actual script
Probably multiple lines
____HERE
}
var=$(rman_script | tee /dev/stderr)
You'll notice that I use /dev/stderr instead of /dev/tty. Having a script require, and muck with, your tty should probably be avoided unless your script is really short and simple and only makes sense to use interactively (password manipulation comes to mind as one scenario where it's sometimes hard to avoid).
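To sanity-check the pattern without rman itself (printf stands in for it below, and the script text is made up), the command substitution captures stdout while tee mirrors a copy to stderr:

```shell
rman_script='report schema;'   # hypothetical script contents
# tee writes one copy to stdout (captured by the substitution)
# and one copy to stderr (visible on the terminal)
var=$(printf '%s\n' "$rman_script" | tee /dev/stderr)
printf 'captured: %s\n' "$var"
```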
output=$(rman <<RMAN
$rman_script
RMAN
)
Note that a here-document looks syntactically like an input redirection, only you have << instead of <. The input will be taken from the subsequent lines.
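To see the parallel concretely (with tr standing in for rman), these two forms feed the command the same input; the here-document merely supplies it inline instead of from a file:

```shell
# input supplied inline by a here-document
tr a-z A-Z <<EOF
hello
EOF

# the same thing via an ordinary input redirection from a file
printf 'hello\n' > input.txt
tr a-z A-Z < input.txt
```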
I was interested in keeping the header of the input in the grep output just like in the question Include header in the 'grep' result.
Although a working solution (using sed) is presented right in the question, the answer with most upvotes attracted my attention. It allows utilizing grep --color, for example. It suggests piping the input to a command group with the commands head and grep, e.g.:
ps -ef | { head -1; grep python; }
However, as written in the linked answer and a comment, this does not work with all commands on the left side of the pipe. It works well with ps or lsof but does not work for df, ls and others. When using them, only the output of the first command in the group is printed:
$ df | { head -1; grep '/'; }
Filesystem 1K-blocks Used Available Use% Mounted on
$
Could anyone explain why piping to a command group only works for some commands? Is there any way to make it work universally?
The answer to your question is in comments of the linked answer.
Apparently ps -e is sending the header line first, then not sending anything, then buffering the rest of the output. head must think the stream is closed after the first line, so it exits, leaving grep to see the rest.
It only works by accident.
Is there any way to make it work universally?
Everything is possible, but you may need to recode and recompile everything else. So... not possible, sorry.
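That said, one workaround does behave predictably: replace head with the shell builtin read. Because read consumes its input one byte at a time, it takes exactly the first line from the pipe and leaves everything else for grep, regardless of how the left-hand command buffers its output:

```shell
# read takes exactly one line; the header is reprinted, the rest goes to grep
df | { IFS= read -r header; printf '%s\n' "$header"; grep '/'; }
```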
I have a piece of third-party software that accepts command line arguments. I want to pipe the output to a file. I have found that, for some inexplicable reason, the code hangs if I try:
./run_third_part.py &> log
but it works if
./run_third_part.py
I believe that redirecting the output is messing with the process of reading command line arguments, although other ideas are welcome. How can I isolate the program from the redirection? (I was thinking about putting some sort of parentheses.)
Probably the script is waiting for input on an interactive prompt. The easiest way around this is usually to give it some input:
./run_third_part.py < /dev/null &> log
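To reproduce both the hang and the fix locally, here is a made-up stand-in for run_third_part.py: a script that blocks on a prompt. Redirecting /dev/null into it makes the read see end-of-file immediately instead of waiting forever:

```shell
cat > prompter.sh <<'EOF'
#!/bin/sh
printf 'continue? '   # a prompt you never see once output is redirected
read -r answer        # without stdin, this would block forever
echo "got: $answer"
EOF
chmod +x prompter.sh

./prompter.sh < /dev/null &> log   # read sees EOF at once; no hang
```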
Can you try running the script in a subshell?
bash$ ( ./run_third_part.py ) &> log
Note that the parentheses create a subshell; backticks would instead substitute the script's output back into the command line, which is not what you want here.
This is a pointless task, I know. But I'm just messing around and trying to familiarize myself.
I thought this might work, but no:
root#debian: Jibberish 2> file.txt && file.txt < /dev/tty0
I thought this might generate an error message which would be sent to file.txt, which would in turn be fed back as input to the shell (/dev/tty0).
Anyway, if anyone knows how to make an infinite loop using just redirects and pipelines I'd be interested to know.
Thanks
The short answer is:
root#debian: yes
y
y
...
^C
root#debian:
;-)
This can be done in several ways; one is:
cat < /dev/zero > /dev/null
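And for an actual loop built from nothing but a pipeline and redirections, a named pipe (fifo) lets tee feed its own output back to its input. This is a sketch; it echoes seed forever until you interrupt it with Ctrl-C:

```shell
mkfifo p
# echo seeds one line; tee copies every line both to stdout and back
# into the fifo, where cat reads it again -- an endless cycle
{ echo seed; cat p; } | tee p
```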
In a shell, I can run the following commands without a problem:
ls -al
!ls
The second invocation of ls also lists files with the -al flag. However, when I put the above commands into a bash script, a complaint is thrown:
!ls: command not found
How can I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just like in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
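A minimal script showing both options in action; it should print first twice, with the expanded command echoed to standard error:

```shell
#!/bin/bash
set -o history      # start recording commands in the history list
set -o histexpand   # enable !-style history expansion
echo first
!echo               # expands to the previous echo command
```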
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history; set -o histexpand method will not work in bash 4.x. It's not meant to be used there anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The spanking new -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown huge, and some of those lines are script one-liners which I really would not want to retype every time. But sometimes I also want to alter those lines a little, because I may want to pass a third parameter where I had only needed two before.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So line 1234 from the history is exactly the line we want. Sure, we could take the mouse, move there, and chuck the whole line into the primary buffer. But we're on *NIX, so why not make our lives a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
#!/bin/bash -i
# copy history line $1 into the primary X selection via xsel
[[ $1 == "" ]] || history | grep "^\s*$1" |
  awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel -i
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack that prevents the line from being executed immediately when pasted. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed in a bash shell. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator ("pseudo" meaning that it can be made to work with tools like expect or other keystroke-replaying automation that try to play-act a human, not implying that any particular operator might be sub-human or anything).