If I run a command that takes a long time or produces a lot of output, I often want to process that output in some way, but I don't want to re-run the command. For example, I might run
$ command
$ command | grep foo
$ command | grep foo | sort | uniq
But if command takes a long time, re-running it like this is tedious. Is there a way to have bash (or any other shell) save the output of the last command, similar to the Python REPL's _? I am aware of tee, but I would rather have my shell do this automatically without having to use tee all the time.
I am also aware that I could store the output of a command myself, but again, I would like my shell to do this automatically, so that I don't have to think about saving anything and can just use my shell normally, processing the previous output when I want to.
You can store the output in a variable:
output=$(command)
echo "$output" | grep foo
echo "$output" | grep foo | sort | uniq
(Quote the expansion; an unquoted echo $output would collapse the whitespace and newlines in the saved output.)
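If you want the shell to handle this more automatically, a small wrapper around tee comes close. This is only a sketch; the function name keep and the file /tmp/last_output are made-up names for illustration:
keep() {
    # run the command, show its output, and save a copy for later filtering
    "$@" | tee /tmp/last_output
}
Then a long-running command is only run once:
$ keep command
$ grep foo /tmp/last_output
$ grep foo /tmp/last_output | sort | uniq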
Related
grep "String" war_err.txt > list_of_wlan_common_war.txt | cat
This command works when run on the command line, but to include it in a script I have to add | cat. Can someone explain what | cat does here?
The | (pipe) symbol connects the output of one process to the input of another. So | cat should print the output of the previous command, because cat simply reads its input and prints it.
However, in your case it isn't doing anything: you are redirecting the standard output of grep to a text file, so nothing is left to travel down the pipe and cat receives no input.
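If the intent was to both save the matches to the file and still see them on the terminal, tee does exactly that (a sketch, assuming that was the goal):
grep "String" war_err.txt | tee list_of_wlan_common_war.txt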
I stumbled upon a shell instruction which looks odd:
ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
I suspect that the returned value represents an error. Could anyone confirm this, or explain in which circumstances this instruction would be used?
The first grep only allows through lines that contain qmail (preceded by any character and followed by a dash, but that is largely immaterial). The second grep strips out lines that contain mail, which means every line passed by the first grep is deleted by the second. There's nothing left for the third one to process, so the file t will always be empty. The value for $? should be 1, failure, since the third grep failed to find any lines that matched its pattern (because it got no lines to process).
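You can verify this with any input line that would survive the first filter; an illustrative transcript:
$ printf 'a.qmail-x\n' | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
1
$ wc -c t
0 t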
It is a mistake.
It is hard to know how to fix it without knowing what it is trying to do.
The bash shell (and most other shells) let users use the output of one command as the input of another. This is accomplished with the | operator, called a pipe. So the output of ls -a is fed to grep ".qmail-", and so on. The > operator sends the output of the command to a file, in this case t. So ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t lists the contents of a directory and passes the output through successive filters before finally saving it to the file t.
The semicolon signals the end of a command and allows multiple bash commands to be entered on a single line.
echo $? prints out the return value of the last executed command; for a pipeline such as ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t, that is the exit status of its final stage (the last grep). By convention, any value besides 0 indicates some sort of error occurred. The Linux Documentation Project gives a list of some common exit codes.
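A quick way to see both conventions at once, namely that a pipeline reports the status of its last stage and that grep exits 0 on a match and 1 on no match (illustrative transcript):
$ echo hello | grep hello > /dev/null ; echo $?
0
$ echo hello | grep nomatch > /dev/null ; echo $?
1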
I am trying to execute a couple of complicated grep commands via a shell script; they work fine when run manually in the terminal. I can't for the life of me figure out why this doesn't work.
The goal of the first command is to get the process IDs of any processes whose parent is the process matching myPattern. The second gets the process ID of the myPattern process itself.
Currently my shell script returns nothing for the first command and ignores the grep -v 'grep' part in the second.
#!/bin/sh
ps -ef | grep "$(ps -ef | grep 'myPattern' | grep -v grep | awk '{print $2}')" | grep -v grep | grep -v myPattern | awk '{print $2}'
ps -ef | grep 'myPattern' | grep -v 'grep' | awk '{print $2}'
This works fine when run manually in the terminal. Any ideas where I have stuffed this up?
Your first command is vague; I don't think it would reliably do what you describe, and it does not guard against picking up the PID of the grep call itself. The second one works for me. The first query depends heavily on the system you are using; it's easier to use pstree to show the whole process tree under a PID, like:
pstree -p 1782 | sed 's/-/\n/g' | sed -n -e 's/.*(\([0-9]\+\)).*/\1/p'
You need to limit pid to a single value; if you have several, you have to loop through them. If you don't have pstree, you can craft a loop around ps instead, as sketched below. Note that even if your current commands worked, they would only catch one level of parent/child relationships, whereas pstree handles any depth.
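Without pstree, a rough one-level equivalent can be assembled from ps and awk. A sketch, assuming the usual ps -ef layout where the second column is the PID and the third the PPID:
# print the direct children of $pid (one level only)
ps -ef | awk -v ppid="$pid" '$3 == ppid { print $2 }'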
I should also point out that a process can escape its original parent by forking, in which case it is re-parented and this kind of tracking breaks down.
In any case, without exact details of what you are trying to achieve, why, and on what platform, it is hard to give a great answer. Also, these utilities, albeit present virtually everywhere, are not as portable as one would wish.
One more note: /bin/sh is often not your interactive shell. On many Linux systems the user's default shell is bash while /bin/sh is dash or some other variant. So if you see differences between what happens in the console and in the script, it may be down to the actual shell being used.
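You can check what /bin/sh really is on your system; on a Debian/Ubuntu box, for instance, you would typically see:
$ readlink /bin/sh
dash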
Based on user feedback it would be much easier to have something like this in the java process launching script:
java <your params here> &
echo $! > /var/run/myprog.pid
Then the kill script would look like cat /var/run/myprog.pid | xargs kill. There are shorter commands, but I think this is more portable. Post actual code if you want more specific advice.
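For completeness, a slightly more defensive stop script built on the same assumed pid file might look like:
#!/bin/sh
# do nothing if the pid file was never written
[ -f /var/run/myprog.pid ] || exit 1
kill "$(cat /var/run/myprog.pid)"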
I'm looking for a way to use the output of a command (say command1) as an argument for another command (say command2).
I encountered this problem when trying to grep the output of the who command using a pattern produced by another set of commands (actually tty piped to sed).
Context:
If tty displays:
/dev/pts/5
And who displays:
root pts/4 2012-01-15 16:01 (xxxx)
root pts/5 2012-02-25 10:02 (yyyy)
root pts/2 2012-03-09 12:03 (zzzz)
Goal:
I want only the line(s) regarding "pts/5"
So I piped tty to sed as follows:
$ tty | sed 's/\/dev\///'
pts/5
Test:
The attempted following command doesn't work:
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
Possible solution:
I've found out that the following works just fine:
$ eval "who | grep $(echo $(tty) | sed 's/\/dev\///')"
But I'm sure the use of eval could be avoided.
As a final side note: I've noticed that the -m argument to who gives me exactly what I want (only the line of who that relates to the current user). But I'm still curious how I could make this combination of pipes and command nesting work...
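For reference, with the who output above, the -m shortcut would give something like:
$ who -m
root     pts/5        2012-02-25 10:02 (yyyy)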
One usually uses xargs to turn the output of one command into arguments for another command. For example:
$ cat command1
#!/bin/sh
echo "one"
echo "two"
echo "three"
$ cat command2
#!/bin/sh
printf '1 = %s\n' "$1"
$ ./command1 | xargs -n 1 ./command2
1 = one
1 = two
1 = three
$
But ... while that was your question, it's not what you really want to know.
If you don't mind storing your tty in a variable, you can use bash variable mangling to do your substitution:
$ tty=`tty`; who | grep -w "${tty#/dev/}"
ghoti pts/198 Mar 8 17:01 (:0.0)
(You want the -w because if you're on pts/6 you shouldn't see pts/60's logins.)
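To see the difference -w makes, compare (illustrative transcript):
$ printf 'pts/6\npts/60\n' | grep "pts/6"
pts/6
pts/60
$ printf 'pts/6\npts/60\n' | grep -w "pts/6"
pts/6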
You're limited to doing this in a variable, because if you try to put the tty command into a pipeline, its stdin is no longer attached to a terminal:
$ true | echo `tty | sed 's:/dev/::'`
not a tty
$
Note that nothing in this answer so far is specific to bash. Since you're using bash, another way around this problem is to use process substitution. For example, while this does not work:
$ who | grep "$(tty | sed 's:/dev/::')"
This does:
$ grep $(tty | sed 's:/dev/::') < <(who)
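Another way to keep the original who | grep shape is to point tty explicitly at the controlling terminal: tty reports the terminal attached to its stdin, and /dev/tty always refers to your controlling terminal. A sketch:
$ who | grep "$(tty < /dev/tty | sed 's:/dev/::')"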
You can do this without resorting to sed with the help of Bash variable mangling, although as #ruakh points out this won't work in the single line version (without the semicolon separating the commands). I'm leaving this first approach up because I think it's interesting that it doesn't work in a single line:
TTY=$(tty); who | grep "${TTY#/dev/}"
This first puts the output of tty into a variable, then strips the leading /dev/ when grep uses it. But without the semicolon, TTY is not yet set by the time bash performs the variable expansion for grep.
Here's a version that does work because it spawns a subshell with the already modified environment (that has TTY):
TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
The result is left in $WHOLINE.
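With the example data from the question, usage would look something like this (output illustrative):
$ TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
$ echo "$WHOLINE"
root pts/5 2012-02-25 10:02 (yyyy)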
#Eduardo's answer is correct (and as I was writing this, a couple of other good answers have appeared), but I'd like to explain why the original command is failing. As usual, set -x is very useful to see what's actually happening:
$ set -x
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
+ who
++ sed 's/\/dev\///'
+++ tty
++ echo not a tty
+ grep not a tty
grep: a: No such file or directory
grep: tty: No such file or directory
It's not completely explicit in the above, but what's happening is that tty is outputting "not a tty". This is because it's part of the pipeline being fed the output of who, so its stdin is indeed not a tty. This is the real reason everyone else's answers work: they get tty out of the pipeline, so it can see your actual terminal.
BTW, your proposed command is basically correct (except for the pipeline issue), just unnecessarily complex: don't use echo $(tty); it's essentially the same as just tty.
You can do it like this:
tid=$(tty | sed 's#/dev/##') && who | grep "$tid"
As I build *nix piped commands, I find that I want to see the output of one stage to verify correctness before building the next, but I don't want to re-run each stage. Does anyone know of a program that would help with that? It would automatically keep the output of the last stage for use in any new stages. I usually do this by sending the result of each command to a temporary file (i.e. using tee, or running each command one at a time), but it would be nice for a program to handle this.
I envision something like a tabbed interface where each tab is labeled with a pipe command, and selecting a tab shows the output (at least a hundred lines) of applying that command to the previous result.
Use 'tee' to copy the intermediate results out to some file as well as pass them on to the next stage of the pipe, like so:
cat /var/log/syslog | tee /tmp/syslog.out | grep something | tee /tmp/grep.out | sed 's/foo/bar/g' | tee /tmp/sed.out | cat >>/var/log/syslog.cleaned
You can also use named pipes (FIFOs) if you need bidirectional communication (e.g. with netcat):
mknod backpipe p
nc -l -p 80 0<backpipe | tee -a inflow | nc localhost 81 | tee -a outflow 1>backpipe
There's also the "pv" command - available in debian / ubuntu repostitories which shows you the throughput of your pipes.
An example from the man page :
Transferring a file from another process and passing the expected size to pv:
cat file | pv -s 12345 | nc -w 1 somewhere.com 3000
tee(1) is your friend. It sends its input to both the specified file and stdout.
Stick it between your pipes. For example:
ls | tee /tmp/out1 | sort | tee /tmp/out2 | sed 's/foo/bar/g'