We often use @echo "do..." in a Makefile so that only do... is printed.
But can anyone tell me what this line means?
COUNT=$(shell ls | wc -l )
Then
@COUNT=$(shell ls | grep abc | wc -l)
What does the second one mean?
It disables printing the command line being executed. Any output from the command itself still appears. See this previous question or see this Makefile reference.
It hides the command line itself while the rule is executing. Normally each command in a rule is printed to the console as it is executed; the @ prefix suppresses that echo.
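For example, a minimal Makefile sketch (the count target is made up for illustration; recipe lines must start with a tab):
# hypothetical target for illustration
count:
	COUNT=$(shell ls | wc -l); echo "files: $$COUNT"
	@COUNT=$(shell ls | wc -l); echo "files: $$COUNT"
Running make count prints the first recipe line itself before executing it, while the second line, thanks to the @ prefix, prints only its result.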
I want to create a crontab entry for it.
So the solution has to be a one-liner, as I do not want to write a script and then invoke that script from crontab (with a full if/else inside a script file I could do this easily).
But I want to use some && or || to achieve something like:
cmd1 | cmd2 | cmd3 | wc -l != 0? mail -s "Found xyz" ...
How can this be done?
Try this:
test "$(unreadable | long | list | of | commands | in | one | line)" && mail -s ...
There is no need to count the lines if you just want to know whether the commands produce any output.
You can use something like this:
cmd1 | cmd2 | cmd3 | grep '^' >/dev/null && mail -s "Found xyz"
With &&, it'll run the final command (mail) if there's at least one line of output. If you want to send mail if there isn't any output, use || instead.
Explanation: the grep command succeeds if it finds any lines matching the given pattern. The pattern ^ matches the beginning of a line... any line. So grep '^' succeeds if there's at least one line (even a blank one). The >/dev/null then discards the output.
(You could also use grep -q instead of the >/dev/null, but then grep will exit after the first match, which may cause SIGPIPE errors for the earlier commands, and possibly weird problems. I consider the >/dev/null method safer when used in a pipeline.)
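For the crontab itself, a rough sketch (the schedule, the cmd1 | cmd2 | cmd3 pipeline, and the address are placeholders):
# placeholder schedule: run hourly; cmd1..cmd3 and the address are stand-ins
0 * * * * cmd1 | cmd2 | cmd3 | grep '^' >/dev/null && echo "pipeline produced output" | mail -s "Found xyz" you@example.com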
If I enter a command that runs for a long time or produces a lot of output, I often want to process that output in some way, but I don't want to re-run the command. For example, I might run
$ command
$ command | grep foo
$ command | grep foo | sort | uniq
But if command takes a long time, this is tedious to re-run. Is there a way to have bash (or any other shell) save the output of the last command, similar to the Python REPL's _? I am aware of tee, but I would rather have my shell do this automatically without having to use tee all the time.
I am also aware I could store the output of a command, but again, I would like my shell to do this automatically, so I don't have to think about storing the command and I can just use my shell normally, and process the previous output when I want to.
You can store the output into a variable:
output=$(command)
echo "$output" | grep foo
echo "$output" | grep foo | sort | uniq
(Quote the variable so the newlines in the saved output are preserved when you pipe it on.)
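If you do this a lot, one possible convenience is a tiny wrapper that both prints and saves the output; the function name keep and the variable name output here are made up for illustration:
keep() { output=$("$@"); printf '%s\n' "$output"; }   # run the command once, show its output, keep it in $output
keep command
printf '%s\n' "$output" | grep foo | sort | uniq      # reuse the saved output without re-running command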
I stumbled upon a shell instruction which looks odd:
ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
I suspect that the returned value would represent an error. Could anyone confirm this or explain in which circumstances this instruction would be applied?
The first grep only allows through lines that contain qmail (preceded by any character and followed by a dash, but that is largely immaterial). The second grep strips out lines that contain mail, which means every line passed by the first grep is deleted by the second. There's nothing left for the third one to process, so the file t will always be empty. The value for $? should be 1, failure, since the third grep failed to find any lines that matched its pattern (because it got no lines to process).
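You can see this with a stand-in for the directory listing:
$ printf '.qmail-default\n.qmail-foo\n' | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
1
$ wc -l < t
0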
It is a mistake.
It is hard to know how to fix it without knowing what it is trying to do.
The bash shell (and most other shells) lets users use the output of one command as the input of another. This is accomplished with the | operator, which is called a pipe. So the output of ls -a is fed to grep ".qmail-" and so on. The > operator sends the output of the command to a file, in this case t. So ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t lists the contents of a directory and passes the output through successive filters before finally saving the output to the file t.
The semicolon signals the end of a command and allows multiple bash commands to be entered on a single line.
echo $? prints out the return value of the last executed command, in this case, ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t. By convention, any value besides 0 indicates some sort of error occurred. The Linux Documentation Project gives a list of some common exit codes.
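For example:
$ true; echo $?
0
$ false; echo $?
1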
I'm looking for a way to use the output of a command (say command1) as an argument for another command (say command2).
I encountered this problem when trying to grep the output of the who command using a pattern produced by another pair of commands (actually tty piped to sed).
Context:
If tty displays:
/dev/pts/5
And who displays:
root pts/4 2012-01-15 16:01 (xxxx)
root pts/5 2012-02-25 10:02 (yyyy)
root pts/2 2012-03-09 12:03 (zzzz)
Goal:
I want only the line(s) regarding "pts/5"
So I piped tty to sed as follows:
$ tty | sed 's/\/dev\///'
pts/5
Test:
The attempted following command doesn't work:
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
Possible solution:
I've found out that the following works just fine:
$ eval "who | grep $(echo $(tty) | sed 's/\/dev\///')"
But I'm sure the use of eval could be avoided.
As a final side note: I've noticed that the "-m" argument to who gives me exactly what I want (only the line of who that concerns the current user). But I'm still curious how I could make this combination of pipes and command nesting work...
One usually uses xargs to make the output of one command an argument to another command. For example:
$ cat command1
#!/bin/sh
echo "one"
echo "two"
echo "three"
$ cat command2
#!/bin/sh
printf '1 = %s\n' "$1"
$ ./command1 | xargs -n 1 ./command2
1 = one
1 = two
1 = three
$
But ... while that was your question, it's not what you really want to know.
If you don't mind storing your tty in a variable, you can use bash variable mangling to do your substitution:
$ tty=`tty`; who | grep -w "${tty#/dev/}"
ghoti pts/198 Mar 8 17:01 (:0.0)
(You want the -w because if you're on pts/6 you shouldn't see pts/60's logins.)
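A quick way to see the difference -w makes:
$ printf 'pts/6\npts/60\n' | grep "pts/6"
pts/6
pts/60
$ printf 'pts/6\npts/60\n' | grep -w "pts/6"
pts/6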
You're limited to doing this in a variable, because if you try to put the tty command into a pipe, it thinks that it's not running associated with a terminal anymore.
$ true | echo `tty | sed 's:/dev/::'`
not a tty
$
Note that nothing in this answer so far is specific to bash. Since you're using bash, another way around this problem is to use process substitution. For example, while this does not work:
$ who | grep "$(tty | sed 's:/dev/::')"
This does:
$ grep $(tty | sed 's:/dev/::') < <(who)
You can do this without resorting to sed with the help of Bash variable mangling, although as @ruakh points out this won't work in the single-line version (without the semicolon separating the commands). I'm leaving this first approach up because I think it's interesting that it doesn't work in a single line:
TTY=$(tty); who | grep "${TTY#/dev/}"
This first puts the output of tty into a variable, then erases the leading /dev/ when grep uses it. But without the semicolon, TTY is not yet set by the time bash does the variable expansion/mangling for grep.
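The timing can be seen with a simpler stand-in (X is just an illustrative variable name):
$ unset X
$ X=hello echo "[$X]"
[]
$ X=hello; echo "[$X]"
[hello]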
Here's a version that does work because it spawns a subshell with the already modified environment (that has TTY):
TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
The result is left in $WHOLINE.
@Eduardo's answer is correct (and as I was writing this, a couple of other good answers have appeared), but I'd like to explain why the original command is failing. As usual, set -x is very useful to see what's actually happening:
$ set -x
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
+ who
++ sed 's/\/dev\///'
+++ tty
++ echo not a tty
+ grep not a tty
grep: a: No such file or directory
grep: tty: No such file or directory
It's not completely explicit in the above, but what's happening is that tty is outputting "not a tty". This is because it's part of the pipeline being fed the output of who, so its stdin is indeed not a tty. This is the real reason everyone else's answers work: they get tty out of the pipeline, so it can see your actual terminal.
BTW, your proposed command is basically correct (except for the pipeline issue), but unnecessarily complex. Don't use echo $(tty), it's essentially the same as just tty.
You can do it like this:
tid=$(tty | sed 's#/dev/##') && who | grep "$tid"
I'm well aware of the source (aka .) utility, which will take the contents from a file and execute them within the current shell.
Now, I'm transforming some text into shell commands, and then running them, as follows:
$ ls | sed ... | sh
ls is just a random example, the original text can be anything. sed too, just an example for transforming text. The interesting bit is sh. I pipe whatever I got to sh and it runs it.
My problem is, that means starting a new subshell. I'd rather have the commands run within my current shell, like I would be able to do with source some-file if I had the commands in a text file.
I don't want to create a temp file because it feels dirty.
Alternatively, I'd like to start my subshell with the exact same characteristics as my current shell.
update
Ok, the solutions using backtick certainly work, but I often need to do this while I'm checking and changing the output, so I'd much prefer if there was a way to pipe the result into something in the end.
sad update
Ah, the /dev/stdin thing looked so pretty, but, in a more complex case, it didn't work.
So, I have this:
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/git mv -f $1 $2.doc/i' | source /dev/stdin
Which ensures all .doc files have their extension lowercased.
Which, incidentally, can be handled with xargs, but that's beside the point.
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/$1 $2.doc/i' | xargs -L1 git mv
So, when I run the former, it'll exit right away, nothing happens.
The eval command exists for this very purpose.
eval "$( ls | sed... )"
More from the bash manual:
eval [arguments]
The arguments are concatenated together into a single command, which is then read and executed, and its exit status returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
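For instance, commands generated on the fly run in the current shell, so their effects persist (greeting is just an illustrative variable name):
$ eval "$(printf 'greeting=hello\n')"
$ echo "$greeting"
hello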
$ ls | sed ... | source /dev/stdin
UPDATE: This works in bash 4.0, as well as tcsh, and dash (if you change source to .). Apparently this was buggy in bash 3.2. From the bash 4.0 release notes:
Fixed a bug that caused `.' to fail to read and execute commands from non-regular files such as devices or named pipes.
Try using process substitution, which replaces output of a command with a temporary file which can then be sourced:
source <(echo id)
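Since the sourcing happens in the current shell, anything the generated commands set sticks around (myvar is just an illustrative name):
$ source <(printf 'myvar=42\n')
$ echo "$myvar"
42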
Wow, I know this is an old question, but I've found myself with the same exact problem recently (that's how I got here).
Anyway - I don't like the source /dev/stdin answer, but I think I found a better one. It's deceptively simple actually:
echo ls -la | xargs xargs
Nice, right? Actually, this still doesn't do what you want, because if you have multiple lines it will concatenate them into a single command instead of running each command separately. So the solution I found is:
ls | ... | xargs -L 1 xargs
The -L 1 option means at most one line is used per command execution. Note: if your line ends with a trailing space, it will be concatenated with the next line! So make sure each line ends with a non-space.
Finally, you can do
ls | ... | xargs -L 1 xargs -t
to see what commands are executed (-t is verbose).
Hope someone reads this!
`ls | sed ...`
I sort of feel like ls | sed ... | source - would be prettier, but unfortunately source doesn't understand - to mean stdin.
I believe this is "the right answer" to the question:
ls | sed ... | while read line; do $line; done
That is, one can pipe into a while loop; the read command takes one line from its stdin and assigns it to the variable $line. $line then becomes the command executed within the loop, and this continues until there are no further lines in the input.
This still won't work with some control structures (like another loop), but it fits the bill in this case.
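For example:
$ printf 'echo one\necho two\n' | while read line; do $line; done
one
two
(Note that when the loop is fed from a pipe, bash runs it in a subshell, so anything the generated commands set, such as variables, won't persist in your interactive shell.)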
To use mark4o's solution on bash 3.2 (macOS), a here string can be used instead of a pipeline, as in this example:
. /dev/stdin <<< "$(grep '^alias' ~/.profile)"
I think your solution is command substitution with backticks: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
See section 3.4.5
Why not use source then?
$ ls | sed ... > out.sh ; source out.sh