Use output of bash command (with pipe) as a parameter for another command - bash

I'm looking for a way to use the output of a command (say command1) as an argument for another command (say command2).
I encountered this problem when trying to grep the output of the who command using a pattern produced by another command pipeline (actually tty piped to sed).
Context:
If tty displays:
/dev/pts/5
And who displays:
root pts/4 2012-01-15 16:01 (xxxx)
root pts/5 2012-02-25 10:02 (yyyy)
root pts/2 2012-03-09 12:03 (zzzz)
Goal:
I want only the line(s) regarding "pts/5"
So I piped tty to sed as follows:
$ tty | sed 's/\/dev\///'
pts/5
Test:
The attempted following command doesn't work:
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
Possible solution:
I've found out that the following works just fine:
$ eval "who | grep $(echo $(tty) | sed 's/\/dev\///')"
But I'm sure the use of eval could be avoided.
As a final side note: I've noticed that the "-m" argument to who gives me exactly what I want (only the line of who for the current user). But I'm still curious how I could make this combination of pipes and command nesting work...

One usually uses xargs to make the output of one command into arguments for another command. For example:
$ cat command1
#!/bin/sh
echo "one"
echo "two"
echo "three"
$ cat command2
#!/bin/sh
printf '1 = %s\n' "$1"
$ ./command1 | xargs -n 1 ./command2
1 = one
1 = two
1 = three
$
But ... while that was your question, it's not what you really want to know.
If you don't mind storing your tty in a variable, you can use bash variable mangling to do your substitution:
$ tty=`tty`; who | grep -w "${tty#/dev/}"
ghoti pts/198 Mar 8 17:01 (:0.0)
(You want the -w because if you're on pts/6 you shouldn't see pts/60's logins.)
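A quick illustration of the difference, using made-up who-style lines:
$ printf '%s\n' 'alice pts/6' 'bob pts/60' | grep pts/6
alice pts/6
bob pts/60
$ printf '%s\n' 'alice pts/6' 'bob pts/60' | grep -w pts/6
alice pts/6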
You're limited to doing this in a variable, because if you put the tty command inside the pipeline, it no longer sees a terminal on its standard input.
$ true | echo `tty | sed 's:/dev/::'`
not a tty
$
Note that nothing in this answer so far is specific to bash. Since you're using bash, another way around this problem is to use process substitution. For example, while this does not work:
$ who | grep "$(tty | sed 's:/dev/::')"
This does:
$ grep $(tty | sed 's:/dev/::') < <(who)
This works because grep's input now comes from a process substitution rather than a pipe, so the $(tty) command substitution still runs with your terminal on standard input.

You can do this without resorting to sed with the help of Bash variable mangling, although as @ruakh points out this won't work in the single-line version (without the semicolon separating the commands). I'm leaving this first approach up because I think it's interesting that it doesn't work on a single line:
TTY=$(tty); who | grep "${TTY#/dev/}"
This first puts the output of tty into a variable, then strips the leading /dev/ when it is passed to grep. But without the semicolon, TTY is not yet set at the moment bash performs the variable expansion/mangling for grep.
Here's a version that does work, because assignments on a single command line are performed left to right, so TTY is already set when the command substitution for WHOLINE runs:
TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
The result is left in $WHOLINE.
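For example, printing it afterwards (output taken from the example in the question):
$ TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
$ echo "$WHOLINE"
root pts/5 2012-02-25 10:02 (yyyy)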

@Eduardo's answer is correct (and as I was writing this, a couple of other good answers have appeared), but I'd like to explain why the original command is failing. As usual, set -x is very useful to see what's actually happening:
$ set -x
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
+ who
++ sed 's/\/dev\///'
+++ tty
++ echo not a tty
+ grep not a tty
grep: a: No such file or directory
grep: tty: No such file or directory
It's not completely explicit in the above, but what's happening is that tty is outputting "not a tty". This is because it's part of the pipeline being fed the output of who, so its stdin is indeed not a tty. This is the real reason everyone else's answers work: they get tty out of the pipeline, so it can see your actual terminal.
BTW, your proposed command is basically correct (except for the pipeline issue), but unnecessarily complex. Don't use echo $(tty); it's essentially the same as just tty.

You can do it like this:
tid=$(tty | sed 's#/dev/##') && who | grep "$tid"

Related

While-read nested loop giving me nothing in return [duplicate]

I want to write a script that loops through the output (array possibly?) of a shell command, ps.
Here is the command and the output:
$ ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh
3089 python /var/www/atm_securit 37:02
17116 python /var/www/atm_securit 00:01
17119 python /var/www/atm_securit 00:01
17122 python /var/www/atm_securit 00:01
17125 python /var/www/atm_securit 00:00
Converted into a bash script (snippet):
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo $tbl
done
But the output becomes:
3089
python
/var/www/atm_securit
38:06
17438
python
/var/www/atm_securit
00:02
17448
python
/var/www/atm_securit
00:01
How do I loop through every row like in the shell output, but in a bash script?
Never use a for loop over the results of a shell command if you want to process it line by line, unless you change the internal field separator $IFS to \n. Otherwise the lines will be subject to word splitting, which leads to the results you are seeing. For example, if you have a file like this:
foo bar
hello world
The following for loop
for i in $(cat file); do
echo "$i"
done
gives you:
foo
bar
hello
world
Even if you use IFS=$'\n', the lines may still be subject to filename expansion.
I recommend using while + read instead, because read reads input line by line.
Furthermore, I would use pgrep if you are searching for the pids belonging to a certain binary. However, since python may appear as different binaries, like python2.7 or python3.4, I suggest passing -f to pgrep, which makes it match against the whole command line rather than just binaries called python. But this will also find processes that were started like cat foo.py. You have been warned! You can refine the regex passed to pgrep as you wish.
Example:
pgrep -f python | while read -r pid ; do
echo "$pid"
done
or if you also want the process name:
pgrep -af python | while read -r line ; do
echo "$line"
done
If you want the process name and the pid in separate variables:
pgrep -af python | while read -r pid cmd ; do
echo "pid: $pid, cmd: $cmd"
done
You see, read offers a flexible and stable way to process the output of a command line-by-line.
Btw, if you prefer your ps .. | grep command line over pgrep use the following loop:
ps -ewo pid,etime,cmd | grep python | grep -v grep | grep -v sh \
| while read -r pid etime cmd ; do
echo "$pid $cmd $etime"
done
Note how I changed the order of etime and cmd, so that cmd, which can contain whitespace, can be read into a single variable. This works because read splits the line into as many fields as there are variables; the remaining part of the line, possibly including whitespace, is assigned to the last variable specified.
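A quick illustration of that last-variable behaviour, with a made-up line:
$ echo '123 01:02 python /var/www/app' | while read -r pid etime cmd; do echo "cmd=$cmd"; done
cmd=python /var/www/app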
I found you can do this by just feeding the loop with a here string, in double quotes:
while read -r proc; do
#do work
done <<< "$(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)"
This will hand the loop each line rather than each word.
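A minimal sketch of the same here-string pattern with a fixed string instead of ps, so it is easy to test:
$ while read -r proc; do echo "got: $proc"; done <<< "$(printf '%s\n' '3089 python 37:02' '17116 python 00:01')"
got: 3089 python 37:02
got: 17116 python 00:01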
When using for loops, bash splits the given list by whitespace by default; this can be changed via the so-called Internal Field Separator, or IFS for short.
IFS    The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is "<space><tab><newline>".
For your example, we need to tell IFS to use newlines as the break point.
IFS=$'\n'
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo $tbl
done
This example returns the following output on my machine.
668 /usr/bin/python /usr/bin/ud 03:05:54
27892 python 00:01
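One caveat: changing IFS affects the rest of the script, so it is safer to save and restore it (a minimal sketch):
oldIFS=$IFS
IFS=$'\n'
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo "$tbl"
done
IFS=$oldIFS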
Here is another bash-based solution, inspired by a comment from @Gordon Davisson.
This requires at least bash v1.13.5 (1992) or a later version, because process substitution (while read var; do ...; done < <(...)) is used.
#!/bin/bash
while IFS= read -r oL; do # reads a single line
echo "$oL" # prints that line
done < <(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
unset oL
Note: you can use any simple or complex command (or pipeline) inside the <(...), and it may produce multiple output lines.
And here is a one-liner version:
while IFS= read -r oL; do echo "$oL"; done < <(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh); unset oL
(As process substitution is not part of POSIX yet, it is not supported in many POSIX-compliant shells, nor in bash's POSIX mode. Process substitution has existed in bash since 1992 (28 years ago as of 2020) and in ksh even earlier, so POSIX arguably should have included it.)
If you want something similar to process substitution in a POSIX-compliant shell (i.e. sh, ash, dash, pdksh/mksh, etc.), look into named pipes.
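A minimal sketch of the named-pipe equivalent in plain POSIX sh (the fifo path is arbitrary):
fifo=/tmp/myfifo.$$
mkfifo "$fifo"
ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh > "$fifo" &
while IFS= read -r oL; do
echo "$oL"
done < "$fifo"
rm -f "$fifo"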


pipe tail output into another script

I am trying to pipe the output of a tail command into another bash script to process:
tail -n +1 -f your_log_file | myscript.sh
However, when I run it, the $1 parameter (inside myscript.sh) never gets set. What am I missing? How do I pipe the output to become the input parameter of the script?
PS - I want tail to run forever and continue piping each individual line into the script.
Edit
For now the entire contents of myscripts.sh are:
echo $1;
Generally, here is one way to handle standard input to a script:
#!/bin/bash
while read -r line; do
echo "$line"
done
That is a very rough bash equivalent to cat. It does demonstrate a key fact: each command inside the script inherits its standard input from the shell, so you don't really need to do anything special to get access to the data coming in. read takes its input from the shell, which (in your case) is getting its input from the tail process connected to it via the pipe.
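For instance, assuming the script above is saved as mycat.sh (a name chosen here just for illustration):
$ printf 'one\ntwo\n' | ./mycat.sh
one
two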
As another example, consider this script; we'll call it 'mygrep.sh'.
#!/bin/bash
grep "$1"
Now the pipeline
some-text-producing-command | ./mygrep.sh bob
behaves identically to
some-text-producing-command | grep bob
$1 is set if you call your script like this:
./myscript.sh foo
Then $1 has the value "foo".
The positional parameters and standard input are separate; you could do this
tail -n +1 -f your_log_file | myscript.sh foo
Now standard input is still coming from the tail process, and $1 is still set to 'foo'.
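A small sketch of a myscript.sh that uses both (hypothetical contents):
#!/bin/bash
echo "argument: $1"
while read -r line; do
echo "stdin: $line"
done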
Perhaps you were thinking of awk?
tail -n +1 -f your_log_file | awk '{
print $1
}'
would print the first column from the output of the tail command.
In the shell, a similar effect can be achieved with:
tail -n +1 -f your_log_file | while read first junk; do
echo "$first"
done
Alternatively, you could put the whole while ... done loop inside myscript.sh
Piping connects the output (stdout) of one process to the input (stdin) of another process. stdin is not the same thing as the arguments sent to a process when it starts.
What you want to do is convert the lines in the output of your first process into arguments for the second process. This is exactly what the xargs command is for.
All you need to do is pipe the output through xargs, which will invoke your script with each line as its arguments:
tail -n +1 -f your_log_file | xargs -L 1 ./myscript.sh
Note that xargs starts a new instance of myscript.sh for every line, passing the line's words as $1, $2, and so on.

Shell scripting obtain command PID

In a shell script, let's say I have run a command like this:
for i in `ps -ax|grep "myproj"`
do
echo $i
done
Here, the grep command would be executed as a separate process. Then how do I get its PID in the shell script?
I'm going out on a limb here, and understand this looks more like a comment.
Why do you need the PID of the grep command?
In your comment you say you want to compare it against something in the loop. I suppose your issue is that the loop will (sometimes) include not only myproj but also an entry for your grep command itself? If so, try the following:
for i in `ps -ax | grep -v grep | grep "myproj"`
do
echo $i
done
The -v switch basically inverts the pattern, so grep -v grep (or grep -v "grep", which maybe looks a bit less awkward) will include only lines that do not include the string "grep" (see man grep).
Note that this may be too imprecise for some cases, for example if the pattern you actually look for also contains the string "grep". For example, the following might not work as you'd expect: ps -ax | grep -v grep | grep mygrepling
However, in your particular case, where you only look for "myproj" it will do.
Or you could simply use
for i in `ps -ax | grep "my[p]roj"`
do
echo $i
done
That way there is no need to know the PID of the grep command, because its entry simply never shows up as a loop iteration: the pattern my[p]roj matches the string myproj, but it does not match itself as it appears on grep's own command line.
When you run a process in background, you can get its PID in $!
$ ps aux | grep dddddd & echo $!
[1] 27948
27948
ic 27948 0.0 0.0 3932 760 pts/3 R 08:49 0:00 grep dddddd
When run in the foreground, the process no longer exists at the point where you want to find its PID. By the time you are inside the loop, the for statement has already been evaluated and grep has already exited, so you can no longer find its PID.

How to execute the output of a command within the current shell?

I'm well aware of the source (aka .) utility, which will take the contents from a file and execute them within the current shell.
Now, I'm transforming some text into shell commands, and then running them, as follows:
$ ls | sed ... | sh
ls is just a random example, the original text can be anything. sed too, just an example for transforming text. The interesting bit is sh. I pipe whatever I got to sh and it runs it.
My problem is, that means starting a new sub shell. I'd rather have the commands run within my current shell. Like I would be able to do with source some-file, if I had the commands in a text file.
I don't want to create a temp file because that feels dirty.
Alternatively, I'd like to start my sub shell with the exact same characteristics as my current shell.
update
Ok, the solutions using backticks certainly work, but I often need to do this while I'm checking and changing the output, so I'd much prefer a way to pipe the result into something at the end.
sad update
Ah, the /dev/stdin thing looked so pretty, but, in a more complex case, it didn't work.
So, I have this:
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/git mv -f $1 $2.doc/i' | source /dev/stdin
Which ensures all .doc files have their extension lowercased.
And which incidentally, can be handled with xargs, but that's besides the point.
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/$1 $2.doc/i' | xargs -L1 git mv
So, when I run the former, it'll exit right away, nothing happens.
The eval command exists for this very purpose.
eval "$( ls | sed... )"
More from the bash manual:
eval [arguments]
The arguments are concatenated together into a single command, which is then read and executed, and its exit status is returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
$ ls | sed ... | source /dev/stdin
UPDATE: This works in bash 4.0, as well as tcsh, and dash (if you change source to .). Apparently this was buggy in bash 3.2. From the bash 4.0 release notes:
Fixed a bug that caused `.' to fail to read and execute commands from non-regular files such as devices or named pipes.
Try using process substitution, which replaces output of a command with a temporary file which can then be sourced:
source <(echo id)
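One way to convince yourself that this really runs in the current shell (the variable name is arbitrary):
$ source <(echo 'foo=bar')
$ echo "$foo"
bar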
Wow, I know this is an old question, but I've found myself with the same exact problem recently (that's how I got here).
Anyway - I don't like the source /dev/stdin answer, but I think I found a better one. It's deceptively simple actually:
echo ls -la | xargs xargs
Nice, right? Actually, this still doesn't do what you want, because if you have multiple lines it will concatenate them into a single command instead of running each command separately. So the solution I found is:
ls | ... | xargs -L 1 xargs
The -L 1 option means xargs uses (at most) 1 line per command execution. Note: if a line ends with a trailing space, it will be concatenated with the next line! So make sure each line ends with a non-space character.
Finally, you can do
ls | ... | xargs -L 1 xargs -t
to see what commands are executed (-t is verbose).
Hope someone reads this!
`ls | sed ...`
I sort of feel like ls | sed ... | source - would be prettier, but unfortunately source doesn't understand - to mean stdin.
I believe this is "the right answer" to the question:
ls | sed ... | while read line; do $line; done
That is, one can pipe into a while loop; the read command takes one line from its stdin and assigns it to the variable $line. $line then becomes the command executed within the loop, and this continues until there are no further lines in the input.
This still won't work with some control structures (like another loop), but it fits the bill in this case.
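For example, with a trivial made-up command list:
$ printf '%s\n' 'echo hello' 'echo world' | while read line; do $line; done
hello
world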
To use mark4o's solution on bash 3.2 (macOS), a here string can be used instead of a pipeline, as in this example:
. /dev/stdin <<< "$(grep '^alias' ~/.profile)"
I think your solution is command substitution with backticks: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
See section 3.4.5
Why not use source then?
$ ls | sed ... > out.sh ; source out.sh
