How to pipe the output of one command to a specific location in the next command? - bash

I am running a command that spits out IPs. I need to feed each one to another program at a specific position in its argument list, as it comes. How do I do it?
$ command1 | command2 -c configfile -i "$1" status
"$1" is where I want the result of command1 to go to.
Thanks.

xargs is your tool
$ command1 | xargs -I {} command2 -c configfile -i {} status
You can refer to the argument multiple times; for example:
$ echo this | xargs -I {} echo {}, {}, and {}
this, this, and this
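With several lines of input, -I runs the command once per line, which is exactly what the question needs. A sketch with placeholder addresses (echo stands in for actually running command2):
$ printf '%s\n' 10.0.0.1 10.0.0.2 | xargs -I {} echo command2 -c configfile -i {} status
command2 -c configfile -i 10.0.0.1 status
command2 -c configfile -i 10.0.0.2 status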
Based on the last comment, perhaps you want to do something like this:
$ var=$(command1) && command2 "$var" ... | command3 "$var" ...
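Applied to the original command line, that pattern might look like this (assuming command1 prints a single value):
$ ip=$(command1) && command2 -c configfile -i "$ip" status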

To pass command2 a filename which will, when read, provide output from command1, the appropriate tool is process substitution:
command2 -c configfile -i <(command1) status
The <(...) syntax will be replaced with a filename -- on Linux, of the form /dev/fd/NN; on some other platforms, a named pipe instead -- from which the output of command1 can be streamed.
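A quick way to see the generated filename for yourself (the exact descriptor number may vary):
$ echo <(true)
/dev/fd/63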

There are probably 100 ways to do this in a bash shell. This one is quick and easy: use ifconfig, grep for the line with the IP address, pull the address out with awk, and assign it to a variable. Then use the && operator to run the next command, which uses that variable.
IPADDR=$(ifconfig | grep 172 | awk '{print $2}' | awk -F: '{print $2}') && echo "$IPADDR"
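Tying that back to the original question (command2, configfile, and the 172 prefix are placeholders from this thread):
IPADDR=$(ifconfig | grep 172 | awk '{print $2}' | awk -F: '{print $2}') && command2 -c configfile -i "$IPADDR" status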

Related

User input into variables and grep a file for pattern

Hi!
So I am trying to run a script which looks for a string pattern.
For example, I want to find two words in a file, located separately:
"I like toast, toast is amazing. Bread is just toast before it was toasted."
I want to invoke it from the command line using something like this:
./myscript.sh myfile.txt "toast bread"
My code so far:
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat $text_file | grep -w "$keyword_first""$keyword_second" )
echo $find_keyword
I have tried a few different ways. Directly from the command line I can make it run using:
cat myfile.txt | grep -E 'toast|bread'
I'm trying to put the user input into variables and use the variables to grep the file.
You seem to be looking simply for
grep -E "$2|$3" "$1"
What works on the command line will also work in a script, though you will need to switch to double quotes for the shell to replace variables inside the quotes.
In this case, the -E option can be replaced with multiple -e options, too.
grep -e "$2" -e "$3" "$1"
You can pipe to grep twice:
find_keyword=$(cat "$text_file" | grep -w "$keyword_first" | grep -w "$keyword_second")
Note that your search word "bread" is not found because the string contains the uppercase "Bread". If you want to find the words regardless of this, you should use the case-insensitive option -i for grep:
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
In a full script:
#!/bin/bash
#
# usage: ./myscript.sh myfile.txt "toast" "bread"
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
echo "$find_keyword"
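Running it against the sample sentence from the question prints the line containing both words:
$ ./myscript.sh myfile.txt toast bread
I like toast, toast is amazing. Bread is just toast before it was toasted.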

Execute piped shell commands in Tcl

I want to execute these piped shell commands in Tcl:
grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head
I try:
exec grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head
and get an error:
Error: grep: invalid option -- 'k'
When I try to pipe only 2 of the commands:
exec grep -v "#" inputfile | grep -v ">"
I get:
Error: can't specify ">" as last word in command
Update: I also tried {} and {bash -c '...'}:
exec {bash -c 'grep -v "#" inputfile | grep -v ">"'}
Error: couldn't execute "bash -c 'grep -v "#" inputfile | grep -v ">"'": no such file or directory
My question: how can I execute the initial piped commands in a Tcl script?
Thanks
The problem is that exec does “special things” when it sees a > on its own (or at the start of a word) as that indicates a redirection. Unfortunately, there's no practical way to avoid this directly; this is an area where Tcl's syntax system doesn't help. You end up having to do something like this:
exec grep -v "#" inputfile | sh -c {exec grep -v ">"} | sort -r -nk7 | head
You can also move the entire pipeline to the Unix shell side:
exec sh -c {grep -v "#" inputfile | grep -v ">" | sort -r -nk7 | head}
Though to be frank this is something that you can do in pure Tcl, which will then make it portable to Windows too…
The > is causing problems here.
You need to escape it from Tcl and the shell to make it work.
exec grep -v "#" inputfile | grep -v {\\>} | sort -r -nk7 | head
or (and this is better since you have one less grep)
exec grep -Ev {#|>} inputfile | sort -r -nk7 | head
If you look in the directory you were running this from (assuming tclsh or similar) you'll probably see that you created an oddly named file (i.e. |) before.
In pure Tcl:
package require fileutil
set lines {}
::fileutil::foreachLine line inputfile {
    if {![regexp {#|>} $line]} {
        lappend lines $line
    }
}
set lines [lsort -decreasing -integer -index 6 $lines]
set lines [lrange $lines 0 9]
puts [join $lines \n]\n
(-double might be more appropriate than -integer)
Edit: I mistranslated the (1-based) -k index for the command sort when writing the (0-based) -index option for lsort. It is now corrected.
Documentation: fileutil package, if, join, lappend, lrange, lsort, package, puts, regexp, set

Bash code error: unexpected syntax error

I am not sure why I am getting the syntax error near the unexpected token `('.
#!/bin/bash
DirBogoDict=$1
BogoFilter=/home/nikhilkulkarni/Downloads/bogofilter-1.2.4/src/bogofilter
echo "spam.."
for i in 'cat full/index |fgrep spam |awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500'
do
cat $i |$BogoFilter -d $DirBogoDict -M -k 1024 -v
done
echo "ham.."
for i in 'cat full/index | fgrep ham | awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500'
do
cat $i |$BogoFilter -d $DirBogoDict -M -k 1024 -v
done
Error:
./score.bash: line 7: syntax error near unexpected token `('
./score.bash: line 7: `for i in 'cat full/index |fgrep spam |awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500''
Uh, because you have massive syntax errors.
The immediate problem is that you have an unpaired single quote before the cat which exposes the Awk script to the shell, which of course cannot parse it as shell script code.
Presumably you want to use backticks instead of single quotes, although you should actually not read input with for.
With a fair bit of refactoring, you might want something like
for type in spam ham; do
    awk -F"/" -v type="$type" '$0 ~ type && NR>1000 && i++<500 {
        print $2"/"$3 }' full/index |
    xargs $BogoFilter -d $DirBogoDict -M -k 1024 -v
done
This refactors the useless cat | grep | awk | head into a single Awk script, and avoids the silly loop over each output line. I assume bogofilter can read file name arguments; if not, you will need to refactor the xargs slightly. If you can pipe all the files in one go, try
... xargs cat | $BogoFilter -d $DirBogoDict -M -k 1024 -v
or if you really need to pass in one at a time, maybe
... xargs sh -c 'for f; do $BogoFilter -d $DirBogoDict -M -k 1024 -v <"$f"; done' _
... in which case you will need to export the variables BogoFilter and DirBogoDict to expose them to the subshell (or just inline them -- why do you need them to be variables in the first place? Putting command names in variables is particularly weird; just update your PATH and then simply use the command's name).
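For instance, keeping the variable names from the question:
export DirBogoDict BogoFilter   # make both variables visible to the sh -c subshell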
In general, if you find yourself typing the same commands more than once, you should think about how to avoid that. This is called the DRY principle.
The syntax error is due to bad quoting. The expression whose output you want to loop over should be in command substitution syntax ($(...) or backticks), not single quotes.
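For example, the first loop header from the question would become this (though see the answer above about why looping over command output is a bad idea in the first place):
for i in $(fgrep spam full/index | awk -F"/" '{if(NR>1000)print $2"/"$3}' | head -500)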

Pipe a list or array in awk as variable (to be executed as parameter in other command)

I am trying to execute the one-liner below as part of a bash script.
command1 | grep -v ID | grep -v + | awk '{print "command2" $2}' | bash
The first part of the pipe prints the info below:
root#system:~# command1 |grep -v ID|grep -v +
| id | name | mac_address | fixed_ips |
| 00277225-34fa-48f5-9a2a-ee5f1c5b1dcb | dummy | fa:18:3e:c4:85:94 | {"subnet_id": "0cd4d824-4420-4049-87c3-ed33c3addbf5", "ip_address": "11.170.1.121"} |
:
:
| ff9a6ed5-9694-45bc-bf71-59565f96d809 | BAT-T0-A2-0-7-tport | fa:18:3e:62:70:fb | {"subnet_id": "f9ae81ed-3b1a-45a7-96fd-c417ed32
So $2 in the awk script is "00277225-34fa-48f5-9a2a-ee5f1c5b1dcb".
e.g.
command2 00277225-34fa-48f5-9a2a-ee5f1c5b1dcb
The whole purpose of this one-liner is to execute a number of "command2" instances with different parameter values from the printout above.
e.g.
command2 00277225-34fa-48f5-9a2a-ee5f1c5b1dcb
command2 ff9a6ed5-9694-45bc-bf71-59565f96d809
:
:
But I cannot get $2 recognized the way I do it below:
command1 | grep -v ID | grep -v + | awk '{print "command2" $2}' | bash
I think I am missing a few syntax tricks here (as a newbie).
P.S.: If I copy/paste the whole line at the command line, it works fine.
You seem to be looking for
command1 | awk '!/ID/ && !/\+/ {print $2}' | xargs -n 1 command2
I refactored the ugly useless greps into the Awk script; but the real beef here is xargs. It reads values from standard input and passes them on as arguments to the command you supply.
The option -n 1 says to only accept one additional argument at a time; but if command2 is a well-written standard Unix command, it can probably accept an arbitrary number of arguments, and will simply loop over them. In that case, removing -n 1 will be a lot more efficient.
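A quick demonstration of the difference, with echo standing in for command2:
$ printf '%s\n' a b c | xargs -n 1 echo
a
b
c
$ printf '%s\n' a b c | xargs echo
a b c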
Incidentally, your original attempt was fairly close; you should have added a space after command2 in the print statement. But I hope this solution will also help you see how to "think Unix".

Bash/Awk: How can I run a command using bash or awk

How can I run a command in bash, read the output it returns, and check whether the text "xyz" appears in it, in order to decide whether or not to run another command?
Is it easy?
Thanks
if COMMAND | grep -q xyz; then
#do something
fi
EDIT: Made it quiet.
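For example (uname and the pattern Linux are only stand-ins for the real command and text):
if uname | grep -q Linux; then
    echo "running on Linux"
fi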
For example:
command1 | grep "xyz" >/dev/null 2>&1 && command2
run command1,
filter its output with grep,
discard the output from grep,
and, if grep was successful (i.e. it found the string),
execute command2.
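Using the -q option from the answer above achieves the same thing without the redirection:
command1 | grep -q "xyz" && command2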
You can pipe the output of the command to grep, or to grep -e if the pattern begins with a dash.
Your specification is very loose, but here is an idea to try
output="$(cmd args ....)"
case "${output}" in
*targetText* ) otherCommand args ... ;;
*target2Text* ) other2Command ... ;;
esac
I hope this helps.
While you can accomplish this task in many different ways, this is a perfect use for awk.
prev_cmd | awk '/xyz/ {print "cmd"}' | bash
This is what you want.
For example,
I have a text file called temp.txt that only contains 'xyz'.
If I run the following command, I expect the output "found it":
$ cat temp.txt | awk '/xyz/ {print "echo found it"}' | bash
> found it
So what I am doing is piping the output of my previous command into awk, which looks for the pattern xyz (/xyz/). awk prints the command, in this case echo found it, and pipes it to bash, which executes it. It's a simple one-liner doing what you asked; note that you can customize the regex that awk looks for.
