I have the following:
ssh $DOMAIN -l root "grep "'$EMAIL@$DOMAIN'" /var/log/maillog | grep retr= | grep -v retr=0 | awk '{ print "'$11'" }' | cut -d, -f1 | cut -d= -f2 | awk '{ t += $1 } END { print "'total: '", t, "' bytes transferred over POP3'"}'"
Running this command gives the following output:
stdin: is not a tty
awk: { t += blah@email.com } END { print total: , t, bytes transferred over POP3}
awk: ^ invalid char '@' in expression
It looks like the issue is with awk '{ t += $1 }', because of the @ in $1; however, I've tried several different ways of escaping it with no luck. Any advice is appreciated.
I don't think this is the command you ran, because the quotes don't match up (there's a superfluous "). It sounds like the command you actually ran expanded "$1", thus causing awk to interpret a literal email address instead of reading from the first field.
The final part should be:
awk '{ t += $1 } END { print "total: ", t, " bytes transferred over POP3"}'
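A rough sketch of one way to sidestep the nested quoting: run only the greps remotely and do the rest of the pipeline locally (this assumes $EMAIL and $DOMAIN are set in the local shell and is not the exact original command):
ssh -l root "$DOMAIN" "grep '$EMAIL@$DOMAIN' /var/log/maillog | grep retr= | grep -v retr=0" |
  awk '{ print $11 }' |
  cut -d, -f1 |
  cut -d= -f2 |
  awk '{ t += $1 } END { print "total:", t, "bytes transferred over POP3" }'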
I have a file containing multiple rows like the one below:
test1| 1234 | test2 | test3
Extract the second column (1234) and run a command, feeding that value as input.
Let's say the command outputs X.
Print the output as below for each line:
test1 | X | test2 | test3
I'd prefer a one-liner, but I'm open to ideas.
I am able to extract the string using awk, but I am not sure how to preserve the rest of the line while replacing that field in the output. Below is what I tested:
cat file.txt | awk -F '|' '{newVar=system("command "$2); print newVar $4}'
Sample command output, from which we extract the "name":
openstack show 36a6c06e-5e97-4a53-bb42
+----------------------------+-----------------------------------+
| Property | Value |
+----------------------------+-----------------------------------+
| id | 36a6c06e-5e97-4a53-bb42 |
| name | testVM1 |
+----------------------------+-----------------------------------+
Perl to the rescue!
perl -lF'/\|/' -ne 'chomp( $F[1] = qx{ command $F[1] }); print join "|", @F' < file.txt
-n reads the input line by line
-l removes newlines from input and adds them to printed output
-F specifies how to split each input line into the @F array
$F[1] corresponds to the second column; we replace it with the output of the command
chomp removes the trailing newline from the command output
join glues the array back to one line
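For instance, with rev standing in for the real command (a hypothetical run, not from the original post):
$ cat file.txt
test1| 1234 | test2 | test3
$ perl -lF'/\|/' -ne 'chomp( $F[1] = qx{ echo $F[1] | rev } ); print join "|", @F' < file.txt
test1|4321| test2 | test3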
Using awk:
awk -F ' *| *' '{("command "$2) | getline $2}1' file.txt
e.g.
$ awk -F ' *| *' '{("date -d @"$2) | getline $2}1' file.txt
test1| Thu 01 Jan 1970 05:50:34 AM IST | test2 | test3
I changed the field separator from | to ' *| *' to accommodate the spaces surrounding the fields. You can remove them based on your actual input.
This finally did the trick:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
if ( line ~ /name/ ) {
split(line,flds,/ *[|] */)
$2 = flds[3]
break
}
}
close(cmd)
print
}' file
If command can take the whole list of values at once and generate the converted list as output (e.g. tr 'a-z' 'A-Z'), then you'd want to do something like this to avoid spawning a shell once per input line (which is extremely slow):
awk -F' *[|] *' '{print $2}' file |
command |
awk -F' *[|] *' -v OFS=' | ' 'NR==FNR{a[FNR]=$0; next} {$2=a[FNR]} 1' - file
Otherwise, if command needs to be called with one value at a time (e.g. echo), or you just don't care about execution speed, then you'd do:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "command \047" $2 "\047"
if ( (cmd | getline line) > 0 ) {
$2 = line
}
close(cmd)
print
}' file
The \047s will produce single quotes around $2 when it's passed to command, shielding it from shell interpretation (see https://mywiki.wooledge.org/Quotes). The test on the result of getline protects you from silently overwriting the current $2 with the output of an earlier command execution if a call fails (see http://awk.freeshell.org/AllAboutGetline). The close() ensures that you don't end up with a "too many open files" error or some other cryptic problem if the pipe isn't being closed properly, e.g. if command is generating multiple lines and you're only reading the first one.
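For reference, \047 is just the octal escape for a single quote, so the constructed command string really does come out single-quoted; a quick check:
$ awk 'BEGIN { cmd = "command \047" "some value" "\047"; print cmd }'
command 'some value'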
Given your comment below, if you're going with the 2nd approach above then you'd write something like:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
split(line,flds)
if ( flds[2] == "name" ) {
$2 = flds[3]
break
}
}
close(cmd)
print
}' file
I have a command which gives me output from telnet. The full output from telnet looks like this:
telnet myserver.com 1234
Server, Name=MyServer, Age=123, Ver=1.23, ..., ..., ...
This command should extract just the number after Age= (i.e. the 123 in "Age=123"):
echo "\n" | nc myserver.com 1234 | (awk -F "=" '{print $3}')
Instead of 123 it gives me this output:
123, Ver
Is there a way to get just the number after Age=?
It's probably just a bad awk field parameter; I tried some other approaches with awk, but this one gave the closest result. Thank you for any help.
Edit: I forgot to mention that the number after Age= is dynamic; it increases by 1 every day.
I'm not sure about the echo "\n" part but I think that this should do what you want:
nc myserver.com 1234 | awk -F "," '{ split($3, a, /=/); print a[2] }'
Instead of splitting into fields on the =, I've done so on the ,. The third field is then split into the array a on the = and the second half is printed.
I also removed the ( ) around the invocation of awk, which was creating a subshell unnecessarily.
If you're confident that the response will never contain = or , in other places, you could simplify the awk expression further:
awk -F'[=,]' '{ print $5 }'
The bracket expression allows fields to be split on either = or , so the part you're interested in becomes the fifth field.
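A quick way to double-check the field numbering (echo stands in here for the nc response shown in the question):
$ echo 'Server, Name=MyServer, Age=123, Ver=1.23' | awk -F'[=,]' '{ print $5 }'
123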
echo "\n" | nc myserver.com 1234 | awk -F "," '{print substr($3,6)}'
You can run the awk command twice:
awk -F "=" '{print $3}'|awk -F "," '{print $1}'
You can also use the cut command:
cut -d "=" -f 3|cut -d "," -f 1
I'm currently using awk to replicate the function uniq -c with commas as delimiters.
This gives correct output:
$ cut --delimiter=, -s -f2 wordlist.csv | awk '{ cnts[$0] += 1 } END { for (v in cnts) print cnts[v], v}' OFS="," | head
2,laecherlichen
111,doctrine
1,cremonas
1,embedding
1,conincks
2,similiter
1,mitgesellen
1,hysnelement
1,geringem
1,aquarian
However, if I reverse the print arguments from print cnts[v], v to print v, cnts[v], I get messed-up output:
$ cut --delimiter=, -s -f2 wordlist.csv | awk '{ cnts[$0] += 1 } END { for (v in cnts) print v, cnts[v]}' OFS="," | head
,2echerlichen
,111rine
,1emonas
,1bedding
,1nincks
,2militer
,1tgesellen
,1snelement
,1ringem
,1uarian
I'm confused by this output, because I'm expecting something like word,1 as output. What is the problem?
Most likely you have DOS carriage return characters, i.e. \r before the end-of-line \n. You can use the RS variable in awk to handle this:
cut --delimiter=, -s -f2 wordlist.csv | awk -v RS='\r|\n' '{
cnts[$0] += 1 } END { for (v in cnts) print cnts[v], v}' OFS="," | head
However, if you show your csv file, I believe even cut and head could be dropped from the above command.
PS: Thanks to @Bammar, you can also run:
dos2unix file.csv
to convert your csv file to unix compatible file.
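To see where the mangled display comes from (and that the RS fix cures it), here is a minimal reproduction with a hypothetical one-word input; the trailing \r sends the cursor back to column 0 before ,111 is printed, and the regex RS needs gawk, as above:
$ printf 'doctrine\r\n' | awk '{ print $0 ",111" }'
,111rine
$ printf 'doctrine\r\n' | awk -v RS='\r|\n' 'NF { print $0 ",111" }'
doctrine,111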
I am trying to run a shell command from within awk for each line of a file, and the shell command needs one input argument. I tried to use system(), but it didn't recognize the input argument.
Each line of this file is the path of a file, and I want to run a command to process that file. So, as a simple example, I want to use the 'wc' command for each line and pass $1 to wc.
awk '{system("wc $1")}' myfile
You are close. You have to concatenate the command string with the awk variable:
awk '{system("wc "$1)}' myfile
You cannot grab the output of an awk system() call; you can only get the exit status. Use the getline/pipe or getline/variable/pipe constructs:
awk '{
cmd = "your_command " $1
while (cmd | getline line) {
do_something_with(line)
}
close(cmd)
}' file
FYI here's how to use awk to process files whose names are stored in a file (providing wc-like functionality in this example):
gawk '
NR==FNR { ARGV[ARGC++]=$0; next }
{ nW+=NF; nC+=(length($0) + 1) }
ENDFILE { print FILENAME, FNR, nW, nC; nW=nC=0 }
' file
The above uses GNU awk for ENDFILE. With other awks just store the values in an array and print in a loop in the END section.
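A rough POSIX-awk version of that idea, accumulating per-file totals in arrays and printing them at the end (a sketch, not tested against every awk):
awk '
NR==FNR { ARGV[ARGC++]=$0; next }
{ nL[FILENAME]=FNR; nW[FILENAME]+=NF; nC[FILENAME]+=length($0)+1 }
END { for (i=2; i<ARGC; i++) { f=ARGV[i]; print f, nL[f], nW[f], nC[f] } }
' file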
I would suggest another solution:
awk '{print $1}' myfile | xargs wc
The difference is that it executes wc once with multiple arguments. This often works well (for example, with the kill command).
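If the command really does have to run once per value, xargs can still be used; something like:
awk '{print $1}' myfile | xargs -n 1 wc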
Or use the pipe | as in bash, then retrieve the output in a variable with awk's getline, like this:
zcat /var/log/fail2ban.log* | gawk '/.*Ban.*/ {print $7};' | sort | uniq -c | sort | gawk '{ "geoiplookup " $2 "| cut -f2 -d: " | getline geoip; print $2 "\t\t" $1 " " geoip}'
That line will print all the banned IPs from your server along with their origin (country) using the geoip-bin package.
The last part of that one-liner is the one that concerns us:
gawk '{ "geoiplookup " $2 "| cut -f2 -d: " | getline geoip; print $2 "\t\t" $1 " " geoip}'
It simply says: run the command "geoiplookup 182.193.192.4 | cut -f2 -d:" ($2 gets substituted, as you may guess) and put the result of that command in geoip (that's the | getline geoip bit). Next, print the IP and the count, plus whatever is in the geoip variable.
The complete example and the results can be found in an article I wrote.
I am trying to make a single line ssh call from a ruby script. My script takes a hostname, and then sets out to return the hostname's machine info.
return_value = %x{ ssh #{hostname} "#{number_of_users}; #{number_of_processes};
#{number_of_processes_running}; #{number_of_processes_sleeping}; "}
The variables are formatted like this:
number_of_users = %Q(users | wc -w | cat | awk '{print "Number of Users: "\$1}')
number_of_processes = %Q(ps -el | awk '{print $2}' | wc -l | awk '{print "Number of Processes: "$1}')
I have tried %q, %Q, and just plain "", and I cannot get awk to print anything before the output. I either get this error (if I include the colon):
awk: line 1: syntax error at or near :
or, if I don't include the backslash in front of $1, I just get empty output for that line. Is there any solution for this? I thought it might be because I was using %q, but it even happens with just double quotes.
Use backticks to capture the output of the command and return the output as a string:
number_of_users = `users | wc -w | cat | awk '{print "Number of Users:", $1}'`
puts number_of_users
Results on my system:
48
But you can improve your pipeline:
users | awk '{ print "Number of Users:", NF }'
ps -e | awk 'END { print "Number of Processes:", NR }'
So the solution to this problem is:
%q(users | wc -w | awk '{print \"Number of Users: \"\$1}')
Where you have to use %q, not %, not %Q, and not ""
You must backslash double quotes and the dollar sign in front of any awk variables
If somebody could improve upon this answer by explaining why, that would be most appreciated
Though, as Steve pointed out, I could have improved my code using users | awk '{ print \"Number of Users:\", NF }'
In which case there is no need to backslash the NF.
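As a possible hint at the why (a shell-only sketch, separate from the Ruby layer): inside a double-quoted command string, the shell expands $1 itself before awk ever sees it, whereas \$ survives as a literal dollar sign:
$ set -- LOCALVALUE
$ echo "awk '{print \$1}'"
awk '{print $1}'
$ echo "awk '{print $1}'"
awk '{print LOCALVALUE}'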