I found the following command in another discussion:
top -b -n2 -p 1 | fgrep "Cpu(s)" | tail -1 | awk -F'id,' -v prefix="$prefix" '{ split($1, vs, ","); v=vs[length(vs)]; sub("%", "", v); printf "%s%.1f%%\n", prefix, 100 - v }'
The result it prints is something like 2.9%.
I would like to remove the % from the output. Which part is printing the %?
Have you tried
top -b -n2 -p 1 | fgrep "Cpu(s)" | tail -1 | awk -F'id,' -v prefix="$prefix" '{ split($1, vs, ","); v=vs[length(vs)]; sub("%", "", v); printf "%s%.1f\n", prefix, 100 - v }'
I've removed the trailing %% from the printf format string; in printf, %% is the escape for a literal % character.
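A minimal sketch of the difference, with a made-up value instead of the live top output:

```shell
# %% in a printf format string emits a literal %, %.1f formats the number
awk 'BEGIN { printf "%.1f%%\n", 2.9 }'   # prints: 2.9%
awk 'BEGIN { printf "%.1f\n",   2.9 }'   # prints: 2.9
```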
I have a file containing lines like the one below; there are multiple rows:
test1| 1234 | test2 | test3
Extract the second column (1234) and run a command, feeding that value as input.
Let's say we get X as the output of the command.
Print the output as below for each of the lines:
test1 | X | test2 | test3
I'd prefer a one-liner, but I'm open to ideas.
I am able to extract the string using awk, but I am not sure how to preserve the rest of the line and substitute the command's output into it. Below is what I tested:
cat file.txt | awk -F '|' '{newVar=system("command "$2); print newVar $4}'
Sample command output, from which we extract the "name":
openstack show 36a6c06e-5e97-4a53-bb42
+----------------------------+-----------------------------------+
| Property | Value |
+----------------------------+-----------------------------------+
| id | 36a6c06e-5e97-4a53-bb42 |
| name | testVM1 |
+----------------------------+-----------------------------------+
Perl to the rescue!
perl -lF'/\|/' -ne 'chomp( $F[1] = qx{ command $F[1] }); print join "|", @F' < file.txt
-n reads the input line by line
-l removes newlines from input and adds them to printed output
-F specifies how to split each input line into the @F array
$F[1] corresponds to the second column; we replace it with the output of the command
chomp removes the trailing newline from the command output
join glues the array back to one line
Using awk:
awk -F' *[|] *' -v OFS=' | ' '{cmd = "command " $2; cmd | getline $2; close(cmd)}1' file.txt
e.g.
$ awk -F' *[|] *' -v OFS=' | ' '{cmd = "date -d @" $2; cmd | getline $2; close(cmd)}1' file.txt
test1 | Thu 01 Jan 1970 05:50:34 AM IST | test2 | test3
I used the field separator ' *[|] *' so it matches the pipe together with its surrounding spaces (the | has to be bracketed, since the field separator is treated as a regular expression), and OFS puts the delimiters back when awk rebuilds the line. You can adjust those based on your actual input.
This finally did the trick:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
if ( line ~ /name/ ) {
split(line,flds,/ *[|] */)
$2 = flds[3]
break
}
}
close(cmd)
print
}' file
If command can take the whole list of values at once and generate the converted list as output (e.g. tr 'a-z' 'A-Z'), then you'd want to do something like this to avoid spawning a shell once per input line (which is extremely slow):
awk -F' *[|] *' '{print $2}' file |
command |
awk -F' *[|] *' -v OFS=' | ' 'NR==FNR{a[FNR]=$0; next} {$2=a[FNR]} 1' - file
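For instance, with tr 'a-z' 'A-Z' standing in for command and a couple of made-up sample lines:

```shell
printf 'foo| bar | baz\nqux| quux | corge\n' > /tmp/sample.txt

# pass 1 extracts column 2, the filter converts the whole batch at once,
# pass 2 pastes the converted values back into the original lines
awk -F' *[|] *' '{print $2}' /tmp/sample.txt |
tr 'a-z' 'A-Z' |
awk -F' *[|] *' -v OFS=' | ' 'NR==FNR{a[FNR]=$0; next} {$2=a[FNR]} 1' - /tmp/sample.txt
# foo | BAR | baz
# qux | QUUX | corge
```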
Otherwise, if command needs to be called with one value at a time (e.g. echo), or you just don't care about execution speed, then you'd do:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "command \047" $2 "\047"
if ( (cmd | getline line) > 0 ) {
$2 = line
}
close(cmd)
print
}' file
The \047s will produce single quotes around $2 when it's passed to command and so shield it from shell interpretation (see https://mywiki.wooledge.org/Quotes) and the test on the result of getline will protect you from silently overwriting the current $2 with the output of an earlier command execution in the event of a failure (see http://awk.freeshell.org/AllAboutGetline). The close() ensures that you don't end up with a "too many open files" error or other cryptic problem if the pipe isn't being closed properly, e.g. if command is generating multiple lines and you're just reading the first one.
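The pipe-reuse pitfall is easy to demonstrate: awk keys pipes by the exact command string, so without close() a second getline on the same string resumes the old pipe instead of re-running the command. A small sketch:

```shell
awk 'BEGIN {
    cmd = "seq 3"
    cmd | getline line; print line   # 1
    cmd | getline line; print line   # 2 - same pipe, next line of output
    close(cmd)
    cmd | getline line; print line   # 1 - a fresh execution of seq
}'
```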
Given your comment below, if you're going with the 2nd approach above then you'd write something like:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
split(line,flds)
if ( flds[2] == "name" ) {
$2 = flds[3]
break
}
}
close(cmd)
print
}' file
I wrote a script that takes a CSV file and replaces the third column with the hash of the second column concatenated with some string (a key).
After 256 rows, I get this error:
awk: cmd. line:3: (FILENAME=C:/hanatest/test.csv FNR=257) fatal:
cannot create child process for `echo -n
E5360712819A7EF1584E2FDA06287379FF5CC3E0A5M7J6PiQMaSBut52ZQhVlS4 |
openssl ripemd160 | cut -f2 -d" "' (fork: Resource temporarily
unavailable)
I changed the CSV file and I always get the same error after 256 rows.
Here is my code:
awk -F "," -v env_var="$key" '{
tmp="echo -n "$2env_var" | openssl ripemd160 | cut -f2 -d\" \""
tmp | getline cksum
$3=toupper(cksum)
print
}' //test/source.csv > //ziel.csv
Can you please help me?
Here is my sample input:
25,XXXXXXXXXXXXXXXXXX,?
44,YYYYYYYYYYYYYYYYYY,?
84,ZZZZZZZZZZZZZZZZZZ,?
and here is my expected output:
25,XXXXXXXXXXXXXXXXXX,301E2A8BF32A7046F65E48DF32CF933F6CAEC529
44,YYYYYYYYYYYYYYYYYY,301E2A8BF32A7046F65E48EF32CF933F6CAEC529
84,ZZZZZZZZZZZZZZZZZZ,301E2A8BF32A7046F65E48EF33CF933F6CAEC529
Thanks in advance
Let's make your code more robust first:
awk -F "," -v env_var="$key" '{
tmp="echo -n \047" $2 env_var "\047 | openssl ripemd160 | cut -f2 -d\047 \047"
if ( (tmp | getline cksum) > 0 ) {
$3 = toupper(cksum)
}
close(tmp)
print
}' /test/source.csv > /ziel.csv
Now - do you still have a problem? If you're considering using getline, make sure to read and fully understand the correct uses and all of the caveats discussed at http://awk.freeshell.org/AllAboutGetline.
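As a sanity check of the pattern (not your exact setup: sha1sum stands in for the openssl ripemd160 pipeline, and the key is invented), the robust version behaves like this on one of your sample rows:

```shell
printf '25,XXXXXXXXXXXXXXXXXX,?\n' |
awk -F, -v OFS=, -v env_var="mykey" '{
    # printf avoids the non-portable "echo -n"; \047 single-quotes the value
    tmp = "printf \047%s\047 \047" $2 env_var "\047 | sha1sum | cut -d\047 \047 -f1"
    if ( (tmp | getline cksum) > 0 ) {
        $3 = toupper(cksum)
    }
    close(tmp)
    print   # 25,XXXXXXXXXXXXXXXXXX,<40-char uppercase hash>
}'
```

Because each row's pipe is closed before the next one is opened, this never accumulates open file descriptors, no matter how many rows the file has.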
I'm currently using awk to replicate the function uniq -c with commas as delimiters.
This gives correct output:
$ cut --delimiter=, -s -f2 wordlist.csv | awk '{ cnts[$0] += 1 } END { for (v in cnts) print cnts[v], v}' OFS="," | head
2,laecherlichen
111,doctrine
1,cremonas
1,embedding
1,conincks
2,similiter
1,mitgesellen
1,hysnelement
1,geringem
1,aquarian
However, if I swap print cnts[v], v in the awk command to print v, cnts[v], I get messed-up output:
$ cut --delimiter=, -s -f2 wordlist.csv | awk '{ cnts[$0] += 1 } END { for (v in cnts) print v, cnts[v]}' OFS="," | head
,2echerlichen
,111rine
,1emonas
,1bedding
,1nincks
,2militer
,1tgesellen
,1snelement
,1ringem
,1uarian
I'm confused by this output, because I was expecting something like word,1. What is the problem?
Most likely you have DOS carriage returns, i.e. \r before the end-of-line \n. Each array key then ends with \r, and when it is printed, the carriage return moves the cursor back to the start of the line, so the count overwrites the first characters of the word. You can use the RS variable in awk to strip the \r:
cut --delimiter=, -s -f2 wordlist.csv | awk -v RS='\r\n|\n' '{
cnts[$0] += 1 } END { for (v in cnts) print cnts[v], v}' OFS="," | head
However, if you show your CSV file, I believe even the cut and head steps could be folded into the awk command.
PS: Thanks to @Bammar, you can also run:
dos2unix file.csv
to convert your CSV file to a Unix-compatible file.
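You can reproduce both the symptom's cause and the fix with a tiny sketch (a regex RS needs gawk or mawk; the word here is made up):

```shell
# cat -A makes the stray carriage return visible as ^M
printf 'doctrine\r\n' | cat -A
# doctrine^M$

# splitting records on \r\n (or plain \n) strips the \r from the keys
printf 'doctrine\r\ndoctrine\r\n' |
awk -v RS='\r\n|\n' -v OFS=',' '{ c[$0]++ } END { for (v in c) print v, c[v] }'
# doctrine,2
```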
I have a maillog file with entries like the ones below:
relay=mx3.xyz.com
relay=mx3.xyz.com
relay=mx1.xyz.com
relay=mx1.xyz.com
relay=mx2.xyz.com
relay=home.xyz.abc.com
relay=127.0.0.1
I want to count all relays except 127.0.0.1.
The output should look like this:
total relay= 6
mx3.xyz.com = 2
mx1.xyz.com = 2
mx2.xyz.com = 1
home.xyz.abc.com = 1
If you don't mind using awk:
awk -F= '$2 != "127.0.0.1" && /relay/ {count[$2]++; total++}
END { print "total relay = "total;
for (k in count) { print k" = " count[k]}
}' maillog
And you could also make do with just grep, cut, sort and uniq, though you won't get your total this way (note the added sort: uniq -c only counts adjacent duplicates):
grep relay maillog | cut -d= -f2 | grep -v 127.0.0.1 | sort | uniq -c
And if you don't hate perl:
perl -ne '/relay=(.*)/ and $1 ne "127.0.0.1" and ++$t and $h{$1}++;
END {print "total = $t\n";
print "$_ = $h{$_}\n" foreach keys %h;
}' maillog
here you go:
awk -F= '$2!="127.0.0.1"&&$2{t++;a[$2]++} END{print "total relay="t; for(x in a)print x"="a[x]}' yourfile
the output would be:
total relay=6
mx2.xyz.com=1
mx1.xyz.com=2
mx3.xyz.com=2
home.xyz.abc.com=1
I would definitely use awk for this (@Faiz's answer). However, I worked out this excruciating pipeline:
cut -d= -f2 filename | grep -v -e '^[[:space:]]*$' -e 127.0.0.1 | sort | uniq -c | tee >(echo "$(bc <<< $(sed -e 's#[[:alpha:]].\+$#+#' -e '$a0')) total") | sed 's/^ *\([0-9]\+\) \(.*\)/\2 = \1/' | tac
outputs
total = 6
mx3.xyz.com = 2
mx2.xyz.com = 1
mx1.xyz.com = 2
home.xyz.abc.com = 1
Please do not upvote this answer ;)
Is there an easy way to print the IP_Address:port#? Because as soon as it gets to the sed command, the port number is stripped.
Input file example:
Apr 6 14:20:41 TCP 178.255.83.1:80 in
Preferred output, like this:
Apr 6 14:20:41 TCP 178.255.83.1:80 in United Kingdom
egrep -w 'TCP|UDP' $Denied_IPs |
sed 's/:[^:]* in/ in/; s/:[^:]* out/ out/' |
awk '{cmd="echo "$5" | code | fgrep 'Country:' | cut -c 16-43";
cmd | getline rslt;
close(cmd);
print $1" "$2" "$3" "$4" "$5" "$6" "rslt}' >> "$IP2COUNTRY"
The sed command is stripping the port explicitly. Given that that is all the sed command is doing, simply remove it from the expression.
That's a rather suboptimal implementation, by the way. Especially after we remove the sed, the egrep can be folded into the awk:
awk '/ (TCP|UDP) / {
split($5, addr, /:/);
cmd = "echo " addr[1] " | code | fgrep Country: | cut -c 16-43";
cmd | getline rslt;
close(cmd);
print $1, $2, $3, $4, $5, $6, rslt
}' < "$Denied_IPs" >> "$IP2COUNTRY"
and I can't help but think that the invocation of code within awk can be optimized a bit.
(I also removed the single quotes around 'Country:', which were doing nothing useful — and if they had been needed, they would in fact have broken the script because the whole thing is already wrapped in single quotes.)
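If you do need a literal single quote inside such a command string, the \047 device used elsewhere on this page is the usual workaround; a minimal sketch:

```shell
# \047 is the octal escape for ' in an awk string, so the program can
# build a shell command containing single quotes without terminating
# the outer single-quoted awk script
awk 'BEGIN { print "fgrep \047Country:\047 file" }'
# fgrep 'Country:' file
```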