telegraf input.exec for CPU temperature - telegraf-inputs-plugin

[[inputs.exec]]
commands = ["sensors | grep Core | awk '{print $1 \" \" substr($2, 1, length($2)-1) \"=\" substr($3, 2, length($3)-3) }'"]
timeout = "3s"
interval = "34s"
name_suffix = "_CPU_temp"
data_format = "influx"
[inputs.exec.tags]
bucket = "system"
The command
sensors | grep Core | awk '{print $1 " " substr($2, 1, length($2)-1) "=" substr($3, 2, length($3)-3) }'
works fine in bash, however
E! [inputs.exec] Error in plugin: exec: exit status 1 for command 'sensors | grep Core | awk '{print $1 " " substr($2, 1, length($2)-1) "=" substr($3, 2, length($3)-3) }'': Parse error in chip name|'`
Gut feeling is that it's something to do with quoting the quotes, or maybe the dollar signs, or both?
This works on the command line
c="sensors | grep Core | awk '{print \$1 substr(\$2, 1, length(\$2)-1) \"=\" substr(\$3, 2, length(\$3)-3) }'";
eval $c
but not in telegraf, which rejects the config with:
invalid TOML syntax
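A likely cause, though not confirmed in this thread: telegraf's exec input runs the command directly rather than through a shell, so the pipe and the awk quoting are never interpreted. A common workaround is to move the pipeline into a small wrapper script and point `commands` at that; the script name below is made up for illustration:

```shell
# Hypothetical wrapper: the shell line is the exact pipeline that works
# interactively, so no TOML escaping is needed at all.
cat > ./cpu_temp.sh <<'EOF'
#!/bin/sh
sensors | grep Core | awk '{print $1 " " substr($2, 1, length($2)-1) "=" substr($3, 2, length($3)-3) }'
EOF
chmod +x ./cpu_temp.sh

# telegraf.conf would then reference the script instead of the pipeline:
#   commands = ["/path/to/cpu_temp.sh"]
```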

Related

Extracting a string from a line, giving it as input to a command, then outputting the entire line with the string replaced

I have a file with multiple rows like the one below:
test1| 1234 | test2 | test3
I want to extract the second column (1234) and run a command with that value as input. Let's say we get X as the command's output; I then want to print each line with the column replaced:
test1 | X | test2 | test3
I'd prefer a one-liner, but I'm open to ideas.
I am able to extract the string using awk, but I am not sure how to preserve the rest of the line and substitute the result back into the output. Below is what I tested:
cat file.txt | awk -F '|' '{newVar=system("command "$2); print newVar $4}'
Sample command output, where we extract the "name"
openstack show 36a6c06e-5e97-4a53-bb42
+----------------------------+-----------------------------------+
| Property | Value |
+----------------------------+-----------------------------------+
| id | 36a6c06e-5e97-4a53-bb42 |
| name | testVM1 |
+----------------------------+-----------------------------------+
Perl to the rescue!
perl -lF'/\|/' -ne 'chomp( $F[1] = qx{ command $F[1] }); print join "|", @F' < file.txt
-n reads the input line by line
-l removes newlines from input and adds them to prints
-F specifies how to split each input line into the @F array
$F[1] corresponds to the second column, we replace it with the output of the command
chomp removes the trailing newline from the command output
join glues the array back to one line
Using awk:
awk -F ' *| *' '{("command "$2) | getline $2}1' file.txt
e.g.
$ awk -F ' *| *' '{("date -d @"$2) | getline $2}1' file.txt
test1| Thu 01 Jan 1970 05:50:34 AM IST | test2 | test3
I changed the field separator from | to *| * to accommodate the spaces surrounding the fields. You can remove those based on your actual input.
This finally did the trick:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
if ( line ~ /name/ ) {
split(line,flds,/ *[|] */)
$2 = flds[3]
break
}
}
close(cmd)
print
}' file
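To see the loop in action without an OpenStack deployment, a stub script can stand in for `openstack show` (the stub and its table output are invented for illustration, modeled on the sample in the question):

```shell
# Stub that mimics the property table printed by `openstack show <id>`
cat > ./stub.sh <<'EOF'
#!/bin/sh
printf '| id   | 36a6c06e |\n'
printf '| name | testVM1 |\n'
EOF
chmod +x ./stub.sh
printf 'test1| 1234 | test2 | test3\n' > file

awk -F' *[|] *' -v OFS=' | ' '{
  cmd = "./stub.sh \047" $2 "\047"
  while ( (cmd | getline line) > 0 ) {   # read the stub table line by line
    if ( line ~ /name/ ) {
      split(line, flds, / *[|] */)       # flds[2]="name", flds[3]=the value
      $2 = flds[3]
      break
    }
  }
  close(cmd)
  print
}' file
# prints: test1 | testVM1 | test2 | test3
```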
If command can take the whole list of values once and generate the converted list as output (e.g. tr 'a-z' 'A-Z') then you'd want to do something like this to avoid spawning a shell once per input line (which is extremely slow):
awk -F' *[|] *' '{print $2}' file |
command |
awk -F' *[|] *' -v OFS=' | ' 'NR==FNR{a[FNR]=$0; next} {$2=a[FNR]} 1' - file
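For example, with `tr 'a-z' 'A-Z'` as the list-capable command and a one-line sample file (file name invented):

```shell
printf 'test1| abc | test2 | test3\n' > file
# pass 1 extracts column 2, tr converts the whole list at once,
# pass 2 splices the converted values back in by line number
awk -F' *[|] *' '{print $2}' file |
tr 'a-z' 'A-Z' |
awk -F' *[|] *' -v OFS=' | ' 'NR==FNR{a[FNR]=$0; next} {$2=a[FNR]} 1' - file
# prints: test1 | ABC | test2 | test3
```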
otherwise if command needs to be called with one value at a time (e.g. echo) or you just don't care about execution speed then you'd do:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "command \047" $2 "\047"
if ( (cmd | getline line) > 0 ) {
$2 = line
}
close(cmd)
print
}' file
The \047s will produce single quotes around $2 when it's passed to command and so shield it from shell interpretation (see https://mywiki.wooledge.org/Quotes). The test on the result of getline protects you from silently overwriting the current $2 with the output of an earlier command execution in the event of a failure (see http://awk.freeshell.org/AllAboutGetline). The close() ensures that you don't end up with a "too many open files" error or other cryptic problem if the pipe isn't being closed properly, e.g. if command is generating multiple lines and you're just reading the first one.
Given your comment below, if you're going with the 2nd approach above then you'd write something like:
awk -F' *[|] *' -v OFS=' | ' '{
cmd = "openstack show \047" $2 "\047"
while ( (cmd | getline line) > 0 ) {
split(line,flds,/ *[|] */)
if ( flds[2] == "name" ) {
$2 = flds[3]
break
}
}
close(cmd)
print
}' file

"Resource temporarily unavailable" when using Awk and Fork

I wrote a script that takes a CSV file and replaces the third column with the hash of the second column combined with some string (key).
After 256 rows, I get this error:
awk: cmd. line:3: (FILENAME=C:/hanatest/test.csv FNR=257) fatal:
cannot create child process for `echo -n
E5360712819A7EF1584E2FDA06287379FF5CC3E0A5M7J6PiQMaSBut52ZQhVlS4 |
openssl ripemd160 | cut -f2 -d" "' (fork: Resource temporarily
unavailable)
I changed the CSV file and always got the same error after 256 rows.
Here is my code:
awk -F "," -v env_var="$key" '{
tmp="echo -n "$2env_var" | openssl ripemd160 | cut -f2 -d\" \""
tmp | getline cksum
$3=toupper(cksum)
print
}' //test/source.csv > //ziel.csv
Can you please help me?
Here my sample input:
25,XXXXXXXXXXXXXXXXXX,?
44,YYYYYYYYYYYYYYYYYY,?
84,ZZZZZZZZZZZZZZZZZZ,?
and here my expected output:
25,XXXXXXXXXXXXXXXXXX,301E2A8BF32A7046F65E48DF32CF933F6CAEC529
44,YYYYYYYYYYYYYYYYYY,301E2A8BF32A7046F65E48EF32CF933F6CAEC529
84,ZZZZZZZZZZZZZZZZZZ,301E2A8BF32A7046F65E48EF33CF933F6CAEC529
Thanks in advance
Let's make your code more robust first:
awk -F "," -v env_var="$key" '{
tmp="echo -n \047" $2 env_var "\047 | openssl ripemd160 | cut -f2 -d\047 \047"
if ( (tmp | getline cksum) > 0 ) {
$3 = toupper(cksum)
}
close(tmp)
print
}' /test/source.csv > /ziel.csv
Now - do you still have a problem? If you're considering using getline make sure to read and fully understand the correct uses and all of the caveats discussed at http://awk.freeshell.org/AllAboutGetline.
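The root cause, hedged since the thread doesn't state it outright: every `tmp | getline` without a matching `close(tmp)` leaves that child's pipe open, and once the per-process limit (here apparently 256) is reached, fork fails. A minimal runnable sketch of the fixed pattern, with `md5sum` (assumed available, as on most Linux systems) standing in for the openssl pipeline and made-up sample data:

```shell
printf '25,XXXX,?\n44,YYYY,?\n' > source.csv
awk -F, -v OFS=, -v env_var="KEY" '{
  # \047 = single quote; the close() below releases each child promptly
  tmp = "printf %s \047" $2 env_var "\047 | md5sum | cut -f1 -d\047 \047"
  if ( (tmp | getline cksum) > 0 ) { $3 = toupper(cksum) }
  close(tmp)
  print
}' source.csv
# each output row ends in a 32-char uppercase hex digest, e.g. 25,XXXX,<hash>
```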

Linux: extract specific words using awk, grep or sed

Looking to extract specific words from each line:
Nov 2 11:25:51 imau03ftc CSCOacs_TACACS_Accounting 0687979272 1 0 2016-11-02 11:25:51.250 +13:00 0311976914 3300 NOTICE Tacacs-Accounting: TACACS+ Accounting with Command, ACSVersion=acs-5.6.0.22-B.225, ConfigVersionId=145, Device IP Address=10.107.32.53, CmdSet=[ CmdAV=show controllers <cr> ], RequestLatency=0, Type=Accounting, Privilege-Level=15, Service=Login, User=nc-rancid, Port=tty1, Remote-Address=172.26.200.204, Authen-Method=TacacsPlus, AVPair=task_id=8280, AVPair=timezone=NZDT, AVPair=start_time=1478039151, AVPair=priv-lvl=1, AcctRequest-Flags=Stop, Service-Argument=shell, AcsSessionID=imau03ftc/262636280/336371030, SelectedAccessService=Default Device Admin, Step=13006 , Step=15008 , Step=15004 , Step=15012 , Step=13035 , NetworkDeviceName=CASWNTHS133, NetworkDeviceGroups=All Devices:All Devices, NetworkDeviceGroups=Device Type:All Device Types:Corporate, NetworkDeviceGroups=Location:All Locations, Response={Type=Accounting; AcctReply-Status=Success; }
Looking to extract
Nov 2 11:25:51 show controllers User=nc-rancid NetworkDeviceName=CASWNTHS133
can use awk,grep or sed
I have tried a few combinations, like
sudo tail -n 20 /var/log/tacacs/imau03ftc-accounting.log | grep -oP 'User=\K.*' & 'NetworkDeviceName=\K.*'
sudo tail -n 20 /var/log/tacacs/imau03ftc-accounting.log | sudo awk -F" " '{ print $1 " " $3 " " $9 " " $28}'
I can add a few more lines, but most of them have the same format.
Thanks
Try to run this:
sudo tail -n 20 /var/log/tacacs/imau03ftc-accounting.log > tmpfile
Then execute this script:
#!/bin/sh
while read i
do
str=""
str="$(echo $i |awk '{print $1,$2,$3}')"
str="$str $(echo $i |awk 'match($0, /CmdAV=([^<]+)/) { print substr( $0, RSTART,RLENGTH ) }'|awk -F "=" '{print $2}')"
str="$str $(echo $i |awk 'match($0, /User=([^,]+)/) { print substr( $0, RSTART, RLENGTH ) }')"
str="$str $(echo $i |awk 'match($0, /NetworkDeviceName=([^,]+)/) { print substr( $0, RSTART, RLENGTH ) }')"
echo $str
done < tmpfile
Output:
Nov 2 11:25:51 show controllers User=nc-rancid NetworkDeviceName=CASWNTHS133
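The four awk calls per input line can also be collapsed into a single pass. This is a sketch; the sample line below is a shortened stand-in containing the same tokens as the real log:

```shell
printf '%s\n' 'Nov 2 11:25:51 host tag CmdSet=[ CmdAV=show controllers <cr> ], User=nc-rancid, NetworkDeviceName=CASWNTHS133, Step=13006' > tmpfile
awk '{
  ts = $1 " " $2 " " $3                                 # timestamp fields
  if (match($0, /CmdAV=[^<]+/)) {                       # value runs up to "<cr>"
    cmd = substr($0, RSTART+6, RLENGTH-6)               # skip the "CmdAV=" prefix
    sub(/ +$/, "", cmd)                                 # trim trailing space
  }
  if (match($0, /User=[^,]+/))              user = substr($0, RSTART, RLENGTH)
  if (match($0, /NetworkDeviceName=[^,]+/)) dev  = substr($0, RSTART, RLENGTH)
  print ts, cmd, user, dev
}' tmpfile
# prints: Nov 2 11:25:51 show controllers User=nc-rancid NetworkDeviceName=CASWNTHS133
```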

capture-specific-columns and mask the column

I am trying to write a script that captures and masks a specific column. I need the 4th column both in clear text and masked in the output file, and I am not sure how to mask that same column.
Please help me rewrite the command below, or suggest a new one.
input.txt
---------
AA | BB | CC | 123456
output.txt
---------
BB | 123456 | 12xx56
Script I wrote
cat input.txt | nawk -F '|' '{print $2 "|" $4 "|" $4}' > output.txt
nawk -F '|' '{print $2 "|" $4 "|" substr($4, 1,3) "xx" substr($4,6,2)}' input.txt > output.txt
output
BB | 123456| 12xx56
Assuming you don't really need the leading and trailing spaces, I would make it
nawk -F '|' '{gsub(/ */, "", $0);print $2 "|" $4 "|" substr($4, 1,2) "xx" substr($4,5,2)}' input.txt > output.txt
cat output.txt
BB|123456|12xx56
final solution
echo "AA | BB | CC | 12345678" \
| awk -F '|' '{gsub(/ */, "", $0)
#dbg print "length$4=" (length($4)-4)
masking=sprintf("%"(length($4)-4)"s", " ") ; gsub(/ /, "x", masking)
print $2 "|" $4 "|" substr($4, 1,2) masking substr($4,(length($4)-1),2)
}'
BB|12345678|12xxxx78
I'm using echo "..." to simplify testing. You can take that out, add input.txt > output.txt at the end of the line, and it will work as before.
I've added the (length($4)-1) to make the position of the 2nd-to-last character of $4 dynamic, based on the length of whatever value is in $4.
IHTH
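The same length-parameterized masking idea can be checked quickly in one pipe (the sample value is invented):

```shell
printf 'AA | BB | CC | 12345678\n' |
awk -F' *[|] *' -v OFS='|' '{
  n = length($4)
  mid = sprintf("%" (n-4) "s", "")   # (n-4) spaces...
  gsub(/ /, "x", mid)                # ...turned into x characters
  print $2, $4, substr($4, 1, 2) mid substr($4, n-1, 2)
}'
# prints: BB|12345678|12xxxx78
```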

How do I count grep results for a string, but specify exclusions?

I have a maillog file with entries like these:
relay=mx3.xyz.com
relay=mx3.xyz.com
relay=mx1.xyz.com
relay=mx1.xyz.com
relay=mx2.xyz.com
relay=home.xyz.abc.com
relay=127.0.0.1
I want to count all relays except 127.0.0.1.
The output should look like this:
total relay= 6
mx3.xyz.com = 2
mx1.xyz.com = 2
mx2.xyz.com = 1
home.xyz.abc.com = 1
If you don't mind using awk:
awk -F= '$2 != "127.0.0.1" && /relay/ {count[$2]++; total++}
END { print "total relay = "total;
for (k in count) { print k" = " count[k]}
}' maillog
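Reproducing with the sample data from the question (note the `for (k in count)` loop prints keys in no particular order):

```shell
cat > maillog <<'EOF'
relay=mx3.xyz.com
relay=mx3.xyz.com
relay=mx1.xyz.com
relay=mx1.xyz.com
relay=mx2.xyz.com
relay=home.xyz.abc.com
relay=127.0.0.1
EOF
# count every relay= value except 127.0.0.1, plus a grand total
awk -F= '$2 != "127.0.0.1" && /relay/ {count[$2]++; total++}
     END { print "total relay = " total
           for (k in count) print k " = " count[k] }' maillog
```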
And you could also make do with just uniq and grep, though you won't get your total this way (note that uniq -c only counts adjacent duplicates, so add a sort before it if identical relays aren't grouped together):
grep relay maillog | cut -d= -f2 | grep -v 127.0.0.1 | uniq -c
And if you don't hate perl:
perl -ne '/relay=(.*)/ and $1 ne "127.0.0.1" and ++$t and $h{$1}++;
END {print "total = $t\n";
print "$_ = $h{$_}\n" foreach keys %h;
}' maillog
here you go:
awk -F= '$2!="127.0.0.1"&&$2{t++;a[$2]++} END{print "total relay="t; for(x in a)print x"="a[x]}' yourfile
the output would be:
total relay=6
mx2.xyz.com=1
mx1.xyz.com=2
mx3.xyz.com=2
home.xyz.abc.com=1
I would definitely use awk for this (@Faiz's answer). However, I worked out this excruciating pipeline
cut -d= -f2 filename | grep -v -e '^[[:space:]]*$' -e 127.0.0.1 | sort | uniq -c | tee >(echo "$(bc <<< $(sed -e 's#[[:alpha:]].\+$#+#' -e '$a0')) total") | sed 's/^ *\([0-9]\+\) \(.*\)/\2 = \1/' | tac
outputs
total = 6
mx3.xyz.com = 2
mx2.xyz.com = 1
mx1.xyz.com = 2
home.xyz.abc.com = 1
Please do not upvote this answer ;)
