print line number of output in shell script - shell

I have a script that prints out the average time when pinging a server, shown below:
ping -c3 "${I}" | tail -1 | awk '{print $4}' | cut -d '/' -f 2 | sed 's/$/\tms/'
How can I add the line number to the output of the script above when pinging a list of servers?
My actual output when pinging a list of 3 hosts is:
6.924 ms
100.099 ms
7.756 ms
I want the output to be like this:
1,6.924 ms
2,100.099 ms
3,7.756 ms
so that this can be read by Excel :)
Thanks in advance!!

Pipe your output through perl:
echo -e 'aa\nbb' | perl -ne 'print $., ",", $_'
Output:
1,aa
2,bb
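Applied to the original loop, that might look something like this (a rough sketch; host1, host2 and host3 are just placeholders for however your list is produced):
for I in host1 host2 host3
do
ping -c3 "${I}" | tail -1 | awk '{print $4}' | cut -d '/' -f 2 | sed 's/$/\tms/'
done | perl -ne 'print $., ",", $_'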

Is that what you want?
C=1
for I in 'host1' 'host2' 'host3'
do
ping -c3 "${I}" | tail -1 | awk '{print $4}' | cut -d '/' -f 2 | echo "$C,$(sed 's/$/\tms/')"
C=$((C+1))
done

The standard tool for line numbering is nl. Pipe your output through nl -s, (the comma is the separator argument given to -s). That is:
for I; do
ping -c3 "${I}" | awk -F/ 'END{print $5, "\tms"}'
done | nl -s,
Since you haven't specified how the list is generated, I'm just showing the case where the list of hosts to be pinged is given on the command line. Note that nl introduces leading whitespace before the line number, so you might want to filter that through sed to remove it.
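For instance, a sketch of that cleanup step (the sed at the end deletes the padding spaces that nl prints before each number):
for I; do
ping -c3 "${I}" | awk -F/ 'END{print $5, "\tms"}'
done | nl -s, | sed 's/^[[:space:]]*//'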
Of course, this script is spending most of its time waiting for the ping, and you probably want to speed it up by running the pings in parallel. In that case, it is better to add the line number at the beginning so you can get a stable sort in the output:
line=1
{ for I; do ping -c3 "$I" | awk -F/ 'END{
printf( "%d,%s\tms\n", line,$5 )}' line=$line &
: $((line +=1 ))
done; wait; } | sort -n
In this case, the wait is not strictly necessary, since sort will block until all of the pings have closed their output. It becomes necessary, however, if you add any process to the pipeline before the sort that does not wait for all of its input before doing any processing, so it is good practice to leave the wait in place.
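For reference, since for I; loops over the positional parameters, both versions expect the hosts as command-line arguments; saved under a hypothetical name such as pingtimes.sh, the invocation would be something like:
./pingtimes.sh host1 host2 host3 > times.csv
where times.csv is the file you would then open in Excel.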

Related

Integer expected error in script

I am trying to write a simple script to monitor disk usage. I keep getting integer expression expected errors at line 5. (THRESHOLD value is intentionally set low for testing.)
Here is my script
#!/bin/bash
CURRENT=$(df -hP | grep / | awk '{ print $5}' | sed 's/%//g')
THRESHOLD=10
if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
mail -s 'Disk Space Alert' john.kenny@ngc.com << EOF
Your root partition remaining free space is critically low. Used: $CURRENT%
EOF
fi
My screen output looks like this
./monitor_disk_space.sh: line 5: [: 7
0
22
1
1
1
1
1
1: integer expression expected
I'm new to bash scripts and especially awk. Any suggestions would be appreciated.
As you can see, you're getting a string of newline-separated values from your pipeline. This string is not in itself an integer, so it can't be compared to $THRESHOLD.
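You can reproduce the error at an interactive prompt; the values here are made up, purely to illustrate why the comparison fails:
$ CURRENT=$'7\n0\n22'
$ [ "$CURRENT" -gt 10 ] && echo above threshold
bash: [: 7
0
22: integer expression expected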
Assuming you'd like to send the message if any filesystem is above $THRESHOLD percent full, you may use
df -hP | awk '/\// { sub("%", "", $5); print $5 }' |
while read number; do
if [ "$number" -gt "$THRESHOLD" ]; then
mail ...
break
fi
done
This would pass the values, one by one, into a loop that would compare them against $THRESHOLD. If any value is larger, the mail is sent and the loop exits (via the break).
I also took the liberty of shortening your pipeline to just df+awk, as awk is more than capable of doing the work of both grep and sed.
If you only want to check the root partition, then use df -hP / in the pipeline above.
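That is, only the df invocation changes; the awk and the while loop stay exactly as above (mail ... elided as before):
df -hP / | awk '/\// { sub("%", "", $5); print $5 }' |
while read number; do
if [ "$number" -gt "$THRESHOLD" ]; then
mail ...
break
fi
done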
CURRENT=$(df -hP | grep / | awk '{ print $5}' | sed 's/%//g')
df -hP shows a summary of disk usage, one line per mounted filesystem.
grep / keeps every line containing a / character, which filters out the header line.
awk '{print $5}' prints the 5th column, which is the percentage usage for each file system.
sed 's/%//g' deletes the % character. (There's only one, so the g is unnecessary. I might have used tr -d %, but it doesn't really matter.)
$(...) captures the output of the above -- which is going to be multiple lines of output, each of which should contain an integer.
The -gt operator requires a single integer for each of its arguments.
I think the problem is the grep /, which prints every line containing a / character (that's probably going to be everything except the header line). Your message indicates that you're interested in the root filesystem.
Changing grep / to grep /$ is one simple solution.
But passing / as an argument to the df command, so it displays usage only for the root file system, is even simpler.
Here's how I might do it:
CURRENT=$(df / | awk 'NR == 2 { print $5 }' | tr -d %)
You could incorporate the deletion of the % character into the awk command, but that would be a little more complicated.
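For completeness, one way that might look (sub() removes the % from the fifth field before it is printed):
CURRENT=$(df / | awk 'NR == 2 { sub("%", "", $5); print $5 }')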
why not do it all in awk?
$ df -hP |
awk -v th=10 '/\// {if($5+0>th)
system("echo Your ... " $5 " | mail -s \"Disk Space Alert\" xxx#example.com")}'

How to parse the data to get the timestamp sorted along with the error code

I am trying to grep a particular string, "SERVER-OUT", from one of my log files and count the number of times it appears in the file; if the count is greater than 18, a mail is sent to me. That part is working:
#!/bin/bash
ERRCOUNT=$(grep SERVER-OUT /licenses/CapSync30/license_logs/770 |tail -100 | wc -l)
HOSTN="`/bin/hostname`"
if [ "$ERRCOUNT" -ge 18 ]
then
echo "SERVER-OUT Error Count is $ERRCOUNT on $HOSTN" | mailx -s "Urgent !!! Licence Admin Please Investigate the Server $HOSTN for any Issues" karn#dence.com
fi
Now, the log file has the format shown below. I want to extract the timestamp and the SERVER-OUT field, with the timestamp trimmed to hours and minutes, e.g. "13:53". If "SERVER-OUT" appears more than 10 times within the same minute, a mail should be sent to me with the count and the matching data.
13:53:21 (meta) SERVER-OUT: Failed to send the message(86)
Below is what I have tried so far to get the timestamp (hours:minutes) along with the SERVER-OUT message, but I am stuck on how to take it further:
$ awk '/SERVER-OUT/ {print $1, $3}' /licenses/CapSync30/license_logs/770
13:53:21 SERVER-OUT:
13:54:06 SERVER-OUT:
$ awk '/SERVER-OUT/ {print $1, $3}' /licenses/CapSync30/license_logs/770 | cut -d: -f1,2 | tail -2
13:53
13:54
It's not entirely clear what you're trying to do, but perhaps this will give you some ideas:
$ echo "13:53:21 (meta) SERVER-OUT: Failed to send the message(86)" |
awk '/SERVER-OUT/{split($1,t,":"); print t[1]":"t[2],$3}'
13:53 SERVER-OUT:
or
$ ... | awk '/SERVER-OUT/{print substr($1,1,5),$3}'
13:53 SERVER-OUT:
Set up a counter on the extracted values, using the second alternative:
$ ... | awk '/SERVER-OUT/{counter[substr($1,1,5),$3]++}
END {for(k in counter) if(counter[k]>10) exit 1}'
check the exit status and send the notification...
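A minimal sketch of that final step, reading the log file directly and reusing the mailx line from your script (the threshold of 10 and the recipient are simply carried over from above):
awk '/SERVER-OUT/{counter[substr($1,1,5),$3]++}
END {for(k in counter) if(counter[k]>10) exit 1}' /licenses/CapSync30/license_logs/770
if [ $? -eq 1 ]; then
echo "SERVER-OUT seen more than 10 times within one minute on $HOSTN" | mailx -s "Urgent !!! Licence Admin Please Investigate the Server $HOSTN for any Issues" karn@dence.com
fi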

One-liner file monitoring

I have a logfile continuously filling with stuff.
I wish to monitor this file, grep for a specific line and then extract and use parts of that line in a curl command.
I had a look at How to grep and execute a command (for every match)
This would work in a script, but I wonder if it is possible to achieve it with the one-liner below, using xargs or something else?
Example:
Tue May 01|23:59:11.012|I|22|Event to process : [imsi=242010800195809, eventId = 242010800195809112112, msisdn=4798818181, inbound=false, homeMCC=242, homeMNC=01, visitedMCC=238, visitedMNC=01, timestamp=Tue May 12 11:21:12 CEST 2015,hlr=null,vlr=4540150021, msc=4540150021 eventtype=S, currentMCC=null, currentMNC=null teleSvcInfo=null camelPhases=null serviceKey=null gprsenabled= false APNlist: null SGSN: null]|com.uws.wsms2.EventProcessor|processEvent|139
Extract the fields I want and semi-colon separate them:
tail -f file.log | grep "Event to process" | awk -F'=' '{print $2";"$4";"$12}' | tr -cd '[[:digit:].\n.;]'
Curl command, e.g. something like:
http://user:pass@www.some-url.com/services/myservice?msisdn=...&imsi=...&vlr=...
Thanks!
Try this:
tail -f file.log | grep "Event to process" | awk -F'=' '{print $2" "$4" "$12; }' | tr -cd '[[:digit:].\n. ]' | while read msisdn imsi vlr ; do curl "http://user:pass@www.some-url.com/services/myservice?msisdn=$msisdn&imsi=$imsi&vlr=$vlr" ; done

How to strip a number in the output of an executable?

I run an executable which outputs a lot of lines to stdout. The last line is
Run in 100 seconds
The code in the C program of the executable to write the last line is
printf("Ran in %g seconds\n", time);
So there is a newline character at the end.
I want to strip the last number, e.g. 100, from the stdout, so in bash
./myexecutable > output
How can I then further parse output in bash to get the time value? Do I need additional tools for that?
Thanks!
You could use grep:
grep -oP 'Ran in \K\d+' output
or
grep -oP '(?<=Ran in )\d+(?= seconds)' output
Let's say:
s='Run in 100 seconds'
Using tr:
tr -cd '[[:digit:]]' <<< "$s"
100
Using sed:
sed 's/[^0-9]*//g' <<< "$s"
100
However, if you want to grab the last number in a line, then use this negative-lookahead regex:
s='Run 10 in 100 seconds'
grep -oP '\d+(?!\D*\d)' <<< "$s"
100
Or, use tail to grab the last line (tail -n 1 <file>) and extract the number with either of the following:
Using sed with three pattern groups and printing the second group match:
tail -n 1 output | sed 's/\(^Run in \)\([0-9]\+\)\( seconds$\)/\2/g'
Using awk to print the third ($3) token:
tail -n 1 output | awk '{print $3}'

bash - summing the output of wordcount

Scenario:
I have a bunch of VIPs. An NSLOOKUP of a VIP normally returns one public IP. In cases where the load balancer fails, the NSLOOKUP returns two public IPs. I want to write a script for that scenario.
Logic:
for i in vip1 vip2 vip3; do nslookup $i | grep -v "<private IP>" | grep 'Address:' | wc -l ; done
in an ideal scenario, the output will look like
1
1
1
If I could sum the output, it would say 3. If something goes wrong, the output will show a sum > 3. I was unable to sum in the above case. Please advise.
echo vip1 vip2 vip3 | xargs -n 1 nslookup | \
awk '/Address/ && !/<private-ip>/ {s++} END{print s}'
Instead of summing, just count the matches in the entire loop. And use grep -c to do the match and count in one step.
for i in vip1 vip2 vip3
do
nslookup "$i"
done | grep -v "<private IP>" | grep -c 'Address'
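If you then want to act on that count (anything above 3 means something went wrong, per your description), a sketch could be as follows; the echo is just a stand-in for whatever alert you actually use:
count=$(for i in vip1 vip2 vip3
do
nslookup "$i"
done | grep -v "<private IP>" | grep -c 'Address')
if [ "$count" -gt 3 ]; then
echo "Unexpected number of public IPs: $count"
fi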
Make it with awk:
user@host:~# cat blub | awk '{ SUM += $1} END { print SUM }'
4
blub is a file with contents:
user@host:~# cat blub
1
2
1
To get a sum, you can substitute the command inside an arithmetic evaluation construct. If your pipeline produces an integer, then try the following loop:
for i in vip1 vip2 vip3; do
((sum += $(nslookup $i | .. rest of pipeline .. | wc -l)))
done
# .. do something with $sum ..
Maybe not the most elegant, but it should work.
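And to act on the total afterwards, the same kind of check applies (again, the echo is just a placeholder for your alert):
if (( sum > 3 )); then
echo "sum is $sum - more than one public IP returned somewhere"
fi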
