Unexpected behaviour with awk exit - bash

I have the following code:
process_mem() {
    used=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $2}' | sed 's/\%us,//'`
    idle=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $5}' | sed 's/\%id,//'`
    awk -v used=$used \
        -v custom_cpu_thres=$custom_cpu_thres \
        '{
            if(used>custom_cpu_thres){
                exit 1
            }else{
                exit 0
            }
        }'
    return=$?
    echo $return
    if [[ $return -eq 1 ]]; then
        echo $server_name"- High CPU Usage (Used:"$used".Idle:"$idle"). "
        out=1
    else
        echo $server_name"- Normal CPU Usage (Used:"$used".Idle:"$idle"). "
    fi
}
while IFS='' read -r line || [[ -n "$line" ]]; do
    server_name=`echo $line | awk '{print $1}'`
    custom_cpu_thres=`echo $line | awk '{print $3}'`
    if [ "$custom_cpu_thres" = "-" ]; then
        custom_cpu_thres=$def_cpu_thres
    fi
    expect -f "$EXPECT_SCRIPT" "$command" >/dev/null 2>&1
    result=$?
    if [[ $result -eq 0 ]]; then
        process_mem
    else
        echo $server_name"- Error in Expect Script. "
        out=1
    fi
    echo $server_name
done < $conf_file
exit $out
The problem is that the read loop should execute 4 times (once per line read). However, with the exit inside the awk code, the read loop stops after the first iteration.
Why is this happening? In my opinion, an exit inside the awk code shouldn't affect the bash script.
Regards.

I believe the statement you make is false.
You stated:
The problem is that read bash loop should be executed 4 times (one per line read). However, if I write the awk code with an exit inside, read bash loop exits after the first loop.
I do not believe that the script exits after the first iteration; rather, it is stuck in the first iteration. The reason I make this statement is that your awk invocation is flawed. The way you wrote it is:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
The problem here is that awk was not given an input file. If no input file is provided, awk reads stdin (as when processing a pipe or keyboard input). Since nothing is sent to stdin (unless you press a couple of keys and hit Enter), the script does not move forward: awk is simply awaiting input.
The standard input shall be used only if no file operands are specified, or if a file operand is '-', or if a progfile option-argument is '-'; see the INPUT FILES section. If the awk program contains no actions and no patterns, but is otherwise a valid awk program, standard input and any file operands shall not be read and awk shall exit with a return status of zero.
Source: the POSIX standard for awk.
The following bash-line demonstrates the above statement:
$ while true; do awk '{print "woot!"; exit }'; done
Only when you press some keys followed by Enter is the word "woot!" printed to the screen!
How to solve your problem:
The easiest way to solve your problem in awk is with the BEGIN block. This block is executed before awk reads any input line (from a file or stdin). If you tell awk to exit inside a BEGIN block, it terminates without reading any input at all. Thus:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'BEGIN{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
or, shorter:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
    'BEGIN{ exit (used>custom_cpu_thres) }'
However, awk is overkill here. A simple bash test would suffice:
[[ "$used" -le "$custom_cpu_thres" ]]
result=$?
or
(( used <= custom_cpu_thres ))
result=$?
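One caveat before dropping awk, though: the used value scraped from top output is often fractional (e.g. 3.2), and bash arithmetic is integer-only, so the bash tests above would choke on such input, while the awk BEGIN form compares floats natively. A minimal sketch of the difference, assuming a fractional reading and a threshold of 80:
used=3.2
(( used <= 80 ))      # bash: syntax error, arithmetic is integer-only
awk -v used="$used" -v thres=80 'BEGIN{ exit (used > thres) }'
echo $?               # 0, i.e. usage is below the threshold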

Related

Would it be possible to print the file used to redirect STDERR?

Would it be possible to print the filename used to redirect STDERR, given the sample command below:
command.sh 2>file.err
Code in command.sh:
#!/bin/sh
ls -l non_existing_file.txt
echo "STDERR file is: $stderrFilename" # variable should print file.err
It's a little risky, but you could try parsing AIX's procfiles output. It involves capturing the major and minor device numbers of stderr, along with the inode number, then looking for the corresponding device and its mountpoint, and finally using find to locate the file with that inode number:
#!/bin/sh
dev=$(procfiles $$ | awk '$1 == "2:" { print substr($4, 5) }')
inode=$(procfiles $$ | awk '$1 == "2:" { print substr($5, 5) }')
major=${dev%%,*}
minor=${dev##*,}
if [ "$major" -eq 0 ]
then
    echo "I give up, the major number is zero"
    exit 1
fi
for file in /dev/*
do
    [ -b "$file" ] || continue
    if istat "$file" | grep -q "^Major Device ${major}.*Minor Device ${minor}$"
    then
        break
    fi
done
fs=$(mount | awk '$1 == "'"${file}"'" { print $2 }')
stderrFilename=$(find "$fs" -inum "$inode")
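For comparison, on Linux (the above is AIX-specific) the same lookup needs no device arithmetic at all, since /proc exposes each file descriptor as a symlink; a minimal sketch:
#!/bin/sh
# Linux-only sketch: /proc/<pid>/fd/2 points at whatever stderr is redirected to
stderrFilename=$(readlink /proc/$$/fd/2)
echo "STDERR file is: $stderrFilename"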
I made a solution using history. Not sure if there is an easier way to do this (or a proper one).
#!/bin/sh
stderrfname=`history | tail -1 | awk '{ print $3 }' | sed "s/.*>//"`
echo "STDERR file is: $stderrfname"

Shell script hangs on awk command

This is what my script looks like essentially
......
rowNum=$(awk '{print NF}' temp)
i=1
while [ $i -lt $rowNum ]
do
    echo "$rowNum"
    echo "$i"
    echo "$j"
    awk -v text=$(awk -v numb=$i '{print $numb}' temp) -v num=$j 'BEGIN{FS=","} $1 ~ text {print $num}' > temp${i}
    echo "testing flag"
    i=$(expr $i + 1)
done
......
When I run it I get
101
1
3
And then it just hangs, with "awk * script.sh text.txt" continuously shown in the terminal tab, so it's definitely hanging on the awk command, but I can't figure out how to fix it.
Thank you
Looks like you didn't supply an input file for awk, so it's reading stdin.
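A sketch of the likely fix, assuming the comma-separated data lives in text.txt (the filename visible in the terminal tab title); the inner awk already reads temp, it is the outer one that was left without input:
awk -v text="$(awk -v numb="$i" '{print $numb}' temp)" -v num="$j" \
    'BEGIN{FS=","} $1 ~ text {print $num}' text.txt > temp${i}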

awk command variable NF not working on NULL input

I run my shell script to make sure a binary is running.
To check that a binary is running, I use the following command:
pidof prog.bin | awk '{print NF}'
On some systems it gives me 0 when the binary is not running, and on other systems it gives me NULL (nothing).
I can check for NULL using the -z test, but why is the awk command acting this way?
Instead of pidof you can use:
pgrep -qf prog.bin
And check its exit status.
As per man pgrep:
-f Match against full argument lists. The default is to match against process names.
-q Do not write anything to standard output.
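A usage sketch built on the flags quoted above:
if pgrep -qf prog.bin; then
    echo "Running"
else
    echo "Not Running"
fi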
You can use this:
if [ `pidof 'NetworkManager'` ]; then
    echo "Running"
else
    echo "Not Running"
fi
One way to handle this sort of thing (undefined variables) in awk is like this:
echo hi | awk '{print a}'
which prints an empty line (an unset awk variable is the empty string), compared with:
echo hi | awk '{print a || 0}'
0
One-liner for if/else:
[[ $(pidof 'NetworkManager') ]] && echo "Running" || echo "Not Running"
Try this:
pidof prog.bin | awk '{ if (NF!=0) print NF }'
Here's some tests with awk and NF:
$ # regular line of input
$ echo foo | awk '{print NF}'
1
$ # empty line
$ echo | awk '{print NF}'
0
$ # a word on input with no newline
$ printf "%s" nonewline | awk '{print NF}'
1
$ # no input, not even a newline
$ printf %s | awk '{print NF}'
# no output from awk
I suspect the pidof case is the last: not even a newline. To force a newline:
echo $(pidof prog) | ...
printf "%s\n" "$(pidof prog)" | ...
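Putting that together, a minimal sketch (with prog.bin standing in for the real binary) that always yields a number:
n=$(printf '%s\n' "$(pidof prog.bin)" | awk '{print NF}')
echo "${n:-0} running instance(s)"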

Need help in shell script

I am new to shell scripting and have been learning it for the past 2 months. I need your help in tuning this, or providing any other solution in sed or awk, for the question below.
"Write a script to input the filename and display the content of the file in such a manner that each line has only 10 characters. If a line in the file exceeds 10 characters, then display the rest of the line on the next line."
I have written the script below and it works fine, but it took me 2 hours to write (certainly not acceptable). The problem is that I know the shell commands very well but still have not mastered the skill of putting them together into scripts :-(. Thanks.
#!/bin/bash
if [ $# -ne 1 ]; then
    echo "USAGE: $0 $1"
    exit 99
fi
VAR1=$(echo "$1" | wc -c)
cat "$1" | while read line
do
    [ $VAR1 -gt 10 ] && echo "$line" || echo "$line" | tr " " "\n"
done
Using sed
sed 's/........../&\n/g' file.txt
Using grep
grep -oE '.{1,10}' file.txt
Using dd
cat file.txt | dd cbs=10 conv=unblock 2>/dev/null
Using awk?
awk 'BEGIN {FS=""} {for (i=1; i<=NF; i++) if (i % 10 == 0) printf "%s\n", $i; else if (i == NF) printf "%s\n", $i; else printf "%s", $i}' inputs.txt
This works, but I have a feeling that this is not the most optimal way of using awk :-P
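For completeness, the standard fold utility does exactly this wrapping in a single call:
fold -w 10 file.txt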

How to verify information using standard linux/unix filters?

I have the following data in a Tab delimited file:
_ DATA _
Col1 Col2 Col3 Col4 Col5
blah1 blah2 blah3 4 someotherText
blahA blahZ blahJ 2 someotherText1
blahB blahT blahT 7 someotherText2
blahC blahQ blahL 10 someotherText3
I want to make sure that the data in the 4th column of this file is always an integer. I know how to do this in perl:
Read each line and store the value of the 4th column in a variable.
Check whether that variable is an integer.
If it is, continue the loop.
Otherwise, break out of the loop with a message saying the file data is not correct.
But how would I do this in a shell script using standard linux/unix filters? My guess would be to use grep, but I am not sure how.
cut -f4 data | LANG=C grep -q '[^0-9]' && echo invalid
LANG=C for speed
-q to quit at the first match (a non-digit) rather than scanning the whole of a possibly long file
If you need to skip the first (header) line then use tail -n+2, or you could get hacky and use:
cut -f4 data | LANG=C sed -n '1b;/[^0-9]/{s/.*/invalid/p;q}'
(the 1b branches past the header line).
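Wrapped into a sketch of the full check, header line included:
if tail -n+2 data | cut -f4 | LANG=C grep -q '[^0-9]'; then
    echo "file data not correct"
    exit 1
fi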
awk is the tool most naturally suited for parsing by columns:
awk '{if ($4 !~ /^[0-9]+$/) { print "Error! Column 4 is not an integer:"; print $0; exit 1}}' data.txt
As you get more complex with your error detection, you'll probably want to put the awk script in a file and invoke it with awk -f verify.awk data.txt.
Edit: in the form you'd put into verify.awk:
{
    if ($4 !~ /^[0-9]+$/) {
        print "Error! Column 4 is not an integer:"
        print $0
        exit 1
    }
}
Note that I've made awk exit with a non-zero code, so that you can easily check it in your calling script with something like this in bash:
if awk -f verify.awk data.txt; then
    : # action for success
else
    : # action for failure
fi
You could use grep, but it doesn't inherently recognize columns. You'd be stuck writing patterns to match the columns.
awk is what you need.
I can't upvote yet, but I would upvote Jefromi's answer if I could.
Sometimes you need it in pure Bash, because tr, cut, and awk behave differently across Linux/Solaris/AIX/BSD/etc.:
while read a b c d e; do [[ "$d" =~ ^[0-9]+$ ]] || echo "$a: $d is not a number"; done < data
Edited:
#!/bin/bash
# Note the inverted convention: isdigit returns 0 (shell "true") when $1 is NOT all digits
isdigit ()
{
    [ $# -eq 1 ] || return 0
    case $1 in
        *[!0-9]*|"") return 0;;
        *) return 1;;
    esac
}
while read line
do
    col=($line)
    digit=${col[3]}
    if isdigit "$digit"
    then
        echo "err, no digit $digit"
    else
        echo "hey, we got a digit $digit"
    fi
done
Use this in a script foo.sh and run it like ./foo.sh < data.txt
See tldp.org for more info
Pure Bash:
linenum=1
while read line; do
    field=($line)
    if ((linenum > 1)); then
        [[ ! ${field[3]} =~ ^[[:digit:]]+$ ]] && echo "FAIL: line number: ${linenum}, value: '${field[3]}' is not an integer"
    fi
    ((linenum++))
done < data.txt
To stop at the first error, add a break:
linenum=1
while read line; do
    field=($line)
    if ((linenum > 1)); then
        [[ ! ${field[3]} =~ ^[[:digit:]]+$ ]] && echo "FAIL: line number: ${linenum}, value: '${field[3]}' is not an integer" && break
    fi
    ((linenum++))
done < data.txt
cut -f 4 filename
will return the fourth field of each line to stdout.
Hopefully that's a good start, because it's been a long time since I had to do any major shell scripting.
Mind, this may well not be the most efficient compared to iterating through the file with something like perl.
tail +2 x.x | sort -n -k 4 | head -1 | cut -f 4 | egrep "^[0-9]+$"
if [ "$?" = "0" ]
then
    echo "file is ok"
fi
tail +2 gives you all but the first line (since your sample has a header)
sort -n -k 4 sorts the file numerically on the 4th column, letters will rise to the top.
head -1 gives you the first line of the file
cut -f 4 gives you the 4th column, of the first line
egrep "^[0-9]+$" checks if the value is a number (integers in this case).
If egrep finds nothing, $? is 1, otherwise it's 0.
There's also:
if [ `tail +2 x.x | wc -l` = `tail +2 x.x | cut -f 4 | egrep "^[0-9]+$" | wc -l` ]; then
    echo "file is ok"
fi
This will be faster, requiring two simple scans through the file, but it's not a single pipeline.
@OP, use awk:
awk '$4+0<=0{print "not ok";exit}' file
(The $4+0 coercion turns a non-numeric field into 0, so such lines trip the test; note it also flags legitimate zero and negative values.)
