Shell script hangs on awk command - shell

This is essentially what my script looks like:
......
rowNum=$(awk '{print NF}' temp)
i=1
while [ $i -lt $rowNum ]
do
echo "$rowNum"
echo "$i"
echo "$j"
awk -v text=$(awk -v numb=$i '{print $numb}' temp) -v num=$j 'BEGIN{FS=","} $1 ~ text {print $num}' > temp${i}
echo "testing flag"
i=$(expr $i + 1)
done
......
When I run it I get
101
1
3
And then it just hangs, with "awk * script.sh text.txt" written continuously on the terminal tab, so it's definitely hanging on the awk command, but I can't figure out how to fix it.
Thank you

Looks like you didn't supply an input file for awk, so it's reading stdin.
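For illustration, a sketch of the fix on the line that hangs: give that awk a file operand so it doesn't wait on stdin. The filename text.txt below is only a guess based on the terminal title mentioned in the question; substitute whatever file actually holds the comma-separated data.
# Hangs: no file operand after the program, so awk blocks reading stdin
awk -v text=$(awk -v numb=$i '{print $numb}' temp) -v num=$j 'BEGIN{FS=","} $1 ~ text {print $num}' > temp${i}
# With a file operand added (text.txt is an assumption, see above)
awk -v text=$(awk -v numb=$i '{print $numb}' temp) -v num=$j 'BEGIN{FS=","} $1 ~ text {print $num}' text.txt > temp${i}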

Related

Unexpected behaviour with awk exit

I have the following code:
process_mem() {
used=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $2}' | sed 's/\%us,//'`
idle=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $5}' | sed 's/\%id,//'`
awk -v used=$used \
-v custom_cpu_thres=$custom_cpu_thres \
'{
if(used>custom_cpu_thres){
exit 1
}else{
exit 0
}
}'
return=$?
echo $return
if [[ $return -eq 1 ]]; then
echo $server_name"- High CPU Usage (Used:"$used".Idle:"$idle"). "
out=1
else
echo $server_name"- Normal CPU Usage (Used:"$used".Idle:"$idle"). "
fi
}
while IFS='' read -r line || [[ -n "$line" ]]; do
server_name=`echo $line | awk '{print $1}'`
custom_cpu_thres=`echo $line | awk '{print $3}'`
if [ "$custom_cpu_thres" = "-" ]; then
custom_cpu_thres=$def_cpu_thres
fi
expect -f "$EXPECT_SCRIPT" "$command" >/dev/null 2>&1
result=$?
if [[ $result -eq 0 ]]; then
process_mem
else
echo $server_name"- Error in Expect Script. "
out=1
fi
echo $server_name
done < $conf_file
exit $out
The problem is that the bash read loop should be executed 4 times (once per line read). However, if I write the awk code with an exit inside, the read loop exits after the first iteration.
Why is this happening? In my opinion, an exit inside the awk code shouldn't affect the bash script.
Regards.
I believe the statement you make is false.
You stated:
The problem is that read bash loop should be executed 4 times (one per line read). However, if I write the awk code with an exit inside, read bash loop exits after the first loop.
I do not believe that the script exits after the first loop; rather, it is stuck in the first loop. The reason I make this statement is that your awk script is flawed. The way you wrote it is:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
The problem here is that Awk did not get an input file. If no input file is provided to awk, it reads stdin (as when processing a pipe or keyboard input). Since nothing is sent to stdin (unless you press a couple of keys and accidentally hit Enter), the script does not move forward and Awk sits there awaiting input.
The standard input shall be used only if no file operands are specified, or if a file operand is '-', or if a progfile option-argument is '-'; see the INPUT FILES section. If the awk program contains no actions and no patterns, but is otherwise a valid awk program, standard input and any file operands shall not be read and awk shall exit with a return status of zero.
Source: Awk POSIX Standard
The following bash-line demonstrates the above statement:
$ while true; do awk '{print "woot!"; exit }'; done
Only when you press some keys followed by Enter is the word "woot!" printed on the screen.
How to solve your problem:
The easiest way to solve your problem using Awk is to make use of the BEGIN block. This block is executed before any input line (or stdin) is read. If you tell Awk to exit in a BEGIN block, it terminates without reading any input. Thus:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'BEGIN{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
or shorter
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'BEGIN{ exit (used>custom_cpu_thres) }'
However, Awk is a bit of overkill here. A simple bash test would suffice:
[[ "$used" -le "$custom_cpu_thres" ]]
result=$?
or
(( used <= custom_cpu_thres ))
result=$?
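Note that both bash tests compare integers only, so if $used can be a decimal value such as 3.2 they will complain, and the awk BEGIN form above is the safer drop-in. A minimal sketch of that form inside process_mem, reusing the variable names from the question:
awk -v used="$used" -v custom_cpu_thres="$custom_cpu_thres" \
    'BEGIN{ exit (used > custom_cpu_thres) }'
result=$?    # 1 when usage exceeds the threshold, 0 otherwise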

echo "$var" prints blank space

I was basically trying to compare two files, and as part of that I assigned the cksum of each file to a variable. But when I tried to compare them, it did not work. I realized that when I tried to read the variable, nothing got printed out.
The below commands worked just fine
s.joseph#VA-S-JOSEPH-900 /cygdrive/c/users/Anuprita
$ test=`cksum interface2 | awk -F" " '{ print $1 }'`
s.joseph#VA-S-JOSEPH-900 /cygdrive/c/users/Anuprita
$ echo "$test"
3021988741
But when these are part of a script and I try to echo $var, nothing gets printed
$ for i in `ls interface*`;
do chksum1=`cksum $i | awk -F" " '{ print "'$1'" }'`;
echo "$chksum1";
done
s.joseph#VA-S-JOSEPH-900 /cygdrive/c/users/Anuprita
$
I am using bash shell
Without assigning it to any variable, the output is as shown below
for i in interface*; do echo "interface=\"$i\""; cksum "$i"; done
interface="interface11"
4113442291 111 interface11
interface="interface17"
1275738681 111 interface17
interface="interface2"
3021988741 186 interface2
Looks like it is an issue only with bash on Cygwin. The script seems to work just fine on Unix:
for i in `ls interface*`; do chksum1=`cksum $i | awk -F" " '{ print $1 }'`; echo $i, $chksum1; done
interface1, 4294967295
interface2, 4294967295
Try this:
for i in `ls interface*`; do echo "interface=$i"; chksum1=$(cksum $i | awk -F" " '{ print "'$1'" }'); echo "$chksum1"; done
I like adding the echo statement to verify you're getting what you think from the ls statement, and the variable assignment should use $(cmd) or `cmd`.
Cheers
What you have in your 2nd script:
print "'$1'"
is a completely different statement from what you have in your first one:
print $1
Think about it and ask yourself why you changed it and what it is you're trying to achieve. Also see man awk and the entry at http://cfajohnson.com/shell/cus-faq-2.html#Q24 for what print "'$1'" does.
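To make the difference concrete (an added illustration, using the same loop variable as above): in print "'$1'" the inner quotes close the single-quoted awk program, so the shell expands its own $1 before awk ever runs, and inside that loop the shell's $1 is empty.
# Shell's $1 (empty here) gets pasted into the program text:
chksum1=`cksum "$i" | awk '{ print "'$1'" }'`    # awk actually runs: { print "" }
# awk's own first field, which is the checksum you want:
chksum1=`cksum "$i" | awk '{ print $1 }'`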
Best I can tell, without any provided sample input, your script should be written:
for i in interface*; do chksum1=$(cksum "$i" | awk '{ print $1 }'); echo "$chksum1"; done

How to pass a filename through a variable to be read by awk

Good day,
I was wondering how to pass the filename to awk as a variable, so that awk can read it.
So far I have done:
echo file1 > Aenumerar
echo file2 >> Aenumerar
echo file3 >> Aenumerar
AE=`grep -c '' Aenumerar`
r=1
while [ $r -le $AE ]; do
lista=`awk "NR==$r {print $0}" Aenumerar`
AEList=`grep -c '' $lista`
s=1
while [ $s -le $AEList ]; do
word=`awk -v var=$s 'NR==var {print $1}' $lista`
echo $word
let "s = s + 1"
done
let "r = r + 1"
done
Thanks so much in advance for any clue, or another simple way to do it from the bash command line.
Instead of:
awk "NR==$r {print $0}" Aenumerar
You need to use:
awk -v r="$r" 'NR==r' Aenumerar
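The reason the double-quoted version misbehaves is that the shell expands both $r and $0 before awk runs; $r is what you want, but $0 becomes the shell's own $0 (the script or shell name) rather than awk's "current line". A small sketch of the two forms:
# Double quotes: the shell substitutes its own $0 into the program text
awk "NR==$r {print $0}" Aenumerar
# Single quotes plus -v: r is an awk variable and $0 keeps its awk meaning (the whole line)
awk -v r="$r" 'NR==r {print $0}' Aenumerar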
Judging by what you've posted, you don't actually need all the NR stuff; you can replace your whole script with this:
while IFS= read -r lista ; do
awk '{print $1}' "$lista"
done < Aenumerar
(This will print the first field of each line in each of file1, file2, file3. I think that's what you're trying to do?)

awk command variable NF not working on NULL input

I run a shell script to make sure a binary is running.
To check whether a binary is running, I use the following command:
pidof prog.bin | awk '{print NF}'
On some systems it gives me 0 when the binary is not running, and on other systems it gives me NULL (nothing).
I can check for NULL using the -z option, but why is the awk command acting this way?
Instead of pidof you can use:
pgrep -qf prog.bin
And check its exit status.
As per man pgrep:
-f Match against full argument lists. The default is to match against process names.
-q Do not write anything to standard output.
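A minimal sketch of that approach (it assumes a pgrep whose -f flag matches the full command line, as quoted above; the output is discarded so only the exit status matters):
if pgrep -f prog.bin >/dev/null 2>&1; then
    echo "Running"
else
    echo "Not Running"
fi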
You can use this:
if [ `pidof 'NetworkManager'` ]; then
echo "Running"
else
echo "Not Running"
fi
One way to handle this sort of thing (undefined variables) in awk is like this:
echo hi | awk '{print a}'
compared with:
echo hi | awk '{print a || 0}'
0
One-liner for if/else:
[[ $(pidof 'NetworkManager') ]] && echo "Running" || echo "Not Running"
Try this:
pidof prog.bin | awk '{ if (NF!=0) print NF }'
Here's some tests with awk and NF:
$ # regular line of input
$ echo foo | awk '{print NF}'
1
$ # empty line
$ echo | awk '{print NF}'
0
$ # a word on input with no newline
$ printf "%s" nonewline | awk '{print NF}'
1
$ # no input, not even a newline
$ printf %s | awk '{print NF}'
# no output from awk
I suspect the pidof case is the last: not even a newline. To force a newline:
echo $(pidof prog) | ...
printf "%s\n" "$(pidof prog)" | ...
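For example (a sketch reusing the command from the question):
# The forced newline guarantees awk sees exactly one input line, so it prints NF
# (which will be 0 when pidof produces no output)
printf "%s\n" "$(pidof prog.bin)" | awk '{print NF}'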

Need help in shell script

I am new to shell scripting and have been learning it for the past 2 months. I need your help in tuning my script or providing any other solution, either in sed or awk, for the question below.
"Write a script to input the filename and display the content of the file in such a manner that each line has only 10 characters. If a line in the file exceeds 10 characters, then display the rest of the line on the next line."
I have written the script below and it works fine, but it took me 2 hours to write it (certainly not acceptable). The problem is that I know the shell commands very well but still have not mastered the skill of putting them together into shell scripts :-(. Thanks.
#!/bin/bash
if [ $# -ne 1 ]; then
echo "USAGE: $0 $1"
exit 99;
fi
VAR1=$(echo "$1" | wc -c)
cat "$1" | while read line
do
[ $VAR1 -gt 10 ] && echo "$line" || echo "$line"|tr " " "\n"
done
Using sed
sed 's/........../&\n/g' file.txt
Using grep
grep -oE '.{1,10}' file.txt
Using dd
cat file.txt | dd cbs=10 conv=unblock 2>/dev/null
Using awk?
awk 'BEGIN {FS=""} {for (i=1; i<=NF; i++) if (i % 10 == 0) printf "%s\n", $i ; else if (i == NF) printf "%s\n", $i ; else printf "%s", $i} ' inputs.txt
This works, but I have a feeling that this is not the most optimal way of using awk :-P
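A shorter awk variant in the same spirit (a sketch, not from the answers above): peel off 10 characters at a time with substr instead of splitting the line into single-character fields.
awk '{ while (length($0) > 10) { print substr($0, 1, 10); $0 = substr($0, 11) } print }' file.txt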
