Variable loss in redirected bash while loop

I have the following code
for ip in $(ifconfig | awk -F ":" '/inet addr/{split($2,a," ");print a[1]}')
do
    bytesin=0; bytesout=0
    while read line
    do
        if [[ $(echo ${line} | awk '{print $1}') == ${ip} ]]
        then
            increment=$(echo ${line} | awk '{print $4}')
            bytesout=$((${bytesout} + ${increment}))
        else
            increment=$(echo ${line} | awk '{print $4}')
            bytesin=$((${bytesin} + ${increment}))
        fi
    done < <(pmacct -s | grep ${ip})
    echo "${ip} ${bytesin} ${bytesout}" >> /tmp/bwacct.txt
done
I would like it to print the incremented values to bwacct.txt, but instead the file is full of zeroes:
91.227.223.66 0 0
91.227.221.126 0 0
127.0.0.1 0 0
My understanding of Bash is that a while loop fed by process substitution (rather than a pipe) should preserve variables. What am I doing wrong?

First of all, simplify your script! There is usually a better way in bash, and much of the time you can rely on pure bash features instead of spawning awk or other external tools.
Then add some debugging!
Here is a slightly refactored script with debugging:
#!/bin/bash
for ip in $(ifconfig | grep -oP 'inet addr:\K[0-9.]+')
do
    bytesin=0
    bytesout=0
    while read -r line
    do
        read -r subIp _ _ increment _ <<< "$line"
        if [[ $subIp == "$ip" ]]
        then
            ((bytesout+=increment))
        else
            ((bytesin+=increment))
        fi
        # some debugging
        echo "line: $line"
        echo "subIp: $subIp"
        echo "bytesin: $bytesin"
        echo "bytesout: $bytesout"
    done <<< "$(pmacct -s | grep "$ip")"
    echo "$ip $bytesin $bytesout" >> /tmp/bwacct.txt
done
Much clearer now, huh? :)
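As an aside, the premise in the question is correct: a while loop fed by process substitution (or a here-string, as above) runs in the current shell and keeps its variables, whereas a loop on the right-hand side of a pipe runs in a subshell and loses them. A minimal illustration:
count=0
printf 'a\nb\n' | while read -r line; do ((count++)); done
echo "$count"    # prints 0: the loop ran in a subshell

count=0
while read -r line; do ((count++)); done < <(printf 'a\nb\n')
echo "$count"    # prints 2: the loop ran in the current shell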

Related

System variable set in bash not sticking after I go to an IF statement

apacherelease=$(curl -s "https://httpd.apache.org" | grep Released | awk '{print $4}' | perl -p -e 's/2.4.54/2.4.54-1/g') &&
apacheinstallversion=$(dnf list installed | grep httpd.x86_64|awk '{print $2}') &&
echo $apacherelease
echo $apacheinstallversion
if test "$apacheinstallversion" = "$apacherelease"; then
    : variables are the same
else
    : variables are different
fi
If I run the commands to set the variables directly from the command line, the variables stick; in the script, however, they seem to disappear the moment I reach the if statement.
Any input would be extremely helpful!
Corrected version:
apacherelease=$(curl -s "https://httpd.apache.org" | grep Released | awk '{print $4}' | perl -p -e 's/2.4.54/2.4.54-1/g') &&
apacheinstallversion=$(dnf list installed | grep httpd.x86_64 | awk '{print $2}')
echo "$apacherelease"
echo "$apacheinstallversion"
if [[ $apacheinstallversion == "$apacherelease" ]]; then
    echo "variables are the same"
else
    echo "variables are different" >&2
fi
use the full-featured bash test [[ instead of test
use == instead of = (and quote the right-hand side to force a literal comparison)
Note also that ": variables are the same" in the original runs the no-op command :, so neither branch ever printed anything; the corrected version uses echo instead.
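A small illustration of why the quoting on the right-hand side of == matters inside [[ ]] (a generic example, not from the original post):
v='2.4.54-1'
p='2.4.*'
[[ $v == $p ]]   && echo "pattern match"        # unquoted RHS is a glob pattern: matches
[[ $v == "$p" ]] || echo "not literally equal"  # quoted RHS is compared literally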

Bash while loop: Preventing third-party commands to read from stdin

Assume an input table (intable.csv) that contains ID numbers in its second column, and a fresh output table (outtable.csv) into which the input file - extended by one column - is to be written line by line.
echo -ne "foo,NC_045043\nbar,NC_045193\nbaz,n.a.\nqux,NC_045054\n" > intable.csv
echo -n "" > outtable.csv
Further assume that one or more third-party commands (here: esearch, efetch; both part of Entrez Direct) are employed to retrieve additional information for each ID number. This additional info is to form the third column of the output table.
while IFS="" read -r line || [[ -n "$line" ]]
do
    echo -n "$line" >> outtable.csv
    NCNUM=$(echo "$line" | awk -F"," '{print $2}')
    if [[ $NCNUM == NC_* ]]
    then
        echo "$NCNUM"
        RECORD=$(esearch -db nucleotide -query "$NCNUM" | efetch -format gb)
        echo "$RECORD" | grep "^LOCUS" | awk '{print ","$3}' | \
            tr -d "\n" >> outtable.csv
    else
        echo ",n.a." >> outtable.csv
    fi
done < intable.csv
Why does the while loop iterate only over the first input table entry under the above code, whereas it iterates over all input table entries if the code lines starting with RECORD and echo "$RECORD" are commented out? How can I correct this behavior?
This would happen if esearch reads from standard input. It will inherit the input redirection from the while loop, so it will consume the rest of the input file.
The solution is to redirect its standard input elsewhere, e.g. /dev/null.
while IFS="" read -r line || [[ -n "$line" ]]
do
    echo -n "$line" >> outtable.csv
    NCNUM=$(echo "$line" | awk -F"," '{print $2}')
    if [[ $NCNUM == NC_* ]]
    then
        echo "$NCNUM"
        RECORD=$(esearch -db nucleotide -query "$NCNUM" </dev/null | efetch -format gb)
        echo "$RECORD" | grep "^LOCUS" | awk '{print ","$3}' | \
            tr -d "\n" >> outtable.csv
    else
        echo ",n.a." >> outtable.csv
    fi
done < intable.csv
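A minimal sketch of the underlying effect, independent of Entrez Direct (here cat stands in for any command that reads standard input):
printf 'one\ntwo\nthree\n' > demo.txt
while read -r line; do
    echo "got: $line"
    cat > /dev/null    # inherits the loop's stdin and drains the rest of demo.txt
done < demo.txt        # prints only "got: one"
Redirecting the inner command's stdin (cat < /dev/null > /dev/null) restores all three iterations.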

Bash - Extract Matching String from GZIP Files Is Running Very Slow

Complete novice in Bash. I am trying to iterate through 1000 gzip files; maybe GNU parallel is the solution?
#!/bin/bash
ctr=0
echo "file_name,symbol,record_count" > $1
dir="/data/myfolder"
for f in "$dir"/*.gz; do
    gunzip -c $f | while read line;
    do
        str=`echo $line | cut -d"|" -f1`
        if [ "$str" == "H" ]; then
            if [ $ctr -gt 0 ]; then
                echo "$f,$sym,$ctr" >> $1
            fi
            ctr=0
            sym=`echo $line | cut -d"|" -f3`
            echo $sym
        else
            ctr=$((ctr+1))
        fi
    done
done
Any help to speed up the process would be greatly appreciated!
#!/bin/bash
ctr=0
export ctr
echo "file_name,symbol,record_count" > "$1"
dir="/data/myfolder"
export dir
doit() {
    f="$1"
    gunzip -c "$f" | while read line; do
        str=`echo $line | cut -d"|" -f1`
        if [ "$str" == "H" ]; then
            if [ $ctr -gt 0 ]; then
                echo "$f,$sym,$ctr"
            fi
            ctr=0
            sym=`echo $line | cut -d"|" -f3`
            echo $sym >&2
        else
            ctr=$((ctr+1))
        fi
    done
}
export -f doit
parallel doit ::: "$dir"/*.gz 2>&1 >> "$1"
The Bash while read loop is probably your main bottleneck here. Calling multiple external processes for simple field splitting will exacerbate the problem. Briefly,
while IFS="|" read -r first second third rest; do ...
leverages the shell's built-in field splitting functionality, but you probably want to convert the whole thing to a simple Awk script anyway.
echo "file_name,symbol,record_count" > "$1"
for f in "/data/myfolder"/*.gz; do
gunzip -c "$f" |
awk -F "\|" -v f="$f" -v OFS="," '
/H/ { if(ctr) print f, sym, ctr
ctr=0; sym=$3;
print sym >"/dev/stderr"
next }
{ ++ctr }'
done >>"$1"
This vaguely assumes that printing the lone sym is just for diagnostics. It should hopefully not be hard to see how this can be refactored if this is an incorrect assumption.
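For completeness, here is a sketch of the pure built-in field-splitting variant mentioned above, under the same assumptions about the record layout (header rows flagged by "H" in field 1, symbol in field 3). It avoids the per-line cut processes but will still be far slower than the awk version:
#!/bin/bash
echo "file_name,symbol,record_count" > "$1"
for f in "/data/myfolder"/*.gz; do
    ctr=0 sym=""
    while IFS="|" read -r first _ third _; do
        if [[ $first == "H" ]]; then
            if [ "$ctr" -gt 0 ]; then
                echo "$f,$sym,$ctr" >> "$1"
            fi
            ctr=0
            sym=$third
        else
            ctr=$((ctr+1))
        fi
    done < <(gunzip -c "$f")    # like the original, the final group is not flushed
done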

BASH better way to monitor files

I've made a Bash script to monitor some server log files for certain data and my method probably isn't the most efficient.
One section that specifically bugs me is that I have to write a newline to the monitored log so that the same line won't be read over continually.
Feedback would be greatly appreciated!
#!/bin/bash
serverlog=/home/skay/NewWorld/server.log
onlinefile=/home/skay/website/log/online.log
offlinefile=/home/skay/website/log/offline.log
index=0
# Creating the file
if [ ! -f "$onlinefile" ]; then
    touch $onlinefile
    echo "Name Date Time" >> "$onlinefile"
fi
if [ ! -f "$offlinefile" ]; then
    touch $offlinefile
    echo "Name Date Time" >> "$offlinefile"
fi
# Functions
function readfile {
    # Login Variables
    loginplayer=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $4}'`
    logintime=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $2}'`
    logindate=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $1}'`
    # Logout Variables
    logoutplayer=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $4}'`
    logouttime=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $2}'`
    logoutdate=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $1}'`
    # Check for Player Login
    if [ ! -z "$loginplayer" ]; then
        echo "$loginplayer $logindate $logintime" >> "$onlinefile"
        echo "Player $loginplayer login detected" >> "$serverlog"
        line=`grep -rne "$loginplayer" $offlinefile | cut -d':' -f1`
        if [ "$line" > 1 ]; then
            sed -i "$line"d $offlinefile
            unset loginplayer
            unset line
        fi
    fi
    # Check for Player Logout
    if [ ! -z "$logoutplayer" ]; then
        echo "$logoutplayer $logoutdate $logouttime" >> "$offlinefile"
        echo "Player $loginplayer logout detected" >> "$serverlog"
        line=`grep -rne "$logoutplayer" $onlinefile | cut -d':' -f1`
        if [ "$line" > 1 ]; then
            sed -i "$line"d $onlinefile
            unset logoutplayer
            unset line
        fi
    fi
}
# Loop
while [ $index -lt 100 ]; do
    readfile
done
Thanks!
Instead of using multiple
tail -n 1 file
invocations, try the following construct:
tail -f file | while read line; do
    echo "read: $line"
done
It will be much more reliable... and it won't read the same line twice ;)
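As a rough sketch of how that construct could replace the polling loop above, assuming the log format implied by the question's awk fields (date, time, [INFO], player name, then the action text):
tail -F "$serverlog" | while read -r logdate logtime _ player rest; do
    case "$rest" in
        "joined the game"*) echo "$player $logdate $logtime" >> "$onlinefile" ;;
        "left the game"*)   echo "$player $logdate $logtime" >> "$offlinefile" ;;
    esac
done
(tail -F rather than -f also survives log rotation.)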
Note: every grep/awk/etc call spawns a new process, and process creation is relatively expensive. It's not critical here, and if new lines occur rarely it's perfectly fine.
What I was getting at: if you are interested, take a look at bash's built-in string manipulation, e.g. the ${x/pattern/replacement} and ${x//pattern/replacement} expansions and friends, or try extended regexes with grep -E.
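For example (generic illustrations of those expansions, not tied to the log format):
x="foo aa bar aa"
echo "${x/aa/XX}"     # replace first match:  foo XX bar aa
echo "${x//aa/XX}"    # replace all matches:  foo XX bar XX
echo "${x%% *}"       # strip from the first space: foo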

Dynamic Patch Counter for Shell Script

I am developing a script on a Solaris 10 SPARC machine to calculate how many patches got installed successfully during a patch delivery. I would like to display to the user:
(X) of 33 patches were successfully installed
I would like my script to dynamically replace the "X" so the user knows there is activity occurring, sort of like a counter. I am able to show counts, but only on a new line. How can I make the bracketed number update in place as the script performs its checks? Don't worry about the "pass/fail" part; I am mainly concerned with making the output update inside the brackets.
for x in `cat ${PATCHLIST}`
do
    if ( showrev -p $x | grep $x > /dev/null 2>&1 ); then
        touch /tmp/patchcheck/* | echo "pass" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1 | awk '{print $1}'
    else
        touch /tmp/patchcheck/* | echo "fail" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1 | awk '{print $1}'
    fi
done
The usual way to do that is to emit a \r carriage return (CR) at some point and to omit the \n newline or line feed (LF) at the end of the line. Since you're using awk, you can try:
awk '{printf "\r%s", $1} END {print ""}'
For most lines, it outputs a carriage return and the data in field 1 (without a newline at the end). At the end of the input, it prints an empty string followed by a newline.
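As a standalone illustration of the trick (a toy loop, not the patch script):
for i in 1 2 3 4 5; do
    printf '\r(%d) of 5 patches were successfully installed' "$i"
    sleep 1    # stands in for the real patch check
done
printf '\n'    # finish the line once the loop is done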
One other possibility is that you should place the awk script outside your for loop:
for x in `cat ${PATCHLIST}`
do
    if ( showrev -p $x | grep $x > /dev/null 2>&1 ); then
        touch /tmp/patchcheck/* | echo "pass" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1
    else
        touch /tmp/patchcheck/* | echo "fail" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1
    fi
done | awk '{ printf "\r%s", $1} END { print "" }'
I'm not sure but I think you can apply similar streamlining to the rest of the repetitious code in the script:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    wc /tmp/patchcheck/* | tail -1
done | awk '{ printf "\r%s", $1} END { print "" }'
This eliminates the touch, which doesn't seem to do much, especially since its empty output is piped to echo, which ignores its standard input. It also eliminates the sub-shell in the if line, and uses grep's -s option to keep it quiet.
I'm still a bit dubious about the wc line. I think you're looking to count the number of files, in effect, since each file should contain one line (pass or fail), unless you listed some patch twice in the file identified by ${PATCHLIST}. In that case, I'd probably use:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    ls /tmp/patchcheck | wc -l
done | awk '{ printf "\r%s", $1} END { print "" }'
This lists the files in /tmp/patchcheck and counts the number of lines output. It means you could simply print $0 in the awk script, since $0 and $1 are the same. To the extent efficiency matters (not a lot), this is more efficient because ls only scans a directory rather than having wc open each file. More importantly, it is a more accurate description of what you are trying to do. If you later want to count the passes, you can use:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    grep '^pass$' /tmp/patchcheck/* | wc -l
done | awk '{ printf "\r%s", $1} END { print "" }'
Of course, this goes back to reading each file, but you're getting more refined information out of it now (and that's the penalty for the more refined information).
Here is how I got my patch installation script working the way I wanted:
while read pkgline
do
patchadd -d ${pkgline} >> /var/log/patch_install.log 2>&1
# Create audit file for progress indicator
for x in ${pkgline}
do
if ( showrev -p ${x} | grep -i ${x} > /dev/null 2>&1 ); then
echo "${x}" >> /tmp/pass
else
echo "${x}" >> /tmp/fail
fi
done
# Progress indicator
for y in `wc -l /tmp/pass | awk '{print $1}'`
do
printf "\r${y} out of `wc -l /patchdir/master | awk '{print $1}'` packages installed for `hostname`. Last patch installed: (${pkgline})"
done
done < /patchdir/master
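Note that the inner for y loop only ever iterates once; a tidier equivalent of that progress line might be (same files as above):
passed=$(wc -l < /tmp/pass)
total=$(wc -l < /patchdir/master)
printf '\r%s out of %s packages installed for %s. Last patch installed: (%s)' \
    "$passed" "$total" "$(hostname)" "$pkgline"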
