Bash variable not saving new data given? - bash

I wrote a Bash function:
CAPACITY=0
USED=0
FREE=0
df | grep /$ | while read LINE ; do
CAPACITY=$(echo "${CAPACITY}+$(echo ${LINE} | awk '{print $2}')" | bc )
USED="$[${USED}+$(echo ${LINE} | awk '{print $3}')]"
FREE="$[${FREE}+$(echo ${LINE} | awk '{print $4}')]"
done
echo -e "${CAPACITY}\t${USED}\t${FREE}"
for i in /home /etc /var /usr; do
df | grep ${i}[^' ']*$ | while read LINE ; do
CAPACITY=$[${CAPACITY}+$(echo ${LINE} | awk '{print $2}')]
USED=$[${USED}+$(echo ${LINE} | awk '{print $3}')]
FREE=$[${FREE}+$(echo ${LINE} | awk '{print $4}')]
done
done
if [ "${1}" = "explode?" ] ; then
if [ $[${USED}*100/${CAPACITY}] -ge 95 ] ; then
return 0
else
return 1
fi
elif [ "${1}" = "check" ] ; then
echo -e "Capacity = $(echo "scale=2; ${CAPACITY}/1024/1024" | bc)GB\nUsed = $(echo "scale=2; ${USED}/1024/1024" | bc)GB\nAvaliable = $(echo "scale=2; ${FREE}/1024/1024" | bc)GB\nUsage = $(echo "scale=2; ${USED}*100/${CAPACITY}" | bc)%"
fi
}
Note the two different methods used to store the data in the CAPACITY/USED/FREE variables in the first while loop, and the echo right after it to debug the code.
It seems that while running the script, the data assigned to the variables inside the loop isn't saved.
Here's the output while running the script with 'set -x':
+ CAPACITY=0
+ USED=0
+ FREE=0
+ df
+ grep '/$'
+ read LINE
++ bc
+++ echo /dev/vda1 52417516 8487408 43930108 17% /
+++ awk '{print $2}'
++ echo 0+52417516
+ CAPACITY=52417516
++ echo /dev/vda1 52417516 8487408 43930108 17% /
++ awk '{print $3}'
+ USED=8487408
++ echo /dev/vda1 52417516 8487408 43930108 17% /
++ awk '{print $4}'
+ FREE=43930108
+ read LINE
+ echo -e '0\t0\t0'
0 0 0
Why the heck don't the variables store the new numbers even though it clearly shows a new number was stored?

Why ... don't the variables store the new numbers even though it clearly shows a new number was stored?
Because the right part of | is run in a subshell, so the changes are not propagated to the parent shell.
$ a=1
$ echo a=$a
a=1
$ true | { a=2; echo a=$a; }
a=2
$ echo a=$a
a=1
For more info read the BashFAQ entry "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates?". The common solution is to use process substitution:
while IFS= read -r line; do
blabla
done < <( blabla )
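Applied to the first loop in the question, that looks like the sketch below. It keeps the question's grep pattern and awk field numbers and only moves the df pipeline into a process substitution, so the loop body runs in the current shell:
CAPACITY=0
USED=0
FREE=0
while IFS= read -r LINE; do
    CAPACITY=$(( CAPACITY + $(echo "${LINE}" | awk '{print $2}') ))
    USED=$(( USED + $(echo "${LINE}" | awk '{print $3}') ))
    FREE=$(( FREE + $(echo "${LINE}" | awk '{print $4}') ))
done < <(df | grep '/$')
# the sums now survive the loop
echo -e "${CAPACITY}\t${USED}\t${FREE}"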
The $[...] syntax is deprecated; use $((...)) instead (see the Bash Hackers wiki page on obsolete and deprecated syntax).
In bash, just use the arithmetic command ((...)) for numeric comparisons: if (( used * 100 / capacity >= 95 )); then.
By convention upper case variables are used for exported variables. Use lower case variable names for script local variables.
There is no need to grep the output of df. Just df /home /etc /var /usr. Or really just read -r capacity used free < <(df /home /etc /var /usr | awk 'NR > 1 { capacity += $2; used += $3; free += $4 } END { print capacity, used, free }'), which skips the header line and sums the 1K-blocks, Used and Available columns.
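Putting those suggestions together, the whole function might shrink to something like the sketch below. The name disk_usage is made up (the question doesn't show the function header), the mount list is the one from the question plus / for the root filesystem, and df -P keeps every filesystem on a single line. Note that df prints one line per path argument, so paths that share a filesystem get counted more than once; dedupe if that matters for you.
disk_usage() {
    local capacity used free
    # one df call, one awk pass: skip the header, sum the 1K-blocks, Used and Available columns
    read -r capacity used free < <(df -P / /home /etc /var /usr |
        awk 'NR > 1 { capacity += $2; used += $3; free += $4 } END { print capacity, used, free }')
    if [ "$1" = "explode?" ]; then
        # status 0 (success) when usage is at least 95%, 1 otherwise
        (( used * 100 / capacity >= 95 ))
    elif [ "$1" = "check" ]; then
        printf 'Capacity = %sGB\nUsed = %sGB\nAvailable = %sGB\nUsage = %s%%\n' \
            "$(echo "scale=2; ${capacity}/1024/1024" | bc)" \
            "$(echo "scale=2; ${used}/1024/1024" | bc)" \
            "$(echo "scale=2; ${free}/1024/1024" | bc)" \
            "$(echo "scale=2; ${used}*100/${capacity}" | bc)"
    fi
}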

Related

How to break pipe if stdin is empty?

I want to break the whole pipe if stdin is empty. I tried to combine xargs -r and tee, meaning don't print or write anything if stdin is empty, but it failed:
...| upstream commands | xargs -r tee output.txt | downstream commands | ...
Any feedback appreciated.
There is no way to actually terminate a bash pipeline conditionally: all commands in a pipeline are started simultaneously. There is, however, a tool that assists with creating a conditional pipeline. In moreutils you can find the tool ifne, which executes a command if and only if its standard input is not empty. So you could write something like:
$ command1 | ifne command2 | ifne command3 | ifne command4
Here command1 and all the ifne commands are started simultaneously. Only if an ifne receives input on its standard input will it start its respective commandX.
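Applied to the pipeline from the question, that might look like the following sketch (with upstream commands and downstream commands standing in for the unnamed stages, as in the question):
...| upstream commands | ifne tee output.txt | ifne downstream commands | ...
Here tee is only started, and output.txt only written, when something actually arrives on its stdin.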
A pipe will break if a command fails. You can add grep in between to achieve this. An example:
$ echo ok | awk '{print $0,"1"}' | awk '{print $0,"2"}' | awk '{print $0,"3"}'
ok 1 2 3
Now add grep:
$ echo ok | grep -Ei '^.+$' | awk '{print $0,"1"}' | awk '{print $0,"2"}' | awk '{print $0,"3"}'
ok 1 2 3
And test empty echo:
$ echo | awk '{print $0,"1"}' | awk '{print $0,"2"}' | awk '{print $0,"3"}'
1 2 3
$ echo | grep -Ei '^.+$' | awk '{print $0,"1"}' | awk '{print $0,"2"}' | awk '{print $0,"3"}'
It looks like this works, but it doesn't. Interesting indeed. Well then, obviously pipes don't fit here; try this approach:
#!/bin/bash
set -x
fun(){
data=$(echo "$1"); [[ $data ]] && data=$(awk '{print $0,1}' <<< "$data") || return 1; [[ $data ]] && data=$(awk '{print $0,2}' <<< "$data") || return 1; [[ $data ]] && data=$(awk '{print $0,3}' <<< "$data") || return 1; echo "$data"
}
fun ok
fun
Testing:
$ ./test
+ fun ok
++ echo ok
+ data=ok
+ [[ -n ok ]]
++ awk '{print $0,1}'
+ data='ok 1'
+ [[ -n ok 1 ]]
++ awk '{print $0,2}'
+ data='ok 1 2'
+ [[ -n ok 1 2 ]]
++ awk '{print $0,3}'
+ data='ok 1 2 3'
+ echo 'ok 1 2 3'
ok 1 2 3
+ fun
++ echo ''
+ data=
+ [[ -n '' ]]
+ return 1
More readable variant:
#!/bin/bash
set -x
fun(){
data=$(echo "$1")
[[ $data ]] && data=$(awk '{print $0,1}' <<< "$data") || return 1
[[ $data ]] && data=$(awk '{print $0,2}' <<< "$data") || return 1
[[ $data ]] && data=$(awk '{print $0,3}' <<< "$data") || return 1
echo "$data"
}
fun ok
fun

Tail recursion in Bash

I'm trying to write a script that verifies that all the stats of a metric are positive before I make any further changes using the service. The part I'm stuck on is how to handle the tail recursion for the following use case:
function load_cache() {
cacheStat=( $(curl -s -X GET "http://localhost:${MET_PORT}/metrics" | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="cacheSize" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w "cacheSize" | cut -d ':' -f 2) )
# the above gives me the output (cacheStat) as -
# 2.0
# 311.0
# 102.0
count=0
for index in ${!cacheStat[*]}
do
if [[ ${cacheStat[$index]} -le 0 ] && [ $count -lt 3 ]]; then
sleep .5
count=$[$count +1];
load_cache
#Wouldn't the above initialise `count` to 0 again.
fi
done
}
What I am trying to do: if any of the elements in cacheStat is less than or equal to 0, sleep for .5 seconds, query cacheStat again, and perform the check on all its elements again, but not more than 3 times, which is what I'm trying to use count for.
Open to any suggestion to improve the script.
Update -
After modifying the script as suggested by @Inian to:
RETRY_COUNT=0
function load_cache() {
cacheStat=( $(curl -s -X GET "http://localhost:${MET_PORT}/metrics" | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="cacheSize" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w "cacheSize" | cut -d ':' -f 2) );
for index in ${!cacheStat[*]}
do
echo "Stat - ${cacheStat[$index]}"
if (( ${cacheStat[$index]} <= 0 )) && (( $RETRY_COUNT < 3 )); then
echo "Attempt count - ${RETRY_COUNT}"
sleep .5s
RETRY_COUNT=$((RETRY_COUNT +1));
load_cache
fi
done
}
The logs read -
+ cacheStat=($(curl -s -X GET "http://localhost:${MET_PORT}/metrics" | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="cacheSize" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w "cacheSize" | cut -d ':' -f 2))
++ curl -s -X GET http://localhost:8181/metrics
++ sed 's/\\\\\//\//g'
++ sed 's/[{}]//g'
++ sed 's/[\,]/ /g'
++ awk -v k=cacheSize '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
++ sed 's/\"\:\"/\|/g'
++ cut -d : -f 2
++ sed 's/\"//g'
++ grep -w cacheSize
It doesn't even iterate I guess.
Remove the infinite recursion by moving the count=0 outside the function body.
Also, your script has a couple of issues, a syntax violation and an outdated construct; lines 12-14 should have been:
if [[ ${cacheStat[$index]} -le 0 ]] && [[ $count -lt 3 ]]; then
sleep .5s
count=$((count +1));
load_cache
fi
Or use the more readable arithmetic command, ((...)), in the if clause, as:
if (( ${cacheStat[$index]} <= 0 )) && (( $count < 3 )); then
bash does not natively support floating-point arithmetic (comparison, in your case); use a third-party tool like bc or awk for this:
if (( $(echo "${cacheStat[$index]} <= 0" | bc -l) )) && (( $count < 3 )); then
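If you prefer awk over bc for that floating-point check, an equivalent sketch would be:
if awk -v val="${cacheStat[$index]}" 'BEGIN { exit !(val <= 0) }' && (( $count < 3 )); then
Here awk exits with status 0 (true for the if) exactly when the value is less than or equal to 0.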
You can avoid all that ad-hoc JSON parsing by using a JSON parser.
# Avoid using Bash-only "function" keyword
load_cache () {
local try
for try in 1 2 3; do
# Suction: jq doesn't return non-zero exit code for no match
# work around that by piping to grep .
if curl -s -X GET "http://localhost:${MET_PORT}/metrics" |
jq '.[] | select(.cacheSize <= 0)' |
grep .
then
# Notice also redirection to stderr for diagnostic messages
echo "$0: Attempt $try failed, sleeping before retrying" >&2
sleep 0.5
else
# Return with success, we are done, exit function
return 0
fi
done
# Return failure
return 1
}
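Calling it could then look like this sketch (8181 is the port from the set -x log in the question):
MET_PORT=8181
if load_cache; then
    echo "all cacheSize values are positive, safe to continue" >&2
else
    echo "cacheSize still not positive after 3 attempts" >&2
fi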
I see no reason to prefer recursion over a straightforward for loop for controlling the number of retries.
If you never want to see the offending values, you can use grep -q in the conditional. I'm expecting you would do load_cache >/dev/null if you don't want the output.
If you want to see the non-offending values, the code will need some refactoring, but I'm focusing on getting the central job done elegantly and succinctly. Here's a sketch, mainly to show you the jq syntax for that.
load_cache () {
local try
local results
for try in 1 2 3; do
results=$(curl -s -X GET "http://localhost:${MET_PORT}/metrics" |
jq '.[] | .cacheSize' | tr '\n' ' ')
echo "$0: try $try: cacheSize $results" >&2
# Funky: massage the expression we test against into a normalized form
# so that we know that the value will always be preceded by a space
case " $results " in
*" 0 "* | *" -"* )
case $try in
3) echo "$0: try $try failed; aborting" >&2 ;;
*) echo "$0: try $try failed; sleeping before retrying" >&2
sleep 0.5 ;;
esac;;
*) return 0
esac
done
return 1
}
The nested case to avoid sleeping on the final iteration isn't particularly elegant, but at least it should ensure that the reader is awake. /-8

Converting multiple lines of bash into a single line

Is there any short and easy way to convert multiple lines of script into a single line to be parsed by an eval command?
i.e.
getent group | cut -f3 -d":" | sort -n | uniq -c |\
while read x ; do
[ -z "${x}" ] && break
set - $x ; if [ $1 -gt 1 ]; then
grps=`getent group | nawk -F: '($3 == n) { print $1 }' n=$2 | xargs` ; echo "Duplicate GID ($2): ${grps}" ; fi done
one_line=`cat your_script_file | sed ":a s/[\]$//; N; s/[\]$//; s/\n/ /; t a ;"`
echo $one_line
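If the goal is to feed the result to eval (as the question mentions), the follow-up is simply:
eval "${one_line}"
One caveat: joining lines with spaces only yields valid shell where the original lines already end in ; (or where the grammar doesn't require one), so a script written purely in multi-line style may still need semicolons added before it can run as a single line.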

Variable loss in redirected bash while loop

I have the following code
for ip in $(ifconfig | awk -F ":" '/inet addr/{split($2,a," ");print a[1]}')
do
bytesin=0; bytesout=0;
while read line
do
if [[ $(echo ${line} | awk '{print $1}') == ${ip} ]]
then
increment=$(echo ${line} | awk '{print $4}')
bytesout=$((${bytesout} + ${increment}))
else
increment=$(echo ${line} | awk '{print $4}')
bytesin=$((${bytesin} + ${increment}))
fi
done < <(pmacct -s | grep ${ip})
echo "${ip} ${bytesin} ${bytesout}" >> /tmp/bwacct.txt
done
I would like this to print the incremented values to bwacct.txt, but instead the file is full of zeroes:
91.227.223.66 0 0
91.227.221.126 0 0
127.0.0.1 0 0
My understanding of Bash is that a redirected while loop should preserve variables. What am I doing wrong?
First of all, simplify your script! Usually there are better ways in bash, and most of the time you can rely on pure bash solutions instead of running awk or other tools.
Then add some debugging!
Here is a slightly refactored script with debugging:
#!/bin/bash
for ip in "$(ifconfig | grep -oP 'inet addr:\K[0-9.]+')"
do
bytesin=0
bytesout=0
while read -r line
do
read -r subIp _ _ increment _ <<< "$line"
if [[ $subIp == "$ip" ]]
then
((bytesout+=increment))
else
((bytesin+=increment))
fi
# some debugging
echo "line: $line"
echo "subIp: $subIp"
echo "bytesin: $bytesin"
echo "bytesout: $bytesout"
done <<< "$(pmacct -s | grep "$ip")"
echo "$ip $bytesin $bytesout" >> /tmp/bwacct.txt
done
Much clearer now, huh? :)
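The read -r subIp _ _ increment _ line is what replaces the two awk calls: read splits the line on whitespace and the _ placeholders throw away the fields you don't care about. A standalone illustration with made-up values (the real pmacct -s column layout may differ):
$ line="91.227.223.66 X Y 12345 Z"
$ read -r subIp _ _ increment _ <<< "$line"
$ echo "$subIp $increment"
91.227.223.66 12345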

shell script sum in for loop not working

size=`ls -l /var/temp.* | awk '{ print $5}'`
fin_size=0
for row in ${size} ;
do
fin_size=`echo $(( $row + $fin_size )) | bc`;
done
echo $fin_size
is not working!! echo $fin_size is printing some garbage negative value.
Where am I going wrong?
(My bash is old and I'm supposed to work with only this; Linux kernel: 2.6.39.)
Don't parse ls.
Why not use du as shown below?
du -cb /var/temp.* | tail -1
Because it cannot be stressed enough: Why you shouldn't parse the output of ls(1)
Use e.g. du as suggested by dogbane, or find:
$ find /var -maxdepth 1 -type f -name "temp.*" -printf "%s\n" | awk '{total+=$1}END{print total}'
or stat:
$ stat -c%s /var/temp.* | awk '{total+=$1}END{print total}'
or globbing and stat (unnecessary, slow):
total=0
for file in /var/temp.*; do
[ -f "${file}" ] || continue
size="$(stat -c%s "${file}")"
((total+=size))
done
echo "${total}"
Below should be enough:
ls -l /var/temp.* | awk '{a+=$5}END{print a}'
No need for you to run the for loop. This means:
size=`ls -l /var/temp.* | awk '{ print $5}'`
fin_size=0
for row in ${size} ;
do
fin_size=`echo $(( $row + $fin_size )) | bc`;
done
echo $fin_size
The whole thing above can be replaced with:
fin_size=`ls -l /var/temp.* | awk '{a+=$5}END{printf("%10d",a);}'`
echo $fin_size
