I'm new to bash scripts and I really need your help.
The following script reads humidity and temperature from a DHT22 sensor on my BananaPi and sends the values to my home automation system.
#!/bin/bash
cd /opt/lol_dht22
WERTE=$(./loldht 7 | grep "Humidity")
Temp=( $(echo $WERTE | awk '{ print $ 7}'))
Hum=( $(echo $WERTE | awk '{ print $ 3}'))
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 temperature $Temp"
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 humidity $Hum"
The results look like 30.00 (i.e. with decimal places), and sometimes the sensor fails and gives an unrealistic value like -3000.00 or similar.
So I wanted to add an if condition that checks whether the value is greater than or equal to 0 and less than or equal to 100.
I tried things like:
#!/bin/bash
cd /opt/lol_dht22
WERTE=$(./loldht 7 | grep "Humidity")
Temp=( $(echo $WERTE | awk '{ print $ 7}' | tr '.' ','))
Hum=( $(echo $WERTE | awk '{ print $ 3}' | tr '.' ','))
if [[ $Temp -ge 0 && $Temp -le 100 && $Hum -ge 0 && $Hum -le 100 ]]; then
Temp1=( $(echo $Temp | tr ',' '.'))
Hum1=( $(echo $Hum | tr ',' '.'))
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 temperature $Temp1"
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 humidity $Hum1"
else
exit;
fi
It seemed like the decimal separator was the problem, so I tried switching the dot (.) to a comma (,) and back. But sometimes there are still values that are not between 0 and 100.
I hope someone can help me,
thank you!
Regards,
Klaus
awk uses doubles, and can easily be used to test the range of decimal numbers.
...
Temp=$(echo $WERTE | tr , . | awk '$7>=0&&$7<=100{print$7}')
Hum=$(echo $WERTE | tr , . | awk '$3>=0&&$3<=100{print$3}')
if [ -n "$Temp" -a -n "$Hum" ]; then
...
fi
-lt, -ge, etc. only work with integers, and inside [[ ]] the < and > operators compare strings, not numbers, so they don't handle floating point either. For decimal values, do the comparison in an external tool such as awk or bc.
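For example, a range check with bc could look like this (just a sketch; it assumes GNU bc is installed and that $Temp holds a value such as 22.10):
if (( $(echo "$Temp >= 0 && $Temp <= 100" | bc -l) )); then
    echo "temperature is in range"
fi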
Some good practices in bash:
#!/bin/bash
set -e #autoexit if you run into an error
cd /opt/lol_dht22
#Reserve all-caps variables for exported variables and bash config variables
werte="$(./loldht 7 | grep "Humidity")" #always double-quote $ expressions
#foo=( bar ) creates arrays, you don't want that
#printf "%s\n" is more robust than echo if you're printing the contents of variables
Temp="$(printf '%s\n' "$werte" | awk '{ print $7 }' | tr . ,)"
Hum="$(printf '%s\n' "$werte" | awk '{ print $3 }' | tr . ,)"
#-lt etc. only work with integers
if [[ "$Temp" >= 0 && "$Temp" < 100 && "$Hum" >= 0 && "$Hum" < 100 ]]; then
Temp1="$(printf '%s\n' "$Temp" | tr , .)"
Hum1="$(printf '%s\n' "$Hum" | tr , .)"
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 temperature $Temp1"
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 humidity $Hum1"
else
exit 1 #return nonzero codes for errors
fi
Please mention what the content of WERTE is. That would help in answering the question, because awk's FS is whitespace by default and we need to understand what ends up in $3 and $7.
Thank you guys,
I've learned a lot,
and it seems to work:
#!/bin/bash
set -e
cd /opt/lol_dht22
WERTE=$(./loldht 7 | grep "Humidity")
Temp=$(echo $WERTE | awk '$7>=0&&$7<=100{print$7}')
Hum=$(echo $WERTE | awk '$3>=0&&$3<=100{print$3}')
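# awk only prints the field when it is inside the 0-100 range, so an empty variable means the reading was bad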
if [ -n "$Temp" -a -n "$Hum" ]; then
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 temperature $Temp"
perl /opt/fhem/fhem.pl 7072 "setreading DHT22 humidity $Hum"
else
exit 1
fi
You've saved my Sunday =)
Best Regards,
Klaus
I have this shell script variable, var. It holds three entries separated by newlines. From this variable I want to extract 2 and 0.078688, just these two numbers.
var="USER_ID=2
# 0.078688
Suhas"
This is the code I tried:
echo "$var" | grep -o -P '(?<=\=).*(?=\n)' # For extracting 2
echo "$var" | awk -v FS="(# |\n)" '{print $2}' # For extracting 0.078688
Neither of the above works. What is the problem here, and how can I fix it?
Just use tr alone to retain the numerical digits, the dot (.) and the whitespace, and remove everything else:
tr -cd '0-9. ' <<<"$var"
2 0.078688
From the tr man page, on the usage of the -c and -d flags:
tr [OPTION]... SET1 [SET2]
-c, -C, --complement
use the complement of SET1
-d, --delete
delete characters in SET1, do not translate
To store it in variables,
IFS=' ' read -r var1 var2 < <(tr -cd '0-9. ' <<<"$var")
printf "%s\n" "$var1"
2
printf "%s\n" "$var2"
0.078688
Or in an array as
IFS=' ' read -ra numArray < <(tr -cd '0-9. ' <<<"$var")
printf "%s\n" "${numArray[#]}"
2
0.078688
Note: the -c and -d flags of tr are POSIX compliant and will work on any system that has tr installed.
echo "$var" |grep -oP 'USER_ID=\K.*'
2
echo "$var" |grep -oP '# \K.*'
0.078688
Your solution is nearly perfect; you just need to change \n to $, which represents the end of the line.
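For example, the first command then becomes (a sketch of that one-character change):
echo "$var" | grep -o -P '(?<=\=).*(?=$)'
2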
echo "$var" |awk -F'# ' '/#/{print $2}'
0.078688
echo "$var" |awk -F'=' '/USER_ID/{print $2}'
2
You can do it with pure bash using a regex:
#!/bin/bash
var="USER_ID=2
# 0.078688
Suhas"
[[ ${var} =~ =([0-9]+).*#[[:space:]]([0-9\.]+) ]] && result1="${BASH_REMATCH[1]}" && result2="${BASH_REMATCH[2]}"
echo "${result1}"
echo "${result2}"
With awk:
First value:
echo "$var" | grep 'USER_ID' | awk -F "=" '{print $2}'
Second value:
echo "$var" | grep '#' | awk '{print $2}'
Assuming the data has the same format as your sample:
# For extracting 2
echo "$var" | sed -e '/.*=/!d' -e 's///'
echo "$var" | awk -F '=' 'NR==1{ print $2}'
# For extracting 0.078688
echo "$var" | sed -e '/.*#[[:blank:]]*/!d' -e 's///'
echo "$var" | awk -F '#' 'NR==2{ print $2}'
I need some help. I want the result to be
UP:N%:N%
but the current result is
UP:N%
:N%
This is the code:
#!/bin/bash
UP=$(pgrep mysql | wc -l);
if [ "$UP" -ne 1 ];
then
echo -n "DOWN"
else
echo -n "UP:"
fi
df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t && echo -n ":"
top -bn2 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk 'END{print 100 - $1"%"}'
You can use command substitution for the df line (note that you're creating a subshell this way):
echo -n $(df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t ):
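Putting it together, the whole script could then look like this (a sketch that reuses the pipelines from your question, with only the df line wrapped in command substitution):
#!/bin/bash
UP=$(pgrep mysql | wc -l)
if [ "$UP" -ne 1 ]; then
    echo -n "DOWN"
else
    echo -n "UP:"
fi
echo -n "$(df -hl | grep 'sda1' | awk '{percent+=$5} END{print percent"%"}' | column -t):"
top -bn2 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk 'END{print 100 - $1"%"}'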
I have the following code
for ip in $(ifconfig | awk -F ":" '/inet addr/{split($2,a," ");print a[1]}')
do
bytesin=0; bytesout=0;
while read line
do
if [[ $(echo ${line} | awk '{print $1}') == ${ip} ]]
then
increment=$(echo ${line} | awk '{print $4}')
bytesout=$((${bytesout} + ${increment}))
else
increment=$(echo ${line} | awk '{print $4}')
bytesin=$((${bytesin} + ${increment}))
fi
done < <(pmacct -s | grep ${ip})
echo "${ip} ${bytesin} ${bytesout}" >> /tmp/bwacct.txt
done
I would like it to print the accumulated values to bwacct.txt, but instead the file is full of zeroes:
91.227.223.66 0 0
91.227.221.126 0 0
127.0.0.1 0 0
My understanding of Bash is that a while loop fed from process substitution (rather than from a pipe) should preserve variables. What am I doing wrong?
First of all, simplify your script! There is usually a better way in bash, and most of the time you can rely on pure bash solutions instead of running awk or other tools.
Then add some debugging!
Here is a slightly refactored script with debugging:
#!/bin/bash
for ip in "$(ifconfig | grep -oP 'inet addr:\K[0-9.]+')"
do
bytesin=0
bytesout=0
while read -r line
do
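# field 1 of the pmacct line is the IP and field 4 the counter we sum, mirroring $1/$4 in the original script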
read -r subIp _ _ increment _ <<< "$line"
if [[ $subIp == "$ip" ]]
then
((bytesout+=increment))
else
((bytesin+=increment))
fi
# some debugging
echo "line: $line"
echo "subIp: $subIp"
echo "bytesin: $bytesin"
echo "bytesout: $bytesout"
done <<< "$(pmacct -s | grep "$ip")"
echo "$ip $bytesin $bytesout" >> /tmp/bwacct.txt
done
Much clearer now, huh? :)
I have a file where each key-value pair takes a new line. A key may have multiple values. I want to return a list of all pairs that have a "special" key, where "special" is defined by some function.
For example, if "special" is defined as a key that somewhere has a value of 100, then given
A 100
B 400
A hello
B world
C 100
I would return
A 100
A hello
C 100
How to do this in bash?
#!/bin/bash
special=100
awk -v s="$special" '
{
    a[$1,$2]
    if ($2 ~ s)
        k[$1]
}
END {
    for (key in k)
        for (pair in a) {
            split(pair, b, SUBSEP)
            if (b[1] == key)
                print b[1], b[2]
        }
}' ./infile
Proof of Concept
$ special=100; echo -e "A 100\nB 400\nA hello\nB world\nC 100" | awk -v s=$special '{a[$1,$2];if($2 ~ s)k[$1]}END{for(key in k)for(pair in a){split(pair,b,SUBSEP); if(b[1] == key)print b[1],b[2]}}'
A hello
A 100
C 100
This would also work:
id=`grep "\<$special\>$" yourfile | sed -e "s/$special//"`
[ -z "$id" ] || grep "^$id" yourfile
Returns:
If special=100
A 100
A hello
C 100
If special="hello"
A 100
A hello
If special="A"
(nothing)
If special="ello"
(nothing)
Notes
drop the \<\> if you want a partial match
add | uniq at the end (see the sketch below) if the same pair can occur several times (A 100, A 100, ...) and you don't want duplicates in your output.
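For instance, the deduplicated variant could look like this (sort -u is used instead of plain uniq so that non-adjacent duplicates are caught as well):
id=`grep "\<$special\>$" yourfile | sed -e "s/$special//"`
[ -z "$id" ] || grep "^$id" yourfile | sort -u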
***** script *****
#!/bin/bash
grep " $1" data.txt | cut -d ' ' -f1 | grep -f /dev/fd/0 data.txt
result:
./test.sh 100
A 100
A hello
C 100
***** inline *****
the first grep must contain the 'special' preceded by a space ' ':
grep " 100" data.txt | cut -d ' ' -f1 | grep -f /dev/fd/0 data.txt
A 100
A hello
C 100
awk -v special="100" '$2==special{a[$1]}($1 in a)' file
This remembers, in the array a, every key whose second field equals the special value, and prints any line whose key is already in a; a pair is therefore only caught if the special value appears on or before that line for its key, which holds for the sample input.
Whew! My bash was incredibly rusty! Hope this helps:
FILE=$1
IFS=$'\n' # Internal Field Separator, set so the loop splits on newlines rather than whitespace
FIND="100"
KEEP=""
for line in `cat $FILE`; do
key=`echo $line | cut -d ' ' -f1`;
value=`echo $line | cut -d ' ' -f2`;
echo "$key = $value"
if [ "$value" == "$FIND" ]; then
KEEP="$key $KEEP"
fi
done
echo "Keys to keep: $KEEP"
# You can now do whatever you want with those keys.
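For example, to then print every pair whose key was kept, a possible continuation (reusing $FILE and $KEEP from above) is:
unset IFS  # back to default word splitting so $KEEP splits on spaces
for key in $KEEP; do
  grep "^$key " "$FILE"
done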
Is there a way to get the integer that wc returns in bash?
Basically I want to write the line count and word count to the screen after the file name.
output: filename linecount wordcount
Here is what I have so far:
files=`ls`
for f in $files;
do
if [ ! -d $f ] #only print out information about files !directories
then
# some way of getting the wc integers into shell variables and then printing them
echo "$f $lines $words"
fi
done
Simplest answer ever:
wc < filename
Just:
wc -l < file_name
will do the job. But on some platforms (macOS, for example) the output includes leading whitespace, because wc right-aligns the number.
You can use the cut command to get just the first word of wc's output (which is the line or word count):
lines=`wc -l $f | cut -f1 -d' '`
words=`wc -w $f | cut -f1 -d' '`
wc $file | awk '{print $4, $2, $1}'
Adjust as necessary for your layout.
It's also nicer to use positive logic ("is a file") over negative ("not a directory")
[ -f $file ] && wc $file | awk '{print $4, $2, $1}'
Sometimes wc outputs in different formats on different platforms. For example:
In OS X:
$ echo aa | wc -l
       1
In CentOS:
$ echo aa | wc -l
1
So using only cut may not retrieve the number. Instead, try tr to delete the space characters:
$ echo aa | wc -l | tr -d ' '
1
The accepted/popular answers do not work on OSX.
Any of the following should be portable on bsd and linux.
wc -l < "$f" | tr -d ' '
OR
wc -l "$f" | tr -s ' ' | cut -d ' ' -f 2
OR
wc -l "$f" | awk '{print $1}'
If you redirect the filename into wc it omits the filename on output.
Bash:
read lines words characters <<< $(wc < filename)
or
read lines words characters <<EOF
$(wc < filename)
EOF
Instead of using for to iterate over the output of ls, do this:
for f in *
which will work if there are filenames that include spaces.
If you can't use globbing, you should pipe into a while read loop:
find ... | while read -r f
or use process substitution
while read -r f
do
something
done < <(find ...)
If the file is small you can afford to call wc twice, and use something like the following, which avoids piping into an extra process:
lines=$((`wc -l < "$f"`))
words=$((`wc -w < "$f"`))
The $((...)) is the Arithmetic Expansion of bash. It removes any whitespace from the output of wc in this case.
This solution makes more sense if you need either the linecount or the wordcount.
How about with sed?
wc -l /path/to/file.ext | sed 's/ *\([0-9]*\) .*/\1/'
typeset -i a=$(wc -l fileName.dat | xargs echo | cut -d' ' -f1)
Try this for numeric result:
nlines=$( wc -l < $myfile )
Something like this may help:
#!/bin/bash
printf '%-10s %-10s %-10s\n' 'File' 'Lines' 'Words'
for fname in file_name_pattern*; {
[[ -d $fname ]] && continue
lines=0
words=()
while read -r line; do
((lines++))
words+=($line)
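# $line is deliberately unquoted here: word splitting appends each word as its own array element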
done < "$fname"
printf '%-10s %-10s %-10s\n' "$fname" "$lines" "${#words[@]}"
}
To (1) run wc once, and (2) not assign any superfluous variables, use
read lines words <<< $(wc < $f | awk '{ print $1, $2 }')
Full code:
for f in *
do
if [ ! -d $f ]
then
read lines words <<< $(wc < $f | awk '{ print $1, $2 }')
echo "$f $lines $words"
fi
done
Example output:
$ find . -maxdepth 1 -type f -exec wc {} \; # without formatting
1 2 27 ./CNAME
21 169 1065 ./LICENSE
33 130 961 ./README.md
86 215 2997 ./404.html
71 168 2579 ./index.html
21 21 478 ./sitemap.xml
$ # the above code
404.html 86 215
CNAME 1 2
index.html 71 168
LICENSE 21 169
README.md 33 130
sitemap.xml 21 21
The solutions proposed in the other answers don't work on Darwin kernels.
Please consider the following solutions, which work on all UNIX systems:
print exactly the number of lines of a file:
wc -l < file.txt | xargs
print exactly the number of characters of a file:
wc -m < file.txt | xargs
print exactly the number of bytes of a file:
wc -c < file.txt | xargs
print exactly the number of words of a file:
wc -w < file.txt | xargs
There is a great solution with examples on stackoverflow here
I will copy the simplest solution here:
FOO="bar"
echo -n "$FOO" | wc -l | bc # "3"
Maybe these pages should be merged?
Try this:
wc `ls` | awk '$NF != "total" { LINE += $1; WC += $2 } END { print "lines: " LINE " words: " WC }'
It keeps a running line count and word count (LINE and WC), increases them with the values extracted from wc's output ($1 for the line column and $2 for the word column) while skipping wc's own "total" line, and finally prints the results.
"Basically I want to write the line numbers and word counts to the screen after the file name."
answer=(`wc $f`)
echo -e"${answer[3]}
lines: ${answer[0]}
words: ${answer[1]}
bytes: ${answer[2]}"
Outputs:
myfile.txt
lines: 10
words: 20
bytes: 120
files=`ls`
echo "$files" | wc -l | perl -pe "s#^\s+##"
You have to use input redirection for wc:
number_of_lines=$(wc -l <myfile.txt)
or, in your context:
echo "$f $(wc -l <"$f") $(wc -w <"$f")"