Bash Script "integer expression expected" , (floats in bash) replace part with awk - bash

I wrote a little bash script to check my memory usage and warn me if it's too high.
Now my problem is that I would like to keep the floating-point value instead of just cutting it away, but I'm not able to do it.
I would prefer awk over anything else, as it's pre-installed on many systems and I already use it in the script.
#!/bin/bash
#define what values are too high (%)
inacceptableRAM="90"
#check RAM %
RAM=`free | grep Mem | awk '{print $3/$2 * 100.0}'`
#send Alarm for RAM
if [ $RAM -ge $inacceptableRAM ] ; then
echo "Alarm RAM usage is #$RAM"
fi
So how can I replace my -ge test using awk?
I think it should look something like:
awk ' $RAM >= $inacceptableRAM '
but what do I need to do to make it work inside the bash script?

Since you're comparing with an integer, you can just trim off the decimal part when comparing:
if [ "${RAM%.*}" -ge "$inacceptableRAM" ] ; then
If you want to do it entirely in awk, the only tricky thing is that you have to use -v var=value to convert the inacceptableRAM shell variable into an awk variable:
free | awk -v limit="$inacceptableRAM" '/Mem/ {ram=$3/$2*100; if (ram>=limit) print ram}'
Note that I'm using /Mem/ in the awk script to effectively replace the grep command. Piping from grep to awk is almost never necessary, since you can just do it all in awk.
Other recommendations: use $( ) instead of backticks for command substitutions (see BashFAQ #82), and use lower- or mixed-case variable names (e.g. ram instead of RAM) to avoid accidentally using one of the many all-caps names that have special meanings (see this answer).
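Putting those suggestions together, a minimal corrected version of the original script might look like this (just a sketch, keeping the asker's 90% threshold and using the decimal-trimming comparison and lower-case names recommended above):
#!/usr/bin/env bash
# threshold in percent
unacceptable_ram=90
# compute used/total * 100 entirely in awk (no grep needed)
ram=$(free | awk '/Mem/ {print $3/$2 * 100}')
# trim the decimal part so the integer comparison works
if [ "${ram%.*}" -ge "$unacceptable_ram" ]; then
    echo "Alarm RAM usage is #$ram"
fi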

An alternative to awk is bc, something like:
#!/usr/bin/env bash
#define what values are too high (%)
inacceptableRAM="90"
#check RAM %
ram=$(free | awk '/Mem/{print $3/$2 * 100.0}')
#send Alarm for RAM
if (( $(bc <<< "$ram > $inacceptableRAM") )) ; then
echo "Alarm RAM usage is #$ram"
fi

You're trying to do too much in shell. A shell is a tool to manipulate files and processes and to sequence calls to other tools. The tool that the people who created shell also created for shell to call when it needs to manipulate text is awk:
#!/usr/bin/env bash
free |
awk '
    BEGIN {
        # define what values are too high (%)
        unacceptableRam = 90
    }
    /Mem/ {
        # check RAM %
        ram = ( $2 ? $3 / $2 * 100 : 0 )
        # send alarm for RAM
        if ( ram >= unacceptableRam ) {
            print "Alarm RAM usage is #" ram
        }
    }
'

It's worth considering whether it's time for you to "upgrade" to a proper programming language with support for features like floating-point arithmetic. Bash and shell scripting are great, but they run up against limitations very quickly. Even Google's Shell Style Guide suggests changing languages when things get complicated. It's likely you could get a Python script doing exactly what your Bash script is doing with just a few more lines of code.
That said, I am very guilty of leaning on Bash even when I probably shouldn't :)
I recently wanted to compute a floating point average in a Bash script and decided to delegate to a Python subprocess just for that task, rather than swap over fully to Python. I opted for Python because it's almost as ubiquitous as Bash, easy to read, and extensible. I considered bc but was worried it might not be installed on all systems. I didn't consider awk because, as great a tool as it is, its syntax is often opaque and arcane. Python is probably slower than bc or awk but I really don't care about that incremental performance difference. If I did care that would be strong evidence I should rewrite the whole script in a performant language.
For your use case Jetchisel's suggestion to use bc <<< "$ram > $inacceptableRAM" is probably sufficient, as the Python equivalent is more verbose, but for something more complex Python may be a good choice.
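For illustration, here is a hedged sketch of what that kind of delegation could look like for this specific comparison (it assumes python3 is on the PATH; the one-liner and variable names are illustrative, not part of the original answer):
#!/usr/bin/env bash
inacceptableRAM=90
ram=$(free | awk '/Mem/ {print $3/$2 * 100}')
# let Python do the floating-point comparison; it prints 1 or 0
over=$(python3 -c 'import sys; print(int(float(sys.argv[1]) >= float(sys.argv[2])))' "$ram" "$inacceptableRAM")
if [ "$over" -eq 1 ]; then
    echo "Alarm RAM usage is #$ram"
fi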

Related

How to sequentially copy multiple rows (e.g. 1-10, then 11-20, and so on) from one big file to multiple other files using a for loop (maybe sed or awk)?

I'm new to Bash and Linux. I'm trying to make it happen in Bash. The problem is with the copying using awk or sed, I suppose, as I'm somehow putting the sum of a constant and a variable in the wrong way; brackets or different quotes do not make any difference. So far my code looks something like this:
for i in {0..10}
do
touch /path/file$i
awk -v "NR>=1+$i*9 && NR<=$i*9" /path/BigFile > /path/file$i
(or with sed) sed -n "1+$i*9,$i*9" /path/BigFile > /path/file$i
done
Thank you in advance
Instead of reinventing this wheel, you can use the split utility. split -l 10 will tell it to split into chunks of 10 lines (or maybe less for the last one), and there are some options you can use to control the output filenames -- you probably want -d to get numeric suffixes (the default is alphabetical).
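For example (a sketch assuming GNU split, whose -d option produces numeric suffixes like file00, file01, ...):
split -l 10 -d /path/BigFile /path/file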
hobbs' answer (use split) is the right way to solve this. Aside from being simpler, it's also linear (meaning that if you double the input size, it only takes twice as long) while these for loops are quadratic (doubling the input quadruples the time it takes), plus the loop requires that you know in advance how many lines there are in the file. But for completeness let me explain what went wrong in the original attempts.
The primary problem is that the math is wrong. NR>=1+$i*9 && NR<=$i*9 will never be true. For example, in the first iteration, $i is 0, so this is equivalent to NR>=1 && NR<=0, which requires that the record number be at least 1 but no more than 0. Similarly, when $i is 1, it becomes NR>=10 && NR<=9... same basic problem. What you want is something like NR>=1+$i*9 && NR<=($i+1)*9, which matches for lines 1-9, then 10-18, etc.
The second problem is that you're using awk's -v option without supplying a variable name & value. Either remove the -v, or (the cleaner option) use -v to convert the shell variable i into an awk variable (and then put the awk program in single-quotes and don't use $ to get the variable's value -- that's how you get shell variables' values, not awk variables). Something like this:
awk -v i="$i" 'NR>=1+i*9 && NR<=(i+1)*9' /path/BigFile > "/path/file$i"
(Note that I double-quoted everything involving a shell variable reference -- not strictly necessary here, but a general good scripting habit.)
The sed version also has a couple of problems of its own. First, unlike awk, sed doesn't do math; if you want to use an expression as a line number, you need to have the shell do the math with $(( )) and pass the result to sed as a simple number. Second, your sed command specifies a line range, but doesn't say what to do with it; you need to add a p command to print those lines. Something like this:
sed -n "$((1+i*9)),$(((i+1)*9)) p" /path/BigFile > "/path/file$i"
Or, equivalently, omit -n and tell it to delete all but those lines:
sed "$((1+i*9)),$(((i+1)*9)) ! d" /path/BigFile > "/path/file$i"
But again, don't actually do any of these; use split instead.

Do I need to stay away from bash scripts for big files?

I have big log files (1-2 GB and more). I'm new to programming, and bash is useful and easy for me. When I need something, I can usually do it (sometimes with help from people on here). Simple scripts work fine, but when I need complex operations, it gets very slow; maybe bash is slow, or maybe my programming skills are bad.
So do I need C for complex processing of my server log files, or do I just need to optimize my scripts?
If I just need optimization, how can I check which parts of my code are bad and which are good?
For example I have while-do loop:
while read -r date month size;
do
...
...
done < file.tmp
How can I use awk to make it run faster?
That depends on how you use bash. To illustrate, consider how you'd sum a possibly large number of integers.
This function does what Bash was meant for: being control logic for calling other utilities.
sumlines_fast() {
awk '{n += $1} END {print n}'
}
It runs in 0.5 seconds on a million line file. That's the kind of bash code you can very effectively use for larger files.
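As a rough illustration of how such a function is used (the file name is hypothetical, and the timing will vary by machine):
seq 1000000 > /tmp/million.txt          # generate a million-line test file of integers
time sumlines_fast < /tmp/million.txt   # prints the sum, roughly in the time quoted above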
Meanwhile, this function does what Bash is not intended for: being a general purpose programming language:
sumlines_slow() {
local i=0
while IFS= read -r line
do
(( i += $line ))
done
echo "$i"
}
This function is slow, and takes 30 seconds to sum the same million line file. You should not be doing this for larger files.
Finally, here's a function that could have been written by someone who has no understanding of bash at all:
sumlines_garbage() {
i=0
for f in `cat`
do
i=`echo $f + $i | bc`
done
echo $i
}
It treats forks as being free and therefore runs ridiculously slowly. It would take something like five hours to sum the file. You should not be using this at all.

What is the range of a typeset -i variable?

Below is my script
#!/bin/sh
typeset resl=$(($1+$2))
echo $resl
When I pass the two values 173591451 and 2000252844 to the shell script, it returns a negative value.
./addvalue.sh 173591451 2000252844
output ---> -2121123001
Please let me know how I can fix this problem.
Dropping into a friendly programmer's calculator to look at your values in hex, I see you are right at 32 bits of precision. Once the result needs all 32 bits (eight hex digits with the leading digit 8 or higher, meaning the sign bit is set), you have exceeded the size of the signed integer your shell was compiled with and entered the land of negative numbers (but that's another post).
0x81923B47 = 0xA58CB9B + 0x77396FAC
Two workarounds, without having to worry about getting a 64-bit shell, follow.
1. awk
The success of this depends on how your awk was compiled and which awk you are using.
awk 'END {print 173591451 + 2000252844}' </dev/null
Also do all your relational testing in awk.
2. dc
The "dc" program (desk calculator) uses arbitrary precision so you never need to worry about integer bit-size again. To put it into a variable:
$ sum="$( echo 173591451 2000252844 + p | dc )"; echo $sum
2173844295
And avoid typeset -i when using dc, as the shell needs to hand it plain strings. Properly checking relationships (is $a < $b?) gets a little tricky, but it can be done ($a -lt $b is wrong).
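Following the earlier suggestion to do relational testing in awk, a sketch of such a comparison from the shell could look like this (variable names are just for illustration; awk's doubles are exact for integers of this size):
a=173591451
b=2000252844
# let awk do the comparison; it prints 1 if a < b, otherwise 0
if [ "$(awk -v a="$a" -v b="$b" 'BEGIN {if (a < b) print 1; else print 0}')" -eq 1 ]; then
    echo "$a is less than $b"
fi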

Shell Script - Pair numbers with a for statement

I'm trying to make a script that adds pairs of numbers, using only simple for/if statements. For example:
./pair 1 2 4 8 16 32
would result in
3
12
48
My current script is (probably hugely wrong, I'm new to Shell Scripting):
#!/bin/sh
sum=0
count=0
for a in $*
do
sum=`expr $sum + $a`
count=`expr $count + 1`
if [ count=2 ]
then
sum=0
count=0
fi
echo "$sum"
done
exit
However this is not working. Any help would be great.
A for loop isn't really the right tool for this. Use a while loop and the shift command.
while [ $# -gt 1 ]; do
echo $(( $1 + $2 ))
shift 2
done
The problem in your script is that you do not have sufficient whitespace in your if statement, as well as not preceding the variable name with a $:
if [ $count = 2 ]
Also, you only want to output $sum just before you reset its value to 0, not every time through the loop.
And, expr isn't needed for arithmetic anymore: sum=$(( sum + a ))
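Putting those fixes together, a corrected version of the original for-loop script might look like this sketch:
#!/bin/sh
sum=0
count=0
for a in "$@"
do
    sum=$(( sum + a ))
    count=$(( count + 1 ))
    if [ "$count" = 2 ]
    then
        # print the pair's sum just before resetting
        echo "$sum"
        sum=0
        count=0
    fi
done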
My approach uses cshell. As a C programmer, I connect better with cshell. But that aside, I approach the problem with an iterator, incrementing twice.
I haven't seen anyone write in Csh for a long time. Back in the early Unix days, there were two primary shells: Csh and Bourne shell. Bourne was the AT&T flavor, and BSD (Berkeley Software Distribution) used Csh, which took syntax hints from the C language.
Most people I knew used Csh as their standard shell, but wrote scripts in Bourne shell because Csh had all sorts of issues as pointed out by Tom Christiansen (as already mentioned by Olivier Dulac). However, Csh had command aliases and command-line history and editing, features that people wanted in their command-line shell.
Csh peaked back in the SunOS days. SunOS was based upon BSD, and Sun administrators wrote almost all of their scripts in Csh. However, David Korn changed a lot of that with Kornshell. Kornshell kept the standard Bourne shell syntax, but added features like command aliasing and shell history editing that people wanted from Csh. Not only that, but it contained a lot of new features that were never previously found in shells, like built-in math (goodbye to expr and bc), pattern matching with [[ ... ]], and shell options that could be set with set -o.
Now, there was little need to know Csh at all. You could use Kornshell as both your scripting language and your command line shell. When Sun replaced SunOS with Solaris, the Kornshell and not the C shell was the default shell. That was the end of C shell as a viable shell choice.
The last big advocate of Csh was Steve Jobs. NeXT used the TurboCsh as its default shell, and it was the default shell on the first iteration of Mac OS X -- long after Sun had abandoned it. However, later versions of Mac OS X defaulted to BASH.
BASH is now the default and standard. As I mentioned before, on most systems, /bin/sh is BASH. In the BASH manpage is this:
If bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well.
That means that solid BASHisms (shopt, ${foo/re/repl}, etc.) are not active, but most of the stuff inherited from Kornshell ([[ ... ]], set -o, typeset, $(( ... )) ) are still available. And, that means about 99% of the BASH scripts will still work even if invoked as /bin/sh.
I believe that most of the criticisms in Tom Christiansen's anti-Csh paper no longer apply. Turbo C shell, which long ago replaced the original C shell, fixed many of the bugs. However, Turbo C shell came along after much of the world had abandoned C shell, so I really can't say.
The Csh approach is interesting, but is not a correct answer to this question. C Shell scripting is a completely different language than Bourne shell. You'd be better off giving an answer in Python.
Again, the cshell approach, using an iterator and incrementing by two each time:
@ i = 1
while ( $i < $#argv )
    @ j = $i + 1
    @ sum = $argv[$i] + $argv[$j]
    echo $sum
    @ i = $i + 2
end
So the shebang implies Bourne shell, and if you start the script directly it will be the same as doing:
/bin/sh <your_script_name>
And the simplest loop I can think of would be this one:
while [ $# -gt 1 ]; do
expr $1 + $2
shift 2
done
It counts the input tokens and, as long as at least two remain (the square-brackets test), it adds the first and the second one and displays the result. shift 2 shifts the input two places (i.e. $1 and $2 are dropped, and what was $n becomes $(n-2)).
If you want a funny sh/bc combo solution:
#!/bin/sh
printf "%s+%s\n" "$#" | bc
It's cool, printf takes care of the loop itself. :)
Of course, there is no error checking whatsoever! It will work with floats too.
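For instance, with the six arguments from the question, printf recycles its format string over the argument list, producing 1+2, 4+8 and 16+32 on separate lines, which bc then evaluates:
$ ./pair 1 2 4 8 16 32
3
12
48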
The same with dc:
#!/bin/sh
printf "%s %s+pc" "$#" | dc
but it won't work with negative numbers. For negative numbers, you could:
#!/bin/sh
printf "%s %s+pc" "${#/-/_}" | dc
And I just realized this is the shortest answer!

Shell loops using non-integers?

I wrote a .sh file to compile and run a few programs for a homework assignment. I have a "for" loop in the script, but it won't work unless I use only integers:
#!/bin/bash
for (( i=10; i<=100000; i+=100))
do
./hw3_2_2 $i
done
The variable $i is an input for the program hw3_2_2, and I have non-integer values I'd like to use. How could I loop through running the code with a list of decimal numbers?
I find it surprising that in five years no one ever mentioned the utility created just for generating ranges; then again, it comes from BSD around 2005, and perhaps it wasn't even generally available on Linux at the time the question was asked.
But here it is:
for i in $(seq 0 0.1 1)
Or, to print all numbers with the same width (by padding with leading zeroes), use -w. That also keeps values like 0 from being printed as bare integers instead of 0.0, if that would cause issues.
The syntax is seq [first [incr]] last, with first defaulting to 1, and incr defaulting to either 1 or -1, depending on whether last is greater than or less than first. For other parameters, see seq(1).
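For the loop in the question, that could look like this (a sketch, using the question's program name):
for i in $(seq 0 0.1 1)
do
    ./hw3_2_2 "$i"
done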
You can use awk to generate your decimals, e.g. in steps of 0.1:
num=$(awk 'BEGIN{for(i=1;i<=10;i+=0.1)print i}')
for n in $num
do
./hw3_2_2 $n
done
or you can do it entirely in awk
awk 'BEGIN{cmd="./hw3_2_2"; for(i=1;i<=10;i+=0.1){c=cmd" "i; system(c)} }'
The easiest way is to just list them:
for a in 1.2 3.4 3.11 402.12 4.2 2342.40
do
./hw3_2_2 $a
done
If the list is huge, so you can't have it as a literal list, consider dumping it in a file and then using something like
for a in $(< my-numbers.txt)
do
./hw3_2_2 $a
done
The $(< my-numbers.txt) part is an efficient way (in Bash) to substitute the contents of the named file at that location in the script. Thanks to Dennis Williamson for pointing out that there is no need to use the external cat command for this.
Here's another way. You can use a here doc to include your data in the script:
read -r -d '' data <<EOF
1.1
2.12
3.14159
4
5.05
EOF
for i in "$data"
do
./hw3_2_2 "$i"
done
Similarly:
array=(
1.1
2.12
3.14159
4
5.05
)
for i in "${array[#]}"
do
./hw3_2_2 "$i"
done
I usually also use "seq" as per the second answer, but just to give an answer in terms of a precision-robust integer loop and then bc conversion to a float:
#!/bin/bash
for i in {2..10..2} ; do
x=`echo "scale=2 ; ${i}/10" | bc`
echo $x
done
gives:
.20
.40
.60
.80
1.00
bash doesn't do decimal numbers. Either use something like bc that can, or move to a more complete programming language. Beware of accuracy problems though.
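As a rough sketch of the bc route for the loop in the question (the step and upper bound are just examples; bc prints 1 for a true comparison and 0 for false):
#!/bin/bash
i=0
# keep looping while bc says i <= 1
while [ "$(echo "$i <= 1" | bc)" -eq 1 ]; do
    ./hw3_2_2 "$i"
    i=$(echo "$i + 0.1" | bc)
done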

Resources