Calculate the difference between two numbers in a file - bash

I want to know if it is possible to calculate the difference between two floating-point numbers that appear on two distinct lines of a file, in a single bash command line.
Example file content:
Start at 123456.789
...
...
...
End at 123654.987
I would like to echo the result of 123654.987 - 123456.789.
Is that possible? What is this magic command line?
Thank you!

awk '
/Start/ { start = $3 }    # 3rd field of the line matching "Start"
/End/   {
    end = $3              # 3rd field of the line matching "End"
    print end - start     # Print the difference.
}
' < file
If you really want to do this on one line:
awk '/Start/ { start = $3 } /End/ { end = $3; print end - start }' < file
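For the sample file above, either form prints the expected result:
$ awk '/Start/ { start = $3 } /End/ { end = $3; print end - start }' < file
198.198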

You can do this with the following command:
start=`grep 'Start' FILENAME | cut -d ' ' -f 3`; end=`grep 'End' FILENAME | cut -d ' ' -f 3`; echo "$end-$start" | bc
You need the bc program for this (for the floating-point math). You can install it with apt-get install bc, or yum, rpm, zypper... it's OS specific :)

Bash doesn't support floating-point operations, but you can split your numbers into integer and fractional parts and perform integer operations. Example:
#!/bin/bash
echo $(( ${2%.*} - ${1%.*} )).$(( ${2#*.} - ${1#*.} ))
Result:
./test.sh 123456.789 123654.987
198.198
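Beware, though: this splitting hack breaks as soon as the fractional parts differ in width or the subtraction needs a borrow. For example, with the same test.sh:
./test.sh 1.5 2.25
1.20
The true answer is 0.75, so treat this as a curiosity only.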
EDIT:
The correct solution is not a command-line hack but a tool designed for performing floating-point operations. For example, bc:
echo 123654.987-123456.789 | bc
output:
198.198

Here's a weird way:
printf -- "-%s+%s\n" $(grep -oP '(Start|End) at \K[\d.]+' file) | bc
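To see why this works, here is a breakdown (assuming the sample file above; \K needs GNU grep's -P/PCRE support): grep prints just the two numbers, printf pairs them into a bc expression, and bc evaluates it:
$ grep -oP '(Start|End) at \K[\d.]+' file
123456.789
123654.987
$ printf -- "-%s+%s\n" 123456.789 123654.987
-123456.789+123654.987
$ printf -- "-%s+%s\n" 123456.789 123654.987 | bc
198.198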

Related

Evaluate expression using printf in bash [duplicate]

(duplicate; see Floating-point arithmetic in UNIX shell script below)

BASH: How to calculate decimals using echo command? [duplicate]

(duplicate; see Floating-point arithmetic in UNIX shell script below)

How to update minor version number in bash?

I want to update the minor version number in a text file using a bash command. The format I am dealing with is MAJOR.MINOR.BUGFIX. I am able to increment the bug-fix version number but have been unable to increment just the minor version.
I.e.:
01.01.00 -> 01.02.00
01.99.00 -> 02.00.00
This is the code snippet I found online and was trying to tweak to update the minor version instead of the bug fix:
echo 01.00.1 | awk -F. -v OFS=. 'NF==1{print ++$NF}; NF>1{if(length($NF+1)>length($NF))$(NF-1)++; $NF=sprintf("%0*d", length($NF), ($NF+1)%(10^length($NF))); print}'
As -F takes a regular expression, -F. will match any character. Use something like -F'[.]' to make it match periods, and then you can just split fields without any of the length() stuff.
larsks's idea of splitting the program into multiple lines is a good one:
echo $a | awk -F'[.]' '{
    major = $1;
    minor = $2;
    patch = $3;
    minor += 1;
    major += minor / 100;
    minor = minor % 100;
    printf( "%02d.%02d.%02d\n", major, minor, patch );
}'
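A quick sanity check of the roll-over case (assuming a holds the version string):
$ a=01.99.00
$ echo $a | awk -F'[.]' '{ major=$1; minor=$2; patch=$3; minor+=1; major+=minor/100; minor=minor%100; printf("%02d.%02d.%02d\n", major, minor, patch); }'
02.00.00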
You don't need awk for this; a read with IFS=. will do.
Note that in bash arithmetic, leading zeros indicate octal, so you'll need to guard against them.
IFS=. read -r major minor bugfix <<< "$1"
# Specify base 10 in case of leading zeroes (octal)
((major=10#$major, minor=10#$minor, bugfix=10#$bugfix))
if [[ $minor -eq 99 ]]; then
((major++, minor=0))
else
((minor++))
fi
printf '%02d.%02d.%02d\n' "$major" "$minor" "$bugfix"
Test run:
$ ./test.sh 01.01.00
01.02.00
$ ./test.sh 01.99.09
02.00.09
$ ./test.sh 1.1.1
01.02.01
Quick answer:
version=01.02.00
newversion="$(printf "%06d" "$(expr "$(echo $version | sed 's/\.//g')" + 100)")"
echo "${newversion:0:2}.${newversion:2:2}.${newversion:4:2}"
Full explanation:
version=01.02.00
# get the number without decimals
rawnumber="$(echo $version | sed 's/\.//g')"
# add 100 to number (to increment minor version)
sum="$(expr "$rawnumber" + 100)"
# make number 6 digits
newnumber="$(printf "%06d" "$sum")"
# add decimals back to number
newversion="${newnumber:0:2}.${newnumber:2:2}.${newnumber:4:2}"
echo "$newversion"
awk provides a simple and efficient way to update the minor version (incrementing the major version and zeroing the minor version when the minor version is 99), e.g.
awk -F'.' '{
    if ($2 == 99) {
        $1++
        $2 = 0
    }
    else
        $2++
    printf "%02d.%02d.%02d\n", $1, $2, $3
}' minorver
Above, the leading zeros are ignored when the fields are treated as numbers, and then it is just a simple comparison of the minor version to determine whether to increment the major version and zero the minor version, or simply increment the minor version. The printf provides the zero-padded output:
Example Use/Output
With your data in the file minorver, you can do:
$ awk -F'.' '{
> if ($2 == 99) {
> $1++
> $2=0
> }
> else
> $2++
> printf "%02d.%02d.%02d\n", $1, $2 ,$3
> }' minorver
01.02.00
02.00.00
Let me know if you have further questions.

Floating-point arithmetic in UNIX shell script

How to do arithmetic with floating point numbers such as 1.503923 in a shell script? The floating point numbers are pulled from a file as a string. The format of the file is as follows:
1.5493482,3.49384,33.284732,23.043852,2.2384...
3.384,3.282342,23.043852,2.23284,8.39283...
.
.
.
Here is some simplified sample code I need to get working. Everything works fine up to the arithmetic. I pull a line from the file, then pull multiple values from that line. I think this would cut down on search processing time as these files are huge.
# set vars, loops etc.
while [ $line_no -gt 0 ]
do
    line_string=`sed -n $line_no'p' $file_path` # Pull line (str) from the file
    string1=${line_string:9:6}                  # Pull value from the line
    string2=${line_string:16:6}
    string3=...
    .
    .
    .
    calc1= `expr $string2 - $string7` |bc -l    # I tried these and various
    calc2= ` "$string3" * "$string2" ` |bc -l   # other combinations
    calc3= `expr $string2 - $string1`
    calc4= "$string2 + $string8" |bc
    .
    .
    .
    generic_function_call                       # Use the variables in functions
    line_no=`expr $line_no - 1`                 # Counter--
done
Output I keep getting:
expr: non-numeric argument
command not found
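For the record, the two error messages have distinct causes: the space after = makes the shell perform an empty assignment and then try to run the backtick result as a command (hence "command not found"), and expr only does integer arithmetic (hence "expr: non-numeric argument"). A minimal corrected sketch, piping to bc as the answers below suggest:
calc1=$(echo "$string2 - $string7" | bc -l)   # command substitution, no space after '='
calc2=$(echo "$string3 * $string2" | bc -l)   # bc -l handles the floating-point math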
I believe you should use bc.
For example:
echo "scale = 10; 123.456789/345.345345" | bc
(It's the Unix way: each tool specializes in doing one thing well, and they all work together to do great things. Don't emulate a great tool with another; make them work together.)
Output:
.3574879198
Or with a scale of 1 instead of 10:
echo "scale = 1; 123.456789/345.345345" | bc
Output:
.3
Note that this does not perform rounding.
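If you do want rounding, one option (a sketch) is to post-process bc's output with printf, which does round:
$ printf '%.2f\n' "$(echo "123.456789/345.345345" | bc -l)"
0.36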
I highly recommend switching to awk if you need to do more complex operations, or perl for the most complex ones.
For example, your operations done with awk:
# create the test file:
printf '1.5493482,3.49384,33.284732,23.043852,2.2384,12.1,13.4,...\n' > somefile
printf '3.384,3.282342,23.043852,2.23284,8.39283,14.1,15.2,...\n' >> somefile
# do OP's calculations (and DEBUG print them out!)
awk -F',' '
# Put no single quote in here... even in comments! You can print one as \047 instead.
# -F tells awk to use "," as the field separator, so awk automatically splits each line for us:
# $1 = before the first "," , $2 = between the 1st and 2nd "," , etc.
function some_awk_function_here_if_you_want() { # optional function definition
    # Some actions here. Functions can take arguments, etc.
    print "DEBUG: no action defined in some_awk_function_here_if_you_want yet ..."
}
BEGIN { rem="Optional START section: initializations that happen before the FIRST line of the FIRST file is read"
}
(NF>=8) { rem="for each line with at least 8 comma-separated values (and only for lines meeting that condition)"
    calc1=($2 - $7)
    calc2=($3 * $2)
    calc3=($2 - $1)
    calc4=($2 + $8)
    # uncomment to call the function above :(ex1): # some_awk_function_here_if_you_want()
    # uncomment to call some script:(ex2): # cmd="/path/to/some/script.sh \"" calc1 "\" \"" calc2 "\" ..." ; rem="continued next line"
    # uncomment to call some script:(ex2): # system(cmd); close(cmd)
    line_no=(FNR-1) # why -1? FNR = line number in the CURRENT file; NR = line number since the beginning (NR>FNR after the first file)
    print "DEBUG: calc1=" calc1 " , calc2=" calc2 " , calc3=" calc3 " , calc4=" calc4 " , line_no=" line_no
    print "DEBUG fancier examples: see man printf for lots of info on formatting (%...f for floats, %...d for integers, %...s for strings, etc.)"
    printf("DEBUG: calc1=%d , calc2=%10.2f , calc3=%s , calc4=%d , line_no=%d\n", calc1, calc2, calc3, calc4, line_no)
}
END { rem="Optional END section: things that need to happen AFTER the LAST line of the LAST file is read"
}
' somefile # end of the awk script, and the list of file(s) to be read by it
What about this?
calc=$(echo "$String2 + $String8" | bc)
This makes bc add the values of $String2 and $String8 and saves the result in the variable calc.
If you don't have bc, you can just use awk:
calc=$(echo 2.3 4.6 | awk '{ printf "%f", $1 + $2 }')
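A quick check of that awk fallback:
$ calc=$(echo 2.3 4.6 | awk '{ printf "%f", $1 + $2 }')
$ echo "$calc"
6.900000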
scale in bc is the precision, so with a scale of 4, if you type bc <<< 'scale=4;22.0/7' you get 3.1428 as the answer. If you use a scale of 8, you get 3.14285714, which is 8 digits after the decimal point.
So scale is a precision factor.

What's an easy way to read random line from a file?

What's an easy way to read random line from a file in a shell script?
You can use shuf:
shuf -n 1 $FILE
There is also a utility called rl. In Debian it's in the randomize-lines package, which does exactly what you want, though it's not available in all distros. On its home page it actually recommends the use of shuf instead (which didn't exist when rl was created, I believe). shuf is part of the GNU coreutils; rl is not.
rl -c 1 $FILE
Another alternative:
head -$((${RANDOM} % `wc -l < file` + 1)) file | tail -1
(Note that $RANDOM only goes up to 32767, so this will never select a line past line 32767 of a larger file.)
sort --random-sort $FILE | head -n 1
(I like the shuf approach above even better though - I didn't even know that existed and I would have never found that tool on my own)
This is simple.
cat file.txt | shuf -n 1
Granted, this is just a tad slower than shuf -n 1 file.txt on its own.
perlfaq5: How do I select a random line from a file? Here's a reservoir-sampling algorithm from the Camel Book:
perl -e 'srand; rand($.) < 1 && ($line = $_) while <>; print $line;' file
This has a significant advantage in space over reading the whole file in. You can find a proof of this method in The Art of Computer Programming, Volume 2, Section 3.4.2, by Donald E. Knuth.
Using a bash script:
#!/bin/bash
# replace with the file to read
FILE=tmp.txt
# count the number of lines
NUM=$(wc -l < ${FILE})
# generate a random number in the range 1..NUM
X=$(( RANDOM % NUM + 1 ))
# extract the X-th line
sed -n ${X}p ${FILE}
Single bash line:
sed -n $((1+$RANDOM%`wc -l test.txt | cut -f 1 -d ' '`))p test.txt
Slight problem: the filename appears twice.
Here's a simple Python script that will do the job:
import random, sys
lines = open(sys.argv[1]).readlines()
print(lines[random.randrange(len(lines))])
Usage:
python randline.py file_to_get_random_line_from
Another way using 'awk'
awk NR==$((${RANDOM} % `wc -l < file.name` + 1)) file.name
A solution that also works on macOS, and should also work on Linux(?):
N=5
awk 'NR==FNR {lineN[$1]; next}(FNR in lineN)' <(jot -r $N 1 $(wc -l < $file)) $file
Where:
N is the number of random lines you want
NR==FNR {lineN[$1]; next} (FNR in lineN) file1 file2
--> save the line numbers written in file1 and then print the corresponding lines from file2
jot -r $N 1 $(wc -l < $file) --> draw N numbers randomly (-r) in the range (1, number_of_lines_in_file) with jot. The process substitution <() makes it look like a file to the interpreter, so it plays the role of file1 in the pattern above.
#!/bin/bash
IFS=$'\n' wordsArray=($(<$1))
numWords=${#wordsArray[@]}
sizeOfNumWords=${#numWords}
while true
do
    # build a random index digit by digit, with as many digits as numWords has
    for ((i=0; i<sizeOfNumWords; i++))
    do
        ranNumArray[i]=$(( RANDOM % 10 ))
        ranNumStr="$ranNumStr${ranNumArray[i]}"
    done
    # keep only indices that fall inside the array (0..numWords-1)
    if [ $((10#$ranNumStr)) -lt $numWords ]
    then
        break
    fi
    ranNumStr=""
done
noLeadZeroStr=$((10#$ranNumStr))
echo "${wordsArray[$noLeadZeroStr]}"
Here is what I discovered, since my macOS doesn't have all the easy answers. I used the jot command to generate a number, since the $RANDOM variable solutions seemed not to be very random in my testing. When testing my solution I saw a wide variance in the output.
RANDOM1=`jot -r 1 1 235886`
#range of jot ( 1 235886 ) found from earlier wc -w /usr/share/dict/web2
echo $RANDOM1
head -n $RANDOM1 /usr/share/dict/web2 | tail -n 1
The echo of the variable is to get a visual of the generated random number.
Using only vanilla sed and awk, and without using $RANDOM, a simple, space-efficient and reasonably fast "one-liner" for selecting a single line pseudo-randomly from a file named FILENAME is as follows:
sed -n $(awk 'END {srand(); r=rand()*NR; if (r<NR) {sub(/\..*/,"",r); r++;}; print r}' FILENAME)p FILENAME
(This works even if FILENAME is empty, in which case no line is emitted.)
One possible advantage of this approach is that it only calls rand() once.
As pointed out by @AdamKatz in the comments, another possibility would be to call rand() for each line:
awk 'rand() * NR < 1 { line = $0 } END { print line }' FILENAME
(A simple proof of correctness can be given by induction: line n replaces the previously saved line with probability 1/n, which leaves each of the n lines seen so far selected with probability 1/n.)
Caveat about rand()
"In most awk implementations, including gawk, rand() starts generating numbers from the same starting number, or seed, each time you run awk."
-- https://www.gnu.org/software/gawk/manual/html_node/Numeric-Functions.html
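One common workaround (a sketch) is to seed awk's generator from the shell so that repeated runs differ:
awk -v seed="$RANDOM" 'BEGIN { srand(seed) } rand() * NR < 1 { line = $0 } END { print line }' FILENAME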
