division in kernel module - linux-kernel

I have two integer variables which I need to divide in order to work the percentage out for something. I have (variablea / variableb) * 100. The problem is that the (variablea / variableb) will be between 0 and 1 so it gets rounded to 0 because it is an int. How can I get round this so the answer isn't always 0?

Try (variablea * 100) / variableb.
This will truncate the result. If you'd rather round to the nearest whole percent, you could do (variablea * 100 + variableb/2) / variableb.
Finally, instead of 100 you could use a constant such as 1000 or 10000 if you'd like to get more decimal places (just remember to format the number correctly when printing, i.e. scale it by 10 or 100).
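For example, here is a minimal sketch of the three variants in Go (the question is about kernel C, but the integer arithmetic is identical; the values of a and b are made up for illustration):

package main

import "fmt"

func main() {
    a, b := 17, 60 // example values: 17/60 ≈ 28.33%

    fmt.Println(a / b * 100)       // 0: the division truncates to 0 before the multiply
    fmt.Println(a * 100 / b)       // 28: multiply first, percentage truncated
    fmt.Println((a*100 + b/2) / b) // 28: rounded to the nearest whole percent (non-negative values)

    p := a * 10000 / b                      // 2833: two extra decimal places
    fmt.Printf("%d.%02d%%\n", p/100, p%100) // prints "28.33%"
}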

Related

How to find 15% from big.Int

I want to get 15% of a big.Int value; how can I do this?
new(big.Int).Mul(totalValue, new(big.Int).Div(new(big.Int).SetUint64(15), new(big.Int).SetUint64(100)))
returns 0
Order of operations can be really important with integer arithmetic when division is involved. Take the following code for example
int n = 200;
Console.WriteLine($"{n * (15 / 100)}");
Console.WriteLine($"{(n * 15) / 100}");
The first WriteLine will print 0 and the second will print 30.
In theory both lines should yield the same result, since with real numbers the grouping of the multiplication and division doesn't matter, and they would indeed produce the same result if you were using floating-point math.
However, in this case integer math works a little differently. The problem is 15/100 yields 0 with integer division. So if the division is performed first, the result will always be 0.
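Applying the same multiply-first rule to the math/big question above, a minimal sketch (totalValue is just an assumed example value standing in for whatever big.Int you already have):

package main

import (
    "fmt"
    "math/big"
)

func main() {
    totalValue := big.NewInt(200) // assumed example value

    // Wrong order: 15/100 is 0 in integer division, so the product is 0.
    wrong := new(big.Int).Mul(totalValue,
        new(big.Int).Div(big.NewInt(15), big.NewInt(100)))

    // Multiply by 15 first, then divide by 100.
    right := new(big.Int).Div(new(big.Int).Mul(totalValue, big.NewInt(15)), big.NewInt(100))

    fmt.Println(wrong, right) // 0 30
}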

Why annotating float division within brackets produces a different result than outside them? [duplicate]

Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
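If you would rather let the machine show you those bits than do the long division by hand, here is a short illustrative sketch (in Go rather than Ruby, but any language exposing IEEE 754 doubles behaves the same way):

package main

import (
    "fmt"
    "math"
)

func main() {
    x := 129.95

    // %b prints the stored double exactly, as <significand>p<binary exponent>,
    // so you can inspect the 53-bit significands described above.
    fmt.Printf("%b\n", x)
    fmt.Printf("%b\n", x*100)
    fmt.Printf("%b\n", x*10*10)

    // The raw IEEE 754 fields: 1 sign bit, 11 exponent bits and 52 explicitly
    // stored significand bits (the leading 1 bit is implicit).
    bits := math.Float64bits(x)
    fmt.Printf("sign=%d exponent=%011b significand=%052b\n",
        bits>>63, (bits>>52)&0x7FF, bits&((1<<52)-1))
}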
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, you should format both results explicitly, using String#% and you should format both results the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens to not equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
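To make the "format both results explicitly, the same way" advice concrete, a small sketch (again in Go rather than Ruby; Ruby's String#% accepts the same format directives):

package main

import "fmt"

func main() {
    a := 129.95 * 100     // rounded to 53 bits: not exactly 12995
    b := 129.95 * 10 * 10 // happens to come out as exactly 12995

    // Formatted the same way, to a reasonable precision, they look identical...
    fmt.Printf("%.4f\n%.4f\n", a, b) // 12995.0000 and 12995.0000

    // ...but asking for more digits, or comparing directly, exposes the rounding error.
    fmt.Printf("%.20f\n%.20f\n", a, b)
    fmt.Println(a == b, a == 12995.0, b == 12995.0) // false false true
}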

fmt.Printf with width and precision fields in %g behaves unexpectedly

I am trying to get some floats formatted with the same width using fmt.Printf().
For example, given the float values 0.0606060606060606, 0.3333333333333333, 0.05, 0.4 and 0.1818181818181818, I would like to get each value formatted in, say, 10 runes:
0.06060606
0.33333333
0.05
0.4
0.18181818
But I can't understand how it's done. Documentation says that
For floating-point values, width sets the minimum width of the field
and precision sets the number of places after the decimal, if
appropriate, except that for %g/%G it sets the total number of digits.
For example, given 123.45 the format %6.2f prints 123.45 while %.4g
prints 123.5. The default precision for %e and %f is 6; for %g it is
the smallest number of digits necessary to identify the value
uniquely.
So, if I use %f, a larger number will not fit within the 10-character constraint, therefore %g is required. A minimum width of 10 is %10g, and a maximum of 9 digits (+1 for the dot) is %.9g, but combining them as %10.9g does not behave as I expect:
0.0606060606
0.333333333
0.05
0.4
0.181818182
How come I get strings which are of 10 runes, others that are 11 runes and others that are 12 runes?
In particular, it seems that %.9g does not produce 9 digits in total. See for example: http://play.golang.org/p/ie9k8bYC7r
Firstly, we need to understand the documentation correctly:
width sets the minimum width of the field and precision sets the number of places after the decimal, if appropriate, except that for %g/%G it sets the total number of digits.
This line is grammatically correct, but the it in the last part of this sentence is really confusing: it actually refers to the precision, not the width.
Therefore, let's look at some examples:
123.45
12312.2
1.6069
0.6069
0.0006069
If you print them with fmt.Printf("%.4g"), it gives you
123.5
1.231e+04
1.607
0.6069
0.0006069
Only 4 digits, not counting the decimal point and exponent. But wait, what happened to the last 2 examples? Aren't those more than 4 digits?
This is the confusing part of the printing: leading 0s are not counted as digits, and the value is not shrunk into exponent notation as long as there are fewer than 4 of them.
Let's look at 0 behavior using the example below:
package main

import "fmt"

func main() {
    fmt.Printf("%.4g\n", 0.12345)
    fmt.Printf("%.4g\n", 0.012345)
    fmt.Printf("%.4g\n", 0.0012345)
    fmt.Printf("%.4g\n", 0.00012345)
    fmt.Printf("%.4g\n", 0.000012345)
    fmt.Printf("%.4g\n", 0.0000012345)
    fmt.Printf("%.4g\n", 0.00000012345)
    fmt.Printf("%g\n", 0.12345)
    fmt.Printf("%g\n", 0.012345)
    fmt.Printf("%g\n", 0.0012345)
    fmt.Printf("%g\n", 0.00012345)
    fmt.Printf("%g\n", 0.000012345)
    fmt.Printf("%g\n", 0.0000012345)
    fmt.Printf("%g\n", 0.00000012345)
}
and the output:
0.1235
0.01235
0.001234
0.0001234
1.234e-05
1.234e-06
1.235e-07
0.12345
0.012345
0.0012345
0.00012345
1.2345e-05
1.2345e-06
1.2345e-07
So you can see: when there are fewer than 4 leading 0s they are kept and printed in plain decimal form, and the value is shrunk into exponent notation once there are more than that.
OK, the next thing is the width. From the documentation, width only specifies the minimum width of the field, including the decimal point and exponent. That means that if you have more digits than the width allows for, the output will simply exceed the width.
Remember, width is taken into account as the last step; the precision field has to be satisfied first.
Let's go back to your case. You specified %10.9g, which means you want 9 digits in total, excluding leading 0s, and a minimum width of 10, including the decimal point and exponent, with the precision taking priority.
0.0606060606060606: taking 9 digits without the leading 0s gives you 0.0606060606; at 12 characters wide it already exceeds the minimum width of 10;
0.3333333333333333: taking 9 digits gives you 0.333333333; at 11 characters wide it already exceeds the minimum width of 10;
0.05: taking 9 digits gives you just 0.05; since that is less than 10 characters wide, it is padded with another 6 to reach the width of 10;
0.4: same as above;
0.1818181818181818: taking 9 digits, with rounding, gives you 0.181818182; at 11 characters wide it already exceeds the minimum width of 10.
So this explains why you got the funny printing.
Yes, I agree: precedence is given to the precision field, not the width.
So when we need fixed-width columns for printing, we have to write our own formatting function, as sketched below.
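As a starting point, here is a minimal sketch of such a function (fixedWidth is a made-up name; note that it truncates rather than rounds, which matches the output the question asks for):

package main

import (
    "fmt"
    "strconv"
)

// fixedWidth formats f as a plain decimal and forces it into exactly
// `width` characters: longer values are truncated, shorter ones padded.
func fixedWidth(f float64, width int) string {
    s := strconv.FormatFloat(f, 'f', -1, 64) // shortest decimal that round-trips
    if len(s) > width {
        s = s[:width]
    }
    return fmt.Sprintf("%-*s", width, s) // left-align, pad with spaces
}

func main() {
    values := []float64{0.0606060606060606, 0.3333333333333333, 0.05, 0.4, 0.1818181818181818}
    for _, f := range values {
        fmt.Println(fixedWidth(f, 10))
    }
}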

How does the map() function in Processing work?

I'm taking a class which uses Processing.
I'm having a problem understanding the map() function.
According to its documentation (http://www.processing.org/reference/map_.html):
Re-maps a number from one range to another.
In the first example above, the number 25 is converted from a value in the range of 0 to 100 into a value that ranges from the left edge of the window (0) to the right edge(width).
As shown in the second example, numbers outside of the range are not clamped to the minimum and maximum parameters values, because out-of-range values are often intentional and useful.
Is it similar to a random function, but with the range set by the user? Also, I can't understand the explanation for the first example: it says the number is converted from a value in the range 0 to 100 into a value that ranges from edge to edge of the screen. I'm wondering: why not just convert the number 25 directly to the range of values pertaining to the screen?
The map() function is a useful shortcut and you won't regret the time spent understanding it.
This is its syntax:
variable2 = map(variable1, min1, max1, min2, max2);
The function establishes a proportion between the two ranges:
(variable1 - min1) : (max1 - min1) = (variable2 - min2) : (max2 - min2)
You can read it as: variable1's position within the first range is the same as variable2's position within the second range.
variable1 holds a value within the first range, min1~max1.
variable2 gets a value within the second range, min2~max2.
This is the equation the function solves for the programmer:
variable2 = min2+(max2-min2)*((variable1-min1)/(max1-min1))
This is the Java code behind the Processing map() function:
static public final float map(float value,
                              float istart,
                              float istop,
                              float ostart,
                              float ostop) {
    return ostart + (ostop - ostart) * ((value - istart) / (istop - istart));
}
Think of it this way: divide a range of 0 to 10 into 100 equal parts (you will get 0.1 per part). Now divide the range 0 to 100 into 100 equal parts (you will get 1 per part). So 0.1 in the range of 0 to 10 is equal to 1 in the range of 0 to 100. If you want to find where 5 in the range of 0 to 10 belongs in the range of 0 to 100, divide 5 by the size of a 0-to-10 part and multiply that number by the size of a 0-to-100 part, and you will have your answer (50).
P.S. I know this isn't how it actually works, but I just thought I would give an example to clarify things.
If you think about it a bit more carefully, it's nothing more than calculating a percentage.
For a percentage, the final range is 0 to 100 and the initial range is 0 to some maximum (for example, if the maximum total marks across all subjects is 500, the initial range is 0 to 500).
Now, step by step:
n - your number
initialMin to initialMax - your initial range
finalMin to finalMax - your final range
First compute
N = ((n - initialMin) / (initialMax - initialMin)) * (finalMax - finalMin)
Now N is exactly like a percentage, except that instead of 0 to 100 its range is 0 to (finalMax - finalMin).
To shift it into the finalMin to finalMax range, which is what map() does in Processing, just do
N = N + finalMin
and N is the answer.
Hope that makes the solution clear.
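Putting the formula from these answers into code, a minimal sketch in Go (mapRange is just an illustrative name, and the 640-pixel window width stands in for Processing's width variable):

package main

import "fmt"

// mapRange re-maps value from the range [istart, istop] to [ostart, ostop],
// like Processing's map(): find value's relative position in the input range
// and place it at the same relative position in the output range.
func mapRange(value, istart, istop, ostart, ostop float64) float64 {
    return ostart + (ostop-ostart)*((value-istart)/(istop-istart))
}

func main() {
    fmt.Println(mapRange(5, 0, 10, 0, 100))    // 50: halfway stays halfway
    fmt.Println(mapRange(25, 0, 100, 0, 640))  // 160: 25 mapped across a 640-pixel-wide window
    fmt.Println(mapRange(110, 0, 100, 0, 640)) // 704: out-of-range values are not clamped
}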

