shape:setY(math.random(0, 450 - 50))
I understand math.random with a minimum and a maximum value, but what exactly is the meaning of using an arithmetic operator inside math.random, as in the line above? Why is it 450 - 50 and not 400? What is the difference?
Similarly:
self:setPosition(540, math.random(160) + 40)
There is no difference. It is equivalent to shape:setY(math.random(0, 400)).
The second line of code is equivalent to the following:
self:setPosition(540, math.random(1, 160) + 40)
It is most certainly intended to be a clearer and/or simpler way of writing
self:setPosition(540, math.random(41, 200))
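The same shift-the-range arithmetic can be sanity-checked in any language with an integer random function. Here is a minimal sketch in Ruby rather than Gideros Lua (so rand(160) yields 0..159 and the offset is 41 instead of 40); the variable names are just for illustration:
# rand(160) returns an integer in 0..159, so adding 41 shifts the range to 41..200,
# the same set of values rand(41..200) can produce.
samples_offset = Array.new(10_000) { rand(160) + 41 }
samples_range  = Array.new(10_000) { rand(41..200) }
p samples_offset.minmax   # => [41, 200] with near certainty for this many samples
p samples_range.minmax    # => [41, 200]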
So I am working on a Maxima program that involves a bunch of iterations (the Souriau-Frame Drazin Inverse Algorithm, to be specific), each step of which yields a polynomial. I need to check and stop my iterations when the polynomial goes to zero (i.e., all coefficients go to zero).
Maxima seems never to truncate small numbers to zero until it reaches something absurdly small like $10^{-323}$.
The following code snippet gives an idea of what I need:
(%i3) rat(1e-300);
rat: replaced 1.0E-300 by 1/999999999999999903803069407426113968898218766118103141789833949572356552411722264192305659040010509526872994217248819197070144216063125530186267630296136203765329090687113225440746189048800695790727969805197112921161540803823920273299782054992133678869364753954248541633605124057805104488924519071744 = 1.0E-300
(%o3)/R/ 1/9999999999999999038030694074261139688982187661181031417898339495723\
565524117222641923056590400105095268729942172488191970701442160631255301862676\
302961362037653290906871132254407461890488006957907279698051971129211615408038\
23920273299782054992133678869364753954248541633605124057805104488924519071744
(%i4) rat(1e-323);
rat: replaced 9.0E-324 by^C
Maxima encountered a Lisp error:
SIMPLE-ERROR: Console interrupt.
Automatically continuing.
To enable the Lisp debugger set *debugger-hook* to nil.
(%i5) rat(1e-325);
rat: replaced 0.0 by 0/1 = 0.0
(%o5)/R/ 0
(%i6)
As one can see, Maxima does not truncate $10^{-300}$ to zero; it hangs for $10^{-323}$ (I had to SIGKILL it); and it sets $10^{-325}$ and anything smaller to zero.
I don't know where this 324 is coming from, and I'd like to know whether it's possible to reduce it for my code.
Edit 1: Here's the output if I use rationalize instead of rat:
(%i3) rationalize(1e-300);
(%o3) 6032057205060441/6032057205060440848842124543157735677050252251748505781\
796615064961622344493727293370973578138265743708225425014400837164813540499979\
063179105919597766951022193355091707896034850684039059079180396788349106095584\
290087446076413771468940477241550670753145517602931224392424029547429993824129\
889235158145614364972941312
(%i4) rationalize(1e-323);
(%o4) 1/1012011266536553091762476733594586535247783248820710591784506790137151\
697839976734459801918507185622475935389321584059556949043686928967384335066999\
703692549607587121382831806822334538710466081706198838392363725342810037417123\
463493090516778245797781704050282561793847761667073076152512660931637543230031\
31653853870546747392
(%i5) rationalize(1e-324);
(%o5) 0
Edit 2: Here's the output of build_info():
(%i6) build_info();
(%o6)
Maxima version: "5.43.2"
Maxima build date: "2020-02-21 05:22:38"
Host type: "x86_64-pc-linux-gnu"
Lisp implementation type: "GNU Common Lisp (GCL)"
Lisp implementation version: "GCL 2.6.12"
User dir: "/home/nidish/.maxima"
Temp dir: "/tmp"
Object dir: "/home/nidish/.maxima/binary/5_43_2/gcl/GCL_2_6_12"
Frontend: false
I gather that the goal is to replace small (in absolute value) floats with zero. There doesn't appear to be a built-in function for that. Here's an attempt at an implementation via the pattern-matching machinery.
First define a rule to replace small floats, and define a function which applies the rule to an expression.
(%i4) matchdeclare(xx,floatnump) $
(%i5) defrule(squashing_rule,xx, if abs(xx) <= squashing_tolerance then 0 else xx);
(%o5) squashing_rule : xx -> (if abs(xx) <= squashing_tolerance then 0 else xx)
(%i6) squashing_tolerance:0.01 $
(%i7) squash_floats(expr):=applyb1(expr,squashing_rule) $
Now create a random polynomial.
(%i8) e:makelist(float((((2*random(2)-1)*(1+random(8)))/8) *10^-random(4)) *x^k,k,1,6);
2 3 4 5 6
(%o8) [- 3.75e-4 x, - 0.00625 x , - 0.05 x , 0.00625 x , 0.005 x , 0.5 x ]
(%i9) e1:apply("+",e);
6 5 4 3 2
(%o9) 0.5 x + 0.005 x + 0.00625 x - 0.05 x - 0.00625 x - 3.75e-4 x
Apply squash_floats to the generated polynomial.
(%i10) squash_floats(e1);
6 3
(%o10) 0.5 x - 0.05 x
Change the squashing tolerance.
(%i11) squashing_tolerance:0.001;
(%i12) squash_floats(e1);
6 5 4 3 2
(%o12) 0.5 x + 0.005 x + 0.00625 x - 0.05 x - 0.00625 x
Verify the replacement happens in nested expressions.
(%i13) squash_floats(sin(1+1/e1));
1
(%o13) sin(----------------------------------------------------- + 1)
6 5 4 3 2
0.5 x + 0.005 x + 0.00625 x - 0.05 x - 0.00625 x
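For readers who just want the thresholding idea outside Maxima, here is a minimal sketch in Ruby, assuming the polynomial is held as a plain array of float coefficients; the function name and sample array are hypothetical, not part of the Maxima code above:
def squash_floats(coefficients, tolerance = 0.01)
  # Zero out coefficients whose magnitude is at or below the tolerance,
  # mirroring what squashing_rule does on the Maxima side.
  coefficients.map { |c| c.abs <= tolerance ? 0.0 : c }
end

poly = [-3.75e-4, -0.00625, -0.05, 0.00625, 0.005, 0.5]   # coefficients of x^1..x^6 above
p squash_floats(poly)         # => [0.0, 0.0, -0.05, 0.0, 0.0, 0.5]
p squash_floats(poly, 0.001)  # => [0.0, -0.00625, -0.05, 0.00625, 0.005, 0.5]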
First let's step back a moment. What is the behavior you are hoping to find? If you need to convert very small floats to rational numbers accurately, try rationalize instead of rat. Does that work correctly for 1e-323?
If you want floats smaller than a tolerance to be converted to zero, we'll need to take a different approach. I'll hold off on that for the moment.
About the specific behavior you have observed, it appears to be implementation-dependent; I get a different (still buggy) behavior with Maxima + SBCL, which reports a floating point overflow. What does build_info(); report?
I don't know if it matters, but 1e-323 is a so-called denormalized float -- it is smaller than the smallest normalized (full precision) float, which is about 1e-308.
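As an aside, that 324 is exactly the denormal limit of IEEE 754 doubles: the smallest positive double is about 4.94e-324, and any literal below half of that already parses to 0.0. A quick illustration in Ruby (any language using IEEE 754 doubles behaves the same way):
p Float::MIN        # => 2.2250738585072014e-308, smallest positive normalized double
p 2.0 ** -1074      # smallest positive denormal, about 4.94e-324
p 1e-323 > 0.0      # => true  (1e-323 is still representable, as a denormal)
p 1e-324 == 0.0     # => true  (below the denormal limit; the literal already parses
                    #           to 0.0, which is why rationalize(1e-324) returns 0)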
First you say "I want to know when a polynomial is exactly going to zero." And then you say "if a coefficient in a polynomial drops below a threshold, I want that term to be completely thrown out of the polynomial". So you don't want the polynomial to be exactly zero; you want it to be zero within some threshold (relative? absolute?).
I'm afraid I'm not familiar with the Souriau-Frame Drazin algorithm, but looking at the Greville paper about it, it seems that all the calculations are rational (no square roots etc.), so I wonder if it's feasible to perform your calculations with completely exact rational numbers instead of using floating-point numbers. Then presumably exact means exact, and you don't need to worry about thresholds at all.
Recently I found out that the Oracle function POWER does not always give exact results.
This can be easily checked by this script or similar:
BEGIN
  FOR X IN 1 .. 100 LOOP
    IF SQRT(X) = POWER(X, 1 / 2) THEN
      DBMS_OUTPUT.PUT_LINE(X);
    END IF;
  END LOOP;
END;
The output is just the following: 1, 5, 7, 11, 16, 24, 35, 37, 46, 48, 53, 70, 72, 73.
That is, for only 14 of the first 100 natural numbers is the square root of the number equal to the number raised to the power 1/2.
I think this has to do with the limits of the NUMBER data type. I believe NUMBER precision can only go to 38 digits max. If you try with BINARY_DOUBLE, you'll find all values 1 to 100 will match:
DECLARE
  l_num binary_double := 0;
BEGIN
  LOOP
    l_num := l_num + 1;
    EXIT WHEN l_num > 100;
    IF ( SQRT(l_num) = POWER(l_num, 0.5) ) THEN
      DBMS_OUTPUT.PUT_LINE(l_num);
    ELSE
      DBMS_OUTPUT.PUT_LINE(l_num || ': ' || SQRT(l_num) || ' <> ' || POWER(l_num, 0.5));
    END IF;
  END LOOP;
END;
Output (partial):
1.0E+000
2.0E+000
3.0E+000
...
9.8E+001
9.9E+001
1.0E+002
Another option is to round the results of both SQRT and POWER to, say, 35 digits or fewer (if you must use the NUMBER datatype).
The Oracle documentation somewhat covers this:
Numeric Functions
Numeric functions accept numeric input and return numeric values. Most
numeric functions return NUMBER values that are accurate to 38 decimal
digits. The transcendental functions COS, COSH, EXP, LN,
LOG, SIN, SINH, SQRT, TAN, and TANH are accurate to 36
decimal digits. The transcendental functions ACOS, ASIN, ATAN,
and ATAN2 are accurate to 30 decimal digits.
So SQRT is stated to be accurate to 36 decimal digits; but POWER isn't in the list so it is implied to be accurate to 38 decimal digits. If you look at the values returned by the two functions you can see the discrepancy way down in the least significant digits; e.g. for X = 2:
SQRT(2): 1.41421356237309504880168872420969807857
POWER(2, 1/2): 1.41421356237309504880168872420969807855
Curiously, though, it looks like SQRT is more accurate and it's POWER that is slightly less precise, as you stated. Wolfram Alpha gives:
1.4142135623730950488016887242096980785696718753769480...
(but notice that it also states it's an approximation), which rounds to the same as SQRT; and if you reverse the process with SQRT(2) * SQRT(2) and POWER(POWER(2, 1/2), 2) you get:
(SQRT): 2
(POWER): 1.99999999999999999999999999999999999994
When X is a binary_double rather than a number you get the same value for both:
1.4142135623730951
but you've lost precision; squaring that again gives:
2.0000000000000004
Ultimately any decimal representation of a floating point number has a limit to its precision and will be an approximation. Two functions giving slightly different approximations is perhaps a little confusing, but since SQRT seems to give the closer approximation (despite what the documentation says), presumably as a special case, I'm not sure that's really something to complain about.
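For comparison, the BINARY_DOUBLE figures above are just ordinary IEEE 754 doubles, so any language using that format reproduces them. A quick illustration in Ruby, whose Float is a 64-bit binary double:
root = Math.sqrt(2)
p root          # => 1.4142135623730951
p 2 ** 0.5      # => 1.4142135623730951 (the POWER-style route gives the same double)
p root * root   # => 2.0000000000000004 (squaring does not get back exactly 2)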
So this is weird. I'm in Ruby 1.9.3, and float addition is not working as I expect it would.
0.3 + 0.6 + 0.1 = 0.9999999999999999
0.6 + 0.1 + 0.3 = 1
I've tried this on another machine and get the same result. Any idea why this might happen?
Floating point operations are inexact: they round the result to the nearest representable float value.
That means that each float operation is:
float(a op b) = mathematical(a op b) + rounding-error( a op b )
As suggested by the equation above, the rounding error depends on the operands a and b.
Thus, if you perform the operations in a different order,
float(float(a op b) op c) != float(a op float(b op c))
In other words, floating point operations are not associative.
They are commutative though...
As others have said, transforming the decimal representation 0.1 (that is, 1/10) into a base-2 representation (that is, 1/16 + 1/64 + ...) would lead to an infinite series of digits. So float(0.1) is not exactly equal to 1/10; it also carries a rounding error and is stored as a long series of binary digits, which explains why the following operations have a nonzero rounding error (the mathematical result is not representable in floating point).
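For illustration, you can ask Ruby for the exact value a float literal actually stores:
p 0.1.to_r           # => (3602879701896397/36028797018963968), the value actually stored
p "%.20f" % 0.1      # => "0.10000000000000000555"
p 0.3 + 0.6 == 0.9   # => false, both sides carry different rounding errors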
It has been said many times before but it bears repeating: Floating point numbers are by their very nature approximations of decimal numbers. There are some decimal numbers that cannot be represented precisely due to the way the floating point numbers are stored in binary. Small but perceptible rounding errors will occur.
To avoid this kind of mess, you should always format your numbers to an appropriate number of places for presentation:
'%.3f' % (0.3 + 0.6 + 0.1)
# => "1.000"
'%.3f' % (0.6 + 0.1 + 0.3)
# => "1.000"
This is why using floating point numbers for currency values is risky and you're generally encouraged to use fixed point numbers or regular integers for these things.
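As a minimal sketch in Ruby, either integer cents or BigDecimal sidesteps the problem:
require 'bigdecimal'

# Integer cents stay exact as long as you only ever handle whole cents.
cents = 30 + 60 + 10
p cents        # => 100

# BigDecimal does decimal (base 10) arithmetic, so 0.3 + 0.6 + 0.1 really is 1.
sum = BigDecimal("0.3") + BigDecimal("0.6") + BigDecimal("0.1")
p sum == 1     # => true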
First, the numerals “0.3”, “0.6”, and “0.1” in the source text are converted to floating-point numbers, which I will call a, b, and c. These values are near 0.3, 0.6, and 0.1 but not equal to them, but that is not directly the reason you see different results.
In each floating-point arithmetic operation, there may be a little rounding error, some small number ei. So the exact mathematical results your two expressions calculate are:
(a + b + e0) + c + e1 and
(b + c + e2) + a + e3.
That is, in the first expression, a is added to b, and there is a slight rounding error e0. Then c is added, and there is a slight rounding error e1. In the second expression, b is added to c, and there is a slight rounding error e2. Finally, a is added, and there is a slight rounding error e3.
The reason your results differ is that e0 + e1 ≠ e2 + e3. That is, the rounding that was necessary when a and b were added was different from the rounding that was necessary when b and c were added and/or the roundings that were necessary in the second additions of the two cases were different.
There are rules that govern these errors. If you know the rules, you can make deductions about them that bound the size of the errors in final results.
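Those accumulated errors can be measured by comparing each result against the exact rational sum of the stored values a, b, and c; a small illustration in Ruby:
exact = 0.3.to_r + 0.6.to_r + 0.1.to_r   # exact rational sum of the stored values a, b, c

err1 = ((0.3 + 0.6) + 0.1).to_r - exact  # e0 + e1
err2 = ((0.6 + 0.1) + 0.3).to_r - exact  # e2 + e3
p err1.to_f    # => about -8.3e-17
p err2.to_f    # => about  2.8e-17, a different total rounding error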
This is a common limitation of floating point numbers, due to their being encoded in base 2 instead of base 10. It can be difficult to understand, but once you do, you can easily avoid problems like this. I recommend this guide, which goes in depth to explain it.
For this problem specifically, you might try rounding your result to the nearest millionths place:
result = (0.3+0.6+0.1)
=> 0.9999999999999999
(result*1000000.0).round/1000000.0
=> 1.0
As for why the order matters, it has to do with rounding. When those numbers are turned into floats, they are converted to binary, and all of them become repeating fractions, like ⅓ is in decimal. Since the result gets rounded during each addition, the final answer depends on the order of the additions. It appears that in one of them you get a round-up, whereas in the other you get a round-down. This explains the discrepancy.
It is worth noting what the actual difference is between those two answers: approximately 0.0000000000000001.
In a view you can also use the number_with_precision helper:
result = 0.3 + 0.6 + 0.1
result = number_with_precision result, :precision => 3
Wouldn't it be better to see an error when dividing an odd integer by 2 than an incorrect calculation?
Example in Ruby (I'm guessing it's the same in other languages because ints and floats are common datatypes):
39 / 2 => 19
I get that the output isn't 19.5 because we're asking for the value of an integer divided by an integer, not a float (39.0) divided by an integer. My question is, if the limits of these datatypes inhibit it from calculating the correct value, why output the least correct value?
Correct = 19.5
Correct-ish = 20 (rounded up)
Least correct = 19
Wouldn't it be better to see an error?
Throwing an error would usually be extremely counter-productive, and computationally inefficient in most languages.
And consider that this is often useful behaviour:
total_minutes = 563;
hours = total_minutes / 60;
minutes = total_minutes % 60;
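For instance, with the numbers above (a small Ruby illustration of the same snippet):
total_minutes = 563
hours   = total_minutes / 60   # => 9
minutes = total_minutes % 60   # => 23
puts "#{hours}h #{minutes}m"   # prints "9h 23m"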
Correct = 19.5
Correct-ish = 20 (rounded up)
Least correct = 19
Who said that 20 is more correct than 19?
Among other reasons, to keep the following very useful relationship between the sibling operators of division and modulo:
Quotient: a / b = Q
Remainder: a % b = R
Awesome relationship: a = b*Q + R.
Also so that integer division by two returns the same result as a right shift by one bit and lots of other nice relationships.
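Both relationships are easy to check for the example in the question; a quick Ruby illustration:
a, b = 39, 2
q = a / b            # => 19
r = a % b            # => 1
p a == b * q + r     # => true, the quotient/remainder relationship holds
p((a >> 1) == q)     # => true, dividing a non-negative integer by 2 matches a right shift by one bit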
But the secret, main reason is that C did it this way, and you simply don't argue with C!
If you divide by 2.0, you get the correct result in Ruby.
39 / 2.0 => 19.5
I have two integer variables which I need to divide in order to work the percentage out for something. I have (variablea / variableb) * 100. The problem is that the (variablea / variableb) will be between 0 and 1 so it gets rounded to 0 because it is an int. How can I get round this so the answer isn't always 0?
Try (variablea * 100) / variableb.
This will truncate the result. If you'd rather round to the nearest whole percent, you could do (variablea * 100 + variableb/2) / variableb.
Finally, instead of 100 you could use a constant such as 1000 or 10000 if you'd like more decimal places (just remember to format the number correctly when printing, i.e. scale it back down by 10 or 100).
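A minimal sketch of those variants in Ruby (where / between integers also truncates); variablea and variableb are given hypothetical sample values here:
variablea, variableb = 1, 3   # hypothetical sample values, not from the question

pct         = (variablea * 100) / variableb                  # => 33, truncated
pct_rounded = (variablea * 100 + variableb / 2) / variableb  # => 33, rounded to nearest
per_mille   = (variablea * 1000) / variableb                 # => 333, i.e. 33.3% once scaled

puts "#{pct}%"                # => "33%"
puts "#{per_mille / 10.0}%"   # => "33.3%", scaled back down for printing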