Micropython - maximal decimal number (esp32)?

Can the accuracy of the decimal number be increased to 6 digits? e.g. 50.820257
print(50.82025733333333)
50.82025
I need this to read GPS coordinates.

You can access more of the float by formatting it as follows:
Entering REPL. Use Control-X to exit.
>
MicroPython v1.12-452-g801f7dca7 on 2020-05-14; TinyPICO with ESP32-PICO-D4
Type "help()" for more information.
>>>
>>> x = 12.123456789
>>> type(x)
<class 'float'>
>>> print(x)
12.12346
>>> print('{:.6f}'.format(x))
12.123458
If you're going to log the lat/lon in a file and not do math with it, that should get you there.
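For example, a minimal logging sketch in MicroPython (the coordinate values and the track.csv filename are hypothetical):
lat = 50.820257
lon = -1.404351
# Format both coordinates with 6 decimal places before writing them out.
line = '{:.6f},{:.6f}'.format(lat, lon)
with open('track.csv', 'a') as f:
    f.write(line + '\n')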
There is a more general discussion on Float Precision in GitHub: https://github.com/micropython/micropython/issues/4380

How to compare matrices in GNU Octave [duplicate]

I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to checking whether those points are in the matrix, MATLAB can't recognize them even though they exist.
In the following code, intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
    obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
    [vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result, which should be done by the commands below:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
The result as expected:
>> points
points =
Empty matrix: 0-by-2
The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB only displays 4 digits after the decimal point by default, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. It also means that initializing a value directly versus arriving at it by computation can give slightly different results. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
                (abs(points(:,2)-vertex1(2)) > tolerance), :)
(and similarly for the vertex2 comparison)
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) b, and an exponent e. For the double-precision floating-point format, each number is represented by 64 bits laid out in memory as follows: 1 sign bit (bit 63), 11 exponent bits (bits 62-52), and 52 significand bits (bits 51-0).
The real value can then be found with the following formula:
value = (-1)^s * (1 + b/2^52) * 2^(e-1023)
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and its solution) is described in the "Safe Comparisons" section.
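The same idea expressed in Python, for comparison (a sketch, not taken from the article): test with a tolerance rather than with exact equality.
import math

a = 0.1 + 0.1 + 0.1
b = 0.3
print(a == b)               # False: exact comparison fails
print(math.isclose(a, b))   # True: relative-tolerance comparison (Python 3.5+)
print(abs(a - b) < 1e-9)    # True: simple absolute-tolerance comparison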
Type
format long g
This command will show the FULL value of the number. It's likely to be something like 24.00000021321 ~= 24.00000123124.
Try writing
0.1 + 0.1 + 0.1 == 0.3
Warning: you might be surprised by the result!
Maybe the two numbers are really 24.0 and 24.000000001, but you're not seeing all the decimal places.
Check out the MATLAB eps function.
MATLAB uses floating-point math with up to 16 digits of precision (only 5 are displayed).

Conditional expectation with sympy

How can I calculate the conditional expectation of a random variable in sympy? I read this and tried:
from sympy.stats import *
v = Uniform("v",0,1)
E(v)
this correctly returns 1/2, but then:
E(v, v>1/2)
returns NaN. I also tried:
E(v, where(v > 1/2))
it returned 1/2, which is incorrect (it should be 3/4).
What am I doing wrong?
This issue (which I see you already reported) is specific to uniformly distributed random variables. (There's also an older issue involving Uniform.) For other distributions, what you did works correctly:
>>> from sympy.stats import *
>>> x = Exponential("x", 1)
>>> E(x, x < 2)
-3/(-1 + exp(2)) + exp(2)/(-1 + exp(2))
As for the uniform type, a workaround for now is to remember that conditioning a uniformly distributed random variable to some interval creates another uniformly distributed random variable.
So the value of E(v, v > 1/2) can be found by computing
E(Uniform("x", 1/2, 1))
which returns 0.75.
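Put together, a minimal sketch of the workaround (using Rational instead of 1/2 to keep the result exact rather than the float 0.75):
from sympy import Rational
from sympy.stats import Uniform, E

# Conditioning Uniform(0, 1) on v > 1/2 gives another uniform variable
# on the interval (1/2, 1), so take its (unconditional) expectation.
w = Uniform("w", Rational(1, 2), 1)
print(E(w))  # 3/4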
Caution: if working interactively, one may eventually want to import from core SymPy in addition to its stats module. Since E stands for Euler's number 2.718... in SymPy, importing both with * can leave you unable to compute expectations, failing with:
TypeError: 'Exp1' object is not callable
So one either has to be more specific about what to import, or use a namespace for one or both modules. My preferred solution is
from sympy import *
import sympy.stats as st
So that st.E is expectation while E is 2.718...
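A quick sketch of that import style:
from sympy import *       # here E is Euler's number, exp(1)
import sympy.stats as st  # expectation stays available as st.E

x = st.Exponential("x", 1)
print(st.E(x))  # 1, the expectation of the distribution
print(E)        # E, i.e. Euler's number 2.718...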

generation of random binary numbers in python

I want to print or plot a list of randomly generated binary numbers. I can already print and plot random numbers in the range 0-5000, and my code is as follows:
import random
import matplotlib.pyplot as plt

# generation of random numbers: one call samples 5000 distinct values
a = random.sample(range(0, 5000), 5000)
print(a)
plt.plot(a)
plt.show()
This code is running perfectly.
but I want binary numbers instead of decimal numbers.
Kindly help me in this regard.
To get the binary representation of your randomly generated number (a decimal int), use bin().
For instance, the following code will print 10 random numbers in decimal and binary:
import random
for i in range(0, 10):
    a = random.randint(0, 5000)
    print a, bin(a)
output:
1465 0b10110111001
1624 0b11001011000
2963 0b101110010011
510 0b111111110
3653 0b111001000101
3671 0b111001010111
2624 0b101001000000
4412 0b1000100111100
3910 0b111101000110
2582 0b101000010110
Online demo - https://repl.it/Dx3O
NOTE: In your example I saw some usage of matplotlib; however, you weren't explicitly asking about matplotlib, so I answered more generally.
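If you do want the binary strings and a plot together, here is a sketch along the lines of the original code (Python 3 here; format() gives the bits without the 0b prefix):
import random
import matplotlib.pyplot as plt

values = [random.randint(0, 5000) for _ in range(5000)]
binary = [format(v, 'b') for v in values]   # e.g. '101110010011'
print(binary[:5])

# Plot the underlying integers; the binary form is just a different display.
plt.plot(values)
plt.show()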

How to change a negative number to zero in python without using decision structures

I have a program that determines the number of points you get per day, for 5 days from an event.
source code:
total = 0
for x in range(5):
    points = int(input('How many points did you get today?'))
    total = total + points
print('You got {0} points this event'.format(total))
My question is: how do I make any number below or equal to zero count as 0, without using decision statements (ifs, cases; I think while and for loops are not allowed for this either)?
Can you use built-in functions? Because this is normally done using:
max(0, points)
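Applied to the program above, that might look like:
total = 0
for x in range(5):
    points = int(input('How many points did you get today?'))
    total = total + max(0, points)   # negative scores count as zero
print('You got {0} points this event'.format(total))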
>>> f = lambda a: (abs(a) + a) / 2
>>> f(3)
3
>>> f(-3)
0
>>> f(0)
0
Since I don't see boolean operators as a restriction, you can use:
points * (points>0)
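For example:
points = -7
print(points * (points > 0))   # 0: False acts as 0 in arithmetic
points = 12
print(points * (points > 0))   # 12: True acts as 1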

Python: Is there a way to keep an automatic conversion from int to long int from happening?

Consider the example:
>>> from sys import maxint
>>> type(maxint)
<type 'int'>
>>> print maxint
9223372036854775807
>>> type(maxint+2)
<type 'long'>
>>> print maxint+2
9223372036854775809
>>> type((maxint+2)+maxint)
<type 'long'>
>>> print ((maxint+2)+maxint)
18446744073709551616
Python will auto-promote from an int, which in this case is a 64-bit integer value (OS X, Python 2.6.1), to a Python long integer, which has arbitrary precision. Even though the types are not the same, they are similar, and Python allows the usual numeric operators to be used. Usually this is helpful, for example to be able to use code that expects 64-bit values on a 32-bit machine.
However, the arbitrary precision operations are vastly slower than the native int operations. For example:
>>> print maxint**maxint # execution so long it is essentially a crash
Is there a way to defeat or disallow the auto-promotion of a Python int to a Python long?
So you want to throw out the One True Way and go retro on overflows. Silly you.
There is no good upside to the C / C++ / C# / Java style of overflow. It does not reliably raise an error condition. In C and C++ signed overflow is undefined behavior (only unsigned arithmetic is defined to wrap modulo 2^n), and it is a known security risk. Why do you want this?
The Python method of seamlessly overflowing to a long is the better way. I believe the same behavior was adopted by Perl 6.
You can use the Decimal module to get overflows that are trapped at well-defined limits:
>>> from decimal import *
>>> from sys import maxint
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999, capitals=1,
flags=[], traps=[DivisionByZero, Overflow, InvalidOperation])
>>> d=Decimal(maxint)
>>> d
Decimal('9223372036854775807')
>>> e=Decimal(maxint)
>>> f=d**e
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 2225, in __pow__
ans = ans._fix(context)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 1589, in _fix
return context._raise_error(Overflow, 'above Emax', self._sign)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 3680, in _raise_error
raise error(explanation)
decimal.Overflow: above Emax
You can set your precision and boundary conditions with Decimal classes, and the overflow is raised almost immediately. You can set what you trap for. You can set your max and min. Really -- how does it get better than this? (I don't know about relative speed, to be honest, but I suspect it is faster than numpy, though obviously slower than native ints...)
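For instance, a sketch of tightening the context so the trap trips at roughly 64-bit range (the Emax value here is my illustration, not a canonical choice):
from decimal import Decimal, getcontext, Overflow

ctx = getcontext()
ctx.prec = 10
ctx.Emax = 18                # about the magnitude of a 64-bit int
ctx.traps[Overflow] = True   # already on by default, shown for clarity

print(Decimal(2) ** 60)      # 1.152921505E+18, still within Emax
# Decimal(2) ** 64 would now raise decimal.Overflow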
For your specific issue of image processing, this sounds like a natural application for some form of saturation arithmetic. You might also consider, if you are getting overflows in 32-bit arithmetic, checking operands along the way in the obvious cases: pow, **, *. And you might consider overloaded operators that check for the conditions you don't want.
If Decimal, saturation, or overloaded operators don't work -- you can write an extension. Heaven help you if you want to throw out the Python way of overflow to go retro...
If you want arithmetic overflows to overflow within e.g. 32 bits, you could use e.g. numpy.uint32.
That gives you a warning when an overflow occurs.
>>> import numpy
>>> numpy.uint32(2**32-3) + numpy.uint32(5)
Warning: overflow encountered in ulong_scalars
2
I tested its speed though:
>\python26\python.exe -m timeit "2**16 + 2**2"
1000000 loops, best of 3: 0.118 usec per loop
>\python26\python.exe -m timeit "2**67 + 2**65"
1000000 loops, best of 3: 0.234 usec per loop
>\python26\python.exe -m timeit -s "import numpy; numpy.seterr('ignore')" "numpy.uint32(2)**numpy.uint32(67) + numpy.uint32(2)**numpy.uint32(65)"
10000 loops, best of 3: 34.7 usec per loop
It's not looking good for speed.
You can force your values to return to normal ints if you include a num = int(num) occasionally in your algorithm. If the value is a long but fits in a native int, it will demote down to an int. If the value doesn't fit in a native int, it will remain a long.
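A quick check of that (assuming Python 2, where the int/long split exists):
>>> from sys import maxint
>>> n = (maxint + 2) - maxint   # arithmetic on longs stays long...
>>> type(n)
<type 'long'>
>>> type(int(n))                # ...but int() demotes it when it fits
<type 'int'>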
Int vs. long is a historical legacy - in Python 3, every int is a "long". If your script's speed is limited by int computation, it is likely that you are doing it wrong.
To give you a proper answer, we need more information on what you are trying to do.
Well, if you don't care about accuracy, you could do all of your math ops modulo maxint.
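As an illustration of the idea (my variant: masking to a power of two rather than literally reducing modulo maxint, which is what C-style 32-bit wraparound amounts to):
MASK32 = 0xFFFFFFFF

def wrap32(n):
    # Emulate unsigned 32-bit wraparound by masking after each operation.
    return n & MASK32

print(wrap32(0xFFFFFFFE + 5))   # 3, as C unsigned arithmetic would give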
I don't know if it would necessarily be faster, but you could use numpy arrays of one element instead of ints.
If the specific calculation you are concerned about is integer exponentiation, then there are some inferences we can draw:
import math
import sys

def smart_pow(mantissa, exponent, limit=int(math.ceil(math.log(sys.maxint) / math.log(2)))):
    if mantissa in (0, 1):
        return mantissa
    if exponent > limit:
        if mantissa == -1:
            return -1 if exponent & 1 else 1
        if mantissa > 1:
            return sys.maxint
        else:
            return (-1 - sys.maxint) if exponent & 1 else sys.maxint
    else:  # this *might* overflow, but at least it won't take long
        return mantissa ** exponent
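A quick usage check under the same Python 2 assumptions:
>>> smart_pow(3, 5)        # small exponent: computed exactly
243
>>> smart_pow(2, 10000)    # would blow up, so it saturates at maxint
9223372036854775807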
