Cannot programmatically change boost multiprecision mpfr precision - boost

Keep in mind that precision is based on the total number of significant digits, not on the number of decimal places. I need a way to set the number of decimal places, but all I can find is precision, so I am trying to work with it by accounting for the number of digits in the integer part. Even so, it is not working.
I want to do some math and return the result with a set precision. The function takes strings holding very large numbers with very many decimal digits and returns the product as a string, set to the number of decimals passed in as precision:
#include <boost/multiprecision/mpfr.hpp>
#include <QString>
QString multiply(const QString &mThis, const QString &mThat, const unsigned int &precison)
{
typedef boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<0>> my_mpfr_float;
my_mpfr_float aThis(mThis.toStdString());
my_mpfr_float aThat(mThat.toStdString());
my_mpfr_float::default_precision(precison);
my_mpfr_float ans = (aThis * aThat);
return QString::fromStdString(ans.str());
}
I have tried it without the typedef, with the same problem:
MathWizard::multiply("123456789.123456789", "123456789.123456789", 20);
The operands each have 18 digits of precision (9 + 9), so perhaps I should ask for 30.
This call returns 22 digits,
15241578780673678.51562
instead of the 20 requested:
15241578780673678.516
So why is it off by 2?
I would like to change the precision after the math, but it seems you have to set it before, unlike the examples that Boost shows; even then it does not return the correct value, and changing the precision afterwards does not change the value.
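For reference, here is a minimal reworked sketch (my own guess, untested): set default_precision before constructing the operands, since each variable appears to capture the default precision at construction, and use the digits/format arguments of str() for the output. With std::ios_base::fixed, the digits argument should count digits after the decimal point, which is what I actually want:
#include <boost/multiprecision/mpfr.hpp>
#include <QString>
QString multiply(const QString &mThis, const QString &mThat, const unsigned int &precison)
{
typedef boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<0>> my_mpfr_float;
// set the working precision BEFORE constructing any variables;
// variables pick up the default precision when they are created
my_mpfr_float::default_precision(precison);
my_mpfr_float aThis(mThis.toStdString());
my_mpfr_float aThat(mThat.toStdString());
my_mpfr_float ans = aThis * aThat;
// str(digits, flags): with std::ios_base::fixed the digits argument
// should count decimal places rather than significant digits
return QString::fromStdString(ans.str(precison, std::ios_base::fixed));
}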
Update: Compare what I did to what they say works in this Post:
how to change at runtime number precision with boost::multiprecision
typedef number<gmp_float<0> > mpf_float;
mpfr_float a = 2;
mpfr_float::default_precision(1000);
std::cout << mpfr_float::default_precision() << std::endl;
std::cout << sqrt(a) << std::endl; // print root-2
I have noticed differences between gmp_float, mpf_float (using boost/multiprecision/gmp.hpp), and mpfr_float; mpfr_float gives me closer precision. For example, taking the number (1/137):
mpf_float, with precision set to only 1:
0.007299270072992700729927007299270072992700729927007299270073
and 23 digits when set to 13:
0.00729927007299270072993
mpfr_float, with precision set to only 1:
0.007299270072992700729929
and 16 digits when set to 13:
0.0072992700729928
With a precision of only 1, I would expect the answer to have far fewer digits.
The other data types behave similarly (I tried them all), so this code works the same for all the data types described here:
Boost 1.69.0: Multiprecision, Chapter 1
I should also point out that I rely on Qt, since this function is used in a QtQuick QML Felgo app. In fact, I could not figure out how to convert the result to a string without it coming out in exponent notation, even though I used ans.str() both times; my guess is that QString::fromStdString does something different than std::string(ans.str()).
I figure that if I cannot work this out, I will just do string rounding to get the correct precision:
std::stringstream ss;
ss.imbue(std::locale(""));
ss << std::fixed << std::setprecision(int(precison)) << ans.str();
qDebug() << "divide(" << mThis << "/" << mThat << " # " << precison << " =" << QString::fromStdString(ss.str()) << ")";
return QString::fromStdString(ss.str());
I still could not get away without using QString, and this did not work either: it returns 16 digits instead of 13. I know that is a different question; I post it only to show that my alternatives do not work any better at this point. Also note that the divide function works the same way as multiply; I used that example to show that the specific operation has nothing to do with this. None of the samples shown to me seem to work correctly, and I do not understand why (see the sketch after the list below), so just to make the steps clear:
Create the back end: typedef boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<0>> my_mpfr_float;
Set Precision: my_mpfr_float::default_precision(precision);
Set initial value of variable: my_mpfr_float aThis(mThis.toStdString());
Do some math if you want, return value with correct Precision.
I must be missing something.
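For what it's worth, I now suspect the string-rounding attempt above fails because ans.str() is already a std::string, and std::setprecision only affects numeric insertions. A minimal sketch of the variant I would try instead (streaming the number itself; untested):
std::stringstream ss;
ss.imbue(std::locale(""));
// stream the number itself, not ans.str(), so fixed/setprecision apply
ss << std::fixed << std::setprecision(int(precison)) << ans;
return QString::fromStdString(ss.str());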
I know I can just get the length of the string and, if it is longer than the precision, check whether the digit at precision + 1 is 5 or greater; if so, add 1 to the digit at the precision position, return a substring of (0, precision), and be done with all this doing-it-the-correct-way. I could even do that in JavaScript after the return and just forget about the correct way, but I still think I am missing something, because I cannot believe this is how it is actually supposed to work.
Submitted Bug Report: https://github.com/boostorg/multiprecision/issues/127

Related

How to change a boost::multiprecision::cpp_int from big endian to little endian

I have a boost::multiprecision::cpp_int in big endian and have to change it to little endian. How can I do that? I tried with boost::endian::conversion but that did not work.
boost::multiprecision::cpp_int bigEndianInt("0xe35fa931a0000");
boost::multiprecision::cpp_int littleEndianInt;
littleEndianInt = boost::endian::endian_reverse(bigEndianInt);
The memory layout of Boost multiprecision types is an implementation detail, so you cannot assume much about it anyway (the types are not supposed to be bitwise serializable).
Just read a random section of the docs:
MinBits
Determines the number of Bits to store directly within the object before resorting to dynamic memory allocation. When zero, this field is determined automatically based on how many bits can be stored in union with the dynamic storage header: setting a larger value may improve performance as larger integer values will be stored internally before memory allocation is required.
It's not immediately clear that you have any chance at some level of "normal int behaviour" in memory layout. The only exception would be when MinBits==MaxBits.
Indeed, we can static_assert that the size of cpp_int with such backend configs match the corresponding byte-sizes.
It turns out that there's even a tag in the backend base class to indicate "triviality" (this is truly promising): trivial_tag, so let's use it:
Live On Coliru
#include <boost/multiprecision/cpp_int.hpp>
namespace mp = boost::multiprecision;
template <int bits> using simple_be =
mp::cpp_int_backend<bits, bits, mp::unsigned_magnitude>;
template <int bits> using my_int =
mp::number<simple_be<bits>, mp::et_off>;
using my_int8_t = my_int<8>;
using my_int16_t = my_int<16>;
using my_int32_t = my_int<32>;
using my_int64_t = my_int<64>;
using my_int128_t = my_int<128>;
using my_int192_t = my_int<192>;
using my_int256_t = my_int<256>;
template <typename Num>
constexpr bool is_trivial_v = Num::backend_type::trivial_tag::value;
int main() {
static_assert(sizeof(my_int8_t) == 1);
static_assert(sizeof(my_int16_t) == 2);
static_assert(sizeof(my_int32_t) == 4);
static_assert(sizeof(my_int64_t) == 8);
static_assert(sizeof(my_int128_t) == 16);
static_assert(is_trivial_v<my_int8_t>);
static_assert(is_trivial_v<my_int16_t>);
static_assert(is_trivial_v<my_int32_t>);
static_assert(is_trivial_v<my_int64_t>);
static_assert(is_trivial_v<my_int128_t>);
// however it doesn't scale
static_assert(sizeof(my_int192_t) != 24);
static_assert(sizeof(my_int256_t) != 32);
static_assert(not is_trivial_v<my_int192_t>);
static_assert(not is_trivial_v<my_int256_t>);
}
Concluding: you can have a trivial int representation up to a certain point, after which you get the allocator-based dynamic-limb implementation no matter what.
Note that using unsigned_packed instead of unsigned_magnitude representation never leads to a trivial backend implementation.
Note that triviality might depend on compiler/platform choices (it's likely that cpp_128_t uses some builtin compiler/standard library support on GCC, e.g.)
Given this, you MIGHT be able to pull off what you wanted to do with hacks IF your backend configuration supports triviality. Sadly, I think it requires you to manually overload endian_reverse for the 128-bit case, because the GCC builtins do not include __builtin_bswap128, nor does Boost Endian define one.
I'd suggest working off the information here: How to make GCC generate bswap instruction for big endian store without builtins?
Final Demo (not complete)
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/endian/buffers.hpp>
#include <iostream>
namespace mp = boost::multiprecision;
namespace be = boost::endian;
template <int bits> void check() {
using T = mp::number<mp::cpp_int_backend<bits, bits, mp::unsigned_magnitude>, mp::et_off>;
static_assert(sizeof(T) == bits/8);
static_assert(T::backend_type::trivial_tag::value);
be::endian_buffer<be::order::big, T, bits, be::align::no> buf;
buf = T("0x0102030405060708090a0b0c0d0e0f00");
std::cout << std::hex << buf.value() << "\n";
}
int main() {
check<128>();
}
(Changing be::order::big to be::order::native obviously makes it compile. The other way to complete it would be to have an ADL accessible overload for endian_reverse for your int type.)
This is both trivial and, in the general case, unanswerable. Let me explain:
For a general N-bit integer, where N is a large number, there is unlikely to be any well defined byte order, indeed even for 64 and 128 bit integers there are more than 2 possible orders in use: https://en.wikipedia.org/wiki/Endianness#Middle-endian.
On any platform, with any native endianness, you can always extract the bytes of a cpp_int; the first example here: https://www.boost.org/doc/libs/1_73_0/libs/multiprecision/doc/html/boost_multiprecision/tut/import_export.html#boost_multiprecision.tut.import_export.examples shows you how. When exporting bytes like this, they are always most significant byte first, so you can subsequently rearrange them how you wish. You should not, however, rearrange them and load them back into a cpp_int, as the class won't know what to do with the result!
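As a short sketch of that documented export step (my illustration along the lines of the import_export example, not code from the answer):
#include <boost/multiprecision/cpp_int.hpp>
#include <algorithm>
#include <iterator>
#include <vector>
int main() {
boost::multiprecision::cpp_int v("0xe35fa931a0000");
std::vector<unsigned char> bytes;
// export_bits emits 8-bit chunks, most significant byte first
export_bits(v, std::back_inserter(bytes), 8);
// reversing yields a little-endian byte sequence (do not re-import this!)
std::reverse(bytes.begin(), bytes.end());
}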
If you know that the value is small enough to fit into a native integer type, then you can simply cast to the native integer and use a system API on the result. As in endian_reverse(static_cast<int64_t>(my_cpp_int)). Again, don't assign the result back into a cpp_int as it requires native byte order.
If you wish to check whether a value is small enough to fit in an N-bit integer for the approach above, you can use the msb function, which returns the index of the most significant bit in the cpp_int; add one to that to obtain the number of bits used, and filter out the zero case. The code looks like:
unsigned bits_used = my_cpp_int.is_zero() ? 0 : msb(my_cpp_int) + 1;
Note that all of the above use completely portable code - no hacking of the underlying implementation is required.
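Putting the last two points together, a hedged sketch (my combination of the snippets above, not code from the answer) that byte-swaps only when the value fits a native 64-bit integer:
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/endian/conversion.hpp>
#include <cstdint>
#include <stdexcept>
std::uint64_t reverse_if_small(const boost::multiprecision::cpp_int& v) {
unsigned bits_used = v.is_zero() ? 0 : msb(v) + 1;
if (bits_used > 64)
throw std::range_error("value too large for a native byte swap");
// cast to a native integer and swap there; keep the result in native types
return boost::endian::endian_reverse(static_cast<std::uint64_t>(v));
}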

float, round to 2 decimal places - Processing

I started learning Processing a short time ago and I came across a problem: when dividing 199.999 by 200 I want the outcome to have 2 decimals (so the outcome should be 1, rounded off). Without formatting, the outcome is 0.999995.
Code for formatting to a String with 2 decimals:
float money = 199.999;
int munten = 200;
String calc1 = nf(money/munten,0,2);
println(calc1);
float calc2 = float(calc1);
println(calc2);
Prints:
1,0
NaN
I think float() won't work because there is a comma in the String instead of a dot, but I'm not sure. How can I round a number to 2 decimals and still have it be a float?
Thanks for taking the time to read this.
When I run your example on Processing 3.3.6 / macOS 10.12 (US locale), I get "1.00" and "1.0". Your results could be due to your locale's number formatting settings: nf() creates output strings (such as "1,0") that float() then fails to parse.
float money;
int munten;
String s;
float f;
money = 199.999;
munten = 200;
s = nf(money/munten, 0, 2);
println(s); // "1.00" -- or "1,0" etc. in different os language locales
f = float(s);
println(f); // "1.0" -- or NaN error if above is not 1.0 format
f = money/munten;
println(f); // 0.999995
s = nf(f, 0, 2);
println(s); // 1.00 -- or local format
You can see what should be happening more clearly in the second bit of code -- don't try to convert into a String and then back out again; don't store numbers in Strings. Instead, keep everything in numeric variables up until the moment you need to display.
Also keep in mind that nf() isn't really for rounding precision, although it is often used that way:
nf() is used to add zeros to the left and/or right of a number. This is typically for aligning a list of numbers. To remove digits from a floating-point number, use the int(), ceil(), floor(), or round() functions. https://processing.org/reference/nf_.html
If you need to work around your locale, you can use Java String formatting in Processing to do so:
float fval = 199.999/200;
println(fval); // 0.999995
String s = String.format(java.util.Locale.US,"%.2f", fval);
println(s); // 1.00
See https://stackoverflow.com/a/5383201/7207622 for more discussion of the Java approach.

Too large const on Arduino UNO

I'm trying to execute an algorithm on an Arduino UNO. It needs a const table with some large numbers, and sometimes I get overflow values. This is the case for this number: 628331966747.0.
Okay, this is a big one, but its type is float (32-bit), whose maximum is about 3.4028235e38. So it should work, theoretically?
What can I do about this? Do you know a solution?
EDIT: On the Arduino UNO, double is exactly the same type as float (32 bits).
Here is code that leads to the error:
float A;
void setup() {
A = 628331966747.0;
Serial.begin(9600);
}
void loop() {
Serial.println(A);
delay(1000);
}
It prints "ovf, ovf, ..., ovf".
There is nothing wrong with the constant itself (except for its rather optimistic number of significant figures); the problem is with the Arduino library's implementation of floating point printing. Print::printFloat() contains the following pre-condition tests:
if (isnan(number)) return print("nan");
if (isinf(number)) return print("inf");
if (number > 4294967040.0) return print ("ovf"); // constant determined empirically
if (number <-4294967040.0) return print ("ovf"); // constant determined empirically
It seems that the range of printable values is deliberately restricted, presumably in order to reduce complexity and code size. The subsequent code reveals why:
// Extract the integer part of the number and print it
unsigned long int_part = (unsigned long)number;
double remainder = number - (double)int_part;
n += print(int_part);
The somewhat simplistic implementation requires that the absolute value of the integer part itself fit a 32-bit integer; 628331966747 is far larger than the 32-bit maximum of 4294967295, hence the "ovf" output.
The worrying thing perhaps is the comment "constant determined empirically", which rather suggests that the values were arrived at by trial and error rather than by an understanding of the mathematics! One has to wonder why these values are not defined in terms of ULONG_MAX.
There is a proposed "fix" described here, but it will not work, at least because it applies the integer abs() function to the double parameter number, which will only work if the integer part is less than the even more restrictive INT_MAX. The author has posted a link to a zip file containing a fix that looks more likely to work (there is evidence at least of testing!).
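As a workaround sketch (my own illustration, not part of the answer): print large values in scientific notation yourself, so the integer part handed to Print stays small. This assumes a positive, finite value, and the helper name is hypothetical:
// hedged sketch: manual scientific notation for values beyond printFloat's range
void printLargeFloat(float x) {
// normalize the mantissa into [1, 10), counting decades
int exponent = 0;
while (x >= 10.0f) { x /= 10.0f; ++exponent; }
Serial.print(x, 6); // the mantissa is well within printFloat's range
Serial.print("e+");
Serial.println(exponent);
}
With this, printLargeFloat(628331966747.0) would print roughly 6.283320e+11; only about seven significant figures survive in a 32-bit float anyway.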

How to get precision (number of digits past decimal) from a Ruby BigDecimal object?

Given the following expression for a new BigDecimal object:
b = BigDecimal.new("3.3")
How can I get the precision that has been defined for it? I would like a method that returns 1, as there is 1 digit after the decimal point. I'm asking because b.precision and b.digits don't work.
Thanks to Stefan, a method for dealing with such information is BigDecimal#precs. However, given a BigDecimal object that comes from a database, I don't know the precision of that database object. I have tried the following, but it does not seem useful for my situation:
b = BigDecimal.new(3.14, 2)
b.precs
=> [18, 27]
How can I retrieve that 2 (the precision argument)?
In Ruby 2.2.2 (and, I'm guessing, in prior versions), you can't get
back the precision that was given to BigDecimal::new. That's
because it is used in some computations; only the result of those
computations is stored. This doc comment is a clue:
The actual number of significant digits used in computation is
usually larger than the specified number.
Let's look at the source to see what's going on. BigDecimal_new
extracts the parameters, does some limit and type checking, and calls
VpAlloc. mf holds the digits argument to BigDecimal::new:
return VpAlloc(mf, RSTRING_PTR(iniValue));
In VpAlloc, mf gets
renamed to mx:
VpAlloc(size_t mx, const char *szVal)
The very first thing VpAlloc does is to round mx (the precision) up to
the nearest multiple of BASE_FIG:
mx = (mx + BASE_FIG - 1) / BASE_FIG; /* Determine allocation unit. */
if (mx == 0) ++mx;
BASE_FIG is equivalent to RMPD_COMPONENT_FIGURES, which has a platform-dependent value of 38, 19, 9, 4, or 2.
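As a concrete illustration (my arithmetic, not from the source): with BASE_FIG == 9, a requested precision of 2 becomes mx = (2 + 9 - 1) / 9 = 1 allocation unit, i.e. nine decimal digits of storage, so the original 2 is already unrecoverable at this point.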
There are further computations with mx before it is stored in the
BigDecimal being created, but we can already see that the original
argument passed to ::new is destroyed and not recoverable.

Crash when casting the result of arc4random() to Int

I've written a simple Bag class. A Bag is filled with a fixed ratio of Temperature enums. It allows you to grab one at random and automatically refills itself when empty. It looks like this:
class Bag {
var items = Temperature[]()
init () {
refill()
}
func grab()-> Temperature {
if items.isEmpty {
refill()
}
var i = Int(arc4random()) % items.count
return items.removeAtIndex(i)
}
func refill() {
items.append(.Normal)
items.append(.Hot)
items.append(.Hot)
items.append(.Cold)
items.append(.Cold)
}
}
The Temperature enum looks like this:
enum Temperature: Int {
case Normal, Hot, Cold
}
My GameScene:SKScene has a constant instance property bag:Bag. (I've tried with a variable as well.) When I need a new temperature I call bag.grab(), once in didMoveToView and when appropriate in touchesEnded.
Randomly this call crashes on the if items.isEmpty line in Bag.grab(). The error is EXC_BAD_INSTRUCTION. Checking the debugger shows items is size=1 and [0] = (AppName.Temperature) <invalid> (0x10).
Edit Looks like I don't understand the debugger info. Even valid arrays show size=1 and unrelated values for [0] =. So no help there.
I can't get it to crash isolated in a Playground. It's probably something obvious but I'm stumped.
The arc4random function returns a UInt32. If you get a value higher than Int.max, the Int(...) cast will crash.
Using
Int(arc4random_uniform(UInt32(items.count)))
should be a better solution.
(Blame the strange crash messages in the Alpha version...)
I found that the best way to solve this is by using rand() instead of arc4random()
the code, in your case, could be:
var i = Int(rand()) % items.count
This method will generate a random Int value between the given minimum and maximum
func randomInt(min: Int, max:Int) -> Int {
return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
The crash that you were experiencing is due to the fact that Swift detected a type inconsistency at runtime.
Since Int != UInt32 you will have to first type cast the input argument of arc4random_uniform before you can compute the random number.
Swift doesn't allow casting from one integer type to another if the result of the cast doesn't fit. E.g. the following code will work fine:
let x = 32
let y = UInt8(x)
Why? Because 32 is a possible value for an int of type UInt8. But the following code will fail:
let x = 332
let y = UInt8(x)
That's because you cannot assign 332 to an unsigned 8 bit int type, it can only take values 0 to 255 and nothing else.
When you do casts in C, the int is simply truncated, which may be unexpected or undesired if the programmer is not aware that truncation can take place. Swift handles things a bit differently here: it will allow such casts as long as no truncation takes place, but if there is truncation you get a runtime exception. If you think truncation is okay, then you must do the truncation yourself to let Swift know that it is intended behavior; otherwise Swift must assume it is accidental.
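For contrast, a small C++ sketch of the silent C-style truncation described above (my illustration, not from the answer):
#include <cstdint>
#include <iostream>
int main() {
int x = 332;
// the conversion silently keeps only the low 8 bits: 332 % 256 == 76
std::uint8_t y = static_cast<std::uint8_t>(x);
std::cout << int(y) << "\n"; // prints 76, no trap
}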
This is even documented (documentation of UnsignedInteger):
Convert from Swift's widest unsigned integer type,
trapping on overflow.
And what you see is the "overflow trapping", which is poorly done as, of course, one could have made that trap actually explain what's going on.
Assuming that items never has more than 2^32 elements (a bit more than 4 billion), the following code is safe:
var i = Int(arc4random() % UInt32(items.count))
If it can have more than 2^32 elements, you get another problem anyway as then you need a different random number function that produces random numbers beyond 2^32.
This crash is only possible on 32-bit systems. Int changes between 32-bits (Int32) and 64-bits (Int64) depending on the device architecture (see the docs).
UInt32's max is 2^32 − 1. Int64's max is 2^63 − 1, so Int64 can easily handle UInt32.max. However, Int32's max is 2^31 − 1, which means UInt32 can handle numbers greater than Int32 can, and trying to create an Int32 from a number greater than 2^31-1 will create an overflow.
I confirmed this by trying to compile the line Int(UInt32.max). On the simulators and newer devices, this compiles just fine. But I connected my old iPod Touch (32-bit device) and got this compiler error:
Integer overflows when converted from UInt32 to Int
Xcode won't even compile this line for 32-bit devices, which is likely the crash that is happening at runtime. Many of the other answers in this post are good solutions, so I won't add or copy those. I just felt that this question was missing a detailed explanation of what was going on.
This will automatically create a random Int for you:
var i = random() % items.count
i is of Int type, so no conversion necessary!
You can use
Int(rand())
To avoid getting the same random numbers every time the app starts, you can seed the generator with srand():
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
let randomNumber: Int = Int(rand()) % items.count
