How can I reduce error in decision-making when branching on floating-point comparisons? - floating-point-precision

I am working with floating-point arithmetic that involves decision making through conditionals such as if...else. The algorithm works, but I suspect it has not been optimized to get the best results. I want to know how I can improve numerical stability by reducing the error in floating-point comparisons. I'm using the C language in my project. Any suggestions will be greatly appreciated. Thanks

If you need better precision than one of the built-in floating-point formats, then a third-party library or creating your own number system are about the only options. GNU Multiple Precision (GMP) is one such option.
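On the comparison side of the question specifically, the usual technique is to avoid exact equality tests on floats and compare within a tolerance instead. Below is a minimal C sketch of a combined absolute/relative check; the function name and the threshold values are illustrative choices, not standard APIs, and would need tuning for your data.

```c
#include <math.h>
#include <float.h>
#include <stdbool.h>

/* Returns true if a and b are close enough to treat as equal.
 * abs_eps guards comparisons near zero, where a purely relative
 * test breaks down; rel_eps scales with the magnitude of the inputs.
 * Both thresholds are application-specific choices, not universal constants. */
static bool nearly_equal(double a, double b, double abs_eps, double rel_eps)
{
    double diff = fabs(a - b);
    if (diff <= abs_eps)
        return true;
    double largest = fmax(fabs(a), fabs(b));
    return diff <= largest * rel_eps;
}

/* Usage in a branch, instead of (x == y):
 *   if (nearly_equal(x, y, 1e-12, 4 * DBL_EPSILON)) { ... }
 */
```

The absolute test handles values near zero; the relative test handles large magnitudes, where a fixed epsilon would be smaller than the spacing between adjacent doubles.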

Related

Fixed-point real-number arithmetic support for Eigen/Eigen3

I'm going to raise again a very general question relating to the Eigen/Eigen3 matrix library's support for different "fields/representations" in its matrix operations.
I've analyzed the Eigen matrix template library a bit, and so far I've only seen support for floating-point real arithmetic (that is, IEEE 754 single-precision 32-bit and double-precision 64-bit floating-point numbers).
I would like to raise a question concerning fixed-point real-number arithmetic support for Eigen/Eigen3:
Is there any support for fixed-point vectorization in Eigen/Eigen3?
If not, what would be necessary to implement such support?
Can standard decomposition routines and matrix operations be implemented immediately using fixed-point scalars? If so, how?
If not, what are the prerequisites for such support (concepts, operator overloads, "real" functions that must be implemented, etc.) in order to implement such operations/decompositions without impairing Eigen's core?
Are there any plans to implement such functionality in the core of Eigen/Eigen3?
If this kind of thing isn't foreseen in the near future:
Does any such functionality that you are aware of already exist and would be compatible with Eigen/Eigen3, so as to fully implement vectorization/optimizations?
If not, which approach would you recommend if someone were interested in implementing it?
I would like to know how feasible it is to implement a few matrix computations on a 16- or 32-bit microcontroller. I'm not aware of anything of this kind released under a GPL licensing scheme, and would be greatly interested if such a thing were usable. If not, I would like to assess the workload necessary to implement it.
Thanks in advance for any help.

How to multiply matrices containing floating-point numbers in an FPGA?

I would like to ask a question about matrix multiplication in an HDL. For 6 months I have been learning about FPGAs and ASIC design, but I still do not have enough experience programming FPGAs in Verilog/VHDL. I had a quick search and found that Verilog is suitable for me. Anyway, just treat me as a beginner; so far I have only followed simple tutorials using the Xilinx Spartan 3E-XCS1600E MicroBlaze Starter Kit, because I have that board.
The most challenging part for me was creating matrices in Verilog. If I am able to create matrices and fill them with integers first, then I can move on to the next step: matrices with floating-point numbers. Eventually I also want to take the inverse of these matrices, which seems extremely hard to me.
My question is, what should I do in order to multiply matrices? Is there any trick or an easier way to do it, like in the C language? (I know Verilog is an HDL and we cannot think that way.) Also, how can I convert my floating-point numbers to a fixed-point or integer type? Then I think I can solve my problem this way. I looked through other questions but did not understand them well. Thanks for your response and help.
Bonus Question: If I try these operations on MATLAB or Simulink, could it be easier to convert it to HDL using HDL Coder? If it is, could you guide me to do that?
Regards,
Leonardo
You can create matrices with RAM in a hardware design. Actually, everything can be described as RAM :)
Of course, only integers are natively supported in Verilog, but there are methods to represent and compute non-integer numbers.
Define your own format. Suppose we have reg var[7:0]; we can treat var[7:4] as the integer part and var[3:0] as the decimal part, so 8'b0101_1001 equals 5.9 in decimal. You must limit the range of var[3:0] to 0-9!
Use IEEE 754 (http://grouper.ieee.org/groups/754/). This standard is widely used in many areas, but I think it will be a little difficult for you.
Dealing with matrices is nothing special; just follow what you learned in math class.
I'm not good at English. Hope you can understand.
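To make the fixed-point idea concrete in software first (the asker mentioned thinking in C), here is a minimal C sketch of a binary Q4.4 format: 4 integer bits and 4 fractional bits in a byte. Note this uses a binary fraction rather than the decimal-digit scheme described in the answer above; the format choice and helper names are assumptions for illustration only.

```c
#include <stdio.h>
#include <stdint.h>

/* Q4.4 fixed point: value = raw / 16.0.
 * For brevity this sketch assumes non-negative values. */
typedef uint8_t q4_4;

static q4_4 q4_4_from_double(double x) { return (q4_4)(x * 16.0 + 0.5); }
static double q4_4_to_double(q4_4 x)   { return x / 16.0; }

/* The product of two Q4.4 numbers is Q8.8; shift right by 4
 * (discarding low fractional bits) to renormalize to Q4.4. */
static q4_4 q4_4_mul(q4_4 a, q4_4 b)
{
    uint16_t wide = (uint16_t)a * (uint16_t)b;  /* Q8.8 intermediate */
    return (q4_4)(wide >> 4);                   /* truncate back to Q4.4 */
}

int main(void)
{
    q4_4 a = q4_4_from_double(2.5);   /* raw 0x28 */
    q4_4 b = q4_4_from_double(1.25);  /* raw 0x14 */
    printf("%.4f\n", q4_4_to_double(q4_4_mul(a, b)));  /* prints 3.1250 */
    return 0;
}
```

The same widen-multiply-shift pattern is what a Verilog implementation would do with a wider intermediate register.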

How to avoid exponential notation using Scratch?

In my program, I have a large string of numbers that has been compiled together, and I'm switching it back and forth between different base values. But when I switch back to decimal, the computer switches directly to exponential notation. The program I'm using is Scratch, but as long as any algorithms given are readable, I should be able to translate them.
Essentially, I just need a way to go from like 1.0e13 to 10000000000000. Any ideas?
The best I could muster is a custom block; a project containing it, along with a sample output, for your convenience: https://scratch.mit.edu/projects/150067538/
Unfortunately, Scratch still rounds numbers, so your answers won't always be 100% exact, but at least they won't be in scientific (e) notation. If somebody else has an even better solution, I'd love to see it.
Like PullJosh said (hey again, PullJosh!), Scratch rounds numbers in scientific notation, so the result won't be exactly accurate, but there is always a solution to a problem!
My theory is that you can put each digit of the scientific-notation number into a list. This will make the conversion much easier! I will not post a picture of my code but will send you the link to it instead, as the code is massive, mostly because I added some code that detects whether your scientific notation is a valid number and can convert numbers like 1.123e2.
https://scratch.mit.edu/projects/341550388/editor
You can use the code without credit, yay! Just put it in your backpack and you're good to go.
Edit: Also, if you need more help with Scratch and stuff, feel free to follow me (# endermite334, you don't have to) and I will be happy to help you!
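Since the asker said any readable algorithm can be translated, here is the idea in a small C sketch. It leans on the C library's decimal conversion, whereas in Scratch you would build the digit list manually as described above; the example value is the one from the question.

```c
#include <stdio.h>

int main(void)
{
    double x = 1.0e13;
    /* %.0f always prints plain decimal notation, never an exponent.
     * A double only carries roughly 15-17 significant decimal digits,
     * which matches the rounding caveat both answers mention. */
    printf("%.0f\n", x);  /* prints 10000000000000 */
    return 0;
}
```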

What's the most efficient way to run cross-platform, deterministic simulations in Haskell?

My goal is to run a simulation that requires non-integral numbers across different machines that might have varying CPU architectures and OSes. The main priority is that, given the same initial state, each machine should reproduce the simulation exactly. A secondary priority is that I'd like the calculations to have performance and precision as close as realistically possible to double-precision floats.
As far as I can tell, there doesn't seem to be any way to affect the determinism of floating-point calculations from within a Haskell program, similar to the _controlfp and _FPU_SETCW macros in C. So, at the moment I consider my options to be
Use Data.Ratio
Use Data.Fixed
Use Data.Fixed.Binary from the fixed-point package
Write a module to call _controlfp (or the equivalent for each platform) via FFI.
Possibly, something else?
One problem with the fixed-point arithmetic libraries is that they don't have, e.g., trigonometric functions or logarithms defined for them (as they don't implement the Floating type class), so I guess I would need to provide lookup tables for all the functions in the simulation seed data. Or is there some better way?
Both of the fixed point libraries also hide the newtype constructor, so any (de-)serialization would need to be done via toRational/fromRational as far as I can tell, and that feels like it would add unnecessary overhead.
My next step is to benchmark the different fixed-point solutions to see the real world performance, but meanwhile, I'd gladly take any advice you have on this subject.
Clause 11 of the IEEE 754-2008 standard describes what is needed for reproducible floating-point results. Among other things, you need unambiguous expression evaluation rules. Some languages permit floating-point expressions to be evaluated with extra precision or permit some alterations of expressions (such as evaluating a*b+c in a single instruction instead of separate multiply and add instructions). I do not know about Haskell’s semantics. If Haskell does not precisely map expressions to definite floating-point operations, then it cannot support reproducible floating-point results.
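Haskell aside, the expression-evaluation point is easy to see in C: whether a*b + c is rounded once (a fused multiply-add) or twice (separate multiply, then add) changes the result. A small demonstration, assuming a C99 libm whose fma is actually fused:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double e = 0x1p-52;            /* DBL_EPSILON: spacing of doubles near 1.0 */
    double a = 1.0 + e, b = 1.0 - e, c = -1.0;

    /* a*b is exactly 1 - e*e, which rounds to 1.0 in double,
     * so the separately rounded expression yields 0. */
    double separate = a * b + c;

    /* fma rounds only once, preserving the tiny -e*e term. */
    double fused = fma(a, b, c);

    printf("separate: %a\n", separate);  /* 0x0p+0 */
    printf("fused:    %a\n", fused);     /* -0x1p-104 */
    return 0;
}
```

Link with -lm on many systems. Note that, depending on compiler and flags, the compiler itself may contract a*b + c into an fma instruction, which is exactly the expression-evaluation ambiguity described above.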
Also, since you mention trigonometric and logarithmic functions, be aware that these vary from implementation to implementation. I am not aware of any math library that provides correctly rounded implementations of every standard math function. (CRLibm is a project to create one.) So each math library uses its own approximations, and their results vary slightly. Perhaps you might work around this by including a math library with your simulation code, so that it is used instead of each Haskell implementation’s default library.
Routines that convert between binary floating-point and decimal numerals are also a source of differences between implementations. This is less of a problem than it used to be because algorithms for converting correctly are known. However, it is something that might need to be checked in each implementation.
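As a concrete instance of the conversion issue in C: printing a double with 17 significant decimal digits and reading it back is the standard way to round-trip the value exactly, and, per the paragraph above, it is worth verifying on each implementation you target. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 0.1;  /* not exactly representable in binary */
    char buf[64];

    /* 17 significant decimal digits suffice to round-trip any IEEE 754
     * double through text and back, assuming correct conversion routines. */
    snprintf(buf, sizeof buf, "%.17g", x);
    double y = strtod(buf, NULL);

    printf("%s -> %s\n", buf, (x == y) ? "round-trips exactly" : "differs");
    return 0;
}
```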

Prolog for Quantum Logic Gate Simulator

I don't know Prolog, but I'm wondering whether it could be a good option for building a quantum gate simulator.
My main question is what are the pros and cons of using Prolog for such a project? Also other suggestions are highly appreciated.
I've written quantum logic simulators in C++ and Python. My first programming language was Prolog, but that was a long time ago. My recollections don't make me think it's the ideal choice: IIRC, return values were limited to True/False, and it seems to me one wants a little more flexibility in the values. Although it's true that one can use a True/False value to see, say, whether a surface code protected a circuit against errors, you might want to actually know the quantum amplitude for some of the values, and a more general programming language might help you.
The way I wrote my simulator, I used a lot of 2x2 and 4x4 matrix multiplies, and I found that the Eigen C++ library, which cache-optimizes a lot of smaller cases, was unbelievably fast. In these simulations, you often want to run lots of statistics, so that you can see how well the code protects when the input error rate is fairly small. The speed of Eigen made things really nice.
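To give a flavor of what those small matrix multiplies look like, applying a single-qubit gate is just a 2x2 complex matrix-vector product. The answer used Eigen in C++; below is an equivalent dependency-free C99 sketch applying a Hadamard gate to the |0> state. The layout is illustrative, not how any particular simulator stores its state.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hadamard gate H = 1/sqrt(2) * [[1, 1], [1, -1]] */
    double s = 1.0 / sqrt(2.0);
    double complex H[2][2] = { { s,  s },
                               { s, -s } };

    /* State |0> as an amplitude vector (1, 0) */
    double complex state[2] = { 1.0, 0.0 };

    /* out = H * state: each amplitude is a row dotted with the state */
    double complex out[2];
    for (int i = 0; i < 2; i++)
        out[i] = H[i][0] * state[0] + H[i][1] * state[1];

    /* Expect equal amplitudes 1/sqrt(2) for |0> and |1> */
    for (int i = 0; i < 2; i++)
        printf("amp[%d] = %f%+fi\n", i, creal(out[i]), cimag(out[i]));
    return 0;
}
```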
