Does the Linux kernel support trigonometric functions and floating-point arithmetic? - linux-kernel

I have read online that the Linux kernel does not support floating-point arithmetic, but my kernel module has many "double" variables, floating-point arithmetic, and trigonometric functions like "tan", "sin", and "acos". What should I do? Must all code in the kernel use only integers?
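A common way out, which the related questions below also point toward, is to drop double entirely and use fixed-point integers plus a lookup table for the trigonometry. Here is a minimal sketch in plain C; the Q16.16 format, the tiny 16-entry table, and all the names are my own illustration, not a kernel API:

    #include <linux/types.h>

    typedef s32 fix16;              /* Q16.16: value = raw / 65536.0 */
    #define FIX16_ONE (1 << 16)     /* 1.0 in Q16.16 */

    static inline fix16 fix16_mul(fix16 a, fix16 b)
    {
            return (fix16)(((s64)a * (s64)b) >> 16);
    }

    /* sin() sampled 16 times per period, in Q16.16, computed offline as
     * round(sin(2*pi*i/16) * 65536). Real code would use a much larger
     * table and interpolate between entries. */
    static const fix16 sin_table[16] = {
                 0,  25080,  46341,  60547,  65536,  60547,  46341,  25080,
                 0, -25080, -46341, -60547, -65536, -60547, -46341, -25080,
    };

    /* angle is a "binary angle": 16 units == 2*pi radians */
    static inline fix16 fix16_sin(unsigned int angle)
    {
            return sin_table[angle & 15];
    }

    static inline fix16 fix16_cos(unsigned int angle)
    {
            return sin_table[(angle + 4) & 15]; /* cos(x) = sin(x + pi/2) */
    }

On x86 there is also kernel_fpu_begin()/kernel_fpu_end() for short FPU-using stretches, but kernel C files are normally compiled with floating-point code generation disabled, so the integer approach above is the portable one.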

Related

Is there a compilation option to stop gcc from using floating point instructions in integer code?

My integer-only, bare-metal C project just ground to a halt when I got an unexpected exception about a floating point instruction.
Looking at the gcc generated code, the culprit is an fmov d0, x0, used to temporarily store a value in a floating point register, rather than on the stack.
I don't want it to do that!
I could mark a function or two with the noinline attribute, but that's no guarantee that the problem won't occur again elsewhere.
This option does the trick:
-mgeneral-regs-only
https://gcc.gnu.org/onlinedocs/gcc-7.1.0/gcc/AArch64-Options.html
Generate code which uses only the general-purpose registers. This will prevent the compiler from using floating-point and Advanced SIMD registers but will not impose any restrictions on the assembler.
You need to inform the compiler that your target does not have floating-point hardware. The exact flag depends on your target (for ARM it's -msoft-float).
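For what it's worth, the effect is easy to check on a small test file. A sketch (the function is made up; whether GCC actually picks an FP scratch register here depends on the version and register pressure, but the flag rules it out entirely):

    /* Compile with your cross-compiler and inspect the assembly, e.g.:
     *   aarch64-linux-gnu-gcc -O2 -mgeneral-regs-only -S noflop.c
     * Without the flag, GCC is free to emit fmov and use d-registers as
     * scratch space even in pure integer code, which is exactly what the
     * question observed. With the flag, any accidental FP usage becomes
     * a compile-time error instead of a runtime exception. */
    unsigned long long swap_halves(unsigned long long x)
    {
            /* Pure integer code; no FP instructions should appear. */
            return (x << 32) | (x >> 32);
    }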

Fixed-point real-number arithmetic support for Eigen/Eigen3

I'd like to raise a fairly general question about the Eigen/Eigen3 matrix library's support for different scalar "fields/representations" in its operations.
I've looked through the Eigen matrix template library a bit, and so far I've only seen support for floating-point real arithmetic (that is, IEEE 754 single-precision 32-bit and double-precision 64-bit floating-point numbers).
I would like to raise a question concerning fixed-precision real-number arithmetic support in Eigen/Eigen3:
Is there any support for fixed-precision scalar types and vectorization in Eigen/Eigen3?
If not, what would be necessary to implement such support?
Can the standard decomposition routines and matrix operations be used immediately with fixed-precision scalars? If so, how?
If not, what are the prerequisites for such support (concepts, operator overloads, "real" functions that must be implemented, etc.) in order to implement these operations/decompositions without impairing Eigen's core?
Are there any plans to implement such functionality in the core of Eigen/Eigen3?
If nothing of this kind is foreseen in the near future:
Is there already any such functionality you are aware of that would be compatible with Eigen/Eigen3 and would still allow full vectorization/optimization?
If not, which approach would you recommend to someone interested in implementing it?
The underlying goal is to assess the feasibility of implementing a few matrix computations on a 16- or 32-bit microcontroller. I'm not aware of anything of this kind released under a GPL licensing scheme, and I would be greatly interested if such a thing were usable. If not, I would like to assess the workload necessary to implement it myself.
Thanks in advance for any help.
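Not an answer to the Eigen-internals part, but the "real functions" prerequisite can be made concrete: decompositions such as Cholesky need at least a square root on the scalar type. Here is a plain-C sketch of a Q16.16 fixed-point sqrt of the kind such a custom scalar would have to wrap (the names are made up for illustration; this is not Eigen code):

    #include <stdint.h>

    typedef int32_t fix16;          /* Q16.16: value = raw / 65536.0 */

    /* Square root of v interpreted as Q16.16 (v must be >= 0).
     * sqrt(raw / 2^16) * 2^16 == isqrt(raw << 16), so we take the integer
     * square root of a 48-bit value with the classic bit-by-bit method. */
    static fix16 fix16_sqrt(fix16 v)
    {
            uint64_t x = (uint64_t)v << 16;
            uint64_t r = 0;
            uint64_t bit = 1ULL << 46;  /* highest even bit position <= 47 */

            while (bit > x)
                    bit >>= 2;
            while (bit) {
                    if (x >= r + bit) {
                            x -= r + bit;
                            r = (r >> 1) + bit;
                    } else {
                            r >>= 1;
                    }
                    bit >>= 2;
            }
            return (fix16)r;    /* e.g. fix16_sqrt(4 << 16) == 2 << 16 */
    }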

What's the most efficient way to run cross-platform, deterministic simulations in Haskell?

My goal is to run a simulation that requires non-integral numbers across different machines that might have varying CPU architectures and OSes. The main priority is that, given the same initial state, each machine should reproduce the simulation exactly. The secondary priority is that the calculations should have performance and precision as close as realistically possible to double-precision floats.
As far as I can tell, there doesn't seem to be any way to affect the determinism of floating-point calculations from within a Haskell program, similar to the _controlfp and _FPU_SETCW macros in C. So, at the moment, I consider my options to be:
Use Data.Ratio
Use Data.Fixed
Use Data.Fixed.Binary from the fixed-point package
Write a module to call _controlfp (or the equivalent for each platform) via FFI.
Possibly, something else?
One problem with the fixed-point arithmetic libraries is that they don't have e.g. trigonometric functions or logarithms defined for them (as they don't implement the Floating type class), so I guess I would need to provide lookup tables for all the functions in the simulation seed data. Or is there some better way?
Both of the fixed-point libraries also hide the newtype constructor, so any (de-)serialization would need to be done via toRational/fromRational as far as I can tell, and that feels like it would add unnecessary overhead.
My next step is to benchmark the different fixed-point solutions to see the real world performance, but meanwhile, I'd gladly take any advice you have on this subject.
Clause 11 of the IEEE 754-2008 standard describes what is needed for reproducible floating-point results. Among other things, you need unambiguous expression evaluation rules. Some languages permit floating-point expressions to be evaluated with extra precision or permit some alterations of expressions (such as evaluating a*b+c in a single instruction instead of separate multiply and add instructions). I do not know about Haskell’s semantics. If Haskell does not precisely map expressions to definite floating-point operations, then it cannot support reproducible floating-point results.
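To make that hazard concrete in C terms (the language the question's _controlfp reference comes from), here is the kind of expression whose result can legitimately differ between builds depending on whether the compiler contracts it into a fused multiply-add. This is only an illustration, not Haskell-specific:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            double a = 1.0 + 0x1p-27;             /* 1 + 2^-27 */
            double contracted = fma(a, a, -1.0);  /* one rounding: 2^-26 + 2^-54 */
            double separate   = a * a - 1.0;      /* a*a rounds first: 2^-26 */

            /* Prints two different values. Compile with -ffp-contract=off
             * (GCC/Clang) and link with -lm to guarantee that a*b+c stays
             * a separate multiply and add; with contraction enabled, the
             * compiler may turn the second line into an FMA as well. */
            printf("%a\n%a\n", contracted, separate);
            return 0;
    }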
Also, since you mention trigonometric and logarithmic functions, be aware that these vary from implementation to implementation. I am not aware of any math library that provides correctly rounded implementations of every standard math function. (CRLibm is a project to create one.) So each math library uses its own approximations, and their results vary slightly. Perhaps you might work around this by including a math library with your simulation code, so that it is used instead of each Haskell implementation’s default library.
Routines that convert between binary floating-point and decimal numerals are also a source of differences between implementations. This is less of a problem than it used to be because algorithms for converting correctly are known. However, it is something that might need to be checked in each implementation.

include floating point library in vhdl

I have pex_pkg.vhd, and I want to use this library to build a floating-point adder, but Altera MAX+PLUS II gives me the error: can't open "PEX_lib". How do I include this library in MAX+PLUS II?
I'd stay away from MAX+PLUS II if I were you; it's very old, its VHDL support was always spotty, and IIRC using libraries other than work wasn't possible.
Altera's tool is Quartus now - I'm sure that can handle multiple libraries.
You should check out David Bishop's VHDL Floating Point Library. This is by the same author who wrote the VHDL floating point libraries that are built into VHDL 2008, but these are usable with older and more common VHDL tools. They are fully tested and synthesizable. The only potential downside is that because they are implemented via functions, they can only describe the common case of either pipelined or combinational data paths. (If you want, for instance, a smaller multi-cycle digit-serial design, you have to resort to a different library.)
Are you sure you want that? Most floating-point libraries aren't synthesizable.

Linux Kernel - Integer to ASCII

I need to convert an integer to its ASCII representation from within the Linux kernel. How can I do this? I can't find any built-in conversion methods. Are there any already in the kernel, or do I need to add my own?
The kernel does offer snprintf(); would that suit your needs? I'm also curious what you are doing with the ASCII representation of an integer within the kernel.
It's very likely that you just want printk().
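A minimal sketch of both suggestions inside a module (snprintf() and printk() are real kernel APIs; the module scaffolding, names, and buffer size are just for illustration):

    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init itoa_demo_init(void)
    {
            char buf[16];
            int value = -12345;

            /* Integer to ASCII via the kernel's own snprintf(). */
            snprintf(buf, sizeof(buf), "%d", value);

            /* Or let printk() do the formatting directly. */
            printk(KERN_INFO "value is %s (printk itself: %d)\n", buf, value);
            return 0;
    }

    static void __exit itoa_demo_exit(void)
    {
    }

    module_init(itoa_demo_init);
    module_exit(itoa_demo_exit);
    MODULE_LICENSE("GPL");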
