Map range of IEEE 32-bit float [1:2) to some arbitrary [a:b) - random

Back story: uniform PRNG with arbitrary endpoints
I've got a fast uniform pseudo-random number generator that produces uniform float32 numbers in the range [1:2), i.e. u with 1 <= u <= 2-eps. Unfortunately, mapping the endpoints of [1:2) to those of an arbitrary range [a:b) is non-trivial in floating-point math. I'd like to match the endpoints exactly with a simple affine calculation.
Formally stated
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps
1 -> a and nextlower(2) -> nextlower(b)
where nextlower(q) is the next lower FP representable number (e.g. in C++ std::nextafter(float(q),float(q-1)))
What I've tried
The simple mapping f(x,a,b) = (x-1)*(b-a) + a always achieves the f(1) condition but sometimes fails the f(2) condition due to floating point rounding.
I've tried replacing the 1 with a free design parameter to cancel FP errors in the spirit of Kahan summation.
i.e. with
f(x,c0,c1,c2) = (x-c0)*c1 + c2
one mathematical solution is c0=1,c1=(b-a),c2=a (the simple mapping above),
but the extra parameter lets me play around with constants c0,c1,c2 to match the endpoints. I'm not sure I understand the principles behind Kahan summation well enough to apply them to determine the parameters or even be confident a solution exists. It feels like I'm bumping around in the dark where others might've found the light already.
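To make the failure mode concrete, here is a minimal test sketch of my own (not part of the question) that counts how often the naive mapping misses the upper endpoint; the ranges drawn for (a,b) are arbitrary:

#include <cstdio>
#include <cmath>
#include <random>

// Sketch: count (a,b) pairs for which the naive mapping
// f(x) = (x-1)*(b-a) + a fails f(nextlower(2)) == nextlower(b).
int main()
{
    std::mt19937 gen(42);
    std::uniform_real_distribution<float> dist(-1000.0f, 1000.0f);
    const float xmax = std::nextafter(2.0f, 1.0f); // nextlower(2)
    int failures = 0, tested = 0;
    for (int i = 0; i < 100000; ++i) {
        float a = dist(gen), b = dist(gen);
        if (a >= b) continue;
        ++tested;
        float f = (xmax - 1.0f) * (b - a) + a;     // naive upper endpoint
        if (f != std::nextafter(b, a)) ++failures; // missed nextlower(b)?
    }
    std::printf("%d of %d pairs missed the upper endpoint\n", failures, tested);
}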
Aside: I'm fine assuming the following
a < b
both a and b are far from zero, i.e. OK to ignore subnormals
a and b are far enough apart (measured in representable FP values) to mitigate non-uniform quantization and avoid degenerate cases
Update
I'm using a modified form of Chux's answer to avoid the division.
While I'm not 100% certain my refactoring kept all the magic, it does still work in all my test cases.
float lerp12(float x, float a, float b)
{
    const float scale = 1.0000001f;
    // scale = 1/(nextlower(2) - 1);
    const float ascale = a*scale;
    const float bscale = nextlower(b)*scale;
    return (nextlower(2) - x)*ascale + (x - 1.0f)*bscale;
}
Note that only the last line (5 FLOPs) depends on x, so the others can be reused if (a,b) remain the same.
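Since ascale and bscale depend only on (a,b), the reuse noted above can be made explicit by hoisting them into a small functor; a sketch of my own, using std::nextafter in place of the nextlower helper:

#include <cmath>

// Sketch: precompute the (a,b)-dependent constants once, then
// evaluate the 5-FLOP affine map per sample.
struct Lerp12 {
    float xmax, ascale, bscale;
    Lerp12(float a, float b)
    {
        const float scale = 1.0000001f;        // ~1/(nextlower(2) - 1)
        xmax   = std::nextafter(2.0f, 1.0f);   // nextlower(2)
        ascale = a * scale;
        bscale = std::nextafter(b, a) * scale; // nextlower(b) * scale
    }
    float operator()(float x) const
    { return (xmax - x) * ascale + (x - 1.0f) * bscale; }
};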

OP's goal
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps 1 -> a and nextlower(2) -> nextlower(b)
This differs slightly from "map range of IEEE 32bit float [1:2) to some arbitrary [a:b)".
General case
Map x0 to y0, x1 to y1 and various x in-between to y:
m = (y1 - y0)/(x1 - x0);
y = m*(x - x0) + y0;
OP's case
// x0 = 1.0f;
// x1 = nextafterf(2.0f, 1.0f);
// y0 = a;
// y1 = nextafterf(b, a);
#include <math.h> // for nextafterf()
float x = random_number_1_to_almost_2();
float m = (nextafterf(b, a) - a)/(nextafterf(2.0f, 1.0f) - 1.0f);
float y = m*(x - 1.0f) + a;
nextafterf(2.0f, 1.0f) - 1.0f, x - 1.0f and nextafterf(b, a) are exact, incurring no calculation error. (The two subtractions are exact by Sterbenz's lemma, since their operands lie within a factor of two of each other.)
nextafterf(2.0f, 1.0f) - 1.0f is a value a little less than 1.0f.
Recommendation
Other re-formulations are possible with better symmetry and numerical stability at the end-points.
float x = random_number_1_to_almost_2();
float afactor = nextafterf(2.0f, 1.0f) - x; // exact
float bfactor = x - 1.0f; // exact
float xwidth = nextafterf(2.0f, 1.0f) - 1.0f; // exact
// Do not re-order next line of code, perform 2 divisions
float y = (afactor/xwidth)*a + (bfactor/xwidth)*nextafterf(b, a);
Notice afactor/xwidth and bfactor/xwidth are both exactly 0.0 or 1.0 at the end-points, thus meeting "maps 1 -> a and nextlower(2) -> nextlower(b)". Extended precision not needed.
OP's (x-c0)*c1 + c2 has trouble as it divides (x-c0)*c1 by (2.0 - 1.0) or 1.0 (implied), when it should divide by nextafterf(2.0f, 1.0f) - 1.0f.

Simple lerping based on fused multiply-add can reliably hit the endpoints for interpolation factors 0 and 1. For x in [1, 2) the interpolation factor x - 1 does not reach unity, which can be fixed by a slight stretch: multiply x - 1 by (2.0f / nextlower(2.0f)). Obviously the upper endpoint also needs to be adjusted to nextlower(b). For the C code below I have used the definition of nextlower() provided in the question, which may not be what the asker desires, since for floating-point q sufficiently large in magnitude, q == (q - 1).
The asker stated in comments that it is understood that this kind of mapping is not going to result in an exactly uniform distribution of the pseudo-random numbers in the interval [a, b), only approximately so, and that pathological mappings may occur when a and b are extremely close together. I have not mathematically proved that the implementation of map() below guarantees the desired behavior, but it seems to do so for a large number of random test cases.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

float nextlowerf (float q)
{
    return nextafterf (q, q - 1);
}

float map (float a, float b, float x)
{
    float t = (x - 1.0f) * (2.0f / nextlowerf (2.0f));
    return fmaf (t, nextlowerf (b), fmaf (-t, a, a));
}

float uint32_as_float (uint32_t a)
{
    float r;
    memcpy (&r, &a, sizeof(r));
    return r;
}

// George Marsaglia's KISS PRNG, period 2**123. Newsgroup sci.math, 21 Jan 1999
// Bug fix: Greg Rose, "KISS: A Bit Too Simple" http://eprint.iacr.org/2011/007
static uint32_t kiss_z=362436069, kiss_w=521288629;
static uint32_t kiss_jsr=123456789, kiss_jcong=380116160;
#define znew (kiss_z=36969*(kiss_z&65535)+(kiss_z>>16))
#define wnew (kiss_w=18000*(kiss_w&65535)+(kiss_w>>16))
#define MWC  ((znew<<16)+wnew)
#define SHR3 (kiss_jsr^=(kiss_jsr<<13),kiss_jsr^=(kiss_jsr>>17), \
              kiss_jsr^=(kiss_jsr<<5))
#define CONG (kiss_jcong=69069*kiss_jcong+1234567)
#define KISS ((MWC^CONG)+SHR3)

int main (void)
{
    float a, b, x, r;
    float FP32_MIN_NORM = 0x1.000000p-126f;
    float FP32_MAX_NORM = 0x1.fffffep+127f;
    do {
        do {
            a = uint32_as_float (KISS);
        } while ((fabsf (a) < FP32_MIN_NORM) || (fabsf (a) > FP32_MAX_NORM) || isnan (a));
        do {
            b = uint32_as_float (KISS);
        } while ((fabsf (b) < FP32_MIN_NORM) || (fabsf (b) > FP32_MAX_NORM) || isnan (b) || (b < a));
        x = 1.0f;
        r = map (a, b, x);
        if (r != a) {
            printf ("lower bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
        x = nextlowerf (2.0f);
        r = map (a, b, x);
        if (r != nextlowerf (b)) {
            printf ("upper bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
    } while (1);
    return EXIT_SUCCESS;
}

Related

How can I check whether a point is inside the circumcircle of 3 points?

Is there any easy solution? Or does anybody have an example of an implementation?
Thanks, Jonas
Let's call
a, b, c our three points,
C the circumcircle of (a, b, c),
and d another point.
A fast way to determine if d is in C is to compute this determinant:
      | ax-dx, ay-dy, (ax-dx)² + (ay-dy)² |
det = | bx-dx, by-dy, (bx-dx)² + (by-dy)² |
      | cx-dx, cy-dy, (cx-dx)² + (cy-dy)² |
if a, b, c are in counter-clockwise order then:
if det equals 0, then d is on C
if det > 0, then d is inside C
if det < 0, then d is outside C
here is a javascript function that does just that:
function inCircle (ax, ay, bx, by, cx, cy, dx, dy) {
    let ax_ = ax - dx;
    let ay_ = ay - dy;
    let bx_ = bx - dx;
    let by_ = by - dy;
    let cx_ = cx - dx;
    let cy_ = cy - dy;
    return (
        (ax_*ax_ + ay_*ay_) * (bx_*cy_ - cx_*by_) -
        (bx_*bx_ + by_*by_) * (ax_*cy_ - cx_*ay_) +
        (cx_*cx_ + cy_*cy_) * (ax_*by_ - bx_*ay_)
    ) > 0;
}
You might also need to check if your points are in counter clockwise order:
function ccw (ax, ay, bx, by, cx, cy) {
    return (bx - ax)*(cy - ay) - (cx - ax)*(by - ay) > 0;
}
I didn't place the ccw check inside the inCircle function because you shouldn't check it every time.
This process doesn't require any divisions or square root operations.
You can see the code in action there: https://titouant.github.io/testTriangle/
And the source there: https://github.com/TitouanT/testTriangle
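For reference, the same determinant test translates directly to C++ (my translation, not part of the original answer):

// Returns true if d is strictly inside the circumcircle of (a,b,c);
// assumes a, b, c are given in counter-clockwise order.
bool in_circle(double ax, double ay, double bx, double by,
               double cx, double cy, double dx, double dy)
{
    double ax_ = ax - dx, ay_ = ay - dy;
    double bx_ = bx - dx, by_ = by - dy;
    double cx_ = cx - dx, cy_ = cy - dy;
    return (ax_*ax_ + ay_*ay_) * (bx_*cy_ - cx_*by_)
         - (bx_*bx_ + by_*by_) * (ax_*cy_ - cx_*ay_)
         + (cx_*cx_ + cy_*cy_) * (ax_*by_ - bx_*ay_) > 0.0;
}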
(In case you are interested in a non-obvious/"crazy" kind of solution.)
One equivalent property of Delaunay triangulation is as follows: if you build a circumcircle of any triangle in the triangulation, it is guaranteed not to contain any other vertices of the triangulation.
Another equivalent property of Delaunay triangulation is: it maximizes the minimal triangle angle (i.e. maximizes it among all triangulations on the same set of points).
This suggests an algorithm for your test:
1. Consider the triangle ABC built on the original 3 points.
2. If the test point P lies inside the triangle, it is definitely inside the circle.
3. If the test point P belongs to one of the "corner" regions (see the shaded regions in the picture below), it is definitely outside the circle.
4. Otherwise (let's say P lies in region 1) consider two triangulations of quadrilateral ABCP: the original one contains the original triangle (red diagonal), and the alternate one has the "flipped" diagonal (blue diagonal).
5. Determine which of these triangulations is a Delaunay triangulation by testing the "flip" condition, e.g. by comparing α = min(∠1,∠4,∠5,∠8) vs. β = min(∠2,∠3,∠6,∠7).
6. If the original triangulation is a Delaunay triangulation (α > β), P lies outside the circle. If the alternate triangulation is a Delaunay triangulation (α < β), P lies inside the circle.
7. Done.
(Revisiting this answer after a while.)
This solution might not be as "crazy" as it might appear at the first sight. Note that in order to compare angles at steps 5 and 6 there's no need to calculate the actual angle values. It is sufficient to know their cosines (i.e. there's no need to involve trigonometric functions).
A C++ version of the code
#include <cmath>
#include <array>
#include <algorithm>

struct pnt_t
{
    int x, y;

    pnt_t ccw90() const
    { return { -y, x }; }

    double length() const
    { return std::hypot(x, y); }

    pnt_t &operator -=(const pnt_t &rhs)
    {
        x -= rhs.x;
        y -= rhs.y;
        return *this;
    }

    friend pnt_t operator -(const pnt_t &lhs, const pnt_t &rhs)
    { return pnt_t(lhs) -= rhs; }

    friend int operator *(const pnt_t &lhs, const pnt_t &rhs)
    { return lhs.x * rhs.x + lhs.y * rhs.y; }
};

int side(const pnt_t &a, const pnt_t &b, const pnt_t &p)
{
    int cp = (b - a).ccw90() * (p - a);
    return (cp > 0) - (cp < 0);
}

void make_ccw(std::array<pnt_t, 3> &t)
{
    if (side(t[0], t[1], t[2]) < 0)
        std::swap(t[0], t[1]);
}

double ncos(pnt_t a, const pnt_t &o, pnt_t b)
{
    a -= o;
    b -= o;
    return -(a * b) / (a.length() * b.length());
}

bool inside_circle(std::array<pnt_t, 3> t, const pnt_t &p)
{
    make_ccw(t);

    std::array<int, 3> s =
        { side(t[0], t[1], p), side(t[1], t[2], p), side(t[2], t[0], p) };

    unsigned outside = std::count(std::begin(s), std::end(s), -1);
    if (outside != 1)
        return outside == 0;

    while (s[0] >= 0)
    {
        std::rotate(std::begin(t), std::begin(t) + 1, std::end(t));
        std::rotate(std::begin(s), std::begin(s) + 1, std::end(s));
    }

    double
        min_org = std::min({
            ncos(t[0], t[1], t[2]), ncos(t[2], t[0], t[1]),
            ncos(t[1], t[0], p), ncos(p, t[1], t[0]) }),
        min_alt = std::min({
            ncos(t[1], t[2], p), ncos(p, t[2], t[0]),
            ncos(t[0], p, t[2]), ncos(t[2], p, t[1]) });

    return min_org <= min_alt;
}
and a couple of tests with arbitrarily chosen triangles and a large number of random points (test plots omitted here).
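A minimal driver in the same spirit might look like this (my sketch; the triangle and expected results are hand-checked against the circumcircle with center (2,2) and radius sqrt(8)):

#include <iostream>

int main()
{
    std::array<pnt_t, 3> t{{ {0, 0}, {4, 0}, {0, 4} }};
    std::cout << inside_circle(t, {1, 1}) << '\n'; // expect 1: inside
    std::cout << inside_circle(t, {5, 5}) << '\n'; // expect 0: outside
}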
Of course, the whole thing can be easily reformulated without even mentioning Delaunay triangulations. Starting from step 4 this solution is based in the property of the opposite angles of cyclic quadrilateral, which must sum to 180°.
In this Math SE post of mine I included an equation which checks if four points are cocircular by computing a 4×4 determinant. By turning that equation into an inequality you can check for insideness.
If you want to know which direction the inequality has to go, consider the case of a point very far away. In this case, the x²+y² term will dominate all other terms. So you can simply assume that for the point in question, this term is one while the three others are zero. Then pick the sign of your inequality so that this value does not satisfy it, since this point is definitely outside but you want to characterize inside.
If numeric precision is an issue, this page by Prof. Shewchuk describes how to obtain consistent predicates for points expressed using regular double precision floating point numbers.
Given 3 points (x1,y1), (x2,y2), (x3,y3) and a point (x,y) that you want to check for being inside the circle defined by those 3 points, you can do something like this:
/**
 * @param x  x coordinate of the point to check
 * @param y  y coordinate of the point to check
 * @param cx center x
 * @param cy center y
 * @param r  radius of circle
 * @return   whether (x,y) is inside the circle
 */
static boolean g(double x, double y, double cx, double cy, double r) {
    return Math.sqrt((x-cx)*(x-cx) + (y-cy)*(y-cy)) < r;
}
// check if (x,y) is inside the circle defined by (x1,y1),(x2,y2),(x3,y3)
static boolean isInside(double x, double y, double x1, double y1, double x2, double y2, double x3, double y3) {
    double m1 = (x1-x2)/(y2-y1);            // slope of perpendicular bisector of (x1,y1)-(x2,y2)
    double m2 = (x1-x3)/(y3-y1);            // slope of perpendicular bisector of (x1,y1)-(x3,y3)
    double b1 = ((y1+y2)/2) - m1*(x1+x2)/2; // their intercepts
    double b2 = ((y1+y3)/2) - m2*(x1+x3)/2;
    double xx = (b2-b1)/(m1-m2);            // circumcenter = bisector intersection
    double yy = m1*xx + b1;
    return g(x, y, xx, yy, Math.sqrt((xx-x1)*(xx-x1) + (yy-y1)*(yy-y1)));
}

public static void main(String[] args) {
    // is (0,1) inside the circle defined by (0,0),(0,2),(1,1)?
    System.out.println(isInside(0,1, 0,0, 0,2, 1,1));
}
The method for getting an expression for the center of the circle from 3 points comes from finding the intersection of the 2 perpendicular bisectors of 2 line segments; above I chose (x1,y1)-(x2,y2) and (x1,y1)-(x3,y3). Since you know a point on each perpendicular bisector, namely the midpoints ((x1+x2)/2, (y1+y2)/2) and ((x1+x3)/2, (y1+y3)/2), and since you also know the slope of each perpendicular bisector, namely (x1-x2)/(y2-y1) and (x1-x3)/(y3-y1) for the two line segments respectively, you can solve for the point (xx,yy) where they intersect.

Bresenham line algorithm - where does the decision parameter come from?

void line()
{
    int x1 = 10, y1 = 10, x2 = 300, y2 = 500, x, y;
    int dx, dy, // deltas
        e;      // decision parameter

    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1, 0, 0);
    setPixel(x1, y1); // plot first point

    // difference between starting and ending points
    dx = x2 - x1;
    dy = y2 - y1;
    e = 2 * dy - dx;
    x = x1; y = y1;
    for (int k = 0; k < dx - 1; ++k)
    {
        if (e < 0)
        {
            // next pixel: (x+1, y)
            e = e + 2*dy;
        }
        else
        {
            // next pixel: (x+1, y+1)
            e = e + 2*dy - 2*dx;
            ++y;
        }
        ++x;
        setPixel(x, y);
    }
    glFlush();
}
Where does the e = 2*dy - dx come from? Why do we increase it by 2*dy or 2*dy - 2*dx?
Bresenham's algorithm uses only integer arithmetic. The key idea is to minimize the calculations for incremental evaluation of the line equation.
The algorithm is really simple. Let's start with the line equation
f(x) = y = a*x +b
(and assume 0 <= a < 1 for now).
When we go one pixel to the right, we get:
f(x+1) = a * (x+1) + b = f(x) + a
But both a and y will not be integers for the typical line.
So let's just introduce an "error". We always go to the right neighbor. In doing so, we make an error of a by not going up. If our error is above half a pixel (0.5), we go up (and hence decrease the error value by a pixel again)
float e = a;
float y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > 0.5) {
        y++;
        e = e + a - 1;
    } else {
        e = e + a;
    }
}
(Note that we already set the error e to a initially and not to zero, because we always make the decision after the pixel is drawn, and we don't need to check the condition before drawing the very first pixel because that one is always exactly on the line.)
Now, we have come close. But there are two things which prevent us from using integers: the 0.5 and a, which is dy/dx. But: we can scale the error value (and the condition) by an arbitrary factor, without changing anything. Think about it: we've measured the error in pixels so far (because that seems intuitive at first), but this algorithm could use any arbitrary unit for the error value - half pixels, double pixels, pi pixels.
So let's just scale it by 2*dx to get rid of both fractions in the formula above! (In a way, the key trick here is that the "unit" in which we measure the error value is just not constant in the algorithm, but a function of the line.)
int e = 2*dy;
int y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > dx) {
        y++;
        e = e + 2*dy - 2*dx;
    } else {
        e = e + 2*dy;
    }
}
Now, we have what we want: only integers.
(One thing to note here, though: by going from float to int, we automatically "snap in" the line's endpoints to integer coordinates - having integer endpoints is a precondition for (and limitation of) the Bresenham algorithm.)
There is one additional trick: the condition contains a variable. It would be even more efficient if we tested against a constant, and ideally against zero (since branching depending just on the sign or zero flags saves us a compare operation). And we can achieve this by just shifting our error values. In the same way as before, not only the scale of the error value can be chosen arbitrarily, but also the origin.
Since we test for e > dx currently, shifting the error by -dx will allow us to test against 0 (and 0 now means what dx meant before, namely 0.5 pixels). This shift only affects the initial value of e and the condition; all the increments stay the same as before:
int e = 2*dy - dx;
int y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > 0) {
        y++;
        e = e + 2*dy - 2*dx;
    } else {
        e = e + 2*dy;
    }
}
Voila, the 2*dy-dx term has suddenly emerged... ;)
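For completeness, here is that final version as a small self-contained program (my sketch; it prints pixel coordinates instead of calling SetPixel and still assumes 0 <= dy <= dx):

#include <cstdio>

// Bresenham for the first octant (0 <= dy <= dx), printing each pixel.
void bresenham(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;
    int e = 2*dy - dx;   // shifted, scaled error term
    int y = y1;
    for (int x = x1; x <= x2; ++x) {
        std::printf("(%d,%d)\n", x, y);
        if (e > 0) { ++y; e += 2*dy - 2*dx; }
        else       {      e += 2*dy;        }
    }
}

int main() { bresenham(0, 0, 10, 4); } // ends exactly at (10,4)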
The term 2dy - dx comes from substituting xk = yk = 0 into the general decision parameter pk = 2dy·xk - 2dx·yk + 2dy + dx·(2b - 1), because for the first parameter we assume the starting point of the line lies at the origin, i.e. (0,0). Since the line then passes through the origin, the intercept b is 0, leaving p0 = 2dy - dx.
Try it by yourself.

Fast sigmoid algorithm

The sigmoid function is defined as f(x) = 1/(1 + exp(-x)).
I found that using the C built-in function exp() to calculate the value of f(x) is slow. Is there any faster algorithm to calculate the value of f(x)?
You don't have to use the actual, exact sigmoid function in a neural network algorithm; you can replace it with an approximated version that has similar properties but is faster to compute.
For example, you can use the "fast sigmoid" function
f(x) = x / (1 + abs(x))
Using the first terms of the series expansion for exp(x) won't help too much if the arguments to f(x) are not near zero, and you have the same problem with a series expansion of the sigmoid function if the arguments are "large".
An alternative is to use table lookup. That is, you precalculate the values of the sigmoid function for a given number of data points, and then do fast (linear) interpolation between them if you want.
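As a sketch of that lookup idea (my own; the table size and clamping range are arbitrary choices to tune against your accuracy needs):

#include <cmath>

// Sketch: precomputed sigmoid over [-8, 8] with linear interpolation.
constexpr int   TABLE_SIZE = 2048;
constexpr float X_MIN = -8.0f, X_MAX = 8.0f;
static float sigmoid_table[TABLE_SIZE + 1];

void init_sigmoid_table()
{
    for (int i = 0; i <= TABLE_SIZE; ++i) {
        float x = X_MIN + (X_MAX - X_MIN) * i / TABLE_SIZE;
        sigmoid_table[i] = 1.0f / (1.0f + std::exp(-x));
    }
}

float sigmoid_lut(float x)
{
    if (x <= X_MIN) return sigmoid_table[0];
    if (x >= X_MAX) return sigmoid_table[TABLE_SIZE];
    float t = (x - X_MIN) * (TABLE_SIZE / (X_MAX - X_MIN));
    int   i = (int)t;
    float f = t - i; // fractional part for linear interpolation
    return sigmoid_table[i] + f * (sigmoid_table[i + 1] - sigmoid_table[i]);
}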
It's best to measure on your hardware first. A quick benchmark script shows that on my machine 1/(1+|x|) is the fastest, and tanh(x) is a close second. The error function erf is pretty fast too.
% gcc -Wall -O2 -lm -o sigmoid-bench{,.c} -std=c99 && ./sigmoid-bench
atan(pi*x/2)*2/pi 24.1 ns
atan(x) 23.0 ns
1/(1+exp(-x)) 20.4 ns
1/sqrt(1+x^2) 13.4 ns
erf(sqrt(pi)*x/2) 6.7 ns
tanh(x) 5.5 ns
x/(1+|x|) 5.5 ns
I expect that the results may vary depending on architecture and the compiler used, but erf(x) (since C99), tanh(x) and x/(1.0+fabs(x)) are likely to be the fast performers.
People here are mostly concerned about how fast one function is relative to another and create micro-benchmarks to see whether f1(x) runs 0.0001 ms faster than f2(x). The big problem is that this is mostly irrelevant, because what matters is how fast your network learns with your activation function while trying to minimize your cost function.
As of current theory, the rectifier function and softplus,
    compared to sigmoid function or similar activation functions, allow
    for faster and effective training of deep neural architectures on
    large and complex datasets.
So I suggest throwing away micro-optimization and taking a look at which function allows faster learning (also looking at various other cost functions).
To make the NN more flexible, some alpha rate is usually used to change the angle of the graph around 0.
The sigmoid function looks like:
f(x) = 1 / ( 1+exp(-x*alpha))
A nearly equivalent (but faster) function is:
f(x) = 0.5 * (x * alpha / (1 + abs(x*alpha))) + 0.5
You can check the graphs here
When I use the abs function, the network becomes 100+ times faster.
This answer probably isn't relevant for most cases, but I just wanted to throw out there that for CUDA computing I've found x/sqrt(1+x^2) to be the fastest function by far.
For example, done with single precision float intrinsics:
__device__ void fooCudaKernel(/* some arguments */) {
    float foo, sigmoid;
    // some code defining foo
    sigmoid = __fmul_rz(rsqrtf(__fmaf_rz(foo, foo, 1)), foo);
}
Also you might use a rough version of the sigmoid (it differs from the original by no more than 0.2%):
inline float RoughSigmoid(float value)
{
    float x = ::abs(value);
    float x2 = x*x;
    float e = 1.0f + x + x2*0.555f + x2*x2*0.143f;
    return 1.0f / (1.0f + (value > 0 ? 1.0f / e : e));
}

void RoughSigmoid(const float * src, size_t size, const float * slope, float * dst)
{
    float s = slope[0];
    for (size_t i = 0; i < size; ++i)
        dst[i] = RoughSigmoid(src[i] * s);
}
Optimization of the RoughSigmoid function using SSE:
#include <xmmintrin.h>

void RoughSigmoid(const float * src, size_t size, const float * slope, float * dst)
{
    size_t alignedSize = size/4*4;
    __m128 _slope = _mm_set1_ps(*slope);
    __m128 _0 = _mm_set1_ps(-0.0f);
    __m128 _1 = _mm_set1_ps(1.0f);
    __m128 _0555 = _mm_set1_ps(0.555f);
    __m128 _0143 = _mm_set1_ps(0.143f);
    size_t i = 0;
    for (; i < alignedSize; i += 4)
    {
        __m128 _src = _mm_loadu_ps(src + i);
        __m128 x = _mm_andnot_ps(_0, _mm_mul_ps(_src, _slope));
        __m128 x2 = _mm_mul_ps(x, x);
        __m128 x4 = _mm_mul_ps(x2, x2);
        __m128 series = _mm_add_ps(_mm_add_ps(_1, x), _mm_add_ps(_mm_mul_ps(x2, _0555), _mm_mul_ps(x4, _0143)));
        __m128 mask = _mm_cmpgt_ps(_src, _0);
        __m128 exp = _mm_or_ps(_mm_and_ps(_mm_rcp_ps(series), mask), _mm_andnot_ps(mask, series));
        __m128 sigmoid = _mm_rcp_ps(_mm_add_ps(_1, exp));
        _mm_storeu_ps(dst + i, sigmoid);
    }
    for (; i < size; ++i)
        dst[i] = RoughSigmoid(src[i] * slope[0]);
}
Optimization of the RoughSigmoid function using AVX:
#include <immintrin.h>

void RoughSigmoid(const float * src, size_t size, const float * slope, float * dst)
{
    size_t alignedSize = size/8*8;
    __m256 _slope = _mm256_set1_ps(*slope);
    __m256 _0 = _mm256_set1_ps(-0.0f);
    __m256 _1 = _mm256_set1_ps(1.0f);
    __m256 _0555 = _mm256_set1_ps(0.555f);
    __m256 _0143 = _mm256_set1_ps(0.143f);
    size_t i = 0;
    for (; i < alignedSize; i += 8)
    {
        __m256 _src = _mm256_loadu_ps(src + i);
        __m256 x = _mm256_andnot_ps(_0, _mm256_mul_ps(_src, _slope));
        __m256 x2 = _mm256_mul_ps(x, x);
        __m256 x4 = _mm256_mul_ps(x2, x2);
        __m256 series = _mm256_add_ps(_mm256_add_ps(_1, x), _mm256_add_ps(_mm256_mul_ps(x2, _0555), _mm256_mul_ps(x4, _0143)));
        __m256 mask = _mm256_cmp_ps(_src, _0, _CMP_GT_OS);
        __m256 exp = _mm256_or_ps(_mm256_and_ps(_mm256_rcp_ps(series), mask), _mm256_andnot_ps(mask, series));
        __m256 sigmoid = _mm256_rcp_ps(_mm256_add_ps(_1, exp));
        _mm256_storeu_ps(dst + i, sigmoid);
    }
    for (; i < size; ++i)
        dst[i] = RoughSigmoid(src[i] * slope[0]);
}
Code is based on a C# version previously posted by @jenkas, with minor modifications.
The following C++ code provides excellent precision that outperforms low-precision approximations by virtue of the fact that it allows compilers to auto-vectorize compiled code onto SIMD instructions when used in simple loops.
GCC will compile code to SIMD (Arm Neon, or Intel AVX) instructions that perform four sigmoid (or tanh) computations in parallel. Auto-vectorization yields performance that is comparable to even very low-precision optimizations while maintaining essentially full precision. Microsoft and Intel compilers also perform auto-vectorization.
A brief discussion of auto-vectorization, compiler optimizations, and practices that produce optimal performance is provided near the end of this post.
The following functions provide a maximum error of +/- 6.55651e-07 over the full range as compared to 1/(1+exp(-v)).
// Returns float approximation of 1/(1+exp(-v))
inline float fast_sigmoid(float v)
{
    constexpr float c1 = 0.03138777F;
    constexpr float c2 = 0.276281267F;
    constexpr float c_log2f = 1.442695022F;

    v *= c_log2f*0.5;
    int intPart = (int)v;
    float x = (v - intPart);
    float xx = x * x;
    float v1 = c_log2f + c2 * xx;
    float v2 = x + xx * c1 * x;
    float v3 = (v2 + v1);
    *((int*)&v3) += intPart << 24;
    float v4 = v2 - v1;
    float res = v3 / (v3 - v4); // for tanh change to (v3 + v4) / (v3 - v4)
    return res;
}

// Returns float approximation of tanh(v)
inline float fast_tanh(float v)
{
    const float c1 = 0.03138777F;
    const float c2 = 0.276281267F;
    const float c_log2f = 1.442695022F;

    v *= c_log2f;
    int intPart = (int)v;
    float x = (v - intPart);
    float xx = x * x;
    float v1 = c_log2f + c2 * xx;
    float v2 = x + xx * c1 * x;
    float v3 = (v2 + v1);
    *((int*)&v3) += intPart << 24;
    float v4 = v2 - v1;
    float res = (v3 + v4) / (v3 - v4);
    return res;
}
Benchmark results on Raspberry PI 4 (AARCH64):
-- Sigmoid benchmark --------
fast_sigmoid(x) 5.63 ns
fast_tanh(x) 5.89 ns
Vectorized fast_sigmoid(out,in,count) using Neon intrinsics
5.79 ns
atan(pi/2 * x)/(pi/2) 27.29 ns
atan(x) 24.13 ns
1/(1+exp(-x)) 14.92 ns
1/sqrt(1+x^2) 4.26 ns
erf(sqrt(pi)/2 * x) 20.62 ns
tanh(x) 20.64 ns
x/(1+|x|) 8.93 ns
x (measures loop overhead) 1.62 ns
x*x (for reference) 1.62 ns
1/(1+x) (for reference) 2.64 ns
Raspberry Pi 4, aarch64 Arm Cortex 72#1.8GHz. GCC 10.2.1
In the benchmark, GCC vectorizes the fast_sigmoid call into ARM Neon instructions allowing four values to be calculated in parallel.
For optimal performance, you should ensure that input vectors are aligned on 64-byte boundaries. AVX and Neon instructions both allow for unaligned access, but do so with a mild performance penalty.
In addition, you should inform the compiler that input vectors do not alias, using the restrict qualifier. restrict is defined in the C99 standard but is not part of standard C++; fortunately, all major C++ compilers (Intel, Microsoft, GCC, Clang) implement it as an extension, typically spelled __restrict. Without alias guarantees, compilers will generate a small code preamble that tests for aliasing at runtime, and execute a slow code path if aliasing is detected.
To enable vectorization, GCC requires either the -ftree-vectorize option, or -O3 (which includes -ftree-vectorize).
Loops are vectorized as long as there are no operations that prevent vectorization. Including a call to a math intrinsic (exp, sin, cos &c) will prevent loop vectorization, as will if statements within the loop. However, loop bodies can be fairly substantial. For example, in my LSTM implementation, one of the loops contains operations on four separate vector components (more operations in the loop provides more opportunity for interleaved instruction scheduling)
The __restrict qualifier in the following sample informs the compiler that no part of the input and output vectors overlap, allowing the compiler to omit the aliasing check:

void vec_sigmoid(
    int length,
    float * __restrict output,
    float * __restrict input,
    float * __restrict bias)
{
    for (int i = 0; i < length; ++i)
    {
        output[i] = fast_sigmoid(input[i]) + bias[i];
    }
}
Code is a C++ port of @jenkas' C# code posted earlier, adjusted to return 1/(1+exp(-x)) instead of 1/(1+exp(-2*x)), which is what the original code calculates.
You can use a simple but effective method using two formulas:
if x < 0 then f(x) = 0.5/(1+(x^2))
if x >= 0 then f(x) = (-0.5/(1+(x^2)))+1
This will look like this:
Two graphs for a sigmoid {Blue: (0.5/(1+(x^2))), Yellow: (-0.5/(1+(x^2)))+1}
Try this .NET Core 5+ implementation
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static unsafe float FastSigmoid(float v)
{
    const float c1 = 0.03138777F;
    const float c2 = 0.276281267F;
    const float c_log2f = 1.442695022F;

    v *= c_log2f;
    int intPart = (int)v;
    float x = (v - intPart);
    float xx = x * x;
    float v1 = c_log2f + c2 * xx;
    float v2 = x + xx * c1 * x;
    float v3 = (v2 + v1);
    *((int*)&v3) += intPart << 24;
    float v4 = v2 - v1;
    float res = v3 / (v3 - v4); // for tanh change to (v3 + v4) / (v3 - v4)
    return res;
}
Using Eureqa to search for approximations to the sigmoid, I found that 1/(1 + 0.3678749025^x) approximates it. It's pretty close, and just gets rid of one operation (the negation of x).
Some of the other functions shown here are interesting, but is the power operation really that slow? I tested it and it was actually faster than addition, but that could just be a fluke. If so, it should be just as fast as or faster than all the others.
EDIT: 0.5 + 0.5*tanh(0.5*x) and, less accurate, 0.5 + 0.5*tanh(n) also work. And you could just get rid of the constants if you don't care about keeping it in the range [0,1] like the sigmoid. But that assumes tanh is faster.
The tanh function may be optimized in some languages, making it faster than a custom-defined x/(1+abs(x)); such is the case in Julia.
You can also use this:
y = x / (2 * ((x<0.0)*-x + (x>=0.0)*x) + 2) + 0.5;
y' = y*(1-y);
The expression (x<0.0)*-x + (x>=0.0)*x is just abs(x), so this is 0.5*x/(1+abs(x)) + 0.5; it acts like a sigmoid now because its approximate derivative y' = y*(1-y) is, let's say, more rounded than the exact derivative 1/(2*(1+abs(x))^2), which acts more like the fast sigmoid's.
I don't think you can do better than the built-in exp(), but if you want another approach, you can use a series expansion. WolframAlpha can compute it for you.

Computing the null space of a matrix as fast as possible

I need to compute the nullspace of several thousand small matrices (8x9, not 4x3 as I wrote previously) in parallel (CUDA). All references point to SVD but the algorithm in numerical recipes seems very expensive, and gives me lots of things other than the null space that I don't really need. Is Gaussian elimination really not an option? Are there any other commonly used methods?
To answer your question directly... yes! QR decomposition!
Let A be an m-by-n matrix (m >= n) with rank n. QR decomposition finds an orthogonal m-by-m matrix Q and an upper triangular m-by-n matrix R such that A = QR. If we define Q = [Q1 Q2], where Q1 is m-by-n and Q2 is m-by-(m-n), then the columns of Q2 form the null space of A^T.
QR decomposition is computed either by Gram-Schmidt, Givens rotations, or Householder reflections. They have different stability properties and operation counts.
You are right: SVD is expensive! I can't speak for what state-of-the-art stuff uses, but when I hear "compute null space" (EDIT: in a way that is simple for me to understand), I think QR.
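As a CPU-side illustration of the QR route (a sketch using Eigen, which is my choice of library and not part of this answer): for the asker's 8x9 matrices, QR-factor A^T and, assuming rank 8, take the last column of Q; as the next answer points out, you should still inspect R to confirm the rank.

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Sketch: null space of an 8x9 matrix A via QR of A^T (assumes rank 8).
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(8, 9);
    Eigen::HouseholderQR<Eigen::MatrixXd> qr(A.transpose()); // A^T = Q R
    Eigen::MatrixXd Q = qr.householderQ();                   // 9x9 orthogonal
    Eigen::VectorXd x = Q.col(8);  // spans null(A) when rank(A) = 8
    std::cout << "||A x|| = " << (A * x).norm() << std::endl; // ~0
}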
I don't think the above proposed method always gives the whole null space. To recap: "A = QR, where Q = [Q1 Q2], and Q1 is m-by-n and Q2 is m-by-(m-n). Then the columns of Q2 form the null space of A^T."
Indeed, this may only give a subspace of the null space. A simple counter-example is A = 0, in which case the null space of A^T is the whole of R^m.
Therefore, it is necessary to check R too. Based on my experience with Matlab, if a row of R is straight 0, then the corresponding column in Q should also be a basis of the null space of A^T. Clearly this observation is heuristic and hinges on the particular algorithm used for QR decomposition.
Gaussian elimination is plenty fast for 4x3 matrices. IIRC I've done about 5 million per second with Java without parallelism. With such a small problem, your best bet is to code the routine (row reduce etc.) yourself; otherwise you'll waste most of the time putting the data into the right format for the external routine.
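A sketch of that do-it-yourself route for the question's 8x9 case (mine; Gauss-Jordan with partial pivoting, and it simply assumes rank 8 rather than handling rank deficiency):

#include <cmath>
#include <utility>

// Sketch: null vector of an 8x9 rank-8 matrix by Gauss-Jordan elimination.
// After reduction, columns 0..7 hold the identity and column 8 holds -v,
// so (v, 1) spans the null space.
void null_vector_8x9(double A[8][9], double x[9])
{
    for (int col = 0; col < 8; ++col) {
        int piv = col; // partial pivoting
        for (int r = col + 1; r < 8; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
        for (int c = 0; c < 9; ++c)
            std::swap(A[col][c], A[piv][c]);
        double d = A[col][col]; // assumed nonzero (rank 8)
        for (int c = 0; c < 9; ++c) A[col][c] /= d;
        for (int r = 0; r < 8; ++r) {
            if (r == col) continue;
            double f = A[r][col];
            for (int c = 0; c < 9; ++c) A[r][c] -= f * A[col][c];
        }
    }
    for (int r = 0; r < 8; ++r) x[r] = -A[r][8]; // free variable x[8] = 1
    x[8] = 1.0;
}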
In the answers above, it has already been pointed out how the null space of a matrix can be calculated using the QR or the SVD approach. SVD should be preferred when accuracy is required; see also Null-space of a rectangular dense matrix.
As of February 2015, CUDA 7 (now in release candidate) makes SVD available through its new cuSOLVER library. Below I report an example of how to use cuSOLVER's SVD to calculate the null space of a matrix.
Be aware that the problem you are focusing on concerns the calculation of several small matrices, so you should adapt the example I'm providing below by using streams to make it fit your case. To associate a stream to each task you can use
cudaStreamCreate()
and
cusolverDnSetStream()
kernel.cu
#include "cuda_runtime.h"
#include "device_launch_paraMeters.h"
#include<iostream>
#include<iomanip>
#include<stdlib.h>
#include<stdio.h>
#include<assert.h>
#include<math.h>
#include <cusolverDn.h>
#include <cuda_runtime_api.h>
#include "Utilities.cuh"
/********/
/* MAIN */
/********/
int main(){
// --- gesvd only supports Nrows >= Ncols
// --- column major memory ordering
const int Nrows = 7;
const int Ncols = 5;
// --- cuSOLVE input/output parameters/arrays
int work_size = 0;
int *devInfo; gpuErrchk(cudaMalloc(&devInfo, sizeof(int)));
// --- CUDA solver initialization
cusolverDnHandle_t solver_handle;
cusolverDnCreate(&solver_handle);
// --- Singular values threshold
double threshold = 1e-12;
// --- Setting the host, Nrows x Ncols matrix
double *h_A = (double *)malloc(Nrows * Ncols * sizeof(double));
for(int j = 0; j < Nrows; j++)
for(int i = 0; i < Ncols; i++)
h_A[j + i*Nrows] = (i + j*j) * sqrt((double)(i + j));
// --- Setting the device matrix and moving the host matrix to the device
double *d_A; gpuErrchk(cudaMalloc(&d_A, Nrows * Ncols * sizeof(double)));
gpuErrchk(cudaMemcpy(d_A, h_A, Nrows * Ncols * sizeof(double), cudaMemcpyHostToDevice));
// --- host side SVD results space
double *h_U = (double *)malloc(Nrows * Nrows * sizeof(double));
double *h_V = (double *)malloc(Ncols * Ncols * sizeof(double));
double *h_S = (double *)malloc(min(Nrows, Ncols) * sizeof(double));
// --- device side SVD workspace and matrices
double *d_U; gpuErrchk(cudaMalloc(&d_U, Nrows * Nrows * sizeof(double)));
double *d_V; gpuErrchk(cudaMalloc(&d_V, Ncols * Ncols * sizeof(double)));
double *d_S; gpuErrchk(cudaMalloc(&d_S, min(Nrows, Ncols) * sizeof(double)));
// --- CUDA SVD initialization
cusolveSafeCall(cusolverDnDgesvd_bufferSize(solver_handle, Nrows, Ncols, &work_size));
double *work; gpuErrchk(cudaMalloc(&work, work_size * sizeof(double)));
// --- CUDA SVD execution
cusolveSafeCall(cusolverDnDgesvd(solver_handle, 'A', 'A', Nrows, Ncols, d_A, Nrows, d_S, d_U, Nrows, d_V, Ncols, work, work_size, NULL, devInfo));
int devInfo_h = 0; gpuErrchk(cudaMemcpy(&devInfo_h, devInfo, sizeof(int), cudaMemcpyDeviceToHost));
if (devInfo_h != 0) std::cout << "Unsuccessful SVD execution\n\n";
// --- Moving the results from device to host
gpuErrchk(cudaMemcpy(h_S, d_S, min(Nrows, Ncols) * sizeof(double), cudaMemcpyDeviceToHost));
gpuErrchk(cudaMemcpy(h_U, d_U, Nrows * Nrows * sizeof(double), cudaMemcpyDeviceToHost));
gpuErrchk(cudaMemcpy(h_V, d_V, Ncols * Ncols * sizeof(double), cudaMemcpyDeviceToHost));
for(int i = 0; i < min(Nrows, Ncols); i++)
std::cout << "d_S["<<i<<"] = " << std::setprecision(15) << h_S[i] << std::endl;
printf("\n\n");
int count = 0;
bool flag = 0;
while (!flag) {
if (h_S[count] < threshold) flag = 1;
if (count == min(Nrows, Ncols)) flag = 1;
count++;
}
count--;
printf("The null space of A has dimension %i\n\n", min(Ncols, Nrows) - count);
for(int j = count; j < Ncols; j++) {
printf("Basis vector nr. %i\n", j - count);
for(int i = 0; i < Ncols; i++)
std::cout << "d_V["<<i<<"] = " << std::setprecision(15) << h_U[j*Ncols + i] << std::endl;
printf("\n");
}
cusolverDnDestroy(solver_handle);
return 0;
}
Utilities.cuh
#ifndef UTILITIES_CUH
#define UTILITIES_CUH
extern "C" int iDivUp(int, int);
extern "C" void gpuErrchk(cudaError_t);
extern "C" void cusolveSafeCall(cusolverStatus_t);
#endif
Utilities.cu
#include <stdio.h>
#include <assert.h>

#include "cuda_runtime.h"
#include <cuda.h>

#include <cusolverDn.h>

/*******************/
/* iDivUp FUNCTION */
/*******************/
extern "C" int iDivUp(int a, int b){ return ((a % b) != 0) ? (a / b + 1) : (a / b); }

/********************/
/* CUDA ERROR CHECK */
/********************/
// --- Credit to http://stackoverflow.com/questions/14038589/what-is-the-canonical-way-to-check-for-errors-using-the-cuda-runtime-api
void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) { exit(code); }
    }
}

extern "C" void gpuErrchk(cudaError_t ans) { gpuAssert((ans), __FILE__, __LINE__); }

/**************************/
/* CUSOLVE ERROR CHECKING */
/**************************/
static const char *_cudaGetErrorEnum(cusolverStatus_t error)
{
    switch (error)
    {
        case CUSOLVER_STATUS_SUCCESS:
            return "CUSOLVER_SUCCESS";
        case CUSOLVER_STATUS_NOT_INITIALIZED:
            return "CUSOLVER_STATUS_NOT_INITIALIZED";
        case CUSOLVER_STATUS_ALLOC_FAILED:
            return "CUSOLVER_STATUS_ALLOC_FAILED";
        case CUSOLVER_STATUS_INVALID_VALUE:
            return "CUSOLVER_STATUS_INVALID_VALUE";
        case CUSOLVER_STATUS_ARCH_MISMATCH:
            return "CUSOLVER_STATUS_ARCH_MISMATCH";
        case CUSOLVER_STATUS_EXECUTION_FAILED:
            return "CUSOLVER_STATUS_EXECUTION_FAILED";
        case CUSOLVER_STATUS_INTERNAL_ERROR:
            return "CUSOLVER_STATUS_INTERNAL_ERROR";
        case CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED:
            return "CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED";
    }
    return "<unknown>";
}

inline void __cusolveSafeCall(cusolverStatus_t err, const char *file, const int line)
{
    if (CUSOLVER_STATUS_SUCCESS != err) {
        // --- Report the caller's file/line passed in by the wrapper
        fprintf(stderr, "CUSOLVE error in file '%s', line %d\nerror %d: %s\nterminating!\n",
                file, line, err, _cudaGetErrorEnum(err));
        cudaDeviceReset(); assert(0);
    }
}

extern "C" void cusolveSafeCall(cusolverStatus_t err) { __cusolveSafeCall(err, __FILE__, __LINE__); }
I think the most important thing for CUDA is to find an algorithm that doesn't depend on conditional branching (which is quite slow on graphics hardware). Simple if statements that can be optimized into conditional assignment are much better (or you can use the ?: operator).
If necessary, you should be able to do some form of pivoting using conditional assignment. It might actually be harder to determine how to store your result: if your matrix is rank-deficient, what do you want your CUDA program to do about it?
If you assume your 4x3 matrix is not actually rank-deficient, you can find your (single) null-space vector without any conditionals at all: the matrix is small enough that you can use Cramer's rule efficiently.
Actually, since you don't care about the scale of your null vector, you don't have to divide by the determinant - you can just take the determinants of the minors:
    x1 x2 x3
M = y1 y2 y3
    z1 z2 z3
    w1 w2 w3

     |y1 y2 y3|        |x1 x2 x3|       |x1 x2 x3|        |x1 x2 x3|
x0 = |z1 z2 z3|  y0 = -|z1 z2 z3|  z0 = |y1 y2 y3|  w0 = -|y1 y2 y3|
     |w1 w2 w3|        |w1 w2 w3|       |w1 w2 w3|        |z1 z2 z3|
Note that these 3x3 determinants are just triple products; you can save computation by reusing the cross products.
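In code, that recipe might look like the following sketch (mine, not the answerer's; note how the z×w cross product is shared between x0 and y0):

#include <array>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

static double dot(const Vec3 &a, const Vec3 &b)
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Null vector (x0,y0,z0,w0) of the 4x3 matrix with rows x, y, z, w,
// built from the four signed 3x3 minors (triple products) above.
std::array<double, 4> null_vector(const Vec3 &x, const Vec3 &y,
                                  const Vec3 &z, const Vec3 &w)
{
    Vec3 zw = cross(z, w); // reused by x0 and y0
    return { dot(y, zw), -dot(x, zw), dot(x, cross(y, w)), -dot(x, cross(y, z)) };
}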
"seems very expensive" - what data do you have that supports this?
Maybe Block Lanczos is the answer you seek.
Or maybe this.
Both JAMA and Apache Commons Math have SVD implementations in Java. Why not take those and try them out? Get some real data for your case instead of impressions. It won't cost you much, since the code is already written and tested.
I wondered if the matrices are related rather than just being random, so that the null spaces you are seeking can be considered to be like 1-dimensional tangents to a curve in N-space (N = 9). If so, you may be able to speed things up by using Newton's method to solve successive instances of the system of quadratic equations Ax = 0, |x|^2 = 1, starting from a previous null space vector. Newton's method uses first derivatives to converge to a solution, and so would use Gaussian elimination to solve 9x9 systems. Using this technique would require that you be able to make small steps from matrix to matrix, by say varying a parameter.
So the idea is that you initialize using SVD on the first matrix, but thereafter you step from matrix to matrix, using the null space vector of one as the starting point for the iteration on the next one. You need one or two iterations to get convergence. If you don't get convergence you use SVD to restart. If this situation is what you have, it is much faster than starting fresh on each matrix.
I used this a long time ago to map contours in the solutions of sets of 50 x 50 quadratic equations associated with the behavior of electric power systems.

Least Squares solution to simultaneous equations

I am trying to fit a transformation from one set of coordinates to another.
x' = R + Px + Qy
y' = S - Qx + Py
Where P, Q, R, S are constants, P = scale*cos(rotation) and Q = scale*sin(rotation).
There is a well known 'by hand' formula for fitting P,Q,R,S to a set of corresponding points.
But I need to have an error estimate on the fit - so I need a least squares solution.
I've read 'Numerical Recipes' but I'm having trouble working out how to do this for data sets with both x and y in them.
Can anyone point me to an example/tutorial/code sample of how to do this?
Not too bothered about the language.
But 'just use a built-in feature of Matlab/LAPACK/numpy/R' is probably not helpful!
edit:
I have a large set of old(x,y) to new(x,y) pairs to fit to. The problem is overdetermined (more data points than unknowns), so simple matrix inversion isn't enough - and as I said, I really need the error on the fit.
The following code should do the trick. I used the following formula for the residuals:
residual[i] = (computed_x[i] - actual_x[i])^2
            + (computed_y[i] - actual_y[i])^2
And then derived the least-squares formulae based on the general procedure described at Wolfram's MathWorld.
I tested out this algorithm in Excel and it performs as expected. I used a collection of ten random points which were then rotated, translated and scaled by a randomly generated transformation matrix.
With no random noise applied to the output data, this program produces four parameters (P, Q, R, and S) which are identical to the input parameters, and an rSquared value of zero.
As more and more random noise is applied to the output points, the constants start to drift away from the correct values, and the rSquared value increases accordingly.
Here is the code:
// test data
const int N = 1000;
float oldPoints_x[N] = { ... };
float oldPoints_y[N] = { ... };
float newPoints_x[N] = { ... };
float newPoints_y[N] = { ... };

// compute various sums and sums of products
// across the entire set of test data
float Ex  = Sum(oldPoints_x, N);
float Ey  = Sum(oldPoints_y, N);
float Exn = Sum(newPoints_x, N);
float Eyn = Sum(newPoints_y, N);
float Ex2 = SumProduct(oldPoints_x, oldPoints_x, N);
float Ey2 = SumProduct(oldPoints_y, oldPoints_y, N);
float Exxn = SumProduct(oldPoints_x, newPoints_x, N);
float Exyn = SumProduct(oldPoints_x, newPoints_y, N);
float Eyxn = SumProduct(oldPoints_y, newPoints_x, N);
float Eyyn = SumProduct(oldPoints_y, newPoints_y, N);

// compute the transformation constants
// using least-squares regression
float divisor = Ex*Ex + Ey*Ey - N*(Ex2 + Ey2);
float P = (Exn*Ex + Eyn*Ey - N*(Exxn + Eyyn))/divisor;
float Q = (Exn*Ey + Eyn*Ex + N*(Exyn - Eyxn))/divisor;
float R = (Exn - P*Ex - Q*Ey)/N;
float S = (Eyn - P*Ey + Q*Ex)/N;

// compute the rSquared error value
// low values represent a good fit
float rSquared = 0;
float x;
float y;
for (int i = 0; i < N; i++)
{
    x = R + P*oldPoints_x[i] + Q*oldPoints_y[i];
    y = S - Q*oldPoints_x[i] + P*oldPoints_y[i];
    rSquared += (x - newPoints_x[i])*(x - newPoints_x[i]); // ^ is XOR in C, so square by multiplying
    rSquared += (y - newPoints_y[i])*(y - newPoints_y[i]);
}
To find P, Q, R, and S, you can use least squares. I think the confusing thing is that the usual description of least squares uses x and y, but they don't match the x and y in your problem. You just need to translate your problem carefully into the least squares framework. In your case the independent variables are the untransformed coordinates x and y, the dependent variables are the transformed coordinates x' and y', and the adjustable parameters are P, Q, R, and S. (If this isn't clear enough, let me know and I'll post more detail.)
Once you've found P, Q, R, and S, then scale = sqrt(P^2 + Q^2), and you can then find the rotation from sin(rotation) = Q/scale and cos(rotation) = P/scale.
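In code, the recovery step is a one-liner each; a sketch (atan2 gives a quadrant-correct angle):

#include <cmath>

// Recover scale and rotation (radians) from fitted P = s*cos(r), Q = s*sin(r).
void scale_rotation(double P, double Q, double &scale, double &rotation)
{
    scale    = std::hypot(P, Q); // sqrt(P^2 + Q^2)
    rotation = std::atan2(Q, P);
}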
You can use the levmar program to calculate this. It's tested and integrated into multiple products, including mine. It's licensed under the GPL, but if this is a non-open-source project, the author will change the license for you (for a fee).
Define the 3x3 matrix T(P,Q,R,S) such that (x',y',1) = T (x,y,1). Then compute
A = \sum_i |(T (x_i,y_i,1)) - (x'_i,y'_i,1)|^2
and minimize A against (P,Q,R,S).
Coding this yourself is a medium to large sized project unless you can guarantee that the data are well conditioned, especially when you want good error estimates out of the procedure. You're probably best off using an existing minimizer that supports error estimates.
Particle physics types would use MINUIT, either directly from CERNLIB (with the coding most easily done in Fortran 77), or from ROOT (with the coding in C++; it should also be accessible through the Python bindings). But that is a big installation if you don't have one of these tools already.
I'm sure that others can suggest other minimizers.
Thanks eJames, that's almost exactly what I have. I coded it from an old army surveying manual that was based on an earlier "Instructions to Surveyors" note that must be 100 years old! (It uses N and E for North and East rather than x/y.)
The goodness of fit parameter will be very useful - I can interactively throw out selected points if they make the fit worse.
void FindTransformation(vector<Point2D> known, vector<Point2D> unknown)
{
    // sums
    double sum_e = 0, sum_n = 0, sum_E = 0, sum_N = 0;
    int n = 0;
    for (unsigned int ii = 0; ii < known.size(); ii++) {
        sum_e += unknown[ii].x;
        sum_n += unknown[ii].y;
        sum_E += known[ii].x;
        sum_N += known[ii].y;
        ++n;
    }

    // mean position
    double me = sum_e/(double)n;
    double mn = sum_n/(double)n;
    double mE = sum_E/(double)n;
    double mN = sum_N/(double)n;

    // differences
    double sum_deE = 0, sum_dnN = 0, sum_dee = 0, sum_dnn = 0;
    double sum_dnE = 0, sum_deN = 0;
    for (unsigned int ii = 0; ii < known.size(); ii++) {
        double de = unknown[ii].x - me;
        double dn = unknown[ii].y - mn;
        // for P
        sum_deE += (de*known[ii].x);
        sum_dnN += (dn*known[ii].y);
        sum_dee += (de*unknown[ii].x);
        sum_dnn += (dn*unknown[ii].y);
        // for Q
        sum_dnE += (dn*known[ii].x);
        sum_deN += (de*known[ii].y);
    }

    double P = (sum_deE + sum_dnN) / (sum_dee + sum_dnn);
    double Q = (sum_dnE - sum_deN) / (sum_dee + sum_dnn);
    double R = mE - (P*me) - (Q*mn);
    double S = mN + (Q*me) - (P*mn);
}
One issue is that numeric stuff like this is often tricky. Even when the algorithms are straightforward, there's often problems that show up in actual computation.
For that reason, if there is a system you can get easily that has a built-in feature, it might be best to use that.
