Floating Point Divider Hardware Implementation Details - algorithm

I am trying to implement a 32-bit floating point divider in hardware, and I am wondering if I can get suggestions on the tradeoffs between different algorithms.
My floating point unit currently supports multiplication and addition/subtraction, but I am not going to switch it to a fused multiply-add (FMA) architecture, since this is an embedded platform where I am trying to minimize area usage.

A very long time ago I came across this neat and easy-to-implement float/fixed point division algorithm used in military FPUs of that period:
the input must be unsigned and shifted so that x < y and both are in the range [0.5, 1]
don't forget to store the difference of the shifts sh = shx - shy and the original signs
find f (by iterating) so that y*f -> 1 .... after that x*f -> x/y, which is the division result
shift x*f back by sh and restore the result sign (sig = sigx*sigy)
the x*f can be computed easily like this:
z = 1-y
(x*f) = (x/y) = x*(1+z)*(1+z^2)*(1+z^4)*(1+z^8)*(1+z^16)*...*(1+z^(2^n))
where
n = log2(number of fractional bits for fixed point, or mantissa bit size for floating point)
You can also stop as soon as the current z^(2^k) term underflows to zero on fixed bit-width data types.
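To make the convergence concrete, here is a minimal sketch of the core iteration on plain doubles (my own illustration, not the poster's code); it assumes the caller has already normalized the operands so that 0.5 <= y <= 1 and x < y, and it ignores exponent/sign handling:

double div_core(double x, double y)
{
    double z = 1.0 - y;           // y = 1 - z, with 0 <= z <= 0.5
    double r = x * (1.0 + z);     // first factor (1 + z)
    for (int i = 1; i <= 6; i++)  // 6 squarings suffice for a 53-bit mantissa
    {
        z *= z;                   // z^2, z^4, z^8, ..., z^64
        r *= (1.0 + z);           // multiply in the next (1 + z^(2^i)) factor
    }
    return r;                     // r -> x/y, residual error ~ z^128 <= 2^-128
}

Each extra factor doubles the number of correct bits, which is why n only needs to be log2 of the mantissa width.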
[Edit2] I had a bit of time and mood for this, so here is a 32-bit IEEE 754 C++ implementation.
I removed the old (bignum) examples to avoid confusion for future readers (they are still accessible in the edit history if needed).
//---------------------------------------------------------------------------
// IEEE 754 single masks
const DWORD _f32_sig =0x80000000; // sign
const DWORD _f32_exp =0x7F800000; // exponent
const DWORD _f32_exp_sig=0x40000000; // exponent sign
const DWORD _f32_exp_bia=0x3F800000; // exponent bias
const DWORD _f32_exp_lsb=0x00800000; // exponent LSB
const DWORD _f32_exp_pos= 23; // exponent LSB bit position
const DWORD _f32_man =0x007FFFFF; // mantissa
const DWORD _f32_man_msb=0x00400000; // mantissa MSB
const DWORD _f32_man_bits= 23; // mantissa bits
//---------------------------------------------------------------------------
float f32_div(float x,float y)
{
union _f32 // float bits access
{
float f; // 32bit floating point
DWORD u; // 32 bit uint
};
_f32 xx,yy,zz; int sh; DWORD zsig; float z;
// result signum abs value
xx.f=x; zsig =xx.u&_f32_sig; xx.u&=(0xFFFFFFFF^_f32_sig);
yy.f=y; zsig^=yy.u&_f32_sig; yy.u&=(0xFFFFFFFF^_f32_sig);
// initial exponent difference sh and normalize exponents to speed up shift in range
sh =0;
sh-=((xx.u&_f32_exp)>>_f32_exp_pos)-(_f32_exp_bia>>_f32_exp_pos); xx.u&=(0xFFFFFFFF^_f32_exp); xx.u|=_f32_exp_bia;
sh+=((yy.u&_f32_exp)>>_f32_exp_pos)-(_f32_exp_bia>>_f32_exp_pos); yy.u&=(0xFFFFFFFF^_f32_exp); yy.u|=_f32_exp_bia;
// shift input in range
while (xx.f> 1.0f) { xx.f*=0.5f; sh--; }
while (xx.f< 0.5f) { xx.f*=2.0f; sh++; }
while (yy.f> 1.0f) { yy.f*=0.5f; sh++; }
while (yy.f< 0.5f) { yy.f*=2.0f; sh--; }
while (xx.f<=yy.f) { yy.f*=0.5f; sh++; }
// divider block
z=(1.0f-yy.f);
zz.f=xx.f*(1.0f+z);
for (;;)
{
z*=z; if (z==0.0f) break;
zz.f*=(1.0f+z);
}
// shift result back
for (;sh>0;) { sh--; zz.f*=0.5f; }
for (;sh<0;) { sh++; zz.f*=2.0f; }
// set signum
zz.u&=(0xFFFFFFFF^_f32_sig);
zz.u|=zsig;
return zz.f;
}
//---------------------------------------------------------------------------
I wanted to keep it simple, so it is not optimized yet. You can, for example, replace all the *=0.5 and *=2.0 with exponent increments/decrements. If you compare with FPU results of the float operator /, this will be a bit less precise, because most FPUs compute in an 80-bit internal format and this implementation works only on 32 bits.
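For illustration, here is a minimal sketch of that exponent trick (my own addition, assuming the _f32 union and the _f32_exp_lsb mask above are visible at this scope); it is valid only for normalized, nonzero values whose exponent stays in range:

inline void f32_mul2(_f32 &a){ a.u += _f32_exp_lsb; } // a.f *= 2.0f : bump exponent field by 1
inline void f32_div2(_f32 &a){ a.u -= _f32_exp_lsb; } // a.f *= 0.5f : drop exponent field by 1

In hardware this corresponds to incrementing/decrementing the exponent register instead of running the value through the multiplier.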
As you can see, I am using only +,-,* from the FPU. This can be sped up by using fast squaring algorithms like
Fast bignum square computation
especially if you want to use big bit widths ...
Do not forget to implement normalization and/or overflow/underflow correction.

Related

Is there an efficient way to calculate ceiling of log_b(a)?

I need to accurately calculate ceil(log_b(a)) where a and b
are both integers. If I simply use the typical change-of-base formula with floating point math functions, I wind up with errors due to rounding.
You can use this identity:
b^(log_b(a)) = a
So binary-search for the largest x such that b^x is still less than a, and afterwards just increment the final result.
Here is a small C++ example for 32 bits:
//---------------------------------------------------------------------------
DWORD u32_pow(DWORD a,DWORD b) // = a^b
{
int i,bits=32;
DWORD d=1;
for (i=0;i<bits;i++)
{
d*=d;
if (DWORD(b&0x80000000)) d*=a;
b<<=1;
}
return d;
}
//---------------------------------------------------------------------------
DWORD u32_log2(DWORD a) // = ceil(log2(a))
{
DWORD x;
for (x=32;((a&0x80000000)==0)&&(x>1);x--,a<<=1);
return x;
}
//---------------------------------------------------------------------------
DWORD u32_log(DWORD b,DWORD a) // = ceil(logb(a))
{
DWORD x,m,bx;
// edge cases
if (b< 2) return 0;
if (a< 2) return 0;
if (a<=b) return 1;
m=1<<(u32_log2(a)-1); // max limit for b=2, all other bases lead to smaller exponents anyway
for (x=0;m;m>>=1)
{
x|=m;
bx=u32_pow(b,x);
if (bx>=a) x^=m;
}
return x+1;
}
//---------------------------------------------------------------------------
Where DWORD is any unsigned 32-bit int type... For more info about pow, log, exp and bin search see:
Power by squaring for negative exponents
Note that u32_log2 is not really needed (unless you want bigints); you can use a constant bit width instead. Also, some CPUs like x86 have a single asm instruction (bsr) returning the same result much faster than the for loop...
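As an illustration of that note, here is a possible intrinsic-based replacement (my own sketch, assuming GCC/Clang __builtin_clz or MSVC _BitScanReverse from <intrin.h>; like the callers of the loop version, do not pass a == 0):

#if defined(_MSC_VER)
#include <intrin.h>
#endif

DWORD u32_log2_fast(DWORD a) // same result as the loop-based u32_log2 above
{
#if defined(_MSC_VER)
    unsigned long idx;
    _BitScanReverse(&idx, a);       // idx = position of the highest set bit
    return (DWORD)idx + 1;
#else
    return 32 - __builtin_clz(a);   // 32 - number of leading zero bits
#endif
}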
Now the next step is to exploit the fact that the u32_pow bin search is the same as the u32_log bin search, so we can merge the two functions and get rid of one nested for loop completely, improving complexity considerably, like this:
//---------------------------------------------------------------------------
DWORD u32_pow(DWORD a,DWORD b) // = a^b
{
int i,bits=32;
DWORD d=1;
for (i=0;i<bits;i++)
{
d*=d;
if (DWORD(b&0x80000000)) d*=a;
b<<=1;
}
return d;
}
//---------------------------------------------------------------------------
DWORD u32_log2(DWORD a) // = ceil(log2(a))
{
DWORD x;
for (x=32;((a&0x80000000)==0)&&(x>1);x--,a<<=1);
return x;
}
//---------------------------------------------------------------------------
DWORD u32_log(DWORD b,DWORD a) // = ceil(logb(a))
{
const int _bits=32; // DWORD bitwidth
DWORD bb[_bits]; // squares of b LUT for speed up b^x
DWORD x,m,bx,bx0,bit,bits;
// edge cases
if (b< 2) return 0;
if (a< 2) return 0;
if (a<=b) return 1;
// max limit for x where b=2, all other bases lead to smaller x
bits=u32_log2(a);
// compute bb LUT
bb[0]=b;
for (bit=1;bit< bits;bit++) bb[bit]=bb[bit-1]*bb[bit-1];
for ( ;bit<_bits;bit++) bb[bit]=1;
// bin search x and b^x at the same time
for (bx=1,x=0,bit=bits-1,m=1<<bit;m;m>>=1,bit--)
{
x|=m; bx0=bx; bx*=bb[bit];
if (bx>=a){ x^=m; bx=bx0; }
}
return x+1;
}
//---------------------------------------------------------------------------
The only drawback is that we need a LUT for the squares of b, so: b, b^2, b^4, b^8, ... up to bits squares.
Beware that squaring doubles the number of bits, so you should also handle overflow if b or a are too big ...
[Edit2] more optimization
A benchmark on normal ints (on bigints the bin search is much, much faster) revealed that the bin search version is the same speed as the naive version (because of the many supporting operations besides the multiplications):
DWORD u32_log_naive(DWORD b,DWORD a) // = ceil(logb(a))
{
int x,bx;
if (b< 2) return 0;
if (a< 2) return 0;
if (a<=b) return 1;
for (x=2,bx=b;bx*=b;x++)
if (bx>=a) break;
return x;
}
We can optimize more:
we can comment out computation of unused squares:
//for ( ;bit<_bits;bit++) bb[bit]=1;
with this, the bin search becomes faster also on ints, but not by much
we can use a faster log2 instead of the naive one
see: Fastest implementation of log2(int) and log2(float)
putting it all together (x86 CPUs):
DWORD u32_log(DWORD b,DWORD a) // = ceil(logb(a))
{
const int _bits=32; // DWORD bitwidth
DWORD bb[_bits]; // squares of b LUT for speed up b^x
DWORD x,m,bx,bx0,bit,bits;
// edge cases
if (b< 2) return 0;
if (a< 2) return 0;
if (a<=b) return 1;
// max limit for x where b=2, all other bases lead to smaller x
asm {
bsr eax,a; // bits=u32_log2(a);
mov bits,eax;
}
// compute bb LUT
bb[0]=b;
for (bit=1;bit< bits;bit++) bb[bit]=bb[bit-1]*bb[bit-1];
// for ( ;bit<_bits;bit++) bb[bit]=1;
// bin search x and b^x at the same time
for (bx=1,x=0,bit=bits-1,m=1<<bit;m;m>>=1,bit--)
{
x|=m; bx0=bx; bx*=bb[bit];
if (bx>=a){ x^=m; bx=bx0; }
}
return x+1;
}
However, the speedup is only slight: for example, naive 137 ms vs. bin search 133 ms ... Note that the faster log2 made almost no difference, but that is because of how my compiler handles inline asm (I am not sure why BDS2006 and BCC32 are so slow at switching between asm and C++, but they are; that is why in older C++ Builders inline asm functions were not a good choice for speed optimization unless a major speedup was expected) ...

How can I convert between a double-double and a decimal string?

One way of increasing precision beyond that of a double (e.g. if my application is doing something space-related that needs to represent accurate positions over distances of many light-years) is to use a double-double, a structure composed of two doubles which represents the sum of the two. Algorithms are known for the various arithmetic operations on such a structure, e.g. double-double + double-double, double × double-double, etc, e.g. as given in this paper.
(Note that this is not the same format as the IEEE 754-2008 binary128, a.k.a. quad-precision, and conversion to/from double-double and binary128 is not guaranteed to round-trip.)
An obvious way to represent such a quantity as a string would then be to use strings representing each individual component of the double, e.g. "1.0+1.0e-200". My question is: is there a known way to convert to and from strings that represent the value as a single decimal? I.e. given the string "0.3", provide the double-double closest to this representation, or go in the reverse direction. One naïve way would be to use successive multiplications/divisions by 10, but that is insufficient for doubles, so I'm somewhat sceptical that it would work here.
A technique like summing 2 floating point variables effectively just doubles the mantissa bit width, so it is enough to store/load a bigger mantissa.
A standard IEEE 754 double has a 52+1 bit mantissa, leading to
log10(2^53) = 15.95 = ~16 [dec digits]
so when you add 2 such variables then:
log10(2^(53+53)) = 31.9 = ~32 [dec digits]
so just store/load a 32-digit mantissa to/from the string. The exponents of the 2 variables will differ by +/- 53, so it is enough to store just one of them.
To further improve performance and precision you can use hex strings. It's much faster and there is no rounding, as you can directly convert between the mantissa bits and the hex string characters.
any 4 bits form a single hexadecimal digit so
(53+53) / 4 = 26.5 = ~27 [hex digits]
As you can see it is also more storage efficient. The only problem is the exponent delimiter: since the hex digits include E, you need to distinguish a digit from the exponent separator by upper/lower casing, or use a different character, or use just the sign, for example:
1.23456789ABCDEFe10
1.23456789ABCDEFe+10
1.23456789ABCDEF|+10
1.23456789ABCDEF+10
I usually use the first version. Also keep in mind that the exponent is a bit shift of the mantissa, so the resulting number is:
mantissa << exponent = mantissa * (2^exponent)
Now during loading/storing from/to the string you just load a 53+53 bit integer number, then separate it into 2 mantissas and reconstruct the floating point values at the bit level ... It is important that your mantissas are aligned so that exp1+53 = exp2, give or take 1 ...
All this can be done with integer arithmetic.
If your exponent is decimal (exp10) then you will inflict heavy rounding on the number during both storing and loading to/from the string, as your mantissa will usually be missing many zero bits before or after the decimal point, making the transformation between decimal and binary/hexadecimal very hard and inaccurate (especially if you limit your computation to just 64/80/128/160 bits of mantissa).
Here is a C++ example of just that (printing a 32-bit float in decimal using integer arithmetic only):
//---------------------------------------------------------------------------
AnsiString f32_prn(float fx) // scientific format integers only
{
const int ms=10+5; // mantissa digits
const int es=2; // exponent digits
const int eb=100000;// 10^(es+3)
const int sz=ms+es+5;
char txt[sz],c;
int i=0,i0,i1,m,n,exp,e2,e10;
DWORD x,y,man;
for (i0=0;i0<sz;i0++) txt[i0]=' ';
// float -> DWORD
x=((DWORD*)(&fx))[0];
// sign
if (x>=0x80000000){ txt[i]='-'; i++; x&=0x7FFFFFFF; }
else { txt[i]='+'; i++; }
// exp
exp=((x>>23)&255)-127;
// man
man=x&0x007FFFFF;
if ((exp!=-127)&&(exp!=+128)) man|=0x00800000; // not zero or denormalized or Inf/NaN
// special cases
if ((man==0)&&(exp==-127)){ txt[i]='0'; i++; txt[i]=0; return txt; } // +/- zero
if ((man==0)&&(exp==+128)){ txt[i]='I'; i++;
txt[i]='N'; i++;
txt[i]='F'; i++; txt[i]=0; return txt; } // +/- Infinity
if ((man!=0)&&(exp==+128)){ txt[i]='N'; i++;
txt[i]='A'; i++;
txt[i]='N'; i++; txt[i]=0; return txt; } // +/- Not a number
// align man,exp to 4bit
e2=(1+(exp&3))&3;
man<<=e2;
exp-=e2+23; // exp of lsb of mantisa
e10=0; // decimal digits to add/remove
m=0; // mantissa digits
n=ms; // max mantissa digits
// integer part
if (exp>=-28)
{
x=man; y=0; e2=exp;
// shift x to integer part <<
if (x) for (;e2>0;)
{
while (x>0x0FFFFFFF){ y/=10; y+=((x%10)<<28)/10; x/=10; e10++; }
e2-=4; x<<=4; y<<=4;
x+=(y>>28)&15; y&=0x0FFFFFFF;
}
// shift x to integer part >>
for (;e2<0;e2+=4) x>>=4;
// no exponent?
if ((e10>0)&&(e10<=es+3)) n++; // no '.'
// print
for (i0=i;x;)
{
if (m<n){ txt[i]='0'+(x%10); i++; m++; if ((m==n)&&(x<eb)) m+=es+1; } else e10++;
x/=10;
}
// reverse digits
for (i1=i-1;i0<i1;i0++,i1--){ c=txt[i0]; txt[i0]=txt[i1]; txt[i1]=c; }
}
// fractional part
if (exp<0)
{
x=man; y=0; e2=exp;
// shift x to fractional part <<
if (x) for (;e2<-28;)
{
while ((x<=0x19999999)&&(y<=0x19999999)){ y*=10; x*=10; x+=(y>>28)&15; y&=0x0FFFFFFF; e10--; }
y>>=4; y&=0x00FFFFFF; y|=(x&15)<<24;
x>>=4; x&=0x0FFFFFFF; e2+=4;
}
// shift x to fractional part <<
for (;e2>-28;e2-=4) x<<=4;
// print
x&=0x0FFFFFFF;
if ((m)&&(!e10)) n+=es+2; // no exponent means more digits for mantissa
if (x)
{
if (m){ txt[i]='.'; i++; }
for (i0=i;x;)
{
y*=10; x*=10;
x+=(y>>28)&15;
if (m<n)
{
i0=((x>>28)&15);
if (!m)
{
if (i0)
{
txt[i]='0'+i0; i++; m++;
txt[i]='.'; i++;
}
e10--;
if (!e10) n+=es+2; // no exponent means more digits for mantissa
}
else { txt[i]='0'+i0; i++; m++; }
} else break;
y&=0x0FFFFFFF;
x&=0x0FFFFFFF;
}
}
}
else{
// no fractional part
if ((e10>0)&&(e10<sz-i))
for (;e10;e10--){ txt[i]='0'+i0; i++; m++; }
}
// exponent
if (e10)
{
if (e10>0) // move . after first digit
{
for (i0=i;i0>2;i0--) txt[i0]=txt[i0-1];
txt[2]='.'; i++; e10+=i-3;
}
// sign
txt[i]='E'; i++;
if (e10<0.0){ txt[i]='-'; i++; e10=-e10; }
else { txt[i]='+'; i++; }
// print
for (i0=i;e10;){ txt[i]='0'+(e10%10); e10/=10; i++; }
// reverse digits
for (i1=i-1;i0<i1;i0++,i1--){ c=txt[i0]; txt[i0]=txt[i1]; txt[i1]=c; }
}
txt[i]=0;
return txt;
}
//---------------------------------------------------------------------------
Just change the AnsiString return type into any string type or char* you have at your disposal ...
As you can see, it is a lot of code with a lot of hacks, and internally a lot more than 24 bits of mantissa are used to lower the rounding errors inflicted by the decimal exponent.
So I strongly advise using a binary exponent (exp2) and hex digits for the mantissa; it will simplify your problem a lot and get rid of the rounding entirely. The only problem is when you want to print or input a decimal number; in that case you have no choice but to round ... Luckily you can use hex output and convert it to decimal on strings... or construct the print from single-variable prints (see the sketch after the links below) ...
for more info see related QAs:
How do I convert a very long binary number to decimal?
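Regarding the hex-mantissa/binary-exponent advice above: standard C/C++ already offers exactly that for a single double via the "%a" printf/scanf conversions, which round-trip the value with no decimal rounding. A double-double could then be stored as two such fields, one per component (a minimal sketch, my own illustration):

#include <cstdio>

int main()
{
    double x = 0.3;
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%a", x);   // e.g. "0x1.3333333333333p-2" (hex mantissa, binary exponent)
    std::printf("%s\n", buf);
    double y;
    std::sscanf(buf, "%la", &y);                // exact round-trip: y == x
    std::printf("round-trip ok: %d\n", x == y);
    return 0;
}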

Why float division is faster than integer division in c++?

Consider the following code snippets in C++ (Visual Studio 2015):
First Block
const int size = 500000000;
int sum =0;
int *num1 = new int[size];//initialized between 1-250
int *num2 = new int[size];//initialized between 1-250
for (int i = 0; i < size; i++)
{
sum +=(num1[i] / num2[i]);
}
Second Block
const int size = 500000000;
int sum =0;
float *num1 = new float [size]; //initialized between 1-250
float *num2 = new float [size]; //initialized between 1-250
for (int i = 0; i < size; i++)
{
sum +=(num1[i] / num2[i]);
}
I expected the first block to run faster because it uses an integer operation. But the second block is considerably faster, although it uses a floating point operation. Here are the results of my benchmark:
Division:
Type Time
uint8 879.5ms
uint16 885.284ms
int 982.195ms
float 654.654ms
Floating point multiplication is likewise faster than integer multiplication.
Here are the results of my benchmark:
Multiplication:
Type Time
uint8 166.339ms
uint16 524.045ms
int 432.041ms
float 402.109ms
My system spec: CPU Core i7-7700, RAM 64 GB, Visual Studio 2015
Floating point division is faster than integer division because of the exponent part of the floating point representation: to divide one exponent by another, a plain subtraction is used.
int32_t division requires fast division of 31-bit numbers, whereas float division requires fast division of 24-bit mantissas (the leading one in the mantissa is implied and not stored in a floating point number) plus a fast subtraction of 8-bit exponents.
See an excellent detailed explanation of how division is performed in the CPU.
It may be worth mentioning that SSE and AVX instructions only provide floating point division, but no integer division. SSE instructions/intrinsics can easily be used to roughly quadruple the speed of your float calculation (a sketch follows below).
If you look into Agner Fog's instruction tables, for example, for Skylake, the latency of the 32-bit integer division is 26 CPU cycles, whereas the latency of the SSE scalar float division is 11 CPU cycles (and, surprisingly, it takes the same time to divide four packed floats).
Also note that in C and C++ there is no division on types shorter than int, so uint8_t and uint16_t are first promoted to int and then the division of ints happens. uint8_t division looks faster than int because its values have fewer bits set when converted to int, which lets the division complete faster.
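To illustrate the SSE point, here is a minimal sketch (my own illustration, assuming SSE intrinsics from <xmmintrin.h> and a size divisible by 4) that divides four packed floats per iteration:

#include <xmmintrin.h>

float sum_div_sse(const float *num1, const float *num2, int size)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < size; i += 4)
    {
        __m128 a = _mm_loadu_ps(num1 + i);        // load 4 floats (unaligned)
        __m128 b = _mm_loadu_ps(num2 + i);
        acc = _mm_add_ps(acc, _mm_div_ps(a, b));  // 4 divisions at once
    }
    float tmp[4];
    _mm_storeu_ps(tmp, acc);                      // horizontal sum of the 4 lanes
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

Note this accumulates a float sum rather than the int sum of the original benchmark, so it is only meant to show the packed-division idea.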

How to calculate modulus of 64-bit unsigned integer?

Note: This question is different from Fastest way to calculate a 128-bit integer modulo a 64-bit integer.
Here's a C# fiddle:
https://dotnetfiddle.net/QbLowb
Given the pseudocode:
UInt64 a = 9228496132430806238;
UInt32 d = 585741;
How do I calculate
UInt32 r = a % d?
The catch, of course, is that I am not in a compiler that supports the UInt64 data type [1]. But I do have access to the Windows ULARGE_INTEGER union:
typedef struct ULARGE_INTEGER {
DWORD LowPart;
DWORD HighPart;
};
Which really means that I can turn my code above into:
//9228496132430806238 = 0x80123456789ABCDE
UInt32 a = 0x80123456; //high part
UInt32 b = 0x789ABCDE; //low part
UInt32 r = 585741;
How to do it
But now comes the actual calculation. I can start with pencil-and-paper long division:
________________________
585741 ) 0x80123456 0x789ABCDE
To make it simpler, we can work in variables; now we are working entirely with 32-bit unsigned types, which my compiler does support:
u1 = a / r; //integer truncation math
v1 = a % r; //modulus
But now I've brought myself to a standstill, because now I have to calculate:
v1||b / r
In other words, I have to perform division of a 64-bit value, which is exactly what I was unable to do in the first place!
This must be a solved problem already. But the only questions I can find on Stack Overflow are people trying to calculate:
a^b mod n
or other cryptographically large multi-precision operations, or approximate floating point.
Bonus Reading
Microsoft Research: Division and Modulus for Computer Scientists
https://stackoverflow.com/questions/36684771/calculating-large-mods-by-hand
Fastest way to calculate a 128-bit integer modulo a 64-bit integer (unrelated question; I hate you people)
[1] But it does support Int64, though I don't think that helps me.
Working with Int64 support
I was hoping for a generic solution to performing modulus against a ULARGE_INTEGER (and even LARGE_INTEGER) in a compiler without native 64-bit support. That would be the correct, good, perfect, and ideal answer, which other people would be able to use when they need it.
But there is also the reality of the problem I have, and it can lead to an answer that is generally not useful to anyone else:
cheating by calling one of the Win32 large integer functions (although there is none for modulus)
cheating by using 64-bit support for signed integers
I can check if a is positive. If it is, I know my compiler's built-in support for Int64 will handle:
UInt32 r = a % d; //for a >= 0
Then there's how to handle the other case: a is negative.
UInt32 ModU64(ULARGE_INTEGER a, UInt32 d)
{
//Hack: Our compiler does support Int64, just not UInt64.
//Use that Int64 support if the high bit in a isn't set.
Int64 sa = (Int64)a.QuadPart;
if (sa >= 0)
return (sa % d);
//sa is negative. What to do...what to do.
//If we want to continue to work with 64-bit integers,
//we could now treat our number as two 64-bit signed values:
// a == (aHigh + aLow)
// aHigh = 0x8000000000000000
// aLow = 0x0fffffffffffffff
//
// a mod d = (aHigh + aLow) % d
// = ((aHigh % d) + (aLow % d)) % d //<--Is this even true!?
Int64 aLow = sa & 0x0fffffffffffffff;
Int64 aHigh = 0x8000000000000000;
UInt32 rLow = aLow % d; //remainder from low portion
UInt32 rHigh = aHigh % d; //this doesn't work, because it's "-1 mod d"
Int64 r = (rHigh + rLow) % d;
return r;
}
Answer
It took a while, but I finally got an answer. I would have posted it as an answer, but people mistakenly decided that my unique question was an exact duplicate.
UInt32 ModU64(ULARGE_INTEGER a, UInt32 d)
{
    //I have no idea if this overflows some intermediate calculations
    //(the (Ah % d) * (2^32 % d) product can exceed 32 bits for large d)
    UInt32 Al = a.LowPart;
    UInt32 Ah = a.HighPart;
    //a = Ah*2^32 + Al, and 2^32 % d == ((0xFFFFFFFF % d) + 1) % d
    UInt32 remainder = (((Ah % d) * (((0xFFFFFFFF % d) + 1) % d)) + (Al % d)) % d;
    return remainder;
}
Fiddle
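An alternative that avoids the intermediate multiply (and its potential overflow) entirely is to fold the high word into the remainder one bit at a time, using only 32-bit unsigned arithmetic. This is my own sketch, not from the fiddle above; ModU64_bits is a hypothetical name and UInt32 is the same 32-bit unsigned type as in the code above:

UInt32 ModU64_bits(UInt32 Ah, UInt32 Al, UInt32 d)
{
    UInt32 r = Ah % d;                 // remainder of the high word
    for (int i = 31; i >= 0; i--)      // bring in the low word one bit at a time
    {
        UInt32 bit = (Al >> i) & 1;
        UInt32 top = r >> 31;          // bit that the doubling below would lose
        r = (r << 1) | bit;            // r = 2*r + bit (mod 2^32)
        if (top || r >= d) r -= d;     // old r < d, so the true value < 2*d: one subtraction reduces it
    }
    return r;                          // = (Ah*2^32 + Al) % d
}

It always runs 32 iterations, but every intermediate value fits in 32 bits.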
I just updated my ALU32 class code in this related QA:
Cant make value propagate through carry
as CPU-assembly-independent code for mul and div was requested. The divider solves all your problems; however, it uses binary long division, so it is a bit slower than stacking up 32-bit mul/mod/div operations. Here is the relevant part of the code:
void ALU32::div(DWORD &c,DWORD &d,DWORD ah,DWORD al,DWORD b)
{
DWORD ch,cl,bh,bl,h,l,mh,ml;
int e;
// edge cases
if (!b ){ c=0xFFFFFFFF; d=0xFFFFFFFF; cy=1; return; }
if (!ah){ c=al/b; d=al%b; cy=0; return; }
// align a,b for binary long division m is the shifted mask of b lsb
for (bl=b,bh=0,mh=0,ml=1;bh<0x80000000;)
{
e=0; if (ah>bh) e=+1; // e = cmp a,b {-1,0,+1}
else if (ah<bh) e=-1;
else if (al>bl) e=+1;
else if (al<bl) e=-1;
if (e<=0) break; // a<=b ?
shl(bl); rcl(bh); // b<<=1
shl(ml); rcl(mh); // m<<=1
}
// binary long division
for (ch=0,cl=0;;)
{
sub(l,al,bl); // a-b
sbc(h,ah,bh);
if (cy) // a<b ?
{
if (ml==1) break;
shr(mh); rcr(ml); // m>>=1
shr(bh); rcr(bl); // b>>=1
continue;
}
al=l; ah=h; // a>=b ?
add(cl,cl,ml); // c+=m
adc(ch,ch,mh);
}
cy=0; c=cl; d=al;
if ((ch)||(ah)) cy=1; // overflow
}
Look at the linked QA for a description of the class and the subfunctions used. The idea behind a/b is simple:
definition
Let's assume we have 64/64-bit division (the modulus falls out as a by-product) and want to use 32-bit arithmetic, so:
(ah,al) / (bh,bl) = (ch,cl)
each 64-bit QWORD is defined as a high and a low 32-bit DWORD.
align a,b
Exactly like computing division on paper, we must align b with a, so find sh such that:
(bh,bl)<<sh <= (ah,al)
(bh,bl)<<(sh+1) > (ah,al)
and compute m so
(mh,ml) = 1<<sh
Beware that in case bh >= 0x80000000 you must stop the shifting, or we would overflow ...
divide
Set the result c = 0 and then simply subtract b from a while a >= b. For each subtraction, add m to c. Once a < b, shift both b and m right to align again. Stop when m == 0 or a == 0.
result
c will hold the 64-bit result of the division, so use cl; similarly a holds the remainder, so use al as your modulus result. You can check whether ch and ah are zero; if not, an overflow occurred (the result is bigger than 32 bits). The same goes for edge cases like division by zero...
Now, as you want 64bit/32bit, simply set bh = 0 ... To do this I needed 64-bit operations (+,-,<<,>>), which I built by stacking up 32-bit operations with carry (that is the reason my ALU32 class was created in the first place); for more info see the link above.
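A hedged usage sketch for that 64bit/32bit case (it assumes the ALU32 class from the linked QA; only the div member shown above and the cy carry flag it sets are relied on here):

ALU32 alu;
DWORD q, r;
DWORD ah = 0x80123456, al = 0x789ABCDE;   // the 64-bit value from the question
alu.div(q, r, ah, al, 585741);            // q = quotient (low 32 bits), r = remainder = a % 585741
// if (alu.cy) ... overflow (quotient does not fit in 32 bits) or division by zero

Even when the quotient overflows 32 bits, the remainder r is still the modulus result you want; cy only tells you the quotient itself is not usable.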

How can I use mach_absolute_time without overflowing?

On Darwin, the POSIX standard clock_gettime(CLOCK_MONOTONIC) timer is not available. Instead, the highest resolution monotonic timer is obtained through the mach_absolute_time function from mach/mach_time.h.
The result returned may be an unadjusted tick count from the processor, in which case the time units could be a strange multiple. For example, on a CPU with a 33MHz tick count, Darwin returns 1000000000/33333335 as the exact units of the returned result (i.e., multiply the mach_absolute_time by that fraction to obtain a nanosecond value).
We usually wish to convert from exact ticks to "standard" (decimal) units, but unfortunately, naively multiplying the absolute time by the fraction will overflow even in 64-bit arithmetic. This is an error that Apple's sole piece of documentation on mach_absolute_time falls into (Technical Q&A QA1398).1
How should I write a function that correctly uses mach_absolute_time?
Note that this is not a theoretical problem: the sample code in QA1398 completely fails to work on PowerPC-based Macs. On Intel Macs, mach_timebase_info always returns 1/1 as the scaling factor because the CPU's raw tick count is unreliable (dynamic speed-stepping), so the API does the scaling for you. On PowerPC Macs, mach_timebase_info returns either 1000000000/33333335 or 1000000000/25000000, so Apple's provided code definitely overflows every few minutes. Oops.
Most-precise (best) answer
Perform the arithmetic at 128-bit precision to avoid the overflow!
// Returns monotonic time in nanos, measured from the first time the function
// is called in the process.
uint64_t monotonicTimeNanos() {
uint64_t now = mach_absolute_time();
static struct Data {
Data(uint64_t bias_) : bias(bias_) {
kern_return_t mtiStatus = mach_timebase_info(&tb);
assert(mtiStatus == KERN_SUCCESS);
}
uint64_t scale(uint64_t i) {
return scaleHighPrecision(i - bias, tb.numer, tb.denom);
}
static uint64_t scaleHighPrecision(uint64_t i, uint32_t numer,
uint32_t denom) {
uint64_t high = (i >> 32) * numer;
uint64_t low = (i & 0xffffffffull) * numer / denom;
uint64_t highRem = ((high % denom) << 32) / denom;
high /= denom;
return (high << 32) + highRem + low;
}
mach_timebase_info_data_t tb;
uint64_t bias;
} data(now);
return data.scale(now);
}
A simple low-resolution answer
// Returns monotonic time in nanos, measured from the first time the function
// is called in the process. The clock may run up to 0.1% faster or slower
// than the "exact" tick count.
uint64_t monotonicTimeNanos() {
uint64_t now = mach_absolute_time();
static struct Data {
Data(uint64_t bias_) : bias(bias_) {
kern_return_t mtiStatus = mach_timebase_info(&tb);
assert(mtiStatus == KERN_SUCCESS);
if (tb.denom > 1024) {
double frac = (double)tb.numer/tb.denom;
tb.denom = 1024;
tb.numer = tb.denom * frac + 0.5;
assert(tb.numer > 0);
}
}
mach_timebase_info_data_t tb;
uint64_t bias;
} data(now);
return (now - data.bias) * data.tb.numer / data.tb.denom;
}
A fiddly solution using low-precision arithmetic but using continued fractions to avoid loss of accuracy
// This function returns the rational number inside the given interval with
// the smallest denominator (and smallest numerator breaks ties; correctness
// proof neglects floating-point errors).
static mach_timebase_info_data_t bestFrac(double a, double b) {
if (floor(a) < floor(b))
{ mach_timebase_info_data_t rv = {(int)ceil(a), 1}; return rv; }
double m = floor(a);
mach_timebase_info_data_t next = bestFrac(1/(b-m), 1/(a-m));
mach_timebase_info_data_t rv = {(int)m*next.numer + next.denom, next.numer};
return rv;
}
// Returns monotonic time in nanos, measured from the first time the function
// is called in the process. The clock may run up to 0.1% faster or slower
// than the "exact" tick count. However, although the bound on the error is
// the same as for the pragmatic answer, the error is actually minimized over
// the given accuracy bound.
uint64_t monotonicTimeNanos() {
uint64_t now = mach_absolute_time();
static struct Data {
Data(uint64_t bias_) : bias(bias_) {
kern_return_t mtiStatus = mach_timebase_info(&tb);
assert(mtiStatus == KERN_SUCCESS);
double frac = (double)tb.numer/tb.denom;
uint64_t spanTarget = 315360000000000000llu; // 10 years
if (getExpressibleSpan(tb.numer, tb.denom) >= spanTarget)
return;
for (double errorTarget = 1/1024.0; errorTarget > 0.000001;) {
mach_timebase_info_data_t newFrac =
bestFrac((1-errorTarget)*frac, (1+errorTarget)*frac);
if (getExpressibleSpan(newFrac.numer, newFrac.denom) < spanTarget)
break;
tb = newFrac;
errorTarget = fabs((double)tb.numer/tb.denom - frac) / frac / 8;
}
assert(getExpressibleSpan(tb.numer, tb.denom) >= spanTarget);
}
mach_timebase_info_data_t tb;
uint64_t bias;
} data(now);
return (now - data.bias) * data.tb.numer / data.tb.denom;
}
The derivation
We aim to reduce the fraction returned by mach_timebase_info to one that is essentially the same, but with a small denominator. The size of the timespan that we can handle is limited only by the size of the denominator, not the numerator of the fraction we shall multiply by:
uint64_t getExpressibleSpan(uint32_t numer, uint32_t denom) {
// This is just less than the smallest thing we can multiply numer by without
// overflowing. ceilLog2(numer) = 64 - number of leading zeros of numer
uint64_t maxDiffWithoutOverflow = ((uint64_t)1 << (64 - ceilLog2(numer))) - 1;
return maxDiffWithoutOverflow * numer / denom;
}
If denom=33333335 as returned by mach_timebase_info, we can handle differences of up to 18 seconds only before the multiplication by numer overflows. As getExpressibleSpan shows, by calculating a rough lower bound for this, the size of numer doesn't matter: halving numer doubles maxDiffWithoutOverflow. The only goal therefore is to produce a fraction close to numer/denom that has a smaller denominator. The simplest method to do this is using continued fractions.
The continued fractions method is rather handy. bestFrac clearly works correctly if the provided interval contains an integer: it returns the least integer in the interval over 1. Otherwise, it calls itself recursively with a strictly larger interval and returns m+1/next. The final result is a continued fraction that can be shown by induction to have the correct property: it's optimal, the fraction inside the given interval with the least denominator.
Finally, we reduce the fraction Darwin passes us to a smaller one to use when rescaling the mach_absolute_time to nanoseconds. We may introduce an error here because we can't reduce the fraction in general without losing accuracy. We set ourselves the target of 0.1% error, and check that we've reduced the fraction enough for common timespans (up to ten years) to be handled correctly.
Arguably the method is over-complicated for what it does, but it handles correctly anything the API can throw at it, and the resulting code is still short and extremely fast (bestFrac typically recurses only three or four iterations deep before returning a denominator less than 1000 for random intervals [a,a*1.002]).
You're worrying about overflow when multiplying/dividing with values from the mach_timebase_info struct, which is used for conversion to nanoseconds. So, while it may not fit your exact needs, there are easier ways to get a count in nanoseconds or seconds.
All solutions below are using mach_absolute_time internally (and NOT the wall clock).
Use double instead of uint64_t
(supported in Objective-C and Swift)
double tbInSeconds = 0;
mach_timebase_info_data_t tb;
kern_return_t kError = mach_timebase_info(&tb);
if (kError == 0) {
tbInSeconds = 1e-9 * (double)tb.numer / (double)tb.denom;
}
(remove the 1e-9 if you want nanoseconds)
Usage:
uint64_t start = mach_absolute_time();
// do something
uint64_t stop = mach_absolute_time();
double durationInSeconds = tbInSeconds * (stop - start);
Use ProcessInfo.processInfo.systemUptime
(supported in Objective-C and Swift)
It does the job in double seconds directly:
CFTimeInterval start = NSProcessInfo.processInfo.systemUptime;
// do something
CFTimeInterval stop = NSProcessInfo.processInfo.systemUptime;
NSTimeInterval durationInSeconds = stop - start;
For reference, the source code of systemUptime
just does something similar to the previous solution:
struct mach_timebase_info info;
mach_timebase_info(&info);
__CFTSRRate = (1.0E9 / (double)info.numer) * (double)info.denom;
__CF1_TSRRate = 1.0 / __CFTSRRate;
uint64_t tsr = mach_absolute_time();
return (CFTimeInterval)((double)tsr * __CF1_TSRRate);
Use QuartzCore.CACurrentMediaTime()
(supported in Objective-C and Swift)
Same as systemUptime, but without being open source.
Use Dispatch.DispatchTime.now()
(supported in Swift only)
Another wrapper around mach_absolute_time(). Base precision is nanoseconds, backed with UInt64.
let start = DispatchTime.now()
// do something
let stop = DispatchTime.now()
let durationInSeconds = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
For reference, source code of DispatchTime.now() says it basically simply returns a struct DispatchTime(rawValue: mach_absolute_time()). And the calculation for uptimeNanoseconds is:
(result, overflow) = result.multipliedReportingOverflow(by: UInt64(DispatchTime.timebaseInfo.numer))
result = overflow ? UInt64.max : result / UInt64(DispatchTime.timebaseInfo.denom)
So it just saturates to UInt64.max if the multiplication cannot be stored in a UInt64.
If mach_absolute_time() ever wraps the uint64 back to 0, reset your time calculations whenever the new value is less than the previous one.
That's the problem: the documentation doesn't say what happens when the uint64 reaches all ones (binary).
Read it: https://developer.apple.com/documentation/kernel/1462446-mach_absolute_time

Resources