How to compute the integer absolute value - algorithm

How do you compute the integer absolute value without using an if condition?
I guess we need to use some bitwise operations.
Can anybody help?

Same as existing answers, but with more explanations:
Let's assume a two's-complement number (as it's the usual case and you don't say otherwise) and let's assume 32-bit:
First, we perform an arithmetic right-shift by 31 bits. This shifts in all 1s for a negative number or all 0s for a positive one. (Note that the >> operator's behaviour for negative numbers is implementation-defined in C and C++, though it usually performs an arithmetic shift; let's just assume pseudocode or actual hardware instructions, since it sounds like homework anyway.)
mask = x >> 31;
So what we get is 111...111 (-1) for negative numbers and 000...000 (0) for positives
Now we XOR this with x, getting the behaviour of a NOT for mask=111...111 (negative) and a no-op for mask=000...000 (positive):
x = x XOR mask;
And finally subtract our mask, which means +1 for negatives and +0/no-op for positives:
x = x - mask;
So for positives we perform an XOR with 0 and a subtraction of 0 and thus get the same number. And for negatives, we get (NOT x) + 1, which is exactly -x in two's-complement representation.
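To make the steps concrete, here is a trace for x = -5 (shown in 8 bits for readability; the 32-bit case is identical in shape):
x            = 11111011  (-5)
mask = x>>7  = 11111111  (-1, arithmetic shift)
x ^ mask     = 00000100  (NOT x = 4)
x - mask     = 00000101  (4 - (-1) = 5)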

Set the mask as the right shift of the integer by 31 (assuming integers are stored as two's-complement 32-bit values and that the right-shift operator does sign extension):
mask = n >> 31
XOR the mask with the number:
mask ^ n
Subtract the mask from the result of step 2 and return the result:
(mask ^ n) - mask

Assume int is 32-bit.
int my_abs(int x)
{
    int y = (x >> 31);
    return (x ^ y) - y;
}

One can also perform the above operation as:
return n*(((n>0)<<1)-1);
where n is the number whose absolute value needs to be calculated.

In C, you can use unions to perform bit manipulations on doubles. The following works in C for doubles; the same idea can be adapted to floats and integers.
#include <stdint.h>

/**
 * Calculates the absolute value of a double.
 * @param x An 8-byte floating-point double
 * @return A positive double
 * @note Uses bit manipulation and does not care about NaNs
 */
double abs(double x)
{
    union {
        uint64_t bits;
        double dub;
    } b;
    b.dub = x;
    // Clear the sign bit
    b.bits &= 0x7FFFFFFFFFFFFFFF;
    return b.dub;
}
Note that this assumes that doubles are 8 bytes.
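A float version is analogous; here is a minimal sketch of mine (the function name abs_f is hypothetical, chosen to avoid clashing with fabsf), assuming 4-byte IEEE-754 floats:

#include <stdint.h>

float abs_f(float x)
{
    union {
        uint32_t bits;
        float flt;
    } b;
    b.flt = x;
    b.bits &= 0x7FFFFFFF;   /* clear the sign bit */
    return b.flt;
}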

I wrote my own, before discovering this question.
My answer is probably slower, but still valid:
int abs_of_x = ((x*(x >> 31)) | ((~x + 1) * ((~x + 1) >> 31)));

If you are not allowed to use the minus sign you could do something like this:
int absVal(int x) {
    return ((x >> 31) + x) ^ (x >> 31);
}

For assembly the most efficient would be to initialize a value to 0, subtract the integer, and then take the max:
pxor mm1, mm1 ; set mm1 to all zeros
psubw mm1, mm0 ; make each mm1 word contain the negative of each mm0 word
pmaxsw mm1, mm0 ; mm1 will contain only the positive (larger) values - the absolute value
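If you would rather stay in C, the same zero-subtract-max idea can be expressed with SSE2 intrinsics, the 128-bit successors of the MMX instructions above. A sketch of mine (the helper name abs_epi16 is hypothetical); like the MMX version, it returns -32768 unchanged, since its positive counterpart is not representable in 16 bits:

#include <emmintrin.h>

__m128i abs_epi16(__m128i v)
{
    __m128i zero = _mm_setzero_si128();     /* set all words to zero */
    __m128i neg  = _mm_sub_epi16(zero, v);  /* 0 - v in each 16-bit word */
    return _mm_max_epi16(neg, v);           /* max(v, -v) = |v| */
}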

In C#, you can implement abs() without using any local variables:
public static long abs(long d) => (d + (d >>= 63)) ^ d;
public static int abs(int d) => (d + (d >>= 31)) ^ d;
Note: regarding 0x80000000 (int.MinValue) and 0x8000000000000000 (long.MinValue):
As with all of the other bitwise/non-branching methods shown on this page, this gives the single non-mathematical result abs(int.MinValue) == int.MinValue (likewise for long.MinValue). These are the only cases where the result value is negative, that is, where the MSB of the two's-complement result is 1, and they are also the only cases where the input value is returned unchanged. I don't believe this important point was mentioned elsewhere on this page.
The code shown above depends on the value of d used on the right side of the XOR being the value of d updated during the computation of the left side. To C# programmers this will seem obvious, because C# strictly guarantees left-to-right evaluation of operands, so the fetch order here is well-defined. The reason I mention this is that in C or C++ one must be more cautious: their sequencing rules are considerably more permissive, and an expression like this could be reordered by the compiler, so fetch-order sensitivity would be a correctness hazard there.

If you don't want to rely on the right shift performing sign extension, you can calculate the mask differently:
mask = ~((n >> 31) & 1) + 1
then proceed as was already demonstrated in the previous answers:
(n ^ mask) - mask
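Putting the two steps together (a minimal sketch; the function name is mine):

int my_abs_portable(int n)
{
    int mask = ~((n >> 31) & 1) + 1;   /* -1 if the sign bit is set, else 0 */
    return (n ^ mask) - mask;
}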

What is the programming language you're using? In C# you can use the Math.Abs method:
int value1 = -1000;
int value2 = 20;
int abs1 = Math.Abs(value1);
int abs2 = Math.Abs(value2);

Related

Truncate integer using bit twiddling

Is there a way to "truncate" an integer using bit twiddling, as if it floor-divided and then multiplied back, as in:
z = floor(x / y) * y
I know it is possible to do so if y is of power of two, for example:
z = floor(x / 4) * 4 == x & ~3
But what trick does one use when y is some general positive integer?
For each individual y, there is a sequence of operations (addition, subtraction, and binary shift) which divides x by y faster than the (x86) division instruction.
Finding that sequence however is not straightforward, and must be done in advance (feasible when you divide by the same y a lot).
A simple example: to divide an arbitrary uint32 x by 3, we can instead calculate x * M in uint64 type and shift it to the right by 33 bits, where M is a magic constant equal to 2^33 / 3 rounded up.
The following code (C) tries 20 random uint32 values with the above algorithm and checks that the result is equal to just dividing by 3:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main ()
{
    int step;
    unsigned x, y1, y2;
    unsigned const M = (1ULL << 33) / 3 + 1;
    srand (time (NULL));
    for (step = 0; step < 20; step++)
    {
        x = (rand () << 30) | (rand () << 15) | rand ();
        y1 = x / 3;
        y2 = (x * 1ULL * M) >> 33;
        printf ("%10u %10u %10u %s\n", x, y1, y2, y1 == y2 ? "true" : "false");
    }
    return 0;
}
For further information, see the book Hacker's Delight in general, and its freely available addendum to chapter 10 here: hackersdelight.org/divcMore.pdf.
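Tying this back to the question: combining the magic-multiply division above with a multiplication back gives the truncation asked for. A sketch for y = 3, using the constant from the code above (the function name is mine):

#include <stdint.h>

/* z = floor(x / 3) * 3, with no division instruction. */
uint32_t trunc_to_multiple_of_3(uint32_t x)
{
    uint32_t const M = (uint32_t)((1ULL << 33) / 3 + 1);  /* magic constant */
    uint32_t q = (uint32_t)(((uint64_t)x * M) >> 33);     /* q = x / 3 */
    return q * 3;                                          /* back to a multiple */
}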
The reason this works for powers of 2 is the way binary representation works. Dividing by 2 (or a power of 2) is identical to bit shifting. Shifting right and then back left the same amount is identical to floor-division, as you put it.
Consider an arbitrary binary number: 110101010111. If you bit-shift it 3 times to the right (division by 8) and then back again, it turns into 110101010000, which is identical to ANDing it with 111111111000. Now let's consider division by 3 of the (decimal) number 16: start with 10000. Division (not shifting!) by 3 gives 5 (101), and multiplying by 3 again gives 15 (1111). No bit shifting can do that.
The obvious thing to do is to convert to whatever base you are trying to work with and then set the last digit to 0 (or, if you are working with a k-th power, set the last k digits to 0). However, you asked about bit (base-2) operations. It turns out that for any desired odd base B, you can come up with a binary number whose first M digits in base B are anything you want, for any M. So there can hardly be a simple general method for an odd base that works directly on bits; at the very least it would be a lot more complicated than converting your number to the desired base, setting the last digits to 0, and converting back to the natural base-2 integer representation.

how to calculate 2^32 without multiplying numbers directly?

The simplest way to calculate 2^32 is 2*2*2*2*2... = 4294967296. I want to know: is there any other way to get 4294967296? (2^16 * 2^16 is treated as the same method as 2*2*2....)
And how many ways are there to calculate it?
Is there any function to calculate it?
I can't come up with any method other than 2*2*2...
2 << 31
is a bit shift. It effectively raises 2 to the 32nd power.
Options:
1 << 32
2^32 = (2^32 - 1) + 1 = (((2^32 - 1) + 1) - 1) + 1 = ...
Arrange 32 items on a table. Count the ways you can choose subsets of them.
If you are not much of a fan of binary magic, then I would suggest quickpower. This function computes x^n in O(log n) time. (Use a 64-bit type here, since 2^32 overflows a 32-bit int.)
long long qpower(long long x, int n)
{
    if (n == 0) return 1;
    if (n == 1) return x;
    long long mid = qpower(x, n/2);
    if (n % 2 == 0) return mid*mid;
    return x*mid*mid;
}
If you are on a common computer you can left bitshift 2 by 31 (i.e. 2<<31) to obtain 2^32.
In standard C:
unsigned long long x = 2ULL << 31;
unsigned long long is needed since a simple unsigned long is not guaranteed to be large enough to store the value of 2<<31.
In section 5.2.4.2.1 paragraph 1 of the C99 standard:
... the following shall be replaced by expressions that have the same type as would an expression that is an object of the corresponding type converted according to the integer promotions. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
— maximum value for an object of type unsigned long int
ULONG_MAX 4294967295 // 2^32 - 1
— maximum value for an object of type unsigned long long int
ULLONG_MAX 18446744073709551615 // 2^64 - 1
Why not use Math.Pow() (in .NET)? I think most languages (or environments) provide a similar function:
Math.Pow(2,32);
In Groovy/Java you can do something like the following with a long (a signed int can hold at most 2^31 - 1 in Java):
long twoPOW32 = 1L << 32;

Optimizing this query-based search

We have two N-bit numbers (0 < N < 100000). We have to perform q queries (0 < q < 500000) over these numbers. A query can be one of the following three types:
set_a idx x: Set A[idx] to x, where 0 <= idx < N, and where A[idx] is the idx'th least significant bit of A.
set_b idx x: Set B[idx] to x, where 0 <= idx < N.
get_c idx: Print C[idx], where C = A + B, and 0 <= idx
Now, I have optimized the code to the best extent I can.
First, I tried an int array for a, b and c. For every update, I calculate c and return the idx'th bit when queried. It was damn slow. Cleared only 4/11 test cases.
I moved over to using a boolean array. It was around 2 times faster than the int array approach. Cleared 7/11 test cases.
Next, I figured out that I need not calculate c to get the idx'th bit of A+B. I just scan A and B rightward from idx until I find either a[i]=b[i]=0 or a[i]=b[i]=1. If a[i]=b[i]=0, I add up leftward to the idx'th bit starting with initial carry=0; if a[i]=b[i]=1, I do the same starting with initial carry=1.
This was faster but cleared only 8/11 test cases.
Then I realized that once I reach a position i with a[i]=b[i]=0 or a[i]=b[i]=1, I need not add up towards the idx'th position at all. If a[i]=b[i]=0, the answer is (a[idx]+b[idx])%2, and if a[i]=b[i]=1, the answer is (a[idx]+b[idx]+1)%2. That was around 40% faster but still cleared only 8/11 test cases.
Now my question is: how do I get those 3 'hard' test cases down? I don't know what they are, but the program is taking >3 sec to solve the problem.
Here is the code: http://ideone.com/LopZf
One possible optimization is to replace
(a[pos]+b[pos]+carry)%2
with
a[pos]^b[pos]^carry
The XOR operator (^) performs addition modulo 2, making the potentially expensive mod operation (%) unnecessary. Depending on the language and compiler, the compiler may make optimizations for you when doing a mod with a power of 2. But since you are micro-optimizing it is a simple change to make that removes dependence on that optimization being made for you behind the scenes.
http://en.wikipedia.org/wiki/Exclusive_or
This is just one suggestion that is simple to make. As others have suggested, using packed ints to represent your bit array will likely also improve what is probably the worst-case test for your code: a get_c of the most significant bit, with either A or B (but not both) being 1 in all the other positions, requiring a scan of every bit position down to the least significant bit to determine the carry. If you were using packed ints for your bits, only approximately 1/32 as many operations would be necessary (assuming 32-bit ints). Using packed ints would, however, be somewhat more complicated than your simple boolean array (which is really just an array of bytes).
C/C++ Bit Array or Bit Vector
Convert bit array to uint or similar packed value
http://en.wikipedia.org/wiki/Bit_array
There are lots of other examples on Stackoverflow and the net for using ints as if they were bit arrays.
Here is a solution that looks a bit like your algorithm. I demonstrate it with bytes, but of course you can easily optimize the algorithm using 32-bit words (I suppose your machine has 64-bit arithmetic nowadays).
void setbit(unsigned char *x, unsigned int idx, unsigned int bit)
{
    unsigned int digitIndex = idx >> 3;
    unsigned int bitIndex = idx & 7;
    if (((x[digitIndex] >> bitIndex) & 1) ^ bit)
        x[digitIndex] ^= (1u << bitIndex);
}

unsigned int getbit(unsigned char *a, unsigned char *b, unsigned int idx)
{
    unsigned int digitIndex = idx >> 3;
    unsigned int bitIndex = idx & 7;
    unsigned int c = a[digitIndex] + b[digitIndex];
    unsigned int bit = (c >> bitIndex) & 1;
    /* a zero bit on the right will absorb a carry, let's check if any */
    if ((c ^ (c + 1)) >> bitIndex)
    {
        /* none, we must check if there's a carry propagating from the right digits */
        while (digitIndex-- > 0)
        {
            c = a[digitIndex] + b[digitIndex];
            if (c > 255) return bit ^ 1;  /* yes, a carry */
            if (c < 255) return bit;      /* no carry possible, a zero bit will absorb it */
        }
    }
    return bit;
}
If you find anything cryptic, just ask.
Edit: oops, I inverted the zero bit condition...
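A quick usage sketch (mine, not part of the answer), using the functions above: with A = 5 and B = 3, C = A + B = 8 = 0b1000, so only bit 3 of C should be reported as set.

#include <stdio.h>

int main(void)
{
    unsigned char a[4] = {0}, b[4] = {0};
    setbit(a, 0, 1); setbit(a, 2, 1);   /* A = 0b101 = 5 */
    setbit(b, 0, 1); setbit(b, 1, 1);   /* B = 0b011 = 3 */
    for (unsigned int i = 0; i < 5; i++)
        printf("C[%u] = %u\n", i, getbit(a, b, i));  /* prints 0 0 0 1 0 */
    return 0;
}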

Obfuscating an ID

I'm looking for a way to encrypt/obfuscate an integer ID into another integer. More precisely, I need a function int F(int x), so that
x<->F(x) is one-to-one correspondence (if x != y, F(x) != F(y))
given F(x), it's easy to find out x - so F is not a hash function
given x and F(x) it's hard/impossible to find out F(y), something like x ^ 0x1234 won't work
For clarity, I'm not looking for a strong encryption solution, it's only obfuscation. Imagine a web application with URLs like example.com/profile/1, example.com/profile/2, etc. The profiles themselves are not secret, but I'd like to prevent casual voyeurs from viewing/fetching all profiles one after another, so I'd rather hide them behind something like example.com/profile/23423, example.com/profile/80980234, etc. Although database-stored tokens can do the job quite easily, I'm curious whether there's some simple math available for this.
One important requirement I wasn't clear about is that the results should look "random": given a sequence x, x+1, ..., x+n, the values F(x), F(x+1), ..., F(x+n) shouldn't form a progression of any kind.
Obfuscate it with some combination of 2 or 3 simple methods:
XOR
shuffle individual bits
convert to modular representation (D.Knuth, Vol. 2, Chapter 4.3.2)
choose 32 (or 64) overlapping subsets of bits and XOR bits in each subset (parity bits of subsets)
represent it in a variable-length numeric system and shuffle the digits
choose a pair of odd integers x and y that are multiplicative inverses of each other (modulo 2^32), then multiply by x to obfuscate and multiply by y to restore, all multiplications modulo 2^32 (source: "A practical use of multiplicative inverses" by Eric Lippert)
The variable-length numeric system method does not obey your "progression" requirement on its own. It always produces short arithmetic progressions. But when combined with some other method, it gives good results.
The same is true for the modular representation method.
Here is a C++ code example for 3 of these methods. The shuffle-bits example may use different masks and distances to be more unpredictable. The other 2 examples are good for small numbers (just to give the idea); they should be extended to obfuscate all integer values properly.
// *** Numeric system base: (4, 3, 5) -> (5, 3, 4)
// In real life all the bases multiplied should be near 2^32
unsigned y = x/15 + ((x/5)%3)*4 + (x%5)*12; // obfuscate
unsigned z = y/12 + ((y/4)%3)*5 + (y%4)*15; // restore
// *** Shuffle bits (method used here is described in D.Knuth's vol.4a chapter 7.1.3)
const unsigned mask1 = 0x00550055; const unsigned d1 = 7;
const unsigned mask2 = 0x0000cccc; const unsigned d2 = 14;
// Obfuscate
unsigned t = (x ^ (x >> d1)) & mask1;
unsigned u = x ^ t ^ (t << d1);
t = (u ^ (u >> d2)) & mask2;
y = u ^ t ^ (t << d2);
// Restore
t = (y ^ (y >> d2)) & mask2;
u = y ^ t ^ (t << d2);
t = (u ^ (u >> d1)) & mask1;
z = u ^ t ^ (t << d1);
// *** Subset parity
t = (x ^ (x >> 1)) & 0x44444444;
u = (x ^ (x << 2)) & 0xcccccccc;
y = ((x & 0x88888888) >> 3) | (t >> 1) | u; // obfuscate
t = ((y & 0x11111111) << 3) | (((y & 0x11111111) << 2) ^ ((y & 0x22222222) << 1));
z = t | ((t >> 2) ^ ((y >> 2) & 0x33333333)); // restore
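The multiplicative-inverse method (the last item in the list) is even shorter. A sketch of mine, assuming 32-bit unsigned arithmetic; the pair below is only for illustration (0xAAAAAAAB * 3 = 2^33 + 1 ≡ 1 mod 2^32, so the two constants invert each other). In practice you would pick a large random odd constant and compute its inverse, since a small restoring multiplier like 3 is trivially guessable:

#include <stdint.h>

uint32_t obfuscate(uint32_t id) { return id * 0xAAAAAAABu; }  /* multiply by x */
uint32_t restore(uint32_t ob)   { return ob * 3u; }           /* multiply by y */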
You want the transformation to be reversible, and not obvious. That sounds like an encryption that takes a number in a given range and produces a different number in the same range. If your range is 64 bit numbers, then use DES. If your range is 128 bit numbers then use AES. If you want a different range, then your best bet is probably Hasty Pudding cipher, which is designed to cope with different block sizes and with number ranges that do not fit neatly into a block, such as 100,000 to 999,999.
Obfuscation is not really sufficient in terms of security.
However, if you are trying to thwart the casual onlooker, I'd recommend a combination of two methods:
A private key that you combine with the id by xor'ing them together
Rotating the bits by a certain amount both before and after the key has been applied
Here is an example (using pseudo code):
def F(x)
    x = x XOR 31415927 # XOR x with a secret key
    x = rotl(x, 5)     # rotate the bits left 5 places
    x = x XOR 31415927 # XOR x with the secret key again
    x = rotr(x, 5)     # rotate the bits right 5 places
    x = x XOR 31415927 # XOR x with the secret key again
    return x           # return the value
end
I haven't tested it, but I think this is reversible, should be fast, and not too easy to tease out the method.
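For what it's worth, here is a C rendering of that pseudocode (a sketch assuming 32-bit unsigned values; rotl/rotr are the usual rotate-by-constant idioms). It is indeed reversible; in fact, as written the function is its own inverse: the middle XORs and the opposite rotations cancel pairwise when it is applied twice, so the same routine both encodes and decodes.

#include <stdint.h>

static uint32_t rotl(uint32_t x, unsigned int n) { return (x << n) | (x >> (32 - n)); }
static uint32_t rotr(uint32_t x, unsigned int n) { return (x >> n) | (x << (32 - n)); }

uint32_t F(uint32_t x)
{
    x ^= 31415927u;     /* XOR x with a secret key */
    x = rotl(x, 5);     /* rotate the bits left 5 places */
    x ^= 31415927u;     /* XOR with the key again */
    x = rotr(x, 5);     /* rotate the bits right 5 places */
    x ^= 31415927u;     /* XOR with the key again */
    return x;           /* F(F(x)) == x, so F also decodes */
}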
I found this particular piece of Python/PHP code very useful:
https://github.com/marekweb/opaque-id
I wrote some JS code using some of the ideas in this thread:
const BITS = 32n;
const MAX = 4294967295n;
const COPRIME = 65521n;
const INVERSE = 2166657316n;
const ROT = 6n;
const XOR1 = 10296065n;
const XOR2 = 2426476569n;
function rotRight(n, bits, size) {
    const mask = (1n << bits) - 1n;
    // console.log('mask', mask.toString(2).padStart(Number(size), '0'));
    const left = n & mask;
    const right = n >> bits;
    return (left << (size - bits)) | right;
}
const pipe = fns => fns.reduce((f, g) => (...args) => g(f(...args)));
function build(...fns) {
    const enc = fns.map(f => Array.isArray(f) ? f[0] : f);
    const dec = fns.map(f => Array.isArray(f) ? f[1] : f).reverse();
    return [
        pipe(enc),
        pipe(dec),
    ];
}
[exports.encode, exports.decode] = build(
    [BigInt, Number],
    [i => (i * COPRIME) % MAX, i => (i * INVERSE) % MAX],
    x => x ^ XOR1,
    [x => rotRight(x, ROT, BITS), x => rotRight(x, BITS - ROT, BITS)],
    x => x ^ XOR2,
);
It produces some nice results like:
1 1352888202n 1 'mdh37u'
2 480471946n 2 '7y26iy'
3 3634587530n 3 '1o3xtoq'
4 2225300362n 4 '10svwqy'
5 1084456843n 5 'hxno97'
6 212040587n 6 '3i8rkb'
7 3366156171n 7 '1jo4eq3'
8 3030610827n 8 '1e4cia3'
9 1889750920n 9 'v93x54'
10 1017334664n 10 'gtp0g8'
11 4171450248n 11 '1wzknm0'
12 2762163080n 12 '19oiqo8'
13 1621319561n 13 'qtai6h'
14 748903305n 14 'cdvlhl'
15 3903018889n 15 '1sjr8nd'
16 3567473545n 16 '1mzzc7d'
17 2426613641n 17 '144qr2h'
18 1554197390n 18 'ppbudq'
19 413345678n 19 '6u3fke'
20 3299025806n 20 '1ik5klq'
21 2158182286n 21 'zoxc3y'
22 1285766031n 22 'l9iff3'
23 144914319n 23 '2ea0lr'
24 4104336271n 24 '1vvm64v'
25 2963476367n 25 '1d0dkzz'
26 2091060108n 26 'ykyob0'
27 950208396n 27 'fpq9ho'
28 3835888524n 28 '1rfsej0'
29 2695045004n 29 '18kk618'
30 1822628749n 30 'u559cd'
31 681777037n 31 'b9wuj1'
32 346231693n 32 '5q4y31'
Testing with:
const {encode,decode} = require('./obfuscate')
for(let i = 1; i <= 1000; ++i) {
const j = encode(i);
const k = decode(j);
console.log(i, j, k, j.toString(36));
}
XOR1 and XOR2 are just random numbers between 0 and MAX. MAX is 2**32-1; you should set this to whatever you think your highest ID will be.
COPRIME is a number that's coprime w/ MAX. I think prime numbers themselves are coprime with every other number (except multiples of themselves).
INVERSE is the tricky one to figure out. The blog posts I found don't give a straight answer, but WolframAlpha can figure it out for you. Basically, just solve the equation (COPRIME * x) % MAX = 1 for x.
The build function is something I created to make it easier to create these encode/decode pipelines. You can feed it as many operations as you want, as [encode, decode] pairs. These functions have to be equal and opposite. The XOR functions are their own inverses, so you don't need a pair there.
Here's another fun involution:
function mixHalves(n) {
    const mask = 2n**12n - 1n;
    const right = n & mask;
    const left = n >> 12n;
    const mix = left ^ right;
    return (mix << 12n) | right;
}
(assumes 24-bit integers -- just change the numbers for any other size)
Do anything with the bits of the ID that won't destroy them. For example:
rotate the value
use lookup to replace certain parts of the value
xor with some value
swap bits
swap bytes
mirror the whole value
mirror a part of the value
... use your imagination
For decryption, do all that in reverse order.
Create a program that will 'encrypt' some interesting values for you and put them in a table you can examine. Have the same program test your encryption/decryption routine with the whole set of values that you want to have in your system.
Add operations from the list above to your routines until the numbers look properly mangled to you.
For anything else, get a copy of The Book.
I wrote an article on secure permutations with block ciphers, which ought to fulfil your requirements as stated.
I'd suggest, though, that if you want hard to guess identifiers, you should just use them in the first place: generate UUIDs, and use those as the primary key for your records in the first place - there's no need to be able to convert to and from a 'real' ID.
Not sure how "hard" you need it to be, how fast, or how little memory to use. If you have no memory constraints you could make a list of all integers, shuffle them and use that list as a mapping. However, even for a 4 byte integer you would need a lot of memory.
However, this could be made smaller: instead of mapping all integers, map only 1 or 2 bytes and apply that mapping to each group in the integer. Using 2-byte groups, an integer would be (group1)(group2), and you would map each group through the random map. But that means that if you only change group2, the mapping for group1 stays the same. This could be "fixed" by mapping different bits to each group.
For example, group2 could be bits (14, 12, 10, 8, 6, 4, 2, 0), so adding 1 would change both group1 and group2.
Still, this is only security by obscurity: anyone who can feed numbers into your function (even if you keep the function secret) could fairly easily figure it out.
Generate a private symmetric key for use in your application, and encrypt your integer with it. This will satisfy all three requirements, including the hardest #3: one would need to guess your key in order to break your scheme.
What you're describing here seems to be the opposite of a one-way function: it's easy to invert but super difficult to apply. One option would be to use a standard, off-the-shelf public-key encryption algorithm where you fix a (secret, randomly-chosen) public key that you keep a secret and a private key that you share with the world. That way, your function F(x) would be the encryption of x using the public key. You could then easily decrypt F(x) back to x by using the private decryption key. Notice that the roles of the public and private key are reversed here - you give out the private key to everyone so that they can decrypt the function, but keep the public key secret on your server. That way:
The function is a bijection, so it's invertible.
Given F(x), x is efficiently computable.
Given x and F(x), it is extremely difficult to compute F(y) from y, since without the public key (assuming you use a cryptographically strong encryption scheme) there is no feasible way to encrypt the data, even if the private decryption key is known.
This has many advantages. First, you can rest assured that the crypto system is safe, since if you use a well-established algorithm like RSA then you don't need to worry about accidental insecurity. Second, there are already libraries out there to do this, so you don't need to code much up and can be immune to side-channel attacks. Finally, you can make it possible for anyone to go and invert F(x) without anyone actually being able to compute F(x).
One detail: you should definitely not just be using the standard int type here. Even with 64-bit integers, there are so few combinations possible that an attacker could just brute-force try inverting everything until they find the encryption F(y) for some y, even if they don't have the key. I would suggest using something like a 512-bit value, since even a science-fiction attack would not be able to brute-force this.
Hope this helps!
If XOR is acceptable for everything except inferring F(y) given x and F(x), then I think you can do it with a salt. First choose a secret one-way function, for example S(s) = MD5(secret ^ s). Then F(x) = (s, S(s) ^ x), where s is chosen randomly. I wrote that as a tuple, but you can combine the two parts into one integer, e.g. F(x) = 10000*s + (S(s) ^ x). Decryption extracts the salt s again and computes F'(F(x)) = S(extracted s) ^ (extracted S(s) ^ x). Given x and F(x), you can see s (though it is slightly obfuscated) and you can infer S(s), but for some other user y with a different random salt t, someone knowing F(x) can't find S(t).

What's the fastest algorithm to divide an integer by 3 without using a division instruction? [duplicate]

int x = n / 3; // <-- make this faster
// for instance
int a = n * 3; // <-- normal integer multiplication
int b = (n << 1) + n; // <-- potentially faster multiplication
The guy who said "leave it to the compiler" was right, but I don't have the "reputation" to mod him up or comment. I asked gcc to compile int test(int a) { return a / 3; } for an ix86 and then disassembled the output. Just for academic interest, what it's doing is roughly multiplying by 0x55555556 and then taking the top 32 bits of the 64-bit result. You can demonstrate this to yourself with, e.g.:
$ ruby -e 'puts(60000 * 0x55555556 >> 32)'
20000
$ ruby -e 'puts(72 * 0x55555556 >> 32)'
24
$
The Wikipedia page on Montgomery division is hard to read, but fortunately the compiler guys have done it so you don't have to.
This is the fastest approach, as the compiler will optimize it if it can, depending on the target processor.
int a;
int b;
a = some value;
b = a / 3;
There is a faster way to do it if you know the range of the values. For example, if you are dividing a signed integer by 3 and you know the value to be divided is in the range 0 to 768, then you can multiply it by a factor and shift right by a power of 2, choosing the factor as that power of 2 divided by 3.
e.g.
Range 0 -> 768
You could use a shift of 10 bits, which corresponds to multiplying by 1024. Since you want to divide by 3, your multiplier should be 1024 / 3 = 341,
so you can now use (x * 341) >> 10.
(Make sure the shift is a signed shift if you are using signed integers, and that it is actually a shift and not a bit roll. Note also that truncating the multiplier to 341 means the result can come out one less than the true quotient for some inputs, e.g. (3 * 341) >> 10 is 0; for averaging color channels, as below, that is harmless.)
This will effectively divide the value by 3, and runs at about 1.6 times the speed of a natural divide by 3 on a standard x86 / x64 CPU.
Of course the only reason you can make this optimization when the compiler can't is that the compiler does not know the maximum range of x and therefore cannot make this determination, but you as the programmer can.
Sometimes it may even be beneficial to move the value into a larger type and then do the same thing; i.e. if you have an int of full range, you could make it a 64-bit value and then do the multiply and shift instead of dividing by 3.
I had to do this recently to speed up image processing: I needed to find the average of 3 color channels, each with a byte range (0 - 255): red, green and blue.
At first I simply used:
avg = (r + g + b) / 3;
(So r + g + b has a maximum of 765 and a minimum of 0, because each channel is a byte 0 - 255.)
After millions of iterations the entire operation took 36 milliseconds.
I changed the line to:
avg = (r + g + b) * 341 >> 10;
And that took it down to 22 milliseconds. It's amazing what can be done with a little ingenuity.
This speed-up occurred in C# even though I had optimisations turned on and was running the program natively, without debugging info and not through the IDE.
See How To Divide By 3 for an extended discussion of more efficiently dividing by 3, focused on doing FPGA arithmetic operations.
Also relevant:
Optimizing integer divisions with Multiply Shift in C#
Depending on your platform and on your C compiler, a native solution like just using
y = x / 3
can be fast, or it can be awfully slow (even if division is done entirely in hardware; if it is done using a DIV instruction, that instruction is about 3 to 4 times slower than a multiplication on modern CPUs). Very good C compilers with optimization flags turned on may optimize this operation, but if you want to be sure, you are better off optimizing it yourself.
For optimization it is important to have integer numbers of a known size. In C, int has no known size (it can vary by platform and compiler!), so you are better off using C99 fixed-size integers. The code below assumes that you want to divide an unsigned 32-bit integer by three and that your C compiler knows about 64-bit integer numbers (NOTE: even on a 32-bit CPU architecture most C compilers can handle 64-bit integers just fine):
#include <stdint.h>

static inline uint32_t divby3(uint32_t divideMe)
{
    return (uint32_t)(((uint64_t)0xAAAAAAABULL * divideMe) >> 33);
}
As crazy as this might sound, the method above really does divide by 3. All it needs is a single 64-bit multiplication and a shift (like I said, multiplications might be 3 to 4 times faster than divisions on your CPU). In a 64-bit application this code will be a lot faster than in a 32-bit application (in a 32-bit application, multiplying two 64-bit numbers takes 3 multiplications and 3 additions on 32-bit values); however, it might still be faster than a division on a 32-bit machine.
On the other hand, if your compiler is a very good one and knows the trick of optimizing integer division by a constant (the latest GCC does, I just checked), it will generate the code above anyway (GCC will create exactly this code for "/3" if you enable at least optimization level 1). For other compilers... you cannot rely on or expect them to use tricks like that, even though this method is very well documented and mentioned everywhere on the Internet.
The problem is that this only works for constant divisors, not variable ones. You always need to know the magic number (here 0xAAAAAAAB) and the correct operations after the multiplication (shifts and/or additions in most cases); both are different depending on the number you want to divide by, and both take too much CPU time to calculate on the fly (that would be slower than hardware division). However, it's easy for a compiler to calculate these at compile time (where a second more or less of compile time hardly plays a role).
For 64 bit numbers:
uint64_t divBy3(uint64_t x)
{
    return x * 12297829382473034411ULL;
}
However, this isn't the truncating integer division you might expect.
It works correctly if the number is already divisible by 3, but it returns a huge number if it isn't.
For example, if you run it on 11, it returns 6148914691236517209. This looks like garbage, but it's in fact the correct answer: multiply it by 3 (modulo 2^64) and you get back 11!
If you are looking for the truncating division, then just use the / operator. I highly doubt you can get much faster than that.
Theory:
64-bit unsigned arithmetic is arithmetic modulo 2^64.
This means that for each integer coprime with the modulus 2^64 (essentially all odd numbers) there exists a multiplicative inverse, which you can multiply by instead of dividing. This magic number can be obtained by solving the equation 3*x + 2^64*y = 1 using the extended Euclidean algorithm.
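For a power-of-two modulus you don't even need the extended Euclidean algorithm: Newton/Hensel iteration doubles the number of correct low-order bits each step. A sketch of mine (the helper name is hypothetical):

#include <stdint.h>

/* Multiplicative inverse of an odd number modulo 2^64. */
uint64_t inverse_mod_2pow64(uint64_t a)
{
    uint64_t x = a;              /* a * a ≡ 1 (mod 8): 3 correct bits */
    for (int i = 0; i < 5; i++)  /* 3 -> 6 -> 12 -> 24 -> 48 -> 96 bits */
        x *= 2 - a * x;
    return x;                    /* inverse_mod_2pow64(3) == 12297829382473034411ULL */
}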
What if you really don't want to multiply or divide? Here is an approximation I just invented. It works because (x/3) = (x/4) + (x/12). But since (x/12) = (x/4) / 3, we just have to repeat the process until it's good enough.
#include <stdio.h>

int main(void)
{
    int n = 1000;
    int a, b;
    a = n >> 2;
    b = (a >> 2);
    a += b;
    b = (b >> 2);
    a += b;
    b = (b >> 2);
    a += b;
    b = (b >> 2);
    a += b;
    printf("a=%d\n", a);
    return 0;
}
The result is 330. It could be made more accurate using b = ((b+2)>>2); to account for rounding.
If you are allowed to multiply, just pick a suitable approximation for (1/3), with a power-of-2 divisor. For example, n * (1/3) ~= n * 43 / 128 = (n * 43) >> 7.
This technique is most useful in Indiana.
I don't know if it's faster but if you want to use a bitwise operator to perform binary division you can use the shift and subtract method described at this page:
Set quotient to 0
Align leftmost digits in dividend and divisor
Repeat:
If that portion of the dividend above the divisor is greater than or equal to the divisor:
Then subtract divisor from that portion of the dividend and
Concatenate 1 to the right-hand end of the quotient
Else concatenate 0 to the right-hand end of the quotient
Shift the divisor one place right
Until dividend is less than the divisor:
quotient is correct, dividend is remainder
STOP
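A minimal C sketch of that procedure for 32-bit unsigned operands (written from the description above; it processes the dividend one bit at a time, which is what the align-and-shift wording amounts to):

unsigned int divide(unsigned int dividend, unsigned int divisor)
{
    /* divisor must be nonzero */
    unsigned int quotient = 0, remainder = 0;
    for (int i = 31; i >= 0; i--)
    {
        remainder = (remainder << 1) | ((dividend >> i) & 1); /* bring down next bit */
        quotient <<= 1;
        if (remainder >= divisor)       /* divisor fits: subtract, emit a 1 */
        {
            remainder -= divisor;
            quotient |= 1;
        }
    }
    return quotient;   /* remainder now holds dividend % divisor */
}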
For really large integer division (e.g. numbers bigger than 64 bits) you can represent your number as an int[] and perform division quite fast by taking two digits at a time and dividing them by 3. The remainder becomes part of the next two digits, and so forth.
e.g. for 11004 / 3 you say
11/3 = 3, remainder = 2 (from 11 - 3*3)
20/3 = 6, remainder = 2 (from 20 - 6*3)
20/3 = 6, remainder = 2 (from 20 - 6*3)
24/3 = 8, remainder = 0
hence the result 3668
internal static List<int> Div3(int[] a)
{
    int remainder = 0;
    var res = new List<int>();
    for (int i = 0; i < a.Length; i++)
    {
        var val = remainder + a[i];
        var div = val / 3;
        remainder = 10 * (val % 3);
        if (div > 9)
        {
            res.Add(div / 10);
            res.Add(div % 10);
        }
        else
            res.Add(div);
    }
    if (res[0] == 0) res.RemoveAt(0);
    return res;
}
If you really want to, see this article on integer division, but it only has academic merit... it would be interesting to see an application that actually benefited from that kind of trick.
Easy computation ... at most n iterations where n is your number of bits:
uint8_t divideby3(uint8_t x)
{
    /* x = 4q + r = 3q + (q + r), so x/3 = q + (q + r)/3:
       accumulate q = x >> 2 and iterate on the leftover (x >> 2) + (x & 3). */
    uint8_t answer = 0;
    while (x > 3)
    {
        answer += x >> 2;
        x = (x >> 2) + (x & 3);
    }
    if (x == 3) answer++;   /* a leftover of exactly 3 contributes one more */
    return answer;
}
A lookup table approach would also be faster on some architectures:
uint8_t DivBy3LU(uint8_t u8Operand)
{
    static const uint8_t ai8Div3[256] = {0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, /* ... */};
    return ai8Div3[u8Operand];
}
