Integer division without using the / or * operator - algorithm

I am going through an algorithms and data structures textbook and came across this question:
1-28. Write a function to perform integer division without using
either the / or * operators. Find a fast way to do it.
How can we come up with a fast way to do it?

I like this solution: https://stackoverflow.com/a/34506599/1008519, but I find it somewhat hard to reason about (especially the |-part). This solution makes a little more sense in my head:
var divide = function (dividend, divisor) {
  // Handle 0 divisor
  if (divisor === 0) {
    return NaN;
  }
  // Handle negative numbers
  var isNegative = false;
  if (dividend < 0) {
    // Change sign
    dividend = ~dividend + 1;
    isNegative = !isNegative;
  }
  if (divisor < 0) {
    // Change sign
    divisor = ~divisor + 1;
    isNegative = !isNegative;
  }
  /**
   * Main algorithm
   */
  var result = 1;
  var denominator = divisor;
  // Double denominator value with bitwise shift until bigger than dividend
  while (dividend > denominator) {
    denominator <<= 1;
    result <<= 1;
  }
  // Subtract divisor value until denominator is smaller than dividend
  while (denominator > dividend) {
    denominator -= divisor;
    result -= 1;
  }
  // If one of dividend or divisor was negative, change sign of result
  if (isNegative) {
    result = ~result + 1;
  }
  return result;
}
- We initialize our result to 1 (since we are going to double our denominator until it is bigger than the dividend)
- Double our denominator (with bitwise shifts) until it is bigger than the dividend
- Since our denominator is now bigger than our dividend, we can subtract the divisor from it (decrementing the result each time) until it is no bigger than the dividend
- Return the result, since the denominator is now as close to the dividend as the divisor allows
Here are some test runs:
console.log(divide(-16, 3)); // -5
console.log(divide(16, 3)); // 5
console.log(divide(16, 33)); // 0
console.log(divide(16, 0)); // NaN
console.log(divide(384, 15)); // 25
Here is a gist of the solution: https://gist.github.com/mlunoe/e34f14cff4d5c57dd90a5626266c4130

Typically, when an algorithms textbook says fast it means in terms of computational complexity: the number of operations relative to the number of bits of input. In general, constants don't matter, so if you have an input of n bits, whether the algorithm takes two operations per bit or a hundred operations per bit, we say it takes O(n) time.

To see why, suppose we have an algorithm that runs in O(n^2) time (polynomial; in this case, quadratic) and imagine an O(n) algorithm that does 100 operations per bit compared to our algorithm, which may do 1 operation per bit. Once the input size passes 100 bits, the polynomial algorithm starts to run really slow really quickly compared to the other one. Essentially, you can imagine two lines, y = 100x and y = x^2. Your teacher probably made you do an exercise in algebra (maybe it was calculus?) where you have to say which one is bigger as x approaches infinity; this is actually a key concept in divergence/convergence in calculus, if you have gotten there already in mathematics. Regardless, with a little algebra you can see that the graphs intersect at x = 100 and that y = x^2 is larger for all points where x is greater than 100.
As far as most textbooks are concerned, O(n lg n) or better is considered "fast". One example of a really bad algorithm to solve this problem would be the following:
crappyMultiplicationAlg(int a, int b)
    int product = 0
    while (b > 0)
        product = product + a
        b = b - 1
    return product
This algorithm basically uses "b" as a counter and just keeps adding "a" to some variable each time b counts down. To calculate how "fast" the algorithm is (in terms of algorithmic complexity) we count how many times its different components run. In this case, we only have a loop and some initialization (which is negligible here; ignore it). How many times does the loop run? You may be saying "Hey, guy! It only runs 'b' times! That may not even be half the input. That's way better than O(n) time!"
The trick here is that we are concerned with the size of the input in terms of storage... and we all (should) know that storing the number n takes about lg n bits. In other words, if we have x bits, we can store any (unsigned) number up to (2^x)-1. As a result, if we are using a standard 4-byte integer, that number could be up to 2^32 - 1, which is a number well into the billions, if my memory serves me right. So the loop runs b times, and b can be exponentially large relative to the number of bits used to write it down. If you don't trust me, run this algorithm with a number like 10,000,000 and see how long it takes. Still not convinced? Use a long and a number like 1,000,000,000.
Since you didn't ask for help with the algorithm, I'll leave it for you as a homework exercise (not trying to be a jerk; I am a total geek and love algorithm problems). If you need help with it, feel free to ask! I already typed up some hints by accident since I didn't read your question properly at first.
EDIT: I accidentally wrote a crappy multiplication algorithm. An example of a really terrible division algorithm (I cheated) would be:
AbsolutelyTerribleDivisionAlg(int a, int b)
    int quotient = 0
    while crappyMultiplicationAlg(b, quotient) < a
        quotient = quotient + 1
    return quotient
This algorithm is bad for a whole bunch of reasons, not the least of which is the use of my crappy multiplication algorithm (which will be called more than once even on a relatively "tame" run). Even if we were allowed to use the * operator, though, this is still a really bad algorithm, largely due to the same mechanism as in my awful multiplication algorithm.
PS There may be a fence-post error or two in my two algorithms... I posted them more for conceptual clarity than correctness. No matter how accurate they are at doing multiplication or division, though, never use them. They will give your laptop herpes and then cause it to burn up in a sulfur-y implosion of sadness.

I don't know what you mean by fast... and this seems like a basic question to test your thought process.
A simple function can use a counter and keep subtracting the divisor from the dividend until it goes below zero. This is an O(n) process, where n is the quotient.
int divide(int n, int d){
    int c = 0;
    while(1){
        n -= d;
        if(n >= 0)
            c++;
        else
            break;
    }
    return c;
}
Another way is to use the shift operator, which should do it in log(n) steps.
int divide(int n, int d){
    if(d <= 0)
        return -1;
    int k = d;
    int c = 0, index = 1;
    while(n > d){
        d <<= 1;
        index <<= 1;
    }
    while(1){
        if(k > n)
            return c;
        if(n >= d){
            c |= index;
            n -= d;
        }
        index >>= 1;
        d >>= 1;
    }
}
This is just like the long division we do in high-school mathematics.
PS: If you need a better explanation, just ask in the comments and I will provide one.
EDIT: edited the code wrt Erobrere's comment.

The simplest way to perform a division is by successive subtractions: subtract b from a as long as a remains positive. The quotient is the number of subtractions performed.
This can be pretty slow, as you will perform q subtractions and tests.
With a=28 and b=3,
28-3-3-3-3-3-3-3-3-3=1
the quotient is 9 and the remainder 1.
The next idea that comes to mind is to subtract b several times in a single go. We can try with 2b or 4b or 8b... as these multiples are easy to compute with additions. We can go as far as possible, as long as the multiple of b does not exceed a.
In the example, 2³·3 is the largest multiple which is possible:
28 ≥ 2³·3
So we subtract 8 times 3 in a single go, getting
28 - 2³·3 = 4
Now we continue to reduce the remainder with the lower multiples, 2², 2 and 1, when possible:
4 - 2²·3 < 0
4 - 2·3 < 0
4 - 1·3 = 1
Then our quotient is 2³ + 1 = 9 and the remainder is 1.
As you can check, every multiple of b is tried once only, and the total number of attempts equals the number of doublings required to reach a. This number is just the number of bits required to write q, which is much smaller than q itself.
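For concreteness, here is a minimal C++ sketch of this doubling scheme (my own illustration, not part of the original answer; it assumes non-negative inputs and a positive divisor):

#include <cstdio>
#include <utility>

// Returns {quotient, remainder} of a / b using only shifts, adds and compares.
std::pair<unsigned, unsigned> divmod(unsigned a, unsigned b) {
    unsigned q = 0;
    while (a >= b) {
        // Find the largest multiple b * 2^k that does not exceed a.
        unsigned mult = b, k = 1;
        while (a - mult >= mult) {   // i.e. mult * 2 <= a, without overflow
            mult <<= 1;
            k <<= 1;
        }
        a -= mult;   // subtract b * 2^k in a single go
        q += k;
    }
    return {q, a};
}

int main() {
    auto [q, r] = divmod(28, 3);
    std::printf("%u %u\n", q, r);   // prints "9 1"
    return 0;
}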

This is not the fastest solution, but I think it's readable enough and works:
def weird_div(dividend, divisor):
    if divisor == 0:
        return None
    dend = abs(dividend)
    dsor = abs(divisor)
    result = 0
    # This is the core algorithm, the rest is just for ensuring it works with negatives and 0
    while dend >= dsor:
        dend -= dsor
        result += 1
    # Let's handle negative numbers too
    if (dividend < 0 and divisor > 0) or (dividend > 0 and divisor < 0):
        return -result
    else:
        return result

# Let's test it:
print("49 divided by 7 is {}".format(weird_div(49,7)))
print("100 divided by 7 is {} (Discards the remainder) ".format(weird_div(100,7)))
print("-49 divided by 7 is {}".format(weird_div(-49,7)))
print("49 divided by -7 is {}".format(weird_div(49,-7)))
print("-49 divided by -7 is {}".format(weird_div(-49,-7)))
print("0 divided by 7 is {}".format(weird_div(0,7)))
print("49 divided by 0 is {}".format(weird_div(49,0)))
It prints the following results:
49 divided by 7 is 7
100 divided by 7 is 14 (Discards the remainder)
-49 divided by 7 is -7
49 divided by -7 is -7
-49 divided by -7 is 7
0 divided by 7 is 0
49 divided by 0 is None

unsigned bitdiv (unsigned a, unsigned d)
{
    unsigned res, c;
    // Double d until it exceeds a.
    for (c = d; c <= a; c <<= 1) {;}
    // Walk back down, accumulating one quotient bit per step.
    for (res = 0; (c >>= 1) >= d; ) {
        res <<= 1;
        if ( a >= c ) { res++; a -= c; }
    }
    return res;
}

The pseudo code:

count = 0
while (dividend >= divisor)
    dividend -= divisor
    count++
// count is your answer

Related

How to efficiently move the decimal point in a number until reaching some threshold

Say I have a double x, whose value is > 0 and < 1 million. I want to move its decimal point right until the value is > 1 million and < 10 million. So for example, 23.129385 becomes 2312938.5.
What I'm doing now is just multiplying by 10 until reaching the stopping condition. However, I'm performing this algorithm a lot, so if I can optimize it somehow it would be helpful. A constant-time solution, regardless of the magnitude of x, is obviously ideal, but so far I haven't been able to come up with one.
Some languages, such as C++ with frexp, expose the binary exponent as an integer very cheaply.
If you are so lucky, you can have a precomputed lookup table pow2to10 from the 2k possible binary exponents to the index of the power of 10 that it could be. Have another lookup table lookup10 for the powers of 10. Now your computation looks like:

int n;
frexp(x, &n);            // binary exponent of x (the mantissa is discarded)
int i = pow2to10[n];     // candidate power-of-10 index
if (lookup10[i + 1] <= x) {
    i++;                 // the binade straddles a power of 10; fix up
}
double result = x * lookup10[i];

Now instead of a series of multiplications, you have 3 array lookups, one comparison and one multiplication. If you are executing this in a tight loop, store pow2to10 as an array of short int, try to trim the ranges to what you need, and the lookups will be in a data structure that can fit in L1 cache.
If you are not so lucky, you can, instead of repeatedly multiplying, just compare against an array of known powers of 10. Be warned that in a high-level language the overhead of running instructions may beat the savings of comparison vs multiply. It may be tempting to do a binary search to do fewer lookups, but I would bet on linear search being better because it helps branch prediction.
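A minimal sketch of that table-scan idea (my own illustration, with made-up names such as bands; it assumes 1 <= x < 1e6):

#include <cstdio>

struct Band { double upper; double mult; };
// For x below upper (and at or above the previous band), multiply by mult.
static const Band bands[] = {
    {1e1, 1e6}, {1e2, 1e5}, {1e3, 1e4}, {1e4, 1e3}, {1e5, 1e2}, {1e6, 1e1},
};

double scale_to_millions(double x) {
    for (const Band& b : bands)   // linear scan: short and branch-predictor friendly
        if (x < b.upper)
            return x * b.mult;
    return x;                     // unreachable for inputs in range
}

int main() {
    std::printf("%f\n", scale_to_millions(23.129385));  // 2312938.5
    return 0;
}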
You don't say which language or which type of CPU, or how the numbers are distributed (e.g. if most of them are less than 5 but rarely a few are large, or..); however...
The fastest scalar version I can think of (assuming C and modern 80x86 CPUs maybe) is:
// x is between 1 and 999999
unsigned long x_int = x; // Integer comparisons are possibly faster
double multiplier;

if(x_int < 1000) {
    // x is between 1 and 999
    if(x_int < 100) {
        // x is between 1 and 99
        if(x_int < 10) {
            // x is between 1 and 9
            multiplier = 1000000;
        } else {
            // x is between 10 and 99
            multiplier = 100000;
        }
    } else {
        // x is between 100 and 999
        multiplier = 10000;
    }
} else {
    // x is between 1000 and 999999
    if(x_int < 10000) {
        // x is between 1000 and 9999
        multiplier = 1000;
    } else {
        // x is between 10000 and 999999
        if(x_int < 100000) {
            // x is between 10000 and 99999
            multiplier = 100;
        } else {
            // x is between 100000 and 999999
            multiplier = 10;
        }
    }
}
x *= multiplier;
This adds up to 2 or 3 branches and one multiplication per value. Note: for modern 80x86 the final branch can be replaced with a CMOVcc instruction.
If you're doing this a lot; then the next step would be to try to use SIMD to do multiple values at the same time (followed by multi-threading/multi-CPU).

How to turn integers into Fibonacci coding efficiently?

The Fibonacci sequence is obtained by starting with 0 and 1 and then adding the two last numbers to get the next one.
All positive integers can be represented as a sum of a set of Fibonacci numbers without repetition. For example: 13 can be the sum of the sets {13}, {5,8} or {2,3,8}. But, as we have seen, some numbers have more than one set whose sum is the number. If we add the constraint that the sets cannot have two consecutive Fibonacci numbers, then we have a unique representation for each number.
We will use a binary sequence (just zeros and ones) to do that. For example, 17 = 1 + 3 + 13, so 17 = 100101.
I want to turn some integers into this representation, but the integers may be very big. How do I do this efficiently?
The problem itself is simple: you always pick the largest Fibonacci number less than or equal to the remainder. You can ignore the constraint on consecutive numbers (if you ever needed two consecutive ones, the next Fibonacci number is their sum, so you would have picked that one instead).
So the problem remains how to quickly find the largest Fibonacci number less than or equal to some number X.
There's a known trick that starting with the matrix (call it M)
1 1
1 0
You can compute Fibonacci numbers by matrix multiplication (the xth Fibonacci number can be read off M^x). More details here: https://www.nayuki.io/page/fast-fibonacci-algorithms . The end result is that you can compute the number you're looking for in O(log N) matrix multiplications.
You'll need large-number arithmetic (multiplications and additions) if the values don't fit into existing types.
Also, store the matrices corresponding to powers of two the first time you compute them, since you'll need them again for the results.
Overall this should be O((log N)^2) large-number multiplications/additions.
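As a concrete illustration of the O(log N) idea, here is a sketch using the fast-doubling identities F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which the linked page derives from the matrix form (my own illustration; with uint64_t it only reaches F(93), beyond that you need the large-number arithmetic mentioned above):

#include <cstdint>
#include <cstdio>
#include <utility>

// Returns {F(n), F(n+1)} in O(log n) steps.
static std::pair<uint64_t, uint64_t> fib_pair(uint64_t n) {
    if (n == 0) return {0, 1};
    auto [a, b] = fib_pair(n >> 1);   // a = F(k), b = F(k+1), where k = n/2
    uint64_t c = a * (2 * b - a);     // F(2k)
    uint64_t d = a * a + b * b;       // F(2k+1)
    if (n & 1) return {d, c + d};
    return {c, d};
}

int main() {
    std::printf("%llu\n", (unsigned long long)fib_pair(90).first);  // F(90)
    return 0;
}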
First I want to tell you that I really liked this question. I didn't know that all positive integers can be represented as a sum of a set of Fibonacci numbers without repetition; I saw the proof by induction and it was awesome.
To answer your question, I think we have to figure out how the representation is created. I think the easy way is to repeatedly find the closest Fibonacci number below the remainder.
For example, if we want to represent 40:
We have Fib(9) = 34 and Fib(10) = 55, so the first element in the representation is Fib(9).
Since 40 - Fib(9) = 6 and (Fib(5) = 5 and Fib(6) = 8), the next element is Fib(5). So we have 40 = Fib(9) + Fib(5) + Fib(2).
Allow me to write this in C#
class Program
{
    static void Main(string[] args)
    {
        List<int> fibPresentation = new List<int>();
        int numberToPresent = Convert.ToInt32(Console.ReadLine());
        while (numberToPresent > 0)
        {
            int k = 1;
            while (CalculateFib(k) <= numberToPresent)
            {
                k++;
            }
            numberToPresent = numberToPresent - CalculateFib(k - 1);
            fibPresentation.Add(k - 1);
        }
    }

    static int CalculateFib(int n)
    {
        if (n == 1)
            return 1;
        int a = 0;
        int b = 1;
        // In n steps compute Fibonacci sequence iteratively.
        for (int i = 0; i < n; i++)
        {
            int temp = a;
            a = b;
            b = temp + b;
        }
        return a;
    }
}
Your result will be in fibPresentation
This encoding is more accurately called the "Zeckendorf representation": see https://en.wikipedia.org/wiki/Fibonacci_coding
A greedy approach works (see https://en.wikipedia.org/wiki/Zeckendorf%27s_theorem) and here's some Python code that converts a number to this representation. It uses the first 100 Fibonacci numbers and works correctly for all inputs up to 927372692193078999175 (and incorrectly for any larger inputs).
fibs = [0, 1]
for _ in range(100):
    fibs.append(fibs[-2] + fibs[-1])

def zeck(n):
    i = len(fibs) - 1
    r = 0
    while n:
        if fibs[i] <= n:
            r |= 1 << (i - 2)   # bit 0 corresponds to fibs[2] = 1
            n -= fibs[i]
        i -= 1
    return r

print(bin(zeck(17)))
The output is:
0b100101
As the greedy approach seems to work, it suffices to be able to invert the relation N = F_n.
By the Binet formula, F_n = [φ^n/√5], where the brackets denote the nearest integer. Then with n = floor(log_φ(√5·N)) you are very close to the solution.
17 => n = floor(7.5599...) => F7 = 13
4 => n = floor(4.5531) => F4 = 3
1 => n = floor(1.6722) => F1 = 1
(I do not exclude that some n values can be off by one.)
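A quick sketch of that inversion (my own illustration; as the answer warns, the index can be off by one, so it should be verified against the actual Fibonacci numbers):

#include <cmath>
#include <cstdio>

// Index n of the largest Fibonacci number F(n) <= N, via Binet's formula.
// The floating-point estimate can be off by one, so callers should verify.
int fib_index(double N) {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    return (int)std::floor(std::log(std::sqrt(5.0) * N) / std::log(phi));
}

int main() {
    std::printf("%d %d %d\n", fib_index(17), fib_index(4), fib_index(1));  // 7 4 1
    return 0;
}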
I'm not sure whether this is efficient enough for you, but you could simply use backtracking to find a (the) valid representation.
I would try to start the backtracking steps by taking the biggest possible Fibonacci number and only switch to smaller ones if the consecutive-numbers or only-once constraint is violated.

Is this a good Primality Checking Solution?

I have written this code to check if a number is prime (for numbers up to 10^9+7).
Is this a good method?
What will be the time complexity?
What I have done is make an unordered_set which stores the prime numbers up to sqrt(n).
When checking if a number is prime, I first check if it is less than the max number in the table.
If it is, it is searched for in the table, so the complexity should be O(1) in this case.
If it is more, the number is put through a divisibility test against the numbers in the set of primes.
#include<iostream>
#include<set>
#include<math.h>
#include<unordered_set>
#define sqrt10e9 31623
using namespace std;

unordered_set<long long> primeSet = { 2, 3 }; //used for fast lookups

void genrate_prime_set(long range) //this generates prime numbers up to sqrt(10^9+7)
{
    bool flag;
    set<long long> tempPrimeSet = { 2, 3 }; //a temporary set is used for generation
    set<long long>::iterator j;
    for (int i = 3; i <= range; i = i + 2)
    {
        //cout << i << " ";
        flag = true;
        for (j = tempPrimeSet.begin(); *j * *j <= i; ++j)
        {
            if (i % (*j) == 0)
            {
                flag = false;
                break;
            }
        }
        if (flag)
        {
            primeSet.insert(i);
            tempPrimeSet.insert(i);
        }
    }
}

bool is_prime(long long i, unordered_set<long long> primeSet)
{
    bool flag = true;
    if(i <= sqrt10e9) //if number exists in the lookup table
        return primeSet.count(i);

    //if it doesn't, iterate through the table
    for (unordered_set<long long>::iterator j = primeSet.begin(); j != primeSet.end(); ++j)
    {
        if (*j * *j <= i && i % (*j) == 0)
        {
            flag = false;
            break;
        }
    }
    return flag;
}

int main()
{
    //long long testCases, a, b, kiwiCount;
    bool primeFlag = true;
    //unordered_set<int> primeNum;
    genrate_prime_set(sqrt10e9);
    cout << primeSet.size() << "\n";
    cout << is_prime(9999991, primeSet);
    return 0;
}
This doesn't strike me as a particularly efficient way to do the job at hand.
Although it probably won't make a big difference in the end, the efficient way to generate all the primes up to some specific limit is clearly to use a sieve--the sieve of Eratosthenes is simple and fast. There are a couple of modifications that can be faster, but for the small size you're dealing with, they're probably not worthwhile.
These normally produce their output in a more effective format than you're currently using as well. In particular, you typically just dedicate one bit to each possible prime (i.e., each odd number) and end up with it zeroed if the number is composite, and one if it's prime (you can, of course, reverse the sense if you prefer).
Since you only need one bit for each odd number from 3 to 31623, this requires only about 16 K bits, or about 2K bytes--a truly minuscule amount of memory by modern standards (especially: little enough to fit in L1 cache quite easily).
Since the bits are stored in order, it's also trivial to test only the factors up to the square root of the number you're testing, instead of testing against all the numbers in the table (including those greater than the square root, which is obviously a waste of time). This also optimizes memory access in case some of the table isn't in the cache (i.e., you access the data in order, making life as easy as possible for the hardware prefetcher).
If you wanted to optimize further, I'd consider just using the sieve to find all primes up to 10^9+7, and look up inputs. Whether this is a win will depend (heavily) upon the number of queries you can expect to receive. A quick check shows that a simple implementation of the sieve of Eratosthenes can find all primes up to 10^9 in about 17 seconds. After that, each query is (of course) essentially instantaneous (i.e., the cost of a single memory read). This does require around 120 megabytes of memory for the result of the sieve, which would once have been a major consideration, but (except on fairly limited systems) normally wouldn't be any more.
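For reference, a minimal sieve of Eratosthenes along those lines (my own sketch; it sieves all numbers rather than odd-only and uses one bool per number, so it skips the bit-packing refinement described above):

#include <cstdio>
#include <vector>

// Sieve of Eratosthenes: sieve[i] is true iff i is prime, for i <= limit.
std::vector<bool> sieve_primes(std::size_t limit) {
    std::vector<bool> sieve(limit + 1, true);
    sieve[0] = sieve[1] = false;
    for (std::size_t i = 2; i * i <= limit; ++i)
        if (sieve[i])
            for (std::size_t j = i * i; j <= limit; j += i)
                sieve[j] = false;
    return sieve;
}

int main() {
    std::vector<bool> isPrime = sieve_primes(31623);   // up to sqrt(10^9 + 7)
    std::printf("%d %d\n", (int)isPrime[9973], (int)isPrime[9975]);  // 1 0
    return 0;
}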
The very short answer: do research on the subject, starting with the term "Miller-Rabin".
The short answer is no:
- Looking for factors of a number is a poor way to check for primality
- Exhaustively searching through primes is a poor way to look for factors
  - Especially if you search through every prime, rather than just the ones less than or equal to the square root of the number
- Doing a primality test on each number is a poor way to generate a list of primes
Also, you should take in primeSet by reference rather than by copy, if it really needs to be a parameter.
Note: testing small primes to see if they divide a number is a useful first step of a primality test, but it should generally only be used for the smallest primes before switching to a better method.
No, it's not a very good way to determine if a number is prime. Here is pseudocode for a simple primality test that is sufficient for numbers in your range; I'll leave it to you to translate to C++:
function isPrime(n)
    d := 2
    while d * d <= n
        if n % d == 0
            return False
        d := d + 1
    return True
This works by trying every potential divisor up to the square root of the input number n; if no divisor has been found, then the input number cannot be composite, i.e. of the form n = p × q, because at least one of the two factors p or q must be less than or equal to the square root of n while the other is greater than or equal to it.
There are better ways to determine primality; for instance, after initially checking if the number is even (and hence prime only if n = 2), it is only necessary to test odd potential divisors, halving the amount of work necessary. If you have a list of primes up to the square root of n, you can use that list as trial divisors and make the process even faster. And there are other techniques for larger n.
But that should be enough to get you started. When you are ready for more, come back here and ask more questions.
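Since the answer leaves the translation as an exercise, here is one straightforward C++ version of the pseudocode above (my own translation):

#include <cstdio>

// Direct translation of the pseudocode: trial division up to sqrt(n).
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0)
            return false;
    return true;
}

int main() {
    std::printf("%d %d\n", isPrime(97), isPrime(91));  // prints "1 0" (91 = 7 * 13)
    return 0;
}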
I can only suggest a way to use a library function in Java to check the primality of a number. As for the other questions, I do not have any answers.
java.math.BigInteger.isProbablePrime(int certainty) returns true if this BigInteger is probably prime, and false if it's definitely composite. If certainty is ≤ 0, true is returned. You could try using it by rewriting your code in Java.
Parameters
certainty - a measure of the uncertainty that the caller is willing to tolerate: if the call returns true the probability that this BigInteger is prime exceeds (1 - 1/2^certainty). The execution time of this method is proportional to the value of this parameter.
Return Value
This method returns true if this BigInteger is probably prime, false if it's definitely composite.
Example
The following example shows the usage of the math.BigInteger.isProbablePrime() method:
import java.math.*;

public class BigIntegerDemo {
    public static void main(String[] args) {
        // create 3 BigInteger objects
        BigInteger bi1, bi2, bi3;
        // create 3 Boolean objects
        Boolean b1, b2, b3;
        // assign values to bi1, bi2
        bi1 = new BigInteger("7");
        bi2 = new BigInteger("9");
        // perform isProbablePrime on bi1, bi2
        b1 = bi1.isProbablePrime(1);
        b2 = bi2.isProbablePrime(1);
        b3 = bi2.isProbablePrime(-1);
        String str1 = bi1 + " is prime with certainty 1 is " + b1;
        String str2 = bi2 + " is prime with certainty 1 is " + b2;
        String str3 = bi2 + " is prime with certainty -1 is " + b3;
        // print b1, b2, b3 values
        System.out.println(str1);
        System.out.println(str2);
        System.out.println(str3);
    }
}
Output
7 is prime with certainty 1 is true
9 is prime with certainty 1 is false
9 is prime with certainty -1 is true

What's the fastest algorithm to divide an integer by 3 without using a division instruction? [duplicate]

int x = n / 3; // <-- make this faster
// for instance
int a = n * 3; // <-- normal integer multiplication
int b = (n << 1) + n; // <-- potentially faster multiplication
The guy who said "leave it to the compiler" was right, but I don't have the "reputation" to mod him up or comment. I asked gcc to compile int test(int a) { return a / 3; } for an ix86 and then disassembled the output. Just for academic interest, what it's doing is roughly multiplying by 0x55555556 and then taking the top 32 bits of the 64-bit result of that. You can demonstrate this to yourself with e.g.:
$ ruby -e 'puts(60000 * 0x55555556 >> 32)'
20000
$ ruby -e 'puts(72 * 0x55555556 >> 32)'
24
$
The Wikipedia page on Montgomery division is hard to read, but fortunately the compiler guys have done it so you don't have to.
This is the fastest, as the compiler will optimize it if it can, depending on the target processor:
int a;
int b;
a = some value;
b = a / 3;
There is a faster way to do it if you know the ranges of the values: for example, if you are dividing a signed integer by 3 and you know the range of the value to be divided is 0 to 768, then you can multiply it by a scaled reciprocal (a power of 2 divided by 3) and shift right by that power of 2.
eg.
Range 0 -> 768
You could use a shift of 10 bits, which scales by 1024; you want to divide by 3, so your multiplier should be 1024 / 3 = 341,
so you can now use (x * 341) >> 10.
(Make sure the shift is a signed shift if using signed integers, and also make sure the shift is an actual shift and not a bit roll.)
This will effectively divide the value by 3, and will run at about 1.6 times the speed of a natural divide by 3 on a standard x86 / x64 CPU. (Beware that 341/1024 slightly underestimates 1/3, so the result comes out one short of the true quotient for a few inputs near the top of the range, e.g. x = 765; (x * 683) >> 11 is exact over 0 to 768.)
Of course, the only reason you can make this optimization when the compiler can't is because the compiler does not know the maximum range of x and therefore cannot make this determination, but you as the programmer can.
Sometimes it may even be beneficial to move the value into a larger type and do the same thing, i.e. if you have an int of full range you could widen it to a 64-bit value and then do the multiply and shift instead of dividing by 3.
I had to do this recently to speed up image processing: I needed to find the average of 3 color channels, each color channel with a byte range (0 - 255): red, green and blue.
At first I just simply used:
avg = (r + g + b) / 3;
(So r + g + b has a maximum of 768 and a minimum of 0, because each channel is a byte 0 - 255.)
After millions of iterations, the entire operation took 36 milliseconds.
I changed the line to:
avg = (r + g + b) * 341 >> 10;
And that took it down to 22 milliseconds. It's amazing what can be done with a little ingenuity.
This speed-up occurred in C# even though I had optimisations turned on and was running the program natively without debugging info and not through the IDE.
See How To Divide By 3 for an extended discussion of more efficiently dividing by 3, focused on doing FPGA arithmetic operations.
Also relevant:
Optimizing integer divisions with Multiply Shift in C#
Depending on your platform and your C compiler, a native solution like just using
y = x / 3
can be fast or it can be awfully slow (even if division is done entirely in hardware: if it is done using a DIV instruction, this instruction is about 3 to 4 times slower than a multiplication on modern CPUs). Very good C compilers with optimization flags turned on may optimize this operation, but if you want to be sure, you are better off optimizing it yourself.
For optimization it is important to have integer numbers of a known size. In C, int has no known size (it can vary by platform and compiler!), so you are better off using C99 fixed-size integers. The code below assumes that you want to divide an unsigned 32-bit integer by three and that your C compiler knows about 64-bit integer numbers (NOTE: even on a 32-bit CPU architecture most C compilers can handle 64-bit integers just fine):
static inline uint32_t divby3 (
    uint32_t divideMe
) {
    return (uint32_t)(((uint64_t)0xAAAAAAABULL * divideMe) >> 33);
}
As crazy as this might sound, the method above really does divide by 3. All it needs is a single 64-bit multiplication and a shift (like I said, multiplications might be 3 to 4 times faster than divisions on your CPU). In a 64-bit application this code will be a lot faster than in a 32-bit application (in a 32-bit application, multiplying two 64-bit numbers takes 3 multiplications and 3 additions on 32-bit values); however, it might still be faster than a division on a 32-bit machine.
On the other hand, if your compiler is a very good one and knows how to optimize integer division by a constant (the latest GCC does, I just checked), it will generate the code above anyway (GCC will create exactly this code for "/3" if you enable at least optimization level 1). For other compilers... you cannot rely on or expect them to use tricks like that, even though this method is very well documented and mentioned everywhere on the Internet.
The problem is that this only works for constant divisors, not for variable ones. You always need to know the magic number (here 0xAAAAAAAB) and the correct operations after the multiplication (shifts and/or additions in most cases), and both are different depending on the number you want to divide by; computing them on the fly takes too much CPU time (that would be slower than hardware division). However, it's easy for a compiler to calculate them at compile time (where one second more or less of compile time hardly plays a role).
For 64 bit numbers:
uint64_t divBy3(uint64_t x)
{
    return x * 12297829382473034411ULL;
}
However, this isn't the truncating integer division you might expect.
It works correctly if the number is already divisible by 3, but it returns a huge number if it isn't.
For example, if you run it on 11, it returns 6148914691236517209. This looks like garbage, but it's in fact the correct answer: multiply it by 3 and you get back 11!
If you are looking for the truncating division, then just use the / operator. I highly doubt you can get much faster than that.
Theory:
64-bit unsigned arithmetic is modulo-2^64 arithmetic.
This means that for each integer that is coprime with the 2^64 modulus (essentially all odd numbers), there exists a multiplicative inverse which you can multiply by instead of dividing. This magic number can be obtained by solving the equation 3*x + 2^64*y = 1 using the Extended Euclidean Algorithm.
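A small sketch of computing that inverse (my own illustration; it uses Newton/Hensel lifting rather than the Extended Euclidean Algorithm mentioned above, since the lifting version is only a few lines):

#include <cstdint>
#include <cstdio>

// Multiplicative inverse of an odd a modulo 2^64.
// Start with x = a (correct to 3 bits, since a*a == 1 mod 8 for odd a);
// each Newton step x = x*(2 - a*x) doubles the number of correct bits.
uint64_t inverse_mod_2_64(uint64_t a) {
    uint64_t x = a;
    for (int i = 0; i < 5; ++i)   // 3 -> 6 -> 12 -> 24 -> 48 -> 96 bits
        x *= 2 - a * x;
    return x;
}

int main() {
    std::printf("%llu\n", (unsigned long long)inverse_mod_2_64(3));
    // prints 12297829382473034411, the magic number above
    return 0;
}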
What if you really don't want to multiply or divide? Here is an approximation I just invented. It works because (x/3) = (x/4) + (x/12). But since (x/12) = (x/4) / 3, we just have to repeat the process until it is good enough.
#include <stdio.h>

int main(void)
{
    int n = 1000;
    int a, b;
    a = n >> 2;      /* n/4    */
    b = (a >> 2);    /* n/16   */
    a += b;
    b = (b >> 2);    /* n/64   */
    a += b;
    b = (b >> 2);    /* n/256  */
    a += b;
    b = (b >> 2);    /* n/1024 */
    a += b;
    printf("a=%d\n", a);
    return 0;
}
The result is 330. It could be made more accurate using b = ((b+2)>>2); to account for rounding.
If you are allowed to multiply, just pick a suitable approximation for (1/3), with a power-of-2 divisor. For example, n * (1/3) ~= n * 43 / 128 = (n * 43) >> 7.
This technique is most useful in Indiana.
I don't know if it's faster, but if you want to use a bitwise operator to perform binary division you can use the shift-and-subtract method described at this page (a code sketch follows the list):
Set quotient to 0
Align leftmost digits in dividend and divisor
Repeat:
    If that portion of the dividend above the divisor is greater than or equal to the divisor:
        Then subtract divisor from that portion of the dividend and
        Concatenate 1 to the right hand end of the quotient
    Else concatenate 0 to the right hand end of the quotient
    Shift the divisor one place right
Until dividend is less than the divisor
quotient is correct, dividend is remainder
STOP
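Here is a minimal C++ sketch of that recipe (my own illustration), using bit shifts in place of digit alignment:

#include <cstdint>
#include <cstdio>

// Shift-and-subtract binary long division: returns the quotient, stores the remainder.
unsigned shift_subtract_div(unsigned dividend, unsigned divisor, unsigned* rem) {
    unsigned quotient = 0;
    uint64_t d = divisor;           // wider type so the alignment cannot overflow
    int shift = 0;
    while ((d << 1) <= dividend) {  // align: largest divisor * 2^shift <= dividend
        d <<= 1;
        ++shift;
    }
    for (; shift >= 0; --shift) {
        quotient <<= 1;             // concatenate a 0 bit...
        if (d <= dividend) {
            dividend -= (unsigned)d;  // ...or a 1 bit, subtracting the shifted divisor
            quotient |= 1;
        }
        d >>= 1;                    // shift the divisor one place right
    }
    *rem = dividend;
    return quotient;
}

int main() {
    unsigned r;
    unsigned q = shift_subtract_div(28, 3, &r);
    std::printf("%u %u\n", q, r);   // prints "9 1"
    return 0;
}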
For really large integer division (e.g. numbers bigger than 64 bits) you can represent your number as an int[] and perform division quite fast by taking two digits at a time and dividing them by 3. The remainder becomes part of the next two digits, and so forth.
e.g. for 11004 / 3 you compute:
11/3 = 3, remainder = 2 (from 11 - 3*3)
20/3 = 6, remainder = 2 (from 20 - 6*3)
20/3 = 6, remainder = 2 (from 20 - 6*3)
24/3 = 8, remainder = 0
hence the result 3668
internal static List<int> Div3(int[] a)
{
    int remainder = 0;
    var res = new List<int>();
    for (int i = 0; i < a.Length; i++)
    {
        var val = remainder + a[i];
        var div = val / 3;
        remainder = 10 * (val % 3);
        if (div > 9)
        {
            res.Add(div / 10);
            res.Add(div % 10);
        }
        else
            res.Add(div);
    }
    if (res[0] == 0) res.RemoveAt(0);
    return res;
}
If you really want, see this article on integer division, but it only has academic merit; it would be interesting to see an application that actually needed to divide this way and benefited from that kind of trick.
Easy computation, with at most n iterations where n is your number of bits: the alternating series x/2 - x/4 + x/8 - ... converges to x/3, though with truncating shifts the result can be off by one:

uint8_t divideby3(uint8_t x)
{
    int answer = 0, t = x, subtract = 0;
    do
    {
        t >>= 1;                 /* next term of x/2 - x/4 + x/8 - ... */
        if (subtract) answer -= t;
        else          answer += t;
        subtract = !subtract;    /* alternate the sign */
    } while (t);
    return (uint8_t)answer;
}
A lookup table approach would also be faster in some architectures.
uint8_t DivBy3LU(uint8_t u8Operand)
{
    static const uint8_t ai8Div3[256] = {0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, /* .... */};
    return ai8Div3[u8Operand];
}

Does a range of integers contain at least one perfect square?

Given two integers a and b, is there an efficient way to test whether there is another integer n such that a ≤ n² < b?
I do not need to know n, only whether at least one such n exists or not, so I hope to avoid computing square roots of any numbers in the interval.
Although testing whether an individual integer is a perfect square is faster than computing the square root, the range may be large and I would also prefer to avoid performing this test for every number within the range.
Examples:
intervalContainsSquare(2, 3) => false
intervalContainsSquare(5, 9) => false (note: 9 is outside this interval)
intervalContainsSquare(9, 9) => false (this interval is empty)
intervalContainsSquare(4, 9) => true (4 is inside this interval)
intervalContainsSquare(5, 16) => true (9 is inside this interval)
intervalContainsSquare(1, 10) => true (1, 4 and 9 are all inside this interval)
Computing whether or not a number is a square isn't really faster than computing its square root in hard cases, as far as I know. What is true is that you can do a precomputation to know that it isn't a square, which might save you time on average.
Likewise for this problem, you can do a precomputation to determine that sqrt(b)-sqrt(a) >= 1, which then means that a and b are far enough apart that there must be a square between them. With some algebra, this inequality is equivalent to the condition that (b-a-1)^2 >= 4*a, or if you want it in a more symmetric form, that (a-b)^2+1 >= 2*(a+b). So this precomputation can be done with no square roots, only with one integer product and some additions and subtractions.
If a and b are almost exactly the same, then you can still use the trick of looking at low order binary digits as a precomputation to know that there isn't a square between them. But they have to be so close together that this precomputation might not be worth it.
If these precomputations are inconclusive, then I can't think of anything other than everyone else's solution, a <= ceil(sqrt(a))^2 < b.
Since there was a question of doing the algebra right:
sqrt(b)-sqrt(a) >= 1
sqrt(b) >= 1+sqrt(a)
b >= 1+2*sqrt(a)+a
b-a-1 >= 2*sqrt(a)
(b-a-1)^2 >= 4*a
Also: Generally when a is a large number, you would compute sqrt(a) with Newton's method, or with a lookup table followed by a few Newton's method steps. It is faster in principle to compute ceil(sqrt(a)) than sqrt(a), because the floating point arithmetic can be simplified to integer arithmetic, and because you don't need as many Newton's method steps to nail down high precision that you're just going to throw away. But in practice, a numerical library function can be much faster if it uses square roots implemented in microcode. If for whatever reason you don't have that microcode to help you, then it might be worth it to hand-code ceil(sqrt(a)). Maybe the most interesting case would be if a and b are unbounded integers (like, a thousand digits). But for ordinary-sized integers on an ordinary non-obsolete computer, you can't beat the FPU.
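Putting the pieces together, a sketch of the test in C++ (my own illustration; it applies the quick precomputation, then falls back to the a <= ceil(sqrt(a))^2 < b check, with a small loop to guard against floating-point error in sqrt):

#include <cmath>
#include <cstdint>
#include <cstdio>

bool intervalContainsSquare(int64_t a, int64_t b) {
    if (a >= b) return false;                 // empty interval
    if (a < 0) a = 0;                         // n^2 >= 0, so clamp
    // Precomputation: (b-a-1)^2 >= 4a means sqrt(b) - sqrt(a) >= 1,
    // so the interval is wide enough to be guaranteed to contain a square.
    if ((b - a - 1) * (b - a - 1) >= 4 * a) return true;
    // Otherwise test a <= ceil(sqrt(a))^2 < b.
    int64_t r = (int64_t)std::ceil(std::sqrt((double)a));
    while (r * r < a) ++r;                    // fix up floating-point error
    while (r > 0 && (r - 1) * (r - 1) >= a) --r;
    return r * r < b;
}

int main() {
    std::printf("%d %d %d %d %d %d\n",
        intervalContainsSquare(2, 3), intervalContainsSquare(5, 9),
        intervalContainsSquare(9, 9), intervalContainsSquare(4, 9),
        intervalContainsSquare(5, 16), intervalContainsSquare(1, 10));
    // prints: 0 0 0 1 1 1, matching the examples in the question
    return 0;
}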
Get the square root of the lower number. If it is an integer then you are done.
Otherwise round up and square the number. If this is less than b then it is true.
You only need to compute one square root this way.
In order to avoid a problem when a is equal to b, you should check that first, as that case is always false.
If you will accept calculating two square roots, because of its monotonicity you have this inequality which is equivalent to your starting one:
sqrt(a) <= n < sqrt(b)
thus, if floor(sqrt(a)) != floor(sqrt(b)), such an n exists: floor(sqrt(b)) works whenever b is not a perfect square; if b is a perfect square, floor(sqrt(b)) - 1 must still be checked against sqrt(a), since b itself is excluded by the strict inequality.
get the square root of the lower number and round it up
get the square root of the higher number and round it down
if the first is less than or equal to the second, there is a perfect square in the range (taking care that b itself is excluded when it is a perfect square)
Find the integral part of sqrt(a) and sqrt(b), say sa and sb.
If sa² = a, then output yes.
If sb² = b and sa = sb - 1, then output no.
If sa < sb output yes.
Else output no.
You can optimize the above to get rid of the computation of sqrt(b) (similar to JDunkerly's answer).
Or did you want to avoid computing square roots of a and b too?
You can avoid computing square roots completely by using a method similar to binary search.
You start with a guess n = 1 and compute n².
If a <= n² < b, you can stop.
If n² < a, you double your guess n.
If n² >= b, you move n close to the average of the current and previous guesses.
This will be O(log b) time.
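A sketch of that search (my own illustration in C++; it doubles to bracket the square root of a, then bisects to the smallest n with n² >= a, and finally checks n² < b; it assumes the inputs are small enough that n*n does not overflow):

#include <cstdint>
#include <cstdio>

bool containsSquare(uint64_t a, uint64_t b) {   // is there an n with a <= n*n < b?
    if (a == 0) return b > 0;                   // n = 0 covers this case
    uint64_t hi = 1;
    while (hi * hi < a) hi <<= 1;               // doubling phase: now hi*hi >= a
    uint64_t lo = hi >> 1;                      // previous guess: lo*lo < a (or lo == 0)
    while (lo + 1 < hi) {                       // bisection phase
        uint64_t mid = lo + (hi - lo) / 2;
        if (mid * mid < a) lo = mid; else hi = mid;
    }
    return hi * hi < b;                         // hi is the smallest n with n*n >= a
}

int main() {
    std::printf("%d %d %d\n",
        containsSquare(5, 9), containsSquare(4, 9), containsSquare(5, 16));
    // prints: 0 1 1
    return 0;
}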
In addition to JDunkerley's nice solution (+1), there could be a possible improvement that needs to be tested: computing the square roots with an integer-only square-root routine.
Why are you hoping to avoid square roots entirely? Even before you get to the most efficient way of solving this, you have seen methods that call for only 2 square roots. That's done in O(1) time, so it seems to me that any improvement you could hope to make would take more time to think about than it would EVER save you computing time. Am I wrong?
One way is to use Newton's method to find the integer square root for b. Then you can check if that number falls in the range. I doubt that it is faster than simply calling the square root function, but it is certainly more interesting:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main( int argc, char* argv[] )
{
    int a, b;
    double xk = 0, xk1;
    int root;
    int iter = 0;

    a = atoi( argv[1] );
    b = atoi( argv[2] );

    xk1 = b;   /* initial guess */
    while( fabs( xk1 - xk ) >= .5 ) {
        xk = xk1;
        xk1 = ( xk + b / xk ) / 2.;
        printf( "%d) xk = %f\n", ++iter, xk1 );
    }
    root = (int)xk1;

    // If b is a perfect square, then this finds that root, so it also
    // needs to check if (root-1)^2 falls in the range.
    // And this does a lot more multiplications than it needs.
    if ( ( root*root >= a && root*root < b ) ||
         ( (root-1)*(root-1) >= a && (root-1)*(root-1) < b ) )
        printf( "Contains perfect square\n" );
    else
        printf( "Does not contain perfect square\n" );

    return 0;
}
