I have seen this question asked a lot but never seen a true concrete answer to it. So I am going to post one here which will hopefully help people understand why exactly there is "modulo bias" when using a random number generator, like rand() in C++.
So rand() is a pseudo-random number generator which chooses a natural number between 0 and RAND_MAX, which is a constant defined in cstdlib (see this article for a general overview on rand()).
Now what happens if you want to generate a random number between say 0 and 2? For the sake of explanation, let's say RAND_MAX is 10 and I decide to generate a random number between 0 and 2 by calling rand()%3. However, rand()%3 does not produce the numbers between 0 and 2 with equal probability!
When rand() returns 0, 3, 6, or 9, rand()%3 == 0. Therefore, P(0) = 4/11
When rand() returns 1, 4, 7, or 10, rand()%3 == 1. Therefore, P(1) = 4/11
When rand() returns 2, 5, or 8, rand()%3 == 2. Therefore, P(2) = 3/11
This does not generate the numbers between 0 and 2 with equal probability. Of course, for small ranges this might not be the biggest issue, but for a larger range it could skew the distribution, biasing the smaller numbers.
So when does rand()%n return a range of numbers from 0 to n-1 with equal probability? When RAND_MAX % n == n - 1. In this case, together with our earlier assumption that rand() returns each number between 0 and RAND_MAX with equal probability, the n modulo classes are also equally distributed.
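To see this concretely, here is a quick check (in Python, enumerating the toy generator's outputs rather than sampling) that tallies rand() % 3 over all eleven outputs of a generator with RAND_MAX = 10; the counts match the probabilities above:

from collections import Counter

# Tally v % 3 over every output 0..10 of the toy generator (RAND_MAX = 10).
counts = Counter(v % 3 for v in range(11))
print(counts)  # Counter({0: 4, 1: 4, 2: 3}) -> P(0) = P(1) = 4/11, P(2) = 3/11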
So how do we solve this problem? A crude way is to keep generating random numbers until you get a number in your desired range:
int x;
do {
    x = rand();
} while (x >= n);
but that's inefficient for low values of n, since you only have an n/RAND_MAX chance of getting a value in your range, and so you'll need to perform RAND_MAX/n calls to rand() on average.
A more efficient approach is to take a large range whose length is divisible by n, such as [0, RAND_MAX - RAND_MAX % n), keep generating random numbers until you get one that lies in the range, and then take the modulus:
int x;
do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));
x %= n;
For small values of n, this will rarely require more than one call to rand().
Works cited and further reading:
CPlusPlus Reference
Eternally Confuzzled
Repeatedly selecting a random number and rejecting the out-of-range values is a good way to remove the bias.
Update
We can make the code fast by searching for an x in a range whose length is divisible by n.
// Assumptions
// rand() in [0, RAND_MAX]
// n in (0, RAND_MAX]
int x;

// Keep searching for an x in a range divisible by n
do {
    x = rand();
} while (x >= RAND_MAX - (RAND_MAX % n));
x %= n;
The above loop should be very fast, requiring about one iteration on average.
@user1413793 is correct about the problem. I'm not going to discuss that further, except to make one point: yes, for small values of n and large values of RAND_MAX, the modulo bias can be very small. But using a bias-inducing pattern means that you must consider the bias every time you calculate a random number and choose different patterns for different cases. And if you make the wrong choice, the bugs it introduces are subtle and almost impossible to unit test. Compared to just using the proper tool (such as arc4random_uniform), that's extra work, not less work. Doing more work and getting a worse solution is terrible engineering, especially when doing it right every time is easy on most platforms.
Unfortunately, the implementations of the solution are all incorrect or less efficient than they should be. (Each solution has various comments explaining the problems, but none of the solutions have been fixed to address them.) This is likely to confuse the casual answer-seeker, so I'm providing a known-good implementation here.
Again, the best solution is just to use arc4random_uniform on platforms that provide it, or a similar ranged solution for your platform (such as Random.nextInt on Java). It will do the right thing at no code cost to you. This is almost always the correct call to make.
If you don't have arc4random_uniform, then you can use the power of open source to see exactly how it is implemented on top of a wider-range RNG (arc4random in this case, but a similar approach could also work on top of other RNGs).
Here is the OpenBSD implementation:
/*
 * Calculate a uniformly distributed random number less than upper_bound
 * avoiding "modulo bias".
 *
 * Uniformity is achieved by generating new random numbers until the one
 * returned is outside the range [0, 2**32 % upper_bound). This
 * guarantees the selected random number will be inside
 * [2**32 % upper_bound, 2**32) which maps back to [0, upper_bound)
 * after reduction modulo upper_bound.
 */
u_int32_t
arc4random_uniform(u_int32_t upper_bound)
{
    u_int32_t r, min;

    if (upper_bound < 2)
        return 0;

    /* 2**32 % x == (2**32 - x) % x */
    min = -upper_bound % upper_bound;

    /*
     * This could theoretically loop forever but each retry has
     * p > 0.5 (worst case, usually far better) of selecting a
     * number inside the range we need, so it should rarely need
     * to re-roll.
     */
    for (;;) {
        r = arc4random();
        if (r >= min)
            break;
    }

    return r % upper_bound;
}
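The line min = -upper_bound % upper_bound; relies on unsigned wraparound: in 32-bit unsigned arithmetic, -upper_bound equals 2**32 - upper_bound, and its remainder modulo upper_bound equals 2**32 % upper_bound, which is exactly the count of values to reject at the bottom. A quick sanity check of the identity (in Python, simulating the wraparound explicitly):

# Verify (2**32 - u) % u == 2**32 % u, the identity the OpenBSD code uses.
for u in (2, 3, 10, 1 << 20, (1 << 32) - 5):
    assert ((1 << 32) - u) % u == (1 << 32) % u
print("identity holds for all tested bounds")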
It is worth noting the latest commit comment on this code for those who need to implement similar things:
Change arc4random_uniform() to calculate 2**32 % upper_bound as
-upper_bound % upper_bound. Simplifies the code and makes it the
same on both ILP32 and LP64 architectures, and also slightly faster on
LP64 architectures by using a 32-bit remainder instead of a 64-bit
remainder.
Pointed out by Jorden Verwer on tech#
ok deraadt; no objections from djm or otto
The Java implementation is also easily findable (see previous link):
public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
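The loop condition bits - val + (n-1) < 0 is an overflow trick: it rejects bits exactly when it falls in the incomplete block at the top of [0, 2^31). A rendering of the same test in Python (with the 31-bit bound written out, since Python integers never overflow) might look like this:

# Equivalent of Java's rejection test: keep 'bits' only if rounding it
# down to a multiple of n and adding n - 1 stays below 2**31; Java
# detects the same condition via signed int overflow.
def accepted(bits, n, m=1 << 31):
    val = bits % n
    return bits - val + (n - 1) < m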
Definition
Modulo Bias is the inherent bias in using modulo arithmetic to reduce an output set to a subset of the input set. In general, a bias exists whenever the mapping between the input and output set is not equally distributed, as in the case of using modulo arithmetic when the size of the output set is not a divisor of the size of the input set.
This bias is particularly hard to avoid in computing, where numbers are represented as strings of bits: 0s and 1s. Finding truly random sources is also extremely difficult, but that is beyond the scope of this discussion. For the remainder of this answer, assume that there exists an unlimited source of truly random bits.
Problem Example
Let's consider simulating a die roll (0 to 5) using these random bits. There are 6 possibilities, so we need enough bits to represent the number 6, which is 3 bits. Unfortunately, 3 random bits yields 8 possible outcomes:
000 = 0, 001 = 1, 010 = 2, 011 = 3
100 = 4, 101 = 5, 110 = 6, 111 = 7
We can reduce the size of the outcome set to exactly 6 by taking the value modulo 6, however this presents the modulo bias problem: 110 yields a 0, and 111 yields a 1. This die is loaded.
Potential Solutions
Approach 0:
Rather than rely on random bits, in theory one could hire a small army to roll dice all day and record the results in a database, and then use each result only once. This is about as practical as it sounds, and more than likely would not yield truly random results anyway (pun intended).
Approach 1:
Instead of using the modulus, a naive but mathematically correct solution is to discard results that yield 110 and 111 and simply try again with 3 new bits. Unfortunately, this means that there is a 25% chance on each roll that a re-roll will be required, including each of the re-rolls themselves. This is clearly impractical for all but the most trivial of uses.
Approach 2:
Use more bits: instead of 3 bits, use 4. This yields 16 possible outcomes. Of course, re-rolling anytime the result is greater than 5 makes things worse (10/16 = 62.5%), so that alone won't help.
Notice that 2 * 6 = 12 < 16, so we can safely take any outcome less than 12 and reduce that modulo 6 to evenly distribute the outcomes. The other 4 outcomes must be discarded, and then re-rolled as in the previous approach.
Sounds good at first, but let's check the math:
4 discarded results / 16 possibilities = 25%
In this case, 1 extra bit didn't help at all!
That result is unfortunate, but let's try again with 5 bits:
32 % 6 = 2 discarded results; and
2 discarded results / 32 possibilities = 6.25%
A definite improvement, but not good enough in many practical cases. The good news is, adding more bits will never increase the chances of needing to discard and re-roll. This holds not just for dice, but in all cases.
As demonstrated, however, adding 1 extra bit may not change anything. In fact, if we increase our roll to 6 bits, the probability remains 6.25%.
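A quick enumeration (sketched in Python) reproduces these percentages and shows the 25% / 25% / 6.25% / 6.25% pattern:

# Discard probability when mapping 'bits' random bits onto 6 outcomes
# by rejecting the incomplete block at the top: (2**bits % 6) / 2**bits.
for bits in range(3, 9):
    total = 1 << bits
    print(bits, "bits:", f"{(total % 6) / total:.4%} discarded")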
This raises two additional questions:
If we add enough bits, is there a guarantee that the probability of a discard will diminish?
How many bits are enough in the general case?
General Solution
Thankfully the answer to the first question is yes. The problem with 6 is that 2^x mod 6 alternates between 2 and 4, which coincidentally differ from each other by a factor of 2, so that for an odd x >= 1,

[2^x mod 6] / 2^x == [2^(x+1) mod 6] / 2^(x+1)
Thus 6 is an exception rather than the rule. It is possible to find larger moduli that yield consecutive powers of 2 in the same way, but eventually this must wrap around, and the probability of a discard will be reduced.
Without offering further proof, in general using double the number of bits required will provide a smaller, usually insignificant, chance of a discard.
Proof of Concept
Here is an example program that uses OpenSSL's libcrypto to supply random bytes. When compiling, be sure to link against the library with -lcrypto, which most everyone should have available.
#include <iostream>
#include <assert.h>
#include <limits>
#include <openssl/rand.h>
volatile uint32_t dummy;
uint64_t discardCount;

uint32_t uniformRandomUint32(uint32_t upperBound)
{
    assert(RAND_status() == 1);
    uint64_t randomPool;
    // Count of values to discard from the top of the 64-bit range so that
    // the number of accepted values is an exact multiple of upperBound:
    // 2**64 % upperBound == (UINT64_MAX % upperBound + 1) % upperBound
    uint64_t discard = (std::numeric_limits<uint64_t>::max() % upperBound + 1) % upperBound;
    RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));
    while (randomPool > (std::numeric_limits<uint64_t>::max() - discard)) {
        RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));
        ++discardCount;
    }
    return randomPool % upperBound;
}
int main() {
    discardCount = 0;
    const uint32_t MODULUS = (1ul << 31) - 1;
    const uint32_t ROLLS = 10000000;
    for (uint32_t i = 0; i < ROLLS; ++i) {
        dummy = uniformRandomUint32(MODULUS);
    }
    std::cout << "Discard count = " << discardCount << std::endl;
}
I encourage playing with the MODULUS and ROLLS values to see how many re-rolls actually happen under most conditions. A sceptical person may also wish to save the computed values to a file and verify the distribution appears uniform.
Mark's Solution (The accepted solution) is Nearly Perfect.
int x;
do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));
x %= n;
However, it has a caveat: it discards one valid set of outcomes in any scenario where RAND_MAX (RM) is 1 less than a multiple of N (where N = the number of possible valid outcomes).

That is, when the 'count of values discarded' (D) is equal to N, they are actually a valid set (V), not an invalid set (I).

What causes this is that the formula loses sight of the difference between N and RAND_MAX: N is a count of valid outcomes (e.g., N = 4 means the outcomes {0, 1, 2, 3}), while RAND_MAX is the largest value rand() can return, so the set of possible responses {0 ... RAND_MAX} contains RAND_MAX + 1 values. RAND_MAX is therefore one less than the total number of possible responses.

Using Mark's solution, values are discarded when: X >= RM - RM % N
EG:

RAND_MAX value (RM) = 255
Valid outcomes (N) = 4

Number of discarded values (I) = RM % N + 1 = 255 % 4 + 1 = 3 + 1 = 4 = N

The discard test X >= (RM - RM % N) becomes X >= (255 - 3), i.e. X >= 252, so the discarded values for X are: 252, 253, 254, 255.
As the example above shows, when the value of X (the random number we get from the initial function) is 252, 253, 254, or 255, we would discard it even though these four values comprise a valid set of returned values.

IE: When the count of the values discarded (I) = N (the number of valid outcomes), a valid set of return values will be discarded by the original function.
If we describe the difference between the values N and RM as D, i.e.:

D = (RM - N)

then as the value of D becomes smaller, the percentage of unneeded re-rolls due to this method increases. (This is a valid concern whenever RM + 1 is a multiple of N.)

EG:

RM = 255, N = 2   Then: D = 253, Lost percentage = 0.78125%
RM = 255, N = 4   Then: D = 251, Lost percentage = 1.5625%
RM = 255, N = 8   Then: D = 247, Lost percentage = 3.125%
RM = 255, N = 16  Then: D = 239, Lost percentage = 6.25%
RM = 255, N = 32  Then: D = 223, Lost percentage = 12.5%
RM = 255, N = 64  Then: D = 191, Lost percentage = 25%
RM = 255, N = 128 Then: D = 127, Lost percentage = 50%

Since the percentage of re-rolls needed increases the closer N comes to RM, this can be a valid concern at many different values depending on the constraints of the system running the code and the values being looked for.
To negate this, we can make a simple amendment, as shown here:
int x;
do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
x %= n;
This provides a more general version of the formula which accounts for the additional peculiarities of using modulus to define your max values.
Examples of using a small value for RAND_MAX which is a multiple of N.

Mark's original version:
RAND_MAX = 3, n = 2; values in {0 ... RAND_MAX} = 0, 1, 2, 3; valid sets = {0, 1} and {2, 3}.
When X >= (RAND_MAX - (RAND_MAX % n)), i.e. X >= 2, the value will be discarded, even though the set is valid.
Generalized Version 1:
RAND_MAX = 3, n = 2; values in {0 ... RAND_MAX} = 0, 1, 2, 3; valid sets = {0, 1} and {2, 3}.
When X > (RAND_MAX - ((RAND_MAX % n) + 1) % n), i.e. X > 3, the value would be discarded; but this is not a value in the set {0 ... RAND_MAX}, so there will be no discard.
Additionally, in the case where N should be the number of values in {0 ... RAND_MAX}, you could set N = RAND_MAX + 1, unless RAND_MAX = INT_MAX (in which case N overflows).

Loop-wise you could just use N = 1 (any value of X will be accepted) and put an if statement in for your final multiplier; but perhaps your code has a valid reason to return 0 when the function is called with n = 1.

So it may be better to use 0, which would normally cause a div-0 error, to signal that you want n = RAND_MAX + 1.
Generalized Version 2:
int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
    x %= n;
} else {
    x = rand();
}
Both of these solutions resolve the issue of needlessly discarded valid results, which occurs when RM + 1 is a multiple of n.
The second version also covers the edge case scenario when you need n to equal the total possible set of values contained in RAND_MAX.
The modified approach is the same in both, providing a more general solution to the need of generating valid random numbers while minimizing discarded values.
To reiterate:
The basic general solution, which extends Mark's example:
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;

do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
x %= n;
The extended general solution, which allows one additional scenario of RAND_MAX + 1 = n:
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
    x %= n;
} else {
    x = rand();
}
In some languages (particularly interpreted languages) doing the calculation of the compare operation outside of the while condition may lead to faster results, as it is a one-time calculation no matter how many re-tries are required. YMMV!
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;  // Resulting random number
int y;  // One-time calculation of the compare value for x

if (n != 0) {
    y = RAND_MAX - (((RAND_MAX % n) + 1) % n);  // computed once; n != 0 here
    do {
        x = rand();
    } while (x > y);
    x %= n;
} else {
    x = rand();
}
There are two usual complaints about the use of modulo.

One is valid for all generators. It is easier to see in a limit case. If your generator has a RAND_MAX of 2 (which isn't compliant with the C standard) and you want only 0 or 1 as values, using modulo will generate 0 twice as often (when the generator returns 0 or 2) as it will generate 1 (when the generator returns 1). Note that this is true as soon as you don't drop values: whatever mapping you use from the generator's values to the wanted ones, one value will occur twice as often as the other.

Some kinds of generators have their least significant bits less random than the others, at least for some of their parameters, but sadly those parameters have other interesting characteristics (such as having RAND_MAX one less than a power of 2). The problem is well known, and library implementations have long avoided it (for instance, the sample rand() implementation in the C standard uses this kind of generator but drops the 16 least significant bits), but some like to complain about that, and you may have bad luck.
Using something like
int alea(int n) {
    assert(0 < n && n <= RAND_MAX);
    int partSize = n == RAND_MAX ? 1 : 1 + (RAND_MAX - n) / (n + 1);
    int maxUsefull = partSize * n + (partSize - 1);
    int draw;
    do {
        draw = rand();
    } while (draw > maxUsefull);
    return draw / partSize;
}
to generate a random number between 0 and n will avoid both problems (and it avoids overflow with RAND_MAX == INT_MAX)
BTW, C++11 introduced standard ways to do the reduction, and generators other than rand().
With a RAND_MAX value of 3 (in reality it should be much higher than that, but the bias would still exist), it makes sense from these calculations that there is a bias:
1 % 2 = 1
2 % 2 = 0
3 % 2 = 1
random_between(1, 3) % 2 = more likely a 1
In this case, the % 2 is what you shouldn't do when you want a random number between 0 and 1. You could get a random number between 0 and 2 by doing % 3, though, because in this case RAND_MAX is a multiple of 3.
Another method
There are much simpler solutions above, but to add to the other answers, here is my solution to get a random number between 0 and n - 1 (so n different possibilities) without bias:
the number of bits (not bytes) needed to encode the number of possibilities is the number of bits of random data you'll need
encode the number from random bits
if this number is >= n, restart (no modulo).
Truly random data is not easy to obtain, so why use more bits than needed?
Below is an example in Smalltalk, using a cache of bits from a pseudo-random number generator. I'm no security expert so use at your own risk.
next: n
| bitSize r from to |
n < 0 ifTrue: [^0 - (self next: 0 - n)].
n = 0 ifTrue: [^nil].
n = 1 ifTrue: [^0].
cache isNil ifTrue: [cache := OrderedCollection new].
cache size < (self randmax highBit) ifTrue: [
Security.DSSRandom default next asByteArray do: [ :byte |
(1 to: 8) do: [ :i | cache add: (byte bitAt: i)]
]
].
r := 0.
bitSize := n highBit.
to := cache size.
from := to - bitSize + 1.
(from to: to) do: [ :i |
r := r bitAt: i - from + 1 put: (cache at: i)
].
cache removeFrom: from to: to.
r >= n ifTrue: [^self next: n].
^r
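For readers who don't know Smalltalk, the same idea is only a few lines in Python; this is a sketch using the standard secrets module for the random bits, not a security-reviewed implementation:

import secrets

# Draw just enough random bits to cover n outcomes; reject and retry
# when the value is >= n (no modulo, hence no bias).
def randbelow(n):
    k = (n - 1).bit_length()  # bits needed to encode n possibilities
    while True:
        r = secrets.randbits(k)
        if r < n:
            return r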
Modulo reduction is a commonly seen way to make a random integer generator avoid the worst case of running forever.
When the range of possible integers is unknown, however, there is no way in general to "fix" this worst case of running forever without introducing bias. It's not just modulo reduction (rand() % n, discussed in the accepted answer) that will introduce bias this way, but also the "multiply-and-shift" reduction of Daniel Lemire, or if you stop rejecting an outcome after a set number of iterations. (To be clear, this doesn't mean there is no way to fix the bias issues present in pseudorandom generators. For example, even though modulo and other reductions are biased in general, they will have no issues with bias if the range of possible integers is a power of 2 and if the random generator produces unbiased random bits or blocks of them.)
The following answer of mine discusses the relationship between running time and bias in random generators, assuming we have a "true" random generator that can produce unbiased and independent random bits. The answer doesn't even involve the rand() function in C because it has many issues. Perhaps the most serious here is the fact that the C standard does not explicitly specify a particular distribution for the numbers returned by rand(), not even a uniform distribution.
How to generate a random integer in the range [0,n] from a stream of random bits without wasting bits?
As the accepted answer indicates, "modulo bias" has its roots in the low value of RAND_MAX. It uses an extremely small value of RAND_MAX (10) to show that if RAND_MAX were 10 and you tried to generate a number between 0 and 2 using %, the following outcomes would result:
rand() % 3 // if RAND_MAX were only 10, gives
output of rand() | rand()%3
0 | 0
1 | 1
2 | 2
3 | 0
4 | 1
5 | 2
6 | 0
7 | 1
8 | 2
9 | 0
So there are 4 outputs of 0's (4/10 chance) and only 3 outputs of 1 and 2 (3/10 chances each).
So it's biased. The lower numbers have a better chance of coming out.
But that only shows up so obviously when RAND_MAX is small. Or more specifically, when the number you are modding by is large compared to RAND_MAX.
A much better solution than looping (which is insanely inefficient and shouldn't even be suggested) is to use a PRNG with a much larger output range. The Mersenne Twister algorithm has a maximum output of 4,294,967,295. As such, doing MersenneTwister::genrand_int32() % 10 will, for all intents and purposes, be equally distributed, and the modulo bias effect will all but disappear.
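For a sense of scale, the residual bias can be computed exactly (a quick check in Python): 2**32 % 10 == 6, so six of the ten residues occur exactly one extra time over a full 32-bit output range:

# Exact residue counts for a full 32-bit output range reduced mod 10.
base, extra = divmod(2**32, 10)  # base = 429496729, extra = 6
print(extra, "residues occur", base + 1, "times; the others occur", base, "times")
print("relative bias:", (base + 1) / base - 1)  # about 2.3e-9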
I just wrote code for Von Neumann's Unbiased Coin Flip Method, which should theoretically eliminate any bias in the random number generation process. More info can be found at http://en.wikipedia.org/wiki/Fair_coin
int unbiased_random_bit() {
    int x1, x2, prev;
    prev = 2;
    x1 = rand() % 2;
    x2 = rand() % 2;

    for (;; x1 = rand() % 2, x2 = rand() % 2)
    {
        if (x1 ^ x2)       // 01 -> 1, or 10 -> 0.
        {
            return x2;
        }
        else if (x1 & x2)
        {
            if (!prev)     // 0011
                return 1;
            else
                prev = 1;  // 1111 -> continue, bias unresolved
        }
        else
        {
            if (prev == 1) // 1100
                return 0;
            else           // 0000 -> continue, bias unresolved
                prev = 0;
        }
    }
}
Related
I am currently performing Fourier transforms for some physics problem, and a huge bottleneck of my algorithm comes from the evaluation of a scalar product modulo 2.
For a given integer N, I have to represent all the numbers in binary up to 2^N-1.
For each of these numbers, represented as a binary vector (e.g. 15 = 2^3 + 2^2 +2+2^0 = (1,1,1,1,0,...,0)) I have to evaluate its scalar products with all numbers from 0 to 2^N-1 in binary form modulo 2.
(for example, the scalar product 1.15 =(1,0,0,...,0).(1,1,1,1,0,...,0)=1*1+1*0+...=1 mod 2)
Note that the components are kept in binary form during the reducing modulo 2
(1,1).(1,1)=1*1+1*1 and not 1*1+2*2
This is basically 2^(2N) scalar products that I have to perform and reduce modulo 2.
I am having difficulty getting beyond N = 18.
I was wondering whether some clever mathematical trick can be used to greatly reduce the time spent doing them.
I was thinking of some kind of recursion (i.e., saving results for N in a file and deducing the results for N+1), but I am not sure this would help. Indeed, with this recursion, knowing the results for N, I could split the vector for N+1 into the N part plus an additional digit; but then at each scalar product, instead of evaluating it, I would have to tell my computer to go and read a big file (because I probably wouldn't be able to keep it all in dynamic memory), which is probably more time-consuming than the ~20 multiplications I have to perform for each of the products.
Is there any known optimized number-theoretical algorithm allowing the evaluation of such a scalar product modulo 2 very quickly ? Are there any rules or ideas I am not aware of that I could exploit ?
Sorry for the terrible formatting, I just can't get LaTeX to work in here.
The sum of the product of corresponding bits, modulo 2, will be equal to the number of 1 bits in the AND of the two numbers, modulo 2.
As you can get the binary representation of a number easily, it might not be necessary to actually create an array of bits for them; just use the integer data type in your programming language, which allows for at least 32 bits. Many languages offer bit operators, such as AND (&) and XOR (^).
Counting the 1 bits in a number can be done with the variable-precision SWAR algorithm.
Here is a program in Python that calculates this product modulo 2 for 2 numbers:
def numberOfSetBits(i):
    i = i - ((i >> 1) & 0x55555555)
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24

def product(a, b):
    return numberOfSetBits(a & b) % 2
Instead of counting the bits with numberOfSetBits, you could fold the bits together with XORs, first the 16 most significant bits with the 16 least significant bits, then of that result the 8 most significant with the 8 least significant bits, until you have one bit left. Again in Python:
def bitParity(i):
    i = (i >> 16) ^ i
    i = (i >> 8) ^ i
    i = (i >> 4) ^ i
    i = (i >> 2) ^ i
    i = (i >> 1) ^ i
    return i % 2

def product(a, b):
    return bitParity(a & b)
If you change the order in which you evaluate these pairs (a matrix of size 2^N x 2^N), then you can efficiently figure out which products-mod-2 change in each row of your evaluation.

Using Gray code, you can iterate over each value from 0 ... 2^N - 1 in a special order where only 1 bit of the outer-loop value changes each time. You can store 1 bit for each value from 0 ... 2^N - 1 representing the previous row's product-mod-2 values, and then change it based on whether the changing bit has any effect, which it only does when the corresponding bit in the other (inner-loop) number is 1 (if it's 0, the binary AND will be 0 no matter what the value of the other bit).
In C:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int N = 5;
    int max = (1 << N) - 1;
    unsigned char* prev = calloc((1 << N) / 8, 1);
    // for the first row all the products will be zero, so start at row 1
    for (int a = 1; a <= max; a++)
    {
        int grey = a ^ (a >> 1); // compute the Gray code
        int prev_grey = (a - 1) ^ ((a - 1) >> 1);
        int changed_bit = grey ^ prev_grey;
        for (int b = 0; b <= max; b++)
        {
            // the product will be changed only if b has a 1 at the same place
            // (otherwise it will be 0 regardless)
            if (b & changed_bit)
            {
                prev[b >> 3] ^= (1 << (b & 7));
            }
            int mod = (prev[b >> 3] & (1 << (b & 7))) != 0;
            printf("mod value of %d and %d is %d\n", grey, b, mod);
        }
    }
    free(prev);
    return 0;
}
The inner loop can be optimized even more, because you can easily figure out which values of b have a non-zero value in the position of the changed bit: for example, if it's in position 10, then b runs through alternating blocks of 1024 zeros and 1024 ones in that bit. So you know you have 1024 values in a row where the product-mod-2 is the same as in the previous row, and so on. It's not clear to me whether this helps you, though, because I don't know what you are doing with these products.
The inner loop could also be unrolled (e.g. 32 or 64 times) so that you don't read and write to the prev array each time, but rather process blocks of 32 or 64 bits at a time.
Problem
I have a need to pick one unique random number at a time between 0 and 10,000,000,000 and do it till all numbers are selected. Essentially the behavior I need is a pre-built stack/queue with 10 billion numbers in random order, with no ability to push new items into it.
Not so good ways to solve:
There's no shortage of inefficient ways in my brain. Such as,
Persist generated numbers and check whether each newly generated random number has already been used; at some point this gets us into an indefinite wait before a usable number is produced.
Persist all possible numbers in a table, pop a random row, and maintain a new row count for the next pick, etc. Not sure if this is good or bad.
Questions:
Are there other deterministic ways besides storing all possible combinations and using random?
Like maintaining windows of available numbers and randomly select a window first and randomly select a number within that window etc. eg: like this
If not, what is the best type to store numbers in reasonably small amount of space?
50+% of the numbers won't fit in 32 bits (int), and 64 bits (long) is wasteful, since the largest number fits in 34 bits; that wastes 30 bits per number (>37 GB total).
If this problem hasn't been solved already: what is a good data structure for storing and picking a random spot, and quickly adjusting the structure so the next pick is fast?
Sorry for the ambiguity. The largest selectable number is 9,999,999,999 and the smallest selectable is 1.
You ask: "Are there other deterministic ways besides storing all possible combinations and using random?"
Yes there is: encryption. Encryption with a given key guarantees a unique result for unique inputs, since it is reversible. Each key defines a one-to-one permutation of the possible inputs. You need an encryption of inputs in the range [1..10,000,000,000]. To deal with something that big you need 34-bit numbers, which go up to 17,179,869,183.
There is no standard 34-bit encryption. Depending on how much security you need and how fast you need the numbers, you can either write your own simple, fast, insecure four-round Feistel cipher, or for something slower and more secure use the Hasty Pudding cipher in 34-bit mode.
With either solution, if the first encryption gives a result outside the range, just encrypt the result again until the new result is within the range you want. The one-to-one property ensures that the final result of the chain of encryptions will be unique.
To generate a sequence of unique random-seeming numbers just encrypt 0, 1, 2, 3, 4, ... in order with the same key. Encryption guarantees that the results will be unique for that key. If you record how far you have got, then you can generate more unique numbers later, up to your 10 billion limit.
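As a rough illustration of the idea (a sketch, not a vetted cipher: the round function, constant, and key below are arbitrary choices), a four-round balanced Feistel network on 34-bit values with cycle-walking could look like this in Python:

MASK17 = (1 << 17) - 1  # 34-bit values split into two 17-bit halves

def feistel34(x, key, rounds=4):
    # Each round is invertible regardless of the mixing function,
    # so the whole map is a permutation of [0, 2**34).
    left, right = x >> 17, x & MASK17
    for r in range(rounds):
        f = (right * 0x9E37 + key + r) & MASK17  # arbitrary keyed mixer
        left, right = right, left ^ f
    return (left << 17) | right

def unique_number(i, key, limit=10_000_000_000):
    # Cycle-walk: re-encrypt until the value lands inside [0, limit).
    # Terminates because i itself (assumed < limit) lies on the cycle.
    x = feistel34(i, key)
    while x >= limit:
        x = feistel34(x, key)
    return x

Feeding in i = 0, 1, 2, ... with a fixed key then yields each in-range value exactly once, as described above.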
As mentioned by AChampion in the comments, you could use a Linear Congruential generator.
Your modulo (m) value will be 10 billion. In order to get a full period (all values in the range appear before the series repeats) you need to choose the a and c constants to satisfy certain criteria. m and c need to be relatively prime and a - 1 needs to be divisible by the prime factors of m (which are just 2 and 5) and also by 4 (since 10 billion is divisible by 4).
If you just come up with a single set of constants, you will only have one possible series, and the numbers will always occur in the same order. However, you can easily generate random constants that satisfy the criteria. To test for relative primality of c and m, just check that c is not divisible by 2 or 5, since these are the only prime factors of m (see the first condition of the coprimality test here).
Simple sketch in Python:
import random

m = 10000000000
a = 0
c = 0
r = 0

def setupLCG():
    global a, c, r
    # choose value of c that is 0 < c < m and relatively prime to m
    c = 5
    while ((c % 5 == 0) or (c % 2 == 0)):
        c = random.randint(1, m - 1)
    # choose value of a such that 0 < a <= m and a - 1 is divisible by
    # the prime factors of m, and 4
    a = 4
    while ((((a - 1) % 4) != 0) or (((a - 1) % 5) != 0)):
        a = random.randint(1, m)
    r = random.randint(0, m - 1)

def rand():
    global m, a, c, r
    r = (a * r + c) % m
    return r

random.seed()
setupLCG()
for i in range(1000):
    print(rand() + 1)
This approach won't give the full 10000000000! possible orderings, but it will still be on the order of 10^19, which is quite a lot. It does have a few other issues (e.g., it alternates even and odd values). You could mix it up a bit by keeping a small pool of numbers, adding a number from the sequence to it each time, and randomly drawing one out.
Similar to what rossum has suggested, you can use an invertible integer hash function, which uniquely maps an integer in [0, 2^k) to another integer in the same range. For your particular problem, you choose k = 34 (2^34 is about 17 billion) and reject any number above 10 billion. Here is a complete implementation:
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

uint64_t hash_64(uint64_t key, uint64_t mask)
{
    key = (~key + (key << 21)) & mask; // key = (key << 21) - key - 1;
    key = key ^ key >> 24;
    key = ((key + (key << 3)) + (key << 8)) & mask; // key * 265
    key = key ^ key >> 14;
    key = ((key + (key << 2)) + (key << 4)) & mask; // key * 21
    key = key ^ key >> 28;
    key = (key + (key << 31)) & mask;
    return key;
}

int main(int argc, char *argv[])
{
    uint64_t i, shift, mask, max = 10000ULL;
    char *dummy;
    if (argc > 1) max = strtoull(argv[1], &dummy, 10); // strtoull: the bound may exceed 32 bits
    for (shift = 0; 1ULL << shift <= max; ++shift) {}
    mask = (1ULL << shift) - 1;
    for (i = 0; i <= mask; ++i) {
        uint64_t x = hash_64(i, mask);
        x = hash_64(x, mask);
        x = hash_64(x, mask); // apply multiple times to increase randomness
        if (x > max || x == 0) continue;
        printf("%llu\n", (unsigned long long)x);
    }
    return 0;
}
This should give you the numbers in [1, 10000000000] in random order.
The range 1-999,999,999,999 is equivalent to 0-999,999,999,998 (just add 1). Given the definition of an LCG, you can implement this:
import functools as ft
import itertools as it
import operator as op
from sympy import primefactors, nextprime

def LCG(m, seed=0):
    factors = set(primefactors(m))
    a = ft.reduce(op.mul, factors) + 1
    # Hull-Dobell: if m is divisible by 4, a - 1 must be too
    assert m % 4 != 0 or (a - 1) % 4 == 0
    c = nextprime(max(factors) + 1)
    assert c < m
    x = seed
    while True:
        x = (a * x + c) % m
        yield x
# Check the first 10,000,000 for duplicates
>>> x = list(it.islice(LCG(999999999999), 10000000))
>>> len(x) == len(set(x))
True
# Last 10 numbers
>>> x[-10:]
[99069910838, 876847698522, 765736597318, 99069940559, 210181061577,
432403293706, 99069970280, 543514424631, 99069990094, 99070000001]
I've taken a couple of shortcuts for the context of this question: the asserts should be replaced with handling code, as currently the generator would simply fail if those asserts were False.
I'm not aware of any truly random methods of picking the numbers without storing a list of the numbers already picked. You could do some sort of linear hashing algorithm, and then pass the numbers 0 to n through it (repeating when your hash returns a value above 10000000000), but this wouldn't be truly random.
If you are to store the numbers, you might consider doing it via a bitmask. To pick quickly in the bitmask, you would likely keep a tree, where each leaf would represent the number of free bits in the corresponding 32 bytes, the branches above that would list the number of free bits in the corresponding 2K entries, and so forth. You then have O(log(n)) time to find your next entry, and O(log(n)) time to claim a bit (as you have to update the tree). It would require something to the order of 2n bits to store as well.
You definitely don't need to store all the numbers.
If you want a perfect set of the numbers from 1 to 10B, each exactly once, there are a couple of options that I see: as hinted at by the others, use a 34-bit LCG or Galois LFSR or XOR-shift that generates a sequence of numbers from 1 to 17B or so, then throw out the ones over 10B. I am not aware of any specifically 34-bit functions for this, but I'm sure someone has one.
Option 2, if you can spare 1.25 GB of memory, is to create a bitmap that stores only the information that a certain number has been chosen, and then use Floyd's Algorithm to get the numbers (see the sketch after this list), which would be fast and give you much better quality numbers (in fact, it would work just fine with hardware RNGs).
Option 3, if you can live with a rare but occasional mistake (duplicate or never-selected number), replace the bitmap with a Bloom filter and save memory.
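For option 2, Floyd's algorithm is worth sketching. A minimal version in Python, assuming you want m distinct values out of 1..n (producing them in a random order would still require shuffling the sample):

import random

# Floyd's algorithm: sample m distinct values from 1..n uniformly,
# using exactly m random draws and a set of size m.
def floyd_sample(n, m):
    chosen = set()
    for j in range(n - m + 1, n + 1):
        t = random.randint(1, j)
        chosen.add(j if t in chosen else t)
    return chosen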
If predictability is not a concern, you can generate numbers quickly using XOR operations. Suppose you want to generate a random sequence of unique numbers with n bits (34 in your case):

1- Take a seed number of n bits. This number, K, can be considered a seed that you change for each new experiment.

2- Use a counter from 0 upward.

3- Each time, XOR the counter with K: next = counter XOR K; counter++;

To limit the range to 10 billion, which is not a power of two, you will need to do rejection.

The obvious drawback is predictability. In step 3, you can apply a prior transposition to the bytes of the counter, for example reversing their order (as when converting from little-endian to big-endian). This would yield some improvement regarding the predictability of the next number.
Finally, I have to admit that this answer can be considered a particular implementation of the encryption approach mentioned in @rossum's answer, but it is more specific and probably faster.
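A minimal sketch of the scheme above in Python (K here is an arbitrary 34-bit seed chosen for the example; out-of-range values are rejected as described):

K = 0x2F1E3A4B5  # arbitrary 34-bit seed; change per experiment

# counter XOR K is a bijection on [0, 2**34), so every in-range
# output appears exactly once; values >= limit are skipped.
def xor_stream(limit=10_000_000_000):
    for counter in range(1 << 34):
        x = counter ^ K
        if x < limit:
            yield x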
Incredibly slow but it should work. Completely random
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

namespace ConsoleApplication1
{
    class Program
    {
        static Random random = new Random();

        static void Main()
        {
            const long start = 1;
            const long NumData = 10000000000;
            const long RandomNess = NumData;
            var sz = Marshal.SizeOf(typeof(long));
            var numBytes = NumData * sz;
            var filePath = Path.GetTempFileName();
            using (var stream = new FileStream(filePath, FileMode.Create))
            {
                // create file with numbers in order
                stream.Seek(0, SeekOrigin.Begin);
                for (var index = start; index < NumData; index++)
                {
                    var bytes = BitConverter.GetBytes(index);
                    stream.Write(bytes, 0, sz);
                }
                for (var iteration = 0L; iteration < RandomNess; iteration++)
                {
                    // get 2 random longs
                    var item1Index = LongRandom(0, NumData - 1, random);
                    var item2Index = LongRandom(0, NumData - 1, random);
                    // allocate room for data
                    var data1ByteArray = new byte[sz];
                    var data2ByteArray = new byte[sz];
                    // read the first value
                    stream.Seek(item1Index * sz, SeekOrigin.Begin);
                    stream.Read(data1ByteArray, 0, sz);
                    // read the second value
                    stream.Seek(item2Index * sz, SeekOrigin.Begin);
                    stream.Read(data2ByteArray, 0, sz);
                    var item1 = BitConverter.ToInt64(data1ByteArray, 0);
                    var item2 = BitConverter.ToInt64(data2ByteArray, 0);
                    Debug.Assert(item1 < NumData);
                    Debug.Assert(item2 < NumData);
                    // swap the values
                    stream.Seek(item1Index * sz, SeekOrigin.Begin);
                    stream.Write(data2ByteArray, 0, sz);
                    stream.Seek(item2Index * sz, SeekOrigin.Begin);
                    stream.Write(data1ByteArray, 0, sz);
                }
            }
            File.Delete(filePath);
            Console.WriteLine($"{numBytes}");
        }

        static long LongRandom(long min, long max, Random rand)
        {
            long result = rand.Next((int)(min >> 32), (int)(max >> 32));
            result = (result << 32);
            result = result | rand.Next((int)min, (int)max);
            return result;
        }
    }
}
Given: set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
Asked: Find the sum of all least common multiples (LCM) of all subsets of A of size at least 2.
The LCM of a set B = {b0, b1, ..., bk-1} is defined as the minimum integer Bmin such that bi | Bmin, for all 0 ≤ i < k.
Example:
Let N = 3 and A = {2, 6, 7}, then:
LCM({2, 6}) = 6
LCM({2, 7}) = 14
LCM({6, 7}) = 42
LCM({2, 6, 7}) = 42
----------------------- +
answer 104
The naive approach would be to simply calculate the LCM for all O(2N) subsets, which is not feasible for reasonably large N.
Solution sketch:
The problem is obtained from a competition*, which also provided a solution sketch. This is where my problem comes in: I do not understand the hinted approach.
The solution reads (modulo some small fixed grammar issues):
The solution is a bit tricky. If we observe carefully we see that the integers are between 2 and 500. So, if we prime factorize the numbers, we get the following maximum powers:
2 8
3 5
5 3
7 3
11 2
13 2
17 2
19 2
Other than this, all primes have power 1. So, we can easily calculate all possible states, using these integers, leaving 9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 states, which is nearly 70000. For other integers we can make a dp like the following: dp[70000][i], where i can be 0 to 100. However, as dp[i] is dependent on dp[i-1], so dp[70000][2] is enough. This leaves the complexity to n * 70000 which is feasible.
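For reference, the editorial's state count can be reproduced in a few lines (Python): for each small prime p, take the maximum exponent e with p^e <= 500, and multiply the (e + 1) choices together:

# 9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 = 69984, the "nearly 70000" states.
states = 1
for p in (2, 3, 5, 7, 11, 13, 17, 19):
    e = 0
    while p ** (e + 1) <= 500:
        e += 1
    states *= e + 1
print(states)  # 69984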
I have the following concrete questions:
What is meant by these states?
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
How is dp[i] computed from dp[i-1]?
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
*The original problem description can be found from this source (problem F). This question is a simplified version of that description.
Discussion
After reading the actual contest description (page 10 or 11) and the solution sketch, I have to conclude the author of the solution sketch is quite imprecise in their writing.
The high level problem is to calculate an expected lifetime if components are chosen randomly by fair coin toss. This is what's leading to computing the LCM of all subsets -- all subsets effectively represent the sample space. You could end up with any possible set of components. The failure time for the device is based on the LCM of the set. The expected lifetime is therefore the average of the LCM of all sets.
Note that this ought to include the LCM of sets with only one item (in which case we'd take the LCM to be the element itself). The solution sketch seems to skip over this case, perhaps because they handled it in a less elegant manner.
What is meant by these states?
The sketch author only uses the word state twice, but apparently manages to switch meanings. In the first use of the word state it appears they're talking about a possible selection of components. In the second use they're likely talking about possible failure times. They could be muddling this terminology because their dynamic programming solution initializes values from one use of the word and the recurrence relation stems from the other.
Does dp stand for dynamic programming?
I would say either it does or it's a coincidence as the solution sketch seems to heavily imply dynamic programming.
If so, what recurrence relation is being solved? How is dp[i] computed from dp[i-1]?
All I can think is that in their solution, state i represents a time to failure, T(i), with dp[i] counting the number of times this time to failure has been counted. The resulting sum would be the sum of dp[i] * T(i) over all i.
dp[i][0] would then be the failure times counted for only the first component. dp[i][1] would then be the failure times counted for the first and second component. dp[i][2] would be for the first, second, and third. Etc..
Initialize dp[i][0] with zeroes except for dp[T(c)][0] (where c is the first component considered) which should be 1 (since this component's failure time has been counted once so far).
To populate dp[i][n] from dp[i][n-1] for each component c:
For each i, copy dp[i][n-1] into dp[i][n].
Add 1 to dp[T(c)][n].
For each i, add dp[i][n-1] to dp[LCM(T(i), T(c))][n].
What is this doing? Suppose you knew that you had a time to failure of j, but you added a component with a time to failure of k. Regardless of what components you had before, your new time to failure is LCM(j, k). This follows from the fact that for two sets A and B, LCM(A ∪ B) = LCM(LCM(A), LCM(B)).
Similarly, if we're considering a time to failure of T(i) and our new component's time to failure of T(c), the resultant time to failure is LCM(T(i), T(c)). Note that we recorded this time to failure for dp[i][n-1] configurations, so we should record that many new times to failure once the new component is introduced.
Why do the big primes not contribute to the number of states?
Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
You're right, of course. However, the solution sketch states that numbers with large primes are handled in another (unspecified) fashion.
What would happen if we did include them? The number of states we would need to represent would explode into an impractical number. Hence the author accounts for such numbers differently. Note that if a number less than or equal to 500 includes a prime larger than 19 the other factors multiply to 21 or less. This makes such numbers amenable for brute forcing, no tables necessary.
The first part of the editorial seems useful, but the second part is rather vague (and perhaps unhelpful; I'd rather finish this answer than figure it out).
Let's suppose for the moment that the input consists of pairwise distinct primes, e.g., 2, 3, 5, and 7. Then the answer (for summing all sets, where the LCM of 0 integers is 1) is
(1 + 2) (1 + 3) (1 + 5) (1 + 7),
because the LCM of a subset is exactly equal to the product here, so just multiply it out.
Let's relax the restriction that the primes be pairwise distinct. If we have an input like 2, 2, 3, 3, 3, and 5, then the multiplication looks like
(1 + (2^2 - 1) 2) (1 + (2^3 - 1) 3) (1 + (2^1 - 1) 5),
because 2 appears with multiplicity 2, and 3 appears with multiplicity 3, and 5 appears with multiplicity 1. With respect to, e.g., just the set of 3s, there are 2^3 - 1 ways to choose a subset that includes a 3, and 1 way to choose the empty set.
Call a prime small if it's 19 or less and large otherwise. Note that integers 500 or less are divisible by at most one large prime (with multiplicity). The small primes are more problematic. What we're going to do is to compute, for each possible small portion of the prime factorization of the LCM (i.e., one of the ~70,000 states), the sum of LCMs for the problem derived by discarding the integers that could not divide such an LCM and leaving only the large prime factor (or 1) for the other integers.
For example, if the input is 2, 30, 41, 46, and 51, and the state is 2, then we retain 2 as 1, discard 30 (= 2 * 3 * 5; 3 and 5 are small), retain 41 as 41 (41 is large), retain 46 as 23 (= 2 * 23; 23 is large), and discard 51 (= 3 * 17; 3 and 17 are small). Now, we compute the sum of LCMs using the previously described technique. Use inclusion-exclusion to get rid of the subsets whose LCM's small portion properly divides the state instead of being exactly equal to it. Maybe I'll work a complete example later.
What is meant by these states?
I think here, states refer to whether a number is in the set B = {b0, b1, ..., bk-1} of LCMs of set A.
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
dp in the solution sketch stands for dynamic programming, I believe.
How is dp[i] computed from dp[i-1]?
It's feasible to figure out the states of the next group of LCMs from the previous states, so we only need an array of two columns and can toggle back and forth between them.
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
We can use prime factorization, keeping only the exponents, to represent a number.
Here is one example.
6 = (2^1)(3^1)(5^0) -> state "1 1 0" to represent 6
18 = (2^1)(3^2)(5^0) -> state "1 2 0" to represent 18
Here is how we can get the LCM of 6 and 18 using prime factorization:
LCM(6, 18) = (2^max(1,1)) (3^max(1,2)) (5^max(0,0)) = (2^1)(3^2)(5^0) = 18
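The same computation in code form (a small Python sketch of the exponent-max idea, restricted to the primes 2, 3, 5 of this example):

# LCM over small primes by taking the max exponent per prime.
def exponents(n, primes=(2, 3, 5)):
    out = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

a, b = exponents(6), exponents(18)                 # [1, 1, 0], [1, 2, 0]
lcm_exponents = [max(x, y) for x, y in zip(a, b)]  # [1, 2, 0] -> 2 * 9 = 18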
2^9 > 500, 3^6 > 500, 5^4 > 500, 7^4>500, 11^3 > 500, 13^3 > 500, 17^3 > 500, 19^3 > 500
we can use only count of exponents of prime number 2,3,5,7,11,13,17,19 to represent the LCMs in the set B = {b0, b1, ..., bk-1}
for the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 <= 70000, so we only need two of dp[9][6][4][4][3][3][3][3] to keep tracks of all LCMs' states. So, dp[70000][2] is enough.
I put together a small C++ program to illustrate how we can get the sum of LCMs of the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500. Per the solution sketch, we need to loop through at most ~70000 possible LCMs.
#include <iostream>
#include <cstring>
using namespace std;

int gcd(int a, int b) {
    int remainder = 0;
    do {
        remainder = a % b;
        a = b;
        b = remainder;
    } while (b != 0);
    return a;
}

int lcm(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return (a * b) / gcd(a, b);
}

int sum_of_lcm(int A[], int N) {
    // get the max LCM from the array
    int max = A[0];
    for (int i = 1; i < N; i++) {
        max = lcm(max, A[i]);
    }
    max++;

    int dp[max][2];  // variable-length array: GNU extension
    memset(dp, 0, sizeof(dp));
    int pri = 0;
    int cur = 1;

    // loop through n x 70000
    for (int i = 0; i < N; i++) {
        for (int v = 1; v < max; v++) {
            int x = A[i];
            if (dp[v][pri] > 0) {
                x = lcm(A[i], v);
                dp[v][cur] = (dp[v][cur] == 0) ? dp[v][pri] : dp[v][cur];
                if (x % A[i] != 0) {
                    dp[x][cur] += dp[v][pri] + dp[A[i]][pri];
                } else {
                    dp[x][cur] += (x == v) ? (dp[v][pri] + dp[v][pri]) : (dp[v][pri]);
                }
            }
        }
        dp[A[i]][cur]++;
        pri = cur;
        cur = (pri + 1) % 2;
    }

    for (int i = 0; i < N; i++) {
        dp[A[i]][pri] -= 1;
    }

    long total = 0;
    for (int j = 0; j < max; j++) {
        if (dp[j][pri] > 0) {
            total += dp[j][pri] * j;
        }
    }
    cout << "total:" << total << endl;
    return total;
}

int test() {
    int a[] = {2, 6, 7};
    int n = sizeof(a) / sizeof(a[0]);
    int total = sum_of_lcm(a, n);
    return 0;
}
Output
total:104
The states are one more than the powers of the primes. You have numbers up to 2^8, so the power of 2 is in [0..8], which is 9 states. Similarly for the other primes.
"dp" could well stand for dynamic programming, I'm not sure.
The recurrence relation is the heart of the problem, so you will learn more by solving it yourself. Start with some small, simple examples.
For the large primes, try solving a reduced problem without using them (or their equivalents) and then add them back in to see their effect on the final result.
Regular numbers are numbers that evenly divide powers of 60. As an example, 60^2 = 3600 = 48 × 75, so both 48 and 75 are divisors of a power of 60. Thus, they are also regular numbers.
This is an extension of rounding up to the next power of two.
I have an integer value N which may contain large prime factors and I want to round it up to a number composed of only small prime factors (2, 3 and 5)
Examples:
f(18) == 18 == 2^1 * 3^2
f(19) == 20 == 2^2 * 5^1
f(257) == 270 == 2^1 * 3^3 * 5^1
What would be an efficient way to find the smallest number satisfying this requirement?
The values involved may be large, so I would like to avoid enumerating all regular numbers starting from 1 or maintaining an array of all possible values.
One can produce an arbitrarily thin slice of the Hamming sequence around the n-th member in time ~ n^(2/3), by direct enumeration of the triples (i, j, k) such that N = 2^i * 3^j * 5^k.
The algorithm works from log2(N) = i + j*log2(3) + k*log2(5); it enumerates all possible k and, for each, all possible j, finds the top i and thus the triple (k, j, i), and keeps it in a "band" if it is inside the given "width" below the given high logarithmic top value (when width < 1 there can be at most one such i), then sorts the triples by their logarithms.
WP says that n ~ (log N)^3, i.e. run time ~ (log N)^2. Here we don't care about the exact position of the found triple in the sequence, so all the count calculations from the original code can be thrown away:
slice hi w = sortBy (compare `on` fst) b where -- hi>log2(N) is a top value
lb5=logBase 2 5 ; lb3=logBase 2 3 -- w<1 (NB!) is log2(width)
b = concat -- the slice
[ [ (r,(i,j,k)) | frac < w ] -- store it, if inside width
| k <- [ 0 .. floor ( hi /lb5) ], let p = fromIntegral k*lb5,
j <- [ 0 .. floor ((hi-p)/lb3) ], let q = fromIntegral j*lb3 + p,
let (i,frac)=properFraction(hi-q) ; r = hi - frac ] -- r = i + q
-- properFraction 12.7 == (12, 0.7)
-- update: in pseudocode:
def slice(hi, w):
lb5, lb3 = logBase(2, 5), logBase(2, 3) -- logs base 2 of 5 and 3
for k from 0 step 1 to floor(hi/lb5) inclusive:
p = k*lb5
for j from 0 step 1 to floor((hi-p)/lb3) inclusive:
q = j*lb3 + p
i = floor(hi-q)
frac = hi-q-i -- frac < 1 , always
r = hi - frac -- r == i + q
if frac < w:
place (r,(i,j,k)) into the output array
sort the output array's entries by their "r" component
in ascending order, and return thus sorted array
Having enumerated the triples in the slice, it is a simple matter of sorting and searching, taking practically O(1) time (for an arbitrarily thin slice) to find the first triple above N. Well, actually, for constant (logarithmic) width, the amount of numbers in the slice (members of the "upper crust" in the (i,j,k)-space below the log(N) plane) is again m ~ n^(2/3) ~ (log N)^2, and sorting takes m log m time (so that searching, even linear, then takes ~ m run time). But the width can be made smaller for bigger Ns, following some empirical observations; and the constant factors for the enumeration of triples are much higher than for the subsequent sorting anyway.
Even with constant (logarithmic) width it runs very fast, calculating the 1,000,000-th value in the Hamming sequence instantly and the billionth in 0.05s.
The original idea of "top band of triples" is due to Louis Klauder, as cited in my post on a DDJ blogs discussion back in 2008.
update: as noted by GordonBGood in the comments, there's no need for the whole band but rather just about one or two values above and below the target. The algorithm is easily amended to that effect. The input should also be tested for being a Hamming number itself before proceeding with the algorithm, to avoid round-off issues with double precision. There are no round-off issues comparing the logarithms of the Hamming numbers known in advance to be different (though going up to a trillionth entry in the sequence uses about 14 significant digits in logarithm values, leaving only 1-2 digits to spare, so the situation may in fact be turning iffy there; but for 1-billionth we only need 11 significant digits).
update2: turns out the Double precision for logarithms limits this to numbers below about 20,000 to 40,000 decimal digits (i.e. 10 trillionth to 100 trillionth Hamming number). If there's a real need for this for such big numbers, the algorithm can be switched back to working with the Integer values themselves instead of their logarithms, which will be slower.
Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB: 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.

1. Calculate the next-largest power of 2 after p; call this R. N = p.
2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.
3. After any of 3a-c, go to step 4.
   a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.
   b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.
   c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.
4. Go to step 2, with p = m.
I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.
Edit: Forgot 6, thanks ypercube.
Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.
Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.
Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.
Here's a solution in Python, based on Will Ness's answer but taking some shortcuts and using pure integer math to avoid log-space numerical accuracy errors:
import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    # Check if it's already a power of 2 (or a non-integer)
    try:
        if not (target & (target - 1)):
            return target
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))

    if target <= 6:
        return target

    match = float('inf')  # Anything found will be smaller
    p5 = 1
    while p5 < target:
        p35 = p5
        while p35 < target:
            # Ceiling integer division, avoiding conversion to float
            # (quotient = ceil(target / p35))
            # From https://stackoverflow.com/a/17511341/125507
            quotient = -(-target // p35)

            # Quickly find next power of 2 >= quotient
            # See https://stackoverflow.com/a/19164783/125507
            try:
                p2 = 2**((quotient - 1).bit_length())
            except AttributeError:
                # Fallback for Python <2.7
                p2 = 2**(len(bin(quotient - 1)) - 2)

            N = p2 * p35
            if N == target:
                return N
            elif N < match:
                match = N
            p35 *= 3
            if p35 == target:
                return p35
        if p35 < match:
            match = p35
        p5 *= 5
        if p5 == target:
            return p5
    if p5 < match:
        match = p5
    return match
In English: iterate through every combination of 5s and 3s, quickly finding the next power of 2 >= target for each pair and keeping the smallest result. (It's a waste of time to iterate through every possible multiple of 2 if only one of them can be correct). It also returns early if it ever finds that the target is already a regular number, though this is not strictly necessary.
I've tested it pretty thoroughly, testing every integer from 0 to 51200000 and comparing to the list on OEIS http://oeis.org/A051037, as well as many large numbers that are ±1 from regular numbers, etc. It's now available in SciPy as fftpack.helper.next_fast_len, to find optimal sizes for FFTs (source code).
I'm not sure if the log method is faster because I couldn't get it to work reliably enough to test it. I think it has a similar number of operations, though. I'm not sure, but this is reasonably fast: it takes <3 seconds (or 0.7 seconds with gmpy) to calculate that 2^142 × 3^80 × 5^444 is the next regular number above 2^2 × 3^454 × 5^249 + 1 (the 100,000,000th regular number, which has 392 digits).
You want to find the smallest number m such that m >= N and m = 2^i * 3^j * 5^k where i, j, k >= 0.
Taking logarithms the equations can be rewritten as:
log m >= log N
log m = i*log2 + j*log3 + k*log5
You can calculate log2, log3, log5 and log N to sufficiently high accuracy (depending on the size of N). Then this problem looks like an Integer Linear Programming problem, and you could try to solve it using one of the known algorithms for this NP-hard problem.
EDITED/CORRECTED: Corrected the codes to pass the scipy tests:
Here's an answer based on endolith's answer, but one that almost eliminates long multi-precision integer calculations: it uses float64 logarithm representations for the base comparison that finds triple values passing the criteria, resorting to full-precision comparisons only when there is a chance the logarithm value may not be accurate enough, which occurs only when the target is very close to either the previous or the next regular number:
import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    if target < 2: return ( 0, 0, 0 )
    log2hi = 0
    mant = 0
    # Check if it's already a power of 2 (or a non-integer)
    try:
        mant = target & (target - 1)
        target = int(target)  # take care of case where not int/float/decimal
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))
        mant = target & (target - 1)
    # Quickly find next power of 2 >= target
    # See https://stackoverflow.com/a/19164783/125507
    try:
        log2hi = target.bit_length()
    except AttributeError:
        # Fallback for Python <2.7
        log2hi = len(bin(target)) - 2
    # exit if this is a power of two already...
    if not mant: return ( log2hi - 1, 0, 0 )
    # take care of trivial cases...
    if target < 9:
        if target < 4: return ( 0, 1, 0 )
        elif target < 6: return ( 0, 0, 1 )
        elif target < 7: return ( 1, 1, 0 )
        else: return ( 3, 0, 0 )
    # find log of target, which may exceed the float64 limit...
    if log2hi < 53: mant = target << (53 - log2hi)
    else: mant = target >> (log2hi - 53)
    log2target = log2hi + math.log2(float(mant) / (1 << 53))
    # log2 constants
    log2of2 = 1.0; log2of3 = math.log2(3); log2of5 = math.log2(5)
    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target**3 + 2 * fctr)**(1/3)  # for up to 2 numbers higher
    btm = 2 * log2target - top  # or up to 2 numbers lower
    match = log2hi  # Anything found will be smaller
    result = ( log2hi, 0, 0 )  # placeholder for eventual matches
    count = 0  # only used for debugging counting band
    fives = 0; fiveslmt = int(math.ceil(top / log2of5))
    while fives < fiveslmt:
        log2p = top - fives * log2of5
        threes = 0; threeslmt = int(math.ceil(log2p / log2of3))
        while threes < threeslmt:
            log2q = log2p - threes * log2of3
            twos = int(math.floor(log2q)); log2this = top - log2q + twos
            if log2this >= btm: count += 1  # only used for counting band
            if log2this >= btm and log2this < match:
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (2**twos * 3**threes * 5**fives) >= target:
                    match = log2this; result = ( twos, threes, fives )
            threes += 1
        fives += 1
    return result
print(next_regular(2**2 * 3**454 * 5**249 + 1)) # prints (142, 80, 444)
Since most long multi-precision calculations have been eliminated, gmpy isn't needed, and on IDEOne the above code takes 0.11 seconds instead of 0.48 seconds for endolith's solution to find the next regular number greater than the 100 millionth one as shown; it takes 0.49 seconds instead of 5.48 seconds to find the next regular number past the billionth (next one is (761,572,489) past (1334,335,404) + 1), and the difference will get even larger as the range goes up, since the multi-precision calculations get increasingly long for the endolith version but remain almost absent here. Thus, this version could calculate the next regular number from the trillionth in the sequence in about 50 seconds on IDEOne, where it would likely take over an hour with the endolith version.
The English description of the algorithm is almost the same as for the endolith version, differing as follows:
1) calculates the float log estimation of the argument target value (we can't use the built-in log function directly as the range may be much too large for representation as a 64-bit float),
2) compares the log representation values in determining qualifying values inside an estimated range above and below the target value of only about two or three numbers (depending on round-off),
3) compares multi-precision values only when they fall within the above-defined narrow band,
4) outputs the triple indices rather than the full long multi-precision integer (would be about 840 decimal digits for the one past the billionth, ten times that for the trillionth), which can then easily be converted to the long multi-precision value if required.
This algorithm uses almost no memory other than for the potentially very large multi-precision integer target value, the intermediate evaluation comparison values of about the same size, and the output expansion of the triples if required. This algorithm is an improvement over the endolith version in that it successfully uses the logarithm values for most comparisons in spite of their lack of precision, and that it narrows the band of compared numbers to just a few.
This algorithm will work for argument ranges somewhat above ten trillion (a few minutes' calculation time at IDEOne rates), beyond which it will no longer be correct due to lack of precision in the log representation values, as per @WillNess's discussion. To fix this, we can change the log representation to a "roll-your-own" logarithm representation consisting of a fixed-length integer (124 bits for about double the exponent range, good for targets of over a hundred thousand digits if one is willing to wait). This will be a little slower, since the smallish multi-precision integer operations are slower than float64 operations, but not that much slower since the size is limited (maybe a factor of three or so).
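As a sketch of what that "roll-your-own" representation could look like (an assumption on my part as to the details; the paragraph above only describes the idea), the log2 constants can be computed once at high precision with Python's decimal module and stored as fixed-point integers:
from decimal import Decimal, getcontext

getcontext().prec = 50          # plenty of digits for the constants below
FRACBITS = 120                  # fractional bits of the fixed-point log2

def fixlog2(x):
    # fixed-point log2(x): an integer carrying FRACBITS fractional bits
    return int(Decimal(x).ln() / Decimal(2).ln() * (1 << FRACBITS))

LOG2OF3, LOG2OF5 = fixlog2(3), fixlog2(5)

# the log2 of a candidate 2^i * 3^j * 5^k is then the exact integer
# (i << FRACBITS) + j * LOG2OF3 + k * LOG2OF5, and comparisons between
# candidates never lose precision the way float64 sums do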
Now, none of these Python implementations (without using C or Cython or PyPy or something) is particularly fast, as they are about a hundred times slower than implementations in a compiled language. For reference's sake, here is a Haskell version:
{-# OPTIONS_GHC -O3 #-}

import Data.Word
import Data.Bits

nextRegular :: Integer -> ( Word32, Word32, Word32 )
nextRegular target
  | target < 2 = ( 0, 0, 0 )
  | target .&. (target - 1) == 0 = ( fromIntegral lg2hi - 1, 0, 0 )
  | target < 9 = case target of
                   3 -> ( 0, 1, 0 )
                   5 -> ( 0, 0, 1 )
                   6 -> ( 1, 1, 0 )
                   _ -> ( 3, 0, 0 )
  | otherwise = match
 where
  lg3 = logBase 2 3 :: Double; lg5 = logBase 2 5 :: Double
  lg2hi = let cntplcs v cnt =
                let nv = v `shiftR` 31 in
                if nv <= 0 then
                  let cntbts x c =
                        if x <= 0 then c else
                        case c + 1 of
                          nc -> nc `seq` cntbts (x `shiftR` 1) nc in
                  cntbts (fromIntegral v :: Word32) cnt
                else case cnt + 31 of ncnt -> ncnt `seq` cntplcs nv ncnt
          in cntplcs target 0
  lg2tgt = let mant = if lg2hi <= 53 then target `shiftL` (53 - lg2hi)
                      else target `shiftR` (lg2hi - 53)
           in fromIntegral lg2hi +
                logBase 2 (fromIntegral mant / 2^53 :: Double)
  lg2top = (lg2tgt^3 + 2 * 6 * lg3 * lg5)**(1/3) -- for 2 numbers or so higher
  lg2btm = 2 * lg2tgt - lg2top -- or two numbers or so lower
  match =
    let klmt = floor (lg2top / lg5)
        loopk k mtchlgk mtchtplk =
          if k > klmt then mtchtplk else
          let p = lg2top - fromIntegral k * lg5
              jlmt = fromIntegral $ floor (p / lg3)
              loopj j mtchlgj mtchtplj =
                if j > jlmt then loopk (k + 1) mtchlgj mtchtplj else
                let q = p - fromIntegral j * lg3
                    ( i, frac ) = properFraction q; r = lg2top - frac
                    ( nmtchlg, nmtchtpl ) =
                      if r < lg2btm || r >= mtchlgj then
                        ( mtchlgj, mtchtplj ) else
                      if 2^i * 3^j * 5^k >= target then
                        ( r, ( i, j, k ) ) else ( mtchlgj, mtchtplj )
                in nmtchlg `seq` nmtchtpl `seq` loopj (j + 1) nmtchlg nmtchtpl
          in loopj 0 mtchlgk mtchtplk
    in loopk 0 (fromIntegral lg2hi) ( fromIntegral lg2hi, 0, 0 )

trival :: ( Word32, Word32, Word32 ) -> Integer
trival (i,j,k) = 2^i * 3^j * 5^k

main = putStrLn $ show $ nextRegular $ (trival (1334,335,404)) + 1 -- (1126,16930,40)
This code calculates the next regular number following the billionth in too small a time to be measured and following the trillionth in 0.69 seconds on IDEOne (and potentially could run even faster except that IDEOne doesn't support LLVM). Even Julia will run at something like this Haskell speed after the "warm-up" for JIT compilation.
EDIT_ADD: The Julia code is as per the following:
function nextregular(target :: BigInt) :: Tuple{ UInt32, UInt32, UInt32 }
    # trivial case of first value or anything less...
    target < 2 && return ( 0, 0, 0 )
    # Check if it's already a power of 2 (or a non-integer)
    mant = target & (target - 1)
    # Quickly find next power of 2 >= target
    log2hi :: UInt32 = 0
    test = target
    while true
        next = test & 0x7FFFFFFF
        test >>>= 31; log2hi += 31
        test <= 0 && (log2hi -= leading_zeros(UInt32(next)) - 1; break)
    end
    # exit if this is a power of two already...
    mant == 0 && return ( log2hi - 1, 0, 0 )
    # take care of trivial cases...
    if target < 9
        target < 4 && return ( 0, 1, 0 )
        target < 6 && return ( 0, 0, 1 )
        target < 7 && return ( 1, 1, 0 )
        return ( 3, 0, 0 )
    end
    # find log of target, which may exceed the Float64 limit...
    if log2hi < 53 mant = target << (53 - log2hi)
    else mant = target >>> (log2hi - 53) end
    log2target = log2hi + log(2, Float64(mant) / (1 << 53))
    # log2 constants
    log2of2 = 1.0; log2of3 = log(2, 3); log2of5 = log(2, 5)
    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target^3 + 2 * fctr)^(1/3) # for 2 numbers or so higher
    btm = 2 * log2target - top # or 2 numbers or so lower
    # scan for values in the given narrow range that satisfy the criteria...
    match = log2hi # Anything found will be smaller
    result :: Tuple{UInt32,UInt32,UInt32} = ( log2hi, 0, 0 ) # placeholder for eventual matches
    fives :: UInt32 = 0; fiveslmt = UInt32(ceil(top / log2of5))
    while fives < fiveslmt
        log2p = top - fives * log2of5
        threes :: UInt32 = 0; threeslmt = UInt32(ceil(log2p / log2of3))
        while threes < threeslmt
            log2q = log2p - threes * log2of3
            twos = UInt32(floor(log2q)); log2this = top - log2q + twos
            if log2this >= btm && log2this < match
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (big(2)^twos * big(3)^threes * big(5)^fives) >= target
                    match = log2this; result = ( twos, threes, fives )
                end
            end
            threes += 1
        end
        fives += 1
    end
    result
end
Here's another possibility I just thought of:
If N is X bits long, then the smallest regular number R ≥ N will be in the range [2^(X-1), 2^X]
e.g. if N = 257 (binary 100000001) then we know R is 1xxxxxxxx unless R is exactly equal to the next power of 2 (512)
To generate all the regular numbers in this range, we can generate the odd regular numbers (i.e. multiples of powers of 3 and 5) first, then take each value and multiply by 2 (by bit-shifting) as many times as necessary to bring it into this range.
In Python 2:
from itertools import ifilter, takewhile
from Queue import PriorityQueue

def nextPowerOf2(n):
    p = max(1, n)
    while p != (p & -p):
        p += p & -p
    return p

# Generate multiples of powers of 3, 5
def oddRegulars():
    q = PriorityQueue()
    q.put(1)
    prev = None
    while not q.empty():
        n = q.get()
        if n != prev:
            prev = n
            yield n
            if n % 3 == 0:
                q.put(n // 3 * 5)
            q.put(n * 3)

# Generate regular numbers with the same number of bits as n
def regularsCloseTo(n):
    p = nextPowerOf2(n)
    numBits = len(bin(n))
    for i in takewhile(lambda x: x <= p, oddRegulars()):
        yield i << max(0, numBits - len(bin(i)))

def nextRegular(n):
    bigEnough = ifilter(lambda x: x >= n, regularsCloseTo(n))
    return min(bigEnough)
You know what? I'll put money on the proposition that the 'dumb' algorithm is actually fastest. This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input. So simply start counting up, and after each increment, refactor and see if you've found a regular number. But create one processing thread for each available core, and for N cores have each thread examine every Nth number. When each thread has found a number or crossed the power-of-2 threshold, compare the results (keep a running best number) and there you are.
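A single-threaded Python sketch of that idea (the threading layer, which would just stripe n, n+1, n+2, ... across cores, is omitted; the names are mine):
def next_regular_bruteforce(n):
    # count upward, fully dividing out 2, 3 and 5 at each stop; the first
    # candidate that reduces to 1 is the next regular number
    while True:
        m = n
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

# e.g. next_regular_bruteforce(7) == 8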
I wrote a small C# program to solve this problem. It's not very optimised, but it's a start.
This solution is pretty fast for numbers as big as 11 digits.
private long GetRegularNumber(long n)
{
    long result = n - 1;
    long quotient = result;

    while (quotient > 1)
    {
        result++;
        quotient = result;

        quotient = RemoveFactor(quotient, 2);
        quotient = RemoveFactor(quotient, 3);
        quotient = RemoveFactor(quotient, 5);
    }

    return result;
}

private static long RemoveFactor(long dividend, long divisor)
{
    long remainder = 0;
    long quotient = dividend;
    while (remainder == 0)
    {
        dividend = quotient;
        quotient = Math.DivRem(dividend, divisor, out remainder);
    }
    return dividend;
}
What would be the most optimal algorithm (performance-wise) to calculate the number of divisors of a given number?
It'll be great if you could provide pseudocode or a link to some example.
EDIT: All the answers have been very helpful, thank you. I'm implementing the Sieve of Atkin and then I'm going to use something similar to what Jonathan Leffler indicated. The link posted by Justin Bozonier has further information on what I wanted.
Dmitriy is right that you'll want the Sieve of Atkin to generate the prime list but I don't believe that takes care of the whole issue. Now that you have a list of primes you'll need to see how many of those primes act as a divisor (and how often).
Here's some Python for the algo: look here and search for "Subject: math - need divisors algorithm". Just count the number of items in the list instead of returning them, however.
Here's a Dr. Math page that explains exactly what it is you need to do mathematically.
Essentially it boils down to if your number n is:
n = a^x * b^y * c^z
(where a, b, and c are n's prime divisors and x, y, and z are the number of times that divisor is repeated)
then the total count for all of the divisors is:
(x + 1) * (y + 1) * (z + 1).
Edit: BTW, to find a, b, c, etc. you'll want to do what amounts to a greedy algorithm, if I'm understanding this correctly. Start with your largest prime divisor and multiply it by itself until a further multiplication would exceed the number n. Then move to the next-lowest factor, multiply the previous prime raised to its count by the current prime, and keep multiplying by that prime until the next step would exceed n, and so on. Keep track of the number of times you multiply each divisor together and apply those numbers into the formula above.
I'm not 100% sure about my algorithm description, but if that isn't it, it's something similar.
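In plain code, that greedy description amounts to trial division; a rough Python sketch of the whole count (assuming trial division is acceptable for the input sizes at hand):
def count_divisors(n):
    # factor n by trial division, multiplying (exponent + 1) per prime
    count = 1
    p = 2
    while p * p <= n:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        count *= exponent + 1
        p += 1
    if n > 1:            # one prime factor larger than sqrt(n) remains
        count *= 2
    return count

# e.g. count_divisors(360) == 24, since 360 = 2^3 * 3^2 * 5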
There are a lot more techniques for factoring than the sieve of Atkin. For example, suppose we want to factor 5893. Well, its sqrt is 76.76... Now we'll try to write 5893 as a difference of squares. Well, (77*77 - 5893) = 36, which is 6 squared, so 5893 = 77*77 - 6*6 = (77 + 6)(77 - 6) = 83*71. If that hadn't worked, we'd have looked at whether 78*78 - 5893 was a perfect square. And so on. With this technique you can quickly test for factors near the square root of n much faster than by testing individual primes. If you combine this technique for ruling out large primes with a sieve, you will have a much better factoring method than with the sieve alone.
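A minimal sketch of that difference-of-squares search (Fermat's method), assuming an odd composite n:
import math

def fermat_factor(n):
    # search for a, b with n = a^2 - b^2 = (a - b)(a + b), starting
    # from a = ceil(sqrt(n)); works best when the factors are close
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# e.g. fermat_factor(5893) == (71, 83), found on the very first step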
And this is just one of a large number of techniques that have been developed. This is a fairly simple one. It would take you a long time to learn, say, enough number theory to understand the factoring techniques based on elliptic curves. (I know they exist. I don't understand them.)
Therefore unless you are dealing with small integers, I wouldn't try to solve that problem myself. Instead I'd try to find a way to use something like the PARI library that already has a highly efficient solution implemented. With that I can factor a random 40 digit number like 124321342332143213122323434312213424231341 in about .05 seconds. (Its factorization, in case you wondered, is 29*439*1321*157907*284749*33843676813*4857795469949. I am quite confident that it didn't figure this out using the sieve of Atkin...)
@Yasky
Your divisors function has a bug in that it does not work correctly for perfect squares.
Try:
int divisors(int x) {
    int limit = x;
    int numberOfDivisors = 0;

    if (x == 1) return 1;

    for (int i = 1; i < limit; ++i) {
        if (x % i == 0) {
            limit = x / i;
            if (limit != i) {
                numberOfDivisors++;
            }
            numberOfDivisors++;
        }
    }

    return numberOfDivisors;
}
I disagree that the sieve of Atkin is the way to go, because it could easily take longer to check every number in [1,n] for primality than it would to reduce the number by divisions.
Here's some code that, although slightly hackier, is generally much faster:
import operator
from functools import reduce

# A slightly efficient superset of primes.
def PrimesPlus():
    yield 2
    yield 3
    i = 5
    while True:
        yield i
        if i % 6 == 1:
            i += 2
        i += 2

# Returns a dict d with n = product p ^ d[p]
def GetPrimeDecomp(n):
    d = {}
    primes = PrimesPlus()
    for p in primes:
        while n % p == 0:
            n //= p
            d[p] = d.setdefault(p, 0) + 1
        if n == 1:
            return d

def NumberOfDivisors(n):
    d = GetPrimeDecomp(n)
    powers_plus = map(lambda x: x + 1, d.values())
    return reduce(operator.mul, powers_plus, 1)
P.S. That's working Python code to solve this problem.
Here is a straightforward O(sqrt(n)) algorithm. I used this to solve Project Euler.
def divisors(n):
    count = 2  # accounts for 'n' and '1'
    i = 2
    while i ** 2 < n:
        if n % i == 0:
            count += 2
        i += 1
    if i ** 2 == n:
        count += 1
    return count
This interesting question is much harder than it looks, and it has not been answered. The question can be factored into 2 very different questions.
1 given N, find the list L of N's prime factors
2 given L, calculate number of unique combinations
All answers I see so far refer to #1 and fail to mention it is not tractable for enormous numbers. For moderately sized N, even 64-bit numbers, it is easy; for enormous N, the factoring problem can take "forever". Public key encryption depends on this.
Question #2 needs more discussion. If L contains only unique numbers, it is a simple calculation using the combination formula for choosing k objects from n items. Actually, you need to sum the results from applying the formula while varying k from 1 to sizeof(L). However, L will usually contain multiple occurrences of multiple primes. For example, L = {2,2,2,3,3,5} is the factorization of N = 360. Now this problem is quite difficult!
Restating #2: given a collection C containing k items such that item a has a' duplicates, item b has b' duplicates, etc., how many unique combinations of 1 to k-1 items are there? For example, {2}, {2,2}, {2,2,2}, {2,3}, and {2,2,3,3} must each occur once and only once if L = {2,2,2,3,3,5}. Each such unique sub-collection is a unique divisor of N, obtained by multiplying the items in the sub-collection.
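For what it's worth, grouping the duplicates by multiplicity reduces the restated count to the same (exponent + 1) product quoted in other answers; a small Python sketch (the count here includes the empty selection, i.e. the divisor 1, and the full selection N):
from collections import Counter
from functools import reduce

def unique_subcollections(L):
    # e.g. L = [2, 2, 2, 3, 3, 5] (N = 360) gives 4 * 3 * 2 = 24
    multiplicities = Counter(L).values()
    return reduce(lambda acc, m: acc * (m + 1), multiplicities, 1)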
An answer to your question depends greatly on the size of the integer. Methods for small numbers, e.g. less than 100 bits, and for numbers of ~1000 bits (such as are used in cryptography) are completely different.
general overview: http://en.wikipedia.org/wiki/Divisor_function
values for small n and some useful references: A000005: d(n) (also called tau(n) or sigma_0(n)), the number of divisors of n.
real-world example: factorization of integers
JUST one line!
I have thought very carefully about your question and have tried to write a highly efficient and performant piece of code.
To print all divisors of a given number on screen, we need just one line of code!
(use the option -std=c99 when compiling via gcc)
for(int i=1,n=9;((!(n%i)) && printf("%d is a divisor of %d\n",i,n)) || i<=(n/2);i++);//n is your number
For finding the number of divisors, you can use the following very, very fast function (works correctly for all integers except 1 and 2):
int number_of_divisors(int n)
{
    int counter, i;
    for (counter = 0, i = 1; (!(n % i) && (counter++)) || i <= (n / 2); i++);
    return counter;
}
Or, if you treat the given number as a divisor of itself (works correctly for all integers except 1 and 2):
int number_of_divisors(int n)
{
    int counter, i;
    for (counter = 0, i = 1; (!(n % i) && (counter++)) || i <= (n / 2); i++);
    return ++counter;
}
NOTE: the two functions above work correctly for all positive integers except 1 and 2, so they are functional for all numbers greater than 2; but if you need to cover 1 and 2 as well, you can use one of the following functions (a little slower):
int number_of_divisors(int n)
{
    int counter, i;
    for (counter = 0, i = 1; (!(n % i) && (counter++)) || i <= (n / 2); i++);
    if (n == 2 || n == 1)
    {
        return counter;
    }
    return ++counter;
}
OR
int number_of_divisors(int n)
{
    int counter, i;
    for (counter = 0, i = 1; (!(i == n) && !(n % i) && (counter++)) || i <= (n / 2); i++);
    return ++counter;
}
small is beautiful :)
The sieve of Atkin is an optimized version of the sieve of Eratosthenes which gives all prime numbers up to a given integer. You should be able to google this for more detail.
Once you have that list, it's a simple matter to divide your number by each prime to see if it's an exact divisor (i.e., remainder is zero).
The basic steps for calculating the divisors of a number (n) are [this is pseudocode converted from real code, so I hope I haven't introduced errors]:
for z in 1..n:
    prime[z] = false
prime[2] = true
prime[3] = true
for x in 1..sqrt(n):
    xx = x * x
    for y in 1..sqrt(n):
        yy = y * y
        z = 4*xx + yy
        if (z <= n) and ((z mod 12 == 1) or (z mod 12 == 5)):
            prime[z] = not prime[z]
        z = z - xx
        if (z <= n) and (z mod 12 == 7):
            prime[z] = not prime[z]
        z = z - yy - yy
        if (z <= n) and (x > y) and (z mod 12 == 11):
            prime[z] = not prime[z]
for z in 5..sqrt(n):
    if prime[z]:
        zz = z*z
        x = zz
        while x <= n:
            prime[x] = false
            x = x + zz
for z in 2,3,5..n:
    if prime[z]:
        if n mod z == 0 then print z
You might try this one. It's a bit hackish, but it's reasonably fast.
def factors(n):
    for x in xrange(2, n):
        if n % x == 0:
            return (x,) + factors(n / x)
    return (n, 1)
Once you have the prime factorization, there is a way to find the number of divisors. Add one to each of the exponents on each individual factor and then multiply the exponents together.
For example:
36
Prime Factorization: 2^2*3^2
Divisors: 1, 2, 3, 4, 6, 9, 12, 18, 36
Number of Divisors: 9
Add one to each exponent: 2^3*3^3
Multiply the exponents: 3*3 = 9
Before you commit to a solution consider that the Sieve approach might not be a good answer in the typical case.
A while back there was a prime question and I did a time test -- for 32-bit integers, at least, determining whether a number was prime was slower than brute force. There are two factors going on:
1) While a human takes a while to do a division, it is very quick on the computer -- similar to the cost of looking up the answer.
2) If you do not have a prime table, you can make a loop that runs entirely in the L1 cache. This makes it faster.
This is an efficient solution:
#include <iostream>

int main() {
    int num = 20;
    int numberOfDivisors = 1;

    for (int i = 2; i <= num; i++)
    {
        int exponent = 0;
        while (num % i == 0) {
            exponent++;
            num /= i;
        }

        numberOfDivisors *= (exponent + 1);
    }

    std::cout << numberOfDivisors << std::endl;
    return 0;
}
Divisors do something spectacular: they divide completely. If you want to check the number of divisors for a number n, it is clearly redundant to span the whole spectrum 1...n. I have not done any in-depth research into this, but I did solve Project Euler's problem 12 on triangular numbers. My solution for the greater-than-500-divisors test ran for 309504 microseconds (~0.3 s). I wrote this divisor function for the solution.
int divisors (int x) {
    int limit = x;
    int numberOfDivisors = 0;

    for (int i = 1; i < limit; ++i) {
        if (x % i == 0) {
            limit = x / i;
            numberOfDivisors++;
        }
    }

    return numberOfDivisors * 2;
}
To every algorithm there is a weak point; I thought this one was weak against prime numbers. But since triangular numbers are not prime, it served its purpose flawlessly. From my profiling, I think it did pretty well.
Happy Holidays.
You want the Sieve of Atkin, described here: http://en.wikipedia.org/wiki/Sieve_of_Atkin
Number theory textbooks call the divisor-counting function tau. The first interesting fact is that it's multiplicative, i.e. τ(ab) = τ(a)τ(b) when a and b have no common factor. (Proof: each pair of divisors of a and b gives a distinct divisor of ab.)
Now note that for p a prime, τ(p**k) = k+1 (the powers of p). Thus you can easily compute τ(n) from its factorisation.
However, factorising large numbers can be slow (the security of RSA cryptography depends on the product of two large primes being hard to factorise). That suggests this optimised algorithm:
Test if the number is prime (fast)
If so, return 2
Otherwise, factorise the number (slow if multiple large prime factors)
Compute τ(n) from the factorisation
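A Python sketch of that plan, leaning on sympy for the primality test and the factorisation (an assumption of convenience; any equivalents would do):
from sympy import isprime, factorint

def tau(n):
    if n == 1:
        return 1
    if isprime(n):          # fast path: a prime has exactly 2 divisors
        return 2
    count = 1
    for exponent in factorint(n).values():   # {prime: exponent}
        count *= exponent + 1
    return count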
This is the most basic way of computing the number of divisors:
import java.util.Scanner;

class PrintDivisors
{
    public static void main(String args[])
    {
        System.out.println("Enter the number");

        // Create Scanner object for taking input
        Scanner s = new Scanner(System.in);

        // Read an int
        int n = s.nextInt();

        // Loop from 1 to 'n'
        for (int i = 1; i <= n; i++)
        {
            // If remainder is 0 when 'n' is divided by 'i'
            if (n % i == 0)
            {
                System.out.print(i + ", ");
            }
        }

        // Print [not necessary]
        System.out.print("are divisors of " + n);
    }
}
The prime-number method is very clear here. P[] is a list of the prime numbers less than or equal to sq = sqrt(n), and count should start at 1:
for (int i = 0; i < size && P[i] <= sq; i++) {
    nd = 1;
    while (n % P[i] == 0) {
        n /= P[i];
        nd++;
    }
    count *= nd;
    if (n == 1) break;
}
if (n != 1) count *= 2; // "the confusing line": a single prime factor > sqrt(n) remains
I will leave the understanding of it to the reader.
I now look forward to a more optimized method.
The following is a C program to find the number of divisors of a given number.
The complexity of the algorithm is O(sqrt(n)).
This algorithm works correctly for numbers which are perfect squares as well as numbers which are not.
Note that the upper limit of the loop is set to the square root of the number to make the algorithm most efficient.
Note that storing the upper limit in a separate variable also saves time: you should not call the sqrt function in the condition section of the for loop, as that wastes computational time.
#include <stdio.h>
#include <math.h>

int main()
{
    int i, n, limit, numberOfDivisors = 2; /* counts 1 and n (assumes n > 1) */
    printf("Enter the number : ");
    scanf("%d", &n);
    limit = (int)sqrt((double)n);
    for (i = 2; i <= limit; i++)
        if (n % i == 0)
        {
            if (i != n / i)
                numberOfDivisors += 2;
            else
                numberOfDivisors++;
        }
    printf("%d\n", numberOfDivisors);
    return 0;
}
Instead of the above for loop you can also use the following loop which is even more efficient as this removes the need to find the square-root of the number.
for(i=2;i*i<=n;i++)
{
...
}
Here is a function that I wrote. Its worst-case time complexity is O(sqrt(n)); its best time, on the other hand, is O(log(n)). It gives you all the prime divisors along with the number of their occurrences.
public static List<Integer> divisors(int n) {
    ArrayList<Integer> aList = new ArrayList<>();
    int top_count = (int) Math.round(Math.sqrt(n));
    int new_n = n;

    for (int i = 2; i <= top_count; i++) {
        if (new_n == (new_n / i) * i) {
            aList.add(i);
            new_n = new_n / i;
            top_count = (int) Math.round(Math.sqrt(new_n));
            i = 1; // restart the scan from 2 on the next iteration
        }
    }
    aList.add(new_n);
    return aList;
}
@Kendall
I tested your code and made some improvements; now it is even faster.
I also tested @هومن جاویدپور's code; this is also faster than his code.
long long int FindDivisors(long long int n) {
    long long int count = 0;
    long long int i, m = (long long int)sqrt(n);
    for (i = 1; i <= m; i++) {
        if (n % i == 0)
            count += 2;
    }
    if (n / m == m && n % m == 0)
        count--;
    return count;
}
Isn't this just a question of factoring the number - determining all the factors of the number? You can then decide whether you need all combinations of one or more factors.
So, one possible algorithm would be:
factor(N)
    divisor = first_prime
    list_of_factors = { 1 }
    while (N > 1)
        while (N % divisor == 0)
            add divisor to list_of_factors
            N /= divisor
        divisor = next_prime
    return list_of_factors
It is then up to you to combine the factors to determine the rest of the answer.
I think this is what you are looking for. It does exactly what you asked for.
Copy and paste it into Notepad, save it as *.bat, run it, and enter a number. Multiply the process count by 2 and that's the number of divisors. I made it that way on purpose so that it determines the divisors faster:
Please note that a CMD variable can't support values over 999999999.
#echo off
mode con: cols=100 lines=100
:start
title Enter the Number to Determine
cls
echo Determine a number as a product of 2 numbers
echo.
echo Ex1 : C = A * B
echo Ex2 : 8 = 4 * 2
echo.
echo Max Number length is 9
echo.
echo If there is only 1 proces done it
echo means the number is a prime number
echo.
echo Prime numbers take time to determine
echo Number not prime are determined fast
echo.
set /p number=Enter Number :
if %number% GTR 999999999 goto start
echo.
set proces=0
set mindet=0
set procent=0
set B=%Number%
:Determining
set /a mindet=%mindet%+1
if %mindet% GTR %B% goto Results
set /a solution=%number% %%% %mindet%
if %solution% NEQ 0 goto Determining
if %solution% EQU 0 set /a proces=%proces%+1
set /a B=%number% / %mindet%
set /a procent=%mindet%*100/%B%
if %procent% EQU 100 set procent=%procent:~0,3%
if %procent% LSS 100 set procent=%procent:~0,2%
if %procent% LSS 10 set procent=%procent:~0,1%
title Progress : %procent% %%%
if %solution% EQU 0 echo %proces%. %mindet% * %B% = %number%
goto Determining
:Results
title %proces% Results Found
echo.
#pause
goto start
I guess this one will be handy as well as precise. Python script:
>>> factors = [x for x in range(1, n + 1) if n % x == 0]
>>> print len(factors)
Try something along these lines:
int divisors(int myNum) {
    int limit = myNum;
    int divisorCount = 0;

    if (myNum == 1)
        return 1;

    for (int i = 1; i < limit; ++i) {
        if (myNum % i == 0) {
            limit = myNum / i;
            if (limit != i)
                divisorCount++;
            divisorCount++;
        }
    }
    return divisorCount;
}
I don't know the MOST efficient method, but I'd do the following:
Create a table of primes to find all primes less than or equal to the square root of the number (Personally, I'd use the Sieve of Atkin)
Count all primes less than or equal to the square root of the number and multiply that by two. If the square root of the number is an integer, then subtract one from the count variable.
Should work \o/
If you need, I can code something up tomorrow in C to demonstrate.