Finding the index of a given permutation - algorithm

I'm reading the numbers 0, 1, ..., (N - 1) one by one in some order. My goal is to find the lexicographic index of this given permutation, using only O(1) space.
This question has been asked before, but all the algorithms I could find use O(N) space. I'm starting to think it's not possible, but it would really help me reduce the number of allocations.

Considering the following data:
chars = [a, b, c, d]
perm = [c, d, a, b]
ids = get_indexes(perm, chars) = [2, 3, 0, 1]
A possible solution for permutation with repetitions goes as follows:
len = length(perm) (len = 4)
num_chars = length(chars) (num_chars = 4)
base = num_chars ^ len (base = 4 ^ 4 = 256)
base = base / num_chars (base = 256 / 4 = 64)
id = base * ids[0] (id = 64 * 2 = 128)
base = base / num_chars (base = 64 / 4 = 16)
id = id + (base * ids[1]) (id = 128 + (16 * 3) = 176)
base = base / num_chars (base = 16 / 4 = 4)
id = id + (base * ids[2]) (id = 176 + (4 * 0) = 176)
base = base / num_chars (base = 4 / 4 = 1)
id = id + (base * ids[3]) (id = 176 + (1 * 1) = 177)
Reverse process:
id = 177
(id / (4 ^ 3)) % 4 = (177 / 64) % 4 = 2 % 4 = 2 -> chars[2] -> c
(id / (4 ^ 2)) % 4 = (177 / 16) % 4 = 11 % 4 = 3 -> chars[3] -> d
(id / (4 ^ 1)) % 4 = (177 / 4) % 4 = 44 % 4 = 0 -> chars[0] -> a
(id / (4 ^ 0)) % 4 = (177 / 1) % 4 = 177 % 4 = 1 -> chars[1] -> b
The number of possible permutations is given by num_chars ^ num_perm_digits, having num_chars as the number of possible characters, and num_perm_digits as the number of digits in a permutation.
This requires O(1) in space, considering the initial list as a constant cost; and it requires O(N) in time, considering N as the number of digits your permutation will have.
Based on the steps above, you can do:
function identify_permutation(perm, chars) {
    for (i = 0; i < length(perm); i++) {
        ids[i] = get_index(perm[i], chars);
    }

    len = length(perm);
    num_chars = length(chars);

    index = 0;
    base = num_chars ^ (len - 1);

    for (i = 0; i < length(perm); i++) {
        index = index + (base * ids[i]);
        base = base / num_chars;
    }

    return index;
}
It's pseudocode, but it's also quite easy to convert to any language (:
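As a concrete reference, here is a small runnable Python version of the same scheme, including the reverse step (a sketch; the function names are mine, not from the answer above):

    def identify_permutation(perm, chars):
        # Treat the sequence as a base-num_chars number (repetitions allowed).
        num_chars = len(chars)
        index = 0
        for symbol in perm:
            index = index * num_chars + chars.index(symbol)
        return index

    def permutation_from_index(index, length, chars):
        # Inverse of identify_permutation.
        num_chars = len(chars)
        return [chars[(index // num_chars ** p) % num_chars]
                for p in range(length - 1, -1, -1)]

    # identify_permutation(['c', 'd', 'a', 'b'], ['a', 'b', 'c', 'd'])  -> 177
    # permutation_from_index(177, 4, ['a', 'b', 'c', 'd'])              -> ['c', 'd', 'a', 'b']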

If you are looking for a way to obtain the lexicographic index or rank of a unique combination instead of a permutation, then your problem falls under the binomial coefficient. The binomial coefficient handles problems of choosing unique combinations in groups of K with a total of N items.
I have written a class in C# to handle common functions for working with the binomial coefficient. It performs the following tasks:
Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters.
Converts the K-indexes to the proper lexicographic index or rank of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle and is very efficient compared to iterating over the set.
Converts the index in a sorted binomial coefficient table to the corresponding K-indexes. I believe it is also faster than older iterative solutions.
Uses Mark Dominus's method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to use the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing The Binomial Coefficient.
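As a rough illustration of the ranking idea (the standard combinatorial number system, not the author's C# implementation, whose ordering conventions may differ), here is a short Python sketch:

    from math import comb

    def combination_rank(k_indexes):
        # Rank of a combination of distinct indexes using the sum-of-binomial-
        # coefficients property of Pascal's Triangle: for descending indexes
        # c_k > ... > c_1 the rank is comb(c_k, k) + ... + comb(c_1, 1).
        idx = sorted(k_indexes, reverse=True)
        k = len(idx)
        return sum(comb(c, k - pos) for pos, c in enumerate(idx))

    # Example: out of 10 choose 5, the combination (9, 8, 7, 6, 5) is the last
    # one, with rank comb(10, 5) - 1 == 251.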
The following tested code will iterate through each unique combination:
public void Test10Choose5()
{
    String S;
    int Loop;
    int N = 10;  // Total number of elements in the set.
    int K = 5;   // Total number of elements in each group.

    // Create the bin coeff object required to get all
    // the combos for this N choose K combination.
    BinCoeff<int> BC = new BinCoeff<int>(N, K, false);
    int NumCombos = BinCoeff<int>.GetBinCoeff(N, K);

    // The KIndexes array specifies the indexes for a lexicographic element.
    int[] KIndexes = new int[K];
    StringBuilder SB = new StringBuilder();

    // Loop thru all the combinations for this N choose K case.
    for (int Combo = 0; Combo < NumCombos; Combo++)
    {
        // Get the K-indexes for this combination.
        BC.GetKIndexes(Combo, KIndexes);

        // Verify that the KIndexes returned can be used to retrieve the
        // rank or lexicographic order of the KIndexes in the table.
        int Val = BC.GetIndex(true, KIndexes);
        if (Val != Combo)
        {
            S = "Val of " + Val.ToString() + " != Combo Value of " + Combo.ToString();
            Console.WriteLine(S);
        }

        SB.Remove(0, SB.Length);
        for (Loop = 0; Loop < K; Loop++)
        {
            SB.Append(KIndexes[Loop].ToString());
            if (Loop < K - 1)
                SB.Append(" ");
        }
        S = "KIndexes = " + SB.ToString();
        Console.WriteLine(S);
    }
}
You should be able to port this class over fairly easily to the language of your choice. You probably will not have to port over the generic part of the class to accomplish your goals. Depending on the number of combinations you are working with, you might need to use a bigger word size than 4 byte ints.

There is a Java solution to this problem on geekviewpoint. It has a good explanation of why it works, and the code is easy to follow. http://www.geekviewpoint.com/java/numbers/permutation_index. It also has a unit test that runs the code with different inputs.

There are N! permutations. To represent the index of one of them you need at least log2(N!) bits, which grows like N*log2(N), so truly constant space is not possible.
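A quick way to see how fast the index itself grows (a small Python check, not part of the original answer):

    from math import factorial, log2

    # bits needed just to write down the index of one of the N! permutations
    for N in (10, 20, 30):
        print(N, round(log2(factorial(N)), 1))   # 10 -> 21.8, 20 -> 61.1, 30 -> 107.7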

Here is a way to do it if you want to assume that arithmetic operations are constant time:
def permutationIndex(numbers):
    n = len(numbers)
    result = 0
    j = 0
    while j < n:
        # Determine factor, which is the number of possible permutations of
        # the remaining digits.
        i = 1
        factor = 1
        while i < n - j:
            factor *= i
            i += 1
        i = 0
        # Determine index, which is how many previous digits there were at
        # the current position.
        index = numbers[j]
        while i < j:
            # Only the digits that weren't used so far are valid choices, so
            # the index gets reduced if the number at the current position
            # is greater than one of the previous digits.
            if numbers[i] < numbers[j]:
                index -= 1
            i += 1
        # Update the result.
        result += index * factor
        j += 1
    return result
I've purposefully written out certain calculations that could be done more simply using some Python built-in operations, but I wanted to make it more obvious that no extra non-constant amount of space was being used.
As maxim1000 noted, the number of bits required to represent the result will grow quickly as n increases, so eventually big integers will be required, which no longer have constant-time arithmetic, but I think this code addresses the spirit of your question.
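For instance, a quick sanity check of the function above (the expected value follows from counting the permutations that precede [2, 3, 0, 1] in lexicographic order):

    # The 12 permutations starting with 0 or 1 come first, then
    # [2,0,1,3], [2,0,3,1], [2,1,0,3], [2,1,3,0], so the index is 16.
    print(permutationIndex([2, 3, 0, 1]))   # -> 16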

Nothing really new in the idea, but here is a fully vectorized method with no explicit loop or recursion (using NumPy, but easy to adapt):
import numpy as np
import math

vfact = np.vectorize(math.factorial, otypes='O')

def perm_index(p):
    return np.dot(vfact(range(len(p) - 1, -1, -1)),
                  p - np.sum(np.triu(p > np.vstack(p)), axis=0))

I just wrote some code in Visual Basic, and my program can directly calculate every index, or every permutation corresponding to a given index, for up to 17 elements (this limit is due to my compiler's scientific-notation approximation of numbers over 17!).
If you are interested I can send the program or publish it somewhere for download.
It works fine and it can be useful for testing and comparing against the output of your own code.
I used the factoradic method described by James D. McCaffrey; you can read about it here and also here (in the discussion at the end of the page).

Related

Generating random number in the range 0-N [duplicate]

I have seen this question asked a lot but never seen a true concrete answer to it. So I am going to post one here which will hopefully help people understand why exactly there is "modulo bias" when using a random number generator, like rand() in C++.
So rand() is a pseudo-random number generator which chooses a natural number between 0 and RAND_MAX, which is a constant defined in cstdlib (see this article for a general overview on rand()).
Now what happens if you want to generate a random number between say 0 and 2? For the sake of explanation, let's say RAND_MAX is 10 and I decide to generate a random number between 0 and 2 by calling rand()%3. However, rand()%3 does not produce the numbers between 0 and 2 with equal probability!
When rand() returns 0, 3, 6, or 9, rand()%3 == 0. Therefore, P(0) = 4/11
When rand() returns 1, 4, 7, or 10, rand()%3 == 1. Therefore, P(1) = 4/11
When rand() returns 2, 5, or 8, rand()%3 == 2. Therefore, P(2) = 3/11
This does not generate the numbers between 0 and 2 with equal probability. Of course for small ranges this might not be the biggest issue but for a larger range this could skew the distribution, biasing the smaller numbers.
So when does rand()%n return a range of numbers from 0 to n-1 with equal probability? When RAND_MAX%n == n - 1. In this case, along with our earlier assumption rand() does return a number between 0 and RAND_MAX with equal probability, the modulo classes of n would also be equally distributed.
So how do we solve this problem? A crude way is to keep generating random numbers until you get a number in your desired range:
int x;
do {
    x = rand();
} while (x >= n);
but that's inefficient for low values of n, since you only have an n/RAND_MAX chance of getting a value in your range, and so you'll need to perform RAND_MAX/n calls to rand() on average.
A more efficient approach would be to take some large range whose length is divisible by n, like RAND_MAX - RAND_MAX % n, keep generating random numbers until you get one that lies in that range, and then take the modulus:
int x;
do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));
x %= n;
For small values of n, this will rarely require more than one call to rand().
Works cited and further reading:
CPlusPlus Reference
Eternally Confuzzled
Repeatedly selecting a random number until it falls in range is a good way to remove the bias.
Update
We can make the code fast if we search for an x in a range divisible by n.
// Assumptions
// rand() in [0, RAND_MAX]
// n in (0, RAND_MAX]
int x;

// Keep searching for an x in a range divisible by n
do {
    x = rand();
} while (x >= RAND_MAX - (RAND_MAX % n));

x %= n;
The above loop should be very fast, say 1 iteration on average.
#user1413793 is correct about the problem. I'm not going to discuss that further, except to make one point: yes, for small values of n and large values of RAND_MAX, the modulo bias can be very small. But using a bias-inducing pattern means that you must consider the bias every time you calculate a random number and choose different patterns for different cases. And if you make the wrong choice, the bugs it introduces are subtle and almost impossible to unit test. Compared to just using the proper tool (such as arc4random_uniform), that's extra work, not less work. Doing more work and getting a worse solution is terrible engineering, especially when doing it right every time is easy on most platforms.
Unfortunately, the implementations of the solution are all incorrect or less efficient than they should be. (Each solution has various comments explaining the problems, but none of the solutions have been fixed to address them.) This is likely to confuse the casual answer-seeker, so I'm providing a known-good implementation here.
Again, the best solution is just to use arc4random_uniform on platforms that provide it, or a similar ranged solution for your platform (such as Random.nextInt on Java). It will do the right thing at no code cost to you. This is almost always the correct call to make.
If you don't have arc4random_uniform, then you can use the power of open source to see exactly how it is implemented on top of a wider-range RNG (arc4random in this case, but a similar approach could also work on top of other RNGs).
Here is the OpenBSD implementation:
/*
 * Calculate a uniformly distributed random number less than upper_bound
 * avoiding "modulo bias".
 *
 * Uniformity is achieved by generating new random numbers until the one
 * returned is outside the range [0, 2**32 % upper_bound).  This
 * guarantees the selected random number will be inside
 * [2**32 % upper_bound, 2**32) which maps back to [0, upper_bound)
 * after reduction modulo upper_bound.
 */
u_int32_t
arc4random_uniform(u_int32_t upper_bound)
{
    u_int32_t r, min;

    if (upper_bound < 2)
        return 0;

    /* 2**32 % x == (2**32 - x) % x */
    min = -upper_bound % upper_bound;

    /*
     * This could theoretically loop forever but each retry has
     * p > 0.5 (worst case, usually far better) of selecting a
     * number inside the range we need, so it should rarely need
     * to re-roll.
     */
    for (;;) {
        r = arc4random();
        if (r >= min)
            break;
    }

    return r % upper_bound;
}
It is worth noting the latest commit comment on this code for those who need to implement similar things:
Change arc4random_uniform() to calculate 2**32 % upper_bound as
-upper_bound % upper_bound. Simplifies the code and makes it the
same on both ILP32 and LP64 architectures, and also slightly faster on
LP64 architectures by using a 32-bit remainder instead of a 64-bit
remainder.
Pointed out by Jorden Verwer on tech#
ok deraadt; no objections from djm or otto
The Java implementation is also easily findable (see previous link):
public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
Definition
Modulo Bias is the inherent bias in using modulo arithmetic to reduce an output set to a subset of the input set. In general, a bias exists whenever the mapping between the input and output set is not equally distributed, as in the case of using modulo arithmetic when the size of the output set is not a divisor of the size of the input set.
This bias is particularly hard to avoid in computing, where numbers are represented as strings of bits: 0s and 1s. Finding truly random sources of randomness is also extremely difficult, but is beyond the scope of this discussion. For the remainder of this answer, assume that there exists an unlimited source of truly random bits.
Problem Example
Let's consider simulating a die roll (0 to 5) using these random bits. There are 6 possibilities, so we need enough bits to represent the number 6, which is 3 bits. Unfortunately, 3 random bits yields 8 possible outcomes:
000 = 0, 001 = 1, 010 = 2, 011 = 3
100 = 4, 101 = 5, 110 = 6, 111 = 7
We can reduce the size of the outcome set to exactly 6 by taking the value modulo 6, however this presents the modulo bias problem: 110 yields a 0, and 111 yields a 1. This die is loaded.
Potential Solutions
Approach 0:
Rather than rely on random bits, in theory one could hire a small army to roll dice all day and record the results in a database, and then use each result only once. This is about as practical as it sounds, and more than likely would not yield truly random results anyway (pun intended).
Approach 1:
Instead of using the modulus, a naive but mathematically correct solution is to discard results that yield 110 and 111 and simply try again with 3 new bits. Unfortunately, this means that there is a 25% chance on each roll that a re-roll will be required, including each of the re-rolls themselves. This is clearly impractical for all but the most trivial of uses.
Approach 2:
Use more bits: instead of 3 bits, use 4. This yields 16 possible outcomes. Of course, re-rolling anytime the result is greater than 5 makes things worse (10/16 = 62.5%) so that alone won't help.
Notice that 2 * 6 = 12 < 16, so we can safely take any outcome less than 12 and reduce that modulo 6 to evenly distribute the outcomes. The other 4 outcomes must be discarded, and then re-rolled as in the previous approach.
Sounds good at first, but let's check the math:
4 discarded results / 16 possibilities = 25%
In this case, 1 extra bit didn't help at all!
That result is unfortunate, but let's try again with 5 bits:
32 % 6 = 2 discarded results; and
2 discarded results / 32 possibilities = 6.25%
A definite improvement, but not good enough in many practical cases. The good news is, adding more bits will never increase the chances of needing to discard and re-roll. This holds not just for dice, but in all cases.
As demonstrated, however, adding 1 extra bit may not change anything. In fact, if we increase our roll to 6 bits, the probability remains 6.25%.
This raises 2 additional questions:
If we add enough bits, is there a guarantee that the probability of a discard will diminish?
How many bits are enough in the general case?
General Solution
Thankfully the answer to the first question is yes. The problem with 6 is that 2^x mod 6 flips between 2 and 4, which coincidentally are a multiple of 2 from each other, so that for an odd x > 1,
[2^x mod 6] / 2^x == [2^(x+1) mod 6] / 2^(x+1)
Thus 6 is an exception rather than the rule. It is possible to find larger moduli that yield consecutive powers of 2 in the same way, but eventually this must wrap around, and the probability of a discard will be reduced.
Without offering further proof, in general using double the number
of bits required will provide a smaller, usually insignificant,
chance of a discard.
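As a quick sanity check of that rule of thumb, this small sketch (mine, in Python rather than the C++ proof of concept below) computes the discard probability for a six-sided die at various bit widths:

    # Probability that b random bits must be re-rolled when simulating a d-sided die.
    def discard_probability(d, bits):
        return (2 ** bits % d) / 2 ** bits

    for bits in (3, 4, 5, 6, 10, 20):
        print(bits, format(discard_probability(6, bits), ".4%"))
    # 3 -> 25%, 4 -> 25%, 5 -> 6.25%, 6 -> 6.25%, 10 -> ~0.39%, 20 -> ~0.0004%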
Proof of Concept
Here is an example program that uses OpenSSL's libcrypto to supply random bytes. When compiling, be sure to link to the library with -lcrypto, which most people should have available.
#include <iostream>
#include <cstdint>
#include <assert.h>
#include <limits>
#include <openssl/rand.h>

volatile uint32_t dummy;
uint64_t discardCount;

uint32_t uniformRandomUint32(uint32_t upperBound)
{
    assert(RAND_status() == 1);
    uint64_t randomPool;
    uint64_t discard = (std::numeric_limits<uint64_t>::max() - upperBound) % upperBound;

    RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));
    while (randomPool > (std::numeric_limits<uint64_t>::max() - discard)) {
        RAND_bytes((uint8_t*)(&randomPool), sizeof(randomPool));
        ++discardCount;
    }

    return randomPool % upperBound;
}

int main() {
    discardCount = 0;
    const uint32_t MODULUS = (1ul << 31) - 1;
    const uint32_t ROLLS = 10000000;

    for (uint32_t i = 0; i < ROLLS; ++i) {
        dummy = uniformRandomUint32(MODULUS);
    }

    std::cout << "Discard count = " << discardCount << std::endl;
}
I encourage playing with the MODULUS and ROLLS values to see how many re-rolls actually happen under most conditions. A sceptical person may also wish to save the computed values to a file and verify the distribution appears uniform.
Mark's Solution (The accepted solution) is Nearly Perfect.
int x;

do {
    x = rand();
} while (x >= (RAND_MAX - RAND_MAX % n));

x %= n;
However, it has a caveat which discards 1 valid set of outcomes in any scenario where RAND_MAX (RM) is 1 less than a multiple of N (where N = the number of possible valid outcomes).
That is, when the 'count of values discarded' (D) is equal to N, they are actually a valid set (V), not an invalid set (I).
What causes this is that at some point Mark loses sight of the difference between N and RAND_MAX.
N is a set whose valid members are comprised only of positive integers, as it contains a count of responses that would be valid. (e.g.: Set N = {1, 2, 3, ... n })
RAND_MAX, however, is a set which (as defined for our purposes) includes any number of non-negative integers.
In its most generic form, what is defined here as RAND_MAX is the set of all valid outcomes, which could theoretically include negative numbers or non-numeric values.
Therefore RAND_MAX is better defined as the set of "possible responses".
However, N operates against the count of the values within the set of valid responses, so even as defined in our specific case, RAND_MAX will be a value one less than the total number it contains.
Using Mark's solution, values are discarded when: X >= RM - RM % N
EG:

Rand Max Value (RM) = 255
Valid Outcome (N) = 4

When X >= 252, discarded values for X are: 252, 253, 254, 255

So, if Random Value Selected (X) = {252, 253, 254, 255}

Number of discarded Values (I) = RM % N + 1 == N

IE:
I = RM % N + 1
I = 255 % 4 + 1
I = 3 + 1
I = 4

X >= (RM - RM % N)
255 >= (255 - 255 % 4)
255 >= (255 - 3)
255 >= (252)

Discard Returns $True
As you can see in the example above, when the value of X (the random number we get from the initial function) is 252, 253, 254, or 255 we would discard it even though these four values comprise a valid set of returned values.
IE: When the count of the values Discarded (I) = N (The number of valid outcomes) then a Valid set of return values will be discarded by the original function.
If we describe the difference between the values N and RM as D, ie:
D = (RM - N)
Then as the value of D becomes smaller, the percentage of unneeded re-rolls due to this method increases as N approaches RM. (When RAND_MAX is NOT equal to a prime number, this is of valid concern.)
EG:
RM=255 , N=2 Then: D = 253, Lost percentage = 0.78125%
RM=255 , N=4 Then: D = 251, Lost percentage = 1.5625%
RM=255 , N=8 Then: D = 247, Lost percentage = 3.125%
RM=255 , N=16 Then: D = 239, Lost percentage = 6.25%
RM=255 , N=32 Then: D = 223, Lost percentage = 12.5%
RM=255 , N=64 Then: D = 191, Lost percentage = 25%
RM=255 , N= 128 Then D = 127, Lost percentage = 50%
Since the percentage of re-rolls needed increases the closer N comes to RM, this can be of valid concern at many different values depending on the constraints of the system running the code and the values being looked for.
To negate this we can make a simple amendment, as shown here:

int x;

do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));

x %= n;
This provides a more general version of the formula which accounts for the additional peculiarities of using modulus to define your max values.
Examples of using a small value for RAND_MAX where RAND_MAX+1 is a multiple of N.
Mark's original version:
RAND_MAX = 3, n = 2, Values in RAND_MAX = 0,1,2,3, Valid Sets = 0,1 and 2,3.
When X >= (RAND_MAX - ( RAND_MAX % n ) )
When X >= 2 the value will be discarded, even though the set is valid.
Generalized Version 1:
RAND_MAX = 3, n = 2, Values in RAND_MAX = 0,1,2,3, Valid Sets = 0,1 and 2,3.
When X > (RAND_MAX - ( ( RAND_MAX % n ) + 1 ) % n )
When X > 3 the value would be discarded, but this is not a value the generator can return, so there will be no discard.
Additionally, in the case where N should be the number of values contained in RAND_MAX, you could set N = RAND_MAX + 1, unless RAND_MAX = INT_MAX.
Loop-wise you could just use N = 1, so that any value of X is accepted, and put an IF statement in for your final multiplier. But perhaps you have code that has a valid reason to return a 1 when the function is called with n = 1...
So it may be better to use 0, which would normally provide a Div 0 Error, when you wish to have n = RAND_MAX + 1.
Generalized Version 2:
int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
    x %= n;
} else {
    x = rand();
}
Both of these solutions resolve the issue with needlessly discarded valid results, which will occur when RM+1 is a multiple of n.
The second version also covers the edge case scenario when you need n to equal the total possible set of values contained in RAND_MAX.
The modified approach in both is the same and allows for a more general solution to the need of providing valid random numbers and minimizing discarded values.
To reiterate:
The Basic General Solution which extends Mark's example:
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;

do {
    x = rand();
} while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));

x %= n;
The Extended General Solution which allows one additional scenario, RAND_MAX+1 = n:
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;

if (n != 0) {
    do {
        x = rand();
    } while (x > (RAND_MAX - (((RAND_MAX % n) + 1) % n)));
    x %= n;
} else {
    x = rand();
}
In some languages ( particularly interpreted languages ) doing the calculations of the compare-operation outside of the while condition may lead to faster results as this is a one-time calculation no matter how many re-tries are required. YMMV!
// Assumes:
//  RAND_MAX is a globally defined constant, returned from the environment.
//  int n;  // User input, or externally defined, number of valid choices.

int x;  // Resulting random number
int y;  // One-time calculation of the compare value for x

y = RAND_MAX - (((RAND_MAX % n) + 1) % n);

if (n != 0) {
    do {
        x = rand();
    } while (x > y);
    x %= n;
} else {
    x = rand();
}
There are two usual complaints about the use of modulo.
One is valid for all generators. It is easier to see in a limit case. If your generator has a RAND_MAX of 2 (which isn't compliant with the C standard) and you want only 0 or 1 as a value, using modulo will generate 0 twice as often (when the generator produces 0 or 2) as it generates 1 (when the generator produces 1). Note that this is true as soon as you don't drop values: whatever mapping you use from the generator's values to the wanted ones, one value will occur twice as often as the other.
Some kinds of generators have their least significant bits less random than the others, at least for some of their parameters, but sadly those parameters have other interesting characteristics (such as being able to have RAND_MAX one less than a power of 2). The problem is well known, and for a long time library implementations have probably avoided it (for instance the sample rand() implementation in the C standard uses this kind of generator but drops the 16 least significant bits), but some like to complain about it and you may have bad luck.
Using something like
int alea(int n) {
    assert(0 < n && n <= RAND_MAX);
    int partSize =
        n == RAND_MAX ? 1 : 1 + (RAND_MAX - n) / (n + 1);
    int maxUsefull = partSize * n + (partSize - 1);
    int draw;
    do {
        draw = rand();
    } while (draw > maxUsefull);
    return draw / partSize;
}
to generate a random number between 0 and n will avoid both problems (and it avoids overflow with RAND_MAX == INT_MAX)
BTW, C++11 introduced standard ways to do the reduction and other generators than rand().
With a RAND_MAX value of 3 (in reality it should be much higher than that but the bias would still exist) it makes sense from these calculations that there is a bias:
1 % 2 = 1
2 % 2 = 0
3 % 2 = 1
random_between(1, 3) % 2 = more likely a 1
In this case, the % 2 is what you shouldn't do when you want a random number between 0 and 1. You could get a random number between 0 and 2 by doing % 3 though, because in this case the three possible values form a complete set of residues mod 3.
Another method
There are much simpler approaches, but to add to the other answers, here is my solution to get a random number between 0 and n - 1, so n different possibilities, without bias.
The number of bits (not bytes) needed to encode the number of possibilities is the number of bits of random data you'll need.
Assemble the number from the random bits.
If this number is >= n, restart (no modulo).
Really random data is not easy to obtain, so why use more bits than needed?
Below is an example in Smalltalk, using a cache of bits from a pseudo-random number generator. I'm no security expert so use at your own risk.
next: n
    | bitSize r from to |
    n < 0 ifTrue: [^0 - (self next: 0 - n)].
    n = 0 ifTrue: [^nil].
    n = 1 ifTrue: [^0].
    cache isNil ifTrue: [cache := OrderedCollection new].
    cache size < (self randmax highBit) ifTrue: [
        Security.DSSRandom default next asByteArray do: [ :byte |
            (1 to: 8) do: [ :i | cache add: (byte bitAt: i)]
        ]
    ].
    r := 0.
    bitSize := n highBit.
    to := cache size.
    from := to - bitSize + 1.
    (from to: to) do: [ :i |
        r := r bitAt: i - from + 1 put: (cache at: i)
    ].
    cache removeFrom: from to: to.
    r >= n ifTrue: [^self next: n].
    ^r
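The same idea in Python, as a minimal sketch using the standard library (secrets.randbits supplies the random bits; the Smalltalk bit cache is omitted):

    import secrets

    def rand_below(n):
        # Uniform integer in [0, n): draw just enough bits, reject if >= n (no modulo).
        if n <= 0:
            raise ValueError("n must be positive")
        bits = (n - 1).bit_length()
        while True:
            r = secrets.randbits(bits)
            if r < n:
                return r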
Modulo reduction is a commonly seen way to make a random integer generator avoid the worst case of running forever.
When the range of possible integers is unknown, however, there is no way in general to "fix" this worst case of running forever without introducing bias. It's not just modulo reduction (rand() % n, discussed in the accepted answer) that will introduce bias this way, but also the "multiply-and-shift" reduction of Daniel Lemire, or if you stop rejecting an outcome after a set number of iterations. (To be clear, this doesn't mean there is no way to fix the bias issues present in pseudorandom generators. For example, even though modulo and other reductions are biased in general, they will have no issues with bias if the range of possible integers is a power of 2 and if the random generator produces unbiased random bits or blocks of them.)
The following answer of mine discusses the relationship between running time and bias in random generators, assuming we have a "true" random generator that can produce unbiased and independent random bits. The answer doesn't even involve the rand() function in C because it has many issues. Perhaps the most serious here is the fact that the C standard does not explicitly specify a particular distribution for the numbers returned by rand(), not even a uniform distribution.
How to generate a random integer in the range [0,n] from a stream of random bits without wasting bits?
As the accepted answer indicates, "modulo bias" has its roots in the low value of RAND_MAX. He uses an extremely small value of RAND_MAX (10) to show that if RAND_MAX were 10, then if you tried to generate a number between 0 and 2 using %, the following outcomes would result:
rand() % 3   // if RAND_MAX were only 10, gives

output of rand() | rand() % 3
       0         |     0
       1         |     1
       2         |     2
       3         |     0
       4         |     1
       5         |     2
       6         |     0
       7         |     1
       8         |     2
       9         |     0
So there are 4 outputs of 0's (4/10 chance) and only 3 outputs of 1 and 2 (3/10 chances each).
So it's biased. The lower numbers have a better chance of coming out.
But that only shows up so obviously when RAND_MAX is small. Or more specifically, when the number you are modding by is large compared to RAND_MAX.
A much better solution than looping (which is insanely inefficient and shouldn't even be suggested) is to use a PRNG with a much larger output range. The Mersenne Twister algorithm has a maximum output of 4,294,967,295. As such, doing MersenneTwister::genrand_int32() % 10 will, for all intents and purposes, be equally distributed, and the modulo bias effect will all but disappear.
I just wrote some code for von Neumann's Unbiased Coin Flip Method, which should theoretically eliminate any bias in the random number generation process. More info can be found at http://en.wikipedia.org/wiki/Fair_coin
int unbiased_random_bit() {
    int x1, x2, prev;
    prev = 2;
    x1 = rand() % 2;
    x2 = rand() % 2;

    for (;; x1 = rand() % 2, x2 = rand() % 2)
    {
        if (x1 ^ x2)       // 01 -> 1, or 10 -> 0.
        {
            return x2;
        }
        else if (x1 & x2)
        {
            if (!prev)     // 0011
                return 1;
            else
                prev = 1;  // 1111 -> continue, bias unresolved
        }
        else
        {
            if (prev == 1) // 1100
                return 0;
            else           // 0000 -> continue, bias unresolved
                prev = 0;
        }
    }
}

Iterate binary numbers with the same quantity of ones (or zeros) in random order

I need to generate binary numbers with the same quantity of ones (or zeros) in random order.
Does anyone know any efficient algorithm for fixed-length binary numbers?
Example for 2 ones and 4 digits (just to be more clear):
1100
1010
1001
0110
0101
0011
UPDATE
Random order without repetitions is significant. A sequence of binary numbers is required, not a single permutation.
If you have enough memory to store all the possible bit sequences, and you don't mind generating them all before you have the first result, then the solution would be to use some efficient generator to produce all possible sequences into a vector and then shuffle the vector using the Fisher-Yates shuffle. That's easy and unbiased (as long as you use a good random number generator to do the shuffle) but it can use a lot of memory if n is large, particularly if you are not sure you will need to complete the iteration.
But there are a couple of solutions which do not require keeping all the possible words in memory. (C implementations of the two solutions follow the text.)
1. Bit shuffle an enumeration
The fastest one (I think) is to first generate a random shuffle of bit values, and then iterate over the possible words one at a time applying the shuffle to the bits of each value. In order to avoid the complication of shuffling actual bits, the words can be generated in a Gray code order in which only two bit positions are changed from one word to the next. (This is also known as a "revolving-door" iteration because as each new 1 is added, some other 1 must be removed.) This allows the bit mask to be updated rapidly, but it means that successive entries are highly correlated, which may be unsuitable for some purposes. Also, for small values of n the number of possible bit shuffles is very limited, so there will not be a lot of different sequences produced. (For example, for the case where n is 4 and k is 2, there are 6 possible words which could be sequenced in 6! (720) different ways, but there are only 4! (24) bit-shuffles. This could be ameliorated slightly by starting the iteration at a random position in the sequence.)
It is always possible to find a Gray code. Here's an example for n=6, k=3: (The bold bits are swapped at each step. I wanted to underline them but for some inexplicable reason SO allows strikethrough but not underline.)
111000 010110 100011 010101
101100 001110 010011 001101
011100 101010 001011 101001
110100 011010 000111 011001
100110 110010 100101 110001
This sequence can be produced by a recursive algorithm similar to that suggested by #JasonBoubin -- the only difference is that the second half of each recursion needs to be produced in reverse order -- but it's convenient to use a non-recursive version of the algorithm. The one in the sample code below comes from Frank Ruskey's unpublished manuscript on Combinatorial Generation (Algorithm 5.7 on page 130). I modified it to use 0-based indexing, as well as adding the code to keep track of the binary representations.
2. Randomly generate an integer sequence and convert it to combinations
The "more" random but somewhat slower solution is to produce a shuffled list of enumeration indices (which are sequential integers in [0, n choose k)) and then find the word corresponding to each index.
The simplest pseudo-random way to produce a shuffled list of integers in a contiguous range is to use a randomly-chosen Linear Congruential Generator (LCG). An LCG is the recursive sequence x_i = (a * x_(i-1) + c) mod m. If m is a power of 2, a mod 4 is 1 and c mod 2 is 1, then that recursion will cycle through all m possible values. To cycle through the range [0, n choose k), we simply select m to be the next larger power of 2, and then skip any values which are not in the desired range. (That will be fewer than half the values produced, for obvious reasons.)
To convert the enumeration index into an actual word, we perform a binomial decomposition of the index based on the fact that the set of n choose k words consists of n-1 choose k words starting with a 0 and n-1 choose k-1 words starting with a 1. So to produce the ith word:
if i < n-1 choose k we output a 0 and then the ith word in the set of n-1 bit words with k bits set;
otherwise, we output a 1 and then subtract n-1 choose k from i as the index into the set of n-1 bit words with k-1 bits set.
It's convenient to precompute all the useful binomial coefficients.
LCGs suffer from the disadvantage that they are quite easy to predict after the first few terms are seen. Also, some of the randomly-selected values of a and c will produce index sequences where successive indices are highly correlated. (Also, the low-order bits are always quite non-random.) Some of these problems could be slightly ameliorated by also applying a random bit-shuffle to the final result. This is not illustrated in the code below but it would slow things down very little and it should be obvious how to do it. (It basically consists of replacing 1UL<<n with a table lookup into the shuffled bits).
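For reference, the decomposition described above is short enough to write directly in Python (a sketch using math.comb, without the optimizations of the C code below):

    from math import comb

    def unrank_combination(i, n, k):
        # Return the i-th n-bit word with k bits set, 0 <= i < comb(n, k),
        # where words starting with a 0 bit come before those starting with a 1.
        word = 0
        while n > 0 and k > 0:
            if i < comb(n - 1, k):
                pass                       # leading bit is 0
            else:
                i -= comb(n - 1, k)        # leading bit is 1
                word |= 1 << (n - 1)
                k -= 1
            n -= 1
        return word

    # unrank_combination(0, 4, 2) == 0b0011; unrank_combination(5, 4, 2) == 0b1100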
The C code below uses some optimizations which make it a bit challenging to read. The binomial coefficients are stored in a lower-diagonal array:
row index
[ 0]  1
[ 1]  1  1
[ 3]  1  2  1
[ 6]  1  3  3  1
[10]  1  4  6  4  1
As can be seen, the array index for binom(n, k) is n(n+1)/2 + k, and if we have that index, we can find binom(n-1, k) by simply subtracting n, and binom(n-1, k-1) by subtracting n+1. In order to avoid needing to store zeros in the array, we make sure that we never look up a binomial coefficient where k is negative or greater than n. In particular, if we have arrived at a point in the recursion where k == n or k == 0, we can definitely know that the index to look up is 0, because there is only one possible word. Furthermore, index 0 in the set of words with some n and k will consist precisely of n-k zeros followed by k ones, which is the n-bit binary representation of 2^k - 1. By short-cutting the algorithm when the index reaches 0, we can avoid having to worry about the cases where one of binom(n-1, k) or binom(n-1, k-1) is not a valid index.
C code for the two solutions
Gray code with shuffled bits
void gray_combs(int n, int k) {
    /* bit[i] is the ith shuffled bit */
    uint32_t bit[n+1];
    {
        uint32_t mask = 1;
        for (int i = 0; i < n; ++i, mask <<= 1)
            bit[i] = mask;
        bit[n] = 0;
        shuffle(bit, n);
    }

    /* comb[i] for 0 <= i < k is the index of the ith bit
     * in the current combination. comb[k] is a sentinel. */
    int comb[k + 1];
    for (int i = 0; i < k; ++i) comb[i] = i;
    comb[k] = n;

    /* Initial word has the first k (shuffled) bits set */
    uint32_t word = 0;
    for (int i = 0; i < k; ++i) word |= bit[i];

    /* Now iterate over all combinations */
    int j = k - 1;  /* See Ruskey for meaning of j */
    do {
        handle(word, n);
        if (j < 0) {
            word ^= bit[comb[0]] | bit[comb[0] - 1];
            if (--comb[0] == 0) j += 2;
        }
        else if (comb[j + 1] == comb[j] + 1) {
            word ^= bit[comb[j + 1]] | bit[j];
            comb[j + 1] = comb[j]; comb[j] = j;
            if (comb[j + 1] == comb[j] + 1) j += 2;
        }
        else if (j > 0) {
            word ^= bit[comb[j - 1]] | bit[comb[j] + 1];
            comb[j - 1] = comb[j]; ++comb[j];
            j -= 2;
        }
        else {
            word ^= bit[comb[j]] | bit[comb[j] + 1];
            ++comb[j];
        }
    } while (comb[k] == n);
}
LCG with enumeration index to word conversion
static const uint32_t* binom(unsigned n, unsigned k) {
    static const uint32_t b[] = {
        1,
        1, 1,
        1, 2, 1,
        1, 3, 3, 1,
        1, 4, 6, 4, 1,
        1, 5, 10, 10, 5, 1,
        1, 6, 15, 20, 15, 6, 1,
        // ... elided for space
    };
    return &b[n * (n + 1) / 2 + k];
}

static uint32_t enumerate(const uint32_t* b, uint32_t r, unsigned n, unsigned k) {
    uint32_t rv = 0;
    while (r) {
        do {
            b -= n;
            --n;
        } while (r < *b);
        r -= *b;
        --b;
        --k;
        rv |= 1UL << n;
    }
    return rv + (1UL << k) - 1;
}

static bool lcg_combs(unsigned n, unsigned k) {
    const uint32_t* b = binom(n, k);
    uint32_t count = *b;
    uint32_t m = 1; while (m < count) m <<= 1;
    uint32_t a = 4 * randrange(1, m / 4) + 1;
    uint32_t c = 2 * randrange(0, m / 2) + 1;
    uint32_t x = randrange(0, m);

    while (count--) {
        do
            x = (a * x + c) & (m - 1);
        while (x >= *b);
        handle(enumerate(b, x, n, k), n);
    }
    return true;
}
Note: I didn't include the implementation of randrange or shuffle; code is readily available. randrange(low, lim) produces a random integer in the range [low, lim); shuffle(vec, n) randomly shuffles the integer vector vec of length n.
Also, the loop calls handle(word, n) for each generated word. That must be replaced with whatever is to be done with each combination.
With handle defined as a function which does nothing, gray_combs took 150 milliseconds on my laptop to find all 40,116,600 28-bit words with 14 bits set. lcg_combs took 5.5 seconds.
Integers with exactly k bits set are easy to generate in order.
You can do that, and then change the order by applying a bit-permutation to the results (see below). For example, here's a randomly generated 16-bit bit-permutation (you should pick one with the right number of bits, based on the word size, not on the number of set bits; not tested):
uint permute(uint x) {
    x = bit_permute_step(x, 0x00005110, 1);  // Butterfly, stage 0
    x = bit_permute_step(x, 0x00000709, 4);  // Butterfly, stage 2
    x = bit_permute_step(x, 0x000000a1, 8);  // Butterfly, stage 3
    x = bit_permute_step(x, 0x00005404, 1);  // Butterfly, stage 0
    x = bit_permute_step(x, 0x00000231, 2);  // Butterfly, stage 1
    return x;
}

uint bit_permute_step(uint x, uint m, int shift) {
    uint t;
    t = ((x >> shift) ^ x) & m;
    x = (x ^ t) ^ (t << shift);
    return x;
}
Generating the re-ordered sequence is easy:
uint i = (1u << k) - 1;
uint max = i << (wordsize - k);
do
{
yield permute(i);
i = nextPermutation(i);
} while (i != max);
yield permute(i); // for max
Where nextPermutation comes from the linked question,
uint nextPermutation(uint v) {
    uint t = (v | (v - 1)) + 1;
    uint w = t | ((((t & -t) / (v & -v)) >> 1) - 1);
    return w;
}
The bit-permutation should be chosen as a random permutation (eg take 0..(wordsize-1) and shuffle) and then converted to bfly masks (I used programming.sirrida.de/calcperm.php), not as randomly generated bfly masks.
I think you can use Heap's algorithm. This algorithm generates all possible permutations of n objects. Just create a simple array and use the algorithm to generate all possible permutations.
This algorithm is not efficient if you want to iterate over the binary numbers with BINARY operations. For binary operations you can use an LFSR.
An LFSR is a simple method for iterating over all numbers. I think you can make some simple modifications to generate fixed-size numbers with a fixed number of zeros with an LFSR.
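For reference, here is a minimal Galois LFSR sketch in Python (taps 0xB400 give a maximal-length 16-bit register). Note that on its own it visits every nonzero state once; it does not restrict the number of set bits:

    def lfsr16(seed=0xACE1):
        # Maximal-length 16-bit Galois LFSR: yields all 2**16 - 1 nonzero states.
        state = seed
        while True:
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= 0xB400
            yield state
            if state == seed:
                return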
How about this solution in Python which does permutations?
from itertools import permutations
fixed_length = 4
perms = [''.join(p) for p in permutations('11' + '0' * (fixed_length - 2))]
unique_perms = set(perms)
This would return the numbers as strings, easily convertible with int(num, 2).
As for efficiency, running this took 0.021 milliseconds on my machine.
You can modify the general permutation algorithm to work with binary. Here's an implementation in C++:
#include <iostream>
#include <string>

void binaryPermutation(int ones, int digits, std::string current) {
    if (digits <= 0 && ones <= 0) {
        std::cout << current << std::endl;
    }
    else if (digits > 0) {
        if (ones > 0) {
            binaryPermutation(ones - 1, digits - 1, current + "1");
        }
        binaryPermutation(ones, digits - 1, current + "0");
    }
}

int main()
{
    binaryPermutation(2, 4, "");
    return 0;
}
This code outputs the following:
1100
1010
1001
0110
0101
0011
You can modify it to store these outputs in a collection or do something other than simply print them.

Select one number at a time between 0 & 10 billion in random order

Problem
I have a need to pick one unique random number at a time between 0 and 10,000,000,000 and do it till all numbers are selected. Essentially the behavior I need is a pre-built stack/queue with 10 billion numbers in random order, with no ability to push new items into it.
Not so good ways to solve:
There's no shortage of inefficient ways in my brain. Such as,
Persist generated numbers and check whether a newly generated random number has already been used; at some point this gets us into an indefinite wait before a usable number is produced.
Persist all possible numbers in a table, pop a random row, and maintain the new row count for the next pick, etc. Not sure if this is good or bad.
Questions:
Are there other deterministic ways besides storing all possible combinations and using random?
Like maintaining windows of available numbers, randomly selecting a window first and then randomly selecting a number within that window, etc. eg: like this
If not, what is the best type to store numbers in reasonably small amount of space?
50+% of the numbers won't fit in 32 bits (int), and 64 bits (long) is wasteful, since the largest number fits in 34 bits, wasting 30 bits per number (>37GB total).
If this problem hasn't been solved already.
What is a good data structure for storing & picking a random spot and quickly adjust the structure for next pick to be fast?
***Sorry for the ambiguity. The largest selectable number is 9,999,999,999 and smallest selectable is 1.
You ask: "Are there other deterministic ways besides storing all possible combinations and using random?"
Yes there is: Encryption. Encryption with a given key guarantees a unique result for unique inputs since it is reversible. Each key defines a one-to-one permutation of the possible inputs. You need an encryption of inputs in the range [1..10e9]. To deal with something that big you need 34 bit numbers, which go up to 17,179,869,183.
There is no standard 34 bit encryption. Depending on how much security you need, and how fast you need the numbers, you can either write your own simple, fast, insecure four-round Feistel Cipher or else for something slower and more secure use Hasty Pudding cipher in 34 bit mode.
With either solution, if the first encryption gives a result outside the range, just encrypt the result again until the new result is within the range you want. The one-to-one property ensures that the final result of the chain of encryptions will be unique.
To generate a sequence of unique random-seeming numbers just encrypt 0, 1, 2, 3, 4, ... in order with the same key. Encryption guarantees that the results will be unique for that key. If you record how far you have got, then you can generate more unique numbers later, up to your 10 billion limit.
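A minimal sketch of that idea in Python (an insecure 4-round Feistel on 34-bit values plus cycle-walking; the hash-based round function and constants are my own illustrative choices, not a standard cipher):

    import hashlib

    HALF_BITS = 17                       # 34-bit values split into two 17-bit halves
    HALF_MASK = (1 << HALF_BITS) - 1
    LIMIT = 10_000_000_000

    def _round(value, key, rnd):
        data = f"{key}:{rnd}:{value}".encode()
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big") & HALF_MASK

    def feistel34(x, key, rounds=4):
        # A Feistel network is a bijection on [0, 2**34) for any round function.
        left, right = x >> HALF_BITS, x & HALF_MASK
        for rnd in range(rounds):
            left, right = right, left ^ _round(right, key, rnd)
        return (left << HALF_BITS) | right

    def nth_unique(i, key):
        # Map i in [0, LIMIT) to a unique value in [0, LIMIT) by cycle-walking:
        # re-encrypt until the result falls back inside the range.
        x = feistel34(i, key)
        while x >= LIMIT:
            x = feistel34(x, key)
        return x

Calling nth_unique(i, key) for i = 0, 1, 2, ... then yields every value below 10 billion exactly once, in a key-dependent order.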
As mentioned by AChampion in the comments, you could use a Linear Congruential generator.
Your modulo (m) value will be 10 billion. In order to get a full period (all values in the range appear before the series repeats) you need to choose the a and c constants to satisfy certain criteria. m and c need to be relatively prime and a - 1 needs to be divisible by the prime factors of m (which are just 2 and 5) and also by 4 (since 10 billion is divisible by 4).
If you just come up with a single set of constants, you will only have one possible series and the numbers will always occur in the same order. However you can easily randomly generate constants that satisfy the criteria. To test for relative primality of c and m, just test that c is divisible by neither 2 nor 5, since these are the only prime factors of m (see the first condition of the coprimality test here).
Simple sketch in Python:
import random

m = 10000000000
a = 0
c = 0
r = 0

def setupLCG():
    global a, c, r
    # choose value of c that is 0 < c < m and relatively prime to m
    c = 5
    while ((c % 5 == 0) or (c % 2 == 0)):
        c = random.randint(1, m - 1)
    # choose value of a such that 0 < a <= m and a - 1 is divisible by
    # the prime factors of m, and by 4
    a = 4
    while ((((a - 1) % 4) != 0) or (((a - 1) % 5) != 0)):
        a = random.randint(1, m)
    r = random.randint(0, m - 1)

def rand():
    global m, a, c, r
    r = (a*r + c) % m
    return r

random.seed()
setupLCG()
for i in range(1000):
    print(rand() + 1)
This approach won't give the full possibility of 10000000000! possible combinations, but it will still be on the order of 10^19, which is quite a lot. It does have a few other issues (e.g. it alternates even and odd values). You could mix it up a bit by having a small pool of numbers, adding a number from the sequence to it each time and randomly drawing one out (see the sketch below).
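A sketch of that mixing pool in Python (my own illustration; `source` would be the LCG sequence above):

    import random

    def shuffled_stream(source, pool_size=1000):
        # Keep a small pool of pending values and emit a randomly chosen one
        # each step; uniqueness is preserved, but the LCG's even/odd
        # alternation and local ordering are broken up.
        pool = []
        for value in source:
            pool.append(value)
            if len(pool) >= pool_size:
                j = random.randrange(len(pool))
                pool[j], pool[-1] = pool[-1], pool[j]
                yield pool.pop()
        random.shuffle(pool)
        yield from pool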
Similar to what rossum has suggested, you can use an invertible integer hash function, which uniquely maps an integer in [0,2^k) to another integer in the same range. For your particular problem, you choose k=34 (2^34 is about 17 billion) and reject any number above 10 billion. Here is a complete implementation:
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

uint64_t hash_64(uint64_t key, uint64_t mask)
{
    key = (~key + (key << 21)) & mask;  // key = (key << 21) - key - 1;
    key = key ^ key >> 24;
    key = ((key + (key << 3)) + (key << 8)) & mask;  // key * 265
    key = key ^ key >> 14;
    key = ((key + (key << 2)) + (key << 4)) & mask;  // key * 21
    key = key ^ key >> 28;
    key = (key + (key << 31)) & mask;
    return key;
}

int main(int argc, char *argv[])
{
    uint64_t i, shift, mask, max = 10000ULL;
    char *dummy;
    if (argc > 1) max = strtol(argv[1], &dummy, 10);
    for (shift = 0; 1ULL << shift <= max; ++shift) {}
    mask = (1ULL << shift) - 1;
    for (i = 0; i <= mask; ++i) {
        uint64_t x = hash_64(i, mask);
        x = hash_64(x, mask);
        x = hash_64(x, mask);  // apply multiple times to increase randomness
        if (x > max || x == 0) continue;
        printf("%llu\n", x);
    }
    return 0;
}
This should give you the numbers in [1, 10000000000] in random order.
The range 1-999,999,999,999 is equivalent to 0-999,999,999,998 (just add 1). Given the definition of an LCG, you can implement this:
import functools as ft
import itertools as it
import operator as op
from sympy import primefactors, nextprime

def LCG(m, seed=0):
    factors = set(primefactors(m))
    a = ft.reduce(op.mul, factors) + 1
    assert(m % 4 != 0 or (m % 4 == 0 and (a - 1) % 4 == 0))
    c = nextprime(max(factors) + 1)
    assert(c < m)
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Check the first 10,000,000 for duplicates
>>> x = list(it.islice(LCG(999999999999), 10000000))
>>> len(x) == len(set(x))
True

# Last 10 numbers
>>> x[-10:]
[99069910838, 876847698522, 765736597318, 99069940559, 210181061577,
 432403293706, 99069970280, 543514424631, 99069990094, 99070000001]
I've taken a couple of shortcuts for the context of this question; the asserts should be replaced with handling code, and currently it would just fail if those asserts were False.
I'm not aware of any truly random methods of picking the numbers without storing a list of the numbers already picked. You could do some sort of linear hashing algorithm, and then pass the numbers 0 to n through it (repeating when your hash returns a value above 10000000000), but this wouldn't be truly random.
If you are to store the numbers, you might consider doing it via a bitmask. To pick quickly in the bitmask, you would likely keep a tree, where each leaf would represent the number of free bits in the corresponding 32 bytes, the branches above that would list the number of free bits in the corresponding 2K entries, and so forth. You then have O(log(n)) time to find your next entry, and O(log(n)) time to claim a bit (as you have to update the tree). It would require something on the order of 2n bits to store as well.
You definitely don't need to store all the numbers.
If you want a perfect set of the numbers from 1 to 10B each exactly once, there are two options that I see: as hinted at by the others, use a 34-bit LCG or Galois LFSR or XOR-shift that generates a sequence of numbers from 1 to 17B or so, then throw out the ones over 10B. I am not aware of any specifically 34-bit functions for this, but I'm sure someone is.
Option 2, if you can spare 1.25 GB of memory, is to create a bitmap that stores only the information that a certain number has been chosen, then use Floyd's Algorithm to get the numbers, which would be fast and give you much better quality numbers (in fact, it would work just fine with hardware RNGs).
Option 3, if you can live with a rare but occasional mistake (duplicate or never-selected number), replace the bitmap with a Bloom filter and save memory.
If predictability is not a concern, you can generate quickly using XOR operations. Suppose you want to generate a random sequence of unique numbers with n bits (34 in your case):
1- take a seed number on n bits. This number, K, can be considered as a seed that you can change each time you run a new experiment.
2- Use a counter from 0 upward
3- Each time XOR the counter with K : next = counter xor K; counter++;
To limit the range to 10 Billion, which is not a power of two, you will need to do rejection.
The obvious drawback is predictability. In step 3, you can do a prior transposition on the bytes of the counter, for example reverse the order of the bytes (as when converting from little-endian to big-endian). This would yield some improvement concerning the predictability of the next number.
Finally I have to admit that this answer can be considered as a particular implementation of the encryption approach mentioned in the answer of #rossum, but it's more specific and probably faster.
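A small sketch of the counter-XOR scheme in Python (34-bit values with rejection, as described above; K is the per-run seed):

    LIMIT = 10_000_000_000
    BITS = 34
    MASK = (1 << BITS) - 1

    def xor_sequence(K):
        # counter ^ K is a bijection on [0, 2**34), so filtering values
        # below LIMIT yields each of them exactly once.
        for counter in range(1 << BITS):
            candidate = counter ^ (K & MASK)
            if candidate < LIMIT:
                yield candidate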
Incredibly slow but it should work. Completely random
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

namespace ConsoleApplication1
{
    class Program
    {
        static Random random = new Random();

        static void Main()
        {
            const long start = 1;
            const long NumData = 10000000000;
            const long RandomNess = NumData;

            var sz = Marshal.SizeOf(typeof(long));
            var numBytes = NumData * sz;
            var filePath = Path.GetTempFileName();

            using (var stream = new FileStream(filePath, FileMode.Create))
            {
                // create file with numbers in order
                stream.Seek(0, SeekOrigin.Begin);
                for (var index = start; index < NumData; index++)
                {
                    var bytes = BitConverter.GetBytes(index);
                    stream.Write(bytes, 0, sz);
                }

                for (var iteration = 0L; iteration < RandomNess; iteration++)
                {
                    // get 2 random longs
                    var item1Index = LongRandom(0, NumData - 1, random);
                    var item2Index = LongRandom(0, NumData - 1, random);

                    // allocate room for data
                    var data1ByteArray = new byte[sz];
                    var data2ByteArray = new byte[sz];

                    // read the first value
                    stream.Seek(item1Index * sz, SeekOrigin.Begin);
                    stream.Read(data1ByteArray, 0, sz);

                    // read the second value
                    stream.Seek(item2Index * sz, SeekOrigin.Begin);
                    stream.Read(data2ByteArray, 0, sz);

                    var item1 = BitConverter.ToInt64(data1ByteArray, 0);
                    var item2 = BitConverter.ToInt64(data2ByteArray, 0);
                    Debug.Assert(item1 < NumData);
                    Debug.Assert(item2 < NumData);

                    // swap the values
                    stream.Seek(item1Index * sz, SeekOrigin.Begin);
                    stream.Write(data2ByteArray, 0, sz);
                    stream.Seek(item2Index * sz, SeekOrigin.Begin);
                    stream.Write(data1ByteArray, 0, sz);
                }
            }

            File.Delete(filePath);
            Console.WriteLine($"{numBytes}");
        }

        static long LongRandom(long min, long max, Random rand)
        {
            long result = rand.Next((int)(min >> 32), (int)(max >> 32));
            result = (result << 32);
            result = result | rand.Next((int)min, (int)max);
            return result;
        }
    }
}

Google Combinatorial Optimization interview problem

I was asked this question in an interview for Google a couple of weeks ago. I didn't quite get the answer, and I was wondering if anyone here could help me out.
You have an array with n elements. The elements are either 0 or 1.
You want to split the array into k contiguous subarrays. The size of each subarray can vary between ceil(n/2k) and floor(3n/2k). You can assume that k << n.
After you split the array into k subarrays. One element of each subarray will be randomly selected.
Devise an algorithm for maximizing the sum of the randomly selected elements from the k subarrays.
Basically, this means we want to split the array in such a way that the sum of the expected values of the elements selected from each subarray is maximized.
You can assume that n is a power of 2.
Example:
Array: [0,0,1,1,0,0,1,1,0,1,1,0]
n = 12
k = 3
Size of subarrays can be: 2,3,4,5,6
Possible subarrays [0,0,1] [1,0,0,1] [1,0,1,1,0]
Expected Value of the sum of the elements randomly selected from the subarrays: 1/3 + 2/4 + 3/5 = 43/30 ~ 1.4333333
Optimal split: [0,0,1,1,0,0][1,1][0,1,1,0]
Expected value of optimal split: 1/3 + 1 + 1/2 = 11/6 ~ 1.83333333
I think we can solve this problem using dynamic programming.
Basically, we have:
f(i,j) is defined as the maximum sum of all expected values chosen from an array of size i and split into j subarrays. Therefore the solution should be f(n,k).
The recursive equation is:
f(i,j) = max over x of [ f(i-x, j-1) + sum(i-x+1, i)/x ], where ceil(n/2k) <= x <= floor(3n/2k)
I don't know if this is still an open question or not, but it seems like the OP has managed to add enough clarifications that this should be straightforward to solve. At any rate, if I am understanding what you are saying this seems like a fair thing to ask in an interview environment for a software development position.
Here is the basic O(n^2 * k) solution, which should be adequate for small k (as the interviewer specified):
from math import ceil, floor   # the answer mentions numpy's ceil/floor; math's return ints, which range() needs

def best_val(arr, K):
    n = len(arr)
    psum = [ 0.0 ]
    for x in arr:
        psum.append(psum[-1] + x)

    tab = [ -100000 for i in range(n) ]
    tab.append(0)

    for k in range(K):
        for s in range(n - (k+1) * ceil(n/(2*K))):
            terms = range(s + ceil(n/(2*K)), min(s + floor((3*n)/(2*K)) + 1, n+1))
            tab[s] = max( [ (psum[t] - psum[s]) / (t - s) + tab[t] for t in terms ] )

    return tab[0]
I used the numpy ceil/floor functions, but you basically get the idea. The only 'tricks' in this version are that it does windowing to reduce the memory overhead to just O(n) instead of O(n * k), and that it precalculates the partial sums to make computing the expected value for a box a constant-time operation (thus saving a factor of O(n) from the inner loop).
I don't know if anyone is still interested in seeing a solution to this problem. I just stumbled upon this question half an hour ago and thought of posting my solution (Java). The complexity for this is O(n*K^log10). The proof is a little convoluted, so I would rather provide runtime numbers:
  n   k   time(ms)
 48   4         25
 48   8        265
 24   4         20
 24   8         33
 96   4         51
192   4        143
192   8     343919
The solution is the same old recursive one: given an array, choose the first partition of size ceil(n/2k) and recursively find the best solution for the rest with k - 1 partitions, then try a first partition of size ceil(n/2k) + 1, and so on.
Code:
import java.util.Date;

public class PartitionOptimization {

    public static void main(String[] args) {
        PartitionOptimization p = new PartitionOptimization();
        int[] input = { 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0 };
        int splitNum = 3;
        int lowerLim = (int) Math.ceil(input.length / (2.0 * splitNum));
        int upperLim = (int) Math.floor((3.0 * input.length) / (2.0 * splitNum));
        System.out.println(input.length + " " + lowerLim + " " + upperLim + " "
                + splitNum);
        Date currDate = new Date();
        System.out.println(currDate);
        System.out.println(p.getMaxPartExpt(input, lowerLim, upperLim,
                splitNum, 0));
        System.out.println(new Date().getTime() - currDate.getTime());
    }

    public double getMaxPartExpt(int[] input, int lowerLim, int upperLim,
            int splitNum, int startIndex) {
        // Base case: only one partition left, it takes the whole remainder.
        if (splitNum <= 1 && startIndex <= (input.length - lowerLim + 1)) {
            return findExpectation(input, startIndex, input.length - 1);
        }
        // Not enough elements left to form the remaining partitions.
        if (!((input.length - startIndex) / lowerLim >= splitNum))
            return -1;
        double maxExpt = 0;
        // Try every admissible size for the partition starting at startIndex.
        for (int i = startIndex + lowerLim - 1; i < Math.min(startIndex
                + upperLim, input.length); i++) {
            double curExpect = findExpectation(input, startIndex, i);
            double splitExpect = getMaxPartExpt(input, lowerLim, upperLim,
                    splitNum - 1, i + 1);
            if (splitExpect >= 0 && (curExpect + splitExpect > maxExpt)) {
                maxExpt = curExpect + splitExpect;
            }
        }
        return maxExpt;
    }

    public double findExpectation(int[] input, int startIndex, int endIndex) {
        // Average of input[startIndex..endIndex], i.e. the expected value of
        // one uniformly random pick from that subarray.
        double expectation = 0;
        for (int i = startIndex; i <= endIndex; i++) {
            expectation = expectation + input[i];
        }
        return expectation / (endIndex - startIndex + 1);
    }
}
Not sure I understand; the algorithm is to split the array into groups, right? The maximum value the sum can have is the number of ones, so split the array into n groups of one element each and the sum will be the maximum possible. But it must be something else and I must have misunderstood the problem, because that seems too silly.
I think this can be solved with dynamic programming. At each possible split location, get the maximum sum if you split at that location and if you don't split at that point. A recursive function and a table to store history might be useful.
sum_i = max{ (NumOnesNewPart / NumZerosNewPart) * sum(NewPart) + sum(A_(i+1), A_end),
             sum(A_0, A_(i+1)) + sum(A_(i+1), A_end) }
This might lead to something...
I think it's a bad interview question, but it is also an easy problem to solve.
Every integer contributes to the expected value with weight 1/s where s is the size of the set where it has been placed. Therefore, if you guess the sizes of the sets in your partition, you just need to fill the sets with ones starting from the smallest set, and then fill the remaining largest set with zeroes.
You can easily see then that if you have a partition, filled as above, where the sizes of the sets are S_1, ..., S_k and you do a transformation where you remove one item from set S_i and move it to set S_i+1, you have the following cases:
Both S_i and S_i+1 were filled with ones; then the expected value does not change
Both of them were filled with zeroes; then the expected value does not change
S_i contained both 1's and 0's and S_i+1 contains only zeroes; moving 0 to S_i+1 increases the expected value because the expected value of S_i increases
S_i contained 1's and S_i+1 contains both 1's and 0's; moving 1 to S_i+1 increases the expected value because the expected value of S_i+1 increases and S_i remains intact
In all these cases, you can shift an element from S_i to S_i+1, maintaining the filling rule of filling smallest sets with 1's, so that the expected value increases. This leads to the simple algorithm:
Create a partitioning where there is a maximal number of maximum-size arrays and maximal number of minimum-size arrays
Fill the arrays starting from smallest one with 1's
Fill the remaining slots with 0's
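To make the filling rule concrete, here is a minimal Python sketch of the last two steps, assuming the set sizes have already been chosen and that elements can be assigned to sets freely (it deliberately ignores the contiguity of the original array); fill_value and num_ones are just illustrative names:
def fill_value(sizes, num_ones):
    # Place the ones into the smallest sets first and zeroes into whatever
    # remains, returning the resulting sum of per-set expected values.
    value, remaining = 0.0, num_ones
    for s in sorted(sizes):
        used = min(remaining, s)
        value += used / s
        remaining -= used
    return value

# Sizes 2, 4, 6 with the six ones from the question's array give
# 2/2 + 4/4 + 0/6 = 2.0 under this free-assignment relaxation.
print(fill_value([2, 4, 6], 6))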
How about a recursive function:
double BestValue(Array A, int numSplits)
// Returns the best value that would be obtained by splitting
// into numSplits partitions.
This in turn uses a helper:
// The additional argument is an array of the valid split sizes which
// is the same for each call.
double BestValueHelper(Array A, int numSplits, Array splitSizes)
{
    double result = 0;
    for splitSize in splitSizes
    {
        double splitResult = ExpectedValue(A, 0, splitSize - 1) +
            BestValueHelper(A + splitSize, numSplits - 1, splitSizes);
        if splitResult > result
            result = splitResult;
    }
    return result;
}
ExpectedValue(Array A, int l, int m) computes the expected value of a split of A that goes from l to m, i.e. (A[l] + A[l+1] + ... + A[m]) / (m - l + 1).
BestValue calls BestValueHelper after computing the array of valid split sizes between ceil(n/2k) and floor(3n/2k).
I have omitted error handling and some end conditions but those should not be too difficult to add.
Let
a[] = given array of length n
from = inclusive start index into array a
k = number of required splits
minSize = minimum size of a split
maxSize = maximum size of a split
d = maxSize - minSize
expectation(a, from, to) = average of all elements of array a from "from" to "to" (inclusive)
Optimal(a[], from, k) = MAX over j in [minSize - 1, maxSize - 1] of { expectation(a, from, from + j) + Optimal(a, from + j + 1, k - 1) }
Runtime (assuming memoization or DP) = O(n * k * d)

Fastest way to modify one digit of an integer

Suppose I have an int x = 54897, an old digit index (0-based), and the new value for that digit. What's the fastest way to get the resulting number?
Example
x = 54897
index = 3
value = 2
y = f(x, index, value) // => 54827
Edit: by fastest, I definitely mean faster performance. No string processing.
In the simplest case (considering the digits are numbered from LSB to MSB, the first one being 0) AND knowing the old digit, we could do something as simple as this:
num += (new_digit - old_digit) * 10**pos;
For the real problem we would need:
1) the MSB-first version of pos, which could cost you a log() or at most log10(MAX_INT) divisions by ten (this could be improved using binary search).
2) the digit at that pos, which would need at most 2 divisions (or zero, using the results from step 1).
You could also use the special fpu instruction from x86 that is able to save a float in BCD (I have no idea how slow it is).
UPDATE: the first step could be done even faster, without any divisions, with a binary search like this:
int my_log10(unsigned short n){
    // short: 0..64k -> 1..5 digits
    if (n < 1000){          // 1..3
        if (n < 10) return 1;
        if (n < 100) return 2;
        return 3;
    } else {                // 4..5
        if (n < 10000) return 4;
        return 5;
    }
}
If your index started at the least significant digit, you could do something like
p = pow(10, index);
x = (x / (p * 10)) * (p * 10) + value * p + x % p;
But since your index is backwards, a string is probably the way to go. It would also be more readable and maintainable.
Calculate the "mask" M: 10 raised to the power of index, where index is a zero-based index from the right. If you need to index from the left, recalculate index accordingly.
Calculate the "prefix" PRE = x / (M * 10) * (M * 10)
Calculate the "suffix" SUF = x % M
Calculate the new "middle part" MID = value * M
Generate the new number: new_x = PRE + MID + SUF.
P.S. ruslik's answer does it more elegantly :)
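If it helps, here is a small Python sketch of those steps (set_digit_from_right is just an illustrative name; it takes the index from the right, so the question's left-based index 3 in the 5-digit number becomes index 1):
def set_digit_from_right(x, index, value):
    # PRE / MID / SUF decomposition described above; index is 0-based
    # from the least significant digit.
    m = 10 ** index
    pre = x // (m * 10) * (m * 10)   # the digits above the one being replaced
    suf = x % m                      # the digits below it
    return pre + value * m + suf

print(set_digit_from_right(54897, 1, 2))   # 54827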
You need to start by figuring out how many digits are in your input. I can think of two ways of doing that, one with a loop and one with logarithms. Here's the loop version. This will fail for negative and zero inputs and when the index is out of bounds, probably other conditions too, but it's a starting point.
def f(x, index, value):
    place = 1
    residual = x
    # Walk over every digit of x; once index has counted down past zero we
    # are beyond the target digit (indexed from the left), so start growing
    # the place value of that digit.
    while residual > 0:
        if index < 0:
            place *= 10
        index -= 1
        residual //= 10
    digit = (x // place) % 10
    # Subtract out the old digit and add the new one at the same place.
    return x - (place * digit) + (place * value)
P.S. This is working Python code. The principle of something simple like this is easy to work out, but the details are so tricky that you really need to iterate it a bit. In this case I started with the principle that I wanted to subtract out the old digit and add the new one; from there it was a matter of getting the correct multiplier.
You gotta get specific with your compute platform if you're talking about performance.
I would approach this by converting the number into packed pairs of decimal digits, 4 bits each.
Then I would find and process the pair that needs modification as a byte.
Then I would put the number back together.
There are assemblers that do this very well.
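As a rough, high-level Python illustration of that unpack/modify/repack idea (working on 4-bit nibbles rather than at the assembler level this answer has in mind; set_digit_bcd is an illustrative name):
def set_digit_bcd(x, index, value):
    # Pack the decimal digits of x into 4-bit fields (packed BCD),
    # overwrite the selected field, then unpack back into a plain integer.
    bcd, ndigits, t = 0, 0, x
    while t:
        bcd |= (t % 10) << (4 * ndigits)    # least significant digit first
        ndigits += 1
        t //= 10
    shift = 4 * (ndigits - 1 - index)       # index is counted from the left
    bcd = (bcd & ~(0xF << shift)) | (value << shift)
    result = 0
    for i in reversed(range(ndigits)):
        result = result * 10 + ((bcd >> (4 * i)) & 0xF)
    return result

print(set_digit_bcd(54897, 3, 2))   # 54827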

Resources