There are similar questions, but most of them are too language-specific. I'm looking for a general solution. Given some way to produce k random bytes and a number n, I need to produce a random number in the range 1...n (inclusive).
What I've come up with so far:
To determine the number of bytes needed to represent n, calculate
f(n) := ceiling(ln(n) / (8 * ln(2))) = ceiling(0.180337 * ln(n))
Get a random number in the range 0...2^(8*f(n)) - 1 from the 0-indexed bytes b[i]:

r := 0
for i = 0 to f(n) - 1:
    r := r + b[i] * 2^(8*i)
end for
To scale to 1...n without bias:
R(n, r) := ceiling(n * (r / 256^f(n)))
But I'm not sure whether this creates a bias or some subtle off-by-one error. Could you check whether this is sound and/or make suggestions for improvements? Is this the right way to do this?
In answers, please assume that there are no modular bit-twiddling operations available, but you can assume arbitrary-precision arithmetic. (I'm programming in Scheme.)
Edit: There is definitely something wrong with my approach, because in my tests rolling a die yielded a few cases of 0! But where is the error?
This is similar to what you'd do to generate a number from 1 to n from a random floating-point number r in [0, 1). If r is the random float:

result = truncate(r * n) + 1
If you have arbitrary precision arithmetic, you can compute r by dividing your k-byte integer by the maximum value expressible in k bytes, + 1.
So if you have 4 bytes 87 6F BD 4A, and n = 200:
((0x876FBD4A / 0x100000000) * 200) + 1
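For concreteness, here is a minimal sketch of this approach in Python (chosen for its built-in arbitrary-precision integers); random_bytes is a stand-in for whatever byte source you have, and the truncation is made explicit. Note it still carries the tiny bias of any scaling approach whenever 256^k is not a multiple of n:

import os

def random_in_range(n, random_bytes=os.urandom):
    """Random integer in 1...n (inclusive), built from raw bytes."""
    k = (n.bit_length() + 7) // 8                        # bytes needed to cover n
    raw = random_bytes(k)
    r = sum(b << (8 * i) for i, b in enumerate(raw))     # r in 0 ... 256^k - 1
    # Scale down by truncation; the floor maps r = 0 to 0, hence the + 1.
    return (r * n) // (256 ** k) + 1

# The worked example from above:
print((0x876FBD4A * 200) // 0x100000000 + 1)   # -> 106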
I want to calculate the value X = n!/2^r,
where n < 10^6 and r < 10^6,
and it's guaranteed that the value of X is between 0 and 10.
How do I calculate X, given that I can't simply divide the factorial by the power term, since both overflow a long integer?
My Approach
Work with modular arithmetic. Take a prime number greater than 10, say 101:

X = [(n! mod 101) * modular_inverse(2^r)] mod 101

Note that the modular inverse is easy to calculate, and 2^r mod 101 can also be calculated.
Problem:
It's not guaranteed that X is an integer; it can be fractional as well.
My method works fine when X is an integer. How do I deal with the case where X is a floating-point number?
If approximate results are OK and you have access to a math library with base-2 exponential (exp2 in C), natural log gamma (lgamma in C), and natural log (log in C), then you can do
exp2(lgamma(n+1)/log(2) - r).
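In Python, a sketch of the same one-liner (math.lgamma and math.log mirror the C functions; 2.0 ** x stands in for exp2):

import math

def x_approx(n, r):
    # log2(n!) = lgamma(n + 1) / ln(2); subtract r, then exponentiate
    return 2.0 ** (math.lgamma(n + 1) / math.log(2) - r)

print(x_approx(10, 19))   # 10!/2^19 = 3628800/524288, about 6.92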
Find the power at which 2 appears in n!. This is:

P = n/2 + n/2^2 + n/2^3 + ...

using integer division, and stopping once a term reaches 0.
If P >= r, then you have an integer result. You can find this result by computing the factorial while ignoring r powers of 2. Something like:

factorial = 1
for i = 2 to n:
    factor = i
    while factor % 2 == 0 and r != 0:
        factor /= 2
        r -= 1
    factorial *= factor
If P < r, set r = P, apply the same algorithm and divide the result by 2^(initial_r - P) in the end.
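A Python sketch of both steps (exact thanks to Python's arbitrary-precision integers, but the intermediate product grows enormous for n anywhere near 10^6, so treat it as an illustration rather than a fast solution):

def power_of_two_in_factorial(n):
    # P = n/2 + n/4 + n/8 + ... with integer division
    p, q = 0, 2
    while q <= n:
        p += n // q
        q *= 2
    return p

def x_exact(n, r):
    p = power_of_two_in_factorial(n)
    twos = min(r, p)          # powers of 2 we can cancel inside the product
    factorial = 1
    for i in range(2, n + 1):
        factor = i
        while factor % 2 == 0 and twos != 0:
            factor //= 2
            twos -= 1
        factorial *= factor
    # if P < r, divide the integer result by the leftover 2^(r - P)
    return factorial / 2 ** (r - p) if r > p else factorial

print(x_exact(10, 19))   # 10!/2^19, about 6.92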
Except for a very few cases (with small n and r), X will not be an integer -- if n >= 11, then 11 divides n! but doesn't divide any power of two, so if X were integral it would have to be at least 11.

One method would be: initialise X to one; then loop: if X > 10, divide by 2 until it's not; if X < 10, multiply by the next factors until it's not; stop when you run out of factors and powers of 2.
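A hypothetical Python rendering of that loop; since X itself stays near the 0-10 window, plain floating point suffices:

def x_interleaved(n, r):
    x = 1.0
    factor = 2        # next factorial factor to fold in
    twos = r          # halvings still owed
    while factor <= n or twos > 0:
        if x > 10 and twos > 0:
            x /= 2            # too big: spend a power of 2
            twos -= 1
        elif factor <= n:
            x *= factor       # too small (or in range): take the next factor
            factor += 1
        else:
            x /= 2            # factors exhausted: spend the remaining halvings
            twos -= 1
    return x

print(x_interleaved(10, 19))   # about 6.92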
An approach that would be tunable for precision/performance would be the following:
Store the factorial in an integer with a fixed number of bits. We can drop the lowest digits whenever the number gets too large, since they won't noticeably affect the overall result. By scaling this integer larger or smaller, the algorithm can be tuned for either performance or precision.

Whenever the integer would overflow due to a multiplication, shift it to the right by a few places and subtract the shift amount from r. At the end there should be a small number left as r and an integer v holding the most significant bits of the factorial; v can then be interpreted as a fixed-point number with r fractional bits.

Depending upon the required precision, this approach might even work with a long, though I haven't had time to test it beyond a bit of experimenting with a calculator.
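A rough Python sketch of the idea, with the bits cap as the tunable knob and Python integers standing in for a fixed-width register:

def x_fixed_point(n, r, bits=64):
    v = 1
    for i in range(2, n + 1):
        v *= i
        excess = v.bit_length() - bits
        if excess > 0:        # too wide: drop low bits,
            v >>= excess      # charging the shift against r
            r -= excess
    # v now holds the top bits of n!; the result is v / 2^r
    return v / (1 << r) if r >= 0 else float(v << -r)

print(x_fixed_point(10, 19))   # about 6.92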
Suppose all I have is a routine that generates 0 and 1 randomly with equal probability. How can I use this to find a random number between 1 and n? I can't use any other random function; I need to use my routine to achieve the goal. Any pointers will be helpful.
Let m = ceil(log2(n)); that is, m is the smallest number of bits such that 2^m >= n.
Generate a random bitstring of length m, and interpret it as a nonnegative integer k.
If k >= n, go back to step 2. Otherwise your random number is k + 1.
This is a form of rejection sampling that will give you a uniform distribution on the random integer k + 1 in the range [1, n].
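A sketch in Python, with random.getrandbits(1) standing in for the given 0/1 routine:

import random

def coin():                  # stand-in for the fair 0/1 routine
    return random.getrandbits(1)

def uniform_1_to_n(n):
    m = (n - 1).bit_length()         # smallest m with 2^m >= n
    while True:
        k = 0
        for _ in range(m):           # assemble an m-bit integer from flips
            k = (k << 1) | coin()
        if k < n:                    # rejection step
            return k + 1

print([uniform_1_to_n(6) for _ in range(10)])   # e.g. ten fair die rolls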
Simply use the formula:

rand() * (n - 1) + 1
What's the most efficient way to implement Karatsuba large number multiplication with input operands of unequal size and whose size is not a power of 2 and perhaps not even an even number? Padding the operands means additional memory, and I want to try and make it memory-efficient.
One of the things I notice in non-even-number size Karatsuba is that if we try to divide the number into "halves" as close to even as possible, one half will have m+1 elements and the other m, where m = floor(n/2), n being the number of elements in the split number. If both numbers are of the same odd size, then we need to compute products of two numbers of size m+1, requiring n+1 storage, as opposed to n in the case when n is even. So am I right in guessing that Karatsuba for odd sizes may require slightly more memory than for even sizes?
Most of the time the lengths of the operands will not be a power of 2; I think that is the rare case. Usually the operands will have different lengths, but that is not a problem for the Karatsuba algorithm.

Actually, I don't see any problem here. The overhead of an odd length is so small that it is definitely not a big deal. As for different lengths, let's assume X = 1234 and Y = 45.
So a = 12, b = 34, c = 0, d = 45.
Then X * Y = 10^4 * ac + 10^2 * (ad + bc) + bd, where:

ac = 0
bd = 34 * 45 = 1530
ad + bc = (a + b) * (c + d) - ac - bd = 540

And, assuming we can multiply 2-digit numbers easily, you get the answer 55530. The same as just multiplying 1234 * 45 in any calculator :) So I can't see any memory issue with numbers of different lengths.
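To make the hand calculation concrete, here is a small recursive Karatsuba sketch in Python; it splits at m = ceil(n/2) digits and copes with unequal lengths (c simply comes out as 0, as in the example above):

def karatsuba(x, y):
    if x < 10 or y < 10:              # small enough to multiply directly
        return x * y
    n = max(len(str(x)), len(str(y)))
    m = n // 2 + n % 2                # split position from the right
    a, b = divmod(x, 10 ** m)         # x = a*10^m + b
    c, d = divmod(y, 10 ** m)         # y = c*10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * m) + ad_bc * 10 ** m + bd

print(karatsuba(1234, 45))   # -> 55530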
You can multiply the numbers by powers of 10 so that each of them has an even number of digits. Apply the Karatsuba algorithm, then divide the answer by the factors of 10 you multiplied the original two numbers by.
E.g., for 123 * 12:

Compute 1230 * 1200 and divide the answer by 1000.
To answer your doubt in the comments above: the trick is to follow this formula for the powers of 10 in the decimal case:

X * Y = 10^(2m) * (A*C) + 10^m * ((A+B)*(C+D) - A*C - B*D) + B*D

m = n/2 + n%2, where n is the length of the number.

Refer to the Wikipedia article on Karatsuba; it explains this in detail.
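Using the karatsuba sketch from the previous answer, the padding trick might look like this (the scale factors are just the ones from the 123 * 12 example):

x, y = 123, 12
xp, yp = x * 10, y * 100           # 1230 and 1200: both now have 4 digits
print(karatsuba(xp, yp) // 1000)   # 1476, the same as 123 * 12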
I'm working on image processing, writing a parallel algorithm that iterates over all the pixels in an image and changes the surrounding pixels based on each pixel's value. In this algorithm minor non-determinism is acceptable, but I'd rather minimize it by only querying distant pixels simultaneously. Could someone give me a fast and simple algorithm that bijectively maps the integers below n to the integers below n, such that two integers that are close to each other before the mapping are likely to be far apart after it?
For simplicity let's say n is a power of two. Could you simply reverse the order of the least significant log2(n) bits of the number?
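A sketch of that bit reversal in Python, assuming n is a power of two:

def scatter(i, n):
    # reverse the low log2(n) bits of i
    bits = n.bit_length() - 1
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

print([scatter(i, 8) for i in range(8)])   # [0, 4, 2, 6, 1, 5, 3, 7]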
Considering the pixels to be a one-dimensional array, you could use the hash function j = i*p % n, where n is the zero-based index of the last pixel and p is a prime number chosen to place the pixels far enough apart at each step. % is the remainder operator in C; mathematically I'd write j(i) = i*p (mod n).

So if you want to jump at least 10 rows at each iteration, choose p > 10 * w, where w is the screen width. You'll want a lookup table for p as a function of n and w, of course.

Note that j hits every pixel as i goes from 0 to n.

CORRECTION: Use (mod (n + 1)), not (mod n). The last index is n, which cannot be reached using mod n, since n (mod n) == 0.
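A quick Python check of the corrected scheme; the width, height, and prime below are made-up example values, and the bijection requires gcd(p, number of pixels) == 1, which a prime p that does not divide the pixel count guarantees:

w, h = 640, 480
num_pixels = w * h
p = 6421                     # a prime > 10 * w, to jump at least 10 rows

order = [(i * p) % num_pixels for i in range(num_pixels)]
assert sorted(order) == list(range(num_pixels))   # every pixel hit exactly once
print(order[:3])   # [0, 6421, 12842]: consecutive i land about 10 rows apart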
Apart from reversing the bit order, you can use modular arithmetic. Say N is a prime number (like 521); then for all x = 0..520 you can define the function

f(x) = x * fac mod N

which is a bijection on 0..520. fac is an arbitrary number other than 0 and 1. For example, for N = 521 and fac = 122 the resulting mapping is quite uniform, and only a small proportion of the numbers land near the diagonal.
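A quick check of that bijection in Python, using the same N and fac:

N, fac = 521, 122            # N prime, fac anything other than 0 and 1
mapping = [(x * fac) % N for x in range(N)]
assert sorted(mapping) == list(range(N))       # f is a bijection on 0..520
print(mapping[:6])   # [0, 122, 244, 366, 488, 89]: neighbours scatter widely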
Given a function R which produces true random 32-bit numbers, I would like a function that returns random integers in the range 0 to n, where n is arbitrary (less than 2^32).
The function must produce all values 0 to n with equal probability.
I would like a function that executes in constant time with no if statements or loops, so something like the Java Random.nextInt(n) function is out.
I suspect that a simple modulus will not do the job unless n is a power of 2 -- am I right?
I have accepted Jason's answer, despite it requiring a loop of undetermined duration, since it appears to be the best method to use in practice and essentially answers my question. However I am still interested in any algorithms (even if less efficient) which would be deterministic in nature and be guaranteed to terminate, such as Mark Byers has pointed to.
Without discarding some of the values from the source, you can not do this. For example, a set of size 2^32 can not be partitioned into three equally sized sets. Therefore, it is impossible to do this without discarding some of the values and iterating until a non-discarded value is produced.
So, just use this (pseudocode):
rng is a random number generator producing uniform integers in [0, max)

compute m = max mod (n + 1)
do {
    draw a random number r from rng
} while (r >= max - m)
return r mod (n + 1)

Effectively, I am throwing out the top part of the distribution that causes problems. If rng is uniform on [0, max), then this algorithm will be uniform on [0, n].
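A sketch of the loop in Python, with random.getrandbits(32) playing the role of R:

import random

MAX = 1 << 32                          # R() yields uniform values in [0, MAX)

def uniform_0_to_n(n, rng=lambda: random.getrandbits(32)):
    m = MAX % (n + 1)                  # size of the biased tail
    while True:
        r = rng()
        if r < MAX - m:                # MAX - m is a multiple of n + 1,
            return r % (n + 1)         # so the modulo here is unbiased

print([uniform_0_to_n(5) for _ in range(10)])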
What you're asking for is impossible. You can't partition 2**32 numbers into three sets of exactly equal size.
If you want to guarantee an absolutely perfect uniform distribution over 0 <= x < n, where n is not a power of 2, then you have to be prepared to call R a potentially unbounded number of times. In practice you will typically need only one or two calls, but the code must in theory be able to call R any number of times, otherwise it can't be completely uniform.
I don't understand why a modulus wouldn't do what you want. Since R is a function that produces true random 32-bit numbers, each number has the same probability of being produced, right? So, if you use a modulus n:

randomNumber = R() % (n + 1) //EDITED: n+1 to return values from 0-n

then each number from 0 to n has the same probability!
You can generate two 32-bit numbers and put them together to form a 64-bit number. If you do not discard any numbers (and you need a result of no more than 32 bits), the worst case is a bias factor of 0.99999999976716936, meaning some numbers are produced with lower probability than others by that factor.

If you do still want to remove this small bias, the ratio of "out of range" hits will be very low, so you will rarely need more than one discard.
Depending upon your problem/use of the random numbers, maybe you could pre-allocate your random numbers using a slow method and put them into a simple array.
Then getNextRnd() can just return the next in the array.
Quick, fixed-time call, no branches, just wasting memory (which is usually pretty cheap) and process-initialization time.
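For what it's worth, a toy Python sketch of the pre-allocation idea; the pool size, n, and wrap-around policy are arbitrary choices here:

import random

POOL_SIZE = 1 << 20            # must be a power of two for the mask below
n = 200
# Fill the pool up front with any slow-but-uniform method.
pool = [random.randint(0, n) for _ in range(POOL_SIZE)]

_idx = 0
def getNextRnd():
    # Constant-time lookup; wraps (and thus reuses values) when exhausted.
    global _idx
    value = pool[_idx & (POOL_SIZE - 1)]
    _idx += 1
    return value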