I've been using a Calc spreadsheet to keep track of my D&D character, and I'm looking to increase the automation in it all the time - in this case, I want it to roll my dice for me.
My character might do something like 2d6 points of damage (roll 2 six-sided dice and add them together) or 12d8 points of damage (roll 12 eight-sided dice and add them together). If I know both of these numbers separately - the number of dice and the number of sides - can I 'roll' the total?
I'm aware of the RandBetween function, which when given (1, N) as arguments will simulate rolling an N-sided die. But M x RandBetween(1, N) just multiplies the roll by M, rather than 'rolling' M times.
For portability reasons, I don't want to write a macro for this. Is there any kind of function or trick that will let me add an arbitrary number of random numbers?
Make a list of random numbers in one column (for example column C below). Then use as many of them as needed in the formula to determine the outcome.
    A           B      C                      D                   E
1   # of dice   Sides  Random Numbers         Range               Outcome
2   2           6      =RANDBETWEEN(1,$B$2)   ="C2:C" & A2 + 1    =SUM(INDIRECT(D2))
3                      =RANDBETWEEN(1,$B$2)
4                      =RANDBETWEEN(1,$B$2)
5                      =RANDBETWEEN(1,$B$2)
6                      =RANDBETWEEN(1,$B$2)
7                      =RANDBETWEEN(1,$B$2)
8                      =RANDBETWEEN(1,$B$2)
9                      =RANDBETWEEN(1,$B$2)
10                     =RANDBETWEEN(1,$B$2)
11                     =RANDBETWEEN(1,$B$2)
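With 2 in A2 and 6 in B2, the formula in D2 evaluates to the text "C2:C3", so E2 sums just the first two random numbers via INDIRECT - that is, it rolls 2d6. Extend column C down as far as the largest number of dice you expect; rows beyond row A2 + 1 are simply ignored by the SUM.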
A random generator produces values from the list 0, 1, 2 (with equal probability for all three values).
How can I get a random number, 0 or 1, with equal probability, using no more than ten uses of the generator?
Your goal is ultimately to roll a 2-sided die given only a 3-sided die, using a fixed number of rolls of the 3-sided die.
However, this is impossible no matter how many rolls you do, since 2 does not divide 3 (and both are prime numbers): after n rolls there are 3^n equally likely outcomes, and 3^n is never divisible by 2, so no assignment of outcomes to {0, 1} can be exactly fair. The best you can do here is keep rejecting 2's (and the probability of getting n 2's in a row is 1/3^n, a probability that shrinks rapidly with increasing n).
More generally, it's impossible to roll a k-sided die using a p-sided die with a fixed number of rolls of the p-sided die unless "every prime number dividing k also divides p" (see Lemma 3 in "Simulating a dice with a dice" by B. Kloeckner).
See also the following questions:
Expand a random range from 1–5 to 1–7
Frugal conversion of uniformly distributed random numbers from one range to another
Best way to generate U(1,5) from U(1,3)?
With the edit, in order to shrink the range from {0, 1, 2} to {0, 1}, you can do a few things:
- discard rolls of 2 and re-roll;
- if you are certain to roll an even number of times, alternate before every other roll whether a 2 will count as 0 or 1; averaged over a pair of rolls, each output is then 0 with probability (1/3) * .5 + (2/3) * .5 = 1/2.
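As a quick illustration of the rejection approach, here is a minimal Python sketch (random.randint stands in for the given generator, and the function name is mine); note that with the ten-use cap it can still fail, with probability 1/3^10:

import random

def fair_bit(max_tries=10):
    # Re-roll on a 2; each accepted roll is 0 or 1 with equal probability.
    for _ in range(max_tries):
        r = random.randint(0, 2)   # stand-in for the 0/1/2 generator
        if r < 2:
            return r
    return None                    # all ten rolls were 2s: probability 1/3**10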
Let X be a random variable on a probability space (Ω, P). Suppose X ~ U({1,2,3}). Does this mean the space (Ω, P) is uniform?
I tried to come up with a counterexample but it did not work out; I still think the statement isn't right.
You are correct that the statement isn't right. Here is a concrete counterexample with a bit of R code to illustrate it.
A standard 6-sided die has 3 pairs: (1,6), (2,5), (3,4) where each number in the pair is on the opposite side of the other. Suppose that such a die is biased so that each pair is equally likely but that in a pair the larger of the two numbers is twice as likely as the smaller. For example, 6 is twice as likely as 1. This is easily seen to imply that the numbers 1,2,3 appear with probability 1/9 and the numbers 4,5,6 appear with probability 2/9.
You can simulate 1000 rolls like this:
rolls <- sample(1:6,1000,replace = TRUE, prob = c(1/9,1/9,1/9,2/9,2/9,2/9))
Here is a display created by making a barplot of the tabulation of the results, barplot(table(rolls)), confirming the obvious fact that the distribution is not uniform.
We can define X on it as the function which indicates which pair a roll is in (so that 1,6 are in the first pair, 2,5 in the second, 3,4 in the third):
X = function(x){min(x,7-x)}
and then:
barplot(table(sapply(rolls,X)))
leading to a plot which confirms the obvious fact that X is uniform: each pair has probability 1/9 + 2/9 = 1/3.
This is a purely theoretical question.
We all know that most, if not all, random-number generators actually only generate pseudo-random numbers.
Let's say I want a random number from 10 to 20. I can do this as follows (myRandomNumber being an integer-type variable):
myRandomNumber = rand(10, 20);
However, if I execute this statement:
myRandomNumber = rand(5, 10) + rand(5, 10);
Is this method more random?
No.
The randomness is not cumulative. The rand() function uses a uniform distribution between your two defined endpoints.
Adding two uniform distributions does not give another uniform distribution. The sum has a triangular, pyramid-shaped distribution, with most of the probability concentrated toward the center, because the density of a sum of independent variables is the convolution of their densities.
I urge you to read this:
Uniform Distribution
and this:
Convolution
Pay special attention to what happens with the two uniform distributions on the top right of the screen.
You can prove this to yourself by writing all the sums to a file and then plotting them in Excel. Make sure you give yourself a large enough sample size; 25000 should be sufficient.
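If you'd rather skip the file, here is a rough Python sketch of the same experiment (assuming rand(a, b) is inclusive on both ends, like random.randint):

import random
from collections import Counter

n = 25000
flat = Counter(random.randint(10, 20) for _ in range(n))
pyramid = Counter(random.randint(5, 10) + random.randint(5, 10) for _ in range(n))

for value in range(10, 21):
    print(value, flat[value], pyramid[value])
# flat's counts are roughly equal; pyramid's counts peak at 15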
The best way to understand this is by considering the popular fair ground game "Lucky Seven".
If we roll a six sided die, we know that the probability of obtaining any of the six numbers is the same - 1/6.
What if we roll two dice and add the numbers that appear on the two?
The sum can range from 2 (both dice show 'one') up to 12 (both dice show 'six').
The probabilities of obtaining different numbers from 2 to 12 are no longer uniform. The probability of obtaining a 'seven' is the highest. There can be a 1+6, a 6+1, a 2+5, a 5+2, a 3+4 and a 4+3. Six ways of obtaining a 'seven' out of 36 possibilities.
If we plot the distribution we get a pyramid. The probabilities would be 1,2,3,4,5,6,5,4,3,2,1 (of course each of these has to be divided by 36).
The pyramidal figure (and the probability distribution) of the sum can be obtained by 'convolution'.
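A minimal sketch of that convolution in Python (assuming NumPy is available):

import numpy as np

die = np.full(6, 1/6)           # pmf of one fair die (faces 1..6)
total = np.convolve(die, die)   # pmf of the sum (values 2..12)
print(total * 36)               # -> [1. 2. 3. 4. 5. 6. 5. 4. 3. 2. 1.]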
If we know the 'expected value' and standard deviation ('sigma') of each of the two random numbers, we can perform a quick and ready calculation of the expected value and sigma of their sum.
The expected value of the sum is simply the sum of the two individual expected values.
The sigma of the sum, for independent variables, is obtained by applying the "Pythagorean theorem" to the two individual sigmas (the square root of the sum of the squares of the sigmas).
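For example, a single fair die has expected value 3.5 and sigma √(35/12) ≈ 1.71, so the sum of two independent dice has expected value 3.5 + 3.5 = 7 and sigma √(1.71² + 1.71²) ≈ 2.42.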
I am looking to enumerate a random permutation of the numbers 1..N in fixed space. This means that I cannot store all numbers in a list. The reason for that is that N can be very large, more than available memory. I still want to be able to walk through such a permutation of numbers one at a time, visiting each number exactly once.
I know this can be done for certain N: many random number generators cycle through their whole state space in a seemingly random order, but visit every state exactly once. A good random number generator with a state size of 32 bits will emit a permutation of the numbers 0..(2^32)-1, every number exactly once.
I want to be able to pick N to be any number at all, not constrained to powers of 2, for example. Is there an algorithm for this?
The easiest way is probably to just create a full-range PRNG for a larger range than you care about, and when it generates a number larger than you want, just throw it away and get the next one.
Another possibility that's pretty much a variation of the same would be to use a linear feedback shift register (LFSR) to generate the numbers in the first place. This has a couple of advantages: first of all, an LFSR is probably a bit faster than most PRNGs. Second, it is (I believe) a bit easier to engineer an LFSR that produces numbers close to the range you want, and still be sure it cycles through the numbers in its range in (pseudo)random order, without any repetitions.
Without spending a lot of time on the details, the math behind LFSRs has been studied quite thoroughly. Producing one that runs through all the numbers in its range without repetition simply requires choosing a set of "taps" that correspond to a primitive polynomial. If you don't want to search for those yourself, it's pretty easy to find tables of known ones for almost any reasonable size (e.g., doing a quick look, the Wikipedia article lists them for sizes up to 19 bits).
If memory serves, there's at least one primitive polynomial of every possible bit size. That translates to the fact that, in the worst case, you can create a generator with roughly twice the range you need, so on average you're throwing away (roughly) every other number you generate. Given the speed of an LFSR, I'd guess you can do that and still maintain quite acceptable speed.
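A minimal Python sketch of that discard idea with a Galois LFSR (the 4-bit taps value 0xC is one full-period choice, easy to verify by hand; for real sizes, take taps from the published tables):

def lfsr_perm(nbits, taps, n, seed=1):
    # Cycles through all values 1..2**nbits - 1 exactly once (given
    # full-period taps), discarding anything larger than n.
    mask = (1 << nbits) - 1
    state = seed & mask
    for _ in range(mask):
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        if state <= n:
            yield state

print(list(lfsr_perm(4, 0xC, 10)))  # a permutation of 1..10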
One way to do it would be
Find a prime p larger than N, preferably not much larger.
Find a primitive root g modulo p, that is, a number 1 < g < p such that g^k ≡ 1 (mod p) if and only if k is a multiple of p-1.
Go through g^k (mod p) for k = 1, 2, ..., ignoring the values that are larger than N.
For every prime p, there are φ(p-1) primitive roots, so one always exists. However, it may take a while to find one. Finding a suitable prime is much easier in general.
For finding a primitive root, I know nothing substantially better than trial and error, but one can increase the probability of a fast find by choosing the prime p appropriately.
Since the number of primitive roots is φ(p-1), if one randomly chooses r in the range from 1 to p-1, the expected number of tries until one finds a primitive root is (p-1)/φ(p-1); hence one should choose p so that φ(p-1) is relatively large, which means that p-1 must have few distinct prime divisors (and preferably only large ones, except for the factor 2).
Instead of choosing randomly, one can also try in sequence whether 2, 3, 5, 6, 7, 10, ... is a primitive root, skipping perfect powers (or not; they are in general quickly eliminated); that should not greatly affect the number of tries needed.
So it boils down to checking whether a number x is a primitive root modulo p. If p-1 = q^a * r^b * s^c * ... with distinct primes q, r, s, ..., x is a primitive root if and only if
x^((p-1)/q) % p != 1
x^((p-1)/r) % p != 1
x^((p-1)/s) % p != 1
...
thus one needs a decent modular exponentiation (exponentiation by repeated squaring lends itself well to that, reducing by the modulus at each step), and a good method to find the prime factorisation of p-1. Note, however, that even naive trial division would be only O(√p), while the generation of the permutation is Θ(p), so it's not paramount that the factorisation be optimal.
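Here is a minimal Python sketch of those steps (using SymPy's isprime and factorint for the number-theoretic legwork; the naive prime search and trial-and-error root search are just for illustration):

from sympy import isprime, factorint

def primitive_root_perm(n):
    # Yield 1..n in pseudo-random order via powers of a primitive root.
    p = n + 1                      # find a prime p > n
    while not isprime(p):
        p += 1
    prime_factors = factorint(p - 1)
    # g is a primitive root iff g^((p-1)/q) != 1 (mod p) for every prime q | p-1
    g = next(g for g in range(2, p)
             if all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors))
    x = 1
    for _ in range(p - 1):         # g^1, g^2, ..., g^(p-1) cover 1..p-1
        x = x * g % p
        if x <= n:                 # ignore the values larger than n
            yield x

print(list(primitive_root_perm(10)))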
Another way to do this is with a block cipher; see this blog post for details.
The blog post links to the paper Ciphers with Arbitrary Finite Domains, which contains a bunch of solutions.
Consider the prime 3. To fully express all possible outputs, think of it this way...
bias + step mod prime
The bias is just an offset. step is an accumulator (if it's 1, for example, the sequence would simply be 0, 1, 2; a step of 2 would give 0, 2, 4), and prime is the prime number we want to generate the permutations against.
For example, a simple sequence of 0, 1, 2 would be...
0 + 0 mod 3 = 0
0 + 1 mod 3 = 1
0 + 2 mod 3 = 2
Modifying a couple of those variables for a second, we'll take bias of 1 and step of 2 (just for illustration)...
1 + 2 mod 3 = 0
1 + 4 mod 3 = 2
1 + 6 mod 3 = 1
You'll note that we produced an entirely different sequence. No number within the set repeats itself and all numbers are represented (it's bijective). Each unique combination of bias and step results in a different permutation of the set, prime × (prime - 1) of the prime! possible ones. In the case of a prime of 3, that covers all 6 possible permutations:
0,1,2
0,2,1
1,0,2
1,2,0
2,0,1
2,1,0
If you do the math on the variables above you'll note that it results in the same information requirements...
1/3! = 1/6 = 0.166..
... vs...
1/3 (bias) * 1/2 (step) => 1/6 = 0.166..
Restrictions are simple: bias must be within 0..P-1 and step must be within 1..P-1 (I have functionally just been using 0..P-2 and adding 1 in the arithmetic in my own work). Other than that, it works with all prime numbers no matter how large, and will permute all possible unique sets of them without needing memory beyond a couple of integers (each technically requiring slightly fewer bits than the prime itself).
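A minimal Python sketch of this generator (the function name is mine):

def prime_perm(prime, bias, step):
    # Yields a permutation of 0..prime-1, given 0 <= bias < prime and
    # 1 <= step < prime; step is always coprime to a prime modulus.
    for k in range(prime):
        yield (bias + step * k) % prime

print(list(prime_perm(3, 1, 2)))  # -> [1, 0, 2]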
Note carefully that this generator is not meant to be used to generate sets that are not prime in number. It's entirely possible to do so, but not recommended for security sensitive purposes as it would introduce a timing attack.
That said, if you would like to use this method to generate a set sequence that is not a prime, you have two choices.
First (and the simplest/cheapest), pick the prime number just larger than the set size you're looking for and have your generator simply discard anything that doesn't belong. Once more, danger, this is a very bad idea if this is a security sensitive application.
Second (by far the most complicated and costly), you can recognize that all numbers are composed of prime numbers and create multiple generators that then produce a product for each element in the set. In other words, an n of 6 would involve all possible prime generators that could match 6 (in this case, 2 and 3), multiplied in sequence. This is both more expensive (although mathematically more elegant) and also introduces a timing attack, so it's even less recommended.
Lastly, if you need a generator for bias and/or step... why don't you use another of the same family :). Suddenly you're extremely close to creating true simple random samples (which is usually not easy).
The fundamental weakness of LCGs (x=(x*m+c)%b style generators) is useful here.
If the generator is properly formed, then x % f is also a repeating sequence of all values lower than f (provided f is a factor of b).
Since b is usually a power of 2, this means that you can take a 32-bit generator and reduce it to an n-bit generator by masking off the top bits, and it will have the same full-range property.
This means that you can reduce the number of discard values to be fewer than N by choosing an appropriate mask.
Unfortunately, an LCG is a poor generator for exactly the same reason as given above.
Also, this has exactly the same weakness as I noted in a comment on @JerryCoffin's answer: it will always produce the same sequence, and the only thing the seed controls is where to start in that sequence.
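A minimal Python sketch of the masking idea (keeping the caveats above in mind; a = 5, c = 3 satisfy the Hull-Dobell conditions for a power-of-2 modulus, so the LCG is full-period):

def lcg_perm(n, seed=0):
    # Enumerate 0..n-1 once each with the smallest power-of-2 LCG >= n.
    nbits = max(1, (n - 1).bit_length())
    m = 1 << nbits
    a, c = 5, 3
    x = seed % m
    for _ in range(m):
        x = (a * x + c) % m
        if x < n:          # fewer than n values ever get discarded
            yield x

print(sorted(lcg_perm(10)))  # -> [0, 1, 2, ..., 9]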
Here's some SageMath code that should generate a random permutation the way Daniel Fischer suggested:
def random_safe_prime(lbound):
    while True:
        q = random_prime(lbound, lbound=lbound // 2)
        p = 2 * q + 1
        if is_prime(p):
            return p, q

def random_permutation(n):
    p, q = random_safe_prime(n + 2)   # safe prime p = 2q + 1 with p >= n + 2
    while True:
        r = randint(2, p - 1)
        # for a safe prime, r is a primitive root iff r^2 != 1 and r^q != 1 (mod p)
        if pow(r, 2, p) != 1 and pow(r, q, p) != 1:
            i = 1
            while True:
                x = pow(r, i, p)
                if x == 1:             # the cycle is complete
                    return
                if 0 <= x - 2 < n:     # keep only values that map into 0..n-1
                    yield x - 2
                i += 1
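For example, list(random_permutation(10)) should give the numbers 0 through 9 in a pseudo-random order.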
I have to generate two random sets of numbers ('matrices'), each containing from 2 to 10 three-digit numbers, like this:
matrix 1: 994,878,129,121
matrix 2: 272,794,378,212
The numbers in both matrices have to be greater than 100 and less than 999, BUT the means of the two matrices have to be in the ratio 1:2 or 2:3, whichever constraint the user inputs.
My math skills are kind of limited, so any ideas how I can make this happen?
In order to do this, you have to know how many numbers are in each list. I'm assuming from your example that there are four numbers in each.
1. Fill the first list with four random numbers.
2. Calculate the mean of the first list.
3. Multiply the mean by 2 or by 3/2, whichever ratio the user input. This is the required mean of the second list.
4. Multiply by 4. This is the required total of the second list.
5. Generate 3 random numbers.
6. Subtract the total of the three numbers in step 5 from the total in step 4. This is the fourth number for the second list.
7. If the number in step 6 is not in the correct range, start over from step 5.
Note that the last number in the second list is not truly random, since it's based on the other values in the list.
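A minimal Python sketch of those steps (the names are mine; note that if the target mean from step 3 is pushed past the 100-999 range, step 7 will retry forever, which is exactly the limitation the next answer explains):

import random

def two_lists(ratio, size=4, lo=101, hi=998):
    first = [random.randint(lo, hi) for _ in range(size)]  # steps 1-2
    target_total = round(sum(first) * ratio)               # steps 3-4
    while True:                                            # steps 5-7
        rest = [random.randint(lo, hi) for _ in range(size - 1)]
        last = target_total - sum(rest)
        if lo <= last <= hi:
            return first, rest + [last]

first, second = two_lists(1.5)  # 2:3 ratio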
You have a set of random numbers, s1.
s1= [ random.randint(100,999) for i in range(n) ]
For some other set, s2, to have a different mean it's simply got to have a different range. Either you select values randomly from a different range, or you filter random values to get a different range.
No matter how many random numbers you select from the range 100 to 999, the mean is always just about 550 (the midpoint of the range). The odds of the sample mean landing far from that value follow an approximately normal distribution on either side of the mean, and they shrink quickly.
You can't have a radically different mean with values selected from the same range.
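A quick Python check of that claim, sampling many lists and looking at the spread of their means:

import random

means = [sum(random.randint(100, 999) for _ in range(100)) / 100
         for _ in range(1000)]
print(min(means), max(means))  # everything clusters near 549.5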