I've checked out the docs for this on docs.racket-lang.org and looked all over the internet for a possible implementation, but all I'm looking for is a random number generating function that returns a number between 0 and 1. For instance, in JavaScript:
> Math.random()
0.16275149723514915
I'm looking for this in Racket, and I'd implement it if I could, but I just don't have the chops (yet).
Yes, just use (random). Example usages:
> (random)
0.9007041222291202
> (random)
0.6808167485543256
I am struggling to find a way to generate a random number within a given interval in PostScript.
Basically PostScript has three functions to help you generate (pseudo-)random numbers. Those are rand, srand and rrand.
The latter two are for passing a seed to the number generator so that specific results can be reproduced. At least that's what I understood they are for. Anyway, they don't seem suitable for my case.
So rand seems to be the only function I can use to generate a random number, but...
rand returns a random integer in the range 0 to 2^31 − 1 (from the PostScript Language Reference, page 637 (651 in the PDF))
This is far beyond the interval I'm looking for. I'm more interested in values up to the small thousands, maybe 10,000 or so, and in small float values up to 100, all with a lower limit of 0.
I thought I could just narrow my numbers down by simple divisions and by extracting roots, but that tends to give me unusably small values in quite a lot of cases. I am wondering if there are robust ways to either shrink a large number down to what I need or, and I'd prefer that, only generate numbers in the desired interval.
Besides: while loops are not possible in PostScript, otherwise I'd have written a function that generates numbers until they fit in my interval.
Any hints on how to break numbers down into my interval?
mod is often good enough and it's fast. But you may get a more uniform distribution by using floating-point ops.
rand 16#7fffffff div 100 mul cvi   % scale rand to [0,1], stretch to [0,100], truncate to an integer
This is because mod discards the upper bits of the input, while the PRNG is usually trying to randomize over all of the bits. By scaling down and then back up, all of the bits contribute something, if only through rounding effects.
Just use the modulo operator to get it down to the size you want:
GS>rand 100 mod stack
7
Is it possible to reverse a pseudo random number generator?
For example, take an array of generated numbers and get the original seed.
If so, how would this be implemented?
This is absolutely possible - you just have to create a PRNG which suits your purposes. It depends on exactly what you need to accomplish - I'd be happy to offer more advice if you describe your situation in more detail.
For general background, here are some resources for inverting a Linear Congruential Generator:
Reversible pseudo-random sequence generator
pseudo random distribution which guarantees all possible permutations of value sequence - C++
And here are some for inverting the Mersenne Twister:
http://www.randombit.net/bitbashing/2009/07/21/inverting_mt19937_tempering.html
http://b10l.com/reversing-the-mersenne-twister-rng-temper-function/
In general, no. It should be possible for most generators if you have the full array of numbers. If you don't have all of the numbers, or don't know which outputs you have (is this the 12th or the 300th?), you can't figure it out at all, because you wouldn't know where to stop.
You would have to know the details of the generator. Inverting a linear congruential generator is going to be different from inverting a counter-based PRNG, which is going to be different from the Mersenne Twister, which is going to be different again for a Fibonacci generator. Plus you would probably need to know the parameters of the generator. If you have all of that AND the step function is invertible, then it is possible. As to how, it really depends on the PRNG.
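As a concrete illustration of why the parameters matter (not tied to any real library's generator), here is a minimal C# sketch that inverts one step of a linear congruential generator; the constants are the commonly published "Numerical Recipes" ones and are an assumption, not something from the question:

using System;

class LcgInversionSketch
{
    // Hypothetical LCG: next = state * A + C (mod 2^32).
    const uint A = 1664525;
    const uint C = 1013904223;

    static uint Next(uint state) { return unchecked(state * A + C); }

    // Invert one step: state = (next - C) * A^-1 (mod 2^32).
    static uint Previous(uint next) { return unchecked((next - C) * ModInverse(A)); }

    // Modular inverse of an odd number modulo 2^32 via Newton's iteration:
    // each pass doubles the number of correct low-order bits.
    static uint ModInverse(uint a)
    {
        uint x = a; // a*a = 1 (mod 8), so x is already correct to 3 bits
        for (int i = 0; i < 4; i++)
            x = unchecked(x * (2u - a * x));
        return x;
    }

    static void Main()
    {
        uint seed = 12345u;
        uint s1 = Next(seed), s2 = Next(s1);
        Console.WriteLine(Previous(s2) == s1);   // True
        Console.WriteLine(Previous(s1) == seed); // True
    }
}

This works because the modulus is a power of two and the multiplier is odd, so the multiplier has a modular inverse; inverting a Mersenne Twister needs a very different approach (see the untempering links in the other answer).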
Use the language Janus, a time-reversible language for doing reversible computing.
You could probably do something like create a program that does this (pseudo-code):
x = seed
x = my_Janus_prng(x)
x = reversible_modulus_op(x, N) + offset
Janus can give you an inverted program that takes the output number (plus whatever other data it needs to undo each step) and runs everything backwards, ending with x = seed.
I don't know all the details about Janus or how you could do this, but just thought I would mention it.
Clearly, what you want to do is probably the better idea, because if the RNG is not an injective function, it is unclear what a given output should map back to.
So ideally you would write a Janus program that outputs an array; the inverted program would then take that array as its input.
I want to generate a sequence of random numbers that will be used to pick tiles for a "maze". Each maze will have an ID, and I want to use that ID as the seed to a pseudo-random function. That way I can generate the same maze over and over given its maze ID. Preferably I do not want to use a language's built-in pseudo-random function, since I do not have control over the algorithm and it could change from platform to platform. As such, I would like to know:
How should I go about implementing my own pseudo random function?
Is it even feasible to generate platform independent pseudo random numbers?
Yes, it is possible.
Here is an example of such an algorithm (and its use) for noise generation.
Those particular random functions (Noise1, Noise2, Noise3, ..) use input parameters and calculate the pseudo random values from there.
Their output range is from 0.0 to 1.0.
And there are many more out there (like those mentioned in the comments).
UPDATE 2019
Looking back at this answer, a better-suited choice would be the Mersenne Twister mentioned below, or any implementation of xorshift.
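For reference, xorshift is only a few lines of code. Here is a minimal C# sketch of a xorshift32 generator seeded from a maze ID; the zero-seed fallback, the [0, 1) mapping and the tile-picking helper are illustrative choices of mine, not part of any published spec:

using System;

// Marsaglia's xorshift32 (the 13/17/5 shift triple), seeded from a maze id.
class MazeRng
{
    private uint state;

    public MazeRng(uint mazeId)
    {
        // xorshift must not be seeded with 0; substitute an arbitrary nonzero constant.
        state = mazeId == 0 ? 0x9E3779B9u : mazeId;
    }

    public uint NextUInt()
    {
        uint x = state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return state = x;
    }

    // Value in [0, 1), in case you want to pick tiles by weight.
    public double NextDouble() { return NextUInt() / 4294967296.0; }

    // Tile index in [0, tileCount).
    public int NextTile(int tileCount) { return (int)(NextUInt() % (uint)tileCount); }
}

class Demo
{
    static void Main()
    {
        MazeRng a = new MazeRng(42), b = new MazeRng(42);
        // Same maze id => same tile sequence, on any platform, because the whole
        // generator lives in your own code.
        Console.WriteLine(a.NextTile(16) == b.NextTile(16)); // True
    }
}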
The Mersenne Twister may be a good pick for this. As you can see from the pseudocode on wikipedia, you can seed the RNG with whatever you prefer to produce identical values for any instance with that seed. In your case, the maze ID or the hash of the maze ID.
If you are using Python, you can use the random module by putting import random at the top of your file. Then, to use it, you write:
var = random.randint(1000, 9999)
This gives var a random four-digit number that can be used as the ID.
If you are using another language, there is likely a similar module.
I need a random number generation algorithm that generates a random number for a specific input, but generates the same number every time it gets the same input. Is this kind of algorithm available somewhere on the internet, or do I have to build one? If it exists and anyone knows of it, please let me know (C, C++, Java, C#, or any pseudocode would help a lot).
Thanks in advance.
You may want to look at the built in Java class Random. The description fits what you want.
Usually the standard random number generator implementation depends on a seed value. You can use the standard generator with the seed set to some hash of your input.
C# example:
string input = "Foo";
Random rnd = new Random(input.GetHashCode());
int random = rnd.Next();
I would use a hash function like SHA or MD5; this will generate the same output for a given input every time.
An example to generate a hash in java is here.
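Sticking with C# for consistency, here is a sketch of that idea: derive a stable 32-bit value from an MD5 digest of the input (the helper name is my own). Unlike string.GetHashCode(), a cryptographic hash returns the same value on every run and on every runtime:

using System;
using System.Security.Cryptography;
using System.Text;

class DeterministicFromHash
{
    // Same input string => same 32-bit value, always.
    static int ValueFor(string input)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToInt32(hash, 0); // take the first 4 bytes of the digest
        }
    }

    static void Main()
    {
        Console.WriteLine(ValueFor("Foo")); // prints the same number every time
        // The value can also serve as a reproducible seed for Random:
        Random rnd = new Random(ValueFor("Foo"));
        Console.WriteLine(rnd.Next());
    }
}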
The Mersenne Twister algorithm is a good predictable random number generator. There are implementations in most languages.
How about..
public int getRandomNumber()
{
// decided by a roll of a dice. Can't get fairer than that!
return 4;
}
Or did you want a random number each time?
:-)
Some code like this should work for you:
MIN_VALUE + ((MAX_VALUE - MIN_VALUE +1) * RANDOM_INPUT / (MAX_VALUE + 1))
MIN_VALUE - Lower Bound
MAX_VALUE - Upper Bound
RANDOM_INPUT - Input Number
All pseudo-random number generators (which is what most RNGs on computers are) will generate the same sequence of numbers from a starting input, the seed. So you can use whatever RNG is available in your programming language of choice.
Given that you want one sample from a given seed, I'd steer clear of the Mersenne Twister and other complex RNGs with good statistical properties, since you don't need them. You could use a simple LCG, or you could use a hash function like MD5. One problem with an LCG is that for a small seed the first value often lands in the same region, since the modulo has no effect yet; so if your input values are typically small, I'd use MD5, for example.
I have an array of numbers that potentially have up to 8 decimal places and I need to find the smallest common number I can multiply them by so that they are all whole numbers. I need this so all the original numbers can all be multiplied out to the same scale and be processed by a sealed system that will only deal with whole numbers, then I can retrieve the results and divide them by the common multiplier to get my relative results.
Currently we do a few checks on the numbers and multiply by 100 or 1,000,000, but the processing done by the *sealed system can get quite expensive when dealing with large numbers, so multiplying everything by a million just for the sake of it isn’t really a great option. As an approximation, let’s say that the sealed algorithm gets 10 times more expensive every time you multiply by a factor of 10.
What is the most efficient algorithm that will also give the best possible result to accomplish what I need, and is there a mathematical name and/or formula for what I need?
*The sealed system isn’t really sealed. I own/maintain the source code for it, but it’s 100,000-odd lines of proprietary magic and it has been thoroughly bug and performance tested; altering it to deal with floats is not an option for many reasons. It is a system that creates a grid of X by Y cells, then rects that are X by Y are dropped into the grid, “proprietary magic” occurs and results are spat out – obviously this is an extremely simplified version of reality, but it’s a good enough approximation.
So far there are quite a few good answers and I wondered how I should go about choosing the ‘correct’ one. To begin with I figured the only fair way was to implement each solution and performance test it, but I later realised that pure speed wasn’t the only relevant factor – a more accurate solution is also very relevant. I wrote the performance tests anyway, but currently I’m choosing the correct answer based on speed as well as accuracy, using a ‘gut feel’ formula.
My performance tests process 1000 different sets of 100 randomly generated numbers.
Each algorithm is tested using the same set of random numbers.
Algorithms are written in .Net 3.5 (although thus far they would be 2.0 compatible)
I tried pretty hard to make the tests as fair as possible.
Greg – Multiply by large number and then divide by GCD – 63 milliseconds
Andy – String Parsing – 199 milliseconds
Eric – Decimal.GetBits – 160 milliseconds
Eric – Binary search – 32 milliseconds
Ima – sorry, I couldn’t figure out how to implement your solution easily in .Net (I didn’t want to spend too long on it)
Bill – I figure your answer was pretty close to Greg’s, so I didn’t implement it. I’m sure it’d be a smidge faster but potentially less accurate.
So Greg’s “Multiply by large number and then divide by GCD” solution was the second fastest algorithm and it gave the most accurate results, so for now I’m calling it correct.
I really wanted the Decimal.GetBits solution to be the fastest, but it was very slow; I’m unsure if this is due to the conversion of a Double to a Decimal or to the bit masking and shifting. There should be a similar usable solution for a straight Double using BitConverter.GetBytes and some of the knowledge contained here: http://blogs.msdn.com/bclteam/archive/2007/05/29/bcl-refresher-floating-point-types-the-good-the-bad-and-the-ugly-inbar-gazit-matthew-greig.aspx but my eyes just kept glazing over every time I read that article, and I eventually ran out of time to try to implement a solution.
I’m always open to other solutions if anyone can think of something better.
I'd multiply by something sufficiently large (100,000,000 for 8 decimal places), then divide by the GCD of the resulting numbers. You'll end up with a pile of smallest integers that you can feed to the other algorithm. After getting the result, reverse the process to recover your original range.
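A rough C# sketch of that approach (the helper names are mine, and the rounding assumes the inputs really do have at most 8 decimal places):

using System;
using System.Linq;

class ScaleByGcd
{
    // Euclid's algorithm for the greatest common divisor.
    static long Gcd(long a, long b)
    {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a;
    }

    // Multiply by 1e8 (enough for 8 decimal places), then divide everything by the
    // GCD of the results. The multiplier that was effectively applied is returned
    // so the results coming back from the integer-only system can be scaled back.
    static long[] ToSmallestIntegers(double[] values, out double effectiveMultiplier)
    {
        long[] raw = values.Select(v => (long)Math.Round(v * 1e8)).ToArray();
        long gcd = raw.Aggregate(0L, (g, r) => Gcd(g, Math.Abs(r)));
        if (gcd == 0) gcd = 1; // all inputs were zero
        effectiveMultiplier = 1e8 / gcd;
        return raw.Select(r => r / gcd).ToArray();
    }

    static void Main()
    {
        double multiplier;
        long[] ints = ToSmallestIntegers(new[] { 0.25, 1.5, 3.75 }, out multiplier);
        Console.WriteLine(string.Join(", ", ints)); // 1, 6, 15
        Console.WriteLine(multiplier);              // 4 (divide the results by this afterwards)
    }
}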
Multiply all the numbers by 10 until you have integers.
Divide by 2, 3, 5, 7 while you still have all integers.
I think that covers all cases:
2.1 * 10/7 -> 3
0.008 * 10^3/2^3 -> 1
That's assuming your multiplier can be a rational fraction.
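A hedged C# sketch of that recipe; the floating-point tolerance and the 1e8 cap are my assumptions, based on the question's "up to 8 decimal places":

using System;
using System.Linq;

class MultiplyThenReduce
{
    // Multiply everything by 10 until all values look like integers, then divide
    // out common factors of 2, 3, 5 and 7 while all values stay integers.
    static long[] Scale(double[] values)
    {
        double factor = 1;
        while (factor < 1e8 &&
               values.Any(v => Math.Abs(v * factor - Math.Round(v * factor)) > 1e-6))
            factor *= 10;

        long[] ints = values.Select(v => (long)Math.Round(v * factor)).ToArray();

        foreach (int p in new[] { 2, 3, 5, 7 })
            while (ints.Any(i => i != 0) && ints.All(i => i % p == 0))
                ints = ints.Select(i => i / p).ToArray();

        return ints;
    }

    static void Main()
    {
        Console.WriteLine(string.Join(", ", Scale(new[] { 2.1, 0.008 })));
        // 525, 2  -- i.e. a common (rational) multiplier of 250
    }
}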
If you want to find some integer N so that N*x is an exact integer for every float x in a given set, then you have a basically unsolvable problem. Suppose x is the smallest positive float your type can represent, say 10^-30. If you multiply all your numbers by 10^30 and then try to represent them in binary (otherwise, why are you even trying so hard to make them ints?), then you'll lose basically all the information of the other numbers due to overflow.
So here are two suggestions:
If you have control over all the related code, find another approach. For example, if you have some function that takes only ints, but you have floats and you want to feed your floats into it, just rewrite or overload that function to accept floats as well.
If you don't have control over the part of your system that requires ints, then choose a precision you care about, accept that you will simply have to lose some information sometimes (but it will always be "small" in some sense), multiply all your floats by that constant, and round to the nearest integer.
By the way, if you're dealing with fractions rather than floats, then it's a different game. If you have a bunch of fractions a/b, c/d, e/f in lowest terms, and you want the smallest N such that N*(each fraction) is an integer, then N is the least common multiple of the denominators: N = lcm(b, d, f), where lcm(x, y) = x*y / gcd(x, y) and lcm(b, d, f) = lcm(lcm(b, d), f). You can use Euclid's algorithm to find the gcd of any two numbers.
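For example, a small C# sketch of the fraction case (the denominators are made-up values):

using System;
using System.Linq;

class FractionLcm
{
    // Euclid's algorithm, as mentioned above.
    static long Gcd(long a, long b)
    {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a;
    }

    // lcm(x, y) = x * y / gcd(x, y)
    static long Lcm(long x, long y) { return x / Gcd(x, y) * y; }

    static void Main()
    {
        // Denominators of fractions already in lowest terms, e.g. 1/4, 5/6, 3/10.
        long[] denominators = { 4, 6, 10 };
        long n = denominators.Aggregate(Lcm);
        Console.WriteLine(n); // 60 -- multiplying each fraction by 60 gives 15, 50, 18
    }
}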
Greg: Nice solution, but won't calculating a GCD that's common to an array of 100+ numbers get a bit expensive? And how would you go about that? It's easy to do GCD for two numbers but for 100 it becomes more complex (I think).
Evil Andy: I'm programming in .Net and the solution you pose is pretty much a match for what we do now. I didn't want to include it in my original question because I was hoping for some outside-the-box (or outside my box, anyway) thinking and I didn't want to taint people's answers with a potential solution. While I don't have any solid performance statistics (because I haven't had any other method to compare it against), I know the string parsing would be relatively expensive and I figured a purely mathematical solution could potentially be more efficient.
To be fair, the current string parsing solution is in production and there have been no complaints about its performance yet (it's even in production in a separate system in a VB6 format and no complaints there either). It's just that it doesn't feel right, I guess it offends my programming sensibilities - but it may well be the best solution.
That said I'm still open to any other solutions, purely mathematical or otherwise.
What language are you programming in? Something like
myNumber.ToString().Substring(myNumber.ToString().IndexOf(".")+1).Length
would give you the number of decimal places for a double in C#. You could run each number through that and find the largest number of decimal places (x), then multiply each number by 10 to the power of x.
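Building on that one-liner, here is a C# sketch of the whole approach; using the round-trip format and the invariant culture are my additions, to avoid surprises with whole numbers and locales:

using System;
using System.Globalization;
using System.Linq;

class StringParsingScale
{
    // Count the digits after the decimal point by inspecting the text of the value.
    // Note: values small enough to print in exponent notation (e.g. 1E-08) would
    // need extra handling.
    static int DecimalPlaces(double d)
    {
        string s = d.ToString("R", CultureInfo.InvariantCulture);
        int dot = s.IndexOf('.');
        return dot < 0 ? 0 : s.Length - dot - 1; // no '.' means zero decimal places
    }

    static void Main()
    {
        double[] values = { 1.25, 0.5, 3.14159 };
        int places = values.Max(DecimalPlaces);   // 5
        double factor = Math.Pow(10, places);     // 100000
        Console.WriteLine(string.Join(", ", values.Select(v => Math.Round(v * factor))));
        // 125000, 50000, 314159
    }
}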
Edit: Out of curiosity, what is this sealed system which you can pass only integers to?
In a loop, get the mantissa and exponent of each number as integers. You can use frexp for the exponent, but I think a bit mask will be required for the mantissa. Find the minimal exponent. Find the most significant digits in the mantissa (loop through the bits looking for the last "1") - or simply use a predefined number of significant digits.
Your multiplier is then something like 2^(numberOfDigits - minExponent). "Something like" because I don't remember the biases/offsets/ranges, but I think the idea is clear enough.
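Here is one hedged C# rendering of that idea, using the raw bits of the double (there is no frexp in C#, but BitConverter gets at the same information). Note the resulting multiplier is a power of two rather than ten, and subnormals, NaN and infinity are ignored for brevity:

using System;
using System.Linq;

class BinaryExponentScale
{
    // How many fractional *binary* digits does d have?
    static int FractionalBits(double d)
    {
        if (d == 0) return 0;
        long bits = BitConverter.DoubleToInt64Bits(d);
        int exponent = (int)((bits >> 52) & 0x7FF) - 1075;      // value = mantissa * 2^exponent
        long mantissa = (bits & 0xFFFFFFFFFFFFFL) | (1L << 52); // restore the implicit leading 1
        while ((mantissa & 1) == 0 && exponent < 0)             // strip trailing zero bits
        {
            mantissa >>= 1;
            exponent++;
        }
        return exponent < 0 ? -exponent : 0;
    }

    static void Main()
    {
        double[] values = { 0.5, 0.75, 3.0 };
        int bitsNeeded = values.Max(FractionalBits);  // 2
        double multiplier = Math.Pow(2, bitsNeeded);
        Console.WriteLine(multiplier);                // 4: 0.5 -> 2, 0.75 -> 3, 3.0 -> 12
    }
}

This is exact for whatever the doubles actually hold, which also means a value like 0.1 (not exactly representable in binary) reports dozens of fractional bits; for genuinely decimal inputs the power-of-ten approaches above are more practical.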
So basically you want to determine the number of digits after the decimal point for each number.
This would be rather easier if you had the binary representation of the number. Are the numbers being converted from rationals or scientific notation earlier in your program? If so, you could skip the earlier conversion and have a much easier time. Otherwise you might want to pass each number to a function in an external DLL written in C, where you could work with the floating point representation directly. Or you could cast the numbers to decimal and do some work with Decimal.GetBits.
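For what it's worth, here is a small sketch of the Decimal.GetBits route. The scale of a decimal (its count of digits after the decimal point) lives in bits 16-23 of the fourth element returned by GetBits; the double-to-decimal conversion keeps at most about 15 significant digits, so this is approximate for noisy doubles:

using System;
using System.Linq;

class DecimalScale
{
    static int DecimalPlaces(double d)
    {
        // Element 3 of GetBits packs the sign in bit 31 and the scale in bits 16-23.
        int[] bits = decimal.GetBits((decimal)d);
        return (bits[3] >> 16) & 0xFF;
    }

    static void Main()
    {
        double[] values = { 1.25, 0.5, 3.14159 };
        int places = values.Max(DecimalPlaces);   // 5
        Console.WriteLine(Math.Pow(10, places));  // 100000
    }
}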
The fastest approach I can think of that follows your conditions would be to find the smallest necessary power of ten (or 2, or whatever) as suggested before, but instead of doing it in a loop, save some computation by doing a binary search on the possible powers. Assuming a maximum of 8, something like:
// Returns the power of ten needed to make d a whole number (assuming at most
// 8 decimal places, as per the question).
double NumDecimals( double d )
{
    // make d positive for clarity; it won't change the result
    if( d < 0 ) d = -d;
    // now do binary search on the possible numbers of post-decimal digits to
    // determine the actual number as quickly as possible:
    if( NeedsMore( d, 1e4 ) )
    {
        // more than 4 decimal places
        if( NeedsMore( d, 1e6 ) )
        {
            // > 6 decimal places
            if( NeedsMore( d, 1e7 ) ) return 1e8;
            return 1e7;
        }
        else
        {
            // 5 or 6 decimal places
            if( NeedsMore( d, 1e5 ) ) return 1e6;
            return 1e5;
        }
    }
    else
    {
        // <= 4 decimal places
        if( NeedsMore( d, 1e2 ) )
        {
            // 3 or 4 decimal places
            if( NeedsMore( d, 1e3 ) ) return 1e4;
            return 1e3;
        }
        else
        {
            // <= 2 decimal places
            if( NeedsMore( d, 1e1 ) ) return 1e2;
            if( NeedsMore( d, 1 ) ) return 1e1;
            return 1;
        }
    }
}

bool NeedsMore( double d, double e )
{
    // check whether d has more decimal places than the power of 10 in e covers,
    // i.e. whether d*e still has a fractional part.
    return ( d * e - Math.Floor( d * e ) ) > 0;
}
PS: you wouldn't be passing security prices to an option pricing engine, would you? It has exactly that flavor...