Octave random rational number generator

I have just started studying Octave and I have a question about generating rational numbers.
I checked
http://www.gnu.org/software/octave/doc/interpreter/Random-Number-Generation.html#Random-Number-Generation
to learn how to get random rational numbers.
For example, if we use rand(1, 3.1),
I would like to get a random number between 1 and 3.1 (like 2.34).
However, I am not really sure which function I have to use.
Can you give an example?
Thanks

The function unifrnd returns random numbers sampled from a uniform distribution. The first two arguments determine the lower and upper bounds. The remaining (optional) arguments determine the shape of the result. So, for example, to get random numbers between 1 and 3.1:
octave:12> unifrnd(1, 3.1)
ans = 2.4990
octave:13> unifrnd(1, 3.1)
ans = 3.0240
octave:14> unifrnd(1, 3.1, 2, 3)
ans =
   1.8929   2.9675   2.1239
   2.4756   2.6172   1.6197
(The results are regular floating point numbers. I don't understand why you are asking about rational numbers.)
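The same result can be had from rand by scaling and shifting a uniform [0, 1) draw, which is essentially what unifrnd does internally. A minimal sketch in Python (uniform_between is a made-up name for illustration):

import random

def uniform_between(lo, hi, n=1):
    # Map a uniform draw u in [0, 1) onto [lo, hi) via lo + (hi - lo) * u.
    return [lo + (hi - lo) * random.random() for _ in range(n)]

print(uniform_between(1.0, 3.1, 3))   # e.g. [2.34..., 1.87..., 3.02...]

In Octave itself the equivalent one-liner would be 1 + (3.1 - 1) * rand(1, 3).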

Related

Generating random integer and real numbers in a given range

According to the man page of getNext in Chapel's PCGRandom module, we can generate random numbers in a given range, for example:
use Random;
var rng1 = new owned RandomStream( eltType = real, seed = 100 );
var rng2 = new owned RandomStream( eltType = int, seed = 100 );
for i in 1..5 do
  writeln( rng1.getNext( min = 3.0, max = 5.0 ) );
writeln();
for i in 1..5 do
  writeln( rng2.getNext( min = 20, max = 80 ) );
which gives (with chpl-1.20.0):
4.50371
4.85573
4.2246
4.84289
3.63607
36
57
79
39
57
Here, I noticed that the man page gives the following notes for both the integer and real-number cases:
For integers, this class uses a strategy for generating a value in a particular range that has not been subject to rigorous study and may have statistical problems.
For real numbers, this class generates a random value in [min, max] by computing a random value in [0, 1] and scaling and shifting that value. Note that not all possible floating point values in the interval [min, max] can be constructed in this way.
For real numbers, is this related to the so-called "density of floating-point numbers" (as asked, e.g., on this page)? Also, for integers, is there some case we need to be careful about even in "typical" use?
(here, "typical" means, e.g., a generation of 10**8 random integers distributed approximately flat in a given range.)
FYI, my "use case" is not something like rigorous quality tests for random numbers, but just typical Monte Carlo calculations (e.g., selecting random sites on a cubic lattice).
The notes in the manual page indicate a difference from the other PCG random-number methods, which have been studied (by the author of the PCG algorithm, at the very least).
The issue with floating-point numbers is indeed related to floating-point number density. See http://www.pcg-random.org/using-pcg-c-basic.html#generating-doubles from the PCG author. It is a potential problem even when generating random numbers in [0.0, 1.0]. This paragraph from the documentation describes the issue:
When generating a real, imaginary, or complex number, this implementation uses the strategy of generating a 64-bit unsigned integer and then multiplying it by 2.0**-64 in order to convert it to a floating point number. While this does construct a uniform distribution on rounded floating point values, it leaves out many possible real values (for example, 2**-128). We believe that this strategy has reasonable statistical properties. One side effect of this strategy is that the real number 1.0 can be generated because of rounding. The real number 0.0 can be generated because PCG can produce the value 0 as a random integer.
Note that a 64-bit real can store positive numbers as small as 2.0**-1022 (down to 2.0**-1074 if subnormals are counted), but it is impossible to get such a number by dividing a positive integer by 2**64. (Here and above I am using ** as the exponentiation operator, as that is what it does in Chapel syntax.) I recommend reading up on IEEE floating-point formats (e.g. https://en.wikipedia.org/wiki/IEEE_754 or https://en.wikipedia.org/wiki/Double-precision_floating-point_format ) for background in this area. You might care about this if you were using an RNG to generate test inputs for an algorithm operating on real(64) values; in that event you might wish for even the very small values to be generated. Note, though, that constructing an RNG that can generate all real(64) values in a non-uniform manner is not hard (e.g. just copy the bits from a uint into a real).
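As a rough Python illustration of the documented strategy (this mirrors the description above, not Chapel's actual implementation):

import random

bits = random.getrandbits(64)   # a uniform 64-bit unsigned integer
x = bits * 2.0**-64             # scale into [0.0, 1.0]
print(x)
# The smallest nonzero result is 2**-64, so values such as 2**-128,
# though perfectly valid 64-bit floats, can never be produced this way.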
Regarding the other part of your question:
I did some basic statistical testing with the generation of random integers in a particular range with TestU01 and I'd be confident in its use with Monte Carlo calculations. However I am not an expert in this area and as a result I put that warning in the documentation. The below information from the documentation describes the testing that I did:
We have tested this implementation with TestU01 (available at http://simul.iro.umontreal.ca/testu01/tu01.html ). We measured our implementation with TestU01 1.2.3 and the Crush suite, which consists of 144 statistical tests. The results were:
no failures for generating uniform reals
1 failure for generating 32-bit values (which is also true for the reference version of PCG with the same configuration)
0 failures for generating 64-bit values (which we provided to TestU01 as 2 different 32-bit values, since it only accepts 32 bits at a time)
0 failures for generating bounded integers (which we provided to TestU01 by requesting values in [0, 2**31+2**30+1) until we had two values < 2**31, removing the top 0 bit, and then combining the top 16 bits of each into the value provided to TestU01)
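For comparison, the textbook way to reduce raw generator output to an unbiased bounded integer is rejection sampling. A Python sketch of that standard technique (not necessarily what the Chapel module implements):

import random

def bounded_int(n):
    # Reject draws from the biased tail so every value in [0, n) is
    # equally likely; fewer than half of all draws are ever rejected.
    limit = (2**32 // n) * n        # largest multiple of n not above 2**32
    while True:
        x = random.getrandbits(32)  # uniform in [0, 2**32)
        if x < limit:
            return x % n

print(20 + bounded_int(61))   # uniform integer in [20, 80], as in the example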

Generate random number in interval in PostScript

I am struggling to find a way to generate a random number within a given interval in PostScript.
Basically, PostScript has three operators to help you generate (pseudo-)random numbers: rand, srand and rrand.
The latter two are for seeding the generator so that specific results can be reproduced; at least that's what I understood they are for. Either way, they don't seem suitable for my case.
So rand seems to be the only operator I can use to generate a random number, but...
rand returns a random integer in the range 0 to 2^31 − 1 (from the PostScript Language Reference, page 637 (651 in the PDF))
This is far beyond the interval I'm looking for. I am more interested in values up to the small thousands, maybe 10,000 or so, and in small float values, up to 100, all with a lower limit of 0.
I thought I could narrow my numbers down by simple divisions and root extraction, but that tends to give unusably small values in quite a lot of cases. I am wondering whether there are robust ways either to shrink a large number down to what I need or, preferably, to generate numbers only in the desired interval.
Besides: while-loops are not possible in PostScript, otherwise I'd have written a procedure that generates numbers until one fits my interval.
Any hints on what to look for to break numbers down into my interval?
mod is often good enough and it's fast. But you may get a more uniform distribution by using floating-point ops.
rand 16#7fffffff div 100 mul cvi
This is because mod discards the upper bits of the input, while the PRNG tries to randomize all of the bits. By scaling down and then back up, every bit contributes something by way of rounding effects.
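To make the difference concrete, here are the two reductions mirrored in Python (assuming rand's documented range of 0 to 2^31 − 1):

import random

RAND_MAX = 2**31 - 1                  # PostScript rand's upper bound

r = random.randint(0, RAND_MAX)       # stand-in for rand
via_mod = r % 100                     # keeps only the low-order bits
via_scale = int(r / RAND_MAX * 100)   # all bits contribute; may yield 100
                                      # in the rare case r == RAND_MAX,
                                      # matching the cvi version above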
Just use the modulo operator to get it down to the size you want:
GS>rand 100 mod stack
7

Why does Round[2.75, 0.1] return 2.8000000000000003?

Mathematica 8.0.1
Could anyone explain the logic behind this result?
In[24]:= Round[10.75, .1]
Out[24]= 10.8
In[29]:= Round[2.75, .1]
Out[29]= 2.8000000000000003
I expected the second result above to be 2.8.
EDIT 1:
I was trying to do the above for formatting purposes only, to make the number fit in the available space. I ended up doing the following to get the result I want:
In[41]:= NumberForm[2.75,2]
Out[41]//NumberForm= 2.8
I wish Mathematica had a printf()-like formatting function. I find formatting numbers in Mathematica to an exact field width and form a little awkward compared to using printf() formatting rules.
EDIT 2:
I tried $MaxExtraPrecision=1000 on a number I was trying to format/round, but it did not work; that is why I posted this question. Here it is:
In[42]:= $MaxExtraPrecision=1000;
Round[2035.7520395261859,.1]
Out[43]= 2035.8000000000002
In[46]:= $MaxExtraPrecision=50;
Round[2.75,.1]
Out[47]= 2.8000000000000003
EDIT 3:
I found this way to format a number to one decimal place only: use NumberForm, but first work out what n-digit precision to use by counting the digits to the left of the decimal point and adding 1.
In[56]:= x=2035.7520395261859;
NumberForm[x, IntegerLength[Round@x] + 1]
Out[57]//NumberForm= 2035.8
EDIT 4:
The above (Edit 3) did not work for numbers such as
a = 2.67301785*10^7
After some trials, I found AccountingForm to do what I want; AccountingForm gets rid of the 10^n form, which NumberForm did not:
In[76]:= x=2035.7520395261859;
AccountingForm[x, IntegerLength[Round@x] + 1]
Out[77]//AccountingForm= 2035.8
In[78]:= x=2.67301785 10^7;
AccountingForm[x, IntegerLength[Round@x] + 1]
Out[79]//AccountingForm= 26730178.5
For formatting numerical values, the best language I have found is Fortran, followed by COBOL and by the languages that use or support printf()-style formatting rules. One can surely do such formatting in Mathematica, but it seems too complicated to me. I never understood why Mathematica does not have a Printf[].
Not all decimal (base-10) numbers with a finite number of digits are representable in binary (base 2) with a finite number of digits. E.g. 0.1 is not representable in binary, just as 1/3 ≈ 0.3333... is not representable in decimal. Mathematica (and other software) hides this effect by showing only a limited number of decimal digits; occasionally, however, enough digits are shown that the mismatch becomes visible.
http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding
EDIT
This command shows what happens when you take the closest 20-binary-digit representation of 0.1 and convert it back to decimal:
RealDigits[FromDigits[RealDigits[1/10, 2, 20], 2], 10]
The number is stored in base 2 rather than base 10 (decimal). It is impossible to represent 2.8 exactly in base 2, so the computation lands on a nearby representable value: 2.8000000000000003.
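The stored values are easy to inspect in any IEEE-754 environment; for example, in Python:

from decimal import Decimal

print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(28 * 0.1)       # 2.8000000000000003: the tiny error in 0.1, amplified 28-fold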
NumberForm and AccountingForm can take a list as the second argument; its second item specifies how many digits to show after the decimal place:
In[61]:= x=2035.7520395261859;
In[62]:= AccountingForm[x,{Infinity,3}]
Out[62]//AccountingForm= 2035.752
Perhaps this is useful.

Computationally simple pseudo-Gaussian distribution with varying mean and standard deviation?

The picture on Wikipedia's page for the Gaussian distribution gives a nice example of the sort of functions I'd ideally like to generate.
Right now I'm using the Irwin-Hall distribution, which is more or less a polynomial approximation of the Gaussian distribution: basically, you sample a uniform random number generator n times and take the average. The more samples, the closer the result is to a Gaussian distribution.
It's pretty nice; however, I'd like one where I can vary the mean. For example, say I want a number in the range 0 to 10, but centered around 7: the mean (if I repeated this function many times) would turn out to be 7, while the actual range stays 0-10.
Is there one I should look up, or should I work on doing some fancy maths with standard Gaussian distributions?
I see a contradiction in your question. On the one hand you want a normal distribution, which is symmetric by nature; on the other hand you want the range placed asymmetrically around the mean.
I suspect you should look at other distributions whose density functions are bell-shaped but asymmetric, such as the log-normal or beta distributions.
Look into generating normal random variates. You can generate pairs of standard normal variates X ~ N(0,1) and transform each into ANY normal variate Y ~ N(m,s) via Y = m + s*X.
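In code, the transform is a single line; a Python illustration with m = 7 and s = 2 as example values:

import random

x = random.gauss(0.0, 1.0)   # X ~ N(0, 1)
y = 7.0 + 2.0 * x            # Y = m + s*X ~ N(7, 2)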
Sounds like the Truncated Normal distribution is just what the doctor ordered. It is not "computationally simple" per se, but easy to implement if you have an existing implementation of a normal distribution.
You can just generate the distribution with the mean you want, standard deviation you want, and the two ends wherever you want. You'll have to do some work beforehand to compute the mean and standard deviation of the underlying (non-truncated) normal distribution to get the mean for the TN that you want, but you can use the formulae in that article. Also note that you can adjust the variance as well using this method :)
I have Java code (based on the Commons Math framework) for both an accurate (slower) and quick (less accurate) implementation of this distribution, with PDF, CDF, and sampling.
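If an exact implementation is more than you need, plain rejection sampling also yields a truncated normal in a few lines. A Python sketch (note that truncation pulls the sample mean toward the interval's center, which is why the underlying mean may need the adjustment described above):

import random

def truncated_normal(mean, sd, lo, hi):
    # Resample until the draw lands in [lo, hi]; efficient whenever the
    # interval covers most of the distribution's probability mass.
    while True:
        x = random.gauss(mean, sd)
        if lo <= x <= hi:
            return x

print(truncated_normal(7.0, 2.0, 0.0, 10.0))   # values near 7, confined to [0, 10]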

Arithmetic in Ruby

Why does 7.30 - 7.20 in Ruby return 0.0999999999999996 rather than 0.10?
But if I write 7.30 - 7.16, for example, everything is fine: I get 0.14.
What is the problem, and how can I solve it?
What Every Computer Scientist Should Know About Floating-Point Arithmetic
The problem is that some numbers we can easily write in decimal don't have an exact representation in the particular floating point format implemented by current hardware. A casual way of stating this is that all the integers do, but not all of the fractions, because we normally store the fraction with a 2**e exponent. So, you have 3 choices:
Round off appropriately. The unrounded result is always really, really close, so a rounded result is invariably "perfect". This is what JavaScript does, and lots of people don't even realize that JS does everything in floating point.
Use fixed-point arithmetic. Ruby actually makes this really easy; it's one of the only languages that seamlessly shifts from Fixnum to Bignum as numbers get bigger.
Use a class that is designed to solve this problem, like BigDecimal
To look at the problem in more detail, we can try to represent 7.3 in binary. The 7 part is easy, 111, but how do we do .3? 111.1 is 7.5, too big; 111.01 is 7.25, getting closer. It turns out that 111.010011 is the next-closest smaller number, 7.296875, and when we try to fill in the missing .003125 we eventually discover that it is just 111.010011001100110011... forever, not representable in our chosen encoding as a finite bit string.
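You can inspect the stored value directly; here in Python, whose floats are the same IEEE-754 doubles Ruby uses:

x = 7.3
print(x.hex())       # 0x1.d333333333333p+2: the repeating 0011 pattern, cut off
print(f"{x:.20f}")   # 7.29999999999999982236: slightly below 7.3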
The problem is that floating point is inaccurate. You can solve it by using Rational, BigDecimal or just plain integers (for example if you want to store money you can store the number of cents as an int instead of the number of dollars as a float).
BigDecimal can accurately store any number that has a finite number of digits in base 10 and rounds numbers that don't (so three thirds aren't one whole).
Rational can accurately store any rational number and can't store irrational numbers at all.
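The same two tools exist in most languages; Python's equivalents behave exactly as described:

from decimal import Decimal
from fractions import Fraction

print(Decimal("7.30") - Decimal("7.20"))   # 0.10: exact decimal arithmetic
print(Fraction(1, 3) * 3 == 1)             # True: rationals stay exact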
That is a common error stemming from how floating-point numbers are represented in memory.
Use BigDecimal if you need exact results.
require 'bigdecimal'
result = BigDecimal("7.3") - BigDecimal("7.2")
puts "%2.2f" % result
It is interesting to note that a number that has few decimals in one base may typically have a very large number of decimals in another. For instance, it takes an infinite number of decimals to express 1/3 (=0.3333...) in the base 10, but only one decimal in the base 3. Similarly, it takes many decimals to express the number 1/10 (=0.1) in the base 2.
Since you are doing floating-point math, the number returned reflects the precision your computer actually uses.
If you want a closer answer, to a set precision, just multiply the float by a scale factor (such as 100), convert it to an integer, do the math, then divide back.
There are other solutions, but I find this to be the simplest since rounding always seems a bit iffy to me.
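Concretely, the scale-convert-compute-divide approach looks like this (Python shown; Ruby floats behave identically):

a = round(7.30 * 100)   # 730; round() guards against a stray 729.999...
b = round(7.20 * 100)   # 720
print((a - b) / 100)    # 0.1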
This has been asked before, so you may want to look at some of the earlier answers, such as this one:
Dealing with accuracy problems in floating-point numbers
