The rand crate has a Sample trait and an IndependentSample trait.
Obviously, "Independent" is the difference between the two, but what does this mean semantically in the numbers generated? In the non-independent case, how are samples possibly dependent on one another?
The only difference between these two traits is that Sample takes a &mut self whereas IndependentSample takes a &self.
This means that an implementation of Sample can store state between calls, but an IndependentSample cannot.
The naming choice is explained in the IndependentSample documentation:
Since no state is recorded, each sample is (statistically) independent of all others, assuming the Rng used has this property.
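For reference, here is a sketch of the two trait signatures (paraphrased from the pre-0.5 rand API); the only difference is the receiver:

pub trait Sample<Support> {
    // &mut self: the distribution may read and update internal state between calls
    fn sample<R: Rng>(&mut self, rng: &mut R) -> Support;
}

pub trait IndependentSample<Support>: Sample<Support> {
    // &self: no state can be stored, so each draw depends only on the Rng
    fn ind_sample<R: Rng>(&self, rng: &mut R) -> Support;
}

A concrete case where only Sample could be implemented is a distribution that caches values between calls (for example, a Box-Muller normal sampler that stores the second value of each generated pair): consecutive outputs then depend on that stored state, so they are not independent draws.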
I have a Halide::Runtime::Buffer and would like to remove elements that match a criteria, ideally such that the operation occurs in-place and that the function can be defined in a Halide::Generator.
I have looked into using reductions, but it seems to me that I cannot output a vector of a different length -- I can only set certain elements to a value of my choice.
So far, the only way I got it to work was by using an extern "C" call and passing the Buffer I wanted to filter, along with a boolean Buffer (1's and 0's as ints). I read the Buffers into vectors of another library (Armadillo), performed the filtering I wanted, then read the filtered vector back into Halide.
This seems quite messy. Also, with this code I'm passing a Halide::Buffer object rather than a Halide::Runtime::Buffer object, so I don't know how to implement this within a Halide::Generator.
So my question is twofold:
Can this kind of filtering be achieved in pure Halide, preferably in-place?
Is there an example of using extern "C" functions within Generators?
The first part is effectively stream compaction. It can be done in Halide, though the output size will either need to be fixed or a function of the input size (e.g. the same size as the input). The maximum index produced can also be returned as an output to indicate how many results were produced. I wrote up a bit of an answer on how to do a prefix-sum-based stream compaction here: Halide: Reduction over a domain for the specific values. It is an open question how to do this most efficiently in parallel across a variety of targets, and we hope to do some work on exploring that space soon.
Whether this is in-place or not depends on whether one can put everything into a single series of update definitions for a Func. E.g. it cannot be done in-place on an input passed into a Halide filter, because reductions always allocate a buffer to work on. It may be possible if the input is produced inside the Generator.
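To make the prefix-sum approach concrete, here is a rough, untested sketch; it assumes a 1-D int32 input in of fixed extent N and a Func keep that is 1 where an element should be retained, and all names are illustrative:

Var x("x");

// Exclusive prefix sum of the predicate: cum(i) = number of kept elements before i.
Func cum("cum");
RDom r(1, N - 1);
cum(x) = 0;
cum(r) = cum(r - 1) + keep(r - 1);

// Scatter kept elements to their compacted positions; only the first
// cum(N - 1) + keep(N - 1) entries of out are meaningful.
Func out("out");
RDom r2(0, N);
r2.where(keep(r2) != 0);
out(x) = undef<int>();
out(clamp(cum(r2), 0, N - 1)) = in(r2);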
Re: the second question, are you using define_extern? This is not especially well integrated with Halide::Runtime::Buffer, as the external function must be implemented in terms of halide_buffer_t, but it is fairly straightforward to access from within a Generator. We don't have a tutorial on this yet, but there are a number of examples in the tests. E.g.:
https://github.com/halide/Halide/blob/master/test/generator/define_extern_opencl_generator.cpp#L19
and the definition:
https://github.com/halide/Halide/blob/master/test/generator/define_extern_opencl_aottest.cpp#L119
(These do not need to be extern "C", as I implemented C++ name mangling a while back. Just set the name-mangling parameter of define_extern to NameMangling::CPlusPlus and remove the extern "C" from the external function's declaration. This is very useful as it gets you link-time type checking on the external function, which catches a moderately frequent class of errors.)
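A hedged sketch of the wiring, in case it helps (the argument names, the filter logic, and the exact overload of define_extern here are illustrative; check the linked tests for authoritative usage):

// Inside the Generator:
Func filtered;
std::vector<ExternFuncArgument> args = {input, mask};
filtered.define_extern("filter_impl", args, Int(32), 1, NameMangling::CPlusPlus);

// The external function, linked into the AOT binary
// (no extern "C" needed when C++ mangling is requested):
int filter_impl(halide_buffer_t *input, halide_buffer_t *mask, halide_buffer_t *out) {
    if (out->is_bounds_query()) {
        // Report the input regions required to produce the requested output region;
        // here we simply ask for the same extent as the output.
        input->dim[0] = out->dim[0];
        mask->dim[0] = out->dim[0];
        return 0;
    }
    // Otherwise read input/mask, write the filtered data into out, and return 0.
    return 0;
}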
I'm looking to use a specific set of seeds for the intrinsic function RANDOM_NUMBER (a PRNG). What I've read so far is that the seed value can be set by calling RANDOM_SEED, specifically RANDOM_SEED(GET = array). My confusion is how (if it's possible) to set a specific value for the algorithm; for instance, in the RAND, RANDU, or RANDM algorithms one can specify one's own seed directly. I'm not sure how to set the seed, as the GET option seems to take an array. If it takes an array, does it always pull the seed value from a specific index in the array?
Basically, is there a way to set a specific single seed value? If so, would someone be able to write it out?
As a side note, I'm attempting to set my seed because, according to my professor, one of the other PRNGs I mentioned allegedly only works well with "large odd numbers", so I decided I may as well control this when comparing the PRNGs.
First, RANDOM_SEED only controls the seed of RANDOM_NUMBER(). If you use any other random number generator subroutine or function, it most probably will not affect them at all; if it does, it is in some compiler-specific way that you must look up in the manual.
Second, you should not care at all whether the seed array contains 1, 4 or 42 integers; it doesn't matter, because the PRNG uses the bits of the whole array in some unspecified, implementation-specific way. For example, you may be generating 64-bit real numbers while the seed array is made of 32-bit integers. You cannot simply say which integer in the seed array does what. You can view the whole seed array as one big integer that is cut into smaller pieces, if you like.
Regarding your professor's advice, who knows what he meant, but that seed is probably set by some procedure of that particular generator, not by the standard RANDOM_SEED, so you must read the documentation for that generator.
And how do you use a specific seed with RANDOM_SEED? It has been described on this site several times; just search for RANDOM_SEED in the search field. But it is simple: once you know the size of the array, fill it with any non-trivial values you want (you need enough non-zero bits) and use put=. That's really all; just don't think about individual values in the array, the whole array is one piece of data.
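For example, a minimal self-contained program (the seed value 123456789 is arbitrary):

program seed_demo
   implicit none
   integer :: n
   integer, allocatable :: seed(:)
   real :: x(5)

   call random_seed(size=n)    ! ask how many integers this compiler's seed needs
   allocate(seed(n))
   seed = 123456789            ! fill the whole array; no single element is special
   call random_seed(put=seed)  ! set the seed
   call random_number(x)
   print *, x                  ! the same numbers on every run (with this compiler)
end program seed_demo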
My question is: can I generate a random number in Uppaal?
I would like to generate a number from a range of values. Even more, I would like to generate not just integers but double values as well.
for example: double [7.25,18.3]
I found this question that was talking about the same thing. I tried it.
However, I got this error: syntax error unexpected T_SELECT.
It doesn't work. I'm pretty new to the Uppaal world, so I would appreciate any help you can provide.
Regards,
This is a common and misunderstood question in Uppaal.
Simple answer:
double val; // declaration
val = random(18.3-7.25)+7.25; // use in update, works in SMC (Uppaal v4.1)
Verbose answer:
Uppaal supports symbolic as well as statistical analysis, and the treatment and possibilities are radically different, so one has to decide first what kind of analysis is needed. Usually one starts with simple symbolic analysis and then augments it with stochastic features; sometimes the stochastic behavior also needs to be checked symbolically.
In symbolic analysis (queries A[], A<>, E<>, E[] etc.), random is synonymous with non-deterministic, i.e. if the model contains some "random" behavior, then verification should check all of it anyway. Therefore such behavior is modelled as non-deterministic choices between edges. It is easy to set up a set of edges over an integer range by using a select statement on an edge, where a temporary variable is declared whose value can be used in the guard, synchronization and update. Symbolic analysis supports only integer data types (no floating point types like double) and continuous ranges over clocks (specified by constraints in guards and invariants).
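For example, a select label on an edge looks like this (the range is illustrative; this is the symbolic counterpart of picking a value "at random" from 7..18):

i : int[7,18]   // select label; i can then be used in that edge's guard, synchronisation and update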
Statistical analysis (via Monte-Carlo simulations, queries like Pr[...](<> p), E[...](max: var), simulate, etc) supports double types and floating point functions like sin, cos, sqrt, random(MAX) (uniform distribution over [0, MAX)), random_normal(mean, dev) etc. in addition to int data types. Clock variables can also be treated as floating point type, except that their derivative is set to 1 by default (can be changed in the invariants which allow ODEs -- ordinary differential equations).
It is possible to create models with floating point operations (including random) and still apply symbolic analysis provided that the floating point variables do not influence/constrain the model behavior, and act merely as a cost function over the state space. Here are systematic rules to achieve this:
a) the clocks used in ODEs must be declared of hybrid clock type.
b) hybrid clock and double type variables cannot appear in guard and invariant constraints. Only ODEs are allowed over the hybrid clocks in the invariant.
Background: I'm writing a toy Lisp (Scheme) interpreter in Haskell. I'm at the point where I would like to be able to compile code using LLVM. I've spent a couple days dreaming up various ways of feeding untyped Lisp values into compiled functions that expect to know the format of the data coming at them. It occurs to me that I am not the first person to need to solve this problem.
Question: What are some historically successful ways of mapping untyped data into an efficient binary format?
Addendum: In point of fact, I do know which of about a dozen different types the data is, I just don't know which one might be sent to the function at compile time. The function itself needs a way to determine what it got.
Do you mean, "I just don't know which [type] might be sent to the function at runtime"? It's not that the data isn't typed; certainly 1 and '() have different types. Rather, the data is not statically typed, i.e., it's not known at compile time what the type of a given variable will be. This is called dynamic typing.
You're right that you're not the first person to need to solve this problem. The canonical solution is to tag each runtime value with its type. For example, if you have a dozen types, number them like so:
0 = integer
1 = cons pair
2 = vector
etc.
Once you've done this, reserve the first four bits of each word for the tag. Then, every time two objects get passed in to +, first you perform a simple bit mask to verify that both objects' first four bits are 0b0000, i.e., that they are both integers. If they are not, you jump to an error message; otherwise, you proceed with the addition, and make sure that the result is also tagged accordingly.
This technique essentially makes each runtime value a manually-tagged union, which should be familiar to you if you've used C. In fact, it's also just like a Haskell data type, except that in Haskell the taggedness is much more abstract.
I'm guessing that you're familiar with pointers if you're trying to write a Scheme compiler. To avoid limiting your usable memory space, it may be more sensible to use the bottom (least significant) four bits, rather than the top ones. Better yet, because pointers aligned to eight-byte boundaries already have three meaningless zero bits at the bottom, you can simply co-opt those bits for your tag, as long as you dereference the actual address rather than the tagged one.
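Here is a minimal C sketch of the low-bit tagging idea (the tag values and helper names are made up for illustration):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { TAG_FIXNUM = 0, TAG_PAIR = 1, TAG_VECTOR = 2, TAG_MASK = 7 };  /* 3 low bits */

typedef uintptr_t value;  /* one tagged machine word */

static value make_fixnum(intptr_t n) { return ((uintptr_t)n << 3) | TAG_FIXNUM; }
static intptr_t fixnum_val(value v)  { return (intptr_t)v >> 3; }  /* arithmetic shift on common platforms */
static int tag_of(value v)           { return (int)(v & TAG_MASK); }

static value prim_add(value a, value b) {
    if (tag_of(a) != TAG_FIXNUM || tag_of(b) != TAG_FIXNUM) {
        fprintf(stderr, "+: expected integers\n");
        exit(1);
    }
    return make_fixnum(fixnum_val(a) + fixnum_val(b));
}

int main(void) {
    value r = prim_add(make_fixnum(20), make_fixnum(22));
    printf("%ld\n", (long)fixnum_val(r));  /* prints 42 */
    return 0;
}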
Does that help?
Your default solution should be a simple tagged union. If you want to narrow your typing down to more specific types, you can do it - but it won't be that "toy" any more. A thing to look at is called abstract interpretation.
There are few successful implementations of such an optimisation, with V8 being probably the most widespread. In the Scheme world, the most aggressively optimising implementation is Stalin.
I have some parallel Fortran90 code in which each thread needs to generate the same sequence of random numbers.
I have a random number generator that seems to be thread-unsafe, since, for a given seed, I'm completely unable to repeat the same results each time I run the program.
I have searched (almost) the entire web, unsuccessfully, looking for the code of a thread-safe RNG. Could anyone provide me with (a link to) the code of one?
Thanks in advance!
A good pseudorandom number generator for Fortran 90 can be found in the Vector Statistical Library of the Intel Math Kernel Library (MKL); its generators are thread safe. Also, why does it need to be thread safe? If you want each thread to get the same list, instantiate a new PRNG for each thread with the same seed.
Most repeatable random number generators need state in some form. Without state, they can't determine what comes next. In order to be thread safe, you need a way to hold onto the state yourself (i.e., it can't be global).
When you say "needs to generate the same sequence of random numbers" do you mean that
Each thread needs to generate a stream of numbers identical to the other threads? This implies choosing the seed before peeling off threads, then instantiating a thread-local PRNG in each thread with the same seed.
or
You want to be able to repeat the same sequence of numbers between different runs of the program, but each thread generates its own independent sequence? In this case, you still can't share a single PRNG because the thread operation sequence is non-deterministic. So seed a single PRNG with a known seed before launching threads, and use it to generate the initial seeds for the threads. Then you instantiate thread-local generators in each thread...
In each of these cases you should note what Neil Butterworth says about the statistics: most of the usual guarantees that PRNGs like to claim are not reliable when you mix streams generated in this way.
In both cases you need a thread-local PRNG. I don't know what is available in f90... but you can also write your own (look up the Mersenne Twister, and write a routine that takes the saved state as a parameter...).
In Fortran 77, this would look something like:
      double precision function PRNGthread (state)
      double precision state(*), result
c     ...use and update the state vector here, computing the next value...
      PRNGthread = result
      return
      end
and each of your threads should maintain a separate state vector, though all will use the same initial value.
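For instance, here is a small self-contained OpenMP sketch of that pattern (a toy xorshift step stands in for a real generator such as the Mersenne Twister; all names are illustrative). Every thread holds its own private state, seeds it identically, and therefore produces the same sequence:

program per_thread_rng
   use omp_lib
   use iso_fortran_env, only: int64
   implicit none
   integer(int64) :: state
   double precision :: x
   integer :: i

   !$omp parallel private(state, x, i)
   state = 123456789_int64            ! same seed in every thread
   do i = 1, 5
      x = prng_next(state)            ! hence the same sequence in every thread
      print *, omp_get_thread_num(), i, x
   end do
   !$omp end parallel

contains

   function prng_next(s) result(r)
      integer(int64), intent(inout) :: s
      double precision :: r
      ! one step of a 64-bit xorshift generator (state must be nonzero)
      s = ieor(s, ishft(s, 13))
      s = ieor(s, ishft(s, -7))
      s = ieor(s, ishft(s, 17))
      ! use the top 53 bits to form a double in [0,1)
      r = dble(ishft(s, -11)) / 9007199254740992.0d0
   end function prng_next

end program per_thread_rng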
I understand you need every thread to produce the same stream of random numbers.
A very good pseudorandom generator that will produce a reproducible stream of numbers and is quite fast is the MT19937. Just make sure that you generate the seed before spawning the threads, but create a separate instance of the MT in every thread (make the instance of the MT thread-local). That way it is guaranteed that every MT will produce the same stream of numbers.
How about SPRNG? I have not tried it myself though.
I coded a thread-safe Fortran 90 version of the Mersenne Twister/MT19937. The state of the PRNG is saved in a derived type (randomNumberSequence), and you use procedures to seed the generator or get the next element in the sequence.
See http://code.google.com/p/i3rc-monte-carlo-model/source/browse/trunk/Code/RandomNumbersForMC.f95
The alternatives seem to be:
- Use a synchronisation object (such as a mutex) on the generator's seed value. This will unfortunately serialise your code on accesses to the generator.
- Use thread-local storage in the generator so each thread gets its own seed; this may cause statistical problems for your app.
- If your platform supports a suitable atomic operation, use that on the seed (it probably won't, however).
Not a very encouraging list, I know. And to add to it, I have no idea how to implement any of them in FORTRAN!
This article https://www.cmiss.org/openCMISS/wiki/RandomNumberGenerationWithOpenMP not only links to a Fortran implementation, but also mentions the key points needed to make a PRNG usable with threads. The most important point is:
The Fortran90 version of Ziggurat has several variables and arrays with the 'SAVE' attribute. In order to parallelize the uniform RNG, then, it appears that the required changes are to make these variables arrays with a separate value for each thread (beware of false sharing). Then when the PRNG function is called, we must pass the thread number, and use the corresponding state value.
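A sketch of that change in a hypothetical module (the names only loosely mimic the Ziggurat code, and the state update itself is left as a comment):

module prng_per_thread
   implicit none
   integer, parameter :: maxthreads = 64
   ! What used to be scalar SAVE variables become one slot per thread.
   ! To avoid false sharing, consider padding or one derived-type instance per thread.
   integer, save :: jsr(maxthreads)
contains
   function uni(tid) result(r)
      integer, intent(in) :: tid    ! pass omp_get_thread_num() from the caller
      double precision :: r
      ! ...update jsr(tid + 1) and form the result exactly as the serial code does...
      r = dble(jsr(tid + 1))
   end function uni
end module prng_per_thread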