Ada: Seeding Random

How can I seed Ada.Numerics.Discrete_Random with a discrete value? I see code like:
declare
   type Rand_Range is range 25 .. 75;
   package Rand_Int is new Ada.Numerics.Discrete_Random (Rand_Range);
   seed : Rand_Int.Generator;
   Num  : Rand_Range;
begin
   Rand_Int.Reset (seed);
   Num := Rand_Int.Random (seed);
   Put_Line (Rand_Range'Image (Num));
end;
which seeds the "Rand_Int" with the "seed" value, but I cannot find any instruction on actually setting the seed value. Or am I completely looking at this the wrong way? I want to set the seed value to a number (like 4 or 5) that I can control, to observe test results.
Thanks!

Pass a second Integer argument to Reset; here it's called initiator:
Rand_Int.Reset(seed, initiator);
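For example, a minimal compilable sketch reusing the declarations from the question (the procedure name Seed_Demo and the initiator value 5 are arbitrary choices of mine):
with Ada.Text_IO;
with Ada.Numerics.Discrete_Random;

procedure Seed_Demo is
   type Rand_Range is range 25 .. 75;
   package Rand_Int is new Ada.Numerics.Discrete_Random (Rand_Range);
   Gen : Rand_Int.Generator;
begin
   Rand_Int.Reset (Gen, 5);  --  fixed initiator: same sequence on every run
   Ada.Text_IO.Put_Line (Rand_Range'Image (Rand_Int.Random (Gen)));
end Seed_Demo;
Running it twice prints the same number both times, which is exactly what you want for reproducible tests.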
Ada is one of the few languages with a complete, detailed reference manual and rationale available free of charge. Use it! Additionally, here is the more recent Ada version's standard.
Another note: the variable name seed in your code is a terrible choice. A choice like state or generator would be much better.
NB: Ada is really a very nice language in many respects. People gripe about the very strong, detailed type system; then, when the system's done and it runs first try with few bugs, they mysteriously forget to attribute it to Ada. The significant downsides are library availability and maturity of IDEs.

Related

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use it if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, with the exception that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n), and it'll be computed at compile time.
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it).
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
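A short sketch of both gotchas (the function names are made up for illustration):
f(::Array{T}) where {T<:Integer} = "any integer element type"
f([1, 2, 3])   # matches, since Vector{Int} is an Array{T} with T = Int
# A method f(::Array{Integer}) would only accept a Vector{Integer}, not a Vector{Int}.

g(::Type{T}) where {T<:Integer} = T
g(Int)    # specializes on Type{Int} and returns Int
g(Bool)   # specializes on Type{Bool} and returns Bool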
Both definitions are equivalent. Normally you will use the fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use the specific type T directly in your code. For example, consider the following definitions from Base (all following definitions are also from Base) where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even if you need type information, in many cases it is enough to use the typeof function. Again, an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type, you can often avoid using a where clause (which is a bit verbose), as in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now, if you are a Julia user like me, the question is how to convince yourself that this works as expected. There are the following methods (see the sketch after this list):
you can check that one definition overwrites the other in the methods table (i.e. after evaluating both definitions only one method is present for this function);
you can check the code generated by both functions using @code_typed, @code_warntype, @code_llvm or @code_native etc., and find out that it is the same;
finally, you can benchmark the code for performance using BenchmarkTools.
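A minimal sketch of the first two checks (the function names are made up for illustration):
f1(n::Integer) = n + 1
f1(n::T) where {T<:Integer} = n + 2   # overwrites the previous definition
length(methods(f1))                   # 1 -- the two signatures are the same method
f1(3)                                 # 5, from the second definition

f2(n::Integer) = n * 2
f3(n::T) where {T<:Integer} = n * 2
@code_typed f2(3)   # compare with the next line: the generated code is identical
@code_typed f3(3)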
A nice plot explaining what Julia does with your code is here: http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user; it is excellent). You can see in it that, after lowering the AST, Julia applies a type inference step to specialize the function call before the LLVM codegen step.
You can hint to the Julia compiler to avoid specialization. This is done using the @nospecialize macro on Julia 0.7 (it is only a hint, though).
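For example, a one-line sketch (the function name is illustrative):
f_generic(@nospecialize(n::Integer)) = n + 1   # hints the compiler not to specialize on the concrete subtype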

How should OpenAI environments (gyms) use env.seed(0)?

I've created a very simple OpenAI gym (banana-gym) and wonder if / how I should implement env.seed(0).
See https://github.com/openai/gym/issues/250#issuecomment-234126816 for example.
In a recent merge, the developers of OpenAI gym changed the behavior of env.seed() so that it no longer calls the method env._seed(). Instead, the method now just issues a warning and returns. I think if you want to use this method to set the seed of your environment, you should just override it now.
The docstring of the env.seed() function (which can be found in this file) provides the following documentation on what the function should be implemented to do:
Sets the seed for this env's random number generator(s).
Note:
Some environments use multiple pseudorandom number generators.
We want to capture all such seeds used in order to ensure that
there aren't accidental correlations between multiple generators.
Returns:
list<bigint>: Returns the list of seeds used in this env's random
number generators. The first value in the list should be the
"main" seed, or the value which a reproducer should pass to
'seed'. Often, the main seed equals the provided 'seed', but
this won't be true if seed=None, for example.
Note that, unlike what the documentation and the comments in the issue you linked to seem to imply, it doesn't seem (to me) like env.seed() is supposed to be overridden by custom environments. env.seed() has a very simple implementation, where it only calls and returns the return value of env._seed(), and it seems to me like that is the function which should be overridden by custom environments.
For example, OpenAI gym's atari environments have a custom _seed() implementation which sets the seed used internally by the (C++-based) Arcade Learning Environment.
Since you have a random.random() call in your custom environment, you should probably implement _seed() to call random.seed(). In that way, users of your environments can reproduce experiments by making sure to call seed() on your environment with the same argument.
Note: messing around with the global random seed like this may be unexpected, though. It may be better to create a dedicated random object when your environment gets initialized, seed that object, and make sure to always obtain the random numbers you need in the environment from that object.
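A minimal sketch of that approach, assuming the older gym API where _seed() is the hook (the class name and state variable are made up):
import random

import gym

class BananaEnv(gym.Env):
    def __init__(self):
        # Dedicated RNG: leaves the global `random` module state alone.
        self._rng = random.Random()

    def _seed(self, seed=None):
        self._rng.seed(seed)
        return [seed]

    def reset(self):
        # Draw all of the environment's randomness from self._rng.
        self.state = self._rng.random()
        return self.state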
env.seed(seed) works like a charm. The key is to seed the env not just at the beginning, but EVERY time the reset() function is called. Since we invariably end up playing multiple games during training, this seeding call should be inside one of the loops and will be executed multiple times. Possibly this is the reason why it is deprecated now in favor of env.reset(seed=seed).
Of course, needless to say, if you are using randomness in the agent, you need to seed that as well. In that case, seeding once at the start of training would be fine. You may want to seed the NN framework as well. A typical seeding function (PyTorch) would be:
import os
import random

import numpy as np
import torch

def seed_everything(seed):
    random.seed(seed)
    np.random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    env.seed(seed)

# One call at the beginning is enough
seed_everything(SEED)
However, remember to call env.seed every time you reset the env:
curr_state = env.reset()
env.seed(SEED)
or simply use the new API: env.reset(seed=seed)
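Put together, a sketch of the training loop shape (the names in caps are made up):
for episode in range(NUM_EPISODES):
    env.seed(SEED)           # old API: re-seed at every reset
    state = env.reset()      # or, new API: state = env.reset(seed=SEED)
    # ... play the episode ...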
BUT: while this determinism may be useful in early training to debug your code, it is recommended not to use the same fixed seed (i.e. env.seed(SEED)) in your final training. This is because, by nature, the start position of the environment is supposed to be random, and your RL code is expected to function given that randomness. If you make your start position deterministic, then your model will not perform well in the live environment.

How to seed the pcg random number generator?

In the samples for PCG they only seed one way, which I assume is best/preferred practice:
pcg32 rng(pcg_extras::seed_seq_from<std::random_device>{});
or
// Seed with a real random value, if available
pcg_extras::seed_seq_from<std::random_device> seed_source;
// Make a random number engine
pcg32 rng(seed_source);
However, running this on my machine just produces the same seed every time. It is no better than if I just typed in some integer to seed with myself. What would be a good method to seed if trying it this way doesn't work?
pcg_extras::seed_seq_from is supposed to be the recommended way, but it delegates the actual seed generation to the generator specified in the template parameter.
MinGW has a broken implementation of std::random_device. So at this moment, if you want to target MinGW, you must not use std::random_device.
Some potential alternatives:
boost::random_device
randutils, by the author of PCG, M.E. O'Neill
seed11::seed_device, drop-in replacement for std::random_device (disclaimer: it's my own library)
More info about seeding in this blog post by M.E. O'Neill.
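If std::random_device cannot be trusted on your platform, one fallback is to seed from the clock. A minimal sketch (fine for simulations, not for anything security-sensitive):
#include <chrono>
#include <cstdint>
#include "pcg_random.hpp"

int main() {
    // Derive a seed from wall-clock time instead of std::random_device.
    const auto ticks =
        std::chrono::high_resolution_clock::now().time_since_epoch().count();
    pcg32 rng(static_cast<std::uint64_t>(ticks));
    std::uint32_t x = rng();  // first draw from the time-seeded engine
    (void)x;                  // suppress unused-variable warnings
    return 0;
}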

Randomize DUT parameters in SystemVerilog

I am writing a test bench in SystemVerilog for a DUT, and in the field it is possible for the parameter DEPTH to change, so I have been trying to figure out how to randomize a parameter. It is currently set at 20, but it has a range of 7 to 255. Any suggestions and help would be greatly appreciated.
I know you can't randomize it directly in the script, but I've heard of others doing it by creating a package, run alongside the test, that can insert random values as parameters.
It isn't possible to randomize parameter values, as these need to be fixed at elaboration time and randomization is a run-time task.
What I think you mean is that you can create a small SystemVerilog program that can model your parameters inside a class, randomize that and then write a package based on that.
class my_params;
  rand bit [7:0] depth;

  constraint legal_depth {
    depth inside { [7:255] };
  }

  function void write_param_pkg();
    // open a file
    // start writing to it based on the randomized values
  endfunction
endclass
You can then instantiate this class in some dummy top module and use it to dump a package:
module param_randomizer;
  initial begin
    my_params params = new();
    if (!params.randomize())
      $fatal(...);
    params.write_param_pkg();
  end
endmodule
The output of writing the package could be:
package my_params_pkg;
  parameter DEPTH = 42;
endpackage
You need to run this step before you start compiling your real testbench. The testbench will import this package and set the DUT's parameter from it:
module testbench;
  my_design #(.DEPTH(my_params_pkg::DEPTH)) dut (...);
endmodule
If you only have one parameter (as opposed to multiple which are related to each other), it might not make sense to do the randomization in SystemVerilog, as scripting should be enough.
Here is a solution I found; not sure if it's what you're looking for:
https://verificationacademy.com/forums/ovm/randomizing-module-parameters
In a nutshell:
You can create a class with fields inside representing values of parameters that are to be randomized. Then you instantiate it in a new module, which randomizes that class and outputs a new package file with random values for parameters. Finally, you compile that package with the rest of your modules.
As has been stated by the other answers, parameter values have to be resolved at elaboration time. Typically they are passed in to the simulator command line:
vsim -gDEPTH=42
I don't think there's any advantage to using the SystemVerilog constraint solver to randomise your parameters. The hassle of writing out a package from SystemVerilog to feed into a subsequent compilation suggests something is wrong. Ideally all of the SystemVerilog code should be picking up the chosen parameter from the elaborated DUT so a package isn't required*. It's probably easier to update your build scripts, for example:
vsim -gDEPTH=$(shuf -i 7-255 -n 1)
Obviously this can be made more generic as part of an overall test harness. If you need constrained randomisation for parameters (unlikely but possible) then you could always use a more powerful scripting language.
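For example, a sketch in Python (the vsim flag follows the example above; the top-level name testbench is made up):
import random
import subprocess

depth = random.randint(7, 255)  # same legal range as the SystemVerilog constraint above
subprocess.run(["vsim", f"-gDEPTH={depth}", "testbench"], check=True)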
This has the added advantage that if you have other configuration values (for example for non SystemVerilog software that is executed as part of the test) these can all be set from a single place and passed into the simulation.
* Although it seems people often do use a package to share parameters, because it can be awkward to access DUT params. Again, for generating code and writing to files, you may find that using a standard scripting language is more powerful and easier to maintain.

Defining constants for 0 and 1

I was wondering whether others find it redundant to do something like this...
const double RESET_TIME = 0.0;
timeSinceWhatever = RESET_TIME;
rather than just doing
timeSinceWhatever = 0.0;
Do you find the first example aids readability? The argument comes down to using magic numbers, and while 0 and 1 are considered "exceptions" to the rule, I've always kind of thought that these exceptions only apply to initializing variables or index accessing. When the number is meaningful, it should have a variable attached to its meaning.
I'm wondering whether this assumption is valid, or if it's just redundant to give 0 a named constant.
Well, in your particular example it doesn't make much sense to use a constant.
But, for example, if there was even a small chance that RESET_TIME will change in the future (and become, let's say, 1) then you should definitely use a constant.
You should also use a constant if your intent is not obvious from the number alone. But in your particular example I think that timeSinceWhatever = 0; is clearer than timeSinceWhatever = RESET_TIME.
Typically, one benefit of defining a constant rather than just using a literal is if the value ever needs to change in several places at once.
From your own example, what if RESET_TIME needed to be -1.5 due to some obscure new business rule? You could change it in one place, the definition of the constant, or you could change it everywhere you had last used 0.0 as a float literal.
In short, defining constants, in general, aids primarily in maintainability.
If you want to be more specific, and to let others know why you're doing what you're doing, you might instead want to create a function (if your language permits free-floating functions), such as
timeSinceWhenever = ResetStopWatch();
or, better yet, when dealing with units, either find a library with built-in unit types or create your own. I wouldn't suggest creating your own for time, as there is an abundance of such libraries. I've seen this before in code, if it helps:
Temperature groundTemp = Temperature.AbsoluteZero();
which is a nice way of indicating what is going on.
I would define it only if there was ever a chance that RESET_TIME could be something other than 0.0; that way you can make one change and update all references. Otherwise 0.0 is the better choice to my eye, just so you don't have to trace back and see what RESET_TIME was defined as.
Constants are preferable, as they let you use a value that can then be changed in successive versions of the code. It is not always possible to use constants, especially if you are programming in an OO language where you cannot define a constant of a non-basic datatype. Generally, though, a programming language always has some way to define non-modifiable objects/datatypes.
Well, suppose that RESET_TIME is used often in your code and you want to change the value; it is better to do it once, and not in every statement.
Better than a constant: make it a configuration variable and set it to a default value. But yes, RESET_TIME is more readable, provided it's used more than once; otherwise just use a code comment.
That code is OK. const variables are unchangeable variables, so whenever you want to reset something, you can always have your const do that.
