Randomize DUT parameters in SystemVerilog

I am writing a testbench in SystemVerilog for a DUT whose parameter DEPTH can change in the field, so I have been trying to figure out how to randomize a parameter. It is currently set to 20, but it has a range of 7 to 255. Any suggestions and help would be greatly appreciated.
I know you can't randomize it directly in the script, but I've heard of others doing it by creating a package, run alongside the test, that can insert random values as parameters.

It isn't possible to randomize parameter values, as these need to be fixed at elaboration time and randomization is a run-time task.
What I think you mean is that you can create a small SystemVerilog program that models your parameters inside a class, randomizes that class and then writes out a package based on the result.
class my_params;
  rand bit [7:0] depth;

  constraint legal_depth {
    depth inside { [7:255] };
  }

  function void write_param_pkg();
    // open a file
    // start writing to it based on the randomized values
  endfunction
endclass
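A minimal sketch of what the file-writing function could look like (the output file name and formatting here are assumptions):

function void write_param_pkg();
  int fd;
  fd = $fopen("my_params_pkg.sv", "w");              // open the package file
  $fdisplay(fd, "package my_params_pkg;");
  $fdisplay(fd, "  parameter DEPTH = %0d;", depth);  // dump the randomized value
  $fdisplay(fd, "endpackage");
  $fclose(fd);
endfunction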
You can then instantiate this class in some dummy top module and use it to dump a package:
module param_randomizer;
  initial begin
    my_params params = new();
    if (!params.randomize())
      $fatal(...);
    params.write_param_pkg();
  end
endmodule
The output of writing the package could be:
package my_params_pkg;
  parameter DEPTH = 42;
endpackage
You need to run this step before you start compiling your real testbench. The testbench will import this package and set the DUT's parameter from it:
module testbench;
  my_design #(.DEPTH(my_params_pkg::DEPTH)) dut (...);
endmodule
If you only have one parameter (as opposed to multiple which are related to each other), it might not make sense to do the randomization in SystemVerilog, as scripting should be enough.

Here is a solution I found; I'm not sure if it's what you're looking for:
https://verificationacademy.com/forums/ovm/randomizing-module-parameters
In a nutshell:
You can create a class with fields inside representing values of parameters that are to be randomized. Then you instantiate it in a new module, which randomizes that class and outputs a new package file with random values for parameters. Finally, you compile that package with the rest of your modules.

As has been stated by the other answers, parameter values have to be resolved at elaboration time. Typically they are passed in to the simulator command line:
vsim -gDEPTH=42
I don't think there's any advantage to using the SystemVerilog constraint solver to randomise your parameters. The hassle of writing out a package from SystemVerilog to feed into a subsequent compilation suggests something is wrong. Ideally all of the SystemVerilog code should be picking up the chosen parameter from the elaborated DUT so a package isn't required*. It's probably easier to update your build scripts, for example:
vsim -gDEPTH=$(shuf -i 7-255 -n 1)
Obviously this can be made more generic as part of an overall test harness. If you need constrained randomisation for parameters (unlikely but possible) then you could always use a more powerful scripting language.
This has the added advantage that if you have other configuration values (for example for non SystemVerilog software that is executed as part of the test) these can all be set from a single place and passed into the simulation.
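For example, a minimal Python sketch of constrained parameter randomisation driving the simulator (the vsim invocation and the testbench top-level name are illustrative assumptions):

import random
import subprocess

# pick a legal DEPTH; any constraints relating several parameters would go here
depth = random.randint(7, 255)
print(f"running simulation with DEPTH={depth}")
subprocess.run(["vsim", "-c", f"-gDEPTH={depth}", "testbench"], check=True)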
* although it seems people often do use a package to share parameters, because it can be awkward to access DUT params. Again, for generating code and writing to files, you may find that a standard scripting language is more powerful and easier to maintain.

Related

OMNeT++ - Accessing parameters of a different module in the initialization (.ini) file and using a for loop

I need to generate Poisson arrival of traffic and thus need to set the start times of applications in clients accordingly. For this I need two things:
1. access parameters of different modules and use them as input for defining a parameter of another module
2. use a for loop to define parameters of modules
For example, the snippet below demonstrates what I am trying to do.
I have 100 clients and each client has 20 applications. I want to set the start time of the first application of the first client and want to write the rest using a loop.
// iat = interArrivalTime
**.cli[0].app[0].startTime = 1 // define this
**.cli[0].app[1].startTime = <**.cli[0].app[0].startTime> + exponential(<iat>)
**.cli[0].app[2].startTime = <**.cli[0].app[1].startTime> + exponential(<iat>)
...
**.cli[n].app[m].startTime = <**.cli[n].app[m-1].startTime> + exponential(<iat>)
I looked at the 'ned' functions but could not find any solution.
Of course I can write a script that hardcodes the start times of several clients, but the script would output a huge file which is very hard to manage if the number of clients and applications is large.
Thank You!
INI files are basically pattern matchers. Each time a module is initialized, the left side of the = sign on each line in the INI file is matched against the actual module path, beginning from the start of the INI file. On the first match, the right side of that line is used as the value of the parameter.
In short, these are not assignment operations, but rather rules telling each module how to initialize its own parameters. For example, it is undefined in what order these lines will be used during initialization. Something that appears earlier in the INI file is not necessarily used earlier during module initialization. This of course prevents you from referring to another module's parameter; in fact, you may not refer to any other parameters at all.
INI files are declarative, not procedural constructs, so cross-references, loops and other procedural constructs cannot be used there.
If you want to create dependencies between module parameters, you can code that in the initialize() method of your module, by explicitly initializing a parameter from the C++ code. You can access any other module's parameter using C++ APIs.
Of course, if you don't want to modify existing applications this is not an optimal solution; however, you can create a separate module that is responsible for your 'procedural' initialization, and that module can run through all of your applications and set the required parameters as needed. This approach is used in several places in INET where initialization data must be computed; one notable example is the calculation of routing table information, e.g. Ipv4FlatNetworkConfigurator.
Another approach would be to set up and configure your simulation from a scripting language like Python. This is not (yet) supported by OMNeT++, however.
Long story short, write a configurator module and do your initialization there.
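A hedged C++ sketch of such a configurator's initialize() (the StartTimeConfigurator module and its parameters are made-up names; the cli[n].app[m] paths follow the question):

#include <omnetpp.h>

using namespace omnetpp;

// Hypothetical module: walks cli[n].app[m] and assigns Poisson arrival start times.
// Depending on when the applications read startTime, this may need to run in an
// early initialization stage.
class StartTimeConfigurator : public cSimpleModule
{
  protected:
    virtual void initialize() override
    {
        double iat = par("iat").doubleValue();        // mean inter-arrival time
        int numClients = (int)par("numClients").intValue();
        int numApps = (int)par("numApps").intValue();
        for (int n = 0; n < numClients; n++) {
            double t = 1.0;                           // start time of app[0], as in the question
            for (int m = 0; m < numApps; m++) {
                char path[64];
                snprintf(path, sizeof(path), "^.cli[%d].app[%d]", n, m);
                getModuleByPath(path)->par("startTime").setDoubleValue(t);
                t += exponential(iat);                // exponential gaps => Poisson arrivals
            }
        }
    }
};

Define_Module(StartTimeConfigurator);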

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use it if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, with the exception that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n) and it'll be computed at compile time.
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it).
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
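A short illustration of both gotchas (g1, g2 and h are hypothetical functions):

# Parametric invariance: Array{Int} is NOT a subtype of Array{Integer}
g1(x::Array{Integer}) = "matches only Array{Integer} itself"
g2(x::Array{T}) where T<:Integer = "matches any Array with a concrete Integer element type"

g2([1, 2, 3])          # works: T == Int
g1(Integer[1, 2, 3])   # works: the element type is exactly Integer
# g1([1, 2, 3])        # MethodError: Vector{Int} is not a Vector{Integer}

# Type{T}: specialize on the exact type passed as an argument
h(::Type{T}) where T<:Integer = T
h(Int)                 # returns Int, one specialization per type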
Both definitions are equivalent. Normally you will use the fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use the specific type T directly in your code. For example, consider the following definition from Base (all following definitions are also from Base), where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even if you need type information, in many cases it is enough to use the typeof function. Again, an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type, you can often avoid the where clause (which is a bit verbose), as in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now, if you are a Julia user like me, the question is how to convince yourself that this works as expected. There are the following methods:
you can check that one definition overwrites the other in the methods table (i.e. after evaluating both definitions only one method is present for this function);
you can check the code generated by both functions using @code_typed, @code_warntype, @code_llvm or @code_native etc. and find out that it is the same;
finally, you can benchmark the code for performance using BenchmarkTools.
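For example, a quick REPL check of the first two points (f1 is a hypothetical function):

julia> f1(n::Integer) = n + 1;

julia> f1(n::T) where T<:Integer = n + 1;

julia> length(methods(f1))   # the second definition replaced the first
1

julia> @code_llvm f1(1)      # inspect the generated code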
A nice plot explaining what Julia does with your code is here: http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user - it is excellent). You can see in it that Julia, after lowering the AST, applies a type inference step to specialize the function call before the LLVM codegen step.
You can hint the Julia compiler to avoid specialization. This is done using the @nospecialize macro on Julia 0.7 (it is only a hint though).

What is the best way to manage a large quantity of constants

I am currently working on a very complex program that processes rows from an input table and has a huge number of possible outcomes for each record. Because of this I have a very large number of constants defined for the outcome messages. There is one success message for the record, but a multitude of possible warnings and errors.
My first thought was to define all of my constants for these messages at the package body level, but then I decided to move each constant to the procedure where it is used. I'm now second-guessing that decision and thinking of moving everything back to package body level. What is the best way to define this many constants? Ease of maintainability is my ultimate goal for this program, since it is so complex.
I think this is a matter of taste. In my application I put all error codes into an Error-Package. All main and commonly used constants I put into a separate package (without a package body).
Again, a matter of taste, but I tend to put a list of named constants at the package spec level rather than the package body so that they can be referenced by any portion of the application. If I ever want to change the error code that c_err_for_specific_reason_x uses, it becomes a single place to do so.
If I wanted to hide the codes and put them within the body I would have a get_error_code(p_get_error_name varchar) function that did the translation based on you passing a valid constant name.
I've done both on different projects, but tend towards the list over the function most times. I tend to use the function if it a table-driven source of the data.
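A hedged PL/SQL sketch of the two layouts (package, constant and error names are made up):

-- constants exposed in the spec, referenceable from anywhere:
create or replace package error_codes is
  c_success                   constant number := 0;
  c_err_for_specific_reason_x constant number := -20001;
end error_codes;
/

-- or hidden in the body behind a lookup function:
create or replace package error_lookup is
  function get_error_code(p_error_name varchar2) return number;
end error_lookup;
/
create or replace package body error_lookup is
  c_err_for_specific_reason_x constant number := -20001;

  function get_error_code(p_error_name varchar2) return number is
  begin
    if p_error_name = 'ERR_FOR_SPECIFIC_REASON_X' then
      return c_err_for_specific_reason_x;
    end if;
    raise_application_error(-20000, 'unknown error name: ' || p_error_name);
  end get_error_code;
end error_lookup;
/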
It ... wait for it ... depends!
Since you currently define your constants in the package body, you don't need them to be publicly accessible outside the package. So defining them in a spec really doesn't buy you anything.
Here's is the rule I follow: Define constants within the smallest scope needed. So if a constant is used only within one procedure, define it in that procedure. If it is used within more than one procedure, define it in the body. If it is used elsewhere by code in other packages (or non-packaged SPs) but only when using a particular package, define it in the spec of that package. If it is used by other code for general use, put it in a separate spec of such general constants.

Generating Single Port ROM on Spartan 6 using Xilinx ISE Design Suite

I'm having some trouble designing a single port ROM onto a Spartan 6 board. I use the provided core generator to create block memory and choose single port ROM with 32-bit width and 256 depth, with a COE file that just counts from 0 to 255. I drop the ROM into my VHDL as a component and add XilinxCoreLib as a library. When I try to generate the programming file I get the translate error:
logical block 'rom1' with type 'rom' could not be
resolved. A pin name misspelling can cause this, a missing edif or ngc file,
case mismatch between the block name and the edif or ngc file name, or the
misspelling of a type name. Symbol 'rom' is not supported in target
'spartan6'.
I'm currently using Xilinx ISE 13.1 if that helps. I feel like this should be really easy to do but I haven't been able to find how to do it.
Edit: Thanks everyone, was a combination of things. Wrong speed grade, and didn't add a copy of the ngc file to my working directory. I'll use arrays in the future.
Easiest way is to forget the vendor tools altogether and simply declare a constant array!
If this is in a package separate from the rest of the design, a few lines of printfs or a simple script can generate the VHDL boilerplate around the contents, which come from your assembler or whatever tool creates the actual data.
Since you're adding a Xilinx generated core to your design in ISE, you need to add both the VHD file and the NGC file via "Add Source" in the Project menu.
Even easier, depending on how large your ROM needs to be and what data goes into it, would be to not even bother with a Xilinx core, but to use pure VHDL to declare a constant array and initialization values right in your VHDL file. Here is an example:
type array_ROM is array (0 to NUMBER_OF_ROWS-1) of std_logic_vector(ROM_BITWIDTH-1 downto 0);

signal my_ROM : array_ROM := (
  x"12345678",
  x"ABCDEF01",
  ...
  x"01010101"
);
Now, you don't put the ellipsis (...) in that initialization list; just put rows of constants with bit widths matching ROM_BITWIDTH. NUMBER_OF_ROWS is the number of address locations you need in the ROM. In this example, ROM_BITWIDTH would have to be set to 32, as I've used 32-bit hexadecimal constants in the initialization list. Being a signal, it's actually modifiable, so if you need it to be constant, just use "constant" instead of "signal".
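For reference, a complete sketch of this style with a synchronous read port (the entity name, sizes and fill values are assumptions; the registered read lets synthesis infer block RAM):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity simple_rom is
  port (
    clk  : in  std_logic;
    addr : in  std_logic_vector(7 downto 0);    -- 256 locations
    data : out std_logic_vector(31 downto 0));  -- 32 bits wide
end entity simple_rom;

architecture rtl of simple_rom is
  type array_ROM is array (0 to 255) of std_logic_vector(31 downto 0);
  constant my_ROM : array_ROM := (
    0      => x"00000000",
    1      => x"00000001",
    others => x"01010101");                     -- fill pattern, assumed
begin
  process (clk)
  begin
    if rising_edge(clk) then                    -- synchronous read
      data <= my_ROM(to_integer(unsigned(addr)));
    end if;
  end process;
end architecture rtl;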
I guess the problem is, as the message says, a misspelling. To get the correct component declaration/instantiation, select your rom.xco in the Design window of ISE, then select "View VHDL Instantiation Template" from the Process window and use the component declaration and instantiation described therein.
There are a number of things that can cause this problem: one is that you are using a block that was generated for another FPGA family inside the Spartan 6; another is that you may have generated the ROM using an older version of the tool and the wrapper for the ROM has changed since then.
You can either declare an array as Brian suggested, forgetting about the tool-specific ROM type, or re-generate the IP under your current project settings and see how it goes.

Software engineering with Ada: stubs; separate and compilation units [closed]

I come from a mechanical engineering background, but I'm interested in learning good software engineering practice with Ada. I have a few queries.
Q1. If I understand correctly, someone can just write a package specification (.ads) file, compile it, and then compile the main program which uses the package. Later on, when one knows what to include in the package body, the latter can be written and compiled, and the main program can then be run. I've tried this, and I would like to confirm that this is good practice.
Q2. My second question is about stubs (sub-units) and the use of SEPARATE. Say I have a main program as follows:
WITH Ada.Float_Text_IO;
WITH Ada.Text_IO;
WITH Ada.Integer_Text_IO;

PROCEDURE TEST2 IS
   A, B : FLOAT;
   N    : INTEGER;
   PROCEDURE INPUT(A, B : OUT FLOAT; N : OUT INTEGER) IS SEPARATE;
BEGIN -- main program
   INPUT(A, B, N);
   Ada.Float_Text_IO.Put(Item => A);
   Ada.Text_IO.New_Line;
   Ada.Integer_Text_IO.Put(Item => N);
END TEST2;
Then I have the procedure INPUT in a separate file:
separate(TEST2)
PROCEDURE INPUT(A, B : OUT FLOAT; N : OUT INTEGER) IS
BEGIN
   Ada.Float_Text_IO.Get(Item => A);
   Ada.Text_IO.New_Line;
   Ada.Float_Text_IO.Get(Item => B);
   Ada.Text_IO.New_Line;
   Ada.Integer_Text_IO.Get(Item => N);
END INPUT;
My questions:
a) AdaGIDE suggests saving the INPUT procedure file as input.adb. But then, on compiling the main program test2, I get the warning:
warning: subunit "TEST2.INPUT" in file "test2-input.adb" not found
cannot generate code for file test2.adb (missing subunits)
To AdaGIDE, this is more of an error as the above warnings come before the message:
Compiling...
Done--error detected
So I renamed the input.adb file to test2-input.adb, as AdaGIDE suggested on compiling. Now, on compiling the main file, I don't get any warnings. My question now is whether it's OK to write
PROCEDURE INPUT(A, B : OUT FLOAT; N : OUT INTEGER) IS
as I did in the subunit file test2-input.adb, or whether it is better to write something more descriptive like
PROCEDURE TEST2-INPUT(A, B : OUT FLOAT; N : OUT INTEGER) IS
to emphasize that procedure INPUT has a parent procedure TEST2. This thought follows from AdaGIDE hinting at test2-input.adb, as I mentioned above.
b) My next question:
If I understand the compilation order correctly, I should compile the main file test2.adb first and then the stub test2-input.adb. On compiling the stub I get the error message:
cannot generate code for file test2-input.adb (subunit)
Done--error detected
However, I can now do the binding and linking for test2.adb and run the program.
I would like to know if I did wrong by trying to compile the stub test2-input.adb, or should it not be compiled at all?
Q3. What is the use of having subunits? Is it just to break a large program into smaller parts? I know an error arises if one doesn't put any statements between BEGIN and END in the subunit, so one always has to put a statement there. If one wants to write the statements later, one can always put a NULL statement between BEGIN and END in the subunit and come back to it later. Is this how software engineering is done in practice?
Thanks a lot...
Q1: That is excellent practice.
And by treating the package specification as a specification, you can provide it to other developers so that they will know how to interface to your code.
Q2: I believe that AdaGIDE actually uses the GNAT compiler for all compilation, so it's actually GNAT that is in charge of the acceptable filenames. (This can be configured, but unless you have a very compelling reason to do so, it is far simpler to simply go with GNAT/AdaGIDE's file naming conventions.) More pertinent to your question, though, there's no strong reason to include the parent unit as part of the separate unit's name. But see the answer to Q3...
Q3: Subunits were introduced with the first version of Ada (Ada 83), in part to help modularize code and allow for deferred development and compilation. However, Ada software development practice has pretty much abandoned the use of subunits; the procedure/function/task/etc. bodies are simply maintained in the body of the package. They are still used in some areas, such as when a platform-specific version of a subprogram may be needed, but for the most part they're rarely used. It leaves fewer files to keep track of, and keeps the implementation code of a package all together. So I strongly recommend you simply ignore the subunit capabilities and place all your implementation code in package bodies.
It's pretty normal to split a problem up into component parts (packages), each supporting a different aspect. If you've learnt Ada, it'd be normal to write the specs of the packages first, argue (perhaps with yourself) why that's the right design, and then implement them. And this would be normal, I think, in any language that supports specs and bodies - for example, C.
Personally I would do check compilations as I went, just to make sure I'm not doing anything stupid.
As for separates - one (not very good) reason is to reduce clutter, to stop the unit getting too long. Another reason (for a code generator I wrote) was so that the code generator didn't need to worry about preserving developers' hand-written code in the UML model; all code bodies were separates. A third might be for environment-dependent implementation (eg, Windows vs Unix), where you'd let the compiler see a different version of the separate body for each environment (people normally use library packages for this, though).
Compilers have their own rules about file names, and about what order things can be compiled in. When GNAT sees
procedure Foo is
   procedure Bar is separate;
it expects to find Foo's body in a file named foo.adb and Bar's body in foo-bar.adb (you can, I believe, tell it otherwise via gnatmake's package Naming, but it's probably not worth the trouble). It's best to go with the flow here;
separate (Foo)
procedure Bar is
is clear enough.
You can compile foo-bar.adb, and that will do a full analysis and catch almost all errors in the code; but GNAT can't generate code for this on its own. Instead, when you compile foo.adb it includes all the separate bodies in the one generated object file. It certainly isn't wrong to do this.
With GNAT, there's no need to worry about compilation order, you can compile in any order you like. But it's best to use gnatmake and let the computer take the strain!
You can indeed work the way you describe, except of course your program won't link until all of the package bodies have some kind of implementation. For that reason, I think it is more normal to write a dummy package body with all procedures implemented as:
begin
null;
end;
And all functions implemented as something like:
begin
return The_Return_Type'First;
end;
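For instance, a sketch of a spec written first together with a compilable dummy body (Stacks and its operations are made-up names):

-- stacks.ads
package Stacks is
   procedure Push (X : Integer);
   function Pop return Integer;
end Stacks;

-- stacks.adb
package body Stacks is
   procedure Push (X : Integer) is
   begin
      null;                  -- real implementation deferred
   end Push;

   function Pop return Integer is
   begin
      return Integer'First;  -- placeholder result
   end Pop;
end Stacks;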
As for separates...I don't like them. For me I'd much rather be able to follow the rule that all the code for a package is in its package body. Separates are marginally acceptable if for some reason the routine is huge, but in that case a better solution is almost always to refactor your code. So any time I see one, it is a big red flag.
As for the file name thing, this is a GNAT issue, not an Ada issue. GNAT took the unusual position for a compiler that the name of the contents of a file dictates what the file itself must be named. There are probably other compilers in the world that do that, but I've yet to find one in 30 years of coding.
