Developing multi-use VHDL modules

I've recently started learning VHDL again after not having touched it for years. The hope is to use it to develop a control system with various sensor interfaces, etc. I'm fairly competent in embedded C, which has been my go-to language for any embedded project I've had up until this point, and that makes learning VHDL all the more frustrating.
Basically, my issue right now, and what I see as the biggest barrier to progressing with my intended project, is that I don't know how to develop and incorporate a module that I can just pass variables to and call (like a function in C) to carry out some task, e.g. displaying an integer 0-9999 on a 4-digit 7-segment display.
I know there are components in VHDL, but that seems like such a long-winded way of performing one task. Is there a better way of doing this?
It seems to me like after you've done all the digital tutorials, there's a huge gap in the info on how to actually develop a full system in VHDL.

EDIT: further explanation re: third comment at the bottom. Apologies for the length of this!
VHDL has functions and procedures too, just like C. (OK, C calls its procedures "void functions"!) And, just like C, you can call them from sequential code using the same kinds of sequential code constructs (variables, loops, if statements, case-statements which have some similarities to C's switch), and so on.
So far, all this is synthesisable; some other VHDL features are for simulation only (to let you test the synthesisable code). So in simulation you can - like C - open, read and write files, and use pointers (access types) to handle memory. I hope you can see why these parts of VHDL aren't synthesisable!
But what is different in VHDL is that you need a few extra lines to wrap this sequential code in a process. In C, that happens for you; a typical C program is just a single process (you have to resort to libraries or OS functionality like fork or pthreads if you want multiple processes).
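For example, a minimal sketch of sequential code wrapped in a clocked process (clk and digit here are just illustrative signals assumed to be declared in the enclosing architecture, with the usual ieee.std_logic_1164 context):

-- ordinary sequential code (a variable and an if-statement) inside a process
process (clk)
  variable count : integer range 0 to 9 := 0;
begin
  if rising_edge(clk) then
    if count = 9 then
      count := 0;
    else
      count := count + 1;
    end if;
    digit <= count;  -- drive a signal with the variable's new value
  end if;
end process;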
But VHDL can do so much more. You can very easily create multiple processes, interconnect them with signals, wrap them as re-usable components, use "for ... generate" to replicate them, and so on. Again, all synthesisable: this imposes some restrictions, for example the size of the hardware (the number of processes) cannot be changed while the system is running!
KEY: Understand the rules for signal assignment as opposed to variable assignment. Variables work very much as in C; signals do not! What they do instead is provide safe, synchronised, inter-process communication with no fuss. To see how, you need to understand "postponed assignment", delta cycles, wait statements, and how a process suspends itself and wakes up again.
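To make that concrete, here is a small illustration (signal names are mine, not from your design): inside a clocked process the variable update is immediate, while the signal assignments only take effect once the process suspends.

-- assumes signals a, b : integer and clk : std_logic in the enclosing architecture
process (clk)
  variable v : integer := 0;
begin
  if rising_edge(clk) then
    v := v + 1;  -- variable: updated immediately
    a <= v;      -- uses the already-incremented value of v
    b <= a;      -- signal: still reads the OLD value of a; the new value of a
                 -- only becomes visible after this process suspends
  end if;
end process;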
You seem to be asking two questions here:
(1) Can I use functions as in C? Very much so; you can do better than C by wrapping useful types and their related functions and procedures in a package and re-using the package in multiple designs. It's a little like a reusable C++ class, with some stronger points and some weaker ones.
(2) Can I avoid entities, architectures and components altogether in VHDL? You can avoid components (search for "VHDL direct entity instantiation"), but at some point you will need entities and architectures.
The least you can get away with is to write a single process that does your job, receiving inputs (clk, count) on signals and transmitting to the LEDs on other signals.
Create an entity with all those signals as ports, and an architecture containing your process, connecting its signals up to the ports. This is easy - it's just boilerplate stuff. On an FPGA, you also need to define the mapping between these ports and the actual pins your LEDs are wired to. Synthesise it and you're done, right? ... not quite.
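Roughly like this, as a skeleton only (port names and widths are illustrative - yours will differ):

library ieee;
use ieee.std_logic_1164.all;

entity seg7_driver is
  port (
    clk   : in  std_logic;
    count : in  integer range 0 to 9999;
    segs  : out std_logic_vector(6 downto 0);  -- segment outputs
    anode : out std_logic_vector(3 downto 0)   -- digit-select outputs
  );
end entity seg7_driver;

architecture rtl of seg7_driver is
begin
  main : process (clk)
  begin
    if rising_edge(clk) then
      -- multiplex the four digits and decode each one onto segs here
      null;
    end if;
  end process main;
end architecture rtl;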
Create another "testbench" entity with no external ports. This will contain your entity (directly instantiated), a bunch of signals connecting to its ports, and a new process which drives your entity's input ports and watches its output ports. (Best practice is to make the testbench self-checking, and assert when something bad comes out!) Usually "clk" comes from its own one-liner process, and clocks both testbench and entity.
Now you can simulate the testbench and watch your design working (or not!) at any level of detail you want. When it works - synthesise.
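A rough sketch of such a testbench, reusing the illustrative seg7_driver from the skeleton above (the checking here is only a placeholder):

library ieee;
use ieee.std_logic_1164.all;

entity tb_seg7_driver is
end entity tb_seg7_driver;

architecture sim of tb_seg7_driver is
  signal clk   : std_logic := '0';
  signal count : integer range 0 to 9999 := 0;
  signal segs  : std_logic_vector(6 downto 0);
  signal anode : std_logic_vector(3 downto 0);
begin
  clk <= not clk after 5 ns;  -- one-liner clock generator

  -- the design under test, directly instantiated (no component declaration)
  dut : entity work.seg7_driver
    port map (clk => clk, count => count, segs => segs, anode => anode);

  stimulus : process
  begin
    wait until rising_edge(clk);
    count <= 1234;
    wait for 1 ms;
    -- a real testbench would assert on the expected segment patterns here
    assert false report "end of test" severity failure;  -- stops the simulation
    wait;
  end process stimulus;
end architecture sim;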
EDIT for more information: re: components, procedures, functions.
Entities/components are the main tool (you can ignore components if you wish, I'll deal with entities later).
Procedures and functions usually work together in a single process. If they are refactored into a package, they can be re-used in other like-minded processes (e.g. operating on the same datatypes). A common abstraction is a datatype, plus all the functions and procedures operating on it, wrapped in a package - this somewhat resembles a C++ class. Functions also have uses in any declarative area, as initialisers (aka the "factory" pattern in software terms).
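For example, a small sketch of such a package (the type name and segment encodings are purely illustrative):

library ieee;
use ieee.std_logic_1164.all;

package seg7_pkg is
  subtype seg7_t is std_logic_vector(6 downto 0);
  function to_seg7 (digit : integer range 0 to 9) return seg7_t;
end package seg7_pkg;

package body seg7_pkg is
  function to_seg7 (digit : integer range 0 to 9) return seg7_t is
  begin
    case digit is
      when 0      => return "0111111";
      when 1      => return "0000110";
      -- ... remaining digits elided ...
      when others => return "1111001";  -- fallback pattern
    end case;
  end function to_seg7;
end package body seg7_pkg;

Any architecture that adds "use work.seg7_pkg.all;" can then call to_seg7 wherever it needs a digit decoded.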
But the main tool is the entity.
This is a level that is probably unfamiliar to an embedded C programmer, as C basically stops at the level of the process.
If you have written a physical block like an SPI master, as a process, you will wrap that process up in an entity. This will communicate with the rest of the world via ports (which, inside the entity, behave like signals). It can be parameterised via generics (e.g. for memory size, if it is a memory). The entity can wrap several processes, other entities, and other logic that didn't fit neatly into the process (e.g. unclocked logic, where the process was clocked)
To create a system, you will interconnect entities, in "structural HDL code" (useful search term!) - perhaps a whole hierarchy of them - in a top level entity. Which you will typically synthesise into an FPGA.
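Structural code might look roughly like this (the spi_master entity, its count_out port, and the clk/segs/anode ports of top_level are invented for the example and assumed to exist in your design):

architecture structural of top_level is
  signal sample_count : integer range 0 to 9999;
begin
  -- direct entity instantiation: no component declarations needed
  u_acquire : entity work.spi_master
    port map (clk => clk, count_out => sample_count);

  u_display : entity work.seg7_driver
    port map (clk => clk, count => sample_count,
              segs => segs, anode => anode);
end architecture structural;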
To test the system via simulation, you will embed that top-level entity (= the FPGA) in another entity - the testbench - which has no external ports. Instead, the FPGA's ports connect to signals inside the testbench. Some of those signals connect to other entities - perhaps models of memories or SPI slave peripherals, so you can simulate SPI transactions - and some are driven by a process in the testbench, which feeds your FPGA stimuli and checks its responses, detecting and reporting errors.
Best practice involves a testbench for each entity you create - unit testing, in other words. An SPI master might connect to somebody else's SPI slave and a test process, to start SPI transactions and check for the correct responses. This way, you localise and correct problems at the entity level, instead of attempting to diagnose them from the top level testbench.
A basic example is shown here:
Note that he shows port mapping by both positional association and (later) named association - you can also use both forms for function arguments, as in Ada, but not in C, which only allows positional association.
What "vhdlguru" doesn't say is that named association is MUCH to be preferred, as positional association is a rich source of confusion and bugs.
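For example (with the illustrative port names from the earlier sketch), the same instantiation written both ways:

-- positional association: arguments must follow the port declaration order
u1 : entity work.seg7_driver port map (clk, count, segs, anode);

-- named association: self-documenting and independent of declaration order
u2 : entity work.seg7_driver
  port map (clk => clk, count => count, segs => segs, anode => anode);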
Is this starting to help?

Basically, there are two possibilities for handing information to an entity:
At runtime
Entities communicate with each other using signals that are defined inside a port statement. For better readability I suggest using std_logic_vector, numeric_std types or, even better, record types where appropriate. See the link below.
At synthesis time
If you want to set a parameter of an entity to a fixed value at synthesis time (for instance, the size of a FIFO), you might want to use a generic. See also the link below.
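A small sketch combining the two (names and widths are illustrative): the generic fixes the FIFO depth at elaboration/synthesis time, while the ports carry the run-time data.

library ieee;
use ieee.std_logic_1164.all;

entity simple_fifo is
  generic (
    DEPTH : positive := 16            -- fixed at synthesis/elaboration time
  );
  port (                              -- run-time communication
    clk      : in  std_logic;
    wr_en    : in  std_logic;
    data_in  : in  std_logic_vector(7 downto 0);
    data_out : out std_logic_vector(7 downto 0)
  );
end entity simple_fifo;

-- at the instantiation site:
--   u_fifo : entity work.simple_fifo
--     generic map (DEPTH => 32)
--     port map (clk => clk, wr_en => wr_en, data_in => d_in, data_out => d_out);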
I can also recommend reading this paper. It helped me a lot when coping with systems that exceed a certain complexity:
A structured VHDL design method

Some simple generalizations for entity/component vs subprogram (function / procedure).
Use an entity when a reusable block of code contains a flip-flop (register).
Use a function to do a calculation. For RTL code, I use functions for small, reusable pieces of combinational logic.
I primarily use procedures for testbenches. I use procedures to apply a single transaction (waveform or interface action) to the design I am testing. For testbenches, I further use procedures to encapsulate frequently used sequences of transactions.
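For instance, a rough sketch of such a transaction procedure (the signal names and the valid/data handshake are illustrative); declared in a package or in the testbench's declarative region, it applies one write transaction per call:

-- drives one write transaction on a simple valid/data interface
procedure write_word (
  constant value : in  std_logic_vector(7 downto 0);
  signal   clk   : in  std_logic;
  signal   valid : out std_logic;
  signal   data  : out std_logic_vector(7 downto 0)
) is
begin
  wait until rising_edge(clk);
  data  <= value;
  valid <= '1';
  wait until rising_edge(clk);
  valid <= '0';
end procedure write_word;

A test process can then apply a sequence of transactions with calls such as write_word(x"A5", clk, valid, data); which keeps the stimulus readable.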

Related

Best variable structure for initialization of several motors in CodeSys

Problem
I have a PLC hooked up to several motors (which are all of the same type) via CANopen. The PLC is programmed using CodeSys with "Structured Text". In order to activate the motors, each one has to run through an initialization state machine, for which I have to send some commands in sequence (power on, activate, etc.). But as far as I understand, I have to explicitly assign a variable for each boolean which has to be activated (mot1_power_on, mot2_power_on, mot1_enable, mot2_enable, etc.).
Question
How do I efficiently initialize several identical motors with CodeSys and Structured Text, where each has to run through an initialization state machine? I find it bad practice to assign a bool for each motor and each variable and then program the same code several times. How can this task be handled efficiently? Is there a way to pass the motor or some struct to a function, which then performs this task for each of the motors? In C++ I would instantiate a class to perform this task, but how can this be done in CodeSys, where I have to explicitly assign a variable for each motor?
Background
I am new to CodeSys, but I have some background in C/C++, MATLAB, Python and other languages.
Having programmed in C++, I will assume you are familiar with object-oriented programming. Function blocks in CODESYS are really similar to classes in OO languages. So go ahead and create a "motors" class with whatever member variables and methods you wish to use. Instantiate this class for each of your motors (either through individual variables or an array), and make sure whatever code needs to run is called, somehow, from your main program.
I expect the part that will not feel as natural is the part about I/O, and this is what you refer to when you say "assign a variable for each boolean". Because in your project, the (probably BIT rather than BOOL) values you need to read and write have hardware addresses (like %I12.3, %Q3.2). Once you have the classes/instances in place, you still need to tell each instance where to find its own I/O. You would rather not use separate global variables for that, which could lead to code duplication, right?
Here is one way to do it:
Create a structure for each I/O memory block you want to address. The simplest case is when all input variables are together in I/O memory and all output variables are together too, which means two structures to define. These structures must match the I/O memory layout bit for bit.
Be careful that TRUE/FALSE I/O is generally exposed as BIT values. When you include consecutive BIT members in your structures, CODESYS will pack them inside bytes (whereas a BOOL takes up at least one full byte). BIT members are very often needed to ensure a structure matches the true layout of values in I/O memory. Be mindful that all types other than BIT are byte-aligned; for example, a lonely BIT variable between two BYTE variables will take up a whole byte.
In your function block, declare variables using your structures as a type, with undefined addresses.
(* I/O declared at "undefined" addresses; each instance is mapped to real addresses later *)
inputs  AT %I* : MY_INPUTS_STRUCTURE;
outputs AT %Q* : MY_OUTPUTS_STRUCTURE;
These undefined addresses essentially act as references. Each instance of your function block will receive its own, independent reference. In order for that to work, however, you have to "map" those undefined addresses to hardware addresses. Under CODESYS this can be done in a couple of ways: you can go to the mapping page of the motor in the project and do it for each variable individually, or you can add a VAR_CONFIG to your project, which allows you to have one mapping per structure (no need to associate each variable in the structure individually).
Note that when mapping whole structures rather than individual variables, you may have byte order (little-endian vs big-endian) issues to deal with when using multi-byte types if the fieldbus byte order differs from the CPU byte order.
It may seem a bit heavy at first, but once you figure it out it really is not, and it allows you to create function blocks with I/O that can be put in libraries and reused in many projects.

What is the usefulness of a component declaration?

With VHDL '93 introducing direct instantiation, when would you actually use a component now, when your entity is in VHDL? The following are the only times I can think of when a component is required:
Component maps to non-VHDL source (Verilog, netlist, etc.)
You don't have the source yet and need something to compile against (eg. colleague hasn't finished their code yet)
You are binding different entity/architecture pairs to specific components in specific entities via configs. (but who ever actually does this? maybe if you have a simulation arch and synth arch - but again - never seen it used in any meaningful way)
I am discounting people who say that "A component lets me see the port map in the same file" or "having a component library allows me to see everything". This is mostly an old-school approach that people have got into the habit of. In my eyes, maintaining the same code in two places makes no sense.
Are there any others I've missed?
Sorry for the late response to "Can't compile VHDL package - Modelsim error: (vcom-1576) expecting END".
In addition to the use cases listed by the OP, and in compliance with his criteria for not-so-useful cases, I'll add two more:
To write platform-independent code, one needs to implement e.g. both an Altera and a Xilinx specific solution. This code will reference vendor-specific libraries like altera_mf or unisim. Both vendor-specific implementations can be selected by an if ... generate statement, or, since VHDL-2008, by a case ... generate statement.
But even with generate statements, this solution needs to instantiate components, because instances of a direct entity instantiation are bound regardless of the fact that some instances will never appear in the elaborated model. (I consider this a bug in the language - but there was no time to investigate and fix it for VHDL-2018.) Because an entity is immediately bound, the tool tries to load the referenced vendor library and its packages.
Let's assume you compile for Altera with Quartus: it will complain about an unknown unisim library and an unknown vcomponents package. The same happens on Xilinx, with Vivado complaining about an unknown altera_mf library.
Thus, to cut off the (direct) instantiation tree, a component instantiation is needed.
This technique is used by the PoC-Library. See for example the PoC.misc.sync.Bits implementation for a standard double-FF synchronizer with different attributes applied to an Altera, Xilinx or generic implementation.
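To illustrate the pattern only - this is a much-simplified sketch, not the actual PoC code; the constant C_VENDOR, the entity my_sync and its clk/d/q ports are invented for the example and would come from a shared configuration package and your own design:

architecture rtl of my_sync is
  -- component declaration: nothing is bound to any vendor library here
  component vendor_sync is
    port (clk : in std_logic; d : in std_logic; q : out std_logic);
  end component;
begin
  genXilinx : if C_VENDOR = "XILINX" generate
    -- component instantiation: the binding to a unisim-based entity only
    -- happens if this branch is elaborated, so an Altera flow never tries
    -- to load the Xilinx libraries
    sync : vendor_sync port map (clk => clk, d => d, q => q);
  end generate genXilinx;

  genOther : if C_VENDOR /= "XILINX" generate
    -- simplified generic fallback (the real code is a proper synchronizer)
    process (clk)
    begin
      if rising_edge(clk) then
        q <= d;
      end if;
    end process;
  end generate genOther;
end architecture rtl;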
In the Open Source VHDL Verification Methodology (OSVVM), components are used for two use cases:
In a toplevel DUT, IP cores are instantiated as components so they can be replaced by dummy implementations - either as unbound components or as components loading a dummy architecture. This can speed up simulations, reduce possible error sources in testing, replace complex, slow implementations like MGTs or memory controllers with simpler and faster ones, and so on.
In OSVVM, the test control is implemented in a separate entity called TestController. This entity has several architectures to implement the different test cases applied to the same test harness. The architectures are bound with a top-level configuration per test case / architecture. Thus, running a "testbench" means elaborating and simulating one of these configurations.
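As a rough sketch of that second use case (the entity, architecture and instance names are illustrative, not the actual OSVVM code): the harness instantiates TestController as a component, and one top-level configuration per test case selects which architecture gets bound.

-- TestController has one architecture per test case, e.g.
--   architecture SendReceive of TestController is ... end architecture;
configuration tb_SendReceive of TestHarness is
  for structural                     -- architecture of the test harness
    for TestCtrl : TestController    -- instance label : component name
      use entity work.TestController(SendReceive);
    end for;
  end for;
end configuration tb_SendReceive;
-- running this "testbench" means elaborating and simulating work.tb_SendReceive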

Why should an HDL simulation (from source code) have access to the simulator's API?

This is a question inspired by this question and answer pair: call questa sim commands from SystemVerilog test bench
The question asks how Verilog code could control the executing simulator (QuestaSim). I saw similar questions and approaches for VHDL, too.
So my question is:
Why should a simulation (slave) have power over its simulator (master)?
What are typical use cases?
Further reading:
call questa sim commands from SystemVerilog test bench
VerTcl - A Tcl interpreter implemented in VHDL
Why? Can anyone answer "why"? Well, perhaps the product engineer or developer at Mentor who drove the creation of such behavior can answer that. But lacking that, we can only guess. And that's what I'm doing here.
I can think of a few possible use cases, but they aren't something that cannot be done in another manner. For example, one could have a generic "testbench controller" that depending on generics/parameters could invoke certain simulator behavior. (Edit: After re-reading one of your links, I see that's the exact use case.)
For example, say I have this "generic" testbench code:
module testbench;
  parameter LOG_SIGNALS = 1'b0;
  initial
  begin
    if (LOG_SIGNALS)
    begin
      // Log all signals in the design
      mti_fli::mti_Cmd("add wave -r /*");
    end
  end
endmodule
Then, one could invoke this as:
vsim -c -gLOG_SIGNALS=1 work.testbench
The biggest use case for this might be if vsim is invoked from some environment. If one were to use a do file, I'm not sure one can pass parameters to the script. Say one had the following do file:
if {$log_signals} {
add wave -r /*
}
How does one set $log_signals from the command line? I suppose one could do that through environment variables, such as:
if { [info exists ::env(LOG_SIGNALS)] } {
add wave -r /*
}
Other use cases might be turning on/off the capture of coverage data, listing files, maybe even the quirky case of ending the simulation.
But of course, all of these can be handled in other manners - and in manners I think are much clearer and easier to maintain.
As for VerTcl, I find it fascinating, but incomplete, or at the very least barebones. I find scripted testbenches exceedingly useful (we use them where I work), and VerTcl is a great way to do it (if you like Tcl). But it does need some framework around it to read signals, drive signals, and otherwise manage a simulation.
Ideally, you wouldn't need a simulator API if the HDL were comprehensive enough to do all the functions that are currently left to the simulator. At its inception, Verilog was implemented as an interpreted language, and the command line was the Verilog language itself, rather than some other Tcl-based command-line interface as we see today.
But as simulators became more complex, they needed commands that interacted with the file system, platform, OS, as well as other features at faster pace than the HDL standard could keep up with. The IEEE HDL standards stop at any implementation specific details.
So simulation tool vendors developed command line interfaces (CLIs) to meet user needs for debugging and analysis. These CLIs are robust enough to create stimulus and check behaviors, so there can be an overlap in functionality between the CLI code and what's in your testbench source code. Having an API to your simulator's CLI can therefore make it easier to control the flow of commands to the simulator and avoid duplication of procedures.
For example, maybe you want to start logging signals to add to a waveform after the design gets out of reset. From the CLI, you would have to set a watch condition on the reset signal that executes the logging command when reset goes inactive. Using the simulator API, you could just put that command in your bench at the spot where you release reset.

Graphical VHDL Component Editor

In many of my VHDL designs I create a lot of low level components in HDL (which is fine). However, when I am ready to create multiple instantiations and link them together at the top-level, I find that the file ends up being rather large with tons of internal signals going between tons of component instantiations. It gets a little unwieldy and hard to follow.
Instead, I thought it might be easier to understand and faster to develop if there were a graphical tool to do the high-level component linking. It would parse my low-level HDL files, determine the port inputs/outputs and create a block (or multiple instances of that block). I could then use my mouse to create connections between the blocks and give them text labels. When I am done, it would auto-generate a VHDL file with all the appropriate syntax for creating internal signals, component instantiations, port declarations, etc.
I tried experimenting with Xilinx Schematic Editor, but this thing was a beast and I did not have any luck.
Is there any tool out there like this? If it could even get me 90% of the way I'd be happy.
You should take a look at www.sigasi.com
Sigasi has autocompletes to help you with the instantiations and quick-fixes to help you with the wiring. There is also a premium version that gives you a live, graphical block diagram view (demo screencast)

What does it mean when someone says that the module has both behavior and state?

In a code review I was told that my module has behavior and state at the same time. What does that mean, anyway?
Isn't that the whole point of object-oriented programming: that instead of operating on data directly with logical circuitry using functions, we choose to operate on these closed black boxes (encapsulation) using a set of neatly designed keys, switches and gears?
Wouldn't such a scheme naturally contain data (state) and logic (behavior) at the same time?
By module I mean a real Ruby module.
I designed something like this: How to design an application keeping SOLID principles and Design Patterns in mind
and implemented the commands in a module which I used as a mixin.
Whatever you are referring to, be it an object defined by a class (or type), a module, or anything else with code in it, state is data that is persisted over multiple calls to the thing. If it "remembers" anything between one execution and the next, then it has state.
Behavior, on the other hand, is code that manipulates or processes that state data, or non-state data that is used only during a single execution of the code (like parameter values passed to a function). Methods, subroutines or functions - anything that changes or does something - is behavior.
Most classes, types, or whatever, have both data (state) and behavior, but....
Some classes or types are designed simply to carry data around. They are referred to as Data Transfer objects or DTOs, or Plain Old Container Objects (POCOs). They only have state, and, generally, have little or no behavior.
Other times, a class or type is constructed to hold general utility functions (like a Math Library). It will not maintain or keep any state between the many times it is called to perform one of its utilities. The only data used in it is data passed in as parameters for each call to the library function, and that data is discarded when the routine is finished. It has behavior, but no state.
You're right in your thinking that OOP encapsulates the ideas of both behaviour and state and mixes the two things together, but from the wording of your question, I'm wondering if you have written a ruby module (mixin, whatever you want to call it) that is stateful, such that there is the potential for state leakage across multiple uses of the same module.
Without seeing the code in question I can't really give you a full answer.
In Object-Oriented terminology, an object is said to have state when it encapsulates data (attributes, properties) and is said to have behavior when it offers operations (methods, procedures, functions) that operate (create, delete, modify, make calculations) on the data.
The same concepts can be extrapolated to a Ruby module: it has "state" if it defines data accessible within the module, and it has "behavior" in the form of operations provided that operate on the data.
