In VHDL, can a new enumeration type be created as identical to another?

Does VHDL provide a simple mechanism for duplicating enumeration types to allow the creation of objects that can hold the same set of values but cannot be connected or assigned to one another?
The use case would be to have logic types that would represent signals that should not be connected except to a certain other set of signals - e.g. clocks of various frequencies or resets that should not be tied to each other or to a data bit. This way, the compiler can check for those sorts of connection errors and the type names in the code can carry meaning themselves, e.g. clk40mhz_t.
It seems as though one could just copy-paste the enumeration defined for std_ulogic for any such type and define suitable conversion functions. Is there a handier way than that, though?
I’ve tried a straightforward declaration like
type clock_t is std_ulogic;
But since std_ulogic is not a range, this isn’t a valid type declaration.
So I also tried a few other ill-conceived and ill-fated constructions such as std_ulogic'range, range std_ulogic'range, range std_ulogic'left to std_ulogic'right, and (std_ulogic'range).
All were understandably met with compiler errors (compiling with Quartus Prime Pro 22.3).
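For reference, a minimal sketch of the copy-paste approach mentioned above: a distinct enumeration that mirrors std_ulogic, plus conversion functions based on the 'pos and 'val attributes. The package and function names (clock_types, clk40mhz_t, to_clk40mhz) are illustrative only.

library ieee;
use ieee.std_logic_1164.all;

package clock_types is
    -- same literals as std_ulogic, but a distinct, incompatible type
    type clk40mhz_t is ('U', 'X', '0', '1', 'Z', 'W', 'L', 'H', '-');

    function to_clk40mhz(s : std_ulogic) return clk40mhz_t;
    function to_std_ulogic(c : clk40mhz_t) return std_ulogic;
end package;

package body clock_types is
    function to_clk40mhz(s : std_ulogic) return clk40mhz_t is
    begin
        return clk40mhz_t'val(std_ulogic'pos(s));
    end function;

    function to_std_ulogic(c : clk40mhz_t) return std_ulogic is
    begin
        return std_ulogic'val(clk40mhz_t'pos(c));
    end function;
end package body;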

Related

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use it if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, with the exception that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n) and it'll be computed at compile time.
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it); a short illustration follows this list.
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
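A small sketch of the invariance point above, with hypothetical function names:

f(::Array{T}) where T<:Integer = "parametric"   # matches Array{Int}, Array{UInt8}, ...
g(::Array{Integer}) = "abstract element type"   # matches only Array{Integer}

f([1, 2, 3])          # works: [1, 2, 3] is a Vector{Int}
g([1, 2, 3])          # MethodError: Vector{Int} is not a Vector{Integer}
g(Integer[1, 2, 3])   # works: the element type is exactly Integer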
Both definitions are equivalent. Normally you would use the fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use the specific type T directly in your code. For example, consider the following definitions from Base (all following definitions are also from Base) where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even when you need type information, in many cases it is enough to use the typeof function. Again, an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type, you can often avoid the where clause (which is a bit verbose), as in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now - if you are a Julia user like me - the question is how to convince yourself that this works as expected. There are the following methods:
you can check that one definition overwrites the other in the methods table (i.e. after evaluating both definitions only one method is present for this function) - see the sketch after this list;
you can check the code generated by both functions using @code_typed, @code_warntype, @code_llvm or @code_native etc. and find out that it is the same;
finally, you can benchmark the code for performance using BenchmarkTools.
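A sketch of the first two checks, using a hypothetical fun:

fun(n::Integer) = n + 1
fun(n::T) where T<:Integer = n + 1   # same signature, so it overwrites the first definition

length(methods(fun))   # 1 - only one method remains
@code_typed fun(42)    # the generated code is identical for either definition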
A nice plot explaining what Julia does with your code is here http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user - it is excellent). You can see in it that, after lowering the AST, Julia applies a type inference step to specialize the function call before the LLVM codegen step.
You can hint to the Julia compiler to avoid specialization. This is done using the @nospecialize macro on Julia 0.7 (it is only a hint, though).

Why are some signal attributes implicit signals while others are not?

In VHDL, some signal attributes (e.g. 'TRANSACTION) are implicit signals. Others (e.g. 'EVENT) are not. Why is this?
For predefined attributes, the returned VHDL object, its type and its value are not restricted by the language, whereas user-defined attributes are restricted to constant values.
6.7 Attribute declarations
An attribute is a value, function, type, range, signal, or constant that may be associated with one or more named entities in a description. There are two categories of attributes: predefined attributes and user-defined attributes. Predefined attributes provide information about named entities in a description. Clause 16 contains the definition of all predefined attributes. Predefined attributes that are signals shall not be updated.
User-defined attributes are constants of arbitrary type. Such attributes are defined by an attribute declaration.
§ 6.7 on p. 92, IEEE Standard VHDL Language Reference Manual, IEEE Standard 1076-2008
So built-in attributes can map to almost anything. In the case of 'transaction, the returned object is a signal of type bit.
The attribute or tick syntax is a nice compact thing in the VHDL language. It's used for several purposes.
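To illustrate the difference, here is a minimal sketch (the entity and signal names are made up): s'transaction is itself a signal of type bit and can drive another signal, whereas s'event only yields a value for use inside an expression.

library ieee;
use ieee.std_logic_1164.all;

entity transaction_demo is
    port (s : in std_ulogic);
end entity;

architecture rtl of transaction_demo is
    signal toggled : bit;
begin
    toggled <= s'transaction;  -- legal: 'transaction is an implicit signal

    process (s)
    begin
        if s'event and s = '1' then  -- 'event is a value, not a signal
            report "rising edge seen";
        end if;
    end process;
end architecture;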

Move Semantics in Golang

This is from Bjarne Stroustrup's The C++ Programming Language, Fourth Edition, section 3.3.2:
We didn’t really want a copy; we just wanted to get the result out of
a function: we wanted to move a Vector rather than to copy it.
Fortunately, we can state that intent:
class Vector {
// ...
Vector(const Vector& a); // copy constructor
Vector& operator=(const Vector& a); // copy assignment
Vector(Vector&& a); // move constructor
Vector& operator=(Vector&& a); // move assignment
};
Given that definition, the compiler will choose the move constructor
to implement the transfer of the return value out of the function.
This means that r=x+y+z will involve no copying of Vectors. Instead,
Vectors are just moved. As is typical, Vector's move constructor is
trivial to define...
I know Golang supports traditional passing by value and passing by reference using Go style pointers.
Does Go support "move semantics" the way C++11 does, as described by Stroustrup above, to avoid useless copying back and forth? If so, is this automatic, or does it require us to do something in our code to make it happen?
Note: A few answers have been posted - I have to digest them a bit, so I haven't accepted one yet - thanks.
The breakdown is like this:
Everything in Go is passed by value.
But there are five built-in "reference types" which are passed by value as well, but internally hold references to separately maintained data structures: maps, slices, channels, strings and function values (there is no way to mutate the data the latter two reference).
Your own answer, @Vector, is incorrect in that nothing in Go is passed by reference. Rather, there are types with reference semantics. Values of them are still passed by value (sic!).
Your confusion presumably stems from the fact that your mind is currently burdened by C++, Java etc, while these things in Go are done mostly "as in C".
Take arrays and slices for instance. An array is passed by value in Go, but a slice is a packed struct containing a pointer (to an underlying array) and two platform-sized integers (the length and the capacity of the slice), and it's the value of this structure which is copied — a pointer and two integers — when it's assigned or returned etc. Should you copy a "bare" array, it would be copied literally — with all its elements.
The same applies to channels and maps. You can think of types defining channels and maps as declared something like this:
type Map struct {
impl *mapImplementation
}
type Slice struct {
impl *sliceImplementation
}
(By the way, if you know C++, you should be aware that some C++ code uses this trick to avoid exposing implementation details in header files.)
So when you later have
m := make(map[int]string)
you could think of it as m having the type Map and so when you later do
x := m
the value of m gets copied, but it contains just a single pointer, and so both x and m now reference the same underlying data structure. Was m copied by reference ("move semantics")? Surely not! Do values of type map and slice and channel have reference semantics? Yes!
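A small runnable demonstration of this point, contrasting a map (header copied, data shared) with an array (all elements copied):

package main

import "fmt"

func main() {
	m := make(map[int]string)
	x := m            // copies only the map header (internally, a pointer)
	x[1] = "shared"   // the change is visible through both variables
	fmt.Println(m[1]) // prints "shared"

	a := [3]int{1, 2, 3}
	b := a            // copies the whole array, element by element
	b[0] = 99
	fmt.Println(a[0]) // prints 1: a is unaffected
}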
Note that these three types are not at all special: implementing your own custom type by embedding in it a pointer to some complicated data structure is a rather common pattern.
In other words, Go allows the programmer to decide what semantics they want for their types. And Go happens to have five built-in types which have reference semantics already (while all the other built-in types have value semantics). Picking one semantics over the other does not affect the rule of copying everything by value in any way. For instance, it's fine to have pointers to values of any kind of type in Go, and to assign them (so long as they have compatible types) — these pointers will be copied by value.
Another angle to look at this is that many Go packages (standard and 3rd-party) prefer to work with pointers to (complex) values. One example is os.Open() (which opens a file on a filesystem) returning a value of the type *os.File. That is, it returns a pointer and expects the calling code to pass this pointer around. Surely, the Go authors might have declared os.File to be a struct containing a single pointer, essentially making this value have reference semantics but they did not do that. I think the reason for this is that there's no special syntax to work with the values of this type so there's no reason to make them work as maps, channels and slices. KISS, in other words.
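A short sketch of that style (the file path is only an example and may not exist on your system):

package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/etc/hosts") // f is a *os.File: a pointer, itself copied by value
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	g := f              // copies the pointer only
	fmt.Println(f == g) // true: both refer to the same open file
}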
Recommended reading:
"Go Data Structures"
"Go Slices: Usage and Internals"
"Arrays, slices (and strings): The mechanics of 'append'"
A thread on golang-nuts — pay close attention to the reply by Rob Pike.
The Go Programming Language Specification
Calls
In a function call, the function value and arguments are evaluated in
the usual order. After they are evaluated, the parameters of the call
are passed by value to the function and the called function begins
execution. The return parameters of the function are passed by value
back to the calling function when the function returns.
In Go, everything is passed by value.
Rob Pike
In Go, everything is passed by value. Everything.
There are some types (pointers, channels, maps, slices) that have
reference-like properties, but in those cases the relevant data
structure (pointer, channel pointer, map header, slice header) holds a
pointer to an underlying, shared object (pointed-to thing, channel
descriptor, hash table, array); the data structure itself is passed by
value. Always.
Always.
-rob
It is my understanding that Go, as well as Java and C#, never had the excessive copying costs of C++, but they do not solve ownership transfer to containers. Therefore there is still copying involved. As C++ becomes more of a value-semantics language, with references/pointers being relegated to i) smart-pointer managed objects inside classes and ii) dependence references, move semantics solves the problem of excessive copying. Note that this has nothing to do with "pass by value": nowadays everyone passes objects by reference (&) or const reference (const &) in C++.
Let's look at this (1):
BigObject BO(big,stuff,inside);
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BO);
Or (2)
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BigObject(big,stuff,inside));
Although you're passing BO by reference to the vector vo, in C++03 a copy was still made inside the vector code.
In the second case, there is a temporary object that has to be constructed and then is copied inside the vector. Since it can only be accessed by the vector, that is a wasteful copy.
However, in the first case, our intent could be just to give control of BO to the vector itself. C++11 allows this:
(1, C++11)
vector<BigObject> vo;
vo.reserve(1000000);
vo.emplace_back(big,stuff,inside);
Or (2, C++11)
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BigObject(big,stuff,inside));
From what I've read, it is not clear that Java, C# or Go are exempt from the same copy duplication that C++03 suffered from in the case of containers.
The old-fashioned COW (copy-on-write) technique also had the same problems, since the resources will be copied as soon as the object inside the vector is duplicated.
Stroustrup is talking about C++, which allows you to pass containers, etc. by value - so the excessive copying becomes an issue.
In Go (like in Delphi, Java, etc.), when you pass a container type, it is always a reference, so it's a non-issue. Regardless, you don't have to deal with it or worry about it in GoLang - the compiler just does what it needs to do, and from what I've seen thus far, it's doing it right.
Thanks to @KerrekSB for putting me on the right track.
@KerrekSB - I hope this is the right answer. If it's wrong, you bear no responsibility. :)

Generating Single Port ROM on Spartan 6 using Xilinx ISE Design Suite

I'm having some trouble getting a single-port ROM working on a Spartan-6 board. I use the provided core generator to create block memory and choose a single-port ROM with 32-bit width and 256 depth, with a COE file that just counts from 0 to 255. I drop the ROM into my VHDL as a component and add XilinxCoreLib as a library. When I try to generate the programming file I get the translate error:
logical block 'rom1' with type 'rom' could not be
resolved. A pin name misspelling can cause this, a missing edif or ngc file,
case mismatch between the block name and the edif or ngc file name, or the
misspelling of a type name. Symbol 'rom' is not supported in target
'spartan6'.
I'm currently using Xilinx ISE 13.1 if that helps. I feel like this should be really easy to do but I haven't been able to find how to do it.
Edit: Thanks everyone, it was a combination of things: wrong speed grade, and I didn't add a copy of the NGC file to my working directory. I'll use arrays in the future.
The easiest way is to forget the vendor tools altogether and simply declare a constant array!
If this is in a package separate from the rest of the design, a few lines of printf's or a simple script can generate the VHDL boilerplate around the contents, which come from your assembler or whatever tool creates the actual data.
Since you're adding a Xilinx-generated core to your design in ISE, you need to add both the VHD file and the NGC file via "Add Source" in the Project menu.
Even easier, depending on how large your ROM needs to be and what data goes into it, would be to not even bother with a Xilinx core, but to use pure VHDL to declare a constant array and initialization values right in your VHDL file. Here is an example:
type array_ROM is array (0 to NUMBER_OF_ROWS-1) of std_logic_vector (ROM_BITWIDTH-1 downto 0);

signal my_ROM : array_ROM :=
(
    x"12345678",
    x"ABCDEF01",
    ...
    x"01010101"
);
Now, you don't put the ellipsis (...) in that initialization list; just put rows of constants with bit widths matching ROM_BITWIDTH. The NUMBER_OF_ROWS is the number of address locations you need in the ROM. In this example, ROM_BITWIDTH would have to be set to 32, as I've used 32-bit hexadecimal constants in the initialization list. Being a signal, it's actually modifiable, so if you need it to be constant, just use "constant" instead of signal.
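If you want the ROM to be read synchronously (which generally helps the tools infer block RAM), a minimal sketch of a clocked read could look like the following; the clk, addr and data_out signals and the use of ieee.numeric_std are assumptions for illustration.

-- clocked read from my_ROM; clk, addr and data_out are assumed signals
process (clk)
begin
    if rising_edge(clk) then
        data_out <= my_ROM(to_integer(unsigned(addr)));
    end if;
end process;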
I guess the problem is, as the message says, a misspelling. To get the correct component declaration/instantiation, select your rom.xco in the Design window of ISE, then select "View VHDL instantiation template" from the Process window, and use the component declaration and instantiation described therein.
There are a number of things that can cause this problem. One is that you are using a block that was generated for another FPGA family inside the Spartan-6. Another is that you may have generated the ROM using an older version of the tool and the wrapper for the ROM has changed since then.
You can either declare an array as Brian suggested, forgetting about the tool-specific ROM type, or re-generate the IP under your current project settings and see how it goes.

Why do I need to redeclare VHDL components before instantiating them in other architectures?

I've been scratching my head since my first VHDL class and decided to post my question here.
Given that I have a declared entity (and also an architecture of it) and want to instantiate it inside another architecture, why is it that I seemingly have to redeclare the "entity" (component) inside this containing architecture before instantiating it?
Isn't the compiler smart enough to match an instantiation to its architecture just by its name? Where is the need for the component declaration?
You can instantiate the entity directly, if desired:
MyInstantiatedEntity : entity work.MyEntity_E
generic map (
config => whatever)
port map (
clk => signal1,
clk_vid => signal2,
...
Creating a component declaration gives you the extra ability to change what gets bound to the instantiation via a configuration specification or similar.
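For comparison, a sketch of the component-declaration style the question describes, reusing the names from the direct instantiation above (the generic and port types are assumptions for illustration):

-- in the declarative region of the containing architecture:
component MyEntity_E
    generic (config : natural);          -- type assumed for illustration
    port (clk     : in std_ulogic;
          clk_vid : in std_ulogic);
end component;

-- in the architecture body:
MyInstantiatedEntity : MyEntity_E
    generic map (config => whatever)
    port map (clk     => signal1,
              clk_vid => signal2);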
Back when I did my VHDL assignments in school, I was required to have all our code in one file, so I don't remember whether or not you could write one file for each module and how it was done.
That being said, you would have to declare the entity you want to use when defining the behavior, much in the same way that you would define prototypes, structures, classes and whatnot in C or C++. The difference here is that you don't have the luxury of defining header files for this "redeclaration" in VHDL (at least I don't think there is an equivalent). So it seems perfectly reasonable to me to have to do this. Note that VHDL came out when C was very common and compilers weren't as "smart" as they are today.
A VHDL guru might have a definitive answer for this but this is how I understand it.
