My current method of comparing two reals (after calculations) is to take the difference, cast it to an integer, and compare to 0. For example (just to highlight the problem; the example might work in a simulator):
variable a : real := 0.1;
constant epsilon : real := 1.0E-5; -- Whatever accuracy needed, not too low though
a := a + 5.3;
assert a = 5.4; -- Yields intermittent errors
assert integer(a - 5.4) = 0; -- Erroneous if 4.9 < a < 5.9 due to cast rounding
assert abs(a - 5.4) < epsilon; -- Will work every time, but seems a bit forced
The reason for doing it this way is that I got a lot of assertion errors in a testbench I wrote (a tad more complex than the example code). I attributed these errors to floating point errors in the GHDL simulator. Is there a better way to compare two reals to each other, like using machine epsilon, or any built-in methods?
As indicated by Philippe, comparison of reals requires some margin to account for limited precision and accumulated errors in the least significant bits. Using a simple epsilon value is one common way to do this but it has a limitation in that its value is absolute in relation to the numbers being compared. You need to know in advance the expected values you're comparing to pick a suitable epsilon.
If the set of numbers you need to compare covers a wide range of magnitudes you end up with an epsilon that is too large for properly comparing small values. In this situation you'd want a small epsilon when comparing small reals and a larger epsilon for larger numbers. This is accomplished by using a comparison that accounts for relative error.
This page gives a good overview of a method that allows comparison of reals using relative error rather than absolute error. The following function is an implementation of the relative comparison in VHDL:
-- Adapted from: http://floating-point-gui.de/errors/comparison/
function relatively_equal(a, b, epsilon : real) return boolean is
begin
  if a = b then -- Take care of infinities
    return true;
  elsif a * b = 0.0 then -- Either a or b is zero
    return abs(a - b) < epsilon ** 2;
  else -- Relative error
    return abs(a - b) / (abs(a) + abs(b)) < epsilon;
  end if;
end function;
Here the epsilon parameter is a fraction that specifies the number of significant digits to compare for relative equality.
-- Compare for relative equality to three significant digits
-- These are all considered equal while using the same epsilon parameter
assert relatively_equal(1.001, 1.002, 1.0E-3) report "1.001 != 1.002";
assert relatively_equal(100.1, 100.2, 1.0E-3) report "100.1 != 100.2";
assert relatively_equal(1001.0, 1002.0, 1.0E-3) report "1001 != 1002";
-- Compare for relative equality to four significant digits
-- This will raise the assertion
assert relatively_equal(1.001, 1.002, 1.0E-4) report "1.001 != 1.002";
This question is generic to any programming language that uses "real" values (a.k.a. floating point numbers).
The standard way to compare reals in automatic tests is to define a small value epsilon. Then check that the absolute difference between your two reals is less than epsilon. You can define your own procedure assertEquals(x, y, epsilon) if you want to write concise test benches.
procedure assertEquals(
  x, y    : real;
  epsilon : real := 1.0E-5;
  message : string := "";
  level   : severity_level := error) is
begin
  assert (abs(x - y) < epsilon) report message severity level;
end procedure assertEquals;
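For example (the names a and b and the expected values here are only illustrative):
-- Illustrative calls in a testbench process
assertEquals(a, 5.4, message => "a /= 5.4");
assertEquals(b, 0.001, epsilon => 1.0E-9, message => "b /= 0.001", level => failure);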
Let’s imagine I would like to round a number (e.g. x = 7.4355) to a given arbitrary precision (e.g. p = 0.002). In this case, I would expect to see:
round_arbitrary(x, p) = 7.436
What would be the best approach to design such a rounding function? Ideas in pseudocode or Rust are welcome.
What would be the best approach to design such a rounding function?
An approach that gets near to OP's goal:
// Pseudo code (p != 0)
round_arbitrary(x, p)
  x /= p
  x = round(x)
  return x*p
A key point is that floating point numbers are finite in size and so can represent about 2^64 different values exactly, whereas code values like 7.4355, 0.002 and the math quotient 1/7.0 belong to a much bigger set. Thus the above will get one close, but not with certainty to an exact mathematically rounded value.
More advanced code would avoid overflow by not rounding large values which do not need rounding.
// Assume 0 < |p| < 1.0
round_arbitrary_2(x, p)
  if (round(x) != x)
    x /= p
    x = round(x)
    x *= p
  return x
Deeper
This issue lies with floating point numbers that are encoded as an integer times a power-of-2. Then the question is not so much "How to round to an arbitrary (non power-of-ten) precision", but "How to round to an arbitrary (non power-of-2) precision".
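Since the rest of this page is VHDL-oriented, the same divide-round-multiply idea can be sketched as a VHDL real function (a sketch only, normally placed in a package; round comes from ieee.math_real, and the result is still subject to binary rounding error):
library ieee;
use ieee.math_real.all;
-- Sketch: round x to the nearest multiple of p (assumes p /= 0.0)
function round_arbitrary(x, p : real) return real is
begin
  return round(x / p) * p;
end function round_arbitrary;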
Say you have the following code,
signal a : std_logic_vector( N - 1 downto 0 );
Or,
for i in 0 to N - 1
...
Does the N-1 part get synthesized into actual logic, or does the FPGA software (e.g. Quartus) do the math before creating the logic?
Example use case, say generic N represents the number of desired bits. Is it better to have a generic Nm1 which holds the subtraction, or can I get by using N-1 as above without it creating additional logic?
As N is a generic parameter, it is considered a constant at synthesis time. Your logic synthesizer, like all logic synthesizers I know of, will propagate constants before inferring any piece of hardware. The synthesizer will compute the N-1 expression before synthesis and this operation will not cost you a single transistor. It would be the same with a much more complex operation on constants like, for instance, calling a function. Example:
function log2_up(x : positive) return natural is
begin
  if x = 1 then
    return 0;
  else
    return log2_up((x + 1) / 2) + 1;
  end if;
end function log2_up;
...
constant word_width: positive := 64;
constant write_byte_enable_width: positive := log2_up(word_width / 8);
is synthesizable by all logic synthesizers I know and computing write_byte_enable_width does not cost a single transistor; it is computed beforehand by the synthesizer during the constant propagation phase.
In both of these cases, the value of N - 1 would have to be calculated at compile time. It isn't -- and cannot be! -- generated in logic.
Keep in mind that any HDL code ultimately has to be able to synthesize to physical hardware. Changing the number of bits in a signal, or changing the number of times some logic is instantiated, isn't something that can be done electronically.
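As a small illustration (the entity and port names below are made up), both forms below elaborate to the same fixed-width hardware; no arithmetic logic is created for N - 1:
library ieee;
use ieee.std_logic_1164.all;

entity widget is
  generic ( N : positive := 8 );
  port ( a : in  std_logic_vector(N - 1 downto 0);  -- N - 1 is folded at elaboration
         b : out std_logic_vector(N - 1 downto 0) );
end entity widget;

architecture rtl of widget is
  constant Nm1 : natural := N - 1;  -- an explicit constant changes nothing
begin
  b <= a;  -- placeholder body
end architecture rtl;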
I would like to know what the corresponding VHDL code is for $clog2(DATA_WIDTH), for example in this line:
parameter DATA_OUT_WIDTH = $clog2(DATA_WIDTH)
and also for this sign " -: " in this example
if ( Pattern == In[i_count-:PATTERN_WIDTH] )
I would appreciate it if anyone can help me.
You can do something like this
constant DATA_OUT_WIDTH : positive := positive(ceil(log2(real(DATA_WIDTH))));
or define a clog2 function encapsulating that expression. ceil and log2 can be found in math_real
use ieee.math_real.all;
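For instance, such a wrapper function could look like this (a sketch; you would normally declare it in a utility package):
-- Sketch: clog2 wrapper around the ceil/log2 expression above
function clog2(n : positive) return natural is
begin
  return natural(ceil(log2(real(n))));
end function clog2;
constant DATA_OUT_WIDTH : positive := clog2(DATA_WIDTH);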
In VHDL you can just specify the full range, for example
foo(i_count to i_count + 7)
foo(i_count downto i_count - 7)
Don't use In as an identifier though, it's a reserved word in VHDL.
In addition to Lars' example you can easily write a function for finding the ceiling of log2 to determine the number of element address 'bits' necessary for some bus width. Some vendors or verification support libraries provide one already.
The reason there isn't a predefined function in an IEEE library already is expressed in Lars' answer: you tend not to use it much, you can assign the value to a constant, and the expression can be cobbled together from existing functions.
An example clog2 function
A borrowed and converted log2 routine from the IEEE package float_generic_pkg:
function clog2 (A : NATURAL) return INTEGER is
  variable Y : REAL;
  variable N : INTEGER := 0;
begin
  if A = 1 or A = 0 then -- trivial rejection and acceptance
    return A;
  end if;
  Y := real(A);
  while Y >= 2.0 loop
    Y := Y / 2.0;
    N := N + 1;
  end loop;
  if Y > 0.0 then
    N := N + 1; -- round up to the nearest log2
  end if;
  return N;
end function clog2;
The argument A, of type NATURAL, prevents passing negative integer values. Rounding is strict: any remainder below 2.0 causes rounding up.
Note that because this uses REAL and uses division it's only suitable for use during analysis and elaboration. It's a pure function.
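A typical use (DEPTH here is an illustrative constant or generic):
constant ADDR_BITS : natural := clog2(DEPTH);  -- evaluated during elaboration
signal   rd_addr   : unsigned(ADDR_BITS - 1 downto 0);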
You could note that Lars' example:
constant DATA_OUT_WIDTH : positive := positive(ceil(log2(real(DATA_WIDTH))));
has the same constraints on use for analysis (locally static) and elaboration (globally static). REAL types are generally not supported for synthesis and floating point operations can consume lots of real estate.
The if condition
if ( Pattern == In[i_count-:PATTERN_WIDTH] )
uses an indexed part-select: a base index (an lsb or msb depending on ascending or descending declared bit order) and a width.
See IEEE Std 1800-2012 (SystemVerilog), 11.5.1 Vector bit-select and part-select addressing.
An indexed part-select is given with the following syntax:
logic [15:0] down_vect;
logic [0:15] up_vect;
down_vect[lsb_base_expr +: width_expr]
up_vect[msb_base_expr +: width_expr]
down_vect[msb_base_expr -: width_expr]
up_vect[lsb_base_expr -: width_expr]
The msb_base_expr and lsb_base_expr shall be integer expressions, and the width_expr shall be a positive constant integer expression. Each of these expressions shall be evaluated in a self-determined context. The lsb_base_expr and msb_base_expr can vary at run time. The first two examples select bits starting at the base and ascending the bit range. The number of bits selected is equal to the width expression. The second two examples select bits starting at the base and descending the bit range.
In VHDL terms this would be a slice with bounds determined from the high index and a width by subtraction.
PATTERN_WIDTH can be globally static (as in a generic constant) as well as locally static (a non-deferred constant). i_count can be variable.
Depending on the declared range of In for example:
constant DATAWIDTH: natural := 8;
signal In_in: std_logic_vector (31 downto 0);
The equivalent expression would be
if Pattern = In_in(i_count downto i_count - DATAWIDTH + 1) then
Note that if i_count is less than DATAWIDTH - 1 you'll get a run time error, because the slice would reach below In_in'RIGHT = 0. The + 1 makes the slice exactly DATAWIDTH bits wide.
Without the declarations for In (or Pattern) and DATAWIDTH a better answer can't be provided. It really wants to be re-written to be VHDL friendly.
Note that, as Lars indicated, in is a reserved word (VHDL is not case sensitive here), so the name was changed.
I am trying to implement an equation in VHDL which has multiplication by constants and addition. The equation is as below,
y<=-((x*x*x)*0.1666)+(2.5*(x*x))- (21.666*x) + 36.6653; ----error
I got the error
HDLCompiler:1731 - found '0' definitions of operator "*",
can not determine exact overloaded matching definition for "*".
The entity is:
entity eq1 is
  Port ( x : in  signed(15 downto 0);
         y : out signed(15 downto 0) );
end eq1;
I tried using the RESIZE function and x as an integer, but it gives the same error. Do I have to use another data type? x holds pure integer values like 2, 4, 6, etc.
Since x and y are of datatype signed, you can multiply them. However, there is no multiplication of signed with real. Even if there was, the result would be real (not signed or integer).
So first, you need to figure out what you want (the semantics). Then you should add type casts and conversion functions.
y <= x*x; -- OK
y <= 0.5 * x; -- not OK
y <= to_signed(integer(0.5 * real(to_integer(x))),y'length); -- OK
This is another case where simulating before synthesis might be handy. ghdl, for instance, tells you which "*" operator it finds the first error for:
ghdl -a implement.vhdl
implement.vhdl:12:21: no function declarations for operator "*"
y <= -((x*x*x) * 0.1666) + (2.5 * (x*x)) - (21.666 * x) + 36.6653;
---------------^ character position 21, line 12
The expressions where x is multiplied by x have both operands of type signed.
(And for later, we also note that the complex expression on the right hand side of the signal assignment operation will eventually be interpreted as a signed value with a narrow subtype constraint when assigned to y).
VHDL determines the type of the literal 0.1666: it's an abstract literal, that is, a decimal literal or floating-point literal (IEEE Std 1076-2008 5.2.5 Floating-point types, 5.2.5.1 General, paragraph 5):
Floating-point literals are the literals of an anonymous predefined type that is called universal_real in this standard. Other floating-point types have no literals. However, for each floating-point type there exists an implicit conversion that converts a value of type universal_real into the corresponding value (if any) of the floating-point type (see 9.3.6).
There's only one predefined floating-point type in VHDL, see 5.2.5.2, and the floating-point literal of type universal_real is implicitly converted to type REAL.
9.3.6 Type conversions paragraph 14 tells us:
In certain cases, an implicit type conversion will be performed. An implicit conversion of an operand of type universal_integer to another integer type, or of an operand of type universal_real to another floating-point type, can only be applied if the operand is either a numeric literal or an attribute, or if the operand is an expression consisting of the division of a value of a physical type by a value of the same type; such an operand is called a convertible universal operand. An implicit conversion of a convertible universal operand is applied if and only if the innermost complete context determines a unique (numeric) target type for the implicit conversion, and there is no legal interpretation of this context without this conversion.
Because you haven't included a package containing another floating-point type, that leaves us searching for a "*" multiplying operator with one operand of type signed and one of type REAL with a return type of signed (or another "*" operator with the opposite operand type arguments), and VHDL found 0 of those.
There is no
function "*" (l: signed; r: REAL) return REAL;
or
function "*" (l: signed; r: REAL) return signed;
found in package numeric_std.
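If you only needed it for simulation you could declare such an overload yourself; the following is only a sketch of that idea, not something numeric_std provides:
-- Hypothetical user-defined overload (simulation only)
function "*" (l : signed; r : real) return signed is
begin
  return to_signed(integer(real(to_integer(l)) * r), l'length);
end function "*";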
Philippe suggests one way to overcome this by converting signed x to integer.
Historically, synthesis doesn't encompass type REAL; prior to the 2008 version of the VHDL standard you were likely to have arbitrary precision, while 5.2.5 paragraph 7 now tells us:
An implementation shall choose a representation for all floating-point types except for universal_real that conforms either to IEEE Std 754-1985 or to IEEE Std 854-1987; in either case, a minimum representation size of 64 bits is required for this chosen representation.
And that doesn't help us unless the synthesis tool supports floating-point types of REAL and is -2008 compliant.
VHDL has the float_generic_pkg package introduced in the 2008 version, which performs synthesis-eligible floating point operations and is compatible with the use of signed types by converting to and from its float type.
Before we suggest something as drastic as performing all these calculations as 64-bit floating point numbers and synthesizing all that, let's again note that the result is a 16-bit signed, which is an array type of std_ulogic and represents a 16-bit integer.
You can model the multiplications on the right hand side as distinct expressions evaluated in both floating point and signed representation to determine when the error is significant.
Because you are using a 16 bit signed value for y, significant would mean a difference greater than 1 in magnitude. Flipped signs or unexpected 0s between the two methods will likely tell you there's a precision issue.
I wrote a little C program to look at the differences and right off the bat it tells us 16 bits isn't enough to hold the math:
int16_t x, y, Y;
int16_t a,b,c,d;
double A,B,C,D;
a = x*x*x * 0.1666;
A = x*x*x * 0.1666;
b = 2.5 * x*x;
B = 2.5 * x*x;
c = 21.666 * x;
C = 21.666 * x;
d = 36;
D = 36.6653;
y = -( a + b - c + d);
Y = (int16_t) -(A + B - C + D);
And it outputs, for the leftmost value of x:
x = -32767, a = 11515, b = 0, c = 10967, y = -584, Y = 0
x = -32767, A = -178901765.158200, B = 2684190722.500000, C = -709929.822000
x = -32767 , y = -584 , Y= 0, double = -2505998923.829100
The first line of output is for 16 bit multiplies and you can see all three expressions with multiplies are incorrect.
The second line says double has enough precision, yet Y (-(A + B - C + D)) doesn't fit in a 16 bit number. And you can't cure that by making the result size larger unless the input size stays the same. Chaining operations then becomes a matter of picking the best product size and keeping track of the scale, meaning you might as well use floating point.
You could of course do clamping if it were appropriate. The double value on the third line of output is the non-truncated value. It's more negative than x'LOW.
You could also do clamping in the 16 bit math domain, though all this tells you this math has no meaning in the hardware domain unless it's done in floating point.
So if you were trying to solve a real math problem in hardware it would require floating point, likely accomplished using package float_generic_pkg, and wouldn't fit meaningfully in a 16 bit result.
As stated in found '0' definitions of operator “+” in VHDL, the VHDL compiler is unable to find the matching operator for your operation, which is e.g. multiplying x*x. You probably want to use numeric_std (see here) in order to make operators for signed (and unsigned) available.
But note that VHDL is not a programming language but a hardware description language. That is, if your long-term goal is to move the code to an FPGA or CPLD, these functions might not work any longer, because they are not synthesizable.
I'm stating this because you will run into more problems when you try to multiply by e.g. 0.1666, since VHDL usually has no knowledge of floating point numbers out of the box.
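For reference, the context clause that makes the numeric_std operators visible is simply:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- "*", "+", resize, to_signed, ... for signed/unsigned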
I'm currently working on a design in which I need to do sgn(x)*y, where both x and y are signed vectors. What is the preferred method to implement a synthesizable sgn function in VHDL with signed vectors? I would use the SIGN function in the IEEE.math_real package but it seems like I won't be able to synthesize it.
You don't really need a sign function to accomplish what you need. In numeric_std the leftmost bit is always the sign bit for the signed type. Examine this bit to decide if you need to negate y.
variable z : signed(y'range);
...
if x(x'left) = '1' then -- Negative
z := -y; -- z := -1 * y
else
z := y; -- z := 1 * y
end if;
-- The same as a VHDL-2008 one-liner
z := -y when x(x'left) else y;
Modify as needed if you need to do this with signal assignments.
If you are only interested in using the sign function, another option is to define a block in your design whose input is the signed vector x and whose output is another vector holding the value of the sign.
That block simply takes the most significant bit of the vector, since it indicates the sign: if that bit is '0' (positive or zero value) the output is one (the aggregate (0 => '1', others => '0')), and if it is '1' (negative value) the output is all ones ((others => '1'), i.e. minus one).
You can also define that if the input value is 0, the output is also 0.
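A minimal sketch of such a sign function (the name and widths are illustrative; it returns -1, 0 or +1 as a signed value of the same width as x):
-- Sketch only: returns sgn(x) as a signed value of x'length bits
function sgn(x : signed) return signed is
begin
  if x = 0 then
    return to_signed(0, x'length);
  elsif x(x'left) = '1' then -- msb set: negative
    return to_signed(-1, x'length);
  else
    return to_signed(1, x'length);
  end if;
end function sgn;
Note that sgn(x) * y from numeric_std is x'length + y'length bits wide, so the product has to be resized back to the width you need; the conditional negate shown above avoids the multiplier entirely.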