Let's say I have a fixed-point value in my VHDL code which is defined as a std_logic_vector. I know that the last 4 bits are the fractional part.
When I use the simulator, it of course does not interpret the last 4 bits as fractional. Is there any way to change this in the simulation, so that the simulator knows that the 3rd bit has the value 0.5, the 2nd the value 0.25, and so on?
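The interpretation being asked for can be modelled outside the simulator. Below is a minimal Python sketch (the function name and the 8-bit example value are illustrative, not part of any tool) that converts an unsigned bit string with 4 fractional bits to its real value:

```python
def fixed_to_real(bits, frac_bits=4):
    """Interpret an unsigned bit string whose last `frac_bits` bits
    are fractional, i.e. value = integer(bits) / 2**frac_bits."""
    return int(bits, 2) / (2 ** frac_bits)

# bit 3 weighs 0.5, bit 2 weighs 0.25, exactly as described above
print(fixed_to_real("00001000"))  # 0.5
print(fixed_to_real("00000100"))  # 0.25
print(fixed_to_real("00011000"))  # 1.5 (integer part 1, fraction 0.5)
```

The same scaling (divide by 2^4) is what a "fixed point" radix setting applies when displaying the raw vector.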
It is possible in Vivado to show a value in the simulator in a fixed-point representation.
In the simulator, right-click the signal you want to show as fixed point, then choose Radix --> Real Settings. You get the following window, where you can select Fixed Point.
Real settings window
Unfortunately, I doubt this is possible. Specifically, the values a std_logic signal can take are the following:
'U': Uninitialized. This signal hasn't been set yet.
'X': Unknown. Impossible to determine this value/result.
'0': Logic 0
'1': Logic 1
'Z': High Impedance
'W': Weak signal, can't tell if it should be 0 or 1.
'L': Weak signal that should probably go to 0
'H': Weak signal that should probably go to 1
'-': Don't care.
As a result, the simulator recognises ONLY the symbols above; anything else will result in an error, including a floating-point number used to describe a bit. Here's an example of me trying to force a floating-point value to be displayed:
add_force {/test/a[27]} -radix unsigned {0.4 0ns}
ERROR: [Simtcl 6-179] Couldn't add force for the following reason:
Illegal value '0.4': Could not convert value '0.4' to a decimal
number.
I also noticed the question is tagged Vivado, so I guess you use the integrated simulator. In my example, the closest thing to a floating-point number was the decimal radix. Vivado has no built-in radix to display floating-point values in the simulation. Below you see the radices it supports, so you are tightly restricted to those choices; all of them result in an error here except ASCII, but I don't think that's the behaviour you want.
I am working on a university assignment where I have to perform different calculations based on the value of a signal. If C has the value 00, I have to return the sum of a and b, and OVF must also be 1 if overflow happened.
The code was really basic:
temp <= ('0'&a)+('0'&b);
Result <= temp(3 downto 0);
OVF <= temp(4);
Yet, I somehow did something wrong. My issue is that Vivado keeps showing me a value called C instead of the actual value of the vector. What does C mean? It's not included in any of the slides of the class.
Some signal values in the Vivado simulator are easier to understand when displayed in a different radix than the default, for instance binary instead of hexadecimal.
AFAIK, the default radix in the Vivado simulator is hexadecimal unless you override the radix for a specific object.
The supported radix values in the Vivado simulator are the following:
Binary,
Hexadecimal,
Octal,
ASCII,
Signed and Unsigned decimal.
Therefore, the C you are seeing is simply the hexadecimal digit C, i.e. the binary value 1100 (decimal 12).
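The arithmetic in the question can also be checked by hand. Here is a small Python model (function name is mine) of the 5-bit sum `temp <= ('0'&a)+('0'&b)` with the carry bit used as the overflow flag:

```python
def add_with_ovf(a, b, width=4):
    """Model of temp = ('0' & a) + ('0' & b) on `width`-bit operands."""
    temp = a + b                        # fits in width+1 bits
    result = temp & ((1 << width) - 1)  # temp(width-1 downto 0)
    ovf = (temp >> width) & 1           # temp(width), the carry out
    return result, ovf

print(add_with_ovf(0xC, 0x1))  # C is hex for 12: 12 + 1 = 13, no overflow
print(add_with_ovf(0xC, 0x8))  # 12 + 8 = 20 > 15: result 4, OVF = 1
```

So a displayed value of C on a 4-bit signal is just 1100 shown in the default hexadecimal radix, not an error marker.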
Is the lower index a minimum bound for indexing? What happens in assignments between signals with differing bounds but the same width? And, of course, what is the equivalent syntax in VHDL?
Assuming you meant the widths to be the same...
...so, in Verilog let's assume you meant
logic [19:4] v19_4;
logic [15:0] v15_0;
In a Verilog simulation, you will not experience any difference unless you try to index the bits.
If you index the bits, you will find that
in the first case, the left-hand bit is bit 19 (i.e. v19_4[19]), whereas in the second case, the left-hand bit is bit 15 (i.e. v15_0[15]);
in the first case, the right-hand bit is bit 4 (i.e. v19_4[4]), whereas in the second case, the right-hand bit is bit 0 (i.e. v15_0[0]).
In Verilog, it is valid to call the left hand bit "the MSB" and the right hand bit "the LSB".
You will experience exactly the same behaviour in VHDL. In VHDL let's assume you meant
signal v19_4 : std_logic_vector(19 downto 4);
signal v15_0 : std_logic_vector(15 downto 0);
Again, in a VHDL simulation, you will not experience any difference unless you try to index the bits. If you index the bits, you will find that
in the first case, the left hand bit is bit 19 (ie v19_4(19)), whereas in the second case, the left hand bit is bit 15 (ie v15_0(15));
in the first case, the right hand bit is bit 4 (ie v19_4(4)), whereas in the second case, the right hand bit is bit 0 (ie v15_0(0)).
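The indexing rule above can be sketched in Python (the helper function is mine, purely illustrative): for a vector declared with range `(hi downto lo)` in VHDL, or `[hi:lo]` in Verilog, index `i` reads the bit at offset `i - lo` from the LSB, so the same bit pattern is addressed through different index names.

```python
def bit(value, hi, lo, index):
    """Read bit `index` of a value declared with range (hi downto lo).
    Index `lo` is the LSB, `hi` is the MSB."""
    assert lo <= index <= hi, "index out of declared range"
    return (value >> (index - lo)) & 1

v = 0b1000_0000_0000_0001  # the same 16-bit pattern for both declarations

# Declared (19 downto 4): MSB is index 19, LSB is index 4
print(bit(v, 19, 4, 19), bit(v, 19, 4, 4))   # 1 1

# Declared (15 downto 0): MSB is index 15, LSB is index 0
print(bit(v, 15, 0, 15), bit(v, 15, 0, 0))   # 1 1
```

Assigning one signal to the other copies positionally: bit 4 of v19_4 lands in bit 0 of v15_0, and so on up to the MSB.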
With synthesis you might see a difference. It is generally recommended to index vectors from 0 for synthesis to remove the possibility of synthesising more bits than you need. However, I would think that most synthesisers would optimise away the excess logic.
I am trying to make a BCD converter to show numbers from 0 to 9999, and I need to implement the double dabble algorithm using shift operators. But I keep running into warnings I don't really understand. I am still a beginner, so please excuse any silly mistakes. I started by implementing the algorithm. I have never used shift operators before, so I am probably not doing it right. Here is my code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity algorithm is
    Port (x : in  unsigned (15 downto 0);
          y : out unsigned (15 downto 0));
end algorithm;

architecture Behavioral of algorithm is
begin
    y <= x sll 16;
end Behavioral;
And the error
Xst:647 - Input <x> is never used. This port will be preserved and left unconnected
if it belongs to a top-level block or it belongs to a sub-block and the hierarchy of
this sub-block is preserved.
Even if I implement this
y <= x sll 1;
I get this error
Xst:647 - Input <x<15>> is never used. This port will be preserved and left
unconnected if it belongs to a top-level block or it belongs to a sub-block
and the hierarchy of this sub-block is preserved.
What am I doing wrong here?
What you are doing wrong is, firstly, attempting to debug a design via synthesis.
Write a simple testbench which, first, exercises your design (i.e. given the code above, feeds some data into the X input port).
Later you can extend the testbench to read the Y output port and compare the output with what you would expect for each input, but you're not ready for that yet.
Simulate the testbench and add the entity's internal signals to the Wave window : does the entity do what you expect? If so, proceed to synthesis. Otherwise, find and fix the problem.
The specific lines of code above, y <= x sll 16; and y <= x sll 1;, work correctly, and the synthesis warnings (NOT errors) are as expected. When you shift a 16-bit number by 16 bits and fit the result into a 16-bit value, nothing is left, so (as the warning tells you) port X is entirely unused. When shifting by 1 bit, the MSB falls off the top of the result, again exactly as the warning says.
It is the nature of synthesis to warn you of hundreds of such things (often most of them come from the vendor's own IP, strangely enough!) : if you have verified the design in simulation you can glance at the warnings and ignore most of them. Sometimes things really do go wrong, then one or two of the warnings MAY be useful. But they are not a primary debugging technique; most of them are natural and expected, as above.
As David says, you probably do want a loop inside a clocked process : FOR loops are synthesisable. I have recently read a statement that WHILE loops are often also synthesisable, but I have found this to be less reliable.
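To clarify what the loop eventually has to do: double dabble shifts the input in MSB-first, and before each shift adds 3 to any BCD digit that is 5 or more. A minimal Python sketch of the algorithm (not VHDL, and the function is mine, just to show the data flow a clocked process would implement):

```python
def double_dabble(value, digits=4):
    """Shift-and-add-3 (double dabble) conversion of a 16-bit value
    (0..9999) into `digits` BCD digits, most significant first."""
    bcd = 0
    for i in range(15, -1, -1):              # 16 input bits, MSB first
        for d in range(digits):              # correct each BCD digit...
            if (bcd >> (4 * d)) & 0xF >= 5:  # ...that is 5 or more
                bcd += 3 << (4 * d)          # add 3 before the shift
        bcd = (bcd << 1) | ((value >> i) & 1)  # shift in the next bit
    return [(bcd >> (4 * d)) & 0xF for d in reversed(range(digits))]

print(double_dabble(9999))  # [9, 9, 9, 9]
print(double_dabble(1234))  # [1, 2, 3, 4]
```

In hardware, the outer loop typically becomes either an unrolled FOR loop in a combinational process or one iteration per clock cycle in a clocked process with a counter.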
I am working with 8-bit pixel values. For ease of coding I want to use conv_integer to convert this 8-bit std_logic_vector. Does it cause any synthesis problems? Does it reduce the speed of the hardware?
No, integers synthesise just fine. Don't use conv_integer though - that's from an old non-standard library.
You want to use ieee.numeric_std; and then to_integer(unsigned(some_vector));
If you still want to access the bits, and treat the vector as a number, then use the signed or unsigned type - they define vectors of bits (which can still have -, Z etc.) which behave as numbers, so you can write unsigned_vector <= unsigned_vector + 1.
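A minimal Python sketch of what the unsigned type gives you (the width and values here are illustrative): it behaves as a modular number, wrapping at the vector width, unlike an unconstrained software integer.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # 0xFF for an 8-bit vector

def inc(u):
    """Model of `unsigned_vector <= unsigned_vector + 1` on 8 bits."""
    return (u + 1) & MASK  # wraps around, like an 8-bit hardware counter

print(inc(0x7F))  # 128
print(inc(0xFF))  # 0: the increment wraps at the vector width
```

This wraparound is exactly what the synthesised adder does, which is one reason to keep pixel arithmetic in unsigned rather than an unconstrained integer.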
You will lose a lot of the functionality that comes with std_logic_vector, such as having the values 'Z' or 'X'. If you need access to the bits, leave it as a std_logic_vector or cast it to a numeric_std type. If you don't, and you need to do some fancy arithmetic, maybe it's better to have it as an integer. At the end of the day it's all bits. It's normally best to keep to a vector type (std_logic_vector, unsigned, signed, etc.) at the top level so you can map each bit to a specific pin, but otherwise you can use whatever types you want. Don't forget you are designing hardware now, not software, and there is a difference.
In VHDL, is initialization necessary when creating a signal or a vector?
What happens if one forgets to initialize a signal or integer value?
In simulation, if you do not set an initial value, each element of your vector will get the default value (this is defined by the VHDL language specification). For enumerated types, this is the first element defined in the enumeration type: booleans will be false, std_logic will be 'U' (uninitialized). Note that 'U' has no meaning in electrical circuits. It is merely a hint to the verification engineer that you don't know which value the flip-flop has at power-on.
After synthesis: FPGA synthesizers will use the initial value that you set as the "power on" value of the flip-flops and memories if the target technology supports this! If the technology does not support a forced initial value (and for ASICs), the initial value at power-on is not known. It could be 1 or 0. (See for example: http://quartushelp.altera.com/11.0/mergedProjects/hdl/vhdl/vhdl_pro_power_up_state.htm)
Two possible styles:
Choose an explicit initial value, with or without explicit reset circuits (usually for modern FPGAs)
Set 'U' as initial value, and have a proper reset circuit to force a known reset value
If you go with the first choice, be sure to check if your target technology supports this style!
In simulation, everything in VHDL is initialised at the start to the "left-most" element of the range which represents them.
So, std_logic will get 'U', boolean will get false, and integer will get integer'left, a large negative number (typically -2147483648). Any enumerated types you've defined yourself will init to their first member, etc.
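The "left-most element" rule can be sketched in Python. This is a hypothetical model of the VHDL types, purely to illustrate that the default value is simply T'LEFT, the first value of the type:

```python
# Hypothetical models of some VHDL types as ordered value lists,
# leftmost value first (the order matters: it defines T'LEFT).
std_logic = ['U', 'X', '0', '1', 'Z', 'W', 'L', 'H', '-']
boolean   = [False, True]

def default(type_values):
    """VHDL default initialisation: every object starts at T'LEFT."""
    return type_values[0]

print(default(std_logic))  # U
print(default(boolean))    # False
```

For integer the same rule gives integer'left, which is why an uninitialised integer simulates as the most negative representable value.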
You can override this with an explicit initialisation:
variable i : integer := 0;
the simulator will then use your initialisation.
When it comes to synthesising your code, in an ASIC, explicit initialisations are ignored (there's no silicon to support them!), and everything initialises unpredictably. So you have a reset pin and explicit code which assigns the value you want when that pin is asserted.
When you target an FPGA and don't explicitly initialise, most of the time things will initialise to something 'like zero', but you can't rely on it (sometimes inverters are pushed around and things look like they've inited to 'one'). So you have a reset pin and explicit code which assigns the value you want when that pin is asserted.
Some synthesisers (XST at least) will support explicit initialisations and pass them into the netlist so that you can rely on them. In this case you can still have a reset signal - which can do something different so a particular flipflop could initialise to one value and reset to another!
It is not strictly necessary in VHDL, just like it is not necessary in C/C++, but a similar result can occur. Without initializing a signal or vector of signals, a simulator will typically simulate that it is in an unknown state (assuming you are using std_logic signals). However, a synthesis engine will pick one or the other as an initial value since when an FPGA is programmed all memory elements will be initialized one way or another (i.e. they are not initialized to an unknown state).
Some people will not initialize a signal on declaration, but will instead use their circuit to initialize the memory element (e.g. create reset logic to initialize the memory element). Other will initialize the memory element when it is declared. These are design decisions which have their own tradeoffs.