I have a VHDL BCD counter whose output is an integer value (digit).
But when I simulate the code in Xilinx ISE, the waveform shows the output as a binary value. The code works, but the output should be an integer and it's not. I have tested this code in ModelSim and the output is correct, displayed as an integer. The same problem appears after synthesis: the value is binary.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity bcdcnt is
    Port ( clk   : in  STD_LOGIC;
           digit : out INTEGER RANGE 0 TO 9);
end bcdcnt;

architecture Behavioral of bcdcnt is
begin
    count: PROCESS(clk)
        VARIABLE temp : INTEGER RANGE 0 TO 10;
    BEGIN
        IF (clk'EVENT AND clk = '1') THEN
            temp := temp + 1;
            IF (temp = 10) THEN
                temp := 0;
            END IF;
        END IF;
        digit <= temp;
    END PROCESS count;
end Behavioral;
That's what synthesis does.
That's what synthesis MUST do : it translates your high-level design into the resources of your FPGA or ASIC, which are binary. So, what's the problem here?
If you need to simulate the post-synth result, the usual approach is to create a wrapper entity that takes the correct port types, and translates between those and the post-synthesis netlist component.
Then, simulation should work with either the original entity, or this wrapper entity, both of which have integer ports.
Better still, you can re-use the same entity, and add the wrapper as a second architecture, thus guaranteeing that it uses the same interface (ports).
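For example, here is an unverified sketch of such a wrapper, written as a second architecture of the bcdcnt entity above. The netlist component's name and its port type are assumptions; use whatever your synthesis tools actually write out.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

architecture PostSynth of bcdcnt is
    -- component as written out by synthesis : name and port type assumed
    component bcdcnt_netlist
        port ( clk   : in  std_logic;
               digit : out std_logic_vector(3 downto 0));
    end component;
    signal digit_slv : std_logic_vector(3 downto 0);
begin
    netlist : bcdcnt_netlist
        port map ( clk => clk, digit => digit_slv );
    -- translate the binary netlist output back to the integer port
    digit <= to_integer(unsigned(digit_slv));
end PostSynth;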
(You can even instantiate both the original and post-synth in its wrapper in the testbench, in parallel with a comparator on their outputs, to see they both do the same thing. But note there will be gate-level delays between them; usually you check outputs only on clock edges so these do not matter.)
Another approach is to restrict port types on the top level of the design to binary types like std_logic_vector. This plays nicer with badly designed tools, like ISE, where the automatically generated testbench will have binary port types (I generally edit them back to the correct ones; it's almost easier to write TBs from scratch).
But it restricts you to using an obscure and complex design style instead of higher-level abstractions like integers.
It's bad - really bad - that this approach is taught and encouraged so widely. But it is, and sometimes you'll just have to live with it. (Even in this approach, there's no reason to avoid decent abstractions internal to the FPGA, as long as the synthesis tool understands them).
A third approach - roughly, "trust, but verify" - is to trust that synthesis tools are competently written - which is usually true - and forget about post-synthesis simulation.
Just verify the design thoroughly at the behavioural level in simulation, then synthesise it, and test in live FPGA.
99% of the time (unless you're writing really weird VHDL), synth and P&R have done the right thing, and any differences you see are due to the aforementioned I/O timings (gate delays at the I/O pins). Then model these in the testbench and/or wrapper until you see the same behaviour in both (fix anything that needs fixing, and re-synthesise).
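For example, the behavioural architecture of bcdcnt above could approximate an output pad delay on its port assignment (the 2 ns figure is purely illustrative, not a real device number):

    digit <= temp after 2 ns;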
In this approach you only need to bother with the (MUCH slower) post-synth and post-PAR simulations if you need to track down a suspected synthesis tool bug.
This does happen : I've seen two in a quarter century.
Most of the time I just use the third approach.
I use Xilinx ISE as an IDE.
Suppose I add a 100 ps delay to every assignment in an always block (Verilog) or process (VHDL) whose sensitivity list contains only the clock and reset.
Like this.
always @(posedge clk) begin
    if (rst)
        a <= #100 'd0;
    else
        a <= #100 b;
end
I think the delay only affects simulation, because every book and user guide tells us that delays are not synthesizable.
But I am still wondering whether the delay can really affect the place-and-route results, like the static timing or clock reports.
For example, can it make a circuit's maximum frequency higher or lower?
No, the #delay in your code is not going to affect the timing of the design when it is loaded onto the FPGA.
It also does not affect the place and route results or the static timing analysis. Both of these steps use timing information that is provided by the manufacturer in the form of device models.
You are correct that there's nothing intrinsic about delay statements that makes them unsynthesizable; however, it's wildly impractical to attempt it. The reason is that once on the FPGA you are dealing with a physical circuit whose performance varies with PVT (process, voltage, temperature), and can do so by a lot! The only hedge against this would be an analog circuit that attempts to sense all of the above and adjust itself accordingly. Such a beast will still be limited in what it can do, and would be physically large and power hungry, depending on the range of delay and the variance in all of the above that you want to support.
So with that in mind, and considering that there is very little (read: no) demand for this outside of special-purpose I/O, FPGA vendors don't provide any such components, making the construct unsynthesizable.
Delay statements (#100) are usually ignored during synthesis in Verilog. So in synthesis it is the same as:
always @(posedge clk) begin
    if (rst)
        a <= 0;
    else
        a <= b;
end
The Xilinx Synthesis and Simulation Design Guide states:
Delays in Synthesis Code
Do not use Wait for XX ns (VHDL) or the #XX (Verilog) statements in
your code. (...) This statement does not synthesize to a component.
In designs that include this construct, the functionality of the
simulated design does not always match the functionality of the
synthesized design.
(...)
Wait for XX ns Statement Verilog Coding Example
#XX;
Do not use the After XX ns statement in your VHDL code or the Delay
assignment in your Verilog code
(...)
Delay Assignment Verilog Coding Example
assign #XX Q=0;
XX specifies the number of nanoseconds that must pass before a
condition is executed. This statement is usually ignored by the
synthesis tool. In this case, the functionality of the simulated
design does not match the functionality of the synthesized design.
"Usually" there is no impact on synthesis and P&R results.
Xilinx: This statement is usually ignored by the synthesis tool.
When does it have an impact, then?
Although the delay statement is ignored by the synthesis tool, the HDL code is a little different. That may change the seed of randomization at any stage (parsing, elaboration, synthesis, etc.), so there is a possibility of different results. These results may be better or worse.
If a delay statement exists in the code, the following warning is expected from Xilinx ISE:
WARNING:Xst:916 - design.v line x: Delay is ignored for synthesis.
I want to have a shift register that is XORed against another register loaded with some value. The issue is that I wish to do this with a large-scale vector, something on the order of thousands of bits wide.
The obvious way to do this in VHDL would be something like
generic ( length : integer := 15 );

signal shiftreg : std_logic_vector(length downto 0);

process(clk)
begin
    if rising_edge(clk) then
        shiftreg <= shiftreg(length-1 downto 0) & input;
    end if;
end process;
However, if length here is set to some very high number, attempting to synthesize this becomes a massive undertaking. Since this is a relatively simple structure I imagine it is taking so long because the length is far beyond the number of registers in a single block.
My question is if there is some way to implement a large vector like this in a way that would be quicker to synthesize. For example, is it quicker to use something like
type bit_array is array (length downto 0) of std_logic;
or does a synthesis tool recognize those are equivalent?
Synthesis time is not typically relevant in FPGA design, although area utilization and timing usually are. If your shift register takes most of the resources that your target FPGA has, synthesis will take a long time trying to figure out a way to make it work, and likewise builds take longer as you fill up larger parts. For some ballpark figures: an 80% full design with tight timing in a modern midrange FPGA usually takes about 30 minutes to synthesize and 3 hours to place and route. This will not be significantly affected by coding style if you're still describing the same functionality.
If you describe a shift register (with the same functional features) in VHDL using std_logic_vector, a type you defined as an array of std_logic, or anything else, it will synthesize into the same thing.
In recent-ish Xilinx parts at least, a single LUT can be used for a 64-deep shift register as long as you haven't described a reset (synchronous OR asynchronous). You can likewise produce a 1000-deep shift register with just a handful of LUTs.
Now if you're looking to use the whole thousand-plus bits of this shift register to XOR against some other register, you can't use SRLs (LUTs used as shift registers), because only the final bit is accessible as an output. This forces the whole thing into registers, which may be rather large and could require more registers than your part has. The key thing here is that you have to think about the scale of the hardware you describe, and whether that's feasible in your target part.
If you want a really deep shift register, block RAMs can be used to act like shift registers at depths exceeding 100,000, but these have the same issue where you only access the final output.
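For illustration, here is a minimal sketch of a shift register coded so that SRL inference is possible: no reset, and only the final bit read. The entity name and the depth of 1024 are illustrative.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity deep_shift is
    generic ( depth : positive := 1024 );
    port ( clk  : in  std_logic;
           din  : in  std_logic;
           dout : out std_logic );
end deep_shift;

architecture rtl of deep_shift is
    signal shiftreg : std_logic_vector(depth-1 downto 0) := (others => '0');
begin
    process(clk)
    begin
        if rising_edge(clk) then
            -- no reset branch, so the tools can map this onto SRL primitives
            shiftreg <= shiftreg(depth-2 downto 0) & din;
        end if;
    end process;
    dout <= shiftreg(depth-1);  -- only the last bit is tapped
end rtl;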
A real junior question with, hopefully, a junior answer, regarding one of the main assignments of VHDL (concurrent selective assignment): can anyone explain what a VHDL compiler would synthesise the following description into?
LIBRARY IEEE;
USE IEEE.std_logic_1164.ALL;
USE IEEE.numeric_std.ALL;

ENTITY Q2 IS
    PORT (a, b, c, d : IN  std_logic;
          EW_NS      : OUT std_logic);
END ENTITY Q2;

ARCHITECTURE hybrid OF Q2 IS
    SIGNAL INPUT : std_logic_vector(3 DOWNTO 0);
BEGIN
    INPUT <= (a & b & c & d);  -- concatenation

    WITH (INPUT) SELECT
        EW_NS <= '1' WHEN "0001"|"0010"|"0011"|"0110"|"1011",
                 '0' WHEN OTHERS;
END ARCHITECTURE hybrid;
Why do I ask? Well, I have previously gone about things the wrong way, i.e. describing things in VHDL before making a block diagram of the components needed. I would envisage this being synthesised as a group of AND-gate logic?
Any help would be really helpful.
Thanks D
You need to look at the user guide for your target FPGA, and understand what is contained within one 'logic element' ('slice' in Xilinx terminology). In general, an FPGA does not implement combinatorial logic by connecting up discrete gates like AND, OR, etc. Instead, a logic element will contain one or more 'look-up tables', typically with four (but now six in some newer devices) inputs. The inputs to this look-up table (LUT) are the inputs to your logic function, and the output is one of the outputs of the function. The LUT is then programmed as a ROM, allowing your input signals to function as an address. There is one ROM entry for every possible combination of inputs, with the result being the intended logic function.
A function with several outputs would simply use several of these LUTs in parallel, with the same inputs, one LUT for each of the function's outputs. A function requiring more inputs than the LUT has (say, 7 inputs, where a LUT has only 4) simply combines two LUTs in parallel, using a multiplexer to choose between the outputs of the two LUTs. This final multiplexer uses one of the input signals as its control, and again every possible combination of inputs is accounted for.
This may sound inefficient for creating something simple like an AND gate, but the benefit is that this simple building block (a LUT) can implement absolutely any combinatorial function. It's also worth noting that an FPGA tool chain is extremely good at optimising logic functions in order to simplify them, and to better map them into the FPGA. The LUT provides a highly generic element for these tools to target.
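To make that concrete, here is a sketch of the question's Q2 function written directly in LUT-as-ROM style (the architecture name and the LUT constant are mine; bit i of the constant holds the output for INPUT = i, so it is '1' at addresses 1, 2, 3, 6 and 11, matching the five select values):

ARCHITECTURE lut_rom OF Q2 IS
    -- one ROM entry per input combination
    CONSTANT LUT : std_logic_vector(15 DOWNTO 0) := "0000100001001110";
    SIGNAL INPUT : std_logic_vector(3 DOWNTO 0);
BEGIN
    INPUT <= (a & b & c & d);
    EW_NS <= LUT(to_integer(unsigned(INPUT)));
END ARCHITECTURE lut_rom;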
A logic element will also contain some dedicated resources for functions that aren't well suited to the LUT approach. These might include dedicated carry chains for adders, multiplexers for combining the outputs of several LUTs, and registers (most designs are synchronous). LUTs can also sometimes be configured as small shift registers or RAM elements. External to the logic elements, there will be more specific blocks like large multipliers, larger memories, PLLs, etc., none of which can be as efficiently implemented using LUT resources. Again, this will all be explained in the user guide for your target FPGA.
Back in the day, your code would have been implemented as a single 74150 TTL circuit, which is a 16-to-1 mux. You have a 4-bit select (INPUT), and this selects one of 16 inputs to the chip, which is routed to a single output (EW_NS). The 74150 is obsolete and I can't find any datasheets, but it's easy to find diagrams of what an 8-to-1 mux looks like (here, for example). The 16-to-1 is identical, but everything is wider. My old TI databook shows basically exactly the diagram at this link, doubled up.
But - wait. Your problem is easier, because you're not routing real inputs to the output - you're just setting fixed data values. On the '150, you do this by wiring 5 of the 16 inputs to 1, and the remaining 11 to 0. This makes the logic much easier.
The 74150 has basically exactly the same functionality as a 4-input look-up table (where the fixed look-up data is the same as the fixed levels at the '150 inputs), so it's trivial to implement your entire circuit in a single LUT in an FPGA, as per scary_jeff's answer, rather than using a NAND-level implementation. In a proper chip, though, it would be implemented as a sum-of-products, or something similar (exactly what's in the linked diagram). In this case, draw a K-map and find a minimum solution. My 2 minutes on the back of an envelope comes up with three 3-input AND gates driving a 3-input OR gate. I'll leave it as an exercise to you to check this :)
Let's suppose I have to test different bits of a std_logic_vector. Would it be better to implement one single process that for-loops over each bit, or to instantiate 'n' processes using for-generate, where each process tests one bit?
FOR-LOOP
my_process: process(clk, reset) begin
    if rising_edge(clk) then
        if reset = '1' then
            -- init stuff
        else
            for_loop: for i in 0 to n loop
                test_array_bit(i);
            end loop;
        end if;
    end if;
end process;
FOR-GENERATE
for_generate: for i in 0 to n generate
begin
    my_process: process(clk, reset) begin
        if rising_edge(clk) then
            if reset = '1' then
                -- init stuff
            else
                test_array_bit(i);
            end if;
        end if;
    end process;
end generate;
What would be the impact on FPGA and ASIC implementations in these cases? Which is easier for the CAD tools to deal with?
EDIT:
Just adding a response I gave to someone who was helping, to make my question clearer:
For instance, when I ran a piece of code using for loops in ISE, the synthesis summary gave me a fair result, but it took a long while to compute everything. When I re-coded my design, this time using for-generate and several processes, I used a bit more area, but the tool was able to compute everything much faster, and my timing result was better as well. So does this imply a rule that it is always better to use for-generate, at the cost of extra area, for lower complexity? Or is this one of those cases where I have to verify every single implementation possibility?
Assuming relatively simple logic in the reset and test functions (for example, no interactions between adjacent bits) I would have expected both to generate the same logic.
Understand that since the entire for loop is executed in a single clock cycle, synthesis will unroll it and generate a separate instance of test_array_bit for each input bit. Therefore it is quite possible for synthesis tools to generate identical logic for both versions - at least in this simple example.
And on that basis, I would (marginally) prefer the for ... loop version because it localises the program logic, whereas the "generate" version globalises it, placing it outside the process boilerplate. If you find the loop version slightly easier to read, then you will agree at some level.
However it doesn't pay to be dogmatic about style, and your experiment illustrates this : the loop synthesises to inferior hardware. Synthesis tools are complex and imperfect pieces of software, like highly optimising compilers, and share many of the same issues. Sometimes they miss an "obvious" optimisation, and sometimes they make a complex optimisation that (e.g. in software) runs slower because its increased size trashes the cache.
So it's preferable to write in the cleanest style where you can, but with some flexibility for working around tool limitations and occasionally real tool defects.
Different versions of the tools remove (and occasionally introduce) such defects. You may find that ISE's "use new parser" option (for pre-Spartan-6 parts), or Vivado, or Synplicity, get this right where ISE's older parser doesn't. (For example, older ISE versions had serious bugs passing signals out of procedures.)
It might be instructive to modify the example and see if synthesis can "get it right" (produce the same hardware) for the simplest case, and re-introduce complexity until you find which construct fails.
If you discover something concrete this way, it's worth reporting here (by answering your own question). Xilinx used to encourage reporting such defects via its Webcase system; eventually they were even fixed! They seem to have stopped that, however, in the last year or two.
The first snippet would be equivalent to the following:
my_process: process(clk, reset) begin
    if rising_edge(clk) then
        if reset = '1' then
            -- init stuff
        else
            test_array_bit(0);
            test_array_bit(1);
            ............
            test_array_bit(n);
        end if;
    end if;
end process;
While the second one will generate n+1 processes, one for each i, together with the reset logic and everything (which might be a problem, as that logic will attempt to drive the same signals from different processes).
In general, for loops are sequential statements containing sequential statements (i.e. each iteration is sequenced to execute after the previous one). For-generate loops are concurrent statements containing concurrent statements, and this is how you can use them to make several instances of a component, for example.
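As a sketch (the component and signal names here are invented), this is the sort of structure only a generate can describe:

-- one instance per bit : a concurrent structure a sequential loop cannot create
gen_bits : for i in 0 to n generate
    checker_i : entity work.bit_checker
        port map ( clk  => clk,
                   din  => data(i),
                   flag => flags(i) );
end generate;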
I am trying to make a BCD converter to show numbers from 0 to 9999; I need to implement the double-dabble algorithm using the shift operators. But I just cannot start coding without running into warnings I don't really understand. I am still a beginner, so please forgive any stupid mistakes. I started off by first implementing the algorithm. I have never used shift operators, so I am probably not doing it right. Please help; here is my code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity algorithm is
    Port ( x : in  unsigned (15 downto 0);
           y : out unsigned (15 downto 0));
end algorithm;

architecture Behavioral of algorithm is
begin
    y <= x sll 16;
end Behavioral;
And the error
Xst:647 - Input <x> is never used. This port will be preserved and left unconnected
if it belongs to a top-level block or it belongs to a sub-block and the hierarchy of
this sub-block is preserved.
Even if I implement this
y <= x sll 1;
I get this error
Xst:647 - Input <x<15>> is never used. This port will be preserved and left
unconnected if it belongs to a top-level block or it belongs to a sub-block
and the hierarchy of this sub-block is preserved.
What am I doing wrong here?
What you are doing wrong is, firstly, attempting to debug a design via synthesis.
Write a simple testbench which, first, exercises your design (i.e. given the code above, feeds some data into the X input port).
Later you can extend the testbench to read the Y output port and compare the output with what you would expect for each input, but you're not ready for that yet.
Simulate the testbench and add the entity's internal signals to the Wave window : does the entity do what you expect? If so, proceed to synthesis. Otherwise, find and fix the problem.
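As a minimal sketch, a first testbench for the entity above might look like this (the stimulus values are arbitrary):

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity algorithm_tb is
end algorithm_tb;

architecture sim of algorithm_tb is
    signal x : unsigned (15 downto 0) := (others => '0');
    signal y : unsigned (15 downto 0);
begin
    dut : entity work.algorithm port map ( x => x, y => y );

    stimulus : process
    begin
        x <= to_unsigned(1234, 16);  -- arbitrary test values
        wait for 10 ns;
        x <= to_unsigned(9999, 16);
        wait for 10 ns;
        wait;                        -- halt the stimulus process
    end process;
end sim;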
The specific lines of code above, y <= x sll 16; and y <= x sll 1;, work correctly, and the synthesis warnings (NOT errors) are as expected. Shifting a 16-bit number by 16 bits and fitting the result into a 16-bit value, there is nothing left, so (as the warning tells you) port X is entirely unused. Shifting by 1 bit, the MSB falls off the top of the result, again exactly as the warning says.
It is the nature of synthesis to warn you of hundreds of such things (often most of them come from the vendor's own IP, strangely enough!) : if you have verified the design in simulation, you can glance at the warnings and ignore most of them. Sometimes things really do go wrong; then one or two of the warnings MAY be useful. But they are not a primary debugging technique; most of them are natural and expected, as above.
As David says, you probably do want a loop inside a clocked process : FOR loops are synthesisable. I have recently read a statement that WHILE loops are often also synthesisable, but I have found this to be less reliable.
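For what it's worth, here is an unverified sketch of the double-dabble loop inside a clocked process. The entity name and ports are illustrative, and the whole conversion happens in one clock cycle, which will limit clock speed:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity bin_to_bcd is
    Port ( clk : in  std_logic;
           bin : in  unsigned (13 downto 0);   -- 0 to 9999 fits in 14 bits
           bcd : out unsigned (15 downto 0));  -- four BCD digits
end bin_to_bcd;

architecture Behavioral of bin_to_bcd is
begin
    process(clk)
        variable shift_reg : unsigned (29 downto 0);  -- 16 BCD bits & 14 binary bits
    begin
        if rising_edge(clk) then
            shift_reg := (others => '0');
            shift_reg(13 downto 0) := bin;
            for i in 0 to 13 loop
                -- add 3 to each BCD digit that is 5 or greater...
                for d in 0 to 3 loop
                    if shift_reg(17 + 4*d downto 14 + 4*d) > 4 then
                        shift_reg(17 + 4*d downto 14 + 4*d) :=
                            shift_reg(17 + 4*d downto 14 + 4*d) + 3;
                    end if;
                end loop;
                -- ...then shift the whole register left one place
                shift_reg := shift_reg sll 1;
            end loop;
            bcd <= shift_reg(29 downto 14);
        end if;
    end process;
end Behavioral;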