For a process statement in VHDL, it is said that the order of execution inside the process is sequential. My question is: looking at the code below, are the signals a, b and c assigned their new values concurrently or sequentially in the if statement inside the process?
process(clk) is
begin
if rising_edge(clk) then
a <= b ;
b <= c ;
c <= a;
end if;
end process;
So if this is sequential, I would expect that after the end of the process, a equals b, b equals c, and c equals b, because b was assigned to a before a was assigned to c. However, this does not seem possible for hardware to do.
Constructing a Minimal, Complete, and Verifiable example containing your process:
library ieee;
use ieee.std_logic_1164.all;
entity sequent_exec is
end entity;
architecture foo of sequent_exec is
signal a: std_ulogic := '1';
signal b, c: std_ulogic := '0';
signal clk: std_ulogic := '0';
begin
CLOCK:
process
begin
wait for 10 ns;
clk <= not clk;
if now > 200 ns then
wait;
end if;
end process;
DUT:
process(clk) is
begin
if rising_edge(clk) then
a <= b ;
b <= c ;
c <= a;
end if;
end process;
end architecture;
We see a, b and c shift values from one to another as a recirculating shift register:
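The values below are worked out by hand from the initial values ('1', '0', '0') and the rising edges at 10 ns, 30 ns, 50 ns, ... (a sketch of the expected behaviour, not captured simulator output):
time     a  b  c
 0 ns    1  0  0
10 ns    0  0  1
30 ns    0  1  0
50 ns    1  0  0   -- the single '1' has recirculated; the pattern then repeats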
Why that occurs is due to how VHDL's simulation cycle operates.
See IEEE Std 1076-2008
10.5 Simple Signal assignments (10.5.1 General):
A signal assignment statement modifies the projected output waveforms contained in the drivers of one or more signals (see 14.7.2), schedules a force for one or more signals, or schedules release of one or more signals (see 14.7.3).
A signal assignment queues a new value for signal update. How the projected output waveform queue is operated is described in 10.5.2.2 Executing a simple assignment statement:
Evaluation of a waveform element produces a single transaction. The time component of the transaction is determined by the current time added to the value of the time expression in the waveform element. For the first form of waveform element, the value component of the transaction is determined by the value expression in the waveform element.
An assignment without a time expression schedules a transaction for the current simulation time. (A delta cycle will occur - a simulation cycle without advancing the simulation time.) The sequence of transactions described in 10.5.2.2 tells us that old transactions scheduled for the same simulation time are deleted.
This means there's only one queue entry for any simulation time and explains why the last assignment to a particular signal is the one resulting in a transaction (and producing an event for a signal a process is sensitive to).
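As a minimal sketch (the signal s and the process are illustrative, not from the question), two assignments without a time expression both target the current simulation time, so the second deletes the first's transaction and only '0' is ever observed on s:
process
begin
    s <= '1';  -- queues a transaction for the current simulation time
    s <= '0';  -- replaces the previous transaction for the same time
    wait;
end process;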
14.7 Execution of a model contains information about how a simulation cycle operates (14.7.5 Model execution).
14.7.5.1 General:
The execution of a model consists of an initialization phase followed by the repetitive execution of process statements in the description of that model. Each such repetition is said to be a simulation cycle. In each cycle, the values of all signals in the description are computed. If as a result of this computation an event occurs on a given signal, process statements that are sensitive to that signal will resume and will be executed as part of the simulation cycle.
14.7.5.3 Simulation cycle describes the simulation cycle; the IEEE Std 1076-1993 text is quoted here for simplicity, since it is not cluttered with VHPI actions:
12.6.4 The simulation cycle
The execution of a model consists of an initialization phase followed by the repetitive execution of process statements in the description of that model. Each such repetition is said to be a simulation cycle. In each cycle, the values of all signals in the description are computed. If as a result of this computation an event occurs on a given signal, process statements that are sensitive to that signal will resume and will be executed as part of the simulation cycle.
At the beginning of initialization, the current time, Tc, is assumed to be 0 ns.
The initialization phase consists of the following steps:
-- The driving value and the effective value of each explicitly declared signal are computed, and the current value of the signal is set to the effective value. This value is assumed to have been the value of the signal for an infinite length of time prior to the start of simulation.
-- The value of each implicit signal of the form S'Stable(T) or S'Quiet(T) is set to True. The value of each implicit signal of the form S'Delayed(T) is set to the initial value of its prefix, S.
-- The value of each implicit GUARD signal is set to the result of evaluating the corresponding guard expression.
-- Each nonpostponed process in the model is executed until it suspends.
-- Each postponed process in the model is executed until it suspends.
-- The time of the next simulation cycle (which in this case is the first simulation cycle), Tn, is calculated according to the rules of step f of the simulation cycle, below.
A simulation cycle consists of the following steps:
a. The current time, Tc, is set equal to Tn. Simulation is complete when Tn = TIME'HIGH and there are no active drivers or process resumptions at Tn.
b. Each active explicit signal in the model is updated. (Events may occur on signals as a result.)
c. Each implicit signal in the model is updated. (Events may occur on signals as a result.)
d. For each process P, if P is currently sensitive to a signal S and if an event has occurred on S in this simulation cycle, then P resumes.
e. Each nonpostponed process that has resumed in the current simulation cycle is executed until it suspends.
f. The time of the next simulation cycle, Tn, is determined by setting it to the earliest of
TIME'HIGH,
The next time at which a driver becomes active, or
The next time at which a process resumes.
If Tn = Tc, then the next simulation cycle (if any) will be a delta cycle.
g. If the next simulation cycle will be a delta cycle, the remainder of this step is skipped. Otherwise, each postponed process that has resumed but has not been executed since its last resumption is executed until it suspends. Then Tn is recalculated according to the rules of step f. It is an error if the execution of any postponed process causes a delta cycle to occur immediately after the current simulation cycle.
Signal values don't change during the execution of a process. Their updates are queued and applied in a different step in the execution of a simulation cycle.
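A quick way to see this in simulation (a sketch; the report line is purely illustrative) is to read a signal immediately after assigning it:
process(clk)
begin
    if rising_edge(clk) then
        a <= b;
        -- a still holds its pre-assignment value here; the update is only
        -- applied after all processes have suspended
        report "a is still " & std_ulogic'image(a);
    end if;
end process;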
Back to IEEE Std 1076-2008:
Sequential statements, 10.1 General
The various forms of sequential statements are described in this clause. Sequential statements are used to define algorithms for the execution of a subprogram or process; they execute in the order in which they appear.
We see the order in which sequential signal assignments execute doesn't determine the order in which signals are updated.
Signal assignments are transactions that are queued, as described above. So after evaluation, a(new)=b(old), b(new)=c(old), and c(new)=a(old).
If you really want sequential assignment, you can use variables (but preferably don't, because you can easily make a mistake)
process(clk) is
variable i_a, i_b, i_c : [some type];
begin
if rising_edge(clk) then
-- initialize with signal value
i_a := a;
i_b := b;
i_c := c;
-- modify
i_a := i_b;
i_b := i_c;
i_c := i_a;
-- write back to signal
a <= i_a;
b <= i_b;
c <= i_c;
end if;
end process;
Now c(new)=a(new)=b(old) and b(new)=c(old)
Related
I'm new to VHDL. I learned that the statements inside a process always execute sequentially. But in my snippet below, Q will get the old temp value. That seems to contradict sequential execution, since Q does not update to the newest temp value.
process (CLK)
begin
if(rising_edge(CLK)) then
temp <= D;
Q <= temp;
end if;
end process;
The signals temp and Q will be two separate flip-flops. The value of a flip-flop only updates on each rising edge of CLK.
Note that it is not like programming. You are essentially connecting flip-flops, but their values will not be updated until the next rising edge. Therefore, the order of the assignments within the process does not matter (as long as you have only one assignment for each signal).
The schematic in hardware will look something like this. Note, each flip-flop will only update on the next rising edge.
       ------      -----
D ---> |temp| ---> | Q |
       ------      -----
If you need a different behaviour (where temp is not a separate flip-flop), temp needs to be a variable instead of a signal.
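A sketch of that variant (assuming D and Q are std_logic; the variable name temp_v is illustrative and not from the original snippet):
process (CLK)
    variable temp_v : std_logic;
begin
    if rising_edge(CLK) then
        temp_v := D;   -- a variable updates immediately
        Q <= temp_v;   -- Q gets the new value; only one flip-flop is inferred
    end if;
end process;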
I hope that helps.
I have a fundamental question on VHDL.
Consider the following process:
process(Clk)
begin
if(rising_edge(Clk)) then
a <= data_in;
b <= a;
c <= b;
data_out <= c;
end if;
end process;
The above process acts as a delay register, where data_in is output to data_out after 4 clock cycles.
From my understanding this happens because signals are assigned in parallel. But then why are the statements inside a process called sequential?
For example:
process(Clk)
begin
if(rising_edge(Clk)) then
a <= b or c;
a <= b and c;
end if;
end process;
In the above process 'a' takes the value from the 2nd statement, and I understand how that works in a sequential way, unlike the first process.
Please help.
It's actually very simple: all statements inside a VHDL process are executed sequentially, in order, from top to bottom, no exceptions. However,
1. the left hand side of a signal assignment operator (<=) does not take its new value until the process (and all other processes) have suspended (either hit the bottom or hit a wait statement), and
2. if you assign to a signal again (as in your second example), the last assignment executed overwrites the previous ones.
Now that you know that, simulate the above two processes in your head and you will see that they behave as you say they will. (The statements in your first example are NOT executed in parallel, but because of (1) above, it seems like they are.)
I am reading the book Free Range VHDL and here is an example from chapter 8.
-- library declaration
library IEEE;
use IEEE.std_logic_1164.all;
-- entity
entity my_fsm1 is
port ( TOG_EN : in std_logic;
CLK,CLR : in std_logic;
Z1 : out std_logic);
end my_fsm1;
-- architecture
architecture fsm1 of my_fsm1 is
type state_type is (ST0,ST1);
signal PS,NS : state_type;
begin
sync_proc: process(CLK,NS,CLR)
begin
-- take care of the asynchronous input
if (CLR = '1') then
PS <= ST0;
elsif (rising_edge(CLK)) then
PS <= NS;
end if;
end process sync_proc;
comb_proc: process(PS,TOG_EN)
begin
Z1 <= '0'; -- pre-assign output
case PS is
when ST0 => -- items regarding state ST0
Z1 <= '0'; -- Moore output
if (TOG_EN = '1') then NS <= ST1;
else NS <= ST0;
end if;
when ST1 => -- items regarding state ST1
Z1 <= '1'; -- Moore output
if (TOG_EN = '1') then NS <= ST0;
else NS <= ST1;
end if;
when others => -- the catch-all condition
Z1 <= '0'; -- arbitrary; it should never
NS <= ST0; -- make it to these two statements
end case;
end process comb_proc;
end fsm1;
Is there any difference if I remove NS from the sensitivity list of sync_proc?
sync_proc: process(CLK,CLR)
begin
-- take care of the asynchronous input
if (CLR = '1') then
PS <= ST0;
elsif (rising_edge(CLK)) then
PS <= NS;
end if;
end process sync_proc;
After examining the question this is considered a duplicate of, as well as its answer, and finding that neither gives any authoritative reference as to when a signal belongs in the sensitivity list, it may be worth asking where that information is derived from.
You could note that Free Range VHDL only mentions wait as a reserved word in VHDL. There's much more to it than that. The process statement as described in the VHDL standard (IEEE Std 1076-2008, 10.3 Process statement) tells us:
If a process sensitivity list appears following the reserved word process, then the process statement is assumed to contain an implicit wait statement as the last statement of the process statement part; this implicit wait statement is of the form
wait on sensitivity_list ;
And it then goes on to discuss how the rules of 10.2 Wait statement are applied to a sensitivity list consisting of the reserved word all.
The sync_proc from architecture fsm1 of entity my_fsm1 in Free Range VHDL Listing 7.1 Solution to Example 18 has a sensitivity list in accordance with the rules found in 10.2 Wait statement for an implicitly generated sensitivity list.
However, that's not the complete set of authorities. There's also IEEE Std 1076.6-2004 (RTL Synthesis, now withdrawn) 6.1.3.1 Edge-sensitive storage from a process with sensitivity list and one clock:
d) The process sensitivity list includes the clock and any signal controlling an <async_assignment>.
Where <async_assignment> is defined in 6.13 Modeling edge-sensitive storage elements:
<async_assignment>. An assignment to a signal or variable that is not controlled by <clock_edge> in any execution path.
And <clock_edge> is defined by convention (1.4) as one of the forms for clock_edge defined in the BNF found in 6.1.2 Clock edge specification.
(translation: d) above means what you think it means when you read it.)
This tells us what signals are necessary here. There are no restrictions on unnecessary signals in the process sensitivity list. However their effect can be discerned from IEEE Std 1076-2008 10.2 Wait statement:
The suspended process also resumes as a result of an event occurring on any signal in the sensitivity set of the wait statement. If such an event occurs, the condition in the condition clause is evaluated. If the value of the condition is FALSE, the process suspends again. Such repeated suspension does not involve the recalculation of the timeout interval.
For a wait statement:
wait_statement ::=
[ label : ] wait [ sensitivity_clause ] [ condition_clause ] [ timeout_clause ] ;
it helps if you know the condition clause is optional as indicated by the square brackets above:
The condition clause specifies a condition that shall be met for the process to continue execution. If no condition clause appears, the condition clause until TRUE is assumed.
That means the process will resume for any event on any of the signals in the process sensitivity list and will traverse its sequential statements.
There is no harm to the state of the design hierarchy during simulation from executing sync_proc for an event on signal NS: neither of the conditional assignments in the if statement will execute. You could also note the same holds true for an event on the falling edge of CLK.
The objective in paring down the sensitivity list is to minimize the number of times the process resumes needlessly. The bigger and more complex the design model, the slower simulation will proceed, particularly when dragging around needless resumptions and suspensions.
Of the three signals shown in the process sensitivity list only three binary value transitions are of interest, and none on signal NS. The current value of NS is assigned to PS on the rising edge of CLK.
A process suspends and resumes in a particular wait statement. A process with a sensitivity list shall not contain an explicit wait statement (10.3), meaning it will have only one wait statement, the implicit one.
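As an illustration (a sketch only; an actual process with a sensitivity list may not also contain an explicit wait statement), sync_proc behaves as if it were written with its implicit wait spelled out as the last statement:
sync_proc_equivalent: process
begin
    if (CLR = '1') then
        PS <= ST0;
    elsif (rising_edge(CLK)) then
        PS <= NS;
    end if;
    wait on CLK, NS, CLR;  -- the implicit wait implied by the sensitivity list
end process;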
It would seem with your first question on VHDL here you've reached beyond the limits of answers the book Free Range VHDL can supply.
A better entreaty on the subject might be The Designer's Guide to VHDL, 3rd edition by Peter Ashenden.
The idea being conveyed here is that you can't understand the why without knowing the how.
We always use a process with the clock and reset in the sensitivity list to describe a sequential circuit, and a process with every driving signal in the sensitivity list to describe a combinational circuit.
The sensitivity list often only matters for simulation: if you forget a signal, or add too many signals, you may get the wrong simulation result, while most of the time the real FPGA will still work correctly if your logic is correct.
But it can cause problems.
For example, suppose you describe a function like a = b and c in a process with only b in the sensitivity list, forgetting c. Then in your simulation a will not change when c changes, but the circuit in the real FPGA will still correctly implement a = b and c, and you may get a warning when you synthesize your code.
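A sketch of the two alternatives in VHDL (a, b and c are illustrative std_logic signals; only one of these processes would exist in a real design):
-- incomplete sensitivity list: simulation does not re-evaluate a when only c changes,
-- but synthesis still builds an AND gate driven by both b and c
bad_and: process(b)
begin
    a <= b and c;
end process;
-- complete sensitivity list: simulation matches the synthesized hardware
good_and: process(b, c)
begin
    a <= b and c;
end process;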
You can call this a 'pre-sim and post-sim' inconsistency.
The really scary case is when your pre-synthesis simulation is right but your post-synthesis simulation is wrong; that may cause the FPGA to function incorrectly.
So I advise you to describe the circuit rather than just the function when you write VHDL code.
In VHDL, in a process all steps will be executed sequentially, but I wonder how an FPGA can execute steps sequentially. I am very confused about how sequential assignments, functions and similar are being generated in an FPGA, so can anyone throw some light on this topic?
process(d, clk)
begin
if(rising_edge(clk)) then
q <= d;
else
q <= q;
end if;
end process;
This is just code for a simple D-Latch, but how will this be implemented in an FPGA?
It is not "executed" sequentially as such - but the synthesizer interprets the code sequentially, and creates the hardware design to fit such an interpretation.
For instance, if you assign a value to a signal twice during a clocked process, the first assignment is simply ignored, while the second takes effect (remember that a signal only takes its new value once the process suspends, not immediately):
signal a : UNSIGNED(3 downto 0) := (others => '0');
(...)
process(clk)
begin
if(rising_edge(clk)) then
a <= a - 1;
a <= a + 1;
end if;
end process;
The above process will always increment a by 1. Similarly, if you have the second assignment inside an if statement, the synthesizer will simply create two paths for a - a decrement for when the if condition is not fulfilled, and an increment for when it is.
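A sketch of that variant (the up_enable signal is illustrative, not from the original snippet; as before, a is an UNSIGNED counter):
process(clk)
begin
    if(rising_edge(clk)) then
        a <= a - 1;              -- used when the condition below is false
        if up_enable = '1' then
            a <= a + 1;          -- the later assignment overrides the decrement
        end if;
    end if;
end process;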
If you use variables, the idea is the same - although intermediate values are used, as variables take on their new value immediately.
But it all boils down to that the synthesizer does all the "magic" of interpreting your process in a sequential way, then generating hardware that does what you have described.
Your example basically describes a d-flip-flop (the Xilinx FPGA tools iirc distinguish latches and flip-flops in that flip-flops are edge-sensitive, and latches are level-sensitive), although in a different way than typically recommended.
You can basically write the same code as:
process(clk)
begin
if(rising_edge(clk)) then
q <= d;
end if;
end process;
It will automatically keep its value in the other cases. This will be implemented as a flip-flop inside the FPGA. Most FPGAs consist of blocks of look-up tables and flip-flops, to which quite a lot of different hardware can be mapped. The above code will simply by-pass the look-up table, and just use the flip-flop of one of the blocks.
You can learn more about the internal workings by having a look at the datasheet for your particular FPGA. For Spartan3-series FPGAs for instance, have a look at page 24 of the Xilinx Spartan3 FPGA Family Data Sheet
I would like to latch a signal; however, when I try to do so, I get a delay of one cycle. How can I avoid this?
myLatch: process(wclk, we) -- Can I omit the we in the sensitivity list?
begin
if wclk'event and wclk = '1' then
lwe <= we;
end if;
end process;
However, if I try this and look at the waves during simulation, lwe is delayed by one cycle of wclk. All I want to achieve is to sample we on the rising edge of wclk and keep it stable until the next rising edge. I then assign the latched signal to another entity's port map, which is defined in the architecture.
==============================================
Well, I figured out that I have to omit the wclk'event to get a latch instead of a flip-flop. This seems rather unintuitive to me: by simply shortening the time during which I sample the signal to be latched, I go from a latch to a flip-flop. Can anyone explain why this is and where my perception is wrong? (I am a VHDL beginner.)
First off, a few observations on the process you pasted above:
myLatch: process(wclk, we)
begin
if wclk'event and wclk = '1' then
lwe <= we;
end if;
end process;
The signal we can be omitted from the sensitivity list because you have described a clocked process. The only signals required in the sensitivity list of a process like this are the clock and the asynchronous reset if you choose to use one (a synchronous reset would not need to be added to the sensitivity list).
Instead of using if wclk'event and wclk = '1' then, you should use if rising_edge(wclk) then or if falling_edge(wclk) then; there are good write-ups elsewhere on the reasons why.
By omitting the wclk'event you changed the process from a clocked process to a combinatorial process, like so:
myLatch: process(wclk, we)
begin
if wclk = '1' then
lwe <= we;
end if;
end process;
In a combinatorial process all inputs should be present in the sensitivity list, so you would be correct to have both wclk and we in the list, as they both have an influence on the output. Normally you would ensure that lwe is assigned in all cases of your if statement to avoid inferring a latch; however, inferring one appears to be your intention in this case.
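For contrast, a sketch of a version that assigns lwe in every branch and therefore infers no storage (the '0' in the else branch is just an arbitrary illustrative default):
no_latch: process(wclk, we)
begin
    if wclk = '1' then
        lwe <= we;
    else
        lwe <= '0';  -- assigned in every branch, so no latch is inferred
    end if;
end process;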
Latches in general should be avoided, so if you find yourself needing one you should perhaps pause and consider your approach. Doulos have a couple of articles on latches that you might find useful.
You stated that all you want to achieve is to sample we on the rising edge of wclk and keep it stable until the next rising edge. The process below will accomplish this:
store : process(wclk)
begin
if rising_edge(wclk) then
lwe <= we;
end if;
end process;
With this process, lwe will be updated with the value of we upon every rising edge of wclk and it will remain valid for a single clock cycle.
Let me know if this clears things up for you.
Believe it or not, the issue is actually in your testbench. This has to do with how the VHDL simulation model works.
VHDL is usually used for synchronous hardware design -- that means, using flip-flops that sample on the rising edge and set outputs on the falling edge, so that there are no race conditions between reading and writing. But in VHDL this master/slave logic is not actually simulated using opposite clock edges.
Consider a process
process (clock) begin
if rising_edge(clock) then
a <= b;
end if;
end process;
At the start of a simulation timestep, if clock has just risen, the if will execute. Then the assignment a <= b will be executed, and this will not immediately cause an assignment to take place, but schedule the assignment for the end of the timestep.
After all processes have been run, then all scheduled assignments take place. This means that no process will "see" the new value of a until the next timestep.
Time            a    b    Actions
Start of ts 1   '0'  '1'  a <= '1' is scheduled
End of ts 1     '1'  '0'  a <= '1' is executed
Start of ts 2   '1'  '0'  a <= '0' is scheduled
End of ts 2     '0'  '1'  a <= '0' is executed
So when you look on the waveform viewer, what you will see is a apparently being set on the rising edge of the clock, and following b delayed by one clock cycle; you don't see the intermediate scheduling of assignments that causes this to happen.
Of course, in real life, there is no "end of the timestep", and the actual changing of signal a happens when the slave part of the flip-flop triggers, i.e., on the negative edge. (Maybe it would have been less confusing for VHDL to just use the negative edge; but, oh well, this is how it works).
Here are two testbenches for your latch code:
Test bench 1, using rising edges
Test bench 2, using falling edges
In the first, if you look in the waveform viewer you will see exactly what you describe -- lwe appears to be delayed by 1 clock cycle -- but really, the delay is happening in the non-blocking assignment that sets counter -- so when the rising edge happens, we does not actually have its new value yet. And in the second, you see no such delay; lwe is set exactly on the rising edge to the value of we at that time.
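As a sketch of the second style (not the exact testbench referred to above; the stimulus sequence is illustrative), driving we on the falling edge guarantees it is stable when the flip-flop samples it on the rising edge:
stimulus: process
begin
    wait until falling_edge(wclk);
    we <= '1';                      -- changes half a cycle before the sampling edge
    wait until falling_edge(wclk);
    we <= '0';
    wait;
end process;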
For a related topic in Verilog, see Nonblocking Assignments in Verilog Synthesis, Coding Styles That Kill!
The process you have is what you want according to your description, although 'we' should be removed from the sensitivity list. If this doesn't work as you believe it should, it is almost certainly a problem with your test bench/simulation (see Owen's answer). Specifically, you are probably changing the value of 'we' too late, so that the flip-flop latches the previous value instead of the new one.
I'm interested to know what the source of this signal is, though; if it's an asynchronous signal that can change at any time, you will have to add some logic to protect against metastability.
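A common way to do that (a sketch only; the we_meta and we_sync signal names are illustrative, not from the question) is a two flip-flop synchronizer:
sync_we: process(wclk)
begin
    if rising_edge(wclk) then
        we_meta <= we;       -- first stage may go metastable
        we_sync <= we_meta;  -- second stage gives it a full clock period to settle
    end if;
end process;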
To answer your second question about latches, it is correct that omitting wclk'event will result in a latch. This process will not do what you want, however, because it will propagate changes to 'we' to 'lwe' during the whole positive half-period of the clock. The short answer to your question is that implementing this type of behavior requires a latch, while the behavior described by the original process requires a flip-flop.