I have been modelling a few simple VHDL gates, but I can't seem to get the time delay right. I have the following code:
LIBRARY IEEE;
USE IEEE.std_logic_1164.ALL;
ENTITY AND_4 IS
GENERIC (delay : delay_length := 0 ns);
PORT (a, b, c, d : IN std_logic;
x : OUT STD_logic);
END ENTITY AND_4;
ARCHITECTURE dflow OF AND_4 IS
BEGIN
x <= ( a and b and c and d) AFTER delay;
END ARCHITECTURE dflow;
LIBRARY IEEE;
USE IEEE.std_logic_1164.ALL;
ENTITY TEST_AND_4 IS
END ENTITY TEST_AND_4;
ARCHITECTURE IO OF TEST_AND_4 IS
COMPONENT AND_4 IS
GENERIC (delay : delay_length := 0 ns);
PORT (a, b, c, d : IN std_logic;
x : OUT STD_logic);
END COMPONENT AND_4;
SIGNAL a,b,c,d,x : std_logic := '0';
BEGIN
G1 : AND_4 GENERIC MAP (delay => 5 ns) PORT MAP (a,b,c,d,x);
PROCESS
VARIABLE error_count : integer:= 0;
BEGIN
WAIT FOR 1 NS;
a <= '1';
b <= '0';
c <= '0';
d <= '0';
ASSERT (x = '1') REPORT "output error" SEVERITY error;
IF (x /= '1') THEN
error_count := error_count + 1;
END IF;
--Repeated test vector -- omitted
END PROCESS;
END ARCHITECTURE IO;
CONFIGURATION TESTER1 OF TEST_AND_4 IS
FOR IO
FOR G1 : AND_4
USE ENTITY work.AND_4(dflow)
GENERIC MAP (delay);
END FOR;
END FOR;
END CONFIGURATION TESTER1;
When I simulate the model I only get the 1 ns delay that I added to each test vector. I'm guessing the problem is how I pass the delay to the component declaration in the test bench. I've tried a few things and reread the topic in the book I have but still no joy. Any help ?
Many thanks
D
Modifying the unlabelled stimulus process in your testbench:
process
variable error_count : integer:= 0;
begin
wait for 1 ns;
a <= '1';
-- b <= '0';
-- c <= '0';
-- d <= '0';
-- assert (x = '1') report "output error" severity error;
-- if (x /= '1') then
-- error_count := error_count + 1;
-- end if;
--repeated test vector -- omitted
b <= '1';
c <= '1';
d <= '1';
wait for 5 ns;
wait for 5 ns;
wait;
end process;
to simply demonstrate the delay shows that the generic delay is being passed to the instantiated component:
If you get something different perhaps you could convert your question to a Minimal, Complete, and Verifiable example by ensuring that the example actually reproduces the problem and that we know your results:
Describe the problem. "It doesn't work" is not a problem statement. Tell us what the expected behavior should be. Tell us what the exact wording of the error message is, and which line of code is producing it. Put a brief summary of the problem in the title of your question.
The little bit of stimulus you left in your testbench doesn't appear to properly test the and_4.
If there were more stimulus and you weren't waiting longer than the pulse rejection limit implied by your signal assignment's delay mechanism, you'd get nothing but those annoying assertions.
See IEEE Std 1076-2008, 10.5.2 Simple signal assignments, 10.5.2.1 General, paragraphs 5 and 6:
The right-hand side of a simple waveform assignment may optionally specify a delay mechanism. A delay mechanism consisting of the reserved word transport specifies that the delay associated with the first waveform element is to be construed as transport delay. Transport delay is characteristic of hardware devices (such as transmission lines) that exhibit nearly infinite frequency response: any pulse is transmitted, no matter how short its duration. If no delay mechanism is present, or if a delay mechanism including the reserved word inertial is present, the delay is construed to be inertial delay. Inertial delay is characteristic of switching circuits: a pulse whose duration is shorter than the switching time of the circuit will not be transmitted, or in the case that a pulse rejection limit is specified, a pulse whose duration is shorter than that limit will not be transmitted.
Every inertially delayed signal assignment has a pulse rejection limit. If the delay mechanism specifies inertial delay, and if the reserved word reject followed by a time expression is present, then the time expression specifies the pulse rejection limit. In all other cases, the pulse rejection limit is specified by the time expression associated with the first waveform element.
(Note you can go to 10.5.2.2 Executing a simple assignment statement and see that the after time_expression is part of the waveform_element and not the delay mechanism.)
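For illustration only (not from the original post), here are the three delay-mechanism forms applied to the AND_4 output assignment; only one of them would be used at a time, and the 2 ns rejection limit is an arbitrary example value:
-- inertial (the default): pulses shorter than the delay are swallowed
x <= ( a and b and c and d) AFTER delay;
-- inertial with an explicit pulse rejection limit of 2 ns
x <= REJECT 2 ns INERTIAL ( a and b and c and d) AFTER delay;
-- transport: every pulse is propagated, no matter how short
x <= TRANSPORT ( a and b and c and d) AFTER delay;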
Sure
ENTITY TEST_AND_4 IS
END ENTITY TEST_AND_4;
ARCHITECTURE IO OF TEST_AND_4 IS
COMPONENT AND_4 IS
GENERIC (delay : delay_length := 0 ns);
PORT (a, b, c, d : IN std_logic;
x : OUT STD_logic);
END COMPONENT AND_4;
SIGNAL a,b,c,d,x : std_logic := '0';
BEGIN
G1 : AND_4 GENERIC MAP (delay => 5 NS) PORT MAP (a,b,c,d,x);
PROCESS
VARIABLE error_count : integer:= 0;
BEGIN
WAIT FOR 6 NS; -- Changed to 6 ns so that the wait is longer than the
-- generic gate propagation delay
a <= '1';
b <= '1';
c <= '1';
d <= '1';
ASSERT (x = '1') REPORT "output error" SEVERITY error;
IF (x /= '1') THEN
error_count := error_count + 1;
END IF;
I have noted the change I made to the test bench model above; it seems kinda obvious now, but yesterday it had me pulling my hair out.
Cheers
D
The 'fix' was to change the WAIT value in the sequential test bench model from 1 ns to 6 ns. This gives the gate the time to change state because it has a 5 ns inertial delay.
WAIT FOR 6 NS; -- Changed to 6 ns so that the wait is longer than the
-- generic gate propagation delay
Thanks for the help, but I spotted the problem this morning after reading USER115520's post. The delay I set was 'inertial' and set generically at 5 ns. In my test bench process I had only set 1 ns wait statements in between input signal changes. Thus the gate would not perform the transition when the correct stimuli were introduced.
I inserted a 6 ns delay after a=1 b=1 c=1 d=1 and got the correct response from the gate.
I've learned that an SR latch oscillates when S and R are both '0' right after they were both '1', in the following circuit's VHDL code.
Here is the VHDL of SRLATCH:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity SRLATCH_VHDL is
port(
S : in STD_LOGIC;
R : in STD_LOGIC;
Q : inout STD_LOGIC;
NOTQ: inout STD_LOGIC);
end SRLATCH_VHDL;
architecture Behavioral of SRLATCH_VHDL is
begin
process(S,R,Q,NOTQ)
begin
Q <= R NOR NOTQ;
NOTQ<= S NOR Q;
end process;
end Behavioral;
and followings are process in Testbench code and its simulation results
-- Stimulus process
stim_proc: process
begin
S <= '1'; R <= '0'; WAIT FOR 100 NS;
S <= '0'; R <= '0'; WAIT FOR 100 NS;
S <= '0'; R <= '1'; WAIT FOR 100 NS;
S <= '0'; R <= '0'; WAIT FOR 100 NS;
S <= '1'; R <= '1'; WAIT FOR 500 NS;
end process;
and I totally don't have any idea why the simulation doesn't reflect...
Someone is teaching you wrong knowledge!
SR and RS basic flip-flops (also called latches) don't oscillate. The problem with S = R = 1 (forbidden) is that you don't know the state after you leave S = R = 1, because you can never go to S = R = 0 (hold) simultaneously. You will transition from S = R = 1 to S = R = 0 through S = 1, R = 0 (set) or S = 0, R = 1 (reset). This will trigger either a set or a reset operation before you arrive in the hold state.
Be aware that VHDL simulates with discrete time and reproduces the same simulation results on every run. You cannot (easily) simulate the physical effects that cause different signal delays from run to run.
Btw. your VHDL description is also wrong: Q and NOTQ should be of mode out, not inout. Use either a proper simulator supporting VHDL-2008 (which allows reading back out ports) or use an intermediate signal.
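As a minimal sketch of the intermediate-signal approach (the internal names Q_i and NOTQ_i are my own, not from the original post), keeping plain out ports so it also works on pre-2008 tools:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity SRLATCH_VHDL is
port(
S : in STD_LOGIC;
R : in STD_LOGIC;
Q : out STD_LOGIC;
NOTQ : out STD_LOGIC);
end SRLATCH_VHDL;
architecture Behavioral of SRLATCH_VHDL is
-- internal copies of the outputs so they can be read back
signal Q_i, NOTQ_i : STD_LOGIC;
begin
process(S, R, Q_i, NOTQ_i)
begin
Q_i <= R NOR NOTQ_i;
NOTQ_i <= S NOR Q_i;
end process;
Q <= Q_i;
NOTQ <= NOTQ_i;
end Behavioral;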
Nice question, and your instructor is right - this circuit will oscillate if both S and R are released at the "same" time. Your issue is that your TB isn't doing this, but this one does:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity TOP is
end entity TOP;
architecture A of TOP is
signal S,R,Q,NOTQ: std_logic;
component SRLATCH_VHDL is
port(
S : in std_logic;
R : in std_logic;
Q : inout std_logic;
NOTQ : inout std_logic);
end component SRLATCH_VHDL;
begin
U1 : SRLATCH_VHDL port map(S, R, Q, NOTQ);
process is
begin
S <= '1';
R <= '1';
wait for 10 ns;
S <= '0';
R <= '0';
wait;
end process;
end architecture A;
This will produce infinite delta-delay oscillation:
This isn't a great way to demonstrate asynchronous behaviour, because you are effectively simplifying the physical nature of the circuit, and using the VHDL scheduler to show that there's a problem (with the use of 'delta delays'). A better way to do this is to model real circuit behaviour by adding signal delays (this is exactly what your tools are doing when they back-annotate for timing simulations). Look up signal assignments with after, and the difference between transport and inertial delays. If you draw a circuit diagram, you'll see that the issue arises if both S and R are released in a 'small' time window that doesn't allow the signal propagation around your circuit to complete before the second control signal changes. You now need to write a testbench that changes S and R inside this time window.
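As an untested sketch of that idea, here is an alternative architecture for the SRLATCH_VHDL entity with an explicit gate delay (the 1 ns value and the internal signal names are arbitrary choices for illustration; the ports are only driven, so the entity's port modes don't matter):
architecture Delayed of SRLATCH_VHDL is
constant t_gate : time := 1 ns; -- arbitrary illustrative gate delay
signal Q_i, NOTQ_i : STD_LOGIC;
begin
-- cross-coupled NOR gates, each with an inertial gate delay
Q_i <= R NOR NOTQ_i after t_gate;
NOTQ_i <= S NOR Q_i after t_gate;
Q <= Q_i;
NOTQ <= NOTQ_i;
end architecture Delayed;
With real delays the latch either settles or oscillates in simulated time (with period 2 * t_gate) rather than in delta cycles, depending on how close together S and R are released.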
Pretty much everything you ever design will be asynchronous, in exactly the same way as your SR circuit. We make circuits 'synchronous' only by ensuring that input signals don't change at the same time. The job of the timing tools is to tell us what 'same' actually means: when you get a report or a datasheet value giving you a setup or a hold time, then that number is simply the numerical version of 'not the same'.
First of all I'm sorry to bother you guys with my very noob question, but I can't make any sense of what's happening with my (ModelSim simulated) circuit.
Here's my code, simple as can be:
LIBRARY ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
ENTITY Counter IS
PORT(
enable : in std_logic;
clk : in std_logic;
count : out integer range 0 to 255);
END Counter;
ARCHITECTURE LogicFunction OF Counter IS
signal count_i : integer range 0 to 255;
begin
cnt : process(clk, enable, count_i)
begin
count <= count_i;
if (enable = '0') then
count_i <= 0;
else
count_i <= count_i + 1;
end if;
end process;
end LogicFunction;
My problem is: when I perform a timing simulation with ModelSim, with a clock signal and "enable" first '0' and then '1', the output ("count") stays at zero all the time. I tried a lot of different things, like setting the "count" output as a vector and doing all sorts of casts, but it still stays the same.
The increment "count_i <= count_i + 1;" seems to be the problem: I tried to replace it with something like "count_i <= 55", and then the output changes (to "55" in the previous example).
I've seen the exact same increment in the code on that webpage for example :
http://surf-vhdl.com/how-to-connect-serial-adc-fpga/
I've created a project, simulated it and... it works! I really don't get what the guy did that I didn't, except for a bunch of "if"s that I don't need in my code.
Any help would be greatly appreciated, I've spent like 3 hours of trial and error...
Thanks in advance!
In addition to not using a clock edge to increment count_i, you're using enable as a clear because it's both in the sensitivity list and encountered first in an if statement condition.
library ieee;
use ieee.std_logic_1164.all;
-- use ieee.numeric_std.all;
entity counter is
port(
enable : in std_logic;
clk : in std_logic;
count : out integer range 0 to 255);
end counter;
architecture logicfunction of counter is
signal count_i : integer range 0 to 255;
begin
cnt : process (clk) -- (clk, enable, count_i)
begin
-- count <= count_i; -- MOVED
-- if (enable = '0') then -- REWRITTEN
-- count_i <= 0;
-- else
-- count_i <= count_i + 1;
-- end if;
if rising_edge(clk) then
if enable = '1' then
count_i <= count_i + 1;
end if;
end if;
end process;
count <= count_i; -- MOVED TO HERE
end architecture logicfunction;
Your code is modified to use the rising edge of clk and to require enable = '1' before count_i increments. The superfluous use clause referencing package numeric_std has been commented out. The only numeric operation you're performing is on an integer, and those operators are predefined in package standard.
Note the replacement if statement doesn't surround its condition with parentheses. This isn't a programming language and they aren't needed.
The count assignment is moved to a concurrent signal assignment. This removes the need to have count_i in the sensitivity list just to update count.
Throw in a testbench to complete a Minimal, Complete, and Verifiable example:
library ieee;
use ieee.std_logic_1164.all;
entity counter_tb is
end entity;
architecture foo of counter_tb is
signal enable: std_logic := '0';
signal clk: std_logic := '0';
signal count: integer range 0 to 255;
begin
DUT:
entity work.counter
port map (
enable => enable,
clk => clk,
count => count
);
CLOCK:
process
begin
wait for 5 ns; -- 1/2 clock period
clk <= not clk;
if now > 540 ns then
wait;
end if;
end process;
STIMULUS:
process
begin
wait for 30 ns;
enable <= '1';
wait for 60 ns;
enable <= '0';
wait for 30 ns;
enable <= '1';
wait;
end process;
end architecture;
And that gives:
This shows that the counter doesn't count when enable is '0', nor does enable = '0' reset the value of count_i.
The Quartus II Handbook Volume 1 Design and Synthesis doesn't give an example using a clock edge and an enable without an asynchronous clear or load signal.
The secret here is anything inside the if statement condition specified using a clock edge will be synchronous to the clock. Any condition outside will be asynchronous.
The form of synthesis-eligible sequential logic is derived from the now-withdrawn IEEE Std 1076.6-2004, IEEE Standard for VHDL Register Transfer Level (RTL) Synthesis. Using those behavioral descriptions guarantees you can produce hardware through synthesis that matches simulation.
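A minimal sketch of that rule, using the counter's signals (the rst signal is my illustrative addition, not part of the design above):
process (clk, rst)
begin
if rst = '1' then -- outside the clock-edge condition: asynchronous clear
count_i <= 0;
elsif rising_edge(clk) then -- everything in here is synchronous to clk
if enable = '1' then -- synchronous enable
count_i <= count_i + 1;
end if;
end if;
end process;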
This is a generic question that has bugged me since I was able to understand the basics of a finite state machine. Suppose I have four states s0 - s3, where the FSM will automatically start at 's0' after power is applied. After some defined delay, the FSM shall enter 's1'; the same goes for the other states.
The delay between the different states is not the same.
For example:
Power up -> 's0' -> 100 ms -> 's1' -> 50 us -> 's2' -> 360 us -> 's3' -> 's3'
In a procedural language such as C, I'd just call a delay routine with the required delay as a parameter and be done with it.
How do I implement this sort of FSM elegantly?
best,
Chris
My pattern: a delay counter that each state transition can program as and when required, i.e. at the start of each new delay.
It's all synthesisable, though some tools (notably Synplicity) have trouble with accurate Time calculations unless your clock period is an integer number of nanoseconds. For more information on this bug, see this Q&A. If you run into this situation, magic numbers (32000 instead of Synplicity's calculated 32258 in that question) may be the simplest workaround.
Wrapping it in an entity/architecture is left as an (easy) exercise.
-- first, some declarations for readability instead of magic numbers
constant clock_period : time := 10 ns;
--WARNING : Synplicity has a bug : by default it rounds to nanoseconds!
constant longest_delay : time := 100 ms;
subtype delay_type is natural range 0 to longest_delay / clock_period;
constant reset_delay : delay_type := 100 ms / clock_period - 1;
constant s1_delay : delay_type := 50 us / clock_period - 1;
constant s2_delay : delay_type := 360 us / clock_period - 1;
-- NB take care to avoid off-by-1 error!
type state_type is (s0, s1, s2, s3);
-- now the state machine declarations:
signal state : state_type;
signal delay : delay_type;
-- now the state machine itself:
process(clock, reset) is
begin
if reset = '1' then
state <= s0;
delay <= reset_delay;
elsif rising_edge(clock) then
-- default actions such as default outputs first
-- operate the delay counter
if delay > 0 then
delay <= delay - 1;
end if;
-- state machine proper
case state is
when s0 =>
-- do nothing while delay counts down
if delay = 0 then
--start 50us delay when entering S1
delay <= s1_delay;
state <= s1;
end if;
when s1 =>
if delay = 0 then
delay <= s2_delay;
state <= s2;
end if;
when s2 =>
if delay = 0 then
state <= s3;
end if;
when others =>
null;
end case;
end if;
end process;
You could use a combination of a clock divider and counters. Find out what the clock speed on your device is. All the delays you mentioned are divisible by 10 us, so I'll use a clock divider to generate a clock with that period. Let's assume the original clock speed of your device is 50 MHz. You'll need to find out how many input cycles fit in 10 us. The following calculation does that:
# of cycles = 10 us * 50 MHz = 500 cycles
So you're going to need a counter that counts 500 cycles per output period, i.e. that toggles the divided clock every 250 cycles. A rough example would be the following:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity new_clk is
Port (
clk_in : in STD_LOGIC; -- your 50MHZ device clock
reset : in STD_LOGIC;
clk_out: out STD_LOGIC -- your new clock with a 10us period
);
end new_clk;
architecture Behavioral of new_clk is
signal temporal: STD_LOGIC;
signal counter : integer range 0 to 249 := 0;
begin
clk_div: process (reset, clk_in) begin
if (reset = '1') then
temporal <= '0';
counter <= 0;
elsif rising_edge(clk_in) then
if (counter = 249) then
temporal <= NOT(temporal);
counter <= 0;
else
counter <= counter + 1;
end if;
end if;
end process;
clk_out <= temporal;
end Behavioral;
Note how the counter goes from 0 to 249 (250 input cycles per half period). The signal clk_out will now have a period of 10 us. You can use this to generate your delays now.
For example, for your 360us delay, count 36 periods of the clk_out signal. The code will be roughly similar to what is above but this time you're counting clk_out and your counter is only going from 0 to 35.
(I can add more later but this should get you started.)
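A rough, untested sketch of that 360 us delay, assuming the reset and clk_out signals from the divider above; the names period_count, delay_done and delay_360us are illustrative:
signal period_count : integer range 0 to 35 := 0;
signal delay_done : STD_LOGIC := '0';
delay_360us: process (reset, clk_out)
begin
if (reset = '1') then
period_count <= 0;
delay_done <= '0';
elsif rising_edge(clk_out) then
if (period_count = 35) then -- 36 periods of 10 us = 360 us elapsed
delay_done <= '1'; -- one-shot done flag
else
period_count <= period_count + 1;
end if;
end if;
end process;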
Check chapters 8-9 of "Finite State Machines in Hardware: Theory and Design (with VHDL and SystemVerilog)" MIT Press, 2013, for a detailed discussion covering any case and many complete examples.
A little ode to Brian Drummond's splendid answer: thanks Brian! :)
The main difference is the removal of the fixed upper limit to the delay's length: it's now only limited by the range of the 'delay' signal's type, which can, generally, be as long as needed.
LIBRARY IEEE;
USE IEEE.STD_LOGIC_1164.ALL;
USE IEEE.NUMERIC_STD.ALL;
ENTITY Test123 IS
PORT (
clk_in1 : IN std_logic := '0';
rst1, en1 : IN std_logic
);
END ENTITY Test123;
ARCHITECTURE Test123_Arch OF Test123 IS
-- first, some declarations for readability instead of magic numbers
CONSTANT clock_period : TIME := 20 ns; -- 50 MHz
--WARNING : Synplicity has a bug : by default it rounds to nanoseconds!
CONSTANT reset_delay : TIME := 100 ms - clock_period;
CONSTANT s1_delay : TIME := 50 us - clock_period;
CONSTANT s2_delay : TIME := 360 us - clock_period;
-- NB take care to avoid off-by-1 error!
-- now the state machine declarations:
TYPE state_type IS (s0, s1, s2, s3);
SIGNAL state : state_type;
--
--signal delay : unsigned(47 downto 0) := (others => '0'); -- a 48-bit 'unsigned', with a 50 MHz clock, gives an upper limit of roughly 5.6 million seconds (2**48 cycles at 20 ns, about 65 days).
SIGNAL delay : NATURAL := 0; -- a 'natural', with a 50 MHz clock, gives an upper limit of roughly 42.9 seconds (2**31 - 1 cycles at 20 ns).
--
FUNCTION time_to_cycles(time_value : TIME; clk_period : TIME) RETURN NATURAL IS
BEGIN
-- RETURN TO_UNSIGNED((time_value / clk_period), 48); -- Return a 48-Bit 'unsigned'
RETURN (time_value / clk_period); -- Return a 32-Bit 'natural'
END time_to_cycles;
--
BEGIN
-- now the state machine itself:
sm0 : PROCESS (clk_in1, rst1)
BEGIN
IF (rst1 = '1') THEN
state <= s0;
delay <= time_to_cycles(reset_delay, clock_period);
ELSIF rising_edge(clk_in1) THEN
-- default actions such as default outputs first
-- operate the delay counter
IF (delay > 0) THEN
delay <= delay - 1;
END IF;
-- state machine proper
CASE state IS
WHEN s0 =>
-- do nothing while delay counts down
IF (delay = 0) THEN
--start 50us delay when entering S1
delay <= time_to_cycles(s1_delay, clock_period);
state <= s1;
END IF;
WHEN s1 =>
IF (delay = 0) THEN
delay <= time_to_cycles(s2_delay, clock_period);
state <= s2;
END IF;
WHEN s2 =>
IF (delay = 0) THEN
state <= s3;
END IF;
WHEN OTHERS =>
NULL;
END CASE;
END IF;
END PROCESS;
END ARCHITECTURE Test123_Arch;
So I saw this VHDL code for a testbench for a DFF somewhere and I don't quite get a few things.
1) Why are there 5 cases? Why aren't there just two: when the input is 0 and when it is 1?
2) Why did he pick those waiting periods? The values 12, 28, 2, 10 and 20 ns seem very randomly chosen. What was the logic behind that?
architecture testbench of dff_tb is
signal T_din: std_logic;
signal T_dclk: std_logic;
signal T_qout: std_logic;
signal T_nqout: std_logic;
component dff
port ( din: in std_logic;
dclk: in std_logic;
qout: out std_logic;
nqout: out std_logic
);
end component;
begin
dut_dff: dff port map (T_din,T_dclk,T_qout,T_nqout);
process
begin
T_dclk <= '0';
wait for 5 ns;
T_dclk <= '1';
wait for 5 ns;
end process;
process
variable err_cnt: integer := 0;
begin
--case1
T_din <= '1';
wait for 12 ns;
assert (T_qout='1') report "Error1!" severity error;
-- case 2
T_din <= '0';
wait for 28 ns;
assert (T_qout='0') report "Error2!" severity error;
-- case 3
T_din <= '1';
wait for 2 ns;
assert (T_qout='0') report "Error3!" severity error;
-- case 4
T_din <= '0';
wait for 10 ns;
assert (T_qout='0') report "Error4!" severity error;
-- case 5
T_din <= '1';
wait for 20 ns;
assert (T_qout='1') report "Error5!" severity error;
wait;
end process;
end testbench;
In the following, case 1 through case 5 are represented by named markers A through E:
Case 1 checks to see that T_qout doesn't get updated on the falling edge of T_dclk with T_din = '1'. See marker A.
Case 2 (marker B) checks to see that T_qout doesn't get updated on the falling edge of T_dclk with T_din = '0'.
(And about now you get the impression a student was supposed to do a gate level implementation of dff).
Case 3 (marker C) checks that T_qout remains '0' (an assertion message occurs when the condition is False), i.e. that the dff is actually clocked and that a '1' on T_din alone doesn't cause the output to change.
Case 4 (marker D) checks that T_qout remains '0' for the opposite value of T_din.
(These all appear to be checking gate level dff implementations).
Case 5 (marker E) appears to be checking that a master-slave edge-triggered D flip-flop isn't oscillating or 'relaxing' to the original state.
The testbench appears to be specific to a class assignment for implementing a DFF as a gate level model.
Now the question is, did the instructor cover all possible cases for a student to get it wrong?
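For reference, working the event times out from the two processes (the clock rises at 5, 15, 25, ... ns) shows the wait values aren't random; each assert lands where it can observe one specific behaviour:
-- t =  0 ns : T_din <= '1'
-- t = 12 ns : case 1 assert; the edge at 5 ns captured '1'      -> T_qout = '1'
-- t = 12 ns : T_din <= '0'
-- t = 40 ns : case 2 assert; edges at 15/25/35 ns captured '0'  -> T_qout = '0'
-- t = 40 ns : T_din <= '1'
-- t = 42 ns : case 3 assert; no rising edge since 35 ns         -> T_qout still '0'
-- t = 42 ns : T_din <= '0'
-- t = 52 ns : case 4 assert; the edge at 45 ns captured '0'     -> T_qout = '0'
-- t = 52 ns : T_din <= '1'
-- t = 72 ns : case 5 assert; edges at 55/65 ns captured '1'     -> T_qout = '1'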
You have to look closer at the clock. The clock switches every 5 ns.
Case 1
So in the beginning the DFF should be '1'.
Case 2
15 ns after the start, the output should be '0', not after 12 ns; that's what you have to check.
Case 3
You set the input to '1' but the DFF should never react to it, because the duration is too short.
Case 4 & Case 5
They just ensure that after Case 3 everything is still all right. Again, like in Case 2, here you can check after what amount of time the DFF really switches.
This testbench IS a little bit large for just testing a DFF. But if you are learning about hardware description and testbenches, it is good to know what you have to look out for before you start to implement more complicated and complex designs. Especially if you are going to make silicon out of it, there is never enough testing done :)
Description:
I want to include VHDL assert statements to report when set_delay and hold_delay time violations occur. I am not sure how to do this with my code; I have been to many places on the web and I don't understand. Please give examples with my code.
Code:
LIBRARY ieee;
USE ieee.std_logic_1164.all;
ENTITY dff IS
GENERIC (set_delay : TIME := 3 NS; prop_delay : TIME := 12 NS;
hold_delay : TIME := 5 NS);
PORT (d, set, rst, clk : IN BIT; q : OUT BIT; nq : OUT BIT := '1');
END dff;
--
ARCHITECTURE dff OF dff IS
SIGNAL state : BIT := '0';
BEGIN
dff: PROCESS
BEGIN
wait until rst;
wait until set;
wait until clk;
IF set = '1' THEN
q <= '1' AFTER set_delay;
nq <= '0' AFTER set_delay;
ELSIF rst = '1' THEN
q <= '0' AFTER prop_delay;
nq <= '1' AFTER prop_delay;
ELSIF clk = '1' AND clk'EVENT THEN
q <= d AFTER hold_delay;
nq <= NOT d AFTER hold_delay;
END IF;
END PROCESS dff;
END dff;
I do understand that the general assert syntax is:
ASSERT condition
REPORT "message"
SEVERITY severity_level;
Part of my problem is that I don't know where to put these assert statements and I am not sure how I would write them.
I would introduce additional signals in which you store the time of the last manipulation. Then I'd add further processes which manage these signals and check the times:
time_debug : block
signal t_setup, t_hold : time := 0 ns;
begin
setup_check : process (clk)
begin
if clk'event and clk = '1' then
t_hold <= now;
assert (now - t_setup) >= set_delay REPORT "Setup time violated." SEVERITY note;
end if;
end process setup_check;
hold_check: process (d)
begin
if d'event then
t_setup <= now;
assert (now - t_hold) >= hold_delay REPORT "Hold time violated." SEVERITY note;
end if;
end process hold_check;
end block time_debug;
What this does is it saves the time of the last positive clock edge and the time of the last input change. Now every time either d changes or the clock rises the delays are checked. I couldn't verify this in a compiler because I don't have one set up here, but I'll gladly do so if there are problems with this solution.
I personally like to keep debug stuff in a dedicated block, so I can easily keep track of which signals I only use for debugging and can later easily remove them. It also makes it easier to add all debug signals to e.g. modelsim's wave screen.
Also note that these asserts and reports will only work in simulation.
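As an untested sketch of how the check might be exercised, assuming the block above is placed inside the dff architecture (the testbench name, the 10 ns clock and the stimulus times are my own choices), changing d only 1 ns before a rising edge should trigger the setup report:
ENTITY dff_check_tb IS
END dff_check_tb;
ARCHITECTURE sim OF dff_check_tb IS
SIGNAL d, set, rst, clk : BIT := '0';
SIGNAL q, nq : BIT;
BEGIN
dut : ENTITY work.dff
PORT MAP (d => d, set => set, rst => rst, clk => clk, q => q, nq => nq);
clk_gen : PROCESS
BEGIN
clk <= '0'; WAIT FOR 5 NS;
clk <= '1'; WAIT FOR 5 NS; -- rising edges at 5, 15, 25 ... ns
IF now > 60 NS THEN
WAIT;
END IF;
END PROCESS;
stimulus : PROCESS
BEGIN
WAIT FOR 14 NS;
d <= '1'; -- 1 ns before the edge at 15 ns: violates the 3 ns set_delay
WAIT FOR 30 NS;
WAIT;
END PROCESS;
END sim;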