I apologize for the cumbersome wording of the title question, but can't think of any other way to ask it concisely. I'm aware that this is quite different from, yet analogous to, the pass-by-value vs reference dichotomy. I am wondering which of the following snippets of code would behave identically:
Declarations:
Port (source : in STD_LOGIC);
...
signal destination : STD_LOGIC := '0';
#1 - Sets the "value" on clock
process (clk) begin
    if falling_edge(clk) then
        if source = '1' then
            destination <= '1';
        else
            destination <= '0';
        end if;
    end if;
end process;
#2 - Sets a "reference" universally
-- Top-Level
destination <= source;
#3 - Sets the (value? reference?) on clock
process (clk) begin
    if falling_edge(clk) then
        destination <= source;
    end if;
end process;
Snippet #1 will change the value of the destination to match the source every falling edge of the clock cycle. In #2, the destination will become 'connected' to the source, and changes in the source will be followed by the destination regardless of the clock cycle. I am unsure about #3; does it behave like #1 or #2, taking the value of the source and putting it in the destination, or linking the two together as in the top-level assignment?
#1
This process has a sensitivity list. When there is an event (a change in value) on any signal in a sensitivity list, the process starts executing. This process has one signal in the sensitivity list - clk - so when there is a change on that signal, the process starts executing. If that change were a rising edge, then the condition in the if statement evaluates to FALSE and so no more lines of code are executed and the process suspends (goes to sleep). If that change were a falling edge, however, then the condition in the if statement evaluates to TRUE and so, depending on the value of source, one of two lines of code is executed that assigns a value to destination. Here's the really important bit:
When a line of code containing a signal assignment (<=) is executed in VHDL (and the effect is to change the value of the target signal - the signal on the LHS of the assignment), an event is placed on what I shall call the event queue.
The event queue is the simulator's ToDo list and events placed on it will be actioned at some future time. If there is no explicit delay specified then that event will be actioned on the next simulation cycle (or delta cycle). Bear with...
So, assuming that the effect of executing the lines containing signal assignments is to change the value of destination, then the effect of executing those lines is to place an event on the event queue. The process then suspends.
Once all the processes have suspended, the simulator takes a look at its event queue and moves the simulation time forward to the time of the next event on the queue. In this case, that event will have been scheduled for the next simulation cycle, so the simulator time advances one simulation cycle and the events for that cycle are actioned. If any sensitivity list contains a signal that has been caused to change by actioning one of those events, then the whole cycle starts again: processes get executed, lines of code containing signal assignments get executed, new events get placed on the event queue for some future time...
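The cycle just described can be sketched as a toy discrete-event simulator. This is an illustrative Python sketch only (the class and method names are invented, and real VHDL semantics are richer): executing a signal assignment merely places an event on the event queue, and the target's value only changes when that event is actioned on the next delta cycle.

```python
from collections import deque

class Signal:
    """Toy signal: just a current value."""
    def __init__(self, value):
        self.value = value

class Simulator:
    def __init__(self):
        self.event_queue = deque()   # the simulator's "ToDo list"

    def schedule(self, signal, value):
        # Executing `target <= value` does NOT update the target now;
        # it places an event on the event queue.
        self.event_queue.append((signal, value))

    def delta_cycle(self):
        # Action every event scheduled for the next simulation cycle.
        events, self.event_queue = self.event_queue, deque()
        for signal, value in events:
            signal.value = value

sim = Simulator()
source = Signal('1')
destination = Signal('0')

# The process executes: destination <= source
sim.schedule(destination, source.value)  # value captured at execution time
assert destination.value == '0'          # no change until the event is actioned
source.value = '0'                       # a later change to source...
sim.delta_cycle()
assert destination.value == '1'          # ...does not affect the pending event
```

Note that the value scheduled is the one source had when the assignment executed; a later change to source does not alter the pending event.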
#3
From the point of view of this discussion, case #3 is exactly the same as case #1. A falling edge on clk will cause a line of code containing a signal assignment to be executed and, if a change of value on the target signal (destination) is required then an event will get placed on the event queue to action that change on the next simulation cycle.
#2
Case #2 clearly does not depend on clk, so is different. However, #2 is an example of a concurrent signal assignment. This is effectively an implicit process: you get an implicit sensitivity list, which contains any signal on the RHS of the signal assignment. In this case, the implicit sensitivity list contains one signal - source. So, if there is an event on source then the (implicit) process starts executing, resulting in a line of code containing a signal assignment being executed and hence resulting in an event being placed on the event queue (which will be scheduled for the next simulation cycle).
So, to answer your question: "do VHDL signal assignments set destination value or reference?"
The value that is placed on the event queue (ie the value to which the target signal is to be driven) is the value that was evaluated at the time the line of code containing the signal assignment was executed.
So, in case #3, the value to assign to destination which is placed on the event queue is the value that the source signal had at the time the line of code containing the signal assignment was executed. If you want to call that 'pass by copy' then do so, but I wouldn't take that analogy very far.
Note: The LRM - the VHDL standard - uses the terminology Projected Output Waveform for what I call the event queue.
Related
The MSDN page for SleepConditionVariableCS states that
Condition variables are subject to spurious wakeups (those not associated with an explicit wake) and stolen wakeups (another thread manages to run before the woken thread). Therefore, you should recheck a predicate (typically in a while loop) after a sleep operation returns.
As a result, the conditional wait has to be enclosed in a while loop, i.e.
while (check_predicate())
{
    SleepConditionVariableCS(...);
}
If I were to use events instead of Conditional Variables can I do away with the while loop while waiting (WaitForSingleObject) for the event to be signaled?
For WaitForSingleObject(), there are no spurious wakeups, so you can eliminate the loop.
If you use WaitForMultipleObjectsEx() with bAlertable=TRUE, MsgWaitForMultipleObjects() with a wake mask, or MsgWaitForMultipleObjectsEx() with bAlertable=TRUE or a wake mask, then the wait can end on other conditions before the event is actually signaled.
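For comparison, Python's threading.Event behaves like a Windows event in this respect (a sketch, not Win32 code): a plain wait needs no predicate loop, because it only returns true once the event has actually been set.

```python
import threading

done = threading.Event()

def worker():
    # ... do some work, then signal completion; the state stays signaled
    done.set()

t = threading.Thread(target=worker)
t.start()

# No while loop needed: wait() returns True only once the event is set
# (or False if the timeout expires first).
signaled = done.wait(timeout=5.0)
t.join()
assert signaled
```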
I have managed to implement a simulation timeout in VHDL. If processes run for longer than MaxRuntime, they get 'killed'.
Unfortunately, this does not work the other way around. If my simulation finishes before MaxRuntime, everything still waits in the last wait statement until MaxRuntime.
I found that it's possible to combine wait on, wait for and wait until statements into one.
My current code, in snippets (a full example would be too long):
package sim is
    shared variable IsFinalized : BOOLEAN := FALSE;
    procedure initialize(MaxRuntime : TIME := TIME'high);
    procedure finalize;
end package;

package body sim is
    procedure initialize(MaxRuntime : TIME := TIME'high) is
    begin
        -- do init stuff
        if (MaxRuntime /= TIME'high) then
            wait on IsFinalized for MaxRuntime;
            finalize;
        end if;
    end procedure;

    procedure finalize is
    begin
        if (not IsFinalized) then
            IsFinalized := TRUE;
            -- do finalize stuff:
            -- -> stop all clocks
            -- write a report
        end if;
    end procedure;
end package body;
entity test is
end entity;

architecture tb of test is
begin
    initialize(200 us);

    process
    begin
        -- simulate work
        wait for 160 us;
        finalize;
    end process;
end architecture;
The wait statement is not exited when IsFinalized changes. My simulation runs for circa 160 us. If I set MaxRuntime to 50 us, the simulation stops at circa 50 us (plus some extra cycles until each process has noticed the stop condition). When I set MaxRuntime to 200 us, the simulation exits at 200 us and not after 162 us.
How can I exit/abort the wait statement?
Why can't I wait on a variable?
I don't want to use a command line switch for a simulator to set the max execution time.
You cannot wait on a variable for the reasons given by user1155120. So, instead you need to use a signal. (A signal in a package is a global signal).
Unfortunately, even though the global signal is in scope, it still needs to be an output parameter of the procedure, which is ugly. Not only that: because your code will then be driving the signal from more than one place, this global signal needs to be a resolved type, e.g. std_logic. Which is also a bit ugly.
Here is a version of your code with the shared variable replaced by a signal, the boolean type replaced by a std_logic and the global signal added as output parameters:
library IEEE;
use IEEE.std_logic_1164.all;

package sim is
    signal IsFinalized : std_logic := '0';
    procedure initialize(signal f : out std_logic; MaxRuntime : TIME := TIME'high);
    procedure finalize (signal f : out std_logic);
end package;

package body sim is
    procedure initialize(signal f : out std_logic; MaxRuntime : TIME := TIME'high) is
    begin
        -- do init stuff
        if (MaxRuntime /= TIME'high) then
            wait on IsFinalized for MaxRuntime;
            finalize(f);
        end if;
    end procedure;

    procedure finalize (signal f : out std_logic) is
    begin
        if (IsFinalized = '0') then
            f <= '1';
            -- do finalize stuff:
            -- -> stop all clocks
            -- write a report
            report "Finished!";
        end if;
    end procedure;
end package body;
use work.sim.all;

entity test is
end entity;

architecture tb of test is
begin
    initialize(IsFinalized, 200 us);

    process
    begin
        -- simulate work
        wait for 160 us;
        finalize(IsFinalized);
        wait;
    end process;
end architecture;
http://www.edaplayground.com/x/VBK
So it sounds like you want to terminate the simulation either at the timeout, or when the test finishes before the timeout. It is correct that the simulator will stop when the event queue is empty, but that is pretty hard to achieve, as you are also experiencing.
VHDL-2008 has introduced stop and finish in the std.env package for stopping or terminating the simulation.
In VHDL-2002, a common way to stop simulation is an assert with severity FAILURE:
report "OK ### Sim end: OK :-) ### (not actual failure)" severity FAILURE;
This method for simulation stop is based on the fact that simulators (e.g. ModelSim) will usually stop simulation when an assert with severity FAILURE occurs.
How can I exit/abort the wait statement?
Why can't I wait on a variable?
All sequential statements in VHDL are atomic.
A wait statement waits until a resumption condition is met: an event on a signal, a condition on signals becoming true, or the expiry of a simulation timeout.
What would you use to select the statement a process resumed at if you could exit/abort it? Would it resume? You'd invalidate the execution of the sequence of statements. VHDL doesn't do exception handling, it's event driven.
And here it's worth remembering that everything that executes in simulation is a process: function calls are expressions, and concurrent statements (including procedure calls) are devolved into processes during elaboration, potentially with superimposed block statements limiting scope.
There's this basic difference between variables and signals (IEEE Std 1076-2008 Appendix I Glossary):
signal: An object with a past history of values. A signal may have multiple drivers, each with a current value and projected future values. The term signal refers to objects declared by signal declarations or port declarations. (6.4.2.3)
variable: An object with a single current value. (6.4.2.4)
You can't wait on something that doesn't have a future value, or a little more simply variables don't signal (from a dictionary - an event or statement that provides the impulse or occasion for something specified to happen).
Simulation is driven by signal events depending on future values. It defines the progression of simulation time. Time is the basis for discrete event simulation.
And about now you could wonder whether someone claiming that VHDL is a general purpose programming language would be telling you the truth. How could it maintain the ability to formally specify the operation of hardware by discrete time events if you had the ability to break and resume a process arbitrarily?
And all this tells you is that you might consider using signals instead of shared variables.
Morton's stop and finish procedures are found in std.env.
From a test automation point of view it's important to have a time-out that can force a test to a stop using stop/finish/assert failure. Even if your intention is to make everything terminate by not producing more events there is a risk that there is a bug causing everything to hang. If that happens in the first of your tests in a longer nightly test run you're wasting a lot of time.
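Outside VUnit, the same watchdog idea can be sketched in Python (a hypothetical helper, not VUnit's implementation): run each test with a timeout so a hung test is abandoned and the rest of the run proceeds.

```python
import threading, time

def run_with_watchdog(test, timeout):
    """Run a test in its own thread; abandon it if the watchdog fires."""
    t = threading.Thread(target=test, daemon=True)
    t.start()
    t.join(timeout)
    return not t.is_alive()   # True if the test finished before the timeout

def test_that_hangs():
    time.sleep(60)            # simulate a hung test case

def test_that_goes_well():
    time.sleep(0.01)          # simulate a short, healthy test case

results = [run_with_watchdog(test_that_hangs, timeout=0.1),
           run_with_watchdog(test_that_goes_well, timeout=5.0)]
assert results == [False, True]   # the hung test is abandoned; the next still runs
```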
If you use VUnit it would work like this
library vunit_lib;
context vunit_lib.vunit_context;

entity tb_test is
    generic (runner_cfg : runner_cfg_t);
end tb_test;

architecture tb of tb_test is
begin
    test_runner : process
    begin
        test_runner_setup(runner, runner_cfg);
        while test_suite loop
            if run("Test that hangs") then
                -- Simulate hanging test case
                wait;
            elsif run("Test that goes well") then
                -- Simulate test case running for 160 us
                wait for 160 us;
            end if;
        end loop;
        test_runner_cleanup(runner); -- Normal forced exit point
    end process test_runner;

    test_runner_watchdog(runner, 200 us);
end;
The first test case which hangs is terminated by the watchdog after 200 us such that the second can run.
A problem that you may encounter when forcing the testbench to stop with the test_runner_cleanup procedure is if you have one or more additional test processes that have more things to do when the test_runner process reaches the test_runner_cleanup procedure call. For example, if you have a process like this
another_test_process : process is
begin
    for i in 1 to 4 loop
        wait for 45 us;
        report "Testing something";
    end loop;
    wait;
end process;
it will not be allowed to run all four iterations. The last iteration is supposed to run at 180 us but test_runner_cleanup is called at 160 us.
So there is a need to synchronize the test_runner and another_test_process processes. You can create a package with a global resolved signal to fix this (as discussed before), but VUnit already provides such a signal, which you can use to save some time. runner is a record containing, among other things, the current phase of the VUnit testbench. When calling test_runner_cleanup it enters the phase with the same name and then it moves to the test_runner_exit phase, in which the simulation is forced to a stop. Any process can prevent VUnit from entering or exiting its phases by placing a temporary lock. The updated another_test_process will prevent VUnit from exiting the test_runner_cleanup phase until it's done with all its iterations.
another_test_process : process is
begin
    lock_exit(runner, test_runner_cleanup);
    for i in 1 to 4 loop
        wait for 45 us;
        report "Testing something";
    end loop;
    unlock_exit(runner, test_runner_cleanup);
    wait;
end process;
Any number of processes can place a lock on a phase and the phase transition is prevented until all locks have been unlocked. You can also get the current phase with the get_phase function which may be useful in some cases.
If you're developing reusable components you should probably not use this technique. It saves you some code but it also makes your components dependent on VUnit and not everyone uses VUnit. Working on that :-)
DISCLAIMER: I'm one of the authors for VUnit.
In the case of Events on Windows, if no threads are waiting, the event object's state remains signaled. What happens in the case of pthread_cond_signal if no threads are blocked?
For pthread_cond_signal()... if there are no threads waiting at that exact moment, nothing happens, nothing at all -- in particular, the signal is completely forgotten, it is as if it had never happened.
IMHO POSIX "condition variables" are badly named, because the name suggests that the "condition variable" has a value, and it is natural to suppose that the value might collect pending signals. Nothing could be further from the truth. (For that you need "semaphores".)
The "state" associated with the "condition" is what matters. A thread which needs to wait for the "state" to have a particular value will:
pthread_mutex_lock(foo_mutex);
while (...state not as required...)
    pthread_cond_wait(foo_cond, foo_mutex);
...perhaps update the state...
pthread_mutex_unlock(foo_mutex);
A thread which updates the "state" such that some other thread may now continue, will:
pthread_mutex_lock(foo_mutex);
...update the state...
if (...there may be a waiter...)
    pthread_cond_signal(foo_cond);
pthread_mutex_unlock(foo_mutex);
NB: the standard explicitly allows pthread_cond_signal() to wake one, some or all waiters... hence the while (...) loop in the waiter... and the waiter that "claims" the signal needs to update the state so that any other woken threads will loop in that while (...).
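The same discipline looks like this with Python's threading.Condition (a sketch, not POSIX code), which carries the identical requirement to recheck the state in a loop and to update it when claiming the signal:

```python
import threading

cond = threading.Condition()
state_ready = False            # the "state" associated with the condition

def waiter(results):
    global state_ready
    with cond:                          # pthread_mutex_lock
        while not state_ready:          # recheck: wakeups may be spurious/stolen
            cond.wait()                 # pthread_cond_wait (releases the lock)
        state_ready = False             # claim the state so others keep looping
        results.append("consumed")
                                        # leaving `with` = pthread_mutex_unlock

def signaller():
    global state_ready
    with cond:
        state_ready = True              # update the state first...
        cond.notify()                   # ...then wake (at least) one waiter

results = []
t = threading.Thread(target=waiter, args=(results,))
t.start()
signaller()
t.join()
assert results == ["consumed"]
```

Note that if signaller ran before the waiter ever waited, nothing would be lost here, because the waiter checks the state itself, not the (forgettable) signal.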
I have a state machine for my NPCs that is structured like the following:
execute state
[[[pred1 pred2....] state1]
[[pred3 pred4....] state2]
[[pred5 pred6....] staten]]
What happens is, after the current state completes, it starts iterating through the states/predicates list, and as soon as it finds a list of predicates that all return true, it jumps to the state associated with them.
Certain events can happen during all states, say player commands npc to go somewhere. Just like any other state transition I can check for a predicate and change state but adding the same piece of code to every state seems a bit lame. So I was wondering how people deal with events in state machines?
Just make a data structure like:
state1 -> event1, event2
state2 -> event1
state3 -> event2, event3
By the way, what you have outlined doesn't look like a state machine. In a state machine, the next state depends on the previous one, so it would look like:
[state1, condition1] -> state2
[state1, condition2] -> state3
...
(condition is your set of predicates). You must also somehow assure that the transition is unique, i.e. that condition1 and condition2 cannot be fulfilled at the same time. Or take the first one which is true.
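That table-driven form, taking the first transition whose condition holds, can be sketched in Python (all state names and predicates here are hypothetical):

```python
# current state -> list of (predicate, next_state); the first true predicate wins
transitions = {
    "idle":  [(lambda ctx: ctx["player_near"], "greet"),
              (lambda ctx: ctx["hungry"],      "eat")],
    "greet": [(lambda ctx: not ctx["player_near"], "idle")],
    "eat":   [(lambda ctx: not ctx["hungry"],      "idle")],
}

def step(state, ctx):
    for predicate, next_state in transitions.get(state, []):
        if predicate(ctx):
            return next_state    # take the first transition that fires
    return state                 # no predicate true: stay in the current state

state = "idle"
state = step(state, {"player_near": False, "hungry": True})
assert state == "eat"
state = step(state, {"player_near": False, "hungry": False})
assert state == "idle"
```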
Try to use pattern called "State" - http://en.wikipedia.org/wiki/State_pattern
Define an abstract superclass (e.g. class AbstractState) containing the code of the methods common to all states; all classes that represent real states must be subclassed from it.
When you have a particular action that can be undertaken from all possible states, do as you would do in mathematics : factor it!
i.e.: 4*10 + 5*10 + 6*10 + 4*20 + 5*20 + 6*20 = (4+5+6)*(10+20)
i.e.: your (Oblivion) NPC can be sleeping, or at work, or eating. In all three cases, it must react to some events (be talked to, be attacked, ...). Build two FSMs: Daily activity = [sleep, work, eat]. Reaction state = {Undisturbed, Talked to, Attacked, ...}. Then forward the events only to the second FSM.
And you can keep on factoring FSMs, as long as you're factoring independent things. For example, the mood of the NPC (happy, neutral, angry, ...) is independent from its daily activity and its reaction state. I mean "independent" in the sense "the NPC can be at work, talked to, and angry, and there's no contradiction". Of course FSMs influence each other; NPCs that are attacked tend to be angry.
So you could add a third FSM for the Mood state, instead of incorporating it in every single node
Use a hierarchical state machine (HSM). HSMs are exactly designed to factor out the common behavior into superstates and make it reusable for all substates. To learn more about HSMs, you can start with the Wikipedia article "UML State Machine" at http://en.wikipedia.org/wiki/UML_state_machine.
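The superstate idea can be sketched in a few lines of Python (hypothetical states, not a full HSM framework): events a substate doesn't handle fall through to the superstate, so the common handling is written once.

```python
class State:
    def handle(self, event):
        return None                       # unhandled at the top of the hierarchy

class AliveSuperstate(State):
    """Superstate holding behavior common to all substates."""
    def handle(self, event):
        if event == "player_command":
            return "obey_player"          # written once, reused by every substate
        return super().handle(event)

class Sleeping(AliveSuperstate):
    def handle(self, event):
        if event == "noise":
            return "wake_up"
        return super().handle(event)      # defer everything else upward

class Working(AliveSuperstate):
    pass                                  # inherits the common handling unchanged

assert Sleeping().handle("noise") == "wake_up"
assert Sleeping().handle("player_command") == "obey_player"
assert Working().handle("player_command") == "obey_player"
```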
I wrote a multi-threaded Windows application where thread:
A - is a windows form that handles user interaction and processes the data from B.
B - occasionally generates data and passes it to A.
A thread-safe queue is used to pass the data from thread B to A. The enqueue and dequeue functions are guarded using a Windows critical section object.
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue. The function checks to make sure the call to PostMessage is executed successfully and repeatedly calls PostMessage if it is not successful (PostMessage has yet to fail).
This worked well for quite some time until one specific computer started to lose the occasional message. By lose I mean that, PostMessage returns successfully in B but A never receives the message. This causes the software to appear frozen.
I have already come up with a couple of acceptable workarounds. I am interested in knowing why Windows is losing these messages and why this is only happening on the one computer.
Here are the relevant portions of the code.
// Only called by B
procedure TSharedQueue.Enqueue(AItem: TSQItem);
var
  B: boolean;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    FLast.FNext := AItem;
    FLast := AItem;
  end
  else
  begin
    FFirst := AItem;
    FLast := AItem;
  end;

  if (FCount = 0) or (FCount mod 10 = 0) then // just in case a message is lost
    repeat
      B := PostMessage(FConsumer, SQ_HAS_DATA, 0, 0);
      if not B then
        Sleep(1000); // this line of code has never been reached
    until B;

  Inc(FCount);
  LeaveCriticalSection(FQueueLock);
end;
// Only called by A
function TSharedQueue.Dequeue: TSQItem;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    Result := FFirst;
    FFirst := FFirst.FNext;
    Result.FNext := nil;
    Dec(FCount);
  end
  else
    Result := nil;
  LeaveCriticalSection(FQueueLock);
end;
// procedure called when SQ_HAS_DATA is received
procedure TfrmMonitor.SQHasData(var AMessage: TMessage);
var
  Item: TSQItem;
begin
  while FMessageQueue.Count > 0 do
  begin
    Item := FMessageQueue.Dequeue;
    // use the Item somehow
  end;
end;
Is FCount also protected by FQueueLock? If not, then your problem lies with FCount being incremented after the posted message is already processed.
Here's what might be happening:
B enters critical section
B calls PostMessage
A receives the message but doesn't do anything since FCount is 0
B increments FCount
B leaves critical section
A sits there like a duck
A quick remedy would be to increment FCount before calling PostMessage.
Keep in mind that things can happen quicker than one would expect (i.e. the message posted with PostMessage being caught and processed by another thread before you have a chance to increment FCount a few lines later), especially when you're in a true multi-threaded environment (multiple CPUs). That's why I asked earlier if the "problem machine" had multiple CPUs/cores.
An easy way to troubleshoot problems like these is to scaffold the code with additional logging to record every time you enter a method, enter/leave a critical section, etc. Then you can analyze the log to see the true order of events.
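The suspected interleaving can be replayed deterministically in a Python sketch (single-threaded on purpose, just to expose the ordering): if the consumer runs between the notification and the count increment, it finds a count of zero and goes back to sleep, and the item is stranded.

```python
def run(increment_before_post):
    count = 0
    handled = 0

    def consumer():              # what A does when SQ_HAS_DATA arrives
        nonlocal count, handled
        while count > 0:
            count -= 1
            handled += 1

    if increment_before_post:
        count += 1               # fixed order: publish the item first...
        consumer()               # ...so the posted message finds it
    else:
        consumer()               # buggy order: A runs on the message, sees 0
        count += 1               # B increments too late; nothing wakes A again
    return handled

assert run(increment_before_post=False) == 0   # the message is effectively lost
assert run(increment_before_post=True) == 1    # the item gets processed
```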
On a separate note, a nice little optimization that can be done in a producer/consumer scenario like this is to use two queues instead of one. When the consumer wakes up to process the full queue, you swap the full queue with an empty one and just lock/process the full queue while the new empty queue can be populated without the two threads trying to lock each other's queues. You'd still need some locking in the swapping of the two queues though.
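That two-queue (swap) optimization can be sketched in Python (hypothetical class): the consumer swaps the shared list for an empty one while holding the lock, then processes its private copy with the lock released.

```python
import threading

class SwappingQueue:
    def __init__(self):
        self.lock = threading.Lock()
        self.incoming = []        # producers append here under the lock

    def enqueue(self, item):
        with self.lock:
            self.incoming.append(item)

    def drain(self):
        # Swap under the lock (cheap); process outside it (possibly slow),
        # so producers are never blocked while the consumer works.
        with self.lock:
            full, self.incoming = self.incoming, []
        return full

q = SwappingQueue()
for i in range(3):
    q.enqueue(i)
assert q.drain() == [0, 1, 2]
assert q.drain() == []            # already swapped out; nothing left
```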
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue.
Are you locking the message queue before checking the queue size and issuing the PostMessage? You may be experiencing a race condition where you check the queue and find it non-empty when in fact A is processing the very last message and is about to go idle.
To see if you're in fact experiencing a race condition and not a problem with PostMessage, you could switch to using an event. The worker thread (A) would wait on the event instead of waiting for a message. B would simply set that event instead of posting a message.
This worked well for quite some time until one specific computer started to lose the occasional message.
By any chance, does the number of CPUs or cores that this specific computer have different than the others where you see no problem? Sometimes when you switch from a single-CPU machine to a machine with more than one physical CPU/core, new race conditions or deadlocks may arise.
Could there be a second instance unknowingly running and eating the messages, marking them as handled?