Boost statechart, communication between separate FSMs

Let's say I have created several separate FSM classes inheriting from statechart. Then I instantiate those objects, and I would like them to be able to trigger events in each other; for example, the first FSM would enter an "ON" state and would trigger an event in the second FSM (something like process_event(EvSomething())).
What would be the best method to do that?
Thank you very much,
Fabrizio

The main motivation for Asynchronous State Machines is exactly the scenario you describe. So, I would suggest you convert your machines into asynchronous ones. See here for an example.
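A minimal, untested sketch of the asynchronous shape (Machine2, Idle and EvSomething are placeholder names; in a real design the first machine would hold the second machine's processor_handle plus a reference to its scheduler, and queue events from inside its states):

    // Both machines can live in one fifo_scheduler; events are queued,
    // not processed synchronously with process_event().
    #include <boost/statechart/asynchronous_state_machine.hpp>
    #include <boost/statechart/fifo_scheduler.hpp>
    #include <boost/statechart/event.hpp>
    #include <boost/statechart/simple_state.hpp>
    #include <boost/statechart/transition.hpp>
    #include <boost/intrusive_ptr.hpp>

    namespace sc = boost::statechart;

    struct EvSomething : sc::event<EvSomething> {};

    struct Idle;
    struct Machine2 : sc::asynchronous_state_machine<Machine2, Idle>
    {
        Machine2(my_context ctx) : my_base(ctx) {}
    };

    struct Idle : sc::simple_state<Idle, Machine2>
    {
        typedef sc::transition<EvSomething, Idle> reactions;  // demo self-transition
    };

    int main()
    {
        sc::fifo_scheduler<> scheduler(false);  // false: operator() returns on empty queue
        sc::fifo_scheduler<>::processor_handle m2 =
            scheduler.create_processor<Machine2>();
        scheduler.initiate_processor(m2);

        // This replaces the direct process_event(EvSomething()) call:
        boost::intrusive_ptr<const sc::event_base> pEv(new EvSomething());
        scheduler.queue_event(m2, pEv);

        scheduler();  // process queued work; a real app runs this (blocking) in its own thread
        return 0;
    }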

Related

Novice question about structuring events in Tcl/Tk

If one is attempting to build a desktop program with a semi-complex GUI, especially one in which users can open multiple instances of identical GUI components (such as having a "project" GUI and permitting users to open multiple projects concurrently within the main window), is it good practice to push the event listeners further up the widget hierarchy and use the event detail to determine which widget the event took place on, as opposed to placing event listeners on each individual widget?
For example, in doing something similar in a web browser, there were no event listeners on any individual project GUI elements; the listeners were on the parent container that held the multiple instances of each project GUI. A project had multiple tabs within its GUI, but only one tab was visible within a project at a time, and only one project was visible at any one time. So it was fairly easy to use classes on the HTML elements, and then the e.matches() method on the event.target, to act upon the currently visible tab within the currently visible project, independently of which project happened to be visible. Without any real performance testing, my unqualified impression as an amateur, formed mostly from reading information that wasn't very exact, was that having as few event listeners as possible was more efficient.
I read recently in John Ousterhout's book that Tk applications can have hundreds of event handlers and wondered whether or not attempting to limit the number of them as described above really makes any difference in Tcl/Tk.
My purpose in asking this question is solely to understand events better in order to start off the coding of my Tcl/Tk program correctly and not have to re-code a bunch of poorly structured event listeners. I'm not attempting to dispute anything written in the mentioned book and don't know enough to do so if I wanted to.
Thank you for any guidance you may be able to provide.
Having hundreds of event handlers is usually just a sign that there are a lot of different events possibly getting sent around. Since you usually (but not always) try to make the binding as specific as possible, the actual event handler is usually really small, but it might call a procedure to do the work. That tends to work out well in practice. My own rule of thumb is that if it is not a simple call then I'll put in a helper procedure; it's easier to debug them that way. (The main exception to my rule is if I want to generate a break.)
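That rule of thumb in code (openItem is a made-up helper):

    bind .listbox <Double-1> {
        openItem %W [%W index @%x,%y]
        break
    }
    proc openItem {w index} {
        # the real work (and the easy debugging) happens here
        puts "opening [$w get $index]"
    }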
There are four levels you can usually bind on (plus more widget-specific ones for canvas and text); a quick sketch of them follows the list:
The individual widget. This is the one that you'll use most.
The widget class. This is mostly used by Tk; you'll usually not want to change it because it may alter the behaviour of code that you just use. (For example, don't alter the behaviour of buttons!)
The toplevel containing the widget. This is ideal for hotkeys. (Be very careful though; some bindings at this level can be trouble. <Destroy> is the one that usually bites.) Toplevel widgets themselves don't have this, because of rule 1.
all, which does what it says, and which you almost never need.
You can define others with bindtags… but it's usually not a great plan as it is a lot of work.
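The common levels in practice (submitForm and showHelp are invented helpers):

    bind .form.entry <Return> {submitForm %W}  ;# 1: the individual widget
    # 2: the widget class, e.g. "bind Button ..."; best left alone
    bind . <Control-q> {destroy .}             ;# 3: the toplevel; good for hotkeys
    bind all <F1> {showHelp}                   ;# 4: all; almost never needed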
The other thing to bear in mind is that Tk supports virtual events, such as <<FooBarHappened>>. They have all sorts of uses, but the main one to note in a complex application is defining higher-level events that are occasionally triggered by a sequence of low-level events, and which widgets other than the originator may wish to be notified of.
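For example (a sketch; <<SelectionDone>> is an invented event name):

    # the originator announces a high-level happening on the toplevel...
    bind .projects.canvas <ButtonRelease-1> {
        event generate . <<SelectionDone>> -when tail
    }
    # ...and any interested code reacts, decoupled from the canvas:
    bind . <<SelectionDone>> {puts "selection finished"}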

Finite State Machine / Workflow for Marionette

I have seen many FSM implementations and workflow implementations in JavaScript, but I couldn't find anything that exists for the Marionette framework, or that goes hand in hand with it.
I am afraid Marionette may not need such an implementation, or it may be overkill... given that it may already be doing this in one of its components.
If I need to implement an FSM, do I need to look beyond Marionette, or can I make some simple tweak to one of its components and get the work done?
Thoughts?
The simplest FSM is an object which stores state and emits a signal upon state changes. If this is all you need, you can use a Backbone Model for it. If you need a little more complexity, here are a few Backbone-specific ones:
backbone.statemachine
backbone.fsm
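For instance, a minimal sketch of the Backbone-Model-as-FSM idea (nothing Marionette-specific; all names invented):

    var Door = Backbone.Model.extend({
        defaults: { state: 'closed' },
        open: function () { this.set('state', 'open'); }
    });

    var door = new Door();
    // any number of observers can watch the state changes:
    door.on('change:state', function (model, state) {
        console.log('door is now ' + state);
    });
    door.open();  // -> "door is now open"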

Is there a preferred way to design signal or event APIs in Go?

I am designing a package where I want to provide an API based on the observer pattern: that is, there are points where I'd like to emit a signal that will trigger zero or more interested parties. Those interested parties shouldn't necessarily need to know about each other.
I know I can implement an API like this from scratch (e.g. using a collection of channels or callback functions), but was wondering if there was a preferred way of structuring such APIs.
In many of the languages or frameworks I've played with, there have been standard ways to build these APIs so that they behave the way users expect: e.g. the g_signal_* functions for glib-based applications, events and addEventListener() for JavaScript DOM apps, or multicast delegates for .NET.
Is there anything similar for Go? If not, is there some other way of structuring this type of API that is more idiomatic in Go?
I would say that a goroutine receiving from a channel is, to a certain extent, an analogue of an observer. Thus an idiomatic way to expose events in Go would be, IMHO, to return channels from a package (function). Another observation is that callbacks are not used very often in Go programs; one of the reasons is the existence of the powerful select statement.
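As a sketch of what returning a channel from a package function can look like (the package and event type are invented):

    package thermo

    // Alerts returns a channel on which over-temperature readings are
    // published; callers just range over it.
    func Alerts() <-chan float64 {
        ch := make(chan float64)
        go func() {
            defer close(ch)
            for _, t := range []float64{98.7, 105.2} { // stand-in for real readings
                if t > 100 {
                    ch <- t
                }
            }
        }()
        return ch
    }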
As a final note: some people (me too) consider GoF patterns as Go antipatterns.
Go gives you a lot of tools for designing a signal api.
First you have to decide a few things:
Do you want a push or a pull model? E.g., does the publisher push events to the subscribers, or do the subscribers pull events from the publisher?
If you want a push system, then having the subscribers give the publisher a channel to send messages on would work really well. If you want a pull model, then a message box guarded with a mutex would work; a sketch of that follows. Beyond that, without knowing more about your requirements, it's hard to give much more detail.
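For instance (Event and the other names are invented):

    package mailbox

    import "sync"

    type Event struct{ Name string }

    // Mailbox is the "message box guarded with a mutex": the publisher
    // posts, and subscribers pull whenever they like.
    type Mailbox struct {
        mu     sync.Mutex
        latest Event
    }

    func (m *Mailbox) Post(e Event) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.latest = e
    }

    func (m *Mailbox) Latest() Event {
        m.mu.Lock()
        defer m.mu.Unlock()
        return m.latest
    }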
I needed an "observer pattern" type thing in a couple of projects. Here's a reusable example from a recent project.
It's got a corresponding test that shows how to use it.
The basic theory is that an event emitter calls Submit with some piece of data whenever something interesting occurs. Any client that wants to be aware of that event will Register a channel from which it reads the event data. The channel you registered can be used in a select loop, or you can read it directly (or buffer and poll it).
When you're done, you Unregister.
It's not perfect for all cases (e.g. I may want a force-unregister type of event for slow observers), but it works where I use it.
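The shape described above, as a hedged sketch (this is the pattern, not the linked project's actual code):

    package events

    import "sync"

    type Broadcaster struct {
        mu   sync.Mutex
        subs map[chan interface{}]bool
    }

    func NewBroadcaster() *Broadcaster {
        return &Broadcaster{subs: make(map[chan interface{}]bool)}
    }

    func (b *Broadcaster) Register(ch chan interface{}) {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.subs[ch] = true
    }

    func (b *Broadcaster) Unregister(ch chan interface{}) {
        b.mu.Lock()
        defer b.mu.Unlock()
        delete(b.subs, ch)
    }

    // Submit hands the event to every registered channel.
    func (b *Broadcaster) Submit(ev interface{}) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for ch := range b.subs {
            ch <- ev // a slow, unbuffered subscriber blocks everyone here
        }
    }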
I would say there is no standard way of doing this, because channels are built into the language. There is no channel library with standard ways of doing things with channels; there are simply channels. Having channels as built-in first-class objects frees you from having to work with standard techniques and lets you solve problems in the simplest, most natural way.
There is a basic Golang version of Node's EventEmitter at https://github.com/chuckpreslar/emission
See http://itjumpstart.wordpress.com/2014/11/21/eventemitter-in-go/

Developing multi-use VHDL modules

I've recently just started learning VHDL again after not having touched it for years. The hope is to use it to develop a control system with various sensor interfaces, etc. I'm fairly competent in embedded C, which has been my go-to language for any embedded projects I've had up until this point, which makes learning VHDL all the more frustrating.
Basically, my issue right now, which I see as the biggest barrier to progressing with my intended project, is that I don't know how to develop and incorporate a module that I can just pass variables to and call (like a function in C) to carry out some task, e.g. displaying an integer 0-9999 on a 4-digit 7-segment display.
I know there are components in VHDL, but that seems like such a long-winded way of performing one task. Is there a better way of doing this?
It seems to me like after you've done all the digital tutorials, there's a huge gap in the info on how to actually develop a full system in VHDL.
EDIT : further explanation re: third comment at the bottom. Apologies for the length of this!
VHDL has functions and procedures too, just like C. (OK, C calls its procedures "void functions"!) And, just like C, you can call them from sequential code using the same kinds of sequential code constructs (variables, loops, if statements, case-statements which have some similarities to C's switch), and so on.
So far, all this is synthesisable; some other VHDL features are for simulation only (to let you test the synthesisable code). So in simulation you can, like C, open, read and write files, and use pointers (access types) to handle memory. I hope you can see why these parts of VHDL aren't synthesisable!
But what is different in VHDL is that you need a few extra lines to wrap this sequential code in a process. In C, that happens for you; a typical C program is just a single process (you have to resort to libraries or OS functionality like fork or pthreads if you want multiple processes).
But VHDL can do so much more. You can very easily create multiple processes, interconnect them with signals, wrap them as re-usable components, use "for ... generate" to create multiple processes, and so on. Again, all synthesisable: this imposes some restrictions, for example that the size of the hardware (the number of processes) cannot be changed while the system is running!
KEY: Understand the rules for signal assignment as opposed to variable assignment. Variables work very much as in C; signals do not! What they do instead is provide safe, synchronised, inter-process communication with no fuss. To see how, you need to understand "postponed assignment", delta cycles, wait statements, and how a process suspends itself and wakes up again.
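To make the signal/process relationship concrete, here is a small sketch of two clocked processes talking through one signal (entity and names invented):

    library ieee;
    use ieee.std_logic_1164.all;

    entity two_procs is
        port ( clk : in std_logic );
    end entity;

    architecture demo of two_procs is
        signal tick : std_logic := '0';
    begin
        producer : process(clk)
        begin
            if rising_edge(clk) then
                tick <= not tick;   -- postponed assignment: not visible until the next delta
            end if;
        end process;

        consumer : process(clk)
        begin
            if rising_edge(clk) then
                if tick = '1' then
                    null;           -- react to the producer's state here
                end if;
            end if;
        end process;
    end architecture;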
You seem to be asking two questions here:
(1) - can I use functions as in C? Very much so; you can do better than C by wrapping useful types and their related functions and procedures in a package and re-using the package in multiple designs. It's a little like a C++ reusable class, with some stronger points and some weaker ones.
(2) - can I avoid entities, architectures and components altogether in VHDL? You can avoid components (search for "VHDL direct entity instantiation"), but at some point you will need entities and architectures.
The least you can get away with is to write a single process that does your job, receiving inputs (clk, count) on signals and transmitting to the LEDs on other signals.
Create an entity with all those signals as ports, and an architecture containing your process, connecting its signals up to the ports. This is easy; it's just boilerplate stuff. On an FPGA, you also need to define the mapping between these ports and the actual pins your LEDs are wired to. Synthesise it and you're done, right? ... not quite.
Create another "testbench" entity with no external ports. This will contain your entity (directly instantiated), a bunch of signals connecting to its ports, and a new process which drives your entity's input ports and watches its output ports. (Best practice is to make the testbench self-checking, and assert when something bad comes out!) Usually "clk" comes from its own one-liner process, and clocks both testbench and entity.
Now you can simulate the testbench and watch your design working (or not!) at any level of detail you want. When it works - synthesise.
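A hedged sketch of that testbench shape ("counter_display" and its ports are placeholders for your entity):

    library ieee;
    use ieee.std_logic_1164.all;

    entity tb is end entity;          -- no external ports

    architecture sim of tb is
        signal clk   : std_logic := '0';
        signal count : integer range 0 to 9999 := 0;
        signal segs  : std_logic_vector(6 downto 0);
    begin
        clk <= not clk after 5 ns;    -- the one-liner clock

        dut : entity work.counter_display   -- direct entity instantiation
            port map (clk => clk, count => count, segments => segs);

        stim : process
        begin
            count <= 1234;
            wait for 100 ns;
            -- self-checking: complain instead of making you stare at waveforms
            assert segs /= "0000000" report "no segments driven" severity error;
            wait;                     -- done stimulating
        end process;
    end architecture;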
EDIT for more information: re: components, procedures, functions.
Entities/components are the main tool (you can ignore components if you wish, I'll deal with entities later).
Procedures and functions usually work together in a single process. If they are refactored into a package they can be re-used in other like-minded processes (e.g. operating on the same datatypes). A common abstraction is a datatype, plus all the functions and procedures operating on it, wrapped in a package; this somewhat resembles a C++ class. Functions also have uses in any declarative region, as initialisers (aka the "factory" pattern in software terms).
But the main tool is the entity.
This is a level that is probably unfamiliar to an embedded C programmer, as C basically stops at the level of the process.
If you have written a physical block like an SPI master, as a process, you will wrap that process up in an entity. This will communicate with the rest of the world via ports (which, inside the entity, behave like signals). It can be parameterised via generics (e.g. for memory size, if it is a memory). The entity can wrap several processes, other entities, and other logic that didn't fit neatly into the process (e.g. unclocked logic, where the process was clocked).
To create a system, you will interconnect entities, in "structural HDL code" (useful search term!) - perhaps a whole hierarchy of them - in a top level entity. Which you will typically synthesise into an FPGA.
To test the system via simulation, you will embed that top level entity (=FPGA) in another entity - the testbench - which has no external ports. Instead, the FPGA's ports connect to signals inside the testbench. Some of those signals connect to other entities - perhaps models of memories or SPI slave peripherals, so you can simulate SPI transactions - while others are driven by a process in the testbench, which feeds your FPGA stimuli and checks its responses, detecting and reporting errors.
Best practice involves a testbench for each entity you create - unit testing, in other words. An SPI master might connect to somebody else's SPI slave and a test process, to start SPI transactions and check for the correct responses. This way, you localise and correct problems at the entity level, instead of attempting to diagnose them from the top level testbench.
A basic example is shown here:
Note that he shows port mapping by both positional association and (later) named association; you can also use both forms for function arguments, as in Ada, but not in C, which only allows positional association.
What "vhdlguru" doesn't say is that named association is MUCH to be preferred, as positional association is a rich source of confusion and bugs.
Is this starting to help?
Basically there are two possibilities for handing information to an entity:
At runtime
Entities communicate with each other using signals that are defined inside a port statement. For better readability I suggest using std_logic_vector, numeric_std or, even better, record types where appropriate. See the link below.
At synthesis time
If you want to set a parameter of an entity to a fixed value at synthesis time (for instance, the depth of a FIFO), you might want to use a generic statement. See also the link below.
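Both in one sketch (the FIFO and its ports are invented):

    library ieee;
    use ieee.std_logic_1164.all;

    entity fifo is
        generic ( DEPTH : positive := 16 );        -- fixed at synthesis time
        port ( clk   : in  std_logic;              -- run-time data flows
               din   : in  std_logic_vector(7 downto 0);
               wr_en : in  std_logic;
               full  : out std_logic );
    end entity;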
I can also recommend reading this paper. It helped me a lot when coping with system that exceed a certain complexity:
A structured VHDL design method
Some simple generalizations for entity/component vs subprogram (function / procedure).
Use an entity when a reusable block of code contains a flip-flop (register).
Use a function to do a calculation. For RTL code, I use functions for small, reusable pieces of combinational logic.
I primarily use procedures for testbenches. I use procedures to apply a single transaction (waveform or interface action) to the design I am testing. For testbenches, I further use procedures to encapsulate frequently used sequences of transactions.
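In code, those generalizations might look like this (a sketch; the declarations would live in a package or a process's declarative part, and the names are invented):

    -- a function for a small piece of combinational logic:
    function parity(v : std_logic_vector) return std_logic is
        variable p : std_logic := '0';
    begin
        for i in v'range loop
            p := p xor v(i);
        end loop;
        return p;
    end function;

    -- a testbench procedure that applies a single write transaction:
    procedure write_byte(
        signal   din   : out std_logic_vector(7 downto 0);
        signal   wr_en : out std_logic;
        constant data  : in  std_logic_vector(7 downto 0)) is
    begin
        din   <= data;
        wr_en <= '1';
        wait for 10 ns;   -- fine in a testbench process with no sensitivity list
        wr_en <= '0';
    end procedure;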

TDD for a Device Communicator

I've been reading about TDD, and would like to use it for my next project, but I'm not sure how to structure my classes with this new paradigm. The language I'd like to use is Java, although the problem is not really language-specific.
The Project
I have a few pieces of hardware that come with an ASCII-over-RS232 interface. I can issue simple commands and get simple responses, and control them as if from their front panels. Each one has a slightly different syntax and a very different set of commands. My goal is to create an abstraction/interface so I can control them all through a GUI and/or remote procedure calls.
The Problem
I believe the first step is to create an abstract class (I'm bad at names, how about 'Communicator'?) to implement all the stuff like Serial I/O, and then create a subclass for each device. I'm sure it will be a little more complicated than that, but that's the core of the application in my mind.
Now, for unit tests, I don't think I really need the actual hardware or a serial connection. What I'd like to do is hand my Communicators an InputStream and OutputStream (or Reader and Writer) that could be from a serial port, a file, stdin/stdout, piped from a test function, whatever. So, would I just have the Communicator constructor take those as inputs? If so, it would be easy to put the responsibility of setting it all up on the testing framework, but for the real thing, who makes the actual connection? A separate constructor? The function calling the constructor again? A separate class whose job it is to 'connect' the Communicator to the correct I/O streams?
Edit
I was about to rewrite the problem section in order to get answers to the question I thought I was asking, but I think I figured it out. I had (correctly?) identified two different functional areas.
1) Dealing with the serial port
2) Communicating with the device (understanding its output & generating commands)
A few months ago, I would have combined it all into one class. My first idea towards breaking away from this was to pass just the IO streams to the class that understands the device, and I couldn't figure out who would then be responsible for creating the streams.
Having done more research on inversion of control, I think I have an answer. Have a separate interface and class that solve problem #1, and pass it to the constructor of the class(es?) that solve problem #2. That way, it's easy to test both separately: #1 by connecting to the actual hardware and letting the test framework do different things, and #2 by giving it a mock of #1.
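A rough sketch of what I mean (all names are placeholders; a real SerialConnection implementation would own the port):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Problem #1: owning the byte streams (serial port, file, test buffers...).
    interface Connection {
        InputStream input();
        OutputStream output();
    }

    // Problem #2: the device protocol. It only ever sees a Connection.
    class Communicator {
        private final Connection conn;

        Communicator(Connection conn) {  // dependency injected via the constructor
            this.conn = conn;
        }

        void send(String command) throws Exception {
            conn.output().write((command + "\r\n").getBytes("US-ASCII"));
        }
    }

    // Tests hand it an in-memory Connection instead of real hardware:
    class FakeConnection implements Connection {
        final ByteArrayOutputStream sent = new ByteArrayOutputStream();
        public InputStream input() { return new ByteArrayInputStream("OK\r\n".getBytes()); }
        public OutputStream output() { return sent; }
    }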
Does this sound reasonable? Do I need to share more information?
With TDD, you should let your design emerge: start small, with baby steps, and grow your classes test by test, little by little.
CLARIFIED: Start with a concrete class, to send one command, and unit test it with a mock or a stub. When it works well enough (perhaps not with all options yet), test it against your real device, once, to validate your mock/stub/simulator.
Once the class for the first command is operational, start implementing a second command the same way: first against your mock/stub, then once against the device for validation. Now, if you're seeing similarities between your two classes, you can refactor towards your abstract-class-based design - or to something different.
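In code, that first baby step might look like this (JUnit 4; it borrows the hypothetical Communicator/FakeConnection from the question's sketch):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CommunicatorTest {
        @Test
        public void appendsLineTerminatorToCommands() throws Exception {
            FakeConnection fake = new FakeConnection();
            Communicator comm = new Communicator(fake);

            comm.send("*IDN?");  // invented command, for illustration

            // no real serial port anywhere in this test
            assertEquals("*IDN?\r\n", fake.sent.toString("US-ASCII"));
        }
    }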
Sorry for being a little Linux-centric...
My favorite way of simulating gadgets is to write character device drivers that simulate their behavior. This also gives you fun abilities, like providing an ioctl() interface that makes the simulated device behave abnormally.
At that point, the only difference between testing and the real world is which device(s) you actually open, read and write.
It should not be too hard to mimic the behavior of your gadgets; it sounds like they take very basic instructions and return very basic responses. Again, a simple ioctl() could tell the simulated device that it's time to misbehave, so you can ensure that your code handles such events adequately. For instance, fail intentionally on every n'th instruction, where n is randomly selected upon the call to ioctl().
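From user space, a test run against such a simulator might look like this (everything here, the device node and the ioctl command number included, is invented):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define GADGET_FAULT_EVERY_N 0x4701   /* invented ioctl command */

    int main(void)
    {
        char buf[64];
        int fd = open("/dev/fakegadget0", O_RDWR);  /* the simulator, not a real port */
        if (fd < 0) { perror("open"); return 1; }

        ioctl(fd, GADGET_FAULT_EVERY_N, 7);  /* misbehave on roughly every 7th command */
        write(fd, "STATUS?\r\n", 9);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("reply: %s", buf); }

        close(fd);
        return 0;
    }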
After seeing your edits I think you are heading in exactly the right direction. TDD tends to drive you towards a design composed of small classes with a well-defined responsibility. I would also echo tinkertim's advice - a device simulator which you can control and "provoke" into behaving in different ways is invaluable for testing.
