How to translate SDL signal list to something similar in SysML?

This is not so much a programming question as it is a question about modelling. But you could argue that modelling is an integral part of programming.
In SDL it is possible to annotate "communication lines" between blocks (processes, services) with signal lists. This is very convenient for developers because it informs them which types of signals (messages, in my case) a block sends or accepts. (See also the Wikipedia article on SDL and communication between blocks.)
I can't find a similar notion in SysML. It seems I either have to introduce a new class for each signal and use a class (interface) to represent the list, or define an interface class with methods, each representing a signal.
I was a bit surprised it is so difficult to find, because the ITU (i.e. the original makers of SDL) was purportedly a stakeholder in the definition of SysML.
I'm looking not for "something that works", but for a readily and widely accepted (say, canonical) way of defining signal lists for SysML blocks.
Anyhow, does anyone have any ideas?
Thanks!
BTW: suggestions for redirecting this to a more appropriate SO site are welcome.

In SysML, a block element consists of various compartments. One of these is the signal compartment, where both input and output signals can be defined.
If you need to define a concrete subset of signals for a specific communication case, define an interface block and a corresponding port (proxy or not) that includes the input and output signals that are supposed to appear when communication occurs via the channel the block represents.
In general, if you want to simplify the model and can afford such a simplification, the interface and port can be omitted; the whole block element is then treated as a "port" whose "interface" is defined by its compartments. Such blocks can then be connected directly on an IBD so that only a subset of signals is transmitted.
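A rough analogy in ordinary code may help make the "interface representing a signal list" idea from the question concrete (purely illustrative; the signal names are invented, and in SysML proper these would be Signal elements referenced by an interface block rather than methods):

// Code analogy of a signal list: one operation per signal type the
// block can receive. In SysML these would be Signal elements listed
// in an interface block and exposed through a (proxy) port.
struct MotorControlSignals {
    virtual void start(int targetRpm) = 0;   // one signal type
    virtual void stop() = 0;                 // another signal type
    virtual void fault(int errorCode) = 0;   // and a third
    virtual ~MotorControlSignals() = default;
};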
I would recommend A Practical Guide to SysML: The Systems Modeling Language by Sanford Friedenthal, Alan Moore, and Rick Steiner as a rich source of practical explanations and examples of modeling approaches across SysML applications.

Related

Are muxes more "expensive" than other logic?

This is mostly out of curiosity.
One fragment from some VHDL code that I've been working on recently resembles the following:
led_q <= (pwm_d and ch_ena) when pwm_ena = '1' else ch_ena;
This is a mux-style expression, of course. But it's also equivalent to the following basic logic expression (at least when ignoring non-binary states):
led_q <= ch_ena and (pwm_d or not pwm_ena);
Is one "better" than the other in terms of logic utilisation or efficiency when actually implemented in an FPGA? Is it preferable to use one over the other, or is the compiler smart enough to pick the "best" on its own?
(For the curious, the purpose of the expression is to define the state of an LED: if ch_ena is false it should always be off, as the channel is disabled; otherwise it should either be on solidly or flash according to pwm_d, depending on pwm_ena (PWM enable). I think the first form describes this more obviously than the second, although it's not too hard to realise how the second behaves.)
For a simple logical expression, like the one shown, where the synthesis tool can easily create a complete truth table, the expression is likely to be converted to an internal truth table, which is then directly mapped to the available FPGA LUT resources. Since the truth table is identical for the two equivalent expressions, the hardware will also be the same.
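To make that concrete for the expressions above, here is the complete truth table; the two forms produce identical outputs in every case:

pwm_ena  pwm_d  ch_ena | mux form | boolean form
   0       0      0    |    0     |      0
   0       0      1    |    1     |      1
   0       1      0    |    0     |      0
   0       1      1    |    1     |      1
   1       0      0    |    0     |      0
   1       0      1    |    0     |      0
   1       1      0    |    0     |      0
   1       1      1    |    1     |      1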
However, for complex expressions where a complete truth table can't be generated, e.g. when using arithmetic operations, and/or where dedicated resources are available, the synthesis tool may choose to hold an internal representation that is more closely related to the original VHDL code, and in this case the VHDL coding style can have a great impact on the resulting logic, even for equivalent expressions.
In the end, the implementation is tool-specific, so the best way to find out what logic is generated is to try it with the specific tool, especially for large or timing-critical parts of the design, where the implementation is critical.
In general it depends on the target architecture. For Xilinx FPGAs the logic is mostly mapped into LUTs with sporadic use of the hard logic resources where the mapper can make use of them. Every possible LUT configuration has essentially equal performance so there's little benefit to scrutinizing the mapper's work unless you're really pushing the speed limits of the device where you'd be forced into manually instantiating hand-mapped LUTs.
Non-LUT based architectures like the Actel/Microsemi device families use 2-input muxes as the main logic primitive and everything is mapped down to them. You can't generalize what is best across all types of FPGAs and CPLDs but nowadays you can mostly trust that the mapper will do a decent enough job using timing constraints to push it toward the results you need.
Regarding the question, I think it is best to avoid obscure Boolean expressions where possible. They tend to be hard to decipher months later, when you've forgotten what you meant them to do. I would lean toward the when-else form simply from a code-maintenance point of view. Even for this trivial example you have to think carefully about what behavior the Boolean form describes, whereas the when-else expresses the intended behavior directly in human-level syntax.
HDLs work best when you use the highest abstraction possible and avoid wallowing around with low-level bit twiddling. This is a place where VHDL truly shines if you leverage the more advanced features of the language and move away from describing raw logic everywhere. Let the synthesizer do the work. Introductory learning materials focus on the low level structural gate descriptions and logic expressions because that is easiest for beginners to get a start on but it is not the best way to use VHDL for complex designs in the long run.
Of course there are situations where Booleans are better, particularly when doing bitwise operations across vectors in parallel which requires messy loops to do the same imperatively. It all depends on the context.

Driving module output from combinatorial block

Is it a good design practice to use combinatorial logic to drive the output of a module in VHDL/Verilog?
Is it okay to use the module input directly inside a combinatorial block, and use the output of that combinatorial block to drive another sequential block in the same module?
The answer to the two questions really depends on the overall design methodology and conditions, and will be opinion-based, as Morgan points out in his comment. The questions are especially relevant for a large design with timing pushed to the limit, where multiple designers contribute different modules. In this case it is important to settle on a design methodology up front which answers the two questions, in order to ensure that modules provided by different designers can be integrated smoothly without timing issues.
Designing with flip-flops on all outputs of each module gives the advantage that when an output is used as input to another module, the input timing is reasonably well defined and depends only on the routing delay. This makes it a Yes to question 1.
Having reasonably well-defined input timing makes it possible to place complex combinatorial logic directly on the inputs, since most of the clock cycle is available for it. So this also makes it a Yes to question 2.
With the above Yes/Yes design methodology, the available cycle time is spent only once, on the input side of the module, before the flip-flops that drive the outputs. The result is that multiple modules click together nicely like LEGO bricks, as shown in the figure below.
If a strict design methodology is not adhered to across modules, some modules may place flip-flops on the inputs and some on the outputs. A longer cycle time, and thus a slower frequency, is then required, since the worst-case path goes through twice the depth of combinatorial logic. Such a design is shown in the figure below, and should be avoided.
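To put illustrative numbers on this (the figures are invented for the sake of the example): if each module's combinatorial cloud costs 8 ns and register-to-register overhead adds 2 ns, the Yes/Yes methodology gives a worst-case path of about 10 ns, i.e. a 100 MHz clock. In the mixed case a single register-to-register path can cross two such clouds, giving roughly 8 + 8 + 2 = 18 ns and capping the clock near 55 MHz.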
A third option exists, where flip-flops are placed on all inputs; the design will look like the figure below if two different modules use the same output.
One disadvantage of this approach is that the number of flip-flops may be higher, since the same output feeds input flip-flops in multiple modules, and the synthesis tool may not merge these equivalent flip-flops. Even more flip-flops may be required if the module that generates the output also has to keep a flip-flopped version for internal use, which is often the case.
So the short answer to the questions is: Yes and Yes.
The answer to both questions as expressed is basically yes, provided the final design meets speed targets, and the input signals are clean.
The problem with blocks designed this way is that the signal timings through them are not accurately defined, so combining several such blocks may result in an absurdly slow design, or one in which fast input signals don't propagate cleanly through the design.
If you design such a circuit, and it meets ALL your input and output timing constraints as well as any clock speed constraints you set, it will work.
However, if it fails to meet the clock constraints, you will have to insert registers to "pipeline" the design, breaking up long, slow chains of combinational logic. And you will have to observe the input and output timings reported by synthesis and PAR, which can get complicated.
In practice (in an FPGA; ASICs can be different), registers come free with each logic block (true for Xilinx/Altera, not for Actel/Microsemi), and placing registers on each block's inputs and/or outputs makes the timings much simpler to understand and analyse.
And because such a design is pipelined, it is normally also much faster.

Serializing a std::function for a distributed population-based optimization application in C++

Hopefully the title of this question conveys the general concept that I am looking for assistance in designing/implementing.
To clarify:
I am trying to write a distributed application to perform optimization. The general approach will follow that of a genetic algorithm: I am trying to find a vector of parameters which minimizes the value of some cost function.
My present difficulty deals with the cost function itself. I am trying to make the design as generic as I possibly can; therefore, my intent is to use a std::function as an argument to the optimizer class.
Specifically, my question centers around: how can I share information about this cost function to the distributed nodes?
A couple more details:
The computation nodes are intended to be fully decoupled. This type of algorithm is embarrassingly parallel and requires no communication between the nodes.
It has been my intention, therefore, not to use MPI (or related message-passing approaches), but rather to use a scheduler such as Condor or SGE (most likely the former) to spawn instances of the application on available nodes. Each node will be given a lookup key for retrieving its candidate vector from a database.
I can use the Factory Pattern to instantiate objects of a given cost function type; however, I am looking for an alternate approach. (I'm not sure what it is about using a Factory class that I find disagreeable)
I am obviously not looking for a way to execute arbitrary code on a remote machine. Each node runs a full instance of the executable.
As an aside, I am making fairly extensive use of Boost throughout and am highly amenable to an alternative solution using Boost functionality.
I'd appreciate if anyone can offer some advice on how to distribute the cost function to other nodes. As well, I'd also greatly appreciate if anyone can point out whether there is something flawed with my general design concept.
Update:
Valid or not, one of my hesitations regarding a factory pattern for the cost function involves the configuration/parameters/details of the cost function. There are certain assumptions and/or specs that I can assign to this family of classes: the objects it is expected to operate on, etc. I am already passing various options to other Factory classes elsewhere in the design, and I find it adds to the complexity.
Perhaps a bit of additional clarification on the context is warranted:
I am using this optimizer as a module in an app for the purpose of modeling data. The details of the model are, in fact, parsed into an AST and passed to a Factory class which passes arguments to the applicable model class(es).
Therefore, the optimizer has a reference to the model itself, can call a Calculate(Iterator&) method on the model and expect a Matrix as a return value (fyi, the Iterator is a reference to the parameter Vector).
That part is straightforward. What I find less straightforward is determining what to do with the returned matrix. Certainly, a simple cost function might just be a least-squares fit, but I can conceive of relatively complex functions as well, which may themselves be parameterized. I can deal with that too, but I was hoping there might be a completely different approach, one that I haven't thought of, which might help simplify what I am doing.
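To illustrate the kind of thing I have in mind (a sketch only; the names CostFunction, cost_registry and the least-squares entry are hypothetical, not actual code from my design): rather than shipping the std::function itself, each node could reconstruct it from a plain string key stored in the database alongside the candidate vector, since every node runs the same executable.

#include <functional>
#include <map>
#include <numeric>
#include <string>
#include <vector>

using Vector = std::vector<double>;

// Hypothetical cost-function signature: model residuals in, scalar cost out.
using CostFunction = std::function<double(const Vector&)>;

// Registry mapping a serializable key to a cost function. Because all
// nodes run the same binary, the same key resolves to the same function
// everywhere; only the key needs to travel through the database.
std::map<std::string, CostFunction>& cost_registry() {
    static std::map<std::string, CostFunction> registry;
    return registry;
}

// Register the common cases once, at start-up.
void register_builtin_costs() {
    cost_registry()["least_squares"] = [](const Vector& residuals) {
        return std::inner_product(residuals.begin(), residuals.end(),
                                  residuals.begin(), 0.0);  // sum of squares
    };
}

// On a worker node: look up the cost function by its key.
double evaluate(const std::string& cost_key, const Vector& residuals) {
    return cost_registry().at(cost_key)(residuals);
}

Parameterized cost functions could be handled the same way, by registering a factory that takes the serialized parameters and returns the closed-over std::function.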
Thanks and kind regards,
Shmuel

What is the generic term for a node/link programming interface?

There are several (many?) programming/design systems where the user constructs a (node-edge) graph to represent the algorithm, and can then run the resulting algorithm to obtain results.
The two examples that I know off the top of my head are
Simulink
Pure Data
but I want to look into the general features of this approach for designing a user interface for setting up numerical processing problems, so I need to know some general terms for concisely describing this interface design.
I'm sort of looking for:
I type in "What programming systems (environments) use an XXX interface" into Google,
and amongst the answers are Simulink and Pure Data.
I find the Wikipedia page on XXX user interface and it includes in its list of systems, Simulink and Pure Data.
Someone wrote an academic paper "AmazingSoftware: an XXX system for modelling ecosystems", where they constructed a system, with this type of node/edge interface, that allows for modelling population dynamics in some way (I'm not particularly interested in ecology, rather I'd want to find this to understand what they did with respect to the interface itself).
Pure Data is generally described as "real-time graphical dataflow programming", so there are three key words there:
real-time: it's a real-time system, so there is a built-in sense of time and concurrency, and it "guarantees" a response within strict time constraints
graphical: the programming is performed and represented graphically rather than text or punch cards or whatever (this could also be labeled visual)
dataflow: the programming logic is based on the flow of the data, versus object-oriented or procedural
My guess is that you are most interested in the graphical/visual part of that.
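If it helps to pin the dataflow term down, here is a minimal sketch in C++ (the node operations are made up) of the execution model these systems share: computation is driven by data arriving along edges, not by a sequential call order.

#include <functional>
#include <iostream>
#include <vector>

// A miniature dataflow graph: a node recomputes whenever an input
// arrives and pushes its result to every downstream node.
struct Node {
    std::function<double(double)> op;  // the node's computation
    std::vector<Node*> outputs;        // edges to downstream nodes
    void receive(double value) {
        double result = op(value);
        for (Node* next : outputs) next->receive(result);
    }
};

int main() {
    Node sink{[](double v) { std::cout << v << "\n"; return v; }, {}};
    Node scale{[](double v) { return v * 2.0; }, {&sink}};
    Node source{[](double v) { return v + 1.0; }, {&scale}};
    source.receive(1.0);  // prints 4: (1 + 1) * 2 flows through the graph
    return 0;
}

Simulink and Pure Data both elaborate on this basic idea, with scheduling, rates, and a graphical editor layered on top.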

How do Racket Scheme's "design by contract" features differ from Eiffel's?

I know that both Eiffel (the progenitor) and Racket implement "Design by Contract" features. Sadly, I am not sure how one differs from the other. Eiffel's DbC relies on the OOP paradigm and inheritance, but how does Racket, a very different language, account for such a disparity?
Racket's main claim to contract fame is the notion of blame, and dealing with higher-order functions is a big part of that for everyday Racket programming, definitely.
You might also want to check out the first two sections of this paper:
http://www.ccs.neu.edu/scheme/pubs/oopsla01-ff.pdf
First of all, your best source of information at this point is the Racket Guide, which is intended as an introductory text rather than a reference manual. Specifically, there is an extensive chapter about contracts that would help. EDIT: Also see the paper that Robby pointed at; he's the main Racket contract guy.
As for your question: I don't know much about the Eiffel contract system, but I think it precedes Racket's. However (and this is again an "IIRC"), I think that Racket's contract system was the first to introduce higher-order contracts. Specifically, when you deal with higher-order functions, assigning proper blame gets a little more complicated. For example, if you take a function foo that has a contract of X? -> Y? and you send it a value that doesn't match X?, then the client code that sent this value to foo is blamed. But if your function's contract is (X? -> Y?) -> Z? and the X? predicate is not satisfied, then the blame goes to foo itself, not to the client (and if Y? is not satisfied, the blame is still with the client).
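To sketch the blame idea outside Racket (a toy illustration in C++, not how Racket implements it): a contract wrapper checks the argument predicate and blames the caller, and checks the result predicate and blames the function itself. Racket's contribution was making this bookkeeping work automatically through higher-order cases like (X? -> Y?) -> Z?.

#include <functional>
#include <stdexcept>
#include <string>

// Toy first-order contract: pre() plays the role of X?, post() of Y?.
// A failing argument blames the caller; a failing result blames the callee.
template <typename Arg, typename Res>
std::function<Res(Arg)> contracted(std::function<Res(Arg)> f,
                                   std::function<bool(Arg)> pre,
                                   std::function<bool(Res)> post,
                                   std::string name) {
    return [=](Arg a) -> Res {
        if (!pre(a))
            throw std::logic_error("contract violation: blame the caller of " + name);
        Res r = f(a);
        if (!post(r))
            throw std::logic_error("contract violation: blame " + name + " itself");
        return r;
    };
}

// Usage (explicit template arguments so the lambdas convert):
//   auto safe_div = contracted<int, int>(
//       [](int x) { return 100 / x; },  // the function under contract
//       [](int x) { return x != 0; },   // X? -- argument predicate
//       [](int r) { return r >= 0; },   // Y? -- result predicate
//       "safe_div");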
I think you're asking, how could a contract system work without OOP and inheritance? As a user of Racket who is unfamiliar with Eiffel, I'm wondering why a contract system would have anything to do with OOP and inheritance. :)
On a practical level, I think of Racket contracts as a way to get some of the benefits of static type declarations while keeping the flexibility of dynamically typed languages. Plus, contracts go beyond just types and can fill the role of asserts.
For instance I can say a function requires one argument that is an exact integer ... but also say that it should be an exact positive integer, or a union of certain specific values, or in fact any arbitrarily complicated test of the passed value. In this way, contracts in Racket combine what you might do with both (a) type declarations and (b) assertions in say C/C++.
One gotcha with contracts in Racket is that they can be slow. One way to deal with this is to use them while developing, then selectively remove them, especially from "inner-loop" functions. Another approach I've tried is to turn them on/off wholesale: make a pair of modules like contracts-on.rkt and contracts-off.rkt, where the latter provides do-nothing macros. Have your modules require a contracts.rkt which re-provides everything from either the -on or -off file. This is like compiling in DEBUG vs. RELEASE mode.
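For a C/C++ reader, the standard assert macro is the closest analogue of that wholesale switch: compiling with -DNDEBUG turns every assert into a no-op, much like swapping contracts-off.rkt in for contracts-on.rkt.

#include <cassert>

// Checked in debug builds; compiled away entirely under -DNDEBUG.
int reciprocal_permille(int x) {
    assert(x != 0 && "x must be non-zero");
    return 1000 / x;
}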
If you're coming from Eiffel maybe my C/C++ slant on Racket contracts won't be helpful, but I wanted to share it anyway.
