How to set a value, attribute or property of a block instance independent from a block - sysml

I am trying to define different systems (software and hardware) made of common building blocks in SysML. In the following I try to describe my approach with a sample:
For this purpose I have defined a common set of blocks in one BDD and described the common relationships of all blocks using ports and connectors in one IBD - nothing special so far:
block A, B, C
each block using each two ports
each block's port connected to other blocks ports
Now when using the blocks defined as given above, I want to add static characteristics of blocks and ports for each system I define based on the above building blocks. The system is defined in one additional BDD and IBD using the same blocks from above:
System(s) AX and AY have:
additional connections between two blocks A and B, described in IBD (OK)
additional characteristics of the ports (NOK)
additional characteristics of the blocks (NOK)
Problem:
The last two "NOK" points are a problem as follows:
Whenever I add additional properties/attributes/tags to a block in one system/IBD it also applies to the other systems/blocks
Whenever I add additional properties/attributes/tags to a port in one system it also applies to the other systems/blocks
My question can be generalized:
How would I define characteristics of instances of blocks in a way that they do not affect the original blocks they are instantiated from. The issue came up in multiple attempts to design systems, maybe SysML is not intended to be used in such a way at all?
I also tried to design my system in UML using component diagrams and components / component instances and the same problem appears there: instance specific attributes/values/ports do not seem to be supported.
Side Note:
I am using MagicDraw as a tool for SysML and UML.

I understand you want to define context specific connectors and properties.
First I want to clarify that all of these are already context specific. The context of properties is their owning block or InterfaceBlock (the type of the port). The context of connectors is their owning block (InterfaceBlocks cannot have parts, and therefore no connectors).
So, a connector needs a context. Let's call it system A0. It has parts of type A, B and C and owns the connectors between the ports of its parts.
Now you can define systems AX and AY as special kinds of system A0. As such, they have the same parts, but you can add more parts and connectors.
If you define additional properties of the parts of your special systems, you are in fact creating new special types of A, B and C and the port types. SysML 1 forces you to define these new types. And I think rightly so. If block A' shall have more features than block A, then it is a new type. It is irrelevant that A' is only used in the context of system AX. If you later decide to use it in system AZ, it would still have the same features.
Not all of these changes mean that it is a new type. For example if you only want to change the voltage of an adjustable power supply, this is not a new type of power supply. In the context of system AX it might be set to 12 V and in system AY it might be set to 24 V. In order to allow this, SysML 1 has context specific initial values. Cameo has great support for these. This helps around the somewhat clumsy definition in SysML 1. This will be much better in SysML 2.
If the value is not adjustable, a 12 V power supply would technically be a new type of power supply. However, I can see that it might be useful to only define this as context specific value, even though it is strictly speaking not context specific. I don't want to be more papal than the pope.
Now a lot of systems engineers don't like to define their blocks. I really don't understand why. But in order to accommodate this modeling habit, SysML 1 has property specific types. In the background these types are still regular blocks; only on the surface does it appear as if they are defined purely in the context. Up to now, no one has been able to explain to me what the advantage of this would be. However, SysML 2 has made it the core of the language. There you can define properties of properties, even without first defining blocks.
Sometimes you have sub assemblies with some flexibility regarding cardinalities and types. If you use such a sub assembly in a certain context, it is often necessary to define context specific limitations. For example you could say the generic landing gear can have 4..8 Wheels, which could be High load wheels or Medium load wheels, but when used in a Boeing 747, it will have 6 high load wheels. For this SysML 1 has bound references. Is that your use case?

Best variable structure for initialization of several motors in CodeSys

Problem
I have a PLC hooked up to several motors (which are all of the same type) via CanOpen. The PLC is programmed using CodeSys with "Structured Text". In order to activate the motors each one has to run through an initialization state machine, for which I have to send some commands in sequence (Power on, activate etc.). But as far as I understand I have to explicitly assign a variable for each boolean which has to be activated (mot1_power_on, mot2_power_on, mot1_enable, mot2_enable etc.).
Question
How can I efficiently initialize several identical motors in CodeSys with structured text, where each has to run through an initialization state machine? I find it bad practice to assign a bool for each motor and each variable and then program the same code several times. How can this task be handled efficiently? Is there a way to pass the motor or some struct to a function, which then performs this task for each of the motors? In C++ I would instantiate a class to perform this task, but how can this be done in CodeSys where I have to explicitly assign a variable for each motor?
Background
I am new to CODESYS, but I have some background in C/C++, MATLAB, Python and other coding languages.
Having programmed in C++, I will assume you are familiar with object-oriented programming. Function blocks in CODESYS are really similar to classes in OO languages. So go ahead and create a "motors" class with whatever member variables and methods you wish to use. Instantiate this class for each of your motors (either through individual variables or an array), and make sure whatever code needs to run is called, somehow, from your main program.
I expect the part that will not feel as natural is the part about I/O, and this is what you refer to when you say "assign a variable for each boolean". Because in your project, the (probably BIT rather than BOOL) values you need to read and write have hardware addresses (like %I12.3, %Q3.2). Once you have the classes/instances in place, you still need to tell each instance where to find its own I/O. You would rather not use separate global variables for that, which could lead to code duplication, right?
Here is one way to do it:
Create a structure for each I/O memory block you want to address. The simplest case is when all input variables are together in I/O memory, and all output variables are together too, so that means two structures to define. These structures must match the I/O memory layout bit for bit.
Be careful that TRUE/FALSE I/O is generally exposed as BIT values. When you include consecutive BIT members in your structures, CODESYS will pack them inside bytes (whereas a BOOL takes up at least one full byte). BIT members are very often needed to ensure a structure matches the true layout of values in I/O memory. Be mindful that all types other than BIT are byte-aligned; for example, a lone BIT variable between two BYTE variables will take up a whole byte.
In your function block, declare variables using your structures as a type, with undefined addresses.
inputs AT %I*: MY_INPUTS_STRUCTURE;
outputs AT %Q*: MY_OUTPUTS_STRUCTURE;
These undefined addresses essentially act as references. Each instance of your function block will receive its own, independent reference. In order for that to work, however, you have to "map" those undefined addresses to hardware addresses. Under CODESYS this can be done in a couple of ways: you can go to the mapping page of the motor in the project and do it for each variable individually, or you can add a VAR_CONFIG to your project, which allows you to have one mapping per structure (no need to associate each variable in the structure individually).
Note that when mapping whole structures rather than individual variables, you may have byte order (little-endian vs big-endian) issues to deal with when using multi-byte types if the fieldbus byte order differs from the CPU byte order.
It may seem a bit heavy at first, but once you figure it out it really is not, and it allows you to create function blocks with I/O that can be put in libraries and reused in many projects.
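A minimal Structured Text sketch of this pattern (the type and block names, like MOTOR_INPUTS and FB_Motor, and the I/O layout are invented for the example):

```iecst
(* Structures matching the I/O memory layout, bit for bit *)
TYPE MOTOR_INPUTS :
STRUCT
    StatusReady : BIT;
    StatusFault : BIT;
END_STRUCT
END_TYPE

TYPE MOTOR_OUTPUTS :
STRUCT
    PowerOn : BIT;
    Enable  : BIT;
END_STRUCT
END_TYPE

(* One function block type; each instance gets its own I/O reference *)
FUNCTION_BLOCK FB_Motor
VAR
    inputs  AT %I* : MOTOR_INPUTS;   (* undefined address, mapped later *)
    outputs AT %Q* : MOTOR_OUTPUTS;
    state : INT := 0;                (* step of the init state machine *)
END_VAR

CASE state OF
    0:  outputs.PowerOn := TRUE;     (* Power on *)
        state := 1;
    1:  IF inputs.StatusReady THEN
            outputs.Enable := TRUE;  (* Enable *)
            state := 2;
        END_IF
    2:  ;                            (* initialized, running *)
END_CASE
END_FUNCTION_BLOCK
```

You would then declare something like `motors : ARRAY[1..4] OF FB_Motor;` and call `motors[i]();` in a FOR loop from your main program, with the mapping of each instance's `inputs`/`outputs` done per structure via VAR_CONFIG.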

Are cache ways in gem5 explicit or are they implied/derived from the number of cache sets and cache size?

I am trying to implement a gem5 version of HybCache as described in HYBCACHE: Hybrid Side-Channel-Resilient Caches for Trusted Execution Environments (which can be found at https://www.usenix.org/system/files/sec20spring_dessouky_prepub.pdf).
A brief summary of HybCache is that a subset of the cache is reserved for use by secure processes and is isolated. This is achieved by using a limited subset of cache ways when the process is in 'isolated' mode. Non-isolated processes use the cache normally, having access to the entire cache and using the replacement policy and associativity given in the configuration. The isolated subset of cache ways uses a random replacement policy and is fully associative. Here is a picture demonstrating the idea.
The ways 6 and 7 are grey and represent the isolated cache ways.
So, I need to manipulate the placement of data into these ways. My question is, since I have found no mention of cache ways in the gem5 code, does that mean that cache ways only exist logically? That is, do I have to manually calculate the location of each cache way? If cache ways are used in gem5, then where are they used? What is the file name?
Any help would be greatly appreciated.
This answer is only valid for the Classic cache model (src/mem/cache/).
In gem5 the number of cache ways is determined automatically from the cache size and the associativity. Check the files in src/mem/cache/tags/indexing_policies/ for the relevant code (specifically, the constructor of base.cc).
There are two ways you could tackle this implementation:
1 - Create a new class that inherits from BaseTags (e.g., HybCacheTags). This class will contain the decision of whether it should work in secure mode or not, and how to do so (i.e., when to call which indexing and replacement policy). Depending on whatever else is proposed in the paper, you may also need to derive from Cache to create a HybCache.
The new tags need one indexing policy per operation mode. One is the conventional (SetAssociative), and the other is derived from SetAssociative, where the parameter assoc makes the numSets become 1 (to make it fully associative). The derived one will also have to override at least one function, getPossibleEntries(), to only allow selecting the ways that you want. You can check skewed_assoc.cc for an example of a more complex location selection.
The new tags need one replacement policy per operation mode. You will likely just use the ones in the replacement_policies folder.
2 - You could create a HybCache based on the Cache class that has two tags, one conventional (i.e., BaseSetAssoc), and the other based on the FALRU class (rewritten to work as a, e.g., FARandom).
I believe the first option is easier and less hardcoded. FALRU has not been split into an indexing policy and replacement policy, so if you need to change one of these, you will have to reimplement it.
While implementing this you may encounter coherence faults. If that happens, it is most likely a problem in your indexing logic, and I wouldn't look for issues in the coherence model.
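To illustrate the getPossibleEntries() override from option 1, here is a simplified, standalone C++ sketch. It mirrors the idea (a set-associative policy returning all ways of a set, and a derived fully associative policy restricted to the reserved ways), not gem5's actual class hierarchy:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for gem5's indexing policies (not the real API).
struct Entry { int set; int way; };

class SetAssociative {
  public:
    SetAssociative(int numSets, int assoc) : numSets(numSets), assoc(assoc) {}
    virtual ~SetAssociative() = default;

    // Conventional policy: all ways of the selected set are candidates.
    virtual std::vector<Entry> getPossibleEntries(uint64_t addr) const {
        int set = static_cast<int>(addr % numSets);
        std::vector<Entry> entries;
        for (int way = 0; way < assoc; ++way)
            entries.push_back({set, way});
        return entries;
    }

  protected:
    int numSets;
    int assoc;
};

// Fully associative over a reserved subset of ways: numSets forced to 1,
// and only the ways in [firstWay, assoc) are candidates.
class IsolatedWays : public SetAssociative {
  public:
    IsolatedWays(int assoc, int firstWay)
        : SetAssociative(/*numSets=*/1, assoc), firstWay(firstWay) {}

    std::vector<Entry> getPossibleEntries(uint64_t) const override {
        std::vector<Entry> entries;
        for (int way = firstWay; way < assoc; ++way)
            entries.push_back({0, way});
        return entries;
    }

  private:
    int firstWay;
};
```

With an 8-way cache and ways 6 and 7 reserved (as in the picture), the isolated policy only ever offers those two ways as candidate locations.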

Is it bad to have many global functions?

I'm relatively new to software development, and I'm on my way to completing my first app for the iPhone.
While learning Swift, I learned that I could add functions outside the class definition, and have it accessible across all views. After a while, I found myself making many global functions for setting app preferences (registering defaults, UIAppearance, etc).
Is this bad practice? The only alternate way I could think of was creating a custom class to encapsulate them, but then the class itself wouldn't serve any purpose and I'd have to think of ways to passing it around views.
Global functions: good (IMHO anyway, though some disagree)
Global state: bad (fairly universally agreed upon)
By which I mean, it's probably a good practice to break up your code into lots of small utility functions, to make them general, and to re-use them, so long as they are "pure functions".
For example, suppose you find yourself checking if all the entries in an array have a certain property. You might write a for loop over the array checking them. You might even re-use the standard reduce to do it. Or you could write a re-useable function, all, that takes a closure that checks an element, and runs it against every element in the array. It’s nice and clear when you’re reading code that goes let allAboveGround = all(sprites) { $0.position.y > 0 } rather than a for…in loop that does the same thing. You can also write a separate unit test specifically for your all function, and be confident it works correctly, rather than a much more involved test for a function that includes embedded in it a version of all amongst other business logic.
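Such an all function might look like this in Swift (a sketch; modern Swift's standard library offers allSatisfy(_:) for the same purpose):

```swift
// A generic, pure utility: true iff every element satisfies the predicate.
func all<S: Sequence>(_ sequence: S, _ predicate: (S.Element) -> Bool) -> Bool {
    for element in sequence where !predicate(element) {
        return false
    }
    return true
}

// Reads clearly at the call site, and can be unit tested on its own:
let allPositive = all([3, 1, 4]) { $0 > 0 }   // true
```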
Breaking up your code into smaller functions can also help avoid needing to use var so much. For example, in the above example you would probably need a var to track the result of your looping but the result of the all function can be assigned using let. Favoring immutable variables declared with let can help make your program easier to reason about and debug.
What you shouldn't do, as @drewag points out in his answer, is write functions that change global variables (or access singletons, which amounts to the same thing). Any global function you write should operate only on its inputs and produce the exact same result every time, regardless of when it is called. Global functions that mutate global state (i.e. change global variables, or change the values of variables passed to them by reference) can be incredibly confusing to debug due to the unexpected side effects they might cause.
There is one downside to writing pure global functions,* which is that you end up "polluting the namespace" - that is, you have all these functions lying around that might have specific relevance to a particular part of your program, but are accessible everywhere. To be honest, for a medium-sized application, with well-written generic functions named sensibly, this is probably not an issue. If a function is purely of use to a specific struct or class, maybe make it a static method. If your project really is getting too big, you could perhaps factor out your most general functions into a separate framework, though this is quite a big overhead/learning exercise (and Swift frameworks aren't entirely fully-baked yet), so if you are just starting out I'd suggest leaving this for now until you get more confident.
* edit: ok two downsides – member functions are more discoverable (via autocomplete when you hit .)
Updated after discussion with @AirspeedVelocity
Global functions can be ok and they really aren't much different than having type methods or even instance methods on a custom type that is not actually intended to contain state.
The entire thing comes down mostly to personal preference. Here are some pros and cons.
Cons:
They can sometimes cause unintended side effects. That is, they can change some global state that you or the caller forgets about, causing hard-to-track-down bugs. As long as you are careful not to use global variables, and ensure that your function always returns the same result for the same input regardless of the state of the rest of the system, you can mostly ignore this con.
They make code that uses them difficult to test, which matters once you start unit testing (a definite good policy in most circumstances). It is hard because you can't easily mock out the implementation of a global function, for example to change the value of a global setting. Instead, your test starts to depend on the other class that sets this global setting. Being able to inject a setting into your class, instead of having to fake out a global function, is generally preferable.
They sometimes hint at poor code organization. All of your code should be separable into small, single purpose, logical units. This ensures your code will remain understandable as your code base grows in size and age. The exception to this is truly universal functions that have very high level and reusable concepts. For example, a function that lets you test all of the elements in a sequence. You can also still separate global functions into logical units by separating them into well named files.
Pros:
High level global functions can be very easy to test. However, you cannot ignore the need to still test their logic where they are used because your unit test should not be written with knowledge of how your code is actually implemented.
Easily accessible. It can often be a pain to inject many types into another class (pass objects into an initializer and probably store them as properties). Global functions can often remove this boilerplate code (even if that has the trade-off of being less flexible and less testable).
In the end, every code architecture decision is a balance of trade offs each time you go to use it.
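As an illustration of the testability point above, injecting a settings dependency instead of reading it through a global function might look like this (a sketch; SettingsProviding and the types below are made up for the example):

```swift
protocol SettingsProviding {
    var username: String { get }
}

// Real implementation, used by the app.
struct AppSettings: SettingsProviding {
    var username: String { /* read the stored default here */ "alice" }
}

// Test double, used by unit tests.
struct StubSettings: SettingsProviding {
    let username: String
}

// The function depends on an injected protocol, not on global state.
func greeting(for settings: SettingsProviding) -> String {
    "Hello, \(settings.username)"
}

// In a test: greeting(for: StubSettings(username: "test"))
// can be checked without touching any real stored settings.
```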
I have a Framework.swift that contains a set of common global functions, like local(str:String) to get rid of the second parameter of NSLocalizedString. There are also a number of alert functions, internally using local and with varying numbers of parameters, which make using NSAlert for modal dialogs easier.
So for that purpose global functions are good. They are bad habit when it comes to information hiding where you would expose internal class knowledge to some global functionality.

Developing multi-use VHDL modules

I've recently just started learning VHDL again after not having touched it for years. The hope is to use it to develop a control system with various sensor interfaces, etc. I'm fairly competent in embedded C, which has been my go-to language for any embedded projects I've had up until this point, which makes learning VHDL all the more frustrating.
Basically, my issue right now, which I see as my biggest barrier in being able to progress with my intended project, is that I don't know how to develop and incorporate a module that I can just pass variables to and call (like a function in C) to carry out some task, i.e. display an integer 0-9999 on a 4 digit 7-segment display.
I know there are components in VHDL, but that seems like such a long winded way of performing one task. Is there a better way of doing this?
It seems to me like after you've done all the digital tutorials, there's a huge gap in the info on how to actually develop a full system in VHDL.
EDIT: further explanation re: the third comment at the bottom. Apologies for the length of this!
VHDL has functions and procedures too, just like C. (OK, C calls its procedures "void functions"!) And, just like C, you can call them from sequential code using the same kinds of sequential code constructs (variables, loops, if statements, case-statements which have some similarities to C's switch), and so on.
So far, all this is synthesisable; some other VHDL features are for simulation only (to let you test the synthesisable code). So in simulation you can - like C - open, read and write files, and use pointers (access types) to handle memory. (I hope you can see why these parts of VHDL aren't synthesisable!)
But what is different in VHDL is that you need a few extra lines to wrap this sequential code in a process. In C, that happens for you; a typical C program is just a single process (you have to resort to libraries or OS functionality like fork or pthreads if you want multiple processes).
But VHDL can do so much more. You can very easily create multiple processes, interconnect them with signals, wrap them as re-usable components, use "for ... generate" to replicate them, and so on. Again, all synthesisable. This imposes some restrictions; for example, the size of the hardware (the number of processes) cannot be changed while the system is running!
KEY: Understand the rules for signal assignment as opposed to variable assignment. Variables work very much as in C; signals do not! What they do instead is provide safe, synchronised, inter-process communication with no fuss. To see how, you need to understand "postponed assignment", delta cycles, wait statements, and how a process suspends itself and wakes up again.
You seem to be asking two questions here:
(1) - can I use functions as in C? Very much so; you can do better than C by wrapping useful types and related functions, procedures in a package and re-using the package in multiple designs. It's a little like a C++ reusable class with some stronger points, and some weaker.
(2) can I avoid entities, architectures and components altogether in VHDL? You can avoid components (search for "VHDL direct entity instantiation") but at some point you will need entities and architectures.
The least you can get away with is to write a single process that does your job, receiving inputs (clk, count) on signals and transmitting to the LEDs on other signals.
Create an entity with all those signals as ports, and an architecture containing your process, connecting its signals up to the ports. This is easy - it's just boilerplate stuff. On an FPGA, you also need to define the mapping between these ports and the actual pins your LEDs are wired to. Synthesise it and you're done, right? .. not quite.
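A minimal sketch of that entity/architecture boilerplate (the names, port widths and behaviour are illustrative, not from any particular design):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative only: a driver for a 4-digit 7-segment display,
-- with the process wrapped in an entity/architecture pair.
entity display_driver is
    port (
        clk   : in  std_logic;
        count : in  unsigned(13 downto 0);          -- 0 to 9999
        digit : out std_logic_vector(3 downto 0);   -- digit select
        segs  : out std_logic_vector(6 downto 0)    -- segment drive
    );
end entity;

architecture rtl of display_driver is
begin
    process (clk)
    begin
        if rising_edge(clk) then
            null;  -- multiplex digits and decode BCD to segments here
        end if;
    end process;
end architecture;
```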
Create another "testbench" entity with no external ports. This will contain your entity (directly instantiated), a bunch of signals connecting to its ports, and a new process which drives your entity's input ports and watches its output ports. (Best practice is to make the testbench self-checking, and assert when something bad comes out!) Usually "clk" comes from its own one-liner process, and clocks both testbench and entity.
Now you can simulate the testbench and watch your design working (or not!) at any level of detail you want. When it works - synthesise.
EDIT for more information: re: components, procedures, functions.
Entities/components are the main tool (you can ignore components if you wish, I'll deal with entities later).
Procedures and functions usually work together in a single process. If they are refactored into a package they can be re-used in other like-minded processes (e.g. operating on the same datatypes). A common abstraction is a datatype, plus all the functions and procedures operating on it, wrapped in a package - this somewhat resembles a C++ class. Functions also have uses in any declaration area, as initialisers (aka "factory" pattern in software terms)
But the main tool is the entity.
This is a level that is probably unfamiliar to an embedded C programmer, as C basically stops at the level of the process.
If you have written a physical block like an SPI master, as a process, you will wrap that process up in an entity. This will communicate with the rest of the world via ports (which, inside the entity, behave like signals). It can be parameterised via generics (e.g. for memory size, if it is a memory). The entity can wrap several processes, other entities, and other logic that didn't fit neatly into the process (e.g. unclocked logic, where the process was clocked)
To create a system, you will interconnect entities, in "structural HDL code" (useful search term!) - perhaps a whole hierarchy of them - in a top level entity. Which you will typically synthesise into an FPGA.
To test the system via simulation, you will embed that top level entity (=FPGA) in another entity - the testbench - which has no external ports. Instead, the FPGA's ports connect to signals inside the testbench. These signals connect to ... some of them connect to other entities - perhaps models of memories or SPI slave peripherals, so you can simulate SPI transactions ... some of them are driven by a process in the testbench, which feeds your FPGA stimuli and checks its responses, detecting and reporting errors.
Best practice involves a testbench for each entity you create - unit testing, in other words. An SPI master might connect to somebody else's SPI slave and a test process, to start SPI transactions and check for the correct responses. This way, you localise and correct problems at the entity level, instead of attempting to diagnose them from the top level testbench.
A basic example is shown here:
Note that he shows port mapping by both positional association and (later) named association - you can also use both forms for function arguments, as in Ada, but not in C, which only allows positional association.
What "vhdlguru" doesn't say is that named association is MUCH to be preferred, as positional association is a rich source of confusion and bugs.
Is this starting to help?
Basically there are two possibilities for handing information to an entity:
At runtime
Entities communicate with each other using signals that are defined inside a port statement. For better readability I suggest using std_logic_vector, numeric_std or even better record types where appropriate. See link below.
At synthesis time
If you want to set a parameter of an entity to a fixed value at synthesis time (for instance the size of a fifo) you might want to use a generic statement. See also link below.
I can also recommend reading this paper. It helped me a lot when coping with systems that exceed a certain complexity:
A structured VHDL design method
Some simple generalizations for entity/component vs subprogram (function / procedure).
Use an entity when a reusable block of code contains a flip-flop (register).
Use a function to do a calculation. For RTL code, I use functions for small, reusable pieces of combinational logic.
I primarily use procedures for testbenches. I use procedures to apply a single transaction (waveform or interface action) to the design I am testing. For testbenches, I further use procedures to encapsulate frequently used sequences of transactions.
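Such a testbench procedure might look like this (a sketch; the bus signals are invented for the example):

```vhdl
-- Illustrative testbench procedure: apply one write transaction
-- to a simple bus and advance one clock cycle.
procedure bus_write(
    signal   clk  : in  std_logic;
    signal   addr : out std_logic_vector(7 downto 0);
    signal   data : out std_logic_vector(7 downto 0);
    signal   we   : out std_logic;
    constant a    : in  std_logic_vector(7 downto 0);
    constant d    : in  std_logic_vector(7 downto 0)
) is
begin
    addr <= a;
    data <= d;
    we   <= '1';
    wait until rising_edge(clk);
    we   <= '0';
end procedure;
```

The test process then reads as a sequence of bus_write calls rather than repeated signal wiggling, which is exactly the encapsulation of transactions described above.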

What does it mean when someone says that the module has both behavior and state?

In a code review I was told that my module has behavior and state at the same time - what does that mean, anyway?
Isn't that the whole point of object oriented programming - that instead of operating on data directly with functions, like logical circuitry, we choose to operate on these closed black boxes (encapsulation) using a set of neatly designed keys, switches and gears?
Wouldn't such a scheme naturally contain data (state) and logic (behavior) at the same time?
By module I mean : a real Ruby module.
I designed something like this: How to design an application keeping SOLID principles and Design Patterns in mind
and implemented the commands in a module which I used as a mixin.
Whatever you are referring to, be it an object defined by a class (or type), a module, or anything else with code in it, state is data that is persisted over multiple calls to the thing. If it "remembers" anything between one execution and the next, then it has state.
Behavior, on the other hand, is code that manipulates or processes that state data, or non-state data that is used only during a single execution of the code (like parameter values passed to a function). Methods, subroutines and functions - anything that changes or does something - are behavior.
Most classes, types, or whatever, have both data (state) and behavior, but....
Some classes or types are designed simply to carry data around. They are referred to as Data Transfer Objects (DTOs) or Plain Old Container Objects (POCOs). They only have state and, generally, little or no behavior.
Other times, a class or type is constructed to hold general utility functions (like a math library). It will not maintain or keep any state between the many times it is called to perform one of its utilities. The only data used in it is data passed in as parameters for each call to the library function, and that data is discarded when the routine is finished. It has behavior, but no state.
You're right in your thinking that OOP encapsulates the ideas of both behaviour and state and mixes the two things together, but from the wording of your question, I'm wondering if you have written a ruby module (mixin, whatever you want to call it) that is stateful, such that there is the potential for state leakage across multiple uses of the same module.
Without seeing the code in question I can't really give you a full answer.
In Object-Oriented terminology, an object is said to have state when it encapsulates data (attributes, properties) and is said to have behavior when it offers operations (methods, procedures, functions) that operate (create, delete, modify, make calculations) on the data.
The same concepts can be extrapolated to a ruby module, it has "state" if it defined data accessible within the module, and it has "behavior" in the form of operations provided which operate on the data.
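A minimal Ruby illustration of the difference (the module names are invented for the example):

```ruby
# Behavior only: a pure function, nothing remembered between calls.
module Geometry
  module_function

  def circle_area(radius)
    Math::PI * radius**2
  end
end

# Behavior AND state: @count persists across calls, so the result
# depends on history, not just on the arguments.
module Counter
  @count = 0

  def self.increment
    @count += 1
  end
end
```

Calling Counter.increment twice returns 1 and then 2 - the module "remembers" - which is exactly the kind of statefulness that can leak across the places a module is mixed into or used from.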