Best variable structure for initialization of several motors in CODESYS

Problem
I have a PLC hooked up to several motors (which are all of the same type) via CanOpen. The PLC is programmed using CodeSys with "Structured Text". In order to activate the motors each one has to run through an initialization state machine, for which I have to send some commands in sequence (Power on, activate etc.). But as far as I understand I have to explicitly assign a variable for each boolean which has to be activated (mot1_power_on, mot2_power_on, mot1_enable, mot2_enable etc.).
Question
How do I efficiently initialize several (alike) motors with CODESYS and Structured Text, where each has to run through an initialization state machine? I find it bad practice to assign a bool for each motor and each variable and then program the same code several times. How can this task be handled efficiently? Is there a way to pass the motor or some struct to a function, which then performs this task for each of the motors? In C++ I would instantiate a class to perform this task, but how can this be done in CODESYS, where I have to explicitly assign a variable for each motor?
Background
I am new to CODESYS, but I have some background in C/C++, MATLAB, Python and other programming languages.

Having programmed in C++, I will assume you are familiar with object-oriented programming. Function blocks in CODESYS are really similar to classes in OO languages. So go ahead and create a "motors" class with whatever member variables and methods you wish to use. Instantiate this class for each of your motors (either through individual variables or an array), and make sure whatever code needs to run is called, somehow, from your main program.
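Since the question draws the C++ parallel, here is a rough sketch of that structure in C++ terms; all names (MotorInit, cycle and so on) are invented for illustration. The CODESYS version would be a FUNCTION_BLOCK of the same shape, instantiated once per motor, e.g. as an ARRAY OF that block:

enum class InitState { Idle, PowerOn, Enable, Done };

class MotorInit {                       // ~ a FUNCTION_BLOCK in Structured Text
    InitState state = InitState::Idle;
public:
    bool power_on = false;              // would map to a %Q output per motor
    bool enable   = false;              // would map to a %Q output per motor
    bool powered  = false;              // would map to a %I input per motor

    void cycle() {                      // called once per PLC cycle
        switch (state) {
        case InitState::Idle:    power_on = true; state = InitState::PowerOn; break;
        case InitState::PowerOn: if (powered) { enable = true; state = InitState::Enable; } break;
        case InitState::Enable:  state = InitState::Done; break;   // init finished
        case InitState::Done:    break;
        }
    }
};

MotorInit motors[4];                    // ~ motors : ARRAY[1..4] OF MotorInit
void mainProgram() {                    // ~ the PLC task's main program
    for (auto& m : motors) m.cycle();   // the same code runs for every motor
}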
I expect the part that will not feel as natural is the I/O, and this is what you refer to when you say "assign a variable for each boolean": in your project, the (probably BIT rather than BOOL) values you need to read and write have hardware addresses (like %I12.3 or %Q3.2). Once you have the classes/instances in place, you still need to tell each instance where to find its own I/O. You would rather not use separate global variables for that, which would lead to code duplication, right?
Here is one way to do it:
Create a structure for each I/O memory block you want to address. The simplest case is when all input variables sit together in I/O memory, and all output variables do too, which means only two structures to define. These structures must match the I/O memory layout bit for bit.
Be careful that TRUE/FALSE I/O is generally exposed as BIT values. When you declare consecutive BIT members in your structures, CODESYS will pack them inside bytes (whereas a BOOL takes up at least one full byte). BIT members are very often needed to make a structure match the true layout of values in I/O memory. Be mindful that all types other than BIT are byte-aligned; as an example, a lone BIT variable between two BYTE variables will take up a whole byte (see the bit-field sketch at the end of this answer).
In your function block, declare variables using your structures as a type, with undefined addresses.
inputs AT %I*: MY_INPUTS_STRUCTURE;
outputs AT %Q*: MY_OUTPUTS_STRUCTURE;
These undefined addresses essentially act as references. Each instance of your function block will receive its own, independent reference. In order for that to work, however, you have to "map" those undefined addresses to hardware addresses. Under CODESYS this can be done in a couple of ways: you can go to the mapping page of the motor in the project and do it for each variable individually, or you can add a VAR_CONFIG to your project, which allows you to have one mapping per structure (no need to map each variable in the structure individually).
Note that when mapping whole structures rather than individual variables, you may have byte order (little-endian vs big-endian) issues to deal with when using multi-byte types if the fieldbus byte order differs from the CPU byte order.
It may seem a bit heavy at first, but once you figure it out it really is not, and it allows you to create function blocks with I/O that can be put in libraries and reused in many projects.
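As referenced above, here is the bit-packing point expressed as a C/C++ bit-field, which may be a familiar reference point. This is only an analogy: real bit-field layout is compiler-specific, and CODESYS applies its own fixed packing rules, so treat the numbers as illustrative.

#include <cstdint>

struct MyInputsAnalogy {
    uint8_t power_ok : 1;   // like a BIT: packed into the first byte
    uint8_t fault    : 1;   // like a BIT: shares that same byte
    uint8_t status;         // like a BYTE: starts on the next byte boundary
    uint8_t ready    : 1;   // a lone BIT between byte-aligned members
};                          // still occupies a whole byte
// sizeof(MyInputsAnalogy) is typically 3: two packed bits in one byte,
// one full byte, and one lone bit rounded up to a whole byte.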

Related

Is it bad to have many global functions?

I'm relatively new to software development, and I'm on my way to completing my first app for the iPhone.
While learning Swift, I learned that I could add functions outside the class definition, and have it accessible across all views. After a while, I found myself making many global functions for setting app preferences (registering defaults, UIAppearance, etc).
Is this bad practice? The only alternative I could think of was creating a custom class to encapsulate them, but then the class itself wouldn't serve any purpose and I'd have to think of ways to pass it around between views.
Global functions: good (IMHO anyway, though some disagree)
Global state: bad (fairly universally agreed upon)
By which I mean, it’s probably a good practice to break up your code into lots of small utility functions, to make them general, and to re-use them, so long as they are “pure functions”.
For example, suppose you find yourself checking if all the entries in an array have a certain property. You might write a for loop over the array checking them. You might even re-use the standard reduce to do it. Or you could write a re-usable function, all, that takes a closure that checks an element, and runs it against every element in the array. It’s nice and clear when you’re reading code that goes let allAboveGround = all(sprites) { $0.position.y > 0 } rather than a for…in loop that does the same thing. You can also write a separate unit test specifically for your all function, and be confident it works correctly, rather than a much more involved test for a function that embeds a version of all amongst other business logic.
Breaking up your code into smaller functions can also help avoid needing to use var so much. For example, in the above example you would probably need a var to track the result of your looping but the result of the all function can be assigned using let. Favoring immutable variables declared with let can help make your program easier to reason about and debug.
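The idea is not Swift-specific; as a hedged C++ sketch, the all helper described above is essentially std::all_of (Sprite and position are invented names here):

#include <algorithm>
#include <vector>

struct Sprite { struct { float x = 0, y = 0; } position; };

// Pure function: it depends only on its input, so it is trivial to unit-test.
bool allAboveGround(const std::vector<Sprite>& sprites) {
    return std::all_of(sprites.begin(), sprites.end(),
                       [](const Sprite& s) { return s.position.y > 0; });
}
// Usage, assigned once (the analogue of Swift's let):
//     const bool ok = allAboveGround(sprites);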
What you shouldn’t do, as @drewag points out in his answer, is write functions that change global variables (or access singletons, which amounts to the same thing). Any global function you write should operate only on its inputs and produce the exact same result every time, regardless of when it is called. Global functions that mutate global state (i.e. change global variables, or change the values of variables passed to them by reference) can be incredibly confusing to debug due to the unexpected side effects they might cause.
There is one downside to writing pure global functions,* which is that you end up “polluting the namespace” – that is, you have all these functions lying around that might have specific relevance to a particular part of your program, but are accessible everywhere. To be honest, for a medium-sized application, with well-written generic functions named sensibly, this is probably not an issue. If a function is purely of use to a specific struct or class, maybe make it a static method. If your project really is getting too big, you could perhaps factor out your most general functions into a separate framework, though this is quite a big overhead/learning exercise (and Swift frameworks aren’t entirely fully-baked yet), so if you are just starting out I’d suggest leaving this for now until you get more confident.
* edit: ok two downsides – member functions are more discoverable (via autocomplete when you hit .)
Updated after discussion with @AirspeedVelocity
Global functions can be OK, and they really aren't much different from having type methods or even instance methods on a custom type that is not actually intended to contain state.
The entire thing comes down mostly to personal preference. Here are some pros and cons.
Cons:
They can sometimes cause unintended side effects; that is, they can change some global state that you or the caller forgets about, causing hard-to-track-down bugs. As long as you are careful about not using global variables and ensure that your function always returns the same result for the same input regardless of the state of the rest of the system, you can mostly ignore this con.
They make code that uses them difficult to test, which matters once you start unit testing (a definitely good policy in most circumstances). It is hard because you can't easily mock out the implementation of a global function, for example to change the value of a global setting; instead your test starts to depend on the other class that sets this global setting. Being able to inject a setting into your class, instead of having to fake out a global function, is generally preferable (see the sketch at the end of this answer).
They sometimes hint at poor code organization. All of your code should be separable into small, single-purpose, logical units. This ensures your code will remain understandable as your code base grows in size and age. The exception is truly universal functions that express very high-level, reusable concepts, for example a function that tests all of the elements in a sequence. You can also still separate global functions into logical units by grouping them into well-named files.
Pros:
High-level global functions can be very easy to test. However, you still cannot ignore the need to test their logic where they are used, because your unit test should not be written with knowledge of how your code is actually implemented.
Easily accessible. It can often be a pain to inject many types into another class (pass objects into an initializer and probably store them as properties). Global functions can often remove this boilerplate code (even if that has the trade-off of being less flexible and less testable).
In the end, every code architecture decision is a balance of trade-offs that you weigh each time you go to use it.
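To illustrate the testability con listed above, here is a small C++ sketch (all names invented) of injecting a setting instead of reaching for a global function, so a test can substitute its own value:

#include <functional>
#include <iostream>
#include <string>

std::string currentUserName();           // some global function: hard to fake in a test

class Greeter {
    std::function<std::string()> name_;  // injected dependency
public:
    explicit Greeter(std::function<std::string()> name) : name_(std::move(name)) {}
    std::string greet() const { return "Hello, " + name_(); }
};

int main() {
    // In production you would pass the real global: Greeter g(currentUserName);
    // In a test you inject a stub, with no global state involved:
    Greeter g([] { return std::string("TestUser"); });
    std::cout << g.greet() << "\n";      // prints "Hello, TestUser"
}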
I have a Framework.swift that contains a set of common global functions, like local(str: String) to get rid of the second parameter of NSLocalizedString. There are also a number of alert functions, internally using local and taking varying numbers of parameters, which make using NSAlert for modal dialogs easier.
So for that purpose global functions are good. They become a bad habit when it comes to information hiding, where you would expose internal class knowledge to some global functionality.

Explain/Give example of "Hide pointer operations" in Code Complete 2

I am reading Code Complete 2, Chapter 7.1, and I don't understand the point the author makes below.
7.1 Valid Reasons to Create a Routine
Hide pointer operations
Pointer operations tend to be hard to read and error prone. By isolating them in routines (or a class, if appropriate), you can concentrate on the intent of the operation rather than the mechanics of pointer manipulation. Also, if the operations are done in only one place, you can be more certain that the code is correct. If you find a better data type than pointers, you can change the program without traumatizing the routines that would have used the pointers.
Please explain or give an example of this.
Essentially, the advice is a specific example of data hiding. It boils down to this:
Stick to Object-oriented design and hide your data within objects.
In the case of pointers, the norm is to NEVER expose pointers to "internal" data structures as public members. Rather, make them private and expose ONLY certain meaningful manipulations that are allowed to be performed on the pointers as public member functions.
Portable / Easy to maintain
The added advantage (as explained in the section quoted) is that any change in the internal data structures never forces the external API to be changed. Only the internal implementation of the publicly exposed member functions needs to be modified to handle any changes.
Code re-use / Easy to debug
Also, pointer manipulations are now NOT copied/pasted and littered all around the code with no idea what exactly they do. They are limited to the member functions, which are written keeping in mind exactly how the internal data structures are manipulated.
For example, if we have a table of data into which the user is allowed to add rows:
Do NOT expose
pointers to the head/tail of table.
pointers to the individual elements.
Instead create a table object that exposes the functions
addNewRowTop(newData)
addNewRowBottom(newData)
addNewRow(position, newData)
To take this further, we can implement addNewRowTop() and addNewRowBottom() by simply calling addNewRow() with the proper position, taken from another internal variable of the table object (as in the sketch below).
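Here is a minimal C++ sketch of such a table object, using a singly linked list purely for illustration: the row pointers stay private, and only the meaningful row operations are public.

#include <cstddef>

struct Row { int data; Row* next = nullptr; };

class Table {
    Row* head = nullptr;        // internal pointers: never exposed
    std::size_t rows = 0;       // internal row count
public:
    void addNewRow(std::size_t position, int newData) {
        Row** link = &head;     // all pointer manipulation lives here, once
        for (std::size_t i = 0; i < position && *link; ++i)
            link = &(*link)->next;
        *link = new Row{newData, *link};
        ++rows;
    }
    void addNewRowTop(int newData)    { addNewRow(0, newData); }     // reuse
    void addNewRowBottom(int newData) { addNewRow(rows, newData); }  // reuse
    ~Table() { while (head) { Row* n = head->next; delete head; head = n; } }
};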

Developing multi-use VHDL modules

I've recently just started learning VHDL again after not having touched it for years. The hope is to use it to develop a control system with various sensor interfaces, etc. I'm fairly competent in embedded C, which has been my go-to language for any embedded projects I've had up until this point, which makes learning VHDL all the more frustrating.
Basically, my issue right now, which I see as the biggest barrier to progressing with my intended project, is that I don't know how to develop and incorporate a module that I can just pass variables to and call (like a function in C) to carry out some task, e.g. display an integer 0-9999 on a 4-digit 7-segment display.
I know there are components in VHDL, but that seems like such a long-winded way of performing one task. Is there a better way of doing this?
It seems to me like after you've done all the digital tutorials, there's a huge gap in the info on how to actually develop a full system in VHDL.
EDIT: further explanation re: the third comment at the bottom. Apologies for the length of this!
VHDL has functions and procedures too, just like C. (OK, C calls its procedures "void functions"!) And, just like C, you can call them from sequential code using the same kinds of sequential code constructs (variables, loops, if statements, case-statements which have some similarities to C's switch), and so on.
So far, all this is synthesisable; some other VHDL features are for simulation only (to let you test the synthesisable code). So in simulation you can, like C, open, read and write files, and use pointers (access types) to handle memory. (I hope you can see why these parts of VHDL aren't synthesisable!)
But what is different in VHDL is that you need a few extra lines to wrap this sequential code in a process. In C, that happens for you; a typical C program is just a single process (you have to resort to libraries or OS functionality like fork or pthreads if you want multiple processes).
But VHDL can do so much more. You can very easily create multiple processes, interconnect them with signals, wrap them as re-usable components, use "for ... generate" to create multiple processes, and so on. Again, all synthesisable: this imposes some restrictions, for example the size of the hardware (the number of processes) cannot be changed while the system is running!
KEY: Understand the rules for signal assignment as opposed to variable assignment. Variables work very much as in C; signals do not! What they do instead is provide safe, synchronised, inter-process communication with no fuss. To see how, you need to understand "postponed assignment", delta cycles, wait statements, and how a process suspends itself and wakes up again.
You seem to be asking two questions here:
(1) - can I use functions as in C? Very much so; you can do better than C by wrapping useful types and related functions and procedures in a package and re-using the package in multiple designs. It's a little like a C++ reusable class, with some stronger points and some weaker ones.
(2) can I avoid entities, architectures and components altogether in VHDL? You can avoid components (search for "VHDL direct entity instantiation") but at some point you will need entities and architectures.
The least you can get away with is to write a single process that does your job, receiving inputs (clk, count) on signals and transmitting to the LEDs on other signals.
Create an entity with all those signals as ports, and an architecture containing your process, connecting its signals up to the ports. This is easy - it's just boilerplate stuff. On an FPGA, you also need to define the mapping between these ports and the actual pins your LEDs are wired to. Synthesise it and you're done, right? .. not quite.
Create another "testbench" entity with no external ports. This will contain your entity (directly instantiated), a bunch of signals connecting to its ports, and a new process which drives your entity's input ports and watches its output ports. (Best practice is to make the testbench self-checking, and assert when something bad comes out!) Usually "clk" comes from its own one-liner process, and clocks both testbench and entity.
Now you can simulate the testbench and watch your design working (or not!) at any level of detail you want. When it works - synthesise.
EDIT for more information: re: components, procedures, functions.
Entities/components are the main tool (you can ignore components if you wish, I'll deal with entities later).
Procedures and functions usually work together in a single process. If they are refactored into a package they can be re-used in other like-minded processes (e.g. operating on the same datatypes). A common abstraction is a datatype, plus all the functions and procedures operating on it, wrapped in a package; this somewhat resembles a C++ class. Functions also have uses in any declaration area, as initialisers (aka the "factory" pattern in software terms).
But the main tool is the entity.
This is a level that is probably unfamiliar to an embedded C programmer, as C basically stops at the level of the process.
If you have written a physical block like an SPI master, as a process, you will wrap that process up in an entity. This will communicate with the rest of the world via ports (which, inside the entity, behave like signals). It can be parameterised via generics (e.g. for memory size, if it is a memory). The entity can wrap several processes, other entities, and other logic that didn't fit neatly into the process (e.g. unclocked logic, where the process was clocked)
To create a system, you will interconnect entities, in "structural HDL code" (useful search term!) - perhaps a whole hierarchy of them - in a top level entity. Which you will typically synthesise into an FPGA.
To test the system via simulation, you will embed that top level entity (=FPGA) in another entity - the testbench - which has no external ports. Instead, the FPGA's ports connect to signals inside the testbench. These signals connect to ... some of them connect to other entities - perhaps models of memories or SPI slave peripherals, so you can simulate SPI transactions ... some of them are driven by a process in the testbench, which feeds your FPGA stimuli and checks its responses, detecting and reporting errors.
Best practice involves a testbench for each entity you create - unit testing, in other words. An SPI master might connect to somebody else's SPI slave and a test process, to start SPI transactions and check for the correct responses. This way, you localise and correct problems at the entity level, instead of attempting to diagnose them from the top level testbench.
A basic example is shown here:
Note that he shows port mapping by both positional association and (later) named association - you can also use both forms for function arguments, as in Ada, but not C which only allows positional association.
What "vhdlguru" doesn't say is that named association is MUCH to be preferred, as positional association is a rich source of confusion and bugs.
Is this starting to help?
Basically there are two possibilities for passing information to an entity:
At runtime
Entities communicate with each other using signals that are defined inside a port statement. For better readability I suggest using std_logic_vector, numeric_std or, even better, record types where appropriate. See the link below.
At synthesis time
If you want to set a parameter of an entity to a fixed value at synthesis time (for instance the size of a FIFO) you might want to use a generic statement. See also the link below.
I can also recommend reading this paper. It helped me a lot when coping with systems that exceed a certain complexity:
A structured VHDL design method
Some simple generalizations for entity/component vs subprogram (function / procedure).
Use an entity when a reusable block of code contains a flip-flop (register).
Use a function to do a calculation. For RTL code, I use functions for small, reusable pieces of combinational logic.
I primarily use procedures for testbenches. I use procedures to apply a single transaction (waveform or interface action) to the design I am testing. For testbenches, I further use procedures to encapsulate frequently used sequences of transactions.

What is the design rationale behind HandleScope?

V8 requires a HandleScope to be declared in order to clean up any Local handles that were created within scope. I understand that HandleScope will dereference these handles for garbage collection, but I'm interested in why each Local class doesn't do the dereferencing themselves like most internal ref_ptr type helpers.
My thought is that HandleScope can do it more efficiently by dumping a large number of handles all at once rather than one by one as they would in a ref_ptr type scoped class.
Here is how I understand the documentation and the handles-inl.h source code. I, too, might be completely wrong since I'm not a V8 developer and documentation is scarce.
The garbage collector will, at times, move stuff from one memory location to another and, during one such sweep, also check which objects are still reachable and which are not. In contrast to reference-counting types like std::shared_ptr, this is able to detect and collect cyclic data structures. For all of this to work, V8 has to have a good idea about what objects are reachable.
On the other hand, objects are created and deleted quite a lot during the internals of some computation. You don't want too much overhead for each such operation. The way to achieve this is by creating a stack of handles. Each object listed in that stack is available from some handle in some C++ computation. In addition to this, there are persistent handles, which presumably take more work to set up and which can survive beyond C++ computations.
Having a stack of references requires that you use it in a stack-like way. There is no “invalid” mark in that stack: all the objects from bottom to top of the stack are valid object references. The way to ensure this is the HandleScope. It keeps things hierarchical. With reference-counted pointers you can do something like this:
#include <memory>
using std::shared_ptr;
struct Object { Object(int) {} };

shared_ptr<Object>* f() {
    shared_ptr<Object> a(new Object(1));            // object 1 dies when f() returns
    return new shared_ptr<Object>(new Object(2));   // object 2 outlives f()
}

void g() {
    shared_ptr<Object>* b = f();
    shared_ptr<Object> c = *b;
    delete b;        // now only c keeps object 2 alive
}                    // c dies here, destroying object 2
Here object 1 is created first, then object 2 is created; when f returns, object 1 is destroyed, and only at the end of g is object 2 destroyed. The key point is that there is a span of time when object 1 is already invalid but object 2 is still valid. That's what the HandleScope stack discipline aims to avoid.
Some other GC implementations examine the C stack and look for pointers they find there. This has a good chance of false positives, since stuff which is in fact data could be misinterpreted as a pointer. For reachability this might seem rather harmless, but when pointers get rewritten because objects are being moved, a false positive can be fatal. This approach has a number of other drawbacks, and relies a lot on how the low-level implementation of the language actually works. V8 avoids that by keeping the handle stack separate from the function call stack, while at the same time ensuring that the two are sufficiently aligned to guarantee the mentioned hierarchy requirements.
To offer yet another comparison: an object referenced by just one shared_ptr becomes collectible (and actually will be collected) once its C++ block scope ends. An object referenced by a v8::Handle becomes collectible when leaving the nearest enclosing scope which contains a HandleScope object. So programmers have more control over the granularity of stack operations. In a tight loop where performance is important, it might be useful to maintain just a single HandleScope for the whole computation, so that you won't have to access the handle stack data structure so often. On the other hand, doing so will keep all the objects around for the whole duration of the computation, which would be very bad indeed if the loop iterates over many values, since all of them would be kept around till the end. But the programmer has full control, and can arrange things in the most appropriate way.
Personally, I'd make sure to construct a HandleScope
At the beginning of every function which might be called from outside your code. This ensures that your code will clean up after itself.
In the body of every loop which might see more than three or so iterations, so that you only keep variables from the current iteration (sketched below).
Around every block of code which is followed by some callback invocation, since this ensures that your stuff can get cleaned if the callback requires more memory.
Whenever I feel that something might produce considerable amounts of intermediate data which should get cleaned (or at least become collectible) as soon as possible.
In general I'd not create a HandleScope for every internal function if I can be sure that every other function calling this will already have set up a HandleScope. But that's probably a matter of taste.
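For the loop case mentioned above, here is a minimal sketch against the V8 embedding API (details vary between V8 versions, so treat it as approximate):

#include <v8.h>

// Handles created in each iteration become collectible as soon as the
// per-iteration HandleScope is destroyed, instead of piling up.
void processMany(v8::Isolate* isolate, int n) {
    for (int i = 0; i < n; ++i) {
        v8::HandleScope scope(isolate);                      // fresh scope
        v8::Local<v8::Number> value = v8::Number::New(isolate, i);
        // ... use value ...
    }   // scope destroyed: this iteration's handles are released
}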
Disclaimer: this may not be an official answer, more of a conjecture on my part; the v8 documentation is hardly useful on this topic, so I may be proven wrong.
From my understanding, gained while developing various v8-based backend applications, it's a means of handling the difference between the C++ and JavaScript environments.
Imagine the following sequence, in which a self-dereferencing pointer could break the system:
JavaScript calls up a C++-wrapped v8 function: let's say helloWorld()
C++ function creates a v8::handle of value "hello world =x"
C++ returns the value to the v8 virtual machine
C++ function does its usual cleaning up of resources, including dereferencing of handles
Another C++ function / process, overwrites the freed memory space
V8 reads the handle: and the data is no longer the same, "hell!#(#..."
And that's just the surface of the complicated inconsistencies between the two. Hence, to tackle the various issues of connecting the JavaScript VM (Virtual Machine) to the C++ interfacing code, I believe the development team decided to simplify the issue via the following...
All variable handles are to be stored in "buckets", aka HandleScopes, to be built / compiled / run / destroyed by their respective C++ code, when needed.
Additionally, all function handles are to refer only to C++ static functions (I know this is irritating), which ensures the "existence" of the function call regardless of constructors / destructors.
Think of it from a development point of view: it marks a very strong distinction between the JavaScript VM development team and the C++ integration team (the Chrome dev team?), allowing both sides to work without interfering with one another.
Lastly, it could also be for the sake of simplicity in emulating multiple VMs: as v8 was originally meant for Google Chrome, a simple HandleScope creation and destruction whenever we open / close a tab makes for much easier GC management, especially in cases where you have many VMs running (one per tab in Chrome).

What does it mean when someone says that the module has both behavior and state?

In a code review I was told that my module has behavior and state at the same time. What does that mean, anyway?
Isn't that the whole point of object-oriented programming: that instead of operating on data directly, with logical circuitry made of functions, we choose to operate on these closed black boxes (encapsulation) using a set of neatly designed keys, switches and gears?
Wouldn't such a scheme naturally contain data (state) and logic (behavior) at the same time?
By module I mean a real Ruby module.
I designed something like this : How to design an application keeping SOLID principles and Design Patterns in mind
and implemented the commands in a module which I use as a mixin.
Whatever you are referring to, be it an object defined by a class (or type), a module, or anything else with code in it, state is data that is persisted over multiple calls to the thing. If it "remembers" anything between one execution and the next, then it has state.
Behavior, on the other hand, is code that manipulates or processes that state data, or non-state data that is used only during a single execution of the code (like parameter values passed to a function). Methods, subroutines or functions: anything that changes or does something is behavior.
Most classes, types, or whatever, have both data (state) and behavior, but....
Some classes or types are designed simply to carry data around. They are referred to as Data Transfer objects or DTOs, or Plain Old Container Objects (POCOs). They only have state, and, generally, have little or no behavior.
Other times, a class or type is constructed to hold general utility functions (like a math library). It will not maintain or keep any state between the many times it is called to perform one of its utilities. The only data used in it is data passed in as parameters for each call to the library function, and that data is discarded when the routine is finished. It has behavior, but no state.
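As a minimal C++ illustration of the two extremes described above (all names invented):

// State and behavior together: the object remembers data between calls.
class Counter {
    int count = 0;                       // state: persists across calls
public:
    void increment() { ++count; }        // behavior operating on that state
    int value() const { return count; }
};

// Behavior only: nothing survives between calls (a pure utility).
namespace math_util {
    inline int square(int x) { return x * x; }
}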
You're right in your thinking that OOP encapsulates the ideas of both behaviour and state and mixes the two things together, but from the wording of your question, I'm wondering if you have written a Ruby module (mixin, whatever you want to call it) that is stateful, such that there is the potential for state leakage across multiple uses of the same module.
Without seeing the code in question I can't really give you a full answer.
In Object-Oriented terminology, an object is said to have state when it encapsulates data (attributes, properties) and is said to have behavior when it offers operations (methods, procedures, functions) that operate (create, delete, modify, make calculations) on the data.
The same concepts can be extrapolated to a Ruby module: it has "state" if it defines data accessible within the module, and it has "behavior" in the form of operations provided which operate on that data.
