If a VAR_INPUT is of INTERFACE type, is the value pass-by-reference or pass-by-value? - twincat

In the TwinCAT and CodeSys IEC-61131 programming environments, it's possible to declare POU VAR_INPUTs using an INTERFACE as a type specification. I believe the support for interfaces in TwinCAT and CoDeSys is an extension to the standard IEC-61131 language definition.
Question 1: When the POU is invoked, do interface VAR_INPUTs have pass-by-value (i.e. the input FB's state is copied on each execution of the called FB) or pass-by-reference semantics?
Question 2: Where is this behaviour specified or documented?

An interface variable itself holds a value, but that value doesn't carry the function block it refers to. It's implemented as a pointer to one of the instance's vtable pointers. It's used as if it were a reference to a function block that implements the interface, but the address it holds is NOT that of the function block (that's the critical difference). That's because of the implementation:
                          FB Instance
                          +--------------------+
                          | * PVOID vtable 1   |
interface (PVOID) ------> | * PVOID vtable 2 --|----> VTABLE
                          | * ...              |      * method 1
                          | * PVOID vtable n   |      * ...
                          | * data fields      |      * method m
                          +--------------------+
So, if you read the contents of an interface variable, you'll get an address somewhere within the function block instance: the address of one of the vtable pointers inside the instance. That particular vtable is the one that implements the methods of the interface (i.e. is compatible with the interface).
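For readers more at home in C++, here is a rough, purely illustrative model of that layout (the struct and names below are invented for the sketch; this is not how the code generator actually names anything):
#include <cstdio>

// Invented stand-ins for the layout described above.
struct VTable { void (*method1)(); /* ... more method pointers ... */ };

struct FBInstance {
    const VTable* vtable1;   // one vtable pointer per implemented interface
    const VTable* vtable2;
    int           someField; // data fields follow the vtable pointers
};

int main() {
    static const VTable vt1 = {}, vt2 = {};
    FBInstance fb = { &vt1, &vt2, 42 };

    // An "interface value" is the address of one of the vtable-pointer slots
    // inside the instance, not the address of the instance itself.
    const VTable* const* ifc = &fb.vtable2;

    std::printf("instance at %p, interface value %p\n",
                static_cast<void*>(&fb), static_cast<const void*>(ifc));
    // The printed interface value lies inside [&fb, &fb + sizeof fb), which is
    // exactly what F_CheckInterfaceRange below verifies in Structured Text.
    return 0;
}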
We can check that this is so for some type FB_MyFB:
INTERFACE I_Derived EXTENDS __SYSTEM.QueryInterface
END_INTERFACE
FUNCTION_BLOCK FB_MyFB IMPLEMENTS I_Derived
...
END_FUNCTION_BLOCK
FUNCTION F_CheckInterfaceRange : BOOL
VAR_INPUT
    fb : REFERENCE TO FB_MyFB;
END_VAR
VAR
    ifc    : I_Derived;         // interface value referring to fb
    pIfc   : POINTER TO PVOID;  // points at the interface variable itself
    ifcval : PVOID;             // raw contents of the interface variable
END_VAR
ifc    := fb;
pIfc   := ADR(ifc);
ifcval := pIfc^;                // the address the interface actually stores
// That address lies somewhere inside the fb instance: it is the address of
// one of the instance's vtable pointers.
F_CheckInterfaceRange :=
    ifcval >= ADR(fb)
    AND_THEN ifcval <= (ADR(fb) + SIZEOF(FB_MyFB) - SIZEOF(PVOID));
END_FUNCTION
It is seemingly impossible to get the address of the instance directly. Most likely that's an arbitrary limitation: all the vtable pointers must be valid and probably belong to a certain memory area, so you could imagine starting at whatever the interface points to and walking backwards until you stop getting valid pointers; those are the bounds. The instance starts with a vtable pointer, so one of the pointers you found will be it. Then examine how the pointers look in instances of various library FB types, look at how the pointed-to vtables look, and some workable heuristic would probably pop up that might not even be as expensive as a __QUERYINTERFACE call. The CoDeSys 3 code generator is abysmal.
The supported way, instead, is for the FB to implement an interface extending __SYSTEM.QueryInterface. Then __QUERYPOINTER can be used on that interface to obtain the FB's THIS pointer.
You could imagine that __QUERYPOINTER looks a bit like:
FUNCTION __QUERYPOINTER
VAR_INPUT
    ifc : __SYSTEM.QueryInterface;
    ptr : REFERENCE TO PVOID;
END_VAR
ptr := ifc.__QUERYTHIS();
END_FUNCTION
The __SYSTEM.QueryInterface interface provides a method that casts between interfaces implemented by an FB, as long as both interfaces derive from __SYSTEM.QueryInterface, as well as a method (imagine it's called __QUERYTHIS) that returns THIS.
These methods are generated by the compiler.
Imagine that the rest of the implementation is a bit like:
INTERFACE __SYSTEM.QueryInterface
    PROPERTY _This_ : POINTER TO BYTE
        METHOD _This__GET : POINTER TO BYTE // that's how CoDeSys 3 implements getters/setters
    END_PROPERTY
    ...
END_INTERFACE

FUNCTION_BLOCK FB_Queryable IMPLEMENTS I_Queryable
    PROPERTY _This_ : POINTER TO BYTE
        METHOD _This__GET : POINTER TO BYTE
            _This__GET := THIS;
        END_METHOD
END_FUNCTION_BLOCK
You could similarly implement __QUERYINTERFACE yourself (call it F_QueryInterface2; this won't be as easy, because the real __QUERYINTERFACE gets help from the compiler):
FUNCTION F_QueryInterface2 : BOOL
VAR_INPUT
    from : I_Queryable;
    to   : REFERENCE TO I_Interface2;
END_VAR
IF from <> 0 THEN
    // the compiler would translate __QUERYINTERFACE(from, to) to something like:
    F_QueryInterface2 := from._QueryInterface_(2, ADR(to));
END_IF
END_FUNCTION

INTERFACE I_Queryable // cont'd
    ...
    METHOD _QueryInterface_ : BOOL
    VAR_INPUT
        typeId : INT;
        to     : POINTER TO PVOID; // pointer to interface
    END_VAR
    END_METHOD
END_INTERFACE

INTERFACE I_Interface2 EXTENDS I_Queryable
    ...
END_INTERFACE

FUNCTION_BLOCK FB_MoreQueryable IMPLEMENTS I_Interface1, I_Interface2
    METHOD _QueryInterface_ : BOOL
    VAR_INPUT
        typeId : INT;
        to     : POINTER TO U_Interfaces; // pointer to interface, viewed as a union
    END_VAR
        to^.PVOID := 0;
        CASE typeId OF
            1: to^.Interface1 := THIS^;
            2: to^.Interface2 := THIS^;
        END_CASE
        _QueryInterface_ := to^.PVOID <> 0;
    END_METHOD
END_FUNCTION_BLOCK

TYPE U_Interfaces :
UNION
    PVOID      : PVOID;
    Interface1 : I_Interface1;
    Interface2 : I_Interface2;
END_UNION
END_TYPE

Interface variables are always treated as references in CoDeSys and TwinCAT. This should include VAR_INPUT variables.
TwinCAT reference:
CoDeSys reference:
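In C++ terms the observable semantics are comparable to passing a raw pointer by value: the handle is copied, but it still refers to the caller's instance (a loose analogy only, not TwinCAT code; the names below are invented):
struct IFoo {                       // plays the role of the INTERFACE type
    virtual void Bar() = 0;
    virtual ~IFoo() = default;
};

struct FB_Foo : IFoo {              // plays the role of the function block
    int state = 0;
    void Bar() override { ++state; }
};

// The "interface input" is copied (pass-by-value of the handle), but the copy
// still points at the caller's instance, so the call mutates that instance.
void Consumer(IFoo* in) { in->Bar(); }

int main() {
    FB_Foo fb;
    Consumer(&fb);
    return fb.state;                // 1: the original instance was modified
}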

Related

incomplete type support for list

I implemented a list with a similar API to std::list, but it fails to compile:
struct A { my_list<A> v; };
The list has a base class with a base_node member that holds the prev and next fields; node (which derives from base_node) holds the T value (the list's template parameter). The compilation error is:
error: ‘node<T>::val’ has incomplete type
T val;
^~~
note: forward declaration of ‘struct A’
I looked at the GCC code and it seems like they hold a buffer of sizeof(T) bytes, so I'm not sure how it works for them. How does std::list manage to store A in its nodes?
[UPDATE]
struct A { };

template <typename T>
struct B : public A
{
    using B_T = B<T>;
    T t;
};

template <typename T>
class C
{
    using B_T = typename B<T>::B_T; // this fails to compile
    //using B_T = B<T>;             // this compiles fine
};

struct D { C<D> d; };
In your simplified example
struct A { };

template <typename T>
struct B : public A
{
    using B_T = B<T>;
    T t;
};

template <typename T>
class C
{
    using B_T = typename B<T>::B_T; // this fails to compile
    //using B_T = B<T>;             // this compiles fine
};

struct D { C<D> d; };
you're running into the gotchas of class template instantiation.
First, note that a class definition has essentially two parse passes (not necessarily implemented this way):
First determine the types of base classes and class members. During this process, the class is considered incomplete, although previously declared bases and members can be used by later code in the definition.
In some pieces of code within the class definition which do not affect the types of bases or members, the class is considered complete. These places include member function definition bodies, member function default arguments, static member initializers, and non-static member default initializers.
For example:
struct S {
    std::size_t n = sizeof(S);                   // OK, S is complete
    std::size_t f() const { return sizeof(S); }  // OK, S is complete
    using array_type = int[sizeof(S)];           // Error, S incomplete
    void f(int (&)[sizeof(S)]);                  // Error, S incomplete
};
Templates make this trickier because they make it easier to accidentally indirectly use a class which is not yet complete. This particularly comes up in CRTP code, but this example is another simple way it can happen.
The basic way class template instantiation works (a bit simplified) is:
Just naming a class template specialization, like X<Y>, does not by itself cause the class template to be instantiated.
Using a class template specialization in ways valid for an incomplete class type does not cause the template to be instantiated.
Using a class template specialization in any way which requires the type to be complete, like naming a member of the class or defining a variable with the class type (not pointer or reference), causes an implicit instantiation of the template.
Instantiating a class template involves determining the types of the base classes and members, much like the "first pass" of class definition parsing. All those base and member types must be valid at that time. Instantiating member definitions is for the most part delayed until each member is needed, but there is no selective instantiation of member types in this step: either they are all valid, or the instantiation is an error.
The process can be recursive when a base class or member declaration involves another template specialization. But during that other instantiation, the class type for the original instantiation context is considered incomplete.
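As a tiny standalone illustration of these rules (Holder and X are hypothetical names, not from the question):
template <typename T>
struct Holder { T member; };   // instantiating this requires T to be complete

struct X;                      // X is declared but incomplete here

Holder<X>* p = nullptr;        // OK: merely naming Holder<X> does not instantiate it
// Holder<X> h1;               // error if uncommented: instantiation needs X complete

struct X { int i; };           // X becomes complete

Holder<X> h2;                  // OK: instantiated here, where X is complete

int main() { (void)p; (void)h2; }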
Looking at the example, struct D defines a member C<D> d; which requires C<D> to be complete, so we attempt to instantiate the specialization C<D>. So far, D is incomplete.
There's just one member of C<D>, which is
using B_T = typename B<D>::B_T;
Since this names a member of another class template specialization B<D>, now we have to attempt to instantiate that B<D> specialization. So far, D and C<D> are still incomplete.
B<D> has one base class, which is just A. It has two members:
using B_T = B<D>;
D t;
The member type B<D>::B_T is fine since just naming B<D> doesn't require a complete type. But instantiating B<D> requires both members to be well-formed. A class member can't have an incomplete class as its type, but type D is still incomplete right now.
As you noticed, you can work around this by avoiding naming the member B<T>::B_T and directly using the type B<T> instead. Or you could move the original B_T definition to some other base class or traits struct, and make sure its new location is one that can be instantiated with an incomplete type as argument.
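A sketch of the traits-struct variant of that workaround (B_traits is an invented name; the rest mirrors the simplified example):
struct A { };

template <typename T> struct B;        // forward declaration so B_traits can name B<T>

// The alias now lives in a struct whose instantiation only *names* B<T>,
// which is allowed while T (and B<T>) are still incomplete.
template <typename T>
struct B_traits {
    using B_T = B<T>;
};

template <typename T>
struct B : public A
{
    using B_T = typename B_traits<T>::B_T;
    T t;
};

template <typename T>
class C
{
    using B_T = typename B_traits<T>::B_T; // fine: only B_traits<D> gets instantiated
};

struct D { C<D> d; };                   // now compiles

int main() { D d; (void)d; }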
Many templates just assume their arguments must always be complete types. But they can be useful in more situations if they're carefully written with considerations about how the code uses template arguments and other indirectly used dependent types, which might be incomplete at the point of instantiation.
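For the original std::list question, the same lazy-instantiation rules are what make a node-based container usable with an incomplete element type. A minimal sketch of such a container (tiny_list is invented for illustration and is deliberately non-copyable to keep it short):
template <typename T>
class tiny_list {
    struct node_base { node_base* next = nullptr; };
    struct node : node_base { T value; };  // T is needed only when node is instantiated

    node_base head_;                       // stores node_base, not node, so the class
                                           // itself never needs T to be complete
public:
    tiny_list() = default;
    tiny_list(const tiny_list&) = delete;            // keep ownership handling trivial
    tiny_list& operator=(const tiny_list&) = delete;

    // Member function bodies are instantiated lazily, on first use; by then
    // the caller must have a complete T anyway.
    void push_front(const T& v) {
        node* n = new node{};
        n->value = v;
        n->next = head_.next;
        head_.next = n;
    }

    ~tiny_list() {
        while (node* n = static_cast<node*>(head_.next)) {
            head_.next = n->next;
            delete n;
        }
    }
};

struct A2 { tiny_list<A2> v; };   // OK: instantiating tiny_list<A2> does not
                                  // instantiate tiny_list<A2>::node

int main() {
    A2 a;                         // A2 is complete here, so destroying it (which
    (void)a;                      // does use node) instantiates node<A2> safely
    tiny_list<int> li;
    li.push_front(1);
}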

how to define a CAPL function taking a sysvar argument

In Vector CANoe, is it possible to define a function that takes a system variable argument like the system function TestWaitForSignalMatch()?
For my use case it is not sufficient to supply the current value of the system variable because I want to pass the system variable to TestWaitForSignalMatch() or similar system functions.
The CANoe help seems to show examples:
long TestWaitForSignalMatch (Signal aSignal, float aCompareValue, dword aTimeout); // form 1
long TestWaitForSignalMatch (sysvar aSysVar, float aCompareValue, dword aTimeout); // form 3
I tried like this
void foo(sysvar aSysvar) {}
^
or this
void foo(sysvar *aSysvar) {}
^
but I get a parse error at the marked position of the sysvar keyword in both cases.
I successfully created functions that take a signal argument, but unlike the syntax in the CANoe help I have to use a pointer.
This works:
void foo(signal *aSignal) {}
Obviously the documentation in the help is not correct on this point. It results in a parse error after the signal keyword when I omit the * as shown in the help:
void bar(signal aSignal) {}
^
So what's the correct syntax for defining a function that takes a sysvar argument? (if possible)
In case the version matters, I'm currently testing with CANoe 9.0.53(SP1), 9.0.135(SP7) or 10.0.125(SP6).
You have to use the correct type. You have the following possibilities to declare system variables in functions:
Integer: sysvarInt*
Float: sysvarFloat*
String: sysvarString*
Integer Array: sysvarIntArray*
Float Array: sysvarFloatArray*
Data: sysvarData*
Examples:
void PutSysVarIntArrayToByteArray (sysvarIntArray * from, byte to[], word length)
{
  word ii;
  for (ii = 0; ii < length; ii++)
  {
    to[ii] = (byte)#from[ii];
  }
}
You can also write to the system variable:
void PutByteToSysVarInt (byte from, sysvarInt * to) {
  #to = from;
}
See also CANoe Help page "Test Features » XML » Declaration and Transfer of CAPL Test Case and Test Function Parameters"
Yes, you can. Just be more specific about the sysvar type; plain sysvar is not enough.
System variables, with indication of type and *. Possible types:
Data, Int, Float, String, IntArray, and FloatArray. Example
declaration: sysvarFloat * sv
Depending on your CANoe SP version this may not be supported in older versions. To make sure, search for "Function parameter" in Help/Index; you should get the full list of function parameter types you can use in your current CANoe setup. It should start like this:
Integers (byte, word, dword, int, long, qword, int64) Example
declaration: long l
Individual characters (char) Example declaration: char ch
Enums Example declaration: enum Colors c
Associative fields Example declaration: int m[float]. Associative
fields are transferred as reference automatically.
.............
System variables, with indication of type and *. Possible types:
Data, Int, Float, String, IntArray, and FloatArray. Example
declaration: sysvarFloat * sv

how to get concrete class object from shared_ptr to abstract class objects

I have a vector of shared_ptrs to objects that are derived from an Abstract class:
std::vector<std::shared_ptr<Abstract>> objects;
the following fails:
Concrete* c = objects[0].get();
With:
error: invalid conversion from ‘Abstract*’ to ‘Concrete*’
How can I get access to the object of the concrete type?
If you must get a pointer to the derived class, use dynamic_cast.
Concrete* c = dynamic_cast<Concrete*>(objects[0].get());
if ( c != nullptr )
{
// Use c
}
If you just need to access a member function or member variable and don't need the raw pointer itself, you can use std::dynamic_pointer_cast.
auto cptr = std::dynamic_pointer_cast<Concrete>(objects[0]);
if ( cptr )
{
// Use cptr
}
Using std::dynamic_pointer_cast also keeps the benefits you get from smart pointers in general, since the returned pointer shares ownership with the original.
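Putting both variants together in a self-contained sketch (Abstract and Concrete are filled in with minimal made-up members):
#include <iostream>
#include <memory>
#include <vector>

struct Abstract {
    virtual ~Abstract() = default;
    virtual void hello() const = 0;
};

struct Concrete : Abstract {
    int extra = 42;  // something only Concrete has
    void hello() const override { std::cout << "Concrete\n"; }
};

int main() {
    std::vector<std::shared_ptr<Abstract>> objects;
    objects.push_back(std::make_shared<Concrete>());

    // Raw-pointer downcast; yields nullptr if the object is not a Concrete.
    if (Concrete* c = dynamic_cast<Concrete*>(objects[0].get()))
        std::cout << c->extra << '\n';

    // shared_ptr downcast; the result shares ownership with objects[0].
    if (auto cptr = std::dynamic_pointer_cast<Concrete>(objects[0]))
        cptr->hello();
}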

C++11 Pointer (void**)&data

I'm still learning C++, and I'm doing some API work, but I'm having trouble parsing this pointer arrangement.
void* data;
res = npt.receive(0x1007, params, 1, response, (void**)&data, size);
uint32_t* op = (uint32_t*)data;
uint32_t num = *op;
op++;
Can anyone explain what is going on with that void pointer? I see it being defined, it does something in the res line (maybe it gets initialized?), then it's copied to a uint32_t pointer and dereferenced into num. Can anyone help me parse the (void**)&data expression?
Pay attention when you use the void pointer:
The void type of pointer is a special type of pointer. In C++, void represents the absence of type. Therefore, void pointers are pointers that point to a value that has no type (and thus also an undetermined length and undetermined dereferencing properties).
This gives void pointers a great flexibility, by being able to point to any data type, from an integer value or a float to a string of characters. In exchange, they have a great limitation: the data pointed to by them cannot be directly dereferenced (which is logical, since we have no type to dereference to), and for that reason, any address in a void pointer needs to be transformed into some other pointer type that points to a concrete data type before being dereferenced.
From C++ reference
Firstly: What is npt?
Secondly: Guessing what npt could be some explanation:
// Declare a pointer to void named data
void* data;
// npt.receive takes as 5th parameter a pointer to pointer to void,
// which is why you provide the address of the void* using &data.
// The void ** appears to be unnecessary unless the data type of the
// param is not void **
// What is "npt"?
res = npt.receive(0x1007, params, 1, response, (void**)&data, size);
// ~.receive initialized data with contents.
// Now make the uint32_t data usable by casting void * to uint32_t*
uint32_t* op = (uint32_t*)data;
// Use the data by dereferencing it.
uint32_t num = *op;
// Pointer arithmetic: Move the pointer by sizeof(uint32_t).
// Did receive fill in an array?
op++;
Update
Signature of receive is:
<whatever return type> receive(uint16_t code, uint32_t* params, uint8_t nparam, Container& response, void** data, uint32_t& size)
So the data parameter is already of type void**, and the explicit cast with (void**) is not necessary.
Considering the usage, the received data appears to be an array of uint32_t values IN THIS CASE!
void as a type means "no type": no type information regarding size and alignment is available. It exists mainly for lexical and syntactic consistency.
In conjunction with *, it can be used as a pointer to data of unknown type, and it must be explicitly cast to another pointer type (which adds the type information) before any use.
You usually see a void* or void** in an API if the specific data type isn't known or only plain byte data is passed around.
To understand this, read up on C-style type erasure using void*.
As background, also read up on:
Dynamically allocated C arrays.
Pointers and pointer arithmetic.
From the code, npt.receive tells you in its return code whether it received anything successfully, but it also needs to give you what it received. It has a pointer that it wants to pass back, so you have to tell it where your pointer is so that it can fill it in, hence (void **), a pointer to a pointer, being the address of your pointer, &data.
When you have received it, you know as the developer that what it points to is actually uint32_t data, so you copy the void pointer into one that points to uint32_t. In fact, this step is unnecessary, since you could have cast the address of a uint32_t pointer to void** in the call above, but we'll let that slide.
Now that you have told the compiler that the pointer points to 32-bit numbers, you can take the number at the other end of that pointer (*op) and store it in a local variable. Again, this is unnecessary, as *op could be used anywhere num is subsequently used.
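A minimal, self-contained sketch of this out-parameter pattern (fake_receive is a made-up stand-in for npt.receive, whose real implementation we don't know):
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Made-up stand-in for npt.receive: allocates a buffer, fills it with
// uint32_t values, and hands the caller the pointer through void** and
// the byte count through a reference.
bool fake_receive(void** data, std::uint32_t& size) {
    std::uint32_t* buf =
        static_cast<std::uint32_t*>(std::malloc(3 * sizeof(std::uint32_t)));
    if (buf == nullptr) return false;
    buf[0] = 3; buf[1] = 42; buf[2] = 7;  // pretend this came off the wire
    *data = buf;                          // fill in the caller's pointer
    size = 3 * sizeof(std::uint32_t);
    return true;
}

int main() {
    void* data = nullptr;
    std::uint32_t size = 0;
    if (!fake_receive(&data, size))       // pass the address of our pointer
        return 1;

    // The caller knows (from the protocol) that the bytes are uint32_t values.
    std::uint32_t* op = static_cast<std::uint32_t*>(data);
    std::uint32_t num = *op;              // first value
    op++;                                 // advance to the next uint32_t
    std::printf("num=%u next=%u size=%u bytes\n",
                static_cast<unsigned>(num),
                static_cast<unsigned>(*op),
                static_cast<unsigned>(size));

    std::free(data);
    return 0;
}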
Hope this helps.

Interfacing Ada enumerations and C enums

Suppose the following is defined in C code:
typedef enum { A=1, B=2 } option_type;
void f(option_type option);
Suppose we also have this Ada code:
type Option_Type is (A, B);
for Option_Type'Size use Interfaces.C.int'Size;
for Option_Type use (A=>1, B=>2);
X: Option_Type := A;
Which of the following pieces of code is correct (according to the RM)?
-- First code
declare
   procedure F (Option : Option_Type)
     with Import, Convention => C, External_Name => "f";
begin
   F (X);
end;
or
-- Second code
declare
   procedure F (Option : Interfaces.C.unsigned)
     with Import, Convention => C, External_Name => "f";
   function Conv is new Ada.Unchecked_Conversion (Option_Type, Interfaces.C.unsigned);
begin
   F (Conv (X));
end;
I think both the first and the second Ada fragments are correct, but I am not sure.
Neither is 100% correct.
In C:
typedef enum { A=1, B=2 } option_type;
In Ada:
type Option_Type is (A, B);
for Option_Type'Size use Interfaces.C.int'Size;
for Option_Type use (A=>1, B=>2);
The Ada code assumes that the C type option_type has the same size as a C int. Your second snippet assumes it has the same representation as a C unsigned int.
Neither assumption is supported by the C standard.
Quoting the N1570 draft, section 6.7.2.2, paragraph 4:
Each enumerated type shall be compatible with char, a signed
integer type, or an unsigned integer type. The choice of type is
implementation-defined, but shall be capable of representing the
values of all the members of the enumeration.
So the C type option_type could be as narrow as 1 byte or as wide as the widest supported integer type (typically 8 bytes), and it could be either signed or unsigned. C restricts the values of the enumeration constants to the range of type int, but that doesn't imply that the type itself is compatible with int -- or with unsigned int.
If you have knowledge of the characteristics of the particular C compiler you're using (the phrase "implementation-defined" means that those characteristics must be documented), then you can rely on those characteristics -- but your code is going to be non-portable.
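If you do rely on the implementation's choice, it helps to pin the assumption down at compile time on the C/C++ side of the binding; a sketch (check_option_type.cpp is an invented helper translation unit, compiled with the same toolchain as the C code):
// check_option_type.cpp -- documents and enforces what the Ada binding assumes.
#include <type_traits>

typedef enum { A = 1, B = 2 } option_type;  // as in the C header in question

// The Ada 'Size clause assumes option_type is exactly as wide as int.
static_assert(sizeof(option_type) == sizeof(int),
              "Ada binding assumes option_type has the size of int");

// The Unchecked_Conversion variant additionally assumes an unsigned representation.
static_assert(std::is_unsigned<std::underlying_type<option_type>::type>::value,
              "Ada binding assumes option_type is represented as an unsigned type");

int main() { return 0; }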
I'm not aware of any completely portable way to define an Ada type that's compatible with a given C enumeration type. (I've been away from Ada for a long time, so I could be missing something.)
The only portable approach I can think of is to write a C wrapper function that takes an argument of a specified integer type and calls f(). The conversion from the integer type to option_type is then handled by the C compiler, and the wrapper exposes a function with an argument of known type to Ada.
void f_wrapper(int option) {
f(option); /* the conversion from int to option_type is implicit */
}
