Array length in CAPL using symbolic constants

I'm defining an array in CAPL with 'int array[5]'.
It would be useful to define the length of this array using a define, as can be done in C:
#define LEN 5
int array[LEN];
Is there a way to obtain the same result with the tools at our disposal in the Vector CAPL language?

To obtain a similar result, use the const type qualifier:
const word LEN = 5;
int array[LEN];
You can use a datatype other than word (unsigned, 2 bytes), e.g. int (signed, 2 bytes).
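As a sketch of why this helps (the event and array contents here are arbitrary, invented for illustration), the constant can drive both the declaration and any loop over the array:
variables
{
  const word LEN = 5;
  int array[LEN];
}

on start
{
  int i;
  // changing LEN in one place resizes the array and adjusts the loop
  for (i = 0; i < LEN; i++)
  {
    array[i] = i;
  }
}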

How can I safely convert an ASCII integer back to its associated ASCII character using curses in C++?

I have not been able to find a reliable solution for my problem. What I'm simply trying to do is create a function which:
takes a row and column position in the terminal,
calls mvwinch(window_object, row, col), which returns an unsigned int corresponding to the character in the terminal at that position, and
returns the ASCII character associated with that unsigned int, effectively casting it back to a char.
Here is an example of my code in C++11:
char Kmenu::getChrfromW(size_t const y, size_t const x,
                        bool const save_cursor) const {
  size_t curr_y, curr_x;
  getyx(_win, curr_y, curr_x);
  char ich = mvwinch(_win, y, x);
  char ch = ich;
  if (save_cursor)
    wmove(_win, curr_y, curr_x);
  return ch;
}
If, for example, the character in the terminal at position 2,3 is the letter 'a', I want this function to return the letter 'a'.
I tried the solution described here:
Convert ASCII number to ASCII Character in C
which effectively casts an integer to char.
Unfortunately, what I get back is still the integer: testing with a screen filled with 'w's, I get back the integer 119.
The man page for the curses function mvwinch() describes it as returning chtype, which the compiler recognises as unsigned int.
Is there a built-in curses function which gives the char back directly without the cast from unsigned int, or some other way I can achieve this?
Edit: changed ch to ich, as in the actual code.
A chtype contains a character along with other data. The curses.h header has several symbols which are useful for extracting those bits. If you mask it with A_CHARTEXT and cast that to a char, you will get a character:
char c = (char)((A_CHARTEXT) & n);
Your example should not compile, since it declares ch twice. You may have meant this:
char Kmenu::getChrfromW(size_t const y, size_t const x,
                        bool const save_cursor) const {
  int curr_y, curr_x; // size_t is inappropriate...
  getyx(_win, curr_y, curr_x);
  char ch = (char)((A_CHARTEXT) & mvwinch(_win, y, x));
  // char ch = ich;
  if (save_cursor)
    wmove(_win, curr_y, curr_x);
  return ch;
}
The manual page for mvwinch mentions the A_CHARTEXT mask in the Attributes section, assuming the reader is familiar with things like that:
The following bit-masks may be AND-ed with characters returned by winch.

A_CHARTEXT     Bit-mask to extract character
A_ATTRIBUTES   Bit-mask to extract attributes
A_COLOR        Bit-mask to extract color-pair field information
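As a self-contained illustration (a minimal sketch assuming an ncurses build; the position and character are arbitrary), here is a program that writes a character and reads it back through the A_CHARTEXT mask:
#include <ncurses.h>
#include <cstdio>

int main() {
  initscr();
  mvaddch(2, 3, 'a');                 // put 'a' at row 2, column 3
  chtype raw = mvinch(2, 3);          // read it back: character plus attribute bits
  char c = (char)(raw & A_CHARTEXT);  // mask off the attributes, keep the character
  endwin();
  std::printf("read back: %c\n", c);  // prints: read back: a
  return 0;
}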

How to define a CAPL function taking a sysvar argument

In Vector CANoe, is it possible to define a function that takes a system variable argument like the system function TestWaitForSignalMatch()?
For my use case it is not sufficient to supply the current value of the system variable because I want to pass the system variable to TestWaitForSignalMatch() or similar system functions.
The CANoe help seems to show examples:
long TestWaitForSignalMatch (Signal aSignal, float aCompareValue, dword aTimeout); // form 1
long TestWaitForSignalMatch (sysvar aSysVar, float aCompareValue, dword aTimeout); // form 3
I tried like this:
void foo(sysvar aSysvar) {}
         ^
or this:
void foo(sysvar *aSysvar) {}
         ^
but I get a parse error at the marked position of the sysvar keyword in both cases.
I successfully created functions that take a signal argument, but unlike the syntax in the CANoe help I have to use a pointer.
This works:
void foo(signal *aSignal) {}
Obviously the documentation in the help is not correct on this point. It results in a parse error after the signal keyword when I omit the * as shown in the help:
void bar(signal aSignal) {}
                ^
So what's the correct syntax for defining a function that takes a sysvar argument? (if possible)
In case the version matters, I'm currently testing with CANoe 9.0.53 (SP1), 9.0.135 (SP7) and 10.0.125 (SP6).
You have to use the correct type. You have the following possibilities to declare system variables in functions:
Integer: sysvarInt*
Float: sysvarFloat*
String: sysvarString*
Integer Array: sysvarIntArray*
Float Array: sysvarFloatArray*
Data: sysvarData*
Examples:
void PutSysVarIntArrayToByteArray (sysvarIntArray * from, byte to[], word length)
{
  word ii;
  for (ii = 0; ii < length; ii++)
  {
    to[ii] = (byte)@from[ii];
  }
}
You can also write to the system variable:
void PutByteToSysVarInt (byte from, sysvarInt * to)
{
  @to = from;
}
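For completeness, a hypothetical call site (the namespace MyNS and variable Counter are invented for illustration); the system variable itself is passed, rather than its @-dereferenced value:
on start
{
  PutByteToSysVarInt(42, sysvar::MyNS::Counter);
}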
See also CANoe Help page "Test Features » XML » Declaration and Transfer of CAPL Test Case and Test Function Parameters"
Yes, you can. Just define your sysvar type a bit further; not just sysvar:
System variables, with indication of type and *. Possible types: Data, Int, Float, String, IntArray, and FloatArray. Example declaration: sysvarFloat * sv
You didn't specify the CANoe SP version, so it may not be supported in older versions. To make sure, search for "Function parameter" in Help/Index; you should get the full list of possible function parameters you can use in your current CANoe setup. It should start like this:
Integers (byte, word, dword, int, long, qword, int64). Example declaration: long l
Individual characters (char). Example declaration: char ch
Enums. Example declaration: enum Colors c
Associative fields. Example declaration: int m[float]. Associative fields are transferred as reference automatically.
.............
System variables, with indication of type and *. Possible types: Data, Int, Float, String, IntArray, and FloatArray. Example declaration: sysvarFloat * sv

gcc enum wrong value

I have an enum typedef, and when I assign a wrong value (not in the enum) and print it, it shows me a different value, not the one I assigned. Why?
This is the example:
#include <stdio.h>
#include <stdint.h>

#define attribute_packed_type(x) __attribute__((packed, aligned(sizeof(x))))

typedef enum attribute_packed_type(uint16_t) UpdateType_t
{
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
    // UPDATE_TYPE_FORCE_UINT16 = 0xFFFF,
} UpdateType_t;

int main(void)
{
    UpdateType_t myValue;
    uint16_t bad = 1234;
    myValue = bad;
    printf("myValue=%d\n", myValue);
    return 1;
}
and the output of this example is:
myValue=210
If I enable the UPDATE_TYPE_FORCE_UINT16 line in the enum, the output is:
myValue=1234
I don't understand why gcc does this. Is this a problem, a bug, or is it normal? If it's normal, why?
You've run into a case where gcc behaves oddly when you specify both packed and aligned attributes for an enumerated type. It's probably a bug. It's at least an undocumented feature.
A simplified version of what you have is:
typedef enum __attribute__((packed, aligned(2))) UpdateType_t {
    foo, bar
} UpdateType_t;
The values of the enumerated constants are all small enough to fit in a single byte, either signed or unsigned.
The behavior of the packed and aligned attributes on enum types is a bit confusing. The behavior of packed in particular is, as far as I can tell, not entirely documented.
My experiments with gcc 5.2.0 indicate that:
__attribute__((packed)) applied to an enumerated type causes it to be given the smallest size that can fit the values of all the constants. In this case, the size is 1 byte, so the range is either -128..+127 or 0..255. (This is not documented.)
__attribute__((aligned(N))) affects the size of the type. In particular, aligned(2) gives the enumerated type a size and alignment of 2 bytes.
The tricky part is this: if you specify both packed and aligned(2), then the aligned specification affects the size of the enumerated type, but not its range. Which means that even though an enum e is big enough to hold any value from 0 to 65535, any value exceeding 255 is truncated, leaving only the low-order 8 bits of the value.
Regardless of the aligned specification, the fact that you've used the packed attribute means that gcc will restrict the range of your enumerated type to the smallest range that can fit the values of all the constants. The aligned attribute can change the size, but it doesn't change the range.
In my opinion, this is a bug in gcc. (And clang, which is largely gcc-compatible, behaves differently.)
The bottom line is that by packing the enumeration type, you've told the compiler to narrow its range. One way to avoid that is to define an additional constant with a value of 0xFFFF, which you show in a comment.
In general, a C enum type is compatible with some integer type. The choice of which integer type to use is implementation-defined, as long as the chosen type can represent all the specified values.
According to the latest gcc manual:
Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values.
On some targets, -fshort-enums is the default; this is determined by the ABI.
Also quoting the gcc manual:
The packed attribute specifies that a variable or structure field should have the smallest possible alignment -- one byte for a variable, and one bit for a field, unless you specify a larger value with the aligned attribute.
Here's a test program, based on yours but showing some extra information:
#include <stdio.h>

int main(void) {
    enum __attribute__((packed, aligned(2))) e { foo, bar };
    enum e obj = 0x1234;
    printf("enum e is %s, size %zu, alignment %zu\n",
           (enum e)-1 < (enum e)0 ? "signed" : "unsigned",
           sizeof (enum e),
           _Alignof (enum e));
    printf("obj = 0x%x\n", (unsigned)obj);
    return 0;
}
This produces a compile-time warning:
c.c: In function 'main':
c.c:4:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
enum e obj = 0x1234;
^
and this output:
enum e is unsigned, size 2, alignment 2
obj = 0x34
The simplest change to your program would be to add the
UPDATE_TYPE_FORCE_UINT16 = 0xFFFF
that you've commented out, forcing the type to have a range of at least 0 to 65535. But there's a more portable alternative.
Standard C doesn't provide a way to specify the representation of an enum type. gcc does, but as we've seen it's not well defined, and can yield surprising results. But there is an alternative that doesn't require any non-portable code or assumptions beyond the existence of uint16_t:
enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
};
typedef uint16_t UpdateType_t;
The anonymous enum type serves only to define the constant values (which are of type int, not of the enumeration type). You can declare objects of type UpdateType_t and they'll have the same representation as uint16_t, which (I think) is what you really want.
Since C enumeration constants aren't closely tied to their type anyway (for example UPDATE_A is of type int, not of the enumerated type), you might as well use the enum declaration just to define the values of the constants, and use whatever integer type you like to declare variables.
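To confirm the portable alternative behaves as intended, a minimal sketch assembled from the snippets above:
#include <stdio.h>
#include <stdint.h>

enum { UPDATE_A = 4, UPDATE_B = 5, UPDATE_C = 37, UPDATE_D = 43 };
typedef uint16_t UpdateType_t;

int main(void) {
    UpdateType_t myValue = 1234;          /* no enum range for gcc to truncate against */
    printf("myValue=%d\n", myValue);      /* prints: myValue=1234 */
    printf("size=%zu\n", sizeof myValue); /* prints: size=2 */
    return 0;
}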

Integer or int in Processing?

When I'm creating an integer variable in Processing, should I use int or Integer? They both seem to work the same way. Is it optional which one you would use?
// The same thing?
int a = 5;
Integer b = 4;
// I prefer Integer because it looks like String:
Integer c = 95;
String d = "Hello!";
// Then again, int looks like char:
int e = 3;
char f = 'a';
I'm thinking it's probably just what one prefers, though int is used more?
They have different uses. int is a primitive type while Integer is an object.
The primitive int has a default value of 0, while an Integer defaults to null. Primitives also use much less memory: an int is a single 32-bit value, while an Integer is a full object and carries the extra overhead of an object header and a reference.
Stick to using an int unless you have a need for a null integer or some other requirement.
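As a small illustration of the default-value difference, a Processing sketch (the field names are arbitrary):
int primitiveCount;   // primitive field, defaults to 0
Integer objectCount;  // wrapper field, defaults to null

void setup() {
  println(primitiveCount);       // prints: 0
  println(objectCount == null);  // prints: true
  // using objectCount in arithmetic here would throw a NullPointerException
}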
For reference:
https://processing.org/reference/int.html
https://processing.org/tutorials/objects/
The int type is a primitive data type. That means you can use it in any place you can use a primitive literal, which you can think of as a typed-out number, like 1, 2, 3, 99, -15, etc.
However, you can't use an int in places you have to use an Object. For example, this code will not compile:
void setup(){
  ArrayList<int> list = new ArrayList<int>();
}
This code won't compile, because the generic arguments require a class, and int is a primitive, not a class. So how do we get an ArrayList of ints?
That's where primitive wrapper Objects come into play. They are Objects that wrap a primitive, such as int. That way you can correct the above code:
void setup(){
  ArrayList<Integer> list = new ArrayList<Integer>();
}
Other primitive wrapper classes include Float, Boolean, Character, etc.
However, it gets more complicated thanks to auto-boxing and auto-unboxing. Basically, Java (and therefore Processing) will automatically convert between primitive values and their primitive wrapper classes. That's why you can do stuff like this:
void setup(){
  int primitive = 7;
  Integer wrapper = 7;
  println(primitive == wrapper);
}
So, for your purposes, it probably doesn't matter which one you use because Java (and therefore Processing) will automatically convert it for you.
However, using Integer instead of int might create Objects that you don't really need, and more importantly, it might prevent you from using Processing.js mode.
Recommended reading:
http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
http://en.wikipedia.org/wiki/Primitive_wrapper_class
http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html

Setting register values in PIC16F876 using Hi Tech PICC

I am using MPLAB X and the HI-TECH PICC compiler. My target chip is a PIC16F876. By looking at the pic16f876.h include file, it appears that it should be possible to set the system registers of the chip by referring to them by name.
For example, within the CCP1CON register, bits 0 to 3 set how the CCP and PWM modules work. By looking at the pic16f876.h file, it looks like it should be possible to refer to these 4 bits alone, without changing the value of the rest of the CCP1CON register.
However, I have tried to refer to these 4 bits in a variety of ways with no success.
I have tried:
CCP1CON.CCP1M = 0xC0; this results in "error: struct/union required"
CCP1CON:CCP1M = 0xC0; this results in "error: undefined identifier "CCP1M""
but both have failed. I have read through the HI-TECH PICC compiler manual, but cannot see how to do this.
From the pic16f876.h file, it looks to me as though I should be able to refer to these subsets within the system registers by name, as they are defined in the .h file.
Does anyone know how to accomplish this?
Excerpt from pic16f876.h:
// Register: CCP1CON
volatile unsigned char CCP1CON @ 0x017;
// bit and bitfield definitions
volatile bit CCP1Y @ ((unsigned)&CCP1CON*8)+4;
volatile bit CCP1X @ ((unsigned)&CCP1CON*8)+5;
volatile bit CCP1M0 @ ((unsigned)&CCP1CON*8)+0;
volatile bit CCP1M1 @ ((unsigned)&CCP1CON*8)+1;
volatile bit CCP1M2 @ ((unsigned)&CCP1CON*8)+2;
volatile bit CCP1M3 @ ((unsigned)&CCP1CON*8)+3;
#ifndef _LIB_BUILD
volatile union {
    struct {
        unsigned CCP1M  : 4;
        unsigned CCP1Y  : 1;
        unsigned CCP1X  : 1;
    };
    struct {
        unsigned CCP1M0 : 1;
        unsigned CCP1M1 : 1;
        unsigned CCP1M2 : 1;
        unsigned CCP1M3 : 1;
    };
} CCP1CONbits @ 0x017;
#endif
You need to access the bitfield members through an instance of a struct. In this case, that is CCP1CONbits. Because it is a bitfield, you only need to supply the number of significant bits defined in the bitfield, not the full eight bits of the register.
So:
CCP1CONbits.CCP1M = 0x0c;
Should be the equivalent of what you are trying to do. If you want to set all eight bits at once you can use CCP1CON = 0x0c. That would set the CCP1M bits to 0x0c and all the other bits to zero.
The header you gave also has individual bit symbols, so you could do this too:
CCP1M0 = 0;
CCP1M1 = 0;
CCP1M2 = 1;
CCP1M3 = 1;
Although the bitfield approach is cleaner.
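If you ever need the same effect without the bitfield union, a plain read-modify-write on the register does the job (a sketch; 0x0C is the CCP1M pattern from above):
// set CCP1M (bits 3:0) to 0x0C while leaving bits 7:4 untouched
CCP1CON = (CCP1CON & 0xF0) | 0x0C;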
