How to construct a tuple/record in ATS?

For instance, what is the syntax for a type in ATS that corresponds to the following struct in C:
struct{ char *name; int age; double height; }
Also, how can a value of such a type be constructed in ATS?

Here's a translation:
#{ name= string, age= int, height= double }
The value of such a type can be constructed as follows:
val x = #{name = "hello", age= 3, height= 2.0}
However, note that this is not a literal translation. For instance, the C type char* is a pointer to char, and that usually means zero-terminated UTF-8 strings, which maps to the type string in ATS. But it could well mean a buffer, say, in other contexts.
Also, note that in C, you usually initialize records on a field-by-field basis (that is, you declare a variable of record type, and then assign its fields one by one). You can do mostly the same thing in ATS, provided that you are prepared to deal with the typechecker complaints.
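For comparison, here is the C side of that last remark as a minimal sketch (the struct tag person is made up for illustration), showing both field-by-field initialization and the all-at-once C99 designated-initializer form:
#include <stdio.h>

struct person { char *name; int age; double height; };

int main(void) {
    /* field-by-field initialization, as described above */
    struct person p;
    p.name = "hello";
    p.age = 3;
    p.height = 2.0;

    /* or all at once, with a C99 designated initializer */
    struct person q = { .name = "hello", .age = 3, .height = 2.0 };

    printf("%s %d %.1f\n", p.name, q.age, q.height);
    return 0;
}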


how to define a CAPL function taking a sysvar argument

In Vector CANoe, is it possible to define a function that takes a system variable argument like the system function TestWaitForSignalMatch()?
For my use case it is not sufficient to supply the current value of the system variable because I want to pass the system variable to TestWaitForSignalMatch() or similar system functions.
The CANoe help seems to show examples:
long TestWaitForSignalMatch (Signal aSignal, float aCompareValue, dword aTimeout); // form 1
long TestWaitForSignalMatch (sysvar aSysVar, float aCompareValue, dword aTimeout); // form 3
I tried this:
void foo(sysvar aSysvar) {}
         ^
or this:
void foo(sysvar *aSysvar) {}
         ^
but I get a parse error at the marked position of the sysvar keyword in both cases.
I successfully created functions that take a signal argument, but unlike the syntax in the CANoe help I have to use a pointer.
This works:
void foo(signal *aSignal) {}
Obviously the documentation in the help is not correct on this point. It results in a parse error after the signal keyword when I omit the * as shown in the help:
void bar(signal aSignal) {}
                ^
So what's the correct syntax for defining a function that takes a sysvar argument? (if possible)
In case the version matters, I'm currently testing with CANoe 9.0.53(SP1), 9.0.135(SP7) or 10.0.125(SP6).
You have to use the correct type. You have the following possibilities for declaring system variables in function parameters:
Integer: sysvarInt*
Float: sysvarFloat*
String: sysvarString*
Integer Array: sysvarIntArray*
Float Array: sysvarFloatArray*
Data: sysvarData*
Examples:
void PutSysVarIntArrayToByteArray(sysvarIntArray * from, byte to[], word length)
{
  word ii;
  for (ii = 0; ii < length; ii++)
  {
    to[ii] = (byte)#from[ii];
  }
}
You can also write to the system variable:
void PutByteToSysVarInt(byte from, sysvarInt * to) {
  #to = from;
}
See also CANoe Help page "Test Features » XML » Declaration and Transfer of CAPL Test Case and Test Function Parameters"
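For completeness, a hypothetical call site for the helpers above; the system variable sysvar::MyNamespace::MyIntArray and the buffer size are assumptions for illustration, not part of the original answer:
void CopyMyArrayOnce()
{
  byte buffer[8];
  // Pass the system variable itself (not its current value), matching
  // the sysvarIntArray * parameter declared above.
  PutSysVarIntArrayToByteArray(sysvar::MyNamespace::MyIntArray, buffer, 8);
}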
Yes, you can. Just be more specific about your sysvar type; plain sysvar is not enough.
System variables, with indication of type and *. Possible types: Data, Int, Float, String, IntArray, and FloatArray. Example declaration: sysvarFloat * sv
This may not be supported in older CANoe SP versions; to make sure, search for Function parameter in Help/Index, and you should get the full list of possible function parameters you can use in your current CANoe setup. It should start like this:
Integers (byte, word, dword, int, long, qword, int64). Example declaration: long l
Individual characters (char). Example declaration: char ch
Enums. Example declaration: enum Colors c
Associative fields. Example declaration: int m[float]. Associative fields are transferred by reference automatically.
.............
System variables, with indication of type and *. Possible types: Data, Int, Float, String, IntArray, and FloatArray. Example declaration: sysvarFloat * sv

what does typedef char *b mean?

I'm reading through some memory management code that overloads operator new. There's an expression like:
typedef char *b;
and later in the code b was used like this:
b(h); //h is a pointer to some class;
h defined here:
static Head* h= (Head*) HEAP_BASE_ADDRESS;
I'm assuming that when b is used, it is considered a pointer to char. But how can a pointer type appear in an expression like b(h)? Is there some sort of conversion going on here? Can I understand it as the result of b(h) having the same address as h?
The first code line you posted is a typedef, which creates b as an alias for char*. The second code line shows a functional-style type conversion from h to b.
Can I understand it as b now is having the same address as h?
b is just an alias for char*, so b(h) by itself does nothing unless you store the result of that expression, like:
b b_ptr = b(h); // equivalent to: char* b_ptr = ((char*)h);
The functional-style type conversion works only with single-word type names, so if you want to apply this conversion style to, e.g., a pointer type, you have to typedef it first. (This is the reason for the typedef char *b.) This style of conversion can also be used for expressions like int(3.14 + 6.67).
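To illustrate, a minimal self-contained sketch (Head and the pointer values are stand-ins for the memory-management code in the question):
#include <cstdio>

typedef char *b; // give the pointer type a single-word name

struct Head { int size; };

int main() {
    static Head h_obj;
    Head *h = &h_obj;

    // char*(h) would not compile: a functional-style cast needs a
    // single-word type name, so we go through the typedef instead.
    char *p = b(h); // same as (char*)h: p holds the same address as h

    std::printf("%p %p\n", static_cast<void *>(p), static_cast<void *>(h));
    return 0;
}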

module_param: display value in hex instead of decimal

Is it possible to display the value of a module_param in hex when it is read?
I have this code in my linux device driver:
module_param(num_in_hex, ulong, 0644);
$ cat /sys/module/my_module/parameters/num_in_hex
1234512345
I would like to see that value in hex instead of decimal. Or should I use a different mechanism, like debugfs, for this?
There is no ready-made parameter type (the second argument of the module_param macro) that outputs its value as hexadecimal, but it is not difficult to implement one.
Module parameters are driven by callback functions, which parse the parameter's value from a string and write the parameter's value to a string.
// Set hexadecimal parameter
int param_set_hex(const char *val, const struct kernel_param *kp)
{
    return kstrtoul(val, 16, (unsigned long *)kp->arg);
}

// Read hexadecimal parameter
int param_get_hex(char *buffer, const struct kernel_param *kp)
{
    return scnprintf(buffer, PAGE_SIZE, "%lx", *((unsigned long *)kp->arg));
}

// Combine operations together
const struct kernel_param_ops param_ops_hex = {
    .set = param_set_hex,
    .get = param_get_hex
};

/*
 * Macro to check the type of the variable passed to `module_param`.
 * Just reuse the existing macro for the `ulong` type.
 */
#define param_check_hex(name, p) param_check_ulong(name, p)

// Everything is ready to use `module_param` with the new type.
module_param(num_in_hex, hex, 0644);
Check include/linux/moduleparam.h for the implementation of the module_param macro, and kernel/params.c for the implementation of the operations for ready-made types (the STANDARD_PARAM_DEF macro).
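As a side note, a sketch of an alternative (reusing param_ops_hex from above; the initializer value is illustrative): if you'd rather not define the param_check_hex macro, the kernel also provides the module_param_cb macro, which takes the operations structure directly:
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned long num_in_hex = 0x1234;

/* param_ops_hex as defined above */

module_param_cb(num_in_hex, &param_ops_hex, &num_in_hex, 0644);
MODULE_PARM_DESC(num_in_hex, "value displayed in hex");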

gcc enum wrong value

I have an enum typedef, and when I assign a value that is not in the enum and print it, it shows me a different value, not the one I assigned. Why?
This is the example:
#include <stdio.h>
#include <stdint.h>

#define attribute_packed_type(x) __attribute__((packed, aligned(sizeof(x))))

typedef enum attribute_packed_type(uint16_t) UpdateType_t
{
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
    // UPDATE_TYPE_FORCE_UINT16 = 0xFFFF,
} UpdateType_t;

int main(void)
{
    UpdateType_t myValue;
    uint16_t bad = 1234;
    myValue = bad;
    printf("myValue=%d\n", myValue);
    return 1;
}
and the output of this example is:
myValue=210
If I enable the "UPDATE_TYPE_FORCE_UINT16" into the enum the output is:
myValue=1234
I don't understand why gcc does this. Is this a problem, a bug, or is it normal? If it's normal, why?
You've run into a case where gcc behaves oddly when you specify both packed and aligned attributes for an enumerated type. It's probably a bug. It's at least an undocumented feature.
A simplified version of what you have is:
typedef enum __attribute__((packed, aligned(2))) UpdateType_t {
    foo, bar
} UpdateType_t;
The values of the enumerated constants are all small enough to fit in a single byte, either signed or unsigned.
The behavior of the packed and aligned attributes on enum types is a bit confusing. The behavior of packed in particular is, as far as I can tell, not entirely documented.
My experiments with gcc 5.2.0 indicate that:
__attribute__((packed)) applied to an enumerated type causes it to be given the smallest size that can fit the values of all the constants. In this case, the size is 1 byte, so the range is either -128..+127 or 0..255. (This is not documented.)
__attribute__((aligned(N))) affects the size of the type. In particular, aligned(2) gives the enumerated type a size and alignment of 2 bytes.
The tricky part is this: if you specify both packed and aligned(2), then the aligned specification affects the size of the enumerated type, but not its range. Which means that even though an enum e is big enough to hold any value from 0 to 65535, any value exceeding 255 is truncated, leaving only the low-order 8 bits of the value.
Regardless of the aligned specification, the fact that you've used the packed attribute means that gcc will restrict the range of your enumerated type to the smallest range that can fit the values of all the constants. The aligned attribute can change the size, but it doesn't change the range.
In my opinion, this is a bug in gcc. (And clang, which is largely gcc-compatible, behaves differently.)
The bottom line is that by packing the enumeration type, you've told the compiler to narrow its range. One way to avoid that is to define an additional constant with a value of 0xFFFF, which you show in a comment.
In general, a C enum type is compatible with some integer type. The choice of which integer type to use is implementation-defined, as long as the chosen type can represent all the specified values.
According to the latest gcc manual:
Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values.
On some targets, -fshort-enums is the default; this is determined by the ABI.
Also quoting the gcc manual:
The packed attribute specifies that a variable or structure field should have the smallest possible alignment -- one byte for a variable, and one bit for a field, unless you specify a larger value with the aligned attribute.
Here's a test program, based on yours but showing some extra information:
#include <stdio.h>

int main(void) {
    enum __attribute__((packed, aligned(2))) e { foo, bar };
    enum e obj = 0x1234;
    printf("enum e is %s, size %zu, alignment %zu\n",
           (enum e)-1 < (enum e)0 ? "signed" : "unsigned",
           sizeof (enum e),
           _Alignof (enum e));
    printf("obj = 0x%x\n", (unsigned)obj);
    return 0;
}
This produces a compile-time warning:
c.c: In function 'main':
c.c:4:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
enum e obj = 0x1234;
^
and this output:
enum e is unsigned, size 2, alignment 2
obj = 0x34
The simplest change to your program would be to add the
UPDATE_TYPE_FORCE_UINT16 = 0xFFFF
that you've commented out, forcing the type to have a range of at least 0 to 65535. But there's a more portable alternative.
Standard C doesn't provide a way to specify the representation of an enum type. gcc does, but as we've seen it's not well defined, and can yield surprising results. But there is an alternative that doesn't require any non-portable code or assumptions beyond the existence of uint16_t:
enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
};
typedef uint16_t UpdateType_t;
The anonymous enum type serves only to define the constant values (which are of type int, not of the enumeration type). You can declare objects of type UpdateType_t and they'll have the same representation as uint16_t, which (I think) is what you really want.
Since C enumeration constants aren't closely tied to their type anyway (for example, UPDATE_A is of type int, not of the enumerated type), you might as well use the enum declaration just to define the values of the constants, and use whatever integer type you like to declare variables.
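Putting that together, a minimal complete sketch of this alternative (the printed values follow directly from uint16_t having the full 0..65535 range):
#include <stdint.h>
#include <stdio.h>

enum { UPDATE_A = 4, UPDATE_B = 5, UPDATE_C = 37, UPDATE_D = 43 };
typedef uint16_t UpdateType_t;

int main(void) {
    UpdateType_t myValue = 1234;     /* no truncation: full uint16_t range */
    printf("myValue=%d\n", myValue); /* prints myValue=1234 */
    printf("size=%zu\n", sizeof(UpdateType_t)); /* 2, as the packed/aligned version intended */
    return 0;
}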

Are enums by default unsigned? [duplicate]

Are C++ enums signed or unsigned? And, by extension, is it safe to validate an input by checking that it is <= your max value while leaving out the >= your min value check (assuming you started at 0 and incremented by 1)?
Let's go to the source. Here's what the C++03 standard (ISO/IEC 14882:2003) document says in 7.2-5 (Enumeration declarations):
The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int.
In short, your compiler gets to choose (obviously, if some of your enumeration values are negative, it'll be signed).
You shouldn't rely on any specific representation. The standard says that it is implementation-defined which integral type is used as the underlying type for an enum, except that it shall not be larger than int, unless some value cannot fit into int or unsigned int.
In short: you cannot rely on an enum being either signed or unsigned.
You shouldn't depend on them being signed or unsigned. If you want to make them explicitly signed or unsigned, you can use the following:
enum X : signed int { ... }; // signed enum
enum Y : unsigned int { ... }; // unsigned enum
You shouldn't rely on it being either signed or unsigned. According to the standard it is implementation-defined which integral type is used as the underlying type for an enum. In most implementations, though, it is a signed integer.
In C++0x strongly typed enumerations will be added which will allow you to specify the type of an enum such as:
enum X : signed int { ... }; // signed enum
enum Y : unsigned int { ... }; // unsigned enum
Even now, though, some simple validation can be achieved by using the enum as a variable or parameter type like this:
enum Fruit { Apple, Banana };
enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum
fruitVariable = 1;                 // Error, 1 is not a member of enum Fruit,
                                   // even though it has the same value as Banana.
Even though some old answers got 44 upvotes, I tend to disagree with all of them. In short, I don't think we should care about the underlying type of the enum.
First off, the C++03 enum type is a distinct type of its own, having no concept of sign. From the C++03 standard, dcl.enum:
7.2 Enumeration declarations
5 Each enumeration defines a type that is different from all other types....
So when we are talking about the sign of an enum type, say when comparing 2 enum operands using the < operator, we are actually talking about implicitly converting the enum type to some integral type. It is the sign of this integral type that matters. And when converting enum to integral type, this statement applies:
9 The value of an enumerator or an object of an enumeration type is converted to an integer by integral promotion (4.5).
And, apparently, the underlying type of the enum has nothing to do with integral promotion, since the standard defines integral promotion like this:
4.5 Integral promotions conv.prom
... An rvalue of an enumeration type (7.2) can be converted to an rvalue of the first of the following types that can represent all the values of the enumeration (i.e. the values in the range bmin to bmax as described in 7.2): int, unsigned int, long, or unsigned long.
So, whether an enum type becomes signed int or unsigned int depends on whether signed int can contain all the values of the defined enumerators, not the underlying type of the enum.
See my related question
Sign of C++ Enum Type Incorrect After Converting to Integral Type
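To see the promotion rule above in action, here is a small C++11 sketch (assuming a typical platform where int is 32 bits; unary + forces the integral promotion discussed above):
#include <iostream>
#include <type_traits>

enum Small { S0, S1 };        // all values fit in int -> promotes to int
enum Big { B = 0xFFFFFFFFu }; // 4294967295 does not fit in a 32-bit int

int main() {
    // Unary + applies the integral promotion to the enum operand.
    std::cout << std::is_same<decltype(+S0), int>::value << '\n';         // prints 1
    std::cout << std::is_same<decltype(+B), unsigned int>::value << '\n'; // prints 1 when int is 32 bits
    return 0;
}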
In the future, with C++0x, strongly typed enumerations will be available and have several advantages (such as type-safety, explicit underlying types, or explicit scoping). With that you could be better assured of the sign of the type.
The compiler can decide whether or not enums are signed or unsigned.
Another method of validating enums is to use the enum itself as a variable type. For example:
enum Fruit
{
    Apple = 0,
    Banana,
    Pineapple,
    Orange,
    Kumquat
};
enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum
fruitVariable = 1; // Error, 1 is not a member of enum Fruit even though it has the same value as banana.
In addition to what others have already said about signed/unsigned, here's what the standard says about the range of an enumerated type:
7.2(6): "For an enumeration where e(min) is the smallest enumerator and e(max) is the largest, the values of the enumeration are the values of the underlying type in the range b(min) to b(max), where b(min) and b(max) are, respectively, the smallest and largest values of the smallest bitfield that can store e(min) and e(max). It is possible to define an enumeration that has values not defined by any of its enumerators."
So for example:
enum { A = 1, B = 4};
defines an enumerated type where e(min) is 1 and e(max) is 4. If the underlying type is signed int, then the smallest required bitfield has 4 bits, and if ints in your implementation are two's complement then the valid range of the enum is -8 to 7. If the underlying type is unsigned, then it has 3 bits and the range is 0 to 7. Check your compiler documentation if you care (for example if you want to cast integral values other than enumerators to the enumerated type, then you need to know whether the value is in the range of the enumeration or not - if not the resulting enum value is unspecified).
Whether those values are valid input to your function may be a different issue from whether they are valid values of the enumerated type. Your checking code is probably worried about the former rather than the latter, and so in this example should at least be checking >=A and <=B.
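As a concrete sketch of that caveat (assuming the implementation picks an unsigned underlying type here, so the range b(min) to b(max) is 0 to 7):
#include <iostream>

enum E { A = 1, B = 4 }; // smallest bitfield holding 1..4 is 3 bits if unsigned

int main() {
    E ok = static_cast<E>(7);  // 7 is within 0..7: a valid value of E,
                               // even though no enumerator equals 7
    E odd = static_cast<E>(9); // 9 is outside 0..7: the resulting enum
                               // value is unspecified, per the quote above
    std::cout << ok << ' ' << odd << '\n';
    return 0;
}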
Check it with std::is_signed and std::underlying_type; scoped enums default to int
https://en.cppreference.com/w/cpp/language/enum implies:
main.cpp
#include <cassert>
#include <iostream>
#include <type_traits>

enum Unscoped {};
enum class ScopedDefault {};
enum class ScopedExplicit : long {};

int main() {
    // Implementation defined, let's find out.
    std::cout << std::is_signed<std::underlying_type<Unscoped>::type>() << std::endl;

    // Guaranteed. Scoped defaults to int.
    assert((std::is_same<std::underlying_type<ScopedDefault>::type, int>()));

    // Guaranteed. We set it ourselves.
    assert((std::is_same<std::underlying_type<ScopedExplicit>::type, long>()));
}
GitHub upstream.
Compile and run:
g++ -std=c++17 -Wall -Wextra -pedantic-errors -o main main.cpp
./main
Output:
0
Tested on Ubuntu 16.04, GCC 6.4.0.
While some of the above answers are arguably proper, they did not answer my practical question. The compiler (gcc 9.3.0) emitted warnings for:
enum FOO_STATUS {
    STATUS_ERROR = (1 << 31)
};
The warning was issued on use:
unsigned status = foo_status_get();
if (STATUS_ERROR == status) {
(Aside from the fact this code is incorrect ... do not ask.)
When asked properly, the compiler does not complain:
enum FOO_STATUS {
    STATUS_ERROR = (1U << 31)
};
Note that 1U makes the expression unsigned.
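A self-contained sketch of the corrected version (foo_status_get() is replaced by a stand-in value, since it is not shown in the question):
#include <stdio.h>

/* With 1U the enumerator value is the unsigned 0x80000000, so comparing it
 * against an unsigned variable no longer mixes signed and unsigned. */
enum FOO_STATUS { STATUS_ERROR = (1U << 31) };

int main(void) {
    unsigned status = 0x80000000u; /* stand-in for foo_status_get() */
    printf("%d\n", STATUS_ERROR == status); /* prints 1, no sign-compare warning */
    return 0;
}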
