WMI: VARIANT type for uint32? - winapi

The definition of a WMI/CIM method takes an input parameter of type "uint32".
I used InitVariantFromUInt32() to set up the VARIANT that is passed, but IWbemClassObject::Put() fails with WBEM_E_TYPE_MISMATCH (0x80041005).
What is the VARIANT type for uint32 supposed to be?
TIA!!

The answer is at:
https://learn.microsoft.com/en-us/windows/win32/wmisdk/numbers
Data type | Automation type | Description
--------- | --------------- | -----------
sint8     | VT_I2           | Signed 8-bit integer.
sint16    | VT_I2           | Signed 16-bit integer.
sint32    | VT_I4           | Signed 32-bit integer.
sint64    | VT_BSTR         | Signed 64-bit integer in string form. This type follows hexadecimal or decimal format according to the American National Standards Institute (ANSI) C rules.
real32    | VT_R4           | 4-byte floating-point value that follows the Institute of Electrical and Electronics Engineers, Inc. (IEEE) standard.
real64    | VT_R8           | 8-byte floating-point value that follows the IEEE standard.
uint8     | VT_UI1          | Unsigned 8-bit integer.
uint16    | VT_I4           | Unsigned 16-bit integer.
uint32    | VT_I4           | Unsigned 32-bit integer.
uint64    | VT_BSTR         | Unsigned 64-bit integer in string form. This type follows hexadecimal or decimal format according to ANSI C rules.
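
Per the table, uint32 maps to VT_I4. InitVariantFromUInt32() initializes the VARIANT as VT_UI4, which is what triggers WBEM_E_TYPE_MISMATCH. A minimal sketch of setting the parameter by hand (the pInParams object and the parameter name MyUint32Param are placeholders, not from the original question):

#include <wbemidl.h>  // IWbemClassObject
#include <oleauto.h>  // VariantInit, VariantClear

// Hypothetical helper: store a uint32 method parameter as VT_I4.
HRESULT PutUint32Param(IWbemClassObject *pInParams, unsigned long value)
{
    VARIANT v;
    VariantInit(&v);
    v.vt = VT_I4;                       // WMI expects VT_I4 for uint32, not VT_UI4
    v.lVal = static_cast<LONG>(value);  // same bit pattern; WMI reads it as uint32
    HRESULT hr = pInParams->Put(L"MyUint32Param", 0, &v, 0);  // Type = 0 keeps the declared CIM type
    VariantClear(&v);
    return hr;
}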

Related

Ingenico POS NFC UID wrong encoding

Our team develops a POS solution for NFC cards on Ingenico devices.
What we use to read the card:
/* Open the MIFARE driver */
int ClessMifare_OpenDriver (void);
Return value: OK
/* Wait until a MIFARE contactless card is detected */
int ClessMifare_DetectCardsEx (unsigned char nKindOfCard, unsigned int *pNumOfCards, unsigned int nTimeout);
Return value: OK
/* Retrieve the type of the MIFARE card and its UID */
int ClessMifare_GetUid (unsigned char nCardIndex, unsigned char *pKindOfCard, unsigned char *pUidLength, unsigned char *pUid);
Return value:
Parameter 2: pKindOfCard (type of card)
Card1: CL_B_UNDEFINED
Card2: CL_B_UNDEFINED
Card3: CL_B_UNDEFINED
Card4: CL_MF_CLASSIC
Parameter 4: pUid (UID of the card)
Card1: "\004Br\302\3278\200"
Card2: "\004\333\354y\342\002\200"
Card3: "\004s\247B\344?\201"
Card4: "\016\310d\301"
But in real life we expect:
Card1 044272c2d73880
Card2 0ec864c1
Card3 0473a742e43f81
Card4 04dbec79e20280
From Android NFC readers we get the expected numbers, but the output from the Ingenico POS looks quite different. What do we need to do to get these numbers in hex?
Thanks!
You are actually seeing the right UIDs here; there is just a representation issue you are not expecting. The return values you are quoting are C strings with octal escaping for non-printable characters: \nnn is the octal representation of a byte.
In the value "\004s\247B\344?\201", you have \004 (a byte of value 0x04), followed by the printable character s (value 0x73), followed by \247 (value 0xa7), and so on.
You can convert them to hex for debugging with Python, for example:
$ python2
>>> import binascii
>>> binascii.b2a_hex("\004Br\302\3278\200")
'044272c2d73880'
>>> binascii.b2a_hex("\004\333\354y\342\002\200")
'04dbec79e20280'
>>> binascii.b2a_hex("\004s\247B\344?\201")
'0473a742e43f81'
>>> binascii.b2a_hex("\016\310d\301")
'0ec864c1'
But overall, the data is all there.
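
If you need the hex form in the terminal application itself rather than in Python, a minimal sketch (compiles as C or C++; uid and len correspond to the pUid and pUidLength outputs of ClessMifare_GetUid):

#include <stdio.h>

/* Print a raw UID buffer as lowercase hex, e.g. 044272c2d73880. */
void print_uid_hex(const unsigned char *uid, unsigned char len)
{
    unsigned char i;
    for (i = 0; i < len; ++i)
        printf("%02x", uid[i]);
    printf("\n");
}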

Understanding the order of conversions, arithmetic conversions, and integer promotions for non-overloaded bitwise operators

I want to understand exactly what happens when the compiler encounters a non-overloaded operator, and which conversions are performed. As an example, let's take the bitwise operators, say &. The standard says:
[expr.bit.and] The usual arithmetic conversions are performed; the result is the bitwise AND function of the operands. The operator applies only to integral or unscoped enumeration operands.
If I then look up the usual arithmetic conversions, I find:
[expr] Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
- If either operand is of scoped enumeration type (7.2), no conversions are performed; if the other operand does not have the same type, the expression is ill-formed.
- If either operand is of type long double, the other shall be converted to long double.
- Otherwise, if either operand is double, the other shall be converted to double.
- Otherwise, if either operand is float, the other shall be converted to float.
- Otherwise, the integral promotions shall be performed on both operands. Then the following rules shall be applied to the promoted operands:
  - If both operands have the same type, no further conversion is needed.
  - Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
  - Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
  - Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type shall be converted to the type of the operand with signed integer type.
  - Otherwise, both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
Now if we look at integral promotion:
[conv.prom]:
A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion rank is less than the rank of int can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.
A prvalue of type char16_t, char32_t, or wchar_t (3.9.1) can be converted to a prvalue of the first of the following types that can represent all the values of its underlying type: int, unsigned int, long int, unsigned long int, long long int, or unsigned long long int. If none of the types in that list can represent all the values of its underlying type, a prvalue of type char16_t, char32_t, or wchar_t can be converted to a prvalue of its underlying type.
A prvalue of an unscoped enumeration type whose underlying type is not fixed can be converted to a prvalue of the first of the following types that can represent all the values of the enumeration: int, unsigned int, long int, unsigned long int, long long int, or unsigned long long int. If none of the types in that list can represent all the values of the enumeration, a prvalue of an unscoped enumeration type can be converted to a prvalue of the extended integer type with lowest integer conversion rank greater than the rank of long long in which all the values of the enumeration can be represented. If there are two such extended types, the signed one is chosen.
A prvalue of an unscoped enumeration type whose underlying type is fixed can be converted to a prvalue of its underlying type. Moreover, if integral promotion can be applied to its underlying type, a prvalue of an unscoped enumeration type whose underlying type is fixed can also be converted to a prvalue of the promoted underlying type.
A prvalue for an integral bit-field can be converted to a prvalue of type int if int can represent all the values of the bit-field; otherwise, it can be converted to unsigned int if unsigned int can represent all the values of the bit-field. If the bit-field is larger yet, no integral promotion applies to it. If the bit-field has an enumerated type, it is treated as any other value of that type for promotion purposes.
A prvalue of type bool can be converted to a prvalue of type int, with false becoming zero and true becoming one.
These conversions are called integral promotions.
But if we do:
#include <type_traits>

std::integral_constant<int, 2> x;
std::integral_constant<int, 3> y;
int z = x & y;
It works, although I don't see where this is specified in the standard. I would like to know exactly which conversion checks are done, and in what order. I think that first the compiler checks whether operator& has an overload taking exactly these types; I don't know what other tests it does, and probably only after that does it apply the usual arithmetic conversions and then the integral promotions.
So which conversion tests and steps does the compiler perform, and in what order, when it encounters T1 & T2? (Extracts from the standard are welcome.)
When the compiler sees this:
int z = x & y;
It will see that there is no specific operator & for std::integral_constant<>. It will see, however, that there is a non-explicit operator value_type() for x and y. Since value_type is int, this gives a direct match for the built-in operator & on int.
No arithmetic conversion or integral promotion is required or performed.
[conv] (2.1) says:
When used as operands of operators. The operator’s requirements for its operands dictate the destination type.
[over.match] says:
Each of these contexts defines the set of candidate functions and the list of arguments in its own unique way. But, once the candidate functions and argument lists have been identified, the selection of the best function is the same in all cases:
(2.8) First, a subset of the candidate functions (those that have the proper number of arguments and meet certain other conditions) is selected to form a set of viable functions (13.3.2).
(2.9) Then the best viable function is selected based on the implicit conversion sequences (13.3.3.1) needed to match each argument to the corresponding parameter of each viable function.
[class.conv] says:
Type conversions of class objects can be specified by constructors and by conversion functions. These conversions are called user-defined conversions and are used for implicit type conversions (Clause 4), for initialization (8.5), and for explicit type conversions (5.4, 5.2.9).
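
To make the mechanism concrete, here is a minimal sketch (not from the original question) of a class with a non-explicit conversion operator to int, mirroring std::integral_constant's operator value_type(): overload resolution finds no operator& taking the class types, so both operands undergo a user-defined conversion to int and the built-in operator& is selected.

#include <iostream>

struct Two {
    constexpr operator int() const { return 2; }  // non-explicit conversion to int
};
struct Three {
    constexpr operator int() const { return 3; }
};

int main() {
    // No operator& exists for Two/Three; both convert to int,
    // then the built-in operator& for int applies.
    int z = Two{} & Three{};
    std::cout << z << '\n';  // prints 2 (0b10 & 0b11 == 0b10)
    return 0;
}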

gcc enum wrong value

I have an enum typedef, and when I assign a wrong value (not in the enum) and print it, it shows me a different value, not the bad value. Why?
This is the example:
#include <stdio.h>
#include <stdint.h>

#define attribute_packed_type( x ) __attribute__( ( packed, aligned( sizeof( x ) ) ) )

typedef enum attribute_packed_type( uint16_t ) UpdateType_t
{
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
    // UPDATE_TYPE_FORCE_UINT16 = 0xFFFF,
} UpdateType_t;

int main( void )
{
    UpdateType_t myValue;
    uint16_t bad = 1234;
    myValue = bad;
    printf( "myValue=%d\n", myValue );
    return 1;
}
and the output of this example is:
myValue=210
If I enable UPDATE_TYPE_FORCE_UINT16 in the enum, the output is:
myValue=1234
I don't understand why gcc does this. Is this a problem, a bug, or is it normal? If it is normal, why?
You've run into a case where gcc behaves oddly when you specify both packed and aligned attributes for an enumerated type. It's probably a bug. It's at least an undocumented feature.
A simplified version of what you have is:
typedef enum __attribute__((packed, aligned(2))) UpdateType_t {
    foo, bar
} UpdateType_t;
The values of the enumerated constants are all small enough to fit in a single byte, either signed or unsigned.
The behavior of the packed and aligned attributes on enum types is a bit confusing. The behavior of packed in particular is, as far as I can tell, not entirely documented.
My experiments with gcc 5.2.0 indicate that:
__attribute__((packed)) applied to an enumerated type causes it to be given the smallest size that can fit the values of all the constants. In this case, the size is 1 byte, so the range is either -128..+127 or 0..255. (This is not documented.)
__attribute__((aligned(N))) affects the size of the type. In particular, aligned(2) gives the enumerated type a size and alignment of 2 bytes.
The tricky part is this: if you specify both packed and aligned(2), then the aligned specification affects the size of the enumerated type, but not its range. This means that even though enum e is big enough to hold any value from 0 to 65535, any value exceeding 255 is truncated, leaving only the low-order 8 bits of the value.
Regardless of the aligned specification, the fact that you've used the packed attribute means that gcc will restrict the range of your enumerated type to the smallest range that can fit the values of all the constants. The aligned attribute can change the size, but it doesn't change the range.
In my opinion, this is a bug in gcc. (And clang, which is largely gcc-compatible, behaves differently.)
The bottom line is that by packing the enumeration type, you've told the compiler to narrow its range. One way to avoid that is to define an additional constant with a value of 0xFFFF, which you show in a comment.
In general, a C enum type is compatible with some integer type. The choice of which integer type to use is implementation-defined, as long as the chosen type can represent all the specified values.
According to the latest gcc manual:
Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values.
On some targets, -fshort-enums is the default; this is determined by the ABI.
Also quoting the gcc manual:
The packed attribute specifies that a variable or structure field should have the smallest possible alignment -- one byte for a variable, and one bit for a field, unless you specify a larger value with the aligned attribute.
Here's a test program, based on yours but showing some extra information:
#include <stdio.h>
int main(void) {
    enum __attribute__((packed, aligned(2))) e { foo, bar };
    enum e obj = 0x1234;
    printf("enum e is %s, size %zu, alignment %zu\n",
           (enum e)-1 < (enum e)0 ? "signed" : "unsigned",
           sizeof (enum e),
           _Alignof (enum e));
    printf("obj = 0x%x\n", (unsigned)obj);
    return 0;
}
This produces a compile-time warning:
c.c: In function 'main':
c.c:4:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
enum e obj = 0x1234;
^
and this output:
enum e is unsigned, size 2, alignment 2
obj = 0x34
The simplest change to your program would be to add the
UPDATE_TYPE_FORCE_UINT16 = 0xFFFF
that you've commented out, forcing the type to have a range of at least 0 to 65535. But there's a more portable alternative.
Standard C doesn't provide a way to specify the representation of an enum type. gcc does, but as we've seen it's not well defined, and can yield surprising results. But there is an alternative that doesn't require any non-portable code or assumptions beyond the existence of uint16_t:
enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
};
typedef uint16_t UpdateType_t;
The anonymous enum type serves only to define the constant values (which are of type int, not of the enumeration type). You can declare objects of type UpdateType_t and they'll have the same representation as uint16_t, which (I think) is what you really want.
Since C enumeration constants aren't closely tied to their type anyway (for example UPDATE_A is of type int, not of the enumerated type), you might as well use the enum declaration just to define the values of the constants, and use whatever integer type you like to declare variables.
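
For what it's worth, a minimal sketch of that alternative in use (same test as the question's program):

#include <stdio.h>
#include <stdint.h>

enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43
};
typedef uint16_t UpdateType_t;

int main(void)
{
    UpdateType_t myValue = 1234;               /* full uint16_t range is available */
    printf("myValue=%u\n", (unsigned)myValue); /* prints myValue=1234 */
    return 0;
}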

Are enums by default unsigned? [duplicate]

Are C++ enums signed or unsigned? And, by extension, is it safe to validate an input by checking that it is <= your max value, leaving out the >= min-value check (assuming you started at 0 and incremented by 1)?
Let's go to the source. Here's what the C++03 standard (ISO/IEC 14882:2003) document says in 7.2-5 (Enumeration declarations):
The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int.
In short, your compiler gets to choose (obviously, if some of your enumeration values are negative, it'll be signed).
You shouldn't rely on any specific representation. The standard says that it is implementation-defined which integral type is used as the underlying type for an enum, except that it shall not be larger than int, unless some value cannot fit into int or unsigned int.
In short: you cannot rely on an enum being either signed or unsigned.
You shouldn't depend on them being signed or unsigned. If you want to make them explicitly signed or unsigned, you can use the following:
enum X : signed int { ... }; // signed enum
enum Y : unsigned int { ... }; // unsigned enum
You shouldn't rely on it being either signed or unsigned. According to the standard it is implementation-defined which integral type is used as the underlying type for an enum. In most implementations, though, it is a signed integer.
In C++0x strongly typed enumerations will be added which will allow you to specify the type of an enum such as:
enum X : signed int { ... }; // signed enum
enum Y : unsigned int { ... }; // unsigned enum
Even now, though, some simple validation can be achieved by using the enum as a variable or parameter type like this:
enum Fruit { Apple, Banana };
enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum
fruitVariable = 1;                 // Error: 1 is not a member of enum Fruit,
                                   // even though it has the same value as Banana.
Even though some old answers got 44 upvotes, I tend to disagree with all of them. In short, I don't think we should care about the underlying type of the enum.
First off, a C++03 enum type is a distinct type of its own, with no concept of sign. From the C++03 standard, dcl.enum:
7.2 Enumeration declarations
5 Each enumeration defines a type that is different from all other types....
So when we talk about the sign of an enum type, say when comparing two enum operands using the < operator, we are actually talking about implicitly converting the enum type to some integral type. It is the sign of this integral type that matters. And when converting an enum to an integral type, this statement applies:
9 The value of an enumerator or an object of an enumeration type is converted to an integer by integral promotion (4.5).
And, apparently, the underlying type of the enum has nothing to do with integral promotion, since the standard defines integral promotion like this:
4.5 Integral promotions [conv.prom]
An rvalue of an enumeration type (7.2) can be converted to an rvalue of the first of the following types that can represent all the values of the enumeration (i.e. the values in the range bmin to bmax as described in 7.2): int, unsigned int, long, or unsigned long.
So, whether an enum type becomes signed int or unsigned int depends on whether signed int can contain all the values of the defined enumerators, not on the underlying type of the enum.
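
This is easy to observe directly. A minimal sketch (not from the original answer; unary + applies the integral promotion, and the printed results assume a typical platform where int is 32 bits):

#include <iostream>
#include <type_traits>

enum Small { SmallMax = 1 };        // all values fit in int -> promotes to int
enum Big { BigMax = 0x80000000u };  // 2^31 does not fit in a 32-bit int -> promotes to unsigned int

int main() {
    // Unary + applies the integral promotion to the enum operand.
    std::cout << std::is_signed<decltype(+SmallMax)>::value << '\n';  // prints 1
    std::cout << std::is_signed<decltype(+BigMax)>::value << '\n';    // prints 0
    return 0;
}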
See my related question
Sign of C++ Enum Type Incorrect After Converting to Integral Type
In the future, with C++0x, strongly typed enumerations will be available and have several advantages (such as type-safety, explicit underlying types, or explicit scoping). With that you could be better assured of the sign of the type.
The compiler can decide whether or not enums are signed or unsigned.
Another method of validating enums is to use the enum itself as a variable type. For example:
enum Fruit
{
    Apple = 0,
    Banana,
    Pineapple,
    Orange,
    Kumquat
};

enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum
fruitVariable = 1;                 // Error: 1 is not a member of enum Fruit,
                                   // even though it has the same value as Banana.
In addition to what others have already said about signed/unsigned, here's what the standard says about the range of an enumerated type:
7.2(6): "For an enumeration where e(min) is the smallest enumerator and e(max) is the largest, the values of the enumeration are the values of the underlying type in the range b(min) to b(max), where b(min) and b(max) are, respectively, the smallest and largest values of the smallest bitfield that can store e(min) and e(max). It is possible to define an enumeration that has values not defined by any of its enumerators."
So for example:
enum { A = 1, B = 4};
defines an enumerated type where e(min) is 1 and e(max) is 4. If the underlying type is signed int, then the smallest required bitfield has 4 bits, and if ints in your implementation are two's complement then the valid range of the enum is -8 to 7. If the underlying type is unsigned, then it has 3 bits and the range is 0 to 7. Check your compiler documentation if you care (for example if you want to cast integral values other than enumerators to the enumerated type, then you need to know whether the value is in the range of the enumeration or not - if not the resulting enum value is unspecified).
Whether those values are valid input to your function may be a different issue from whether they are valid values of the enumerated type. Your checking code is probably worried about the former rather than the latter, and so in this example should at least be checking >=A and <=B.
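
As a sketch of that kind of check (a hypothetical helper, reusing the example enum above):

enum { A = 1, B = 4 };

// Validate raw input against the enumerator range A..B, not against the
// wider range of values the enumeration type can technically hold.
bool is_valid_input(int v)
{
    return v >= A && v <= B;
}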
Check it with std::is_signed<std::underlying_type<T>::type>; scoped enums default to int
https://en.cppreference.com/w/cpp/language/enum implies:
main.cpp
#include <cassert>
#include <iostream>
#include <type_traits>

enum Unscoped {};
enum class ScopedDefault {};
enum class ScopedExplicit : long {};

int main() {
    // Implementation defined, let's find out.
    std::cout << std::is_signed<std::underlying_type<Unscoped>::type>() << std::endl;

    // Guaranteed. Scoped defaults to int.
    assert((std::is_same<std::underlying_type<ScopedDefault>::type, int>()));

    // Guaranteed. We set it ourselves.
    assert((std::is_same<std::underlying_type<ScopedExplicit>::type, long>()));
}
GitHub upstream.
Compile and run:
g++ -std=c++17 -Wall -Wextra -pedantic-errors -o main main.cpp
./main
Output:
0
Tested on Ubuntu 16.04, GCC 6.4.0.
While some of the above answers are arguably proper, they did not answer my practical question. The compiler (gcc 9.3.0) emitted warnings for:
enum FOO_STATUS {
    STATUS_ERROR = (1 << 31)
};
The warning was issued on use:
unsigned status = foo_status_get();
if (STATUS_ERROR == status) {
(Aside from the fact this code is incorrect ... do not ask.)
When asked properly, the compiler does not emit an error.
enum FOO_STATUS {
    STATUS_ERROR = (1U << 31)
};
Note that 1U makes the expression unsigned.

GCC, weird integer promotion scheme

I'm working with GCC v4.4.5 and I've noticed a default integer promotion scheme I didn't expect.
To enable enough warnings to catch implicit bugs, I activated the option -Wconversion, and since then I've noticed that when I compile the code below, the warning "conversion to ‘short int’ from ‘int’ may alter its value" appears.
signed short sA, sB=1, sC=2;
sA = sB + sC;
This means that "sB + sC" is promoted to int and then assigned to sA, which is a signed short.
To fix this warning I have to cast it like this:
signed short sA, sB=1, sC=2;
sA = ( signed short )( sB + sC );
This warning is also present with the code below.
signed short sA=2;
sA += 5;
It can be fixed by replacing the += operator like this:
sA = ( signed short )( sA + 5 );
which is a bit annoying because I can't use the operators +=, -=.
I expected GCC to select the right integer promotion according to the operands: sA = sB + sC and sA += 5 should not be promoted to int, since all the operands are signed short.
I understand that promoting to int by default prevents overflow bugs, but it's a bit annoying because I have to cast most of my code or change my variables to int.
Is there a GCC option I could use to prevent this integer promotion scheme?
Thanks for your help.
This isn't gcc, this is standard C semantics.
Per 6.3.1.1:2, an object or expression with an integer type whose integer conversion rank is less than or equal to the rank of int and unsigned int is converted to int or unsigned int, depending on the signedness of the type, prior to participating in arithmetic expressions.
The reason C behaves this way is to allow for platforms where ALU operations on sub-int types are less efficient than those on full int types. You should perform all your arithmetic on int values, and convert back to short only for storage.
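
In practice, that means a sketch like the following (compiles as C or C++): do the arithmetic in int and make the narrowing back to short explicit, which is exactly what -Wconversion wants to see:

#include <stdio.h>

int main(void)
{
    short sA, sB = 1, sC = 2;

    sA = (short)(sB + sC);  /* operands promoted to int; narrowing made explicit */
    sA = (short)(sA + 5);   /* replaces sA += 5 without triggering -Wconversion */

    printf("%d\n", sA);     /* prints 8 */
    return 0;
}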
