I am trying to convert the following code from MSVC to GCC:
#define ltolower(ch) CharLower((LPSTR)(UCHAR)(ch))
char * aStr;
* aStr = (char)ltolower(*aStr);
This code is giving a compiler error: cast from ‘CHAR*’ to ‘char’ loses precision
My understanding is that tolower(int) from C wouldn't convert the whole string.
Thanks.
Your cast of CharLower's return value is raising that error. Before doing that, you need to set the high-order word of the pointer passed to CharLower to zero.
From MSDN reference on the function:
If the operand is a character string,
the function returns a pointer to the
converted string. Because the string
is converted in place, the return
value is equal to lpsz.
If the operand is a single character,
the return value is a 32-bit value
whose high-order word is zero, and
low-order word contains the converted
character.
Something like this might work:
#define ltolower(ch) CharLower(0x00ff & ch)
If you are using a C++ compiler, you might also need a cast:
#define ltolower(ch) CharLower((LPTSTR)(0x00ff & ch))
Haven't tested it though...
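For what it's worth, here is an untested sketch of a single-character wrapper that extracts the low-order word of the result instead of casting the returned pointer straight to char (the helper name and the ULONG_PTR round trip are my additions):

#include <windows.h>

/* Convert one character with CharLowerA: the argument is passed as a "pointer"
   whose high-order bits are zero, and the converted character comes back in
   the low-order word of the returned value. */
static char ltolower_char(char ch)
{
    ULONG_PTR result = (ULONG_PTR)CharLowerA((LPSTR)(ULONG_PTR)(UCHAR)ch);
    return (char)(UCHAR)result;
}

/* usage: *aStr = ltolower_char(*aStr); */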
My understanding is that nullptr cannot be converted implicitly to other types. But later I "found" that it could be converted to bool.
The issue is, I can see it being converted to bool on GCC 4.x, but it complains on GCC 5.x and later.
#include <iostream>
bool f(bool a){
    return !a;
}
// Type your code here, or load an example.
int main() {
    return f(nullptr);
}
On 5.x and later I get
<source>: In function 'int main()':
<source>:7:21: error: converting to 'bool' from 'std::nullptr_t' requires direct-initialization [-fpermissive]
return f(nullptr);
^
<source>:2:6: note: initializing argument 1 of 'bool f(bool)'
bool f(bool a){
^
Compiler returned: 1
I couldn't find anything in the release notes of GCC 5.x that would explain this.
It can be observed here:
https://godbolt.org/g/1Uc2nM
Can someone explain why there is a difference between versions and what rule is applied here?
The rule can be found in C++17 [conv.bool]/1:
For direct-initialization, a prvalue of type std::nullptr_t can
be converted to a prvalue of type bool; the resulting value is false.
Initialization of function parameters is copy-initialization, not direct-initialization. If you are not familiar with this topic: initialization contexts in C++ can be divided into these two classes, and there are some operations that can only occur in direct-initialization.
The restriction to direct-initialization was added in C++14, which could explain the difference between g++ versions.
I assume the purpose of this rule is to raise an error for the exact code you've written: a bool is expected and a null pointer constant was provided; testing a null pointer constant for boolean-ness is not very meaningful since it only has one state anyway.
Remember that nullptr is not itself a pointer; it's a thing of type std::nullptr_t that converts to a null pointer only when the context calls for a pointer. The whole reason for adding it was to fix the hack of 0 being used as a null pointer constant, which could inadvertently match some other template or overload.
The code could be:
return f(static_cast<bool>(nullptr));
or perhaps you could add an overload of f that accepts std::nullptr_t.
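For completeness, a sketch of that overload approach (the std::nullptr_t overload and the main body below are my additions, not part of the original code):

#include <cstddef>
#include <iostream>

bool f(bool a) {
    return !a;
}

// Hypothetical overload: chosen for a literal nullptr argument, since
// std::nullptr_t is an exact match while bool would require a conversion.
bool f(std::nullptr_t) {
    // A null pointer constant has only one state, so the result is fixed.
    return f(false);
}

int main() {
    bool a = f(static_cast<bool>(nullptr));  // direct-initialization, allowed
    bool b = f(nullptr);                     // picks the nullptr_t overload
    std::cout << a << ' ' << b << '\n';      // prints "1 1"
    return 0;
}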
I have an enum typedef, and when I assign a wrong value (one not in the enum) and print it, it shows me a different value, not the bad value. Why?
This is the example:
#include <stdio.h>
#include <stdint.h>

#define attribute_packed_type( x ) __attribute__( ( packed, aligned( sizeof( x ) ) ) )

typedef enum attribute_packed_type( uint16_t ) UpdateType_t
{
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
    // UPDATE_TYPE_FORCE_UINT16 = 0xFFFF,
} UpdateType_t;

int main( void )
{
    UpdateType_t myValue;
    uint16_t bad = 1234;

    myValue = bad;
    printf( "myValue=%d\n", myValue );
    return 1;
}
and the output of this example is:
myValue=210
If I enable "UPDATE_TYPE_FORCE_UINT16" in the enum, the output is:
myValue=1234
I don't understand why gcc does this. Is it a problem, a bug, or is it normal? If it's normal, why?
You've run into a case where gcc behaves oddly when you specify both packed and aligned attributes for an enumerated type. It's probably a bug. It's at least an undocumented feature.
A simplified version of what you have is:
typedef enum __attribute__((packed, aligned(2))) UpdateType_t {
    foo, bar
} UpdateType_t;
The values of the enumerated constants are all small enough to fit in a single byte, either signed or unsigned.
The behavior of the packed and aligned attributes on enum types is a bit confusing. The behavior of packed in particular is, as far as I can tell, not entirely documented.
My experiments with gcc 5.2.0 indicate that:
__attribute__((packed)) applied to an enumerated type causes it to be given the smallest size that can fit the values of all the constants. In this case, the size is 1 byte, so the range is either -128..+127 or 0..255. (This is not documented.)
__attribute__((aligned(N))) affects the size of the type. In particular, aligned(2) gives the enumerated type a size and alignment of 2 bytes.
The tricky part is this: if you specify both packed and aligned(2), then the aligned specification affects the size of the enumerated type, but not its range. Which means that even though an object of the enumerated type is big enough to hold any value from 0 to 65535, any value exceeding 255 is truncated, leaving only the low-order 8 bits of the value.
Regardless of the aligned specification, the fact that you've used the packed attribute means that gcc will restrict the range of your enumerated type to the smallest range that can fit the values of all the constants. The aligned attribute can change the size, but it doesn't change the range.
In my opinion, this is a bug in gcc. (And clang, which is largely gcc-compatible, behaves differently.)
The bottom line is that by packing the enumeration type, you've told the compiler to narrow its range. One way to avoid that is to define an additional constant with a value of 0xFFFF, which you show in a comment.
In general, a C enum type is compatible with some integer type. The choice of which integer type to use is implementation-defined, as long as the chosen type can represent all the specified values.
According to the latest gcc manual:
Normally, the type is unsigned int if there are no negative
values in the enumeration, otherwise int. If -fshort-enums is
specified, then if there are negative values it is the first of
signed char, short and int that can represent all the
values, otherwise it is the first of unsigned char, unsigned short
and unsigned int that can represent all the values.
On some targets, -fshort-enums is the default; this is
determined by the ABI.
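A quick way to see which choice is made on a given target is to print the size of a small enum; the sizes in the comment are what I would expect on x86-64 Linux with gcc, not a guarantee:

#include <stdio.h>

enum small { SMALL_A = 4, SMALL_B = 43 };  /* all values fit in one byte */

int main(void) {
    /* Typically 4 without -fshort-enums, 1 with -fshort-enums. */
    printf("sizeof(enum small) = %zu\n", sizeof(enum small));
    return 0;
}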
Also quoting the gcc manual:
The packed attribute specifies that a variable or structure field
should have the smallest possible alignment -- one byte for a
variable, and one bit for a field, unless you specify a larger
value with the aligned attribute.
Here's a test program, based on yours but showing some extra information:
#include <stdio.h>
int main(void) {
    enum __attribute((packed, aligned(2))) e { foo, bar };
    enum e obj = 0x1234;
    printf("enum e is %s, size %zu, alignment %zu\n",
           (enum e)-1 < (enum e)0 ? "signed" : "unsigned",
           sizeof (enum e),
           _Alignof (enum e));
    printf("obj = 0x%x\n", (unsigned)obj);
    return 0;
}
This produces a compile-time warning:
c.c: In function 'main':
c.c:4:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
enum e obj = 0x1234;
^
and this output:
enum e is unsigned, size 2, alignment 2
obj = 0x34
The simplest change to your program would be to add the
UPDATE_TYPE_FORCE_UINT16 = 0xFFFF
that you've commented out, forcing the type to have a range of at least 0 to 65535. But there's a more portable alternative.
Standard C doesn't provide a way to specify the representation of an enum type. gcc does, but as we've seen it's not well defined, and can yield surprising results. But there is an alternative that doesn't require any non-portable code or assumptions beyond the existence of uint16_t:
enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
};
typedef uint16_t UpdateType_t;
The anonymous enum type serves only to define the constant values (which are of type int, not of the enumeration type). You can declare objects of type UpdateType_t and they'll have the same representation as uint16_t, which (I think) is what you really want.
Since C enumeration constants aren't closely tied to their type anyway (for example UPDATE_A is of type int, not of the enumerated type), you might as well use the enum declaration just to define the values of the constants, and use whatever integer type you like to declare variables.
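To show the alternative in action, here is the original example rewritten this way (the includes and main wrapper are mine); it prints 1234 rather than a truncated value:

#include <stdio.h>
#include <stdint.h>

/* The anonymous enum only supplies named constants; variables use uint16_t. */
enum {
    UPDATE_A = 4,
    UPDATE_B = 5,
    UPDATE_C = 37,
    UPDATE_D = 43,
};
typedef uint16_t UpdateType_t;

int main(void)
{
    UpdateType_t myValue;
    uint16_t bad = 1234;

    myValue = bad;
    printf("myValue=%d\n", myValue);  /* prints myValue=1234 */
    return 0;
}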
I recently came across the following code that uses syntax I have never seen before:
std::cout << char('A'+i);
The behavior of the code is obvious enough: it is simply printing a character to stdout whose value is given by the position of 'A' in the ASCII table plus the value of the counter i, which is of type unsigned int.
For example, if i = 5, the line above would print the character 'F'.
I have never seen char used as a function before. My questions are:
Is this functionality specific to C++ or did it already exist in strict C?
Is there a technical name for using the char() keyword as a function?
That is C++ cast syntax. The following are equivalent:
std::cout << (char)('A' + i); // C-style cast: (T)e
std::cout << char('A' + i); // C++ function-style cast: T(e); also, static_cast<T>(e)
Stroustrup's The C++ Programming Language (3rd edition, p. 131) calls the first type a C-style cast and the second type a function-style cast. In C++, a conversion like this one is equivalent to the static_cast<T>(e) notation. Function-style casts are not available in C.
This is not a function call; it's a typecast. More usually it's written as
std::cout << (char)('A'+i);
That makes it clear it's not a function call, but your version does the same thing. Note that your version is only valid in C++, while the one above works in both C and C++. In C++ you can also be more explicit and write
std::cout << static_cast<char>('A'+i);
instead.
Note that the cast is necessary, because 'A'+i has type int and would otherwise be printed as an integer. If you want it to be interpreted as a character code, you need the char cast.
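To illustrate the difference, a tiny self-contained example (using i = 5 as in the question and assuming an ASCII execution character set):

#include <iostream>

int main() {
    unsigned int i = 5;
    std::cout << 'A' + i << '\n';                     // 70: the arithmetic result prints as a number
    std::cout << char('A' + i) << '\n';               // F: function-style cast to char
    std::cout << static_cast<char>('A' + i) << '\n';  // F: the more explicit C++ cast
    return 0;
}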
I have the following piece of code:
#include <stdlib.h>
#include <stdio.h>

void test(unsigned char * arg) { }

int main() {
    char *pc = (char *) malloc(1);
    unsigned char *pcu = (unsigned char *) malloc(1);

    *pcu = *pc = -1; /* line 10 */

    if (*pc == *pcu) puts("equal"); else puts("not equal"); /* line 12 */

    pcu = pc; /* line 14 */

    if (pcu == pc) { /* line 16 */

        test(pc); /* line 18 */
    }
    return 0;
}
If I compile it with gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) (but it is not limited to this particular version) with options
gcc a.c -pedantic -Wall -Wextra -Wsign-conversion -Wno-unused-parameter; ./a.out
I get the following warnings
test.c: In function ‘main’:
test.c:10:21: warning: conversion to ‘unsigned char’ from ‘char’ may change the sign of the result [-Wsign-conversion]
test.c:14:13: warning: pointer targets in assignment differ in signedness [-Wpointer-sign]
test.c:16:17: warning: comparison of distinct pointer types lacks a cast [enabled by default]
test.c:18:17: warning: pointer targets in passing argument 1 of ‘test’ differ in signedness [-Wpointer-sign]
test.c:4:6: note: expected ‘unsigned char *’ but argument is of type ‘char *’
not equal
g++ warnings/errors are similar. I hope I understand why the comparison on line 12 evaluates to false, but is there any way to get a warning in such cases as well? If not, is there some fundamental difference between line 12 and the lines which cause warnings? Is there any specific reason why a comparison of char and unsigned char shouldn't deserve its own warning? Because at least at first glance, line 12 seems more "dangerous" to me than e.g. line 16.
A short "story behind": I have to put together pieces of code from various sources. Some of them use char and some use unsigned char. -funsigned-char would work fine, but I am forced to avoid it and to add proper type conversions instead. That's why such a warning would be useful for me: now, if I forget to add a type conversion in such a case, the program silently fails.
Thanks in advance, P.
I believe this is caused by integer promotion.
When you deal with char or short, what C actually does (and this is defined by the standard, not the implementation) is promote those types to int before doing any operations. The theory, I think, is that int is supposed to be the natural size used by the underlying machine, and therefore the fastest, most efficient size; in fact, most architectures will do this conversion on loading a byte without being asked.
Since both signed char and unsigned char will fit happily within the range of a signed int, the compiler uses that for both, and the comparison becomes a pure signed comparison.
When you have a mismatched type on the left-hand-side of the expression (lines 10 and 14) then it needs to convert that back to the smaller type, but it can't, so you get a warning.
When you compared the mismatching pointers (line 16) and passed the mismatching pointer (line 18), the integer promotion is not in play because you never actually dereference the pointers, and so no integers are ever compared (char is an integer type also, of course).
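A small sketch of what the promotions on line 12 amount to (assuming char is signed and 8 bits wide, as on the asker's x86 Ubuntu setup):

#include <stdio.h>

int main(void) {
    char c = -1;            /* bit pattern 0xFF, value -1  */
    unsigned char uc = -1;  /* bit pattern 0xFF, value 255 */

    /* Both operands are promoted to int before the comparison,
       so the test is effectively -1 == 255, which is false. */
    printf("promoted c  = %d\n", (int)c);   /* -1  */
    printf("promoted uc = %d\n", (int)uc);  /* 255 */
    printf("%s\n", c == uc ? "equal" : "not equal");
    return 0;
}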
GCC allows customization of printf format specifiers. However, I don't see how I can "teach" it to accept my string class for the %s specifier. My string class is a simple wrapper over a char pointer: it has exactly one member variable (char * data) and no virtual functions, so it's kind of OK to pass it as-is to printf-like functions in place of a regular char *. The problem is that the gcc static analyzer prevents me from doing so, and I have to explicitly cast it to const char * to avoid warnings or errors.
My cstring looks something like this:
class cstring
{
public:
    cstring() : data(NULL) {}
    cstring(const char * str) : data(strdup(str)) {}
    cstring(const cstring & str) : data(strdup(str.data)) {}
    ~cstring()
    {
        free(data);
    }
    ...
    const char * c_str() const
    {
        return data;
    }
private:
    char * data;
};
Example code that uses cstring:
cstring str("my string");
printf("str: '%s'", str);
On GCC I get this error:
error: cannot pass objects of non-trivially-copyable type 'class cstring' through '...'
error: format '%s' expects argument of type 'char*', but argument 1 has type 'cstring' [-Werror=format]
cc1plus.exe: all warnings being treated as errors
The C++ standard doesn't require compilers to support this sort of code, and not all versions of gcc support it. (https://gcc.gnu.org/onlinedocs/gcc/Conditionally-supported-behavior.html suggests that gcc-6.0 does, at least; it's an open question whether it will work with classes such as the one here.)
The relevant passage in the C++11 standard is 5.2.2 paragraph 7:
When there is no parameter for a given argument, the argument is passed in such a way that the receiving function can obtain the value of the argument by invoking va_arg ...
Passing a potentially-evaluated argument of class type (Clause 9)
having a non-trivial copy constructor, a non-trivial move constructor,
or a non-trivial destructor, with no corresponding parameter, is
conditionally-supported with implementation-defined semantics.
(But look on the bright side: if you get into the habit of using c_str, then at least you won't get tripped up when/if you use std::string.)
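As a final illustration of that habit, a minimal sketch (shown with std::string; calling str.c_str() works the same way with the cstring class above):

#include <cstdio>
#include <string>

int main() {
    std::string str("my string");
    // A non-trivially-copyable object can't be passed through printf's "...",
    // so hand over the raw pointer explicitly instead.
    std::printf("str: '%s'\n", str.c_str());
    return 0;
}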