#include <stdio.h>

int main()
{
    int ret = -1071;
    if (ret == 0xfffffbd1)
    {
        printf("HAHAHA");
    }
    return 0;
}
Why does GCC treat the constant 0xfffffbd1 as an unsigned int in the condition ret == 0xfffffbd1?
The C standard says that [t]he type of an integer constant is the first of the corresponding list in which its value can be represented (paragraph 6.4.4.1/5 in C99), and for hexadecimal constants without a suffix, this list is:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
Assuming a 32-bit int type, 0xfffffbd1 is larger than INT_MAX but less than UINT_MAX, so the type of the constant is unsigned int.
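As a quick check, here is a small sketch using C11's _Generic (not part of the original question); on a system with a 32-bit int it prints "unsigned int":
#include <stdio.h>

int main(void)
{
    /* _Generic reports the type the compiler chose for the constant. */
    puts(_Generic(0xfffffbd1,
                  int:               "int",
                  unsigned int:      "unsigned int",
                  long int:          "long int",
                  unsigned long int: "unsigned long int",
                  long long int:     "long long int",
                  default:           "something else"));
    return 0;
}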
Assuming a 32-bit int type:
First case: I defined a signed int ret = -1071 and evaluated ret == 0xfffffbd1. The constant 0xfffffbd1 has type unsigned int, so ret is converted to unsigned int (giving 0xfffffbd1) and the result of the expression is TRUE.
Second case: I changed ret to long long int, kept ret = -1071, and evaluated ret == 0xfffffbd1. Now the result is FALSE: both operands are converted to long long int, so the comparison is between -1071 and 4294966225. If I change the constant on the right to 0xfffffffffffffbd1 (which has type unsigned long long int), the result of the expression is TRUE again.
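These observations can be reproduced with a small test program (a sketch, assuming a 32-bit int and a 64-bit long long); it prints 1 0 1:
#include <stdio.h>

int main(void)
{
    int i = -1071;
    long long ll = -1071;
    /* i is converted to unsigned int, so the first comparison is true;
       ll and the unsigned int constant are both converted to long long,
       so the second is false; the wider constant is unsigned long long,
       ll wraps to the same value, so the third is true. */
    printf("%d %d %d\n",
           i == 0xfffffbd1,
           ll == 0xfffffbd1,
           ll == 0xfffffffffffffbd1);
    return 0;
}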
I have code like this:
#include <stdio.h>

int main(){
    struct{
        unsigned char a:4;
        unsigned char b:4;
    }i;
    struct{
        unsigned char a:4;
        unsigned char b:4;
        unsigned char c:4;
    }j;
    i.a = 1;
    i.b = 1;
    j.a = 1;
    j.b = 1;
    j.c = 1;
    printf("size of i is: %zu, size of j is: %zu", sizeof(i), sizeof(j));
    return 0;
}
Why is the output 1 2? That means i occupies 1 byte and j occupies 2 bytes. We know an unsigned char is 1 byte, so why isn't the size of i equal to 2?
Bit-fields are packed into their storage units rather than each getting its own byte, and a struct is always padded up to a whole number of bytes.
In struct i, a and b are 4 bits each, so together they fill exactly 1 byte.
In j, the bit-fields sum to 12 bits; the third field starts a second unsigned char storage unit, so the size is 2 bytes due to padding.
Reference: http://www.cplusplus.com/forum/general/51911/
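To see that the size is driven by whole storage units rather than by the raw bit count, here is a hedged sketch (bit-field layout is implementation-defined, and unsigned char / unsigned short bit-fields are a common compiler extension); on GCC/x86 it typically prints 2 2 4:
#include <stdio.h>

struct CharUnits  { unsigned char  a:4, b:4, c:4; };  /* 12 bits in 8-bit units  */
struct ShortUnits { unsigned short a:4, b:4, c:4; };  /* 12 bits in 16-bit units */
struct IntUnits   { unsigned int   a:4, b:4, c:4; };  /* 12 bits in 32-bit units */

int main(void)
{
    printf("%zu %zu %zu\n",
           sizeof(struct CharUnits),
           sizeof(struct ShortUnits),
           sizeof(struct IntUnits));
    return 0;
}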
In gdb,
(gdb) p -2147483648
$28 = 2147483648
(gdb) pt -2147483648
type = unsigned int
Since -2147483648 is within the range of type int, why is gdb treating it as an unsigned int?
(gdb) pt -2147483647-1
type = int
(gdb) p -2147483647-1
$27 = -2147483648
I suspect that gdb determines the type of the literal first and applies the unary negation operator afterwards:
In case 1, gdb parses 2147483648, which does not fit in int and so becomes unsigned int; it then applies the negation.
In case 2, 2147483647 is a valid int and stays an int when the negation and the subtraction are subsequently applied.
gdb appears to be following a set of rules for determining the type of a decimal integer literal that are inconsistent with the rules given by the C standard.
I'll assume your system has a 32-bit int and long int types, using 2's-complement and no padding bits (that's a common choice for 32-bit systems, and it's consistent with what you're seeing). Then the ranges of int and unsigned int are:
int: -2147483648 .. +2147483647
unsigned int: 0 .. 4294967295
and the ranges of long int and unsigned long int are the same.
2147483647 is within the range of type int, so that's its type.
Since the value of 2147483648 is outside the range of type int, apparently gdb is choosing to treat it as an unsigned int. And -2147483648 is not an integer literal, it's an expression consisting of a unary - operator applied to the constant 2147483648. Since gdb treats 2147483648 as an unsigned int, it also treats -2147483648 as an unsigned int, and the unary - operator for unsigned types wraps around, yielding 2147483648.
As for -2147483647-1, that's an expression all of whose operands are of type int, and there's no overflow.
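Here is a small C sketch of that wraparound (assuming a 32-bit int; the u suffix forces the unsigned type that gdb apparently chooses for the literal):
#include <stdio.h>

int main(void)
{
    unsigned int u = 2147483648u;     /* what gdb makes of 2147483648     */
    printf("%u\n", -u);               /* unary minus wraps: 2147483648    */
    printf("%d\n", -2147483647 - 1);  /* ordinary int arithmetic: INT_MIN */
    return 0;
}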
In all versions of ISO C, though, an unsuffixed decimal literal can never be of type unsigned int. In C90, its type is the first of:
int
long int
unsigned long int
that can represent its value. Under C99 rules (and later), the type of a decimal integer constant is the first of:
int
long int
long long int
that can represent its value.
I don't know whether there's a way to tell gdb to use C rules for integer literals.
Following is the code of my function:
void printf(char *ch, void *num, ...)
{
    int i;
    va_list ptr;        // to store variable length argument list
    va_start(ptr, num); // initialise ptr
    for (i = 0; ch[i] != '\0'; i++)
    {
        if (ch[i] == '%') // check for % sign in print statement
        {
            i++;
            if (ch[i] == 'd')
            {
                int *no = (int *)va_arg(ptr, int *);
                int value = *no;  // just used for nothing
                printno(value);   // print int number
            }
            if (ch[i] == 'u')
            {
                unsigned long *no = (unsigned long *)va_arg(ptr, unsigned long *);
                unsigned long value = *no;
                printuno(value);  // print unsigned long
            }
        }
        else // if not % sign then its regular character so print it
        {
            printchar(ch[i]);
        }
    }
}
This is my code for printf() to print int and unsigned int values.
It works fine for the literal characters in the format string, but for %d and %u it prints the same value for every variable. That value is 405067, even though the values of the variables are different.
Please tell me how to fix this.
Why are you interpreting the argument as a pointer? I'm surprised you aren't crashing. You should just be using
int num = va_arg(ptr,int);
printno(num);
and
unsigned int num = va_arg(ptr,unsigned int);
printuno(num);
(note, unsigned int, not unsigned long, because that would actually be %lu)
Also, get rid of the num parameter. It's wrong. Your va_list should be initialized as
`va_start(ptr, ch);`
va_start() takes the last argument before the varargs, not the first argument.
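Putting those fixes together, a minimal sketch of the corrected function could look like this (the printno(), printuno() and printchar() helpers are assumed to exist as in the question; the signatures declared here are guesses):
#include <stdarg.h>

void printno(int value);            /* assumed helpers from the question */
void printuno(unsigned int value);
void printchar(char c);

void printf(char *ch, ...)          /* the bogus num parameter is gone */
{
    va_list ptr;
    va_start(ptr, ch);              /* ch is the last named parameter */
    for (int i = 0; ch[i] != '\0'; i++)
    {
        if (ch[i] == '%')
        {
            i++;
            if (ch[i] == 'd')
                printno(va_arg(ptr, int));           /* read the value itself */
            else if (ch[i] == 'u')
                printuno(va_arg(ptr, unsigned int));
        }
        else
        {
            printchar(ch[i]);
        }
    }
    va_end(ptr);
}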
As noted in a comment, the C99 prototype for printf() is:
int printf(const char * restrict format, ...);
Therefore, if you're calling your function printf(), you should probably follow its design. I'm going to ignore flags, field width, precision and length modifiers, assuming that the conversion specifiers are simply two characters each, such as %d or %%.
int printf(const char * restrict format, ...)
{
    va_list args;
    va_start(args, format);
    char c;
    int len = 0;

    while ((c = *format++) != '\0')
    {
        if (c != '%')
        {
            putchar(c);
            len++;
        }
        else if ((c = *format++) == '%')
        {
            putchar(c);
            len++;
        }
        else if (c == 'd')
        {
            int value = va_arg(args, int);
            len += printno(value);
        }
        else if (c == 'u')
        {
            unsigned value = va_arg(args, unsigned);
            len += printuno(value);
        }
        else
        {
            /* Print unrecognized formats verbatim */
            putchar('%');
            putchar(c);
            len += 2;
        }
    }
    va_end(args);
    return len;
}
Dealing with the full set of format specifiers (especially if you add the POSIX n$ notation as well as flags, field width, precision and length modifiers) is much harder, but this should get you moving in the correct direction. Note that I assume the printno() and printuno() functions both report how many characters were written for the conversion specifier. The function returns the total number of characters written. Note, too, that production code would need to allow for the called functions to fail, and would therefore probably not use the len += printno(value); notation, but would capture the return from printno() into a separate variable that could be tested for an error before adding it to the total length output.
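As an illustration only, the %d branch with that kind of error handling could look like the fragment below, assuming printno() returns a negative value on failure (that convention is an assumption, not something the code above specifies):
else if (c == 'd')
{
    int value = va_arg(args, int);
    int n = printno(value);   /* capture the count instead of adding it directly */
    if (n < 0)                /* assumed convention: negative return on failure   */
    {
        va_end(args);
        return -1;            /* report the error, as the real printf() does      */
    }
    len += n;
}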
I ported a project from Visual C++ 6.0 to VS 2010 and found that a critical part of the code (a scripting engine) now runs about three times slower than it did before.
After some research I managed to extract a code fragment which seems to cause the slowdown. I minimized it as much as possible, so it will be easier to reproduce the problem.
The problem is reproduced when assigning a complex class (Variant) which contains another class (String) and a union of several other fields of simple types.
Playing with the example, I discovered more "magic":
1. If I comment out one of the unused (!) class members, the speed increases, and the code ends up running faster than when compiled with VS 6.2.
2. The same is true if I remove the "union" wrapper.
3. The same is true even if I change the value of the field from 1 to 0.
I have no idea what the hell is going on.
I have checked all the code generation and optimization switches, but without any success.
The code sample is below.
On my Intel 2.53 GHz CPU this test, compiled under VS 6.2, runs in 1.0 second.
Compiled under VS 2010: 40 seconds.
Compiled under VS 2010 with the "magic" lines commented out: 0.3 seconds.
The problem reproduces with any optimization switch, but "Whole program optimization" (/GL) must be disabled; otherwise this too-smart optimizer figures out that our test actually does nothing, and the test runs in 0 seconds.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   // for memcpy
class String
{
public:
    char *ptr;
    int size;

    String() : ptr(NULL), size(0) {};
    ~String() { if (ptr != NULL) free(ptr); };
    String& operator=(const String& str2);
};

String& String::operator=(const String& string2)
{
    if (string2.ptr != NULL)
    {
        // This part is never called in our test:
        ptr = (char *)realloc(ptr, string2.size + 1);
        size = string2.size;
        memcpy(ptr, string2.ptr, size + 1);
    }
    else if (ptr != NULL)
    {
        // This part is never called in our test:
        free(ptr);
        ptr = NULL;
        size = 0;
    }
    return *this;
}
struct Date
{
    unsigned short year;
    unsigned char month;
    unsigned char day;
    unsigned char hour;
    unsigned char minute;
    unsigned char second;
    unsigned char dayOfWeek;
};

class Variant
{
public:
    int dataType;
    String valStr; // If we comment out this member, the speed is OK!

    // if we drop the 'union' wrapper, the speed is OK!
    union
    {
        __int64 valInteger;
        // if we comment out any of these fields, unused in our test, the speed is OK!
        double valReal;
        bool valBool;
        Date valDate;
        void *valObject;
    };

    Variant() : dataType(0) {};
};
void TestSpeed()
{
    __int64 index;
    Variant tempVal, tempVal2;

    tempVal.dataType = 3;
    tempVal.valInteger = 1; // If we comment out this line, the speed is OK!

    for (index = 0; index < 200000000; index++)
    {
        tempVal2 = tempVal;
    }
}

int main(int argc, char* argv[])
{
    int ticks;
    char str[64];

    ticks = GetTickCount();
    TestSpeed();
    sprintf(str, "%.*f", 1, (double)(GetTickCount() - ticks) / 1000);
    MessageBox(NULL, str, "", 0);
    return 0;
}
This was rather interesting. First I was unable to reproduce the slow down in release build, only in debug build. Then I turned off SSE2 optimizations and got the same ~40s run time.
The problem seems to be in the compiler generated copy assignment for Variant. Without SSE2 it actually does a floating point copy with fld/fstp instructions because the union contains a double. And with some specific values this apparently is a really expensive operation. The 64-bit integer value 1 maps to 4.940656458412e-324#DEN which is a denormalized number and I believe this causes problems. When you leave tempVal.valInteger uninitialized it may contain a value that works faster.
I did a small test to confirm this:
union {
    uint64_t i;
    volatile double d1;
};

i = 0xcccccccccccccccc; // with this value the test takes 0.07 seconds
//i = 1;                // change to 1 and now the test takes 36 seconds

volatile double d2;
for (int i = 0; i < 200000000; ++i)
    d2 = d1;
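As another hedged check (assuming IEEE-754 doubles; not part of the original measurements), you can ask the library whether the bit pattern of the integer 1 really is a subnormal double:
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    uint64_t bits = 1;             /* the bit pattern stored by valInteger = 1 */
    double d;
    memcpy(&d, &bits, sizeof d);   /* reinterpret those bits as a double */
    printf("%g, subnormal: %d\n", d, fpclassify(d) == FP_SUBNORMAL);
    return 0;
}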
So what you could do is define your own copy assignment for Variant that just does a simple memcpy of the union.
Variant& operator=(const Variant& rhs)
{
    dataType = rhs.dataType;

    union UnionType
    {
        __int64 valInteger;
        double valReal;
        bool valBool;
        Date valDate;
        void *valObject;
    };

    memcpy(&valInteger, &rhs.valInteger, sizeof(UnionType));
    valStr = rhs.valStr;
    return *this;
}
I have a small sample function:
#define VALUE 0

int test(unsigned char x) {
    if (x >= VALUE)
        return 0;
    else
        return 1;
}
My compiler warns me that the comparison (x>=VALUE) is true in all cases, which is right, because x is an unsigned character and VALUE is defined with the value 0. So I changed my code to:
if ( ((signed int) x ) >= ((signed int) VALUE ))
But the warning comes again. I tested it with three GCC versions (all versions > 4.0, sometimes you have to enable -Wextra).
In the changed version I have an explicit cast, so it should be a signed int comparison. Why does it still claim that the comparison is always true?
Even with the cast, the comparison is still true in all cases. The compiler still determines that (signed int)0 has the value 0, and still determines that (signed int)x is non-negative, since every value an unsigned char can hold fits in a signed int.
So the compiler continues warning because it continues to eliminate the else case altogether.
Edit: To silence the warning, write your code as
#define VALUE 0
int test(unsigned char x) {
#if VALUE==0
    return 0;                   /* x >= VALUE always holds for an unsigned char */
#else
    return x >= VALUE ? 0 : 1;
#endif
}
x is an unsigned char, meaning it is between 0 and 255. Since an int is bigger than a char, casting unsigned char to signed int still retains the char's original value. Since this value is always >= 0, your if is always true.
All the values of an unsigned char fit perfectly in your int, so even with the cast you will never get a negative value. The cast you need is to signed char; however, in that case you should declare x as signed in the function signature. There is no point in lying to the callers that you need an unsigned value when in fact you need a signed one.
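For example, a minimal sketch of that signed variant (assuming the callers really do pass values that can be negative):
#include <stdio.h>

#define VALUE 0

int test(signed char x) {      /* x can now be negative, so the test is meaningful */
    if (x >= VALUE)
        return 0;
    else
        return 1;
}

int main(void) {
    printf("%d %d\n", test(5), test(-5));   /* prints: 0 1 */
    return 0;
}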
The #define of VALUE to 0 means that your function is reduced to this:
int test(unsigned char x) {
    if (x >= 0)
        return 0;
    else
        return 1;
}
Since x is always passed in as an unsigned char, then it will always have a value between 0 and 255 inclusive, regardless of whether you cast x or 0 to a signed int in the if statement. The compiler therefore warns you that x will always be greater than or equal to 0, and that the else clause can never be reached.