Invalid assertion for overflow check Frama-C - static-analysis

While checking for overflow of the short and char data types in an add operation, the assertions inserted by Frama-C seem to be incorrect:
For char and short operands, the maximum positive and negative values in the assertions are those of the int data type.
What could be the reason for this?

Integral types of rank less than int are converted to either int or unsigned int when used in an arithmetic operation (see C11 6.3.1.8, Usual arithmetic conversions). This is why you see the cast to (int) for x and y. Note that by default -rte will not emit warnings for downcasts, as they are not undefined behavior (6.3.1.3§3 indicates that a signed downcast is implementation defined, and that an implementation may raise a signal). If you add the option -warn-signed-downcast, you'll see the assertions you were probably looking for, which are due to the cast of the result into (char):
/*@ assert rte: signed_downcast: (int)x+(int)y ≤ 127; */
/*@ assert rte: signed_downcast: -128 ≤ (int)x+(int)y; */
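For reference, here is a minimal sketch of the kind of code on which -rte with -warn-signed-downcast emits the two assertions above (the question's exact code is not shown, so this reconstruction is an assumption; it differs from the example below only in that z is a char):
void main(void) {
    char x;
    char y;
    char z;
    x = 1;
    y = 127;
    z = x + y;   /* x and y are promoted to int; the int sum is then downcast to char */
    return;
}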
Note that if you store the result into an int, as in
void main(void) {
    char x;
    char y;
    int z;
    x = 1;
    y = 127;
    z = x + y;
    return;
}
There won't be any downcast warning (but the signed overflow warnings will be present).

Related

Character range in Java

I've read in a book:
..characters are just 16-bit unsigned integers under the hood. That means you can assign a number literal, assuming it will fit into the unsigned 16-bit range (65535 or less).
It gives me the impression that I can assign integers to characters as long as it's within the 16-bit range.
But how come I can do this:
char c = (char) 80000; //80000 is beyond 65535.
I'm aware the cast did the magic. But what exactly happened behind the scenes?
Looks like it's using the int value mod 65536. The following code:
int i = 97 + 65536;
char c = (char)i;
System.out.println(c);
System.out.println(i % 65536);
char d = 'a';
int n = (int)d;
System.out.println(n);
Prints out 'a' and then '97' twice ('a' is character 97 in ASCII).

In GNU GCC 4.5.3 type casting int to short

I would like to know: when two integers are multiplied and the result is typecast to short and assigned to a short, what does the compiler resolve it to? Below is the code snippet:
int a = 1, b = 2, c;
short x = 3, y = 4, z;
int p;
short q;

int main()
{
    c = a*b;            /* Mul two ints and assign to int
                           [compiler resolves this to __mulsi3()] */
    z = x*y;            /* Mul two short and assign to short
                           [compiler resolves this to __mulhi3()] */
    p = (x*y);          /* Mul two short and assign to int
                           [compiler resolves this to __mulsi3()] */
    q = (short)(a*b);   /* Mul two ints typecast to short and assign to short
                           [compiler resolves this to __mulhi3()] */
    return 0;
}
Here, in the case of q = (short)(a*b);, the multiplication of the two ints should be performed first (using __mulsi3()) and the result then assigned to the short. But that is not what happens: the compiler casts both a and b to short and then calls __mulhi3().
I would like to know how (and in which gcc source file) I can change the gcc source code so that I can achieve the behaviour described above.
The compiler can analyse the code and see that, as you convert the result immediately to a short, the multiplication can be done as a short multiplication without affecting the result. This is exactly the same as the second case in your example.
As the result is the same, you don't need to worry about which multiplication function is used.
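If you want to convince yourself, here is a small sketch (not part of the original answer; the values 1000 and 2000 are arbitrary) that computes the product both ways. Assuming a 16-bit short and gcc's usual modular behaviour for the int-to-short conversion, the two results are identical:
#include <stdio.h>

int a = 1000, b = 2000;

int main(void)
{
    short q1 = (short)(a * b);   /* full int multiplication, then truncated to short */
    short x = (short)a, y = (short)b;
    short q2 = (short)(x * y);   /* short operands, promoted to int, multiplied, truncated */
    printf("%d %d\n", q1, q2);   /* prints -31616 -31616 */
    return 0;
}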

Comparison of integers of different signs warning with Xcode

I use an open-source component in my project. When I add EGOTextView to the project, I get semantic issues like:
Comparison of integers of different signs: 'int' and 'NSUInteger' (aka 'unsigned long')
Comparison of integers of different signs: 'NSInteger' (aka 'long') and 'NSUInteger' (aka 'unsigned long')
For example in source code:
for (int i = 0; i < lines.count; i++) // lines is an array
I notice the project has build configure file which includes:
// Make CG and NS geometry types be the same. Mostly doesn't matter on iPhone, but this also makes NSInteger types be defined based on 'long' consistently, which avoids conflicting warnings from clang + llvm 2.7 about printf format checking
OTHER_CFLAGS = $(value) -DNS_BUILD_32_LIKE_64
According to that comment, I guess this setting is what causes the problems.
However, I don't know what this OTHER_CFLAGS setting means, and I also don't know how to change it so as to avoid the semantic issues.
Could any one help me?
Thanks!
Actually, I don't think turning off the compiler warning is the right solution, since comparing an int and an unsigned long introduces a subtle bug.
For example:
unsigned int a = UINT_MAX; // 0xFFFFFFFFU == 4,294,967,295
signed int b = a; // 0xFFFFFFFF == -1
for (int i = 0; i < b; ++i)
{
    // the loop will have zero iterations because i < b is always false!
}
Basically, if you simply convert (implicitly or explicitly) an unsigned int to an int, your code will behave incorrectly whenever the value of the unsigned int is greater than INT_MAX.
The correct solution is to cast the signed int to unsigned int and to also compare the signed int to zero, covering the case where it is negative:
unsigned int a = UINT_MAX; // 0xFFFFFFFFU == 4,294,967,295
for (int i = 0; i < 0 || (unsigned)i < a; ++i)
{
    // the comparison is now safe: a negative i is treated as smaller than a
}
Instead of doing all of this strange type casting all over the place, first notice why you are comparing different types at all: you are the one creating an int!
Do this instead:
for (unsigned long i = 0; i < lines.count; i++) // lines is an array
...and now you are comparing the same types!
The configuration option you're looking at won't do anything about the warning you quoted. What you need to do is go into your build settings and search for the "sign comparison" warning. Turn that off.
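If you prefer to keep the change in the build-configuration file quoted above rather than clicking through the Xcode build settings, the same warning can also be silenced with the compiler flag -Wno-sign-compare, along these lines (a sketch; the flag is standard clang/gcc, but whether you want it project-wide is up to you):
OTHER_CFLAGS = $(value) -DNS_BUILD_32_LIKE_64 -Wno-sign-compare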
Instead of turning the warnings off, you can also prevent them from occurring.
Your lines.count is of type NSUInteger. Convert it to an int first, and then do the comparison:
int count = lines.count;
for (int i = 0; i < count; i++)

How to type cast a literal in C

I have a small sample function:
#define VALUE 0
int test(unsigned char x) {
    if (x >= VALUE)
        return 0;
    else
        return 1;
}
My compiler warns me that the comparison (x>=VALUE) is true in all cases, which is right, because x is an unsigned character and VALUE is defined with the value 0. So I changed my code to:
if ( ((signed int) x ) >= ((signed int) VALUE ))
But the warning appears again. I tested it with three GCC versions (all > 4.0; in some you have to enable -Wextra).
In the changed version I have an explicit cast, and it should be a signed int comparison. Why does the compiler still claim that the comparison is always true?
Even with the cast, the comparison is still true in all cases. The compiler still determines that (signed int)0 has the value 0, and that (signed int)x is non-negative, because every value of an unsigned char is representable in a signed int.
So the compiler continues warning because it continues to eliminate the else case altogether.
Edit: To silence the warning, write your code as
#define VALUE 0
int test(unsigned char x) {
#if VALUE==0
    return 0;                  /* x >= 0 always holds for an unsigned char */
#else
    return x >= VALUE ? 0 : 1;
#endif
}
x is an unsigned char, meaning its value is between 0 and 255. Since an int is bigger than a char, casting an unsigned char to a signed int still retains the char's original value. Since this value is always >= 0, your if is always true.
All the values of an unsigned char fit perfectly in your int, so even with the cast you will never get a negative value. The cast you would need is to signed char; however, in that case you should declare x as signed in the function signature. There is no point lying to your clients that you need an unsigned value when in fact you need a signed one.
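A sketch of that suggestion (assuming the callers really can pass negative values; with a signed parameter the comparison against 0 is no longer trivially true, so the warning disappears):
#define VALUE 0
int test(signed char x) {
    if (x >= VALUE)     /* x can now be negative, so this is a real comparison */
        return 0;
    else
        return 1;
}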
The #define of VALUE to 0 means that your function is reduced to this:
int test(unsigned char x) {
    if (x >= 0)
        return 0;
    else
        return 1;
}
Since x is always passed in as an unsigned char, it will always have a value between 0 and 255 inclusive, regardless of whether you cast x or 0 to a signed int in the if statement. The compiler therefore warns you that x will always be greater than or equal to 0, and that the else clause can never be reached.

Char conversion in gcc

What are the implicit type-conversion rules for char? The following code gives an awkward output of -172.
char x = 200;
char y = 140;
printf("%d", x+y);
My guess was that, being signed, x is converted to 72 and y to 12, which should give 84 as the answer; however, that is not the case, as mentioned above. I am using gcc on Ubuntu.
The following code gives an awkward output of -172.
The behavior of such an out-of-range conversion is implementation dependent, but evidently in your case (and mine) a char has 8 bits, is signed, and uses a two's-complement representation. So the binary representations of the unsigned char values 200 and 140 are 11001000 and 10001100, which correspond to the signed char values -56 and -116, and -56 + -116 equals -172 (the chars are promoted to int to do the addition).
Example forcing x and y to be signed, whatever the default signedness of char:
#include <stdio.h>
int main()
{
    signed char x = 200;
    signed char y = 140;
    printf("%d %d %d\n", x, y, x + y);
    return 0;
}
Compilation and execution:
pi@raspberrypi:/tmp $ gcc -Wall c.c
pi@raspberrypi:/tmp $ ./a.out
-56 -116 -172
pi@raspberrypi:/tmp $
My guess is that being signed, x is casted into 72, and y is casted into 12
You supposed that the high bit is simply removed (11001000 -> 1001000 and 10001100 -> 0001100), but that is not the case; unlike IEEE floats, integers do not use a dedicated sign bit.
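For contrast, here is a sketch (not in the original answer) that forces unsigned char: the bit patterns are the same, but after promotion to int they are read as 200 and 140, so the sum is 340:
#include <stdio.h>

int main(void)
{
    unsigned char x = 200;
    unsigned char y = 140;
    printf("%d %d %d\n", x, y, x + y);   /* prints 200 140 340 */
    return 0;
}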
