I want to use gcc to do some compile-time checking on function inputs if the compiler knows that they are constants.
I have a solution that almost works, and as far as I can see, it should work.
Note: __builtin_constant_p(expression) is supposed to return whether an expression is known to be a constant at compile time.
Assuming we want to check whether port < 2 when calling uart(port), the following code should work:
#include <stdio.h>

void _uart(int port) {
    printf("port is %d", port);
}

#define uart(port) \
    static_assert(__builtin_constant_p(port) ? port < 2 : 1, "parameter port must be < 2"); \
    _uart(port)

int main(void) {
    int x = 1;
    uart(x);
}
This works when uart() is called with a constant argument. Unfortunately, it doesn't quite work for non-constant x: static_assert can't handle the case where x is not a constant, even though, in theory, the __builtin_constant_p() guard should keep the non-constant comparison from ever reaching it. The error message I get is:
c:\>gcc a.cpp -std=c++0x -Os
a.cpp: In function 'int main()':
a.cpp:13: error: 'x' cannot appear in a constant-expression
Any ideas?
Your code works with g++ (GCC) 4.8.2 - but not with optimization enabled, as you correctly noted.
If only we could use
static_assert(__builtin_choose_expr(__builtin_constant_p(port), \
              port < 2, 1), "parameter port must be < 2")
- but unfortunately the __builtin_choose_expr construct is currently only available in C.
However, there is a C++ patch, which sadly hasn't made it into a release yet.
You can try the trick used in the Linux kernel:
What is ":-!!" in C code?
The (somewhat horrible) Linux kernel macro is less strict about what kinds of expressions are allowed in the parameter.
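For reference, here is a minimal sketch of how such a check can be built (the helper name uart_port_check_failed is mine; this relies on GCC's error attribute and on dead-code elimination, so it needs an enabled optimizer, as in your -Os build). It is the same mechanism the kernel's compiletime_assert uses: if the call to the error-attributed function survives optimization, compilation fails.

/* Fails to compile only when port is a compile-time constant >= 2;
   for a non-constant port, __builtin_constant_p() folds to 0 and the
   call below is removed as dead code. */
extern void uart_port_check_failed(void)
    __attribute__((error("parameter port must be < 2")));

#define uart(port) do {                                  \
    if (__builtin_constant_p(port) && !((port) < 2))     \
        uart_port_check_failed();                        \
    _uart(port);                                         \
} while (0)

Wrapping the macro body in do { } while (0) also keeps it usable as a single statement, unlike the two-statement version in the question.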
Related
I am getting the following error
rudimentary_calc.c: In function ‘main’:
rudimentary_calc.c:9:6: error: conflicting types for ‘getline’
9 | int getline(char line[], int max) ;
| ^~~~~~~
In file included from rudimentary_calc.c:1:
/usr/include/stdio.h:616:18: note: previous declaration of ‘getline’ was here
616 | extern __ssize_t getline (char **__restrict __lineptr,
| ^~~~~~~
when I compiled the following code
#include <stdio.h>
#define maxline 100

int main()
{
    double sum, atof(char[]);
    char line[maxline];
    int getline(char line[], int max);

    sum = 0;
    while (getline(line, maxline) > 0)
        printf("\t %g \n", sum += atof(line));
    return 0;
}
What am I doing wrong? I am very new to C, so I don't know what went wrong.
Generally, you should not have to declare "built-in" functions as long as you #include the appropriate header files (in this case stdio.h). The compiler is complaining that your declaration is not exactly the same as the one in stdio.h.
The venerable K&R book defines a function named getline. The GNU C library also defines a non-standard function named getline, which is not compatible with the one in K&R and is declared in the standard <stdio.h> header. So there is a name conflict (something that every C programmer has to deal with).
You can instruct GCC to ignore non-standard names found in standard headers by supplying a compilation flag such as -std=c99, -std=c11, or any other -std=c<year> flag that your compiler supports.
Always use one of these flags, plus at least -Wall, to compile any C code, including code from K&R. You may encounter some compiler warnings or even errors. This is good: they will tell you about constructs that were fine in the days of K&R but are considered problematic now, and you want to know about those. The book is rather old, and both best practices and the C language itself have evolved since.
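If you want the K&R exercise to build regardless of flags, you can also simply rename the function. A sketch (get_line is my name for it; the body follows the K&R definition):

#include <stdio.h>
#include <stdlib.h>   /* atof */
#define MAXLINE 100

/* K&R-style line reader, renamed so it cannot clash with glibc's getline */
int get_line(char s[], int lim)
{
    int c = EOF, i;
    for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n')
        s[i++] = c;
    s[i] = '\0';
    return i;
}

int main(void)
{
    double sum = 0;
    char line[MAXLINE];

    while (get_line(line, MAXLINE) > 0)
        printf("\t %g \n", sum += atof(line));
    return 0;
}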
I have been debugging a strange issue over the past hours that only occurred in a release build (-O3) but not in a debug build (-g and no optimizations). Finally, I could pin it down to the "count trailing zeroes" builtin giving me wrong results, and now I wonder whether I just found a GCC bug or whether I'm missing something.
The short story is that apparently, GCC evaluates __builtin_ctz wrongly with -O2 and -O3 in some situations, but it does fine with no optimizations or -O1. The same applies to the long variants __builtin_ctzl and __builtin_ctzll.
My initial assumption is that __builtin_ctz(0) should resolve to 32, because it is the unsigned int (32-bit) version of the builtin and thus there are 32 trailing zero bits. I have not found anything stating that these builtins are undefined for the input being zero, and practical work with them has me convinced that they are not.
Let's have a look at the code I'd like to talk about now:
#include <iostream>

bool test_basic;
bool test_ctz;
bool test_result;

int ctz(const unsigned int x) {
    const int q = __builtin_ctz(x);
    test_ctz = (q == 32);
    return q;
}

int main(int argc, char** argv) {
    {
        const int q = __builtin_ctz(0U);
        test_basic = (q == 32);
    }
    {
        const int q = ctz(0U);
        test_result = (q == 32);
    }
    std::cout << "test_basic=" << test_basic << std::endl;
    std::cout << "test_ctz=" << test_ctz << std::endl;
    std::cout << "test_result=" << test_result << std::endl;
}
The code basically does three tests, storing the results in those boolean values:
test_basic is true if __builtin_ctz(0U) resolves to 32.
test_ctz is true if __builtin_ctz(x) equals 32 within the function ctz.
test_result is true if the result of ctz(0) equals 32.
Because I call ctz once in my main function and pass zero to it, I expect all three bools to be true by the end of the program. This actually is the case if I compile it without any optimizations or -O1. However, when I compile it with -O2, test_ctz becomes false. I consulted the Compiler Explorer to find out what the hell is going on. (Note that I am using g++ 7.5 myself, but I could reproduce this with any later version as well. In the Compiler Explorer, I picked the latest it has to offer, which is 10.2.)
Let's have a look at the code compiled with -O1 first. I see that test_ctz is simply set to 1. I guess that's because these builtins are treated as constexpr and the whole rather simple function ctz is evaluated at compile-time. The result is correct (under my initial assumption) and so I'm fine with that.
So what could possibly go wrong from here? Well, let's look at the code compiled with -O2. Nothing much has changed, just that test_ctz is now set to 0! And that's that, beyond any logic: the compiler apparently evaluates q == 32 to being false, but then q is returned from the function and we compare that against 32, and suddenly it's true (test_result). I have no explanation for this. Am I missing something? Have I found some demonical GCC bug?
It gets even funnier if you printf the value of q just before test_ctz is set: the console then prints 32, so the computation actually works as expected - at runtime. Yet at compile-time, the compiler thinks q is not 32 and test_ctz is forced to false. Indeed, if I change the declaration of q from const int to volatile int and thus force the computation at runtime, everything works as expected, so luckily there's a simple workaround.
To conclude, I'd like to note that I also use the "count leading zeroes" builtins (__builtin_clz and long versions) and I could not observe the same problem there; they work just fine.
I have not found anything stating that these builtins are undefined for the input being zero
How could you have missed it? From the GCC online docs, Other Built-in Functions:
Built-in Function: int __builtin_ctz (unsigned int x)
Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined.
So what could possibly go wrong from here?
Code behaving differently at different optimization levels is, in 99% of cases, a clear indication of undefined behavior in your code. In this case, the compiler's optimizations simply make a different decision than the architecture's BSF instruction does, and even when the compiler does generate BSF on x86, the result is still undefined; from Intel's documentation: "If the content source operand is 0, the content of the destination operand is undefined." Oh, and there's also TZCNT, which produces the operand size when the input operand is zero - that perhaps better explains the runtime behavior of your code.
Am I missing something?
Yes. You are missing that __builtin_ctz(0) is undefined.
Have I found some demonical GCC bug?
No.
I'd like to note that I also use the "count leading zeroes" builtins (__builtin_clz and long versions) and I could not observe the same problem there; they work just fine.
As can be seen in the GCC docs, __builtin_clz(0) is also undefined behavior; you have just not been bitten by it yet.
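If you need a defined result for a zero input, make the check explicit instead of relying on whatever the underlying instruction happens to return. A minimal sketch (ctz32 is my name for it):

/* Zero-safe wrapper: defined for all inputs, including 0. */
static inline int ctz32(unsigned int x)
{
    return x ? __builtin_ctz(x) : 32;
}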
In the code listed below, "LambdaTest" fails with the following error on Clang only:
shared/LambdaTest.cpp:8:31: error: variable 'array' with variably
modified type cannot be captured in a lambda expression
auto myLambdaFunction = [&array]()
^
shared/LambdaTest.cpp:7:9: note: 'array' declared here
int array[length];
The function "LambdaTest2" which passes the array as a parameter instead of capturing compiles fine on G++/Clang.
// Compiles with G++ but fails in Clang
void LambdaTest(int length)
{
    int array[length];
    auto myLambdaFunction = [&array]()
    {
        array[0] = 2;
    };
    myLambdaFunction();
}

// Compiles OK with G++ and Clang
void LambdaTest2(int length)
{
    int array[length];
    auto myLambdaFunction = [](int* myarray)
    {
        myarray[0] = 2;
    };
    myLambdaFunction(array);
}
Two questions:
What does the compiler error message "variable 'array' with variably modified type cannot be captured in a lambda expression" mean?
Why does LambdaTest fail to compile on Clang and not G++?
Thanks in advance.
COMPILER VERSIONS:
- G++ version 4.6.3
- clang version 3.5.0.210790
int array[length]; is not allowed in Standard C++. The dimension of an array must be known at compile-time.
What you are seeing is typical for non-standard features: they conflict with standard features at some point. The reason this isn't standard is because nobody has been able to make a satisfactory proposal that resolves those conflicts. Instead, each compiler has done their own thing.
You will have to either stop using the non-standard feature, or live with what a compiler happens to do with it.
VLA (Variable-length array) is not officially supported in C++.
You can instead use std::vector like so:
#include <vector>

void LambdaTest(int length)
{
    std::vector<int> array(length);
    auto myLambdaFunction = [&array]()
    {
        array[0] = 2;
    };
    myLambdaFunction();
}
Thanks to both answers above for pointing out that VLAs are non-standard. Now I know what to search for.
Here are some related links on the subject.
Why aren't variable-length arrays part of the C++ standard?
Why no VLAS in C++
I have mainly two kinds of compile warning:
1. implicit declaration of function
a.c defines char *foo(char *ptr1, char *ptr2); in b.c, some functions call foo without any declaration. It seems the compiler treats foo's return value as an integer, and it even lets me pass fewer or more arguments than the declaration specifies.
2. enumerated type mixed with another type
My target chip is ARM11. It seems that even if I don't fix these two kinds of compile warnings, my program runs without any issues, but I believe there must be some risk behind them. Can anyone give me a good example of how these two kinds of compile warnings can cause unexpected issues?
Meanwhile, if these two warnings carry real risk, why does the C compiler merely warn about them instead of treating them as errors? Is there any story behind that?
Implicit declaration. E.g. you have the function float foo(float a), which isn't declared at the point where you call it. The implicit-declaration rules create an auto-declaration with the signature int foo(double) (if the passed argument is a float, it is promoted to double). So the value you pass is converted to double, but foo expects float. The same happens with the return value: the calling code expects an int, but a float is returned. The values end up a complete mess.
enum mixed with another type. An enumerated type has a list of values it can take. If you assign an arbitrary numeric value to it, there is a chance that it isn't one of the listed values; if your code later expects only the specified range and presumes nothing else can be there, it can misbehave.
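To illustrate the enum case, a small sketch (the names are mine; the exact warning text varies by compiler - the wording quoted in the question is typical of the ARM compiler, while GCC is more permissive about int-to-enum assignment):

#include <stdio.h>

enum state { IDLE, RUNNING, DONE };

static const char *name(enum state s)
{
    switch (s) {
    case IDLE:    return "idle";
    case RUNNING: return "running";
    case DONE:    return "done";
    }
    return "?";  /* reached only for values outside the enumeration */
}

int main(void)
{
    enum state s = 7;  /* not a listed value: this is what the warning is about */
    puts(name(s));     /* prints "?" - the three-state assumption is broken */
    return 0;
}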
A simple example of the implicit-declaration problem:
File: warn.c
#include <stdio.h>

double foo(double x)
{
    return myid(x);
}

int
main (void)
{
    double x = 1.0;
    fprintf (stderr, "%lg == %lg\n", x, foo (x));
    return 0;
}
File: foo.c
double
myid (double x)
{
    return x;
}
Compile and run:
$ gcc warn.c foo.c -Wall
warn.c: In function ‘foo’:
warn.c:5: warning: implicit declaration of function ‘myid’
$ ./a.out
1 == 0
The old C standard (C90) had this strange "default int" rule, and for compatibility it is still supported, with a warning, even in the latest compilers.
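The real fix is to give every caller a proper prototype, typically via a shared header (foo.h is my name for it):

/* foo.h - hypothetical header shared by warn.c and foo.c */
double myid(double x);

With #include "foo.h" added at the top of warn.c, the call is compiled with the correct types and the program prints 1 == 1.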
I have been told that you can add a special annotation to your code to make GCC issue a warning when it detects at compile time that 0 is being passed as an argument.
I have looked for it but haven't been able to find it. Is this true?
There is a function attribute you can use to warn on null pointers:
void foo(void *data) __attribute__((nonnull));

int main(void)
{
    foo(0);
    return 0;
}
$ gcc -Wall -c t.c
t.c: In function ‘main’:
t.c:5:5: warning: null argument where non-null required (argument 1) [-Wnonnull]
I'm not aware of anything built-in to check for 0 for integer types though.
You might find something that suits your need in the various BUILD_BUG_* macros from the Linux kernel though; they live in include/linux/kernel.h.
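For a compile-time-known integer zero specifically, you can build your own check out of __builtin_constant_p and GCC's warning attribute. A sketch under my own names (bar_zero_warning, _bar); like the kernel macros, it relies on the optimizer removing the dead branch, so it needs optimization enabled:

/* Warns only when the argument is a compile-time constant 0; for a
   non-constant argument the branch folds away and no call remains. */
extern void bar_zero_warning(void)
    __attribute__((warning("argument is a compile-time 0")));

void _bar(int n);  /* the real function */

#define bar(n) do {                              \
    if (__builtin_constant_p(n) && (n) == 0)     \
        bar_zero_warning();                      \
    _bar(n);                                     \
} while (0)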