Why is this code a non-constant condition?
static_assert(4294965 * 1000 / 1000 == -2, "overflow occurs");
But this is not:
const int overflowed = 4294965 * 1000 / 1000;
static_assert(overflowed == -2, "overflow occurs");
See code on godbolt.
Note: with gcc < 9, the second snippet also produces the error.
https://en.cppreference.com/w/cpp/language/constant_expression
A core constant expression is any expression whose evaluation would not evaluate any one of the following:
[...]
an expression whose evaluation leads to any form of core language undefined behavior (including signed integer overflow, division by zero, pointer arithmetic outside array bounds, etc). Whether standard library undefined behavior is detected is unspecified.
Since it's unspecified whether gcc will detect the undefined behavior, you can get strange cases like yours, where it is detected only sometimes.
If you change your const to constexpr, you get the same error:
constexpr int overflowed = 4294965 * 1000 / 1000;
clang seems to reject both of your versions: https://godbolt.org/z/qocG8xfzb
Note:
Even if you find a way to static_assert on undefined behavior and you get the result you were hoping for, it does not mean you can expect the same result later in the program.
See: https://en.cppreference.com/w/cpp/language/ub
UB and optimization
Because correct C++ programs are free of undefined behavior, compilers may produce unexpected results when a program that actually has UB is compiled with optimization enabled.
IMHO, most tricks that try to "out-smart" the compiler with UB backfire sooner or later and should be avoided.
Related
A non-variable length array declaration in standard C must have an integer constant expression for its size (6.7.6.2). You are not allowed to use floating point expressions even if you cast them to an integer, with one exception - directly casting a floating point constant to integer: (int)5.5 is allowed, (int)(11.0/2) is not (6.6).
However, this works in GCC. GCC warns about it (using a confusing message), but otherwise generates code that works as I expect it to. These are equivalent:
int a[ 6 ]; /* Standard C */
int b[ (int)6.4 ]; /* Standard C */
int c[ (int)(3.2*2) ]; /* Not standard - GCC allows it but warns */
This is fine; GCC is allowed to define extra rules on top of the standard. In fact, since GCC is the only compiler available for my target architecture, I would not mind using this GCC extension.
My problem is: I can't find anything in the documentation that mentions that this should work. I would be grateful if someone could point me to the right place in the documentation, or - if you have some insight into this - tell me that I can't rely on it!
I'm reading Bjarne Stroustrup's book, "The C++ Programming Language" and I found an example explaining static_assert. What I understood is that static_assert only works with things that can be expressed by constant expressions. In other words, it must not include an expression that's meant to be evaluated at runtime.
The following example was used in the book (I did some changes in the code. But I don't think that should change anything that'd be produced by the original example code given in the book.)
#include <iostream>
using namespace std;
void f(double speed)
{
    constexpr double C = 299792.468;
    const double local_max = 160.0/(60*60);
    static_assert(local_max<C,"can't go that fast");
}

int main()
{
    f(3.25);
    cout << "Reached here!";
    return 0;
}
The above gives a compile error. Here's it compiled using ideone: http://ideone.com/C97oF5
The exact code from the book example:
constexpr double C = 299792.458;

void f(double speed)
{
    const double local_max = 160.0/(60*60);
    static_assert(speed<C,"can't go that fast"); // yes, this is an error
    static_assert(local_max<C,"can't go that fast");
}
The compiler does not know the value of speed at compile time. It makes sense that it cannot evaluate speed < C at compile time. Hence, a compile time error is expected when processing the line
static_assert(speed<C,"can't go that fast");
The language does not guarantee that floating point expressions are evaluated at compile time. Some compilers might support it, but that's not to be relied upon.
Even though the values of the floating point variables are "constants" to a human reader, they are not necessarily evaluated at compile time. The error message from the compiler from the link you provided makes it clear.
static_assert expression is not an integral constant expression
You'll have to find a way to do the comparison using integral expressions. However, that seems to be a moot point. I suspect, what you really want to do is make sure that speed is within a certain limit. That makes sense only as a run time check.
I have the following code (CPU Atmel AVR ATmega64A):
#define UO_ADC1023 265
#define UREF_180V (1023*180/UO_ADC1023)
....
if(ADC > UREF_180V) {do_something();}
This should evaluate UREF_180V as 694.87..., and then this value should be rounded (better) to 695 or truncated (worse) to 694 before being compared to the ADC register.
However, I get an integer overflow warning at compile time. From this I suppose the compiler generates code that calculates (1023*180/UO_ADC1023) at run time, which is very bad in my case.
I'd like to avoid calculating those constants myself (#define UREF_180V 695; in that case I could be sure they are really literals) so the code stays flexible and readable. I'd also like to be able to check those computed values after compilation.
So the questions are:
Is there any possibility to force GCC compiler to calculate such constants at compile time?
How to check this calculated value?
int on AVR is 16-bit. Tell the compiler to use long instead (since 1023 * 180 will overflow).
#define UREF_180V (1023L * 180 / UO_ADC1023)
Macros are inserted at the place of invocation, where their content is then compiled.
In C++11 you can evaluate expressions at compile time with constexpr, like this:
constexpr auto UREF_180V = 1023*180/UO_ADC1023;
Because all of the numbers are ints, the result of this is 694. To round it properly you would have to make one of the values floating point and create a constexpr rounding function, which can be invoked at compile time.
As for checking the number you could use static_assert(695 == UREF_180V, "");.
To compile C++11 code, add -std=c++11 to your compiler options. (You probably have to switch to a C++ project, and I'm not entirely certain AtmelStudio supports C++11; if not, I'm sorry.)
see the -fmerge-all-constants option, -fgcse, -fsee, or better, the optimization options page: https://gcc.gnu.org/onlinedocs/gcc-4.2.2/gcc/Optimize-Options.html
nonetheless, the integer overflow can be a symptom of a semantic error in your code, as welternsturm mentioned
Changing the parens to curly braces seems to produce the exact same behavior in my program, even though semantically they seem to be quite different beasts. Is there a reason (memory usage, performance, etc.) to prefer one?
double pie = 3.14159;
myVal = int(pie); // type conversion using operator()
myVal = int{pie}; // uniform initialization syntax
[edit]
My actual code is a little different from the above example, perhaps that explains the narrowing issues:
int32_t result;
myVal = uint16_t(result); // myVal is between 0 and 65535
myVal = uint16_t{result}; // myVal is between 0 and 65535
First note that what you are doing there is not initialization; it is a type conversion followed by an assignment. I strongly recommend the C++ casting operators (static_cast in this case) over C casts and these constructor-style casts.
That said, the main difference between uniform initialization and the others is that uniform initialization doesn't allow (see the note) narrowing conversions such as the ones you are doing, double to int. This is helpful when writing constants or initializing variables, since initializing an int with 3.141592654 makes no sense at all because the fractional part will be stripped out.
NOTE: I remember the initial proposal for uniform initialization explicitly stating that it disallows narrowing conversions, so if I have understood it correctly, code like yours should not compile.
I have tested it, and it seems compilers emit warnings about the narrowing conversions instead of aborting compilation. Those warnings are useful too, and you can always use the -Werror flag.
I have the following std::begin wrappers around Eigen3 matrices:
namespace std {
template<class T, int nd> auto begin(Eigen::Matrix<T,nd,1>& v)
-> decltype(v.data()) { return v.data(); }
}
Substitution fails, and I get a compiler error (error: no matching function for call to 'begin'). For this overload, clang outputs the following:
.../file:line:char note: candidate template ignored:
substitution failure [with T = double, nd = 4]
template<class T, int nd> auto begin(Eigen::Matrix<T,nd,1>& v)
^
I want this overload to be selected. I am expecting the types to be double and int, i.e. they are deduced as I want them to be deduced (and hopefully correctly). By looking at the function, I don't see anything that can actually fail.
Every now and then I get similar errors. Here, clang tells me: substitution failure, I'm not putting this function into the overload resolution set. However, this does not help me debug at all. Why did substitution fail? What exactly couldn't be substituted, and where? The only thing obvious to me is that the compiler knows, but it is deliberately not telling me :(
Is it possible to force clang to tell me what did exactly fail here?
This function is trivial and I'm having problems. In more complex functions, I guess things can only get worse. How do you go about debugging these kind of errors?
You can debug substitution failures by doing the substitution yourself into a cut'n'paste of the original template and seeing what errors the compiler spews for the fully specialized code. In this case:
namespace std {
auto begin(Eigen::Matrix<double,4,1>& v)
-> decltype(v.data()) {
typedef double T; // Not necessary in this example,
const int nd = 4; // but define the parameters in general.
return v.data();
}
}
Well, this has been reported as a bug in clang. Unfortunately, the clang devs still don't know the best way to fix it. Until then, you can use gcc, which will report the backtrace, or you can apply this patch to clang 3.4. The patch is a quick hack that turns substitution failures into errors.