static_assert fails when test condition includes constants defined with const? - c++11

I'm reading Bjarne Stroustrup's book, "The C++ Programming Language" and I found an example explaining static_assert. What I understood is that static_assert only works with things that can be expressed by constant expressions. In other words, it must not include an expression that's meant to be evaluated at runtime.
The following example was used in the book (I made some changes to the code, but I don't think they change anything that the original example would produce.)
#include <iostream>
using namespace std;

void f (double speed)
{
    constexpr double C = 299792.468;
    const double local_max = 160.0/(60*60);
    static_assert(local_max<C,"can't go that fast");
}

int main()
{
    f(3.25);
    cout << "Reached here!";
    return 0;
}
The above gives a compile error. Here it is compiled on ideone: http://ideone.com/C97oF5
The exact code from the book example:
constexpr double C = 299792.458;

void f(double speed)
{
    const double local_max = 160.0/(60*60);
    static_assert(speed<C,"can't go that fast"); // yes this is error
    static_assert(local_max<C,"can't go that fast");
}

The compiler does not know the value of speed at compile time. It makes sense that it cannot evaluate speed < C at compile time. Hence, a compile time error is expected when processing the line
static_assert(speed<C,"can't go that fast");
The second assertion fails for a different reason: a plain const variable is only usable in a constant expression when it has integral or enumeration type and is initialized with a constant expression. A const double such as local_max does not qualify, so even though its value is a "constant" to a human reader, the compiler does not treat it as one. The error message from the compiler in the link you provided makes that clear:
static_assert expression is not an integral constant expression
Declaring local_max as constexpr (or doing the comparison with integral constants) fixes the assertion. However, that seems to be a moot point: I suspect what you really want is to make sure that speed is within a certain limit, and that only makes sense as a run-time check.
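A minimal sketch of that fix, using the book's value for C and switching local_max to constexpr:

#include <iostream>

void f(double speed)
{
    constexpr double C = 299792.458;                // km/s
    constexpr double local_max = 160.0/(60*60);     // constexpr, so usable in a constant expression
    static_assert(local_max < C, "can't go that fast");  // now compiles

    // Checking the runtime argument still has to happen at run time.
    if (speed >= C)
        std::cout << "can't go that fast\n";
}

int main()
{
    f(3.25);
    std::cout << "Reached here!";
    return 0;
}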

Related

How to debug if a constexpr function doesn't run on compile time?

For example, I have a constexpr function, but I use a runtime variable (not marked as constexpr) to take the return value. In this case I'm not sure whether the function runs at compile time or at runtime, so is there any way to debug this?
At first I thought about static_assert, but it looks like static_assert cannot do this. Then I thought about converting the code to assembly, but it is way too difficult to dig through the assembly code to figure it out.
Before C++20 there is no way to directly handle it from the program itself.
With C++20 you have std::is_constant_evaluated.
If the return type of your constexpr function is a valid non-type template parameter type, you can force your function to be evaluated at compile time like this:
constexpr int func( int x )
{
    return x*2;
}

template < auto x >
auto force_constexpr_evaluation()
{
    return x;
}

int main()
{
    int y = force_constexpr_evaluation<func(99)>();
}
If you are using C++20 already, you can directly force compile-time evaluation by using consteval.
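A rough C++20 sketch of both tools (the function names doubled and tagged are made up for illustration):

#include <type_traits>

consteval int doubled(int x)     // consteval: a run-time call would be a compile error
{
    return x * 2;
}

constexpr int tagged(int x)
{
    if (std::is_constant_evaluated())
        return x * 2;            // branch taken during constant evaluation
    return -(x * 2);             // branch taken for a plain run-time call
}

int main()
{
    constexpr int a = doubled(21);                    // guaranteed compile-time evaluation
    static_assert(tagged(21) == 42, "constant-evaluated branch");
    int b = tagged(21);                               // run-time call, yields -42
    return a + b;
}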
Debugging at the assembly level should not be that hard.
If you see a function call to your constexpr function, it is running at run time.
If you directly see the computed value, it was evaluated at compile time.
If it is inlined, you should be able to detect that via the debug symbols, which associate the function name with the location of the inlined code. Typically, if you set a breakpoint on the constexpr function and it is not always evaluated at compile time but is inlined, you get several breakpoint locations rather than a single one; and even if there is only one, it points to the inlined position.
BTW: it is not possible to back-port std::is_constant_evaluated to older compilers, as it needs some implementation magic.

Initialisation of a std::string seems to generate the same code no matter which initialisation I use (uniform, assignment, etc)

I was experimenting with C++ Insights and I got a result that was surprising to me and which I don't really understand. I was hoping someone could provide some illumination.
Given this code snippet:
#include <string>

int main()
{
    std::string a {"123"};
    std::string b = {"123"};
    std::string c = "123";
    std::string d("123");
}
I was at least expecting a to be initialised differently to c. With c I was expecting some sort of temporary string to be copied and for a I was expecting just the constructor to be called directly.
Here is the link to c++ insights: here (You have to press the "play" button).
Every one of the different ways to initialise a string is the same. This really surprised me. I started with c++17 and then switched to c++11, which produced something more like what I was expecting.
Does this mean that all of these initialisation forms are now the same in C++17? Is there a name for this? I thought uniform initialisation was only the one with the curly braces {}, so is everything now uniform initialisation?
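For illustration (the Tracer type below is made up, not from the question): with C++17's guaranteed copy elision, none of these forms materialise a temporary or call a copy constructor, which is consistent with all four transforming to the same constructor call.

#include <iostream>

struct Tracer {
    Tracer(const char*)   { std::cout << "ctor\n"; }
    Tracer(const Tracer&) { std::cout << "copy\n"; }
};

int main()
{
    Tracer a {"123"};    // direct-list-initialisation
    Tracer b = {"123"};  // copy-list-initialisation
    Tracer c = "123";    // copy-initialisation
    Tracer d("123");     // direct-initialisation
    // Prints "ctor" four times and never "copy".
}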

using std::max causes a link error?

Consider a library that defines in a header file
struct Proj {
struct Depth {
static constexpr unsigned Width = 10u;
static constexpr unsigned Height = 10u;
};
struct Video {
static constexpr unsigned Width = 10u;
static constexpr unsigned Height = 10u;
};
};
The library gets compiled, and I'm now developing an application that links against it. For a long time I thought it was some kind of visibility problem, but adding B_EXPORT (standard visibility stuff from CMake) everywhere changed nothing, and I finally found the actual problem.
template <class Projection>
struct SomeApplication {
    SomeApplication() {
        unsigned height = std::max( // or std::max<unsigned>
            Projection::Depth::Height,
            Projection::Video::Height
        );
    }
};
I can't even reproduce the problem with a small dummy library / sample application. But using std::max in the application causes the link error, whereas if I just do it myself
unsigned height = Projection::Depth::Height;
if (height < Projection::Video::Height)
    height = Projection::Video::Height;
Everything works out. In other words, there don't appear to be any visibility issues with simply using Projection::XXX.
Any thoughts on what could possibly cause this? This is on OSX, so this doesn't even apply.
The problem is that Width and Height are declared, not defined in your structs. Effectively, this means there is no storage allocated for them.
Now recall the signature for std::max:
template<typename T>
const T& max(const T&, const T&);
Note the references: this means the addresses of the arguments are to be taken. But, since Width and Height are only declared, they don't have any storage! Hence the linker error.
Now let's consider the rest of your question.
Your hand-written max works because you never take any pointers or references to the variables.
You might be unable to reproduce this on a toy example because, depending on the optimization level in particular, a sufficiently smart compiler might evaluate max at compile time and save the trouble of taking the addresses at runtime. For instance, compare the disasm for no optimization vs -O2 for gcc 7.2: the evaluation is indeed done at compile-time!
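For what it's worth, a minimal pre-C++17 reproduction along those lines (hypothetical names, built without optimization, e.g. g++ -std=c++14 -O0) does typically show the undefined-reference error:

#include <algorithm>

struct S {
    static constexpr unsigned A = 1u;  // declarations only, before C++17
    static constexpr unsigned B = 2u;
};

int main()
{
    // std::max binds const references to S::A and S::B, which ODR-uses them;
    // without out-of-line definitions (constexpr unsigned S::A; etc.) the link
    // typically fails when the call is not folded away.
    return static_cast<int>(std::max(S::A, S::B));
}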
As for the fix, it depends. You have several options, to name a few:
Use constexpr getter functions instead of variables. In this case the values they return behave like temporaries, so the references can bind to them (and the compiler will surely optimize that away). This is the approach I'd suggest in general; there is a sketch of it after this list.
Use namespaces instead of structs. In this case the variables will also be defined, not just declared. The caveat is that you might get duplicate-symbol errors if they are used in more than one translation unit; the fix for that only comes in the form of C++17 inline variables.
...speaking of which, C++17 also changes the rules for constexpr static member variables (they become implicitly inline), so you won't get this error if you simply switch to that standard.
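A minimal sketch of the first option, reusing the names from the question (wiring SomeApplication up to Proj directly in main just for illustration):

#include <algorithm>

struct Proj {
    struct Depth {
        static constexpr unsigned Width()  { return 10u; }
        static constexpr unsigned Height() { return 10u; }
    };
    struct Video {
        static constexpr unsigned Width()  { return 10u; }
        static constexpr unsigned Height() { return 10u; }
    };
};

template <class Projection>
struct SomeApplication {
    SomeApplication() {
        // The getters return prvalues, so std::max binds its references to
        // temporaries instead of ODR-using data members that lack definitions.
        unsigned height = std::max(Projection::Depth::Height(),
                                   Projection::Video::Height());
        (void)height;
    }
};

int main() { SomeApplication<Proj> app; (void)app; }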

Uniform initialization syntax or type conversion?

Changing the parens to curly braces seems to produce the exact same behavior in my program, even though semantically they seem to be quite different beasts. Is there a reason (memory usage, performance, etc.) to prefer one?
double pie = 3.14159;
myVal = int(pie); // type conversion using operator()
myVal = int{pie}; // uniform initialization syntax
[edit]
My actual code is a little different from the above example, perhaps that explains the narrowing issues:
int32_t result;
myVal = uint16_t(result); // myVal is between 0 and 65535
myVal = uint16_t{result}; // myVal is between 0 and 65535
First note that what you are doing there is not initialization; it is a type conversion followed by an assignment. I strongly recommend the C++ cast operators (static_cast in this case) over C-style casts and these function-style casts.
That said, the main difference between uniform initialization and the other forms is that uniform initialization doesn't allow (see the note) narrowing conversions such as the one you are doing, double to int. This is helpful when writing constants or initializing variables, since initializing an int with 3.141592654 makes no sense at all: the fractional part will be stripped off.
NOTE: I remember the initial proposal for uniform initialization explicitly stating that it disallows narrowing conversions, so if I understood it correctly, code like yours should not compile.
I have tested it, and it seems that compilers emit warnings about the narrowing conversions instead of aborting compilation. Those warnings are useful too, and you can always use the -Werror flag.
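A small sketch of that recommendation, using the values from the question (variable names are just for illustration):

#include <cstdint>
#include <iostream>

int main()
{
    double pie = 3.14159;
    int whole = static_cast<int>(pie);   // explicit, easy-to-grep conversion

    std::int32_t result = 70000;
    auto low16 = static_cast<std::uint16_t>(result);  // intent visible at the call site
    // std::uint16_t bad{result};        // brace-init flags this narrowing conversion

    std::cout << whole << ' ' << low16 << '\n';  // prints "3 4464"
}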

How can I get more information from clang substitution-failure errors?

I have the following std::begin wrappers around Eigen3 matrices:
namespace std {
    template<class T, int nd> auto begin(Eigen::Matrix<T,nd,1>& v)
        -> decltype(v.data()) { return v.data(); }
}
Substitution fails, and I get a compiler error (error: no matching function for call to 'begin'). For this overload, clang outputs the following:
.../file:line:char note: candidate template ignored:
substitution failure [with T = double, nd = 4]
template<class T, int nd> auto begin(Eigen::Matrix<T,nd,1>& v)
^
I want this overload to be selected. I am expecting the types to be double and int, i.e. they are deduced as I want them to be deduced (and hopefully correctly). By looking at the function, I don't see anything that can actually fail.
Every now and then I get similar errors. Here, clang tells me: substitution failure, so I'm not putting this function into the overload resolution set. However, this does not help me debug at all. Why did substitution fail? What exactly couldn't be substituted, and where? The only thing obvious to me is that the compiler knows, but it is deliberately not telling me :(
Is it possible to force clang to tell me what exactly failed here?
This function is trivial and I'm already having problems; in more complex functions, I guess things can only get worse. How do you go about debugging this kind of error?
You can debug substitution failures by doing the substitution yourself into a cut'n'paste of the original template and seeing what errors the compiler spews for the fully specialized code. In this case:
namespace std {
    auto begin(Eigen::Matrix<double,4,1>& v)
    -> decltype(v.data()) {
        typedef double T; // Not necessary in this example,
        const int nd = 4; // but define the parameters in general.
        return v.data();
    }
}
Well, this has been reported as a bug in clang. Unfortunately, the clang devs still don't know the best way to fix it. Until then, you can use gcc, which will report the backtrace, or you can apply this patch to clang 3.4. The patch is a quick hack that turns substitution failures into errors.
