Here's my problem, reduced to a minimal test case:
#include <iostream>

struct Foo {
    int i;
    static_assert(sizeof(i) > 1, "Wrong size");
};

static_assert(sizeof(Foo::i) > 1, "Wrong size");

int main () {
    Foo f{42};
    static_assert(sizeof(f.i) > 1, "Wrong size");
    std::cout << f.i << "\n";
}
This works fine on any version of GCC or Clang recent enough to support static_assert. But on MSVC 2015, the first static_assert gives me a compile error:
static-assert.cpp(4): error C2327: 'Foo::i': is not a type name, static, or enumerator
static-assert.cpp(4): error C2065: 'i': undeclared identifier
static-assert.cpp(4): error C2338: Wrong size
The other two asserts work as expected, if I remove the first one. I also tried it on VS2013 with the same results. The documentation for C2327 talks about nested class member access, which doesn't seem relevant in any way I can see. What's going on here? Which compiler is right?
(Edited to add a third assert to make the problem clearer.)
Further edit: It doesn't actually seem to have anything to do with static_assert, because this fails with the same error:
struct Foo {
    int i;
    char array[sizeof(i)];
};
Again, this works fine in other compilers.
I had a similar issue, and the problem was that MSVC 2013 does not support using a non-static data member like this in a constant expression. Both

    static_assert(sizeof(i) > 1, "Wrong size");

and

    char array[sizeof(i)];

require a constant expression, which this compiler cannot produce here.
This is valid in GCC, though.
Interestingly, sizeof (decltype(i)) works in VS2013.
Related
I have been debugging a strange issue in the past hours that only occurred in a release build (-O3) but not in a debug build (-g and no optimizations). Finally, I could pin it down to the "count trailing zeroes" builtin giving me wrong results, and now I wonder whether I just found a GCC bug or whether I'm missing something.
The short story is that apparently, GCC evaluates __builtin_ctz wrongly with -O2 and -O3 in some situations, but it does fine with no optimizations or -O1. The same applies to the long variants __builtin_ctzl and __builtin_ctzll.
My initial assumption is that __builtin_ctz(0) should resolve to 32, because it is the unsigned int (32-bit) version of the builtin and thus there are 32 trailing zero bits. I have not found anything stating that these builtins are undefined for the input being zero, and practical work with them has me convinced that they are not.
Let's have a look at the code I'd like to talk about now:
#include <iostream>

bool test_basic;
bool test_ctz;
bool test_result;

int ctz(const unsigned int x) {
    const int q = __builtin_ctz(x);
    test_ctz = (q == 32);
    return q;
}

int main(int argc, char** argv) {
    {
        const int q = __builtin_ctz(0U);
        test_basic = (q == 32);
    }
    {
        const int q = ctz(0U);
        test_result = (q == 32);
    }
    std::cout << "test_basic=" << test_basic << std::endl;
    std::cout << "test_ctz=" << test_ctz << std::endl;
    std::cout << "test_result=" << test_result << std::endl;
}
The code basically does three tests, storing the results in those boolean values:
test_basic is true if __builtin_ctz(0U) resolves to 32.
test_ctz is true if __builtin_ctz(x) equals 32 within the function ctz.
test_result is true if the result of ctz(0) equals 32.
Because I call ctz once in my main function and pass zero to it, I expect all three bools to be true by the end of the program. This actually is the case if I compile it without any optimizations or -O1. However, when I compile it with -O2, test_ctz becomes false. I consulted the Compiler Explorer to find out what the hell is going on. (Note that I am using g++ 7.5 myself, but I could reproduce this with any later version as well. In the Compiler Explorer, I picked the latest it has to offer, which is 10.2.)
Let's have a look at the code compiled with -O1 first. I see that test_ctz is simply set to 1. I guess that's because these builtins are treated as constexpr and the whole rather simple function ctz is evaluated at compile-time. The result is correct (under my initial assumption) and so I'm fine with that.
So what could possibly go wrong from here? Well, let's look at the code compiled with -O2. Nothing much has changed, just that test_ctz is now set to 0! And that's that, beyond any logic: the compiler apparently evaluates q == 32 to being false, but then q is returned from the function and we compare that against 32, and suddenly it's true (test_result). I have no explanation for this. Am I missing something? Have I found some demonical GCC bug?
It gets even funnier if you printf the value of q just before test_ctz is set: the console then prints 32, so the computation actually works as expected - at runtime. Yet at compile-time, the compiler thinks q is not 32 and test_ctz is forced to false. Indeed, if I change the declaration of q from const int to volatile int and thus force the computation at runtime, everything works as expected, so luckily there's a simple workaround.
To conclude, I'd like to note that I also use the "count leading zeroes" builtins (__builtin_clz and long versions) and I could not observe the same problem there; they work just fine.
I have not found anything stating that these builtins are undefined for the input being zero
How could you have missed it? From the GCC online docs, Other Built-in Functions:
Built-in Function: int __builtin_ctz (unsigned int x)
Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined.
So what could possibly go wrong from here?
Code behaving differently at different optimization levels is, in 99% of cases, a clear indication of undefined behavior in your code. Here the compiler's optimizer simply makes different decisions than the hardware instruction would. If the compiler generates BSF on x86, the result is still undefined for a zero input; from the Intel manual: "If the content of the source operand is 0, the content of the destination operand is undefined." There is also TZCNT, for which "TZCNT will produce the operand size when the input operand is zero", which perhaps better explains the runtime behavior of your code.
Am I missing something?
Yes. You are missing that __builtin_ctz(0) is undefined.
Have I found some demonical GCC bug?
No.
I'd like to note that I also use the "count leading zeroes" builtins (__builtin_clz and long versions) and I could not observe the same problem there; they work just fine.
Can be seen in gcc docs that __builtin_clz(0) is also undefined behavior.
In the code listed below, "LambdaTest" fails with the following error on Clang only:
shared/LambdaTest.cpp:8:31: error: variable 'array' with variably
modified type cannot be captured in a lambda expression
auto myLambdaFunction = [&array]()
^
shared/LambdaTest.cpp:7:9: note: 'array' declared here
int array[length];
The function "LambdaTest2" which passes the array as a parameter instead of capturing compiles fine on G++/Clang.
// Compiles with G++ but fails in Clang
void LambdaTest(int length)
{
    int array[length];
    auto myLambdaFunction = [&array]()
    {
        array[0] = 2;
    };
    myLambdaFunction();
}

// Compiles OK with G++ and Clang
void LambdaTest2(int length)
{
    int array[length];
    auto myLambdaFunction = [](int* myarray)
    {
        myarray[0] = 2;
    };
    myLambdaFunction(array);
}
Two questions:
What does the compiler error message "variable 'array' with variably modified type cannot be captured in a lambda expression" mean?
Why does LambdaTest fail to compile on Clang and not G++?
Thanks in advance.
COMPILER VERSIONS:
G++ version 4.6.3
clang version 3.5.0.210790
int array[length]; is not allowed in Standard C++. The dimension of an array must be known at compile-time.
What you are seeing is typical for non-standard features: they conflict with standard features at some point. The reason this isn't standard is because nobody has been able to make a satisfactory proposal that resolves those conflicts. Instead, each compiler has done their own thing.
You will have to either stop using the non-standard feature, or live with what a compiler happens to do with it.
VLA (Variable-length array) is not officially supported in C++.
You can instead use std::vector like so:
void LambdaTest(int length)
{
    std::vector<int> array(length);
    auto myLambdaFunction = [&array]()
    {
        array[0] = 2;
    };
    myLambdaFunction();
}
Thanks to both answers above for pointing out that VLAs are non-standard. Now I know what to search for.
Here are more related links on the subject.
Why aren't variable-length arrays part of the C++ standard?
Why no VLAS in C++
In a very simple situation with a constrained constructor, testing for convertibility of the argument, an error is produced in clang, but not in g++:
#include <type_traits>

template <class T, class U>
constexpr bool Convertible = std::is_convertible<T,U>::value && std::is_convertible<U,T>::value;

template <class T>
struct A
{
    template <class S, class = std::enable_if_t<Convertible<S,T>> >
    A(S const&) {}
};

int main()
{
    A<double> s = 1.0;
}
Maybe this issue is related to Is clang's c++11 support reliable?
The error clang gives, reads:
error: no member named 'value' in 'std::is_convertible<double, A<double> >'
constexpr bool Convertible = std::is_convertible<T,U>::value && std::is_convertible<U,T>::value;
~~~~~~~~~~~~~~~~~~~~~~~~~~^
I've tried
g++-5.4, g++-6.2 (no error)
clang++-3.5, clang++-3.8, clang++-3.9 (error)
with argument -std=c++1y and for clang either with -stdlib=libstdc++ or -stdlib=libc++.
Which compiler is correct? Is it a bug in clang or gcc? Or is the behavior for some reasons undefined and thus both compilers correct?
First of all, note that it works fine if you use:
A<double> s{1.0};
Instead, the error comes from the fact that you are doing this:
A<double> s = 1.0;
Consider the line below (extracted from the definition of Convertible):
std::is_convertible<U,T>::value
In your case, this is seen as it follows (once substitution has been performed):
std::is_convertible<double, A<double>>::value
The compiler says this clearly in the error message.
This is because a temporary A<double> is constructed from 1.0, then it is assigned to s.
Note that in your class template you have defined a (more or less) catch-all constructor, so a const A<double> & is accepted as well.
Moreover, remember that a temporary binds to a const reference.
That said, the error happens because in the context of the std::enable_if_t we have that A<double> is an incomplete type and from the standard we have this for std::is_convertible:
From and To shall be complete types [...]
See here for the working draft.
Because of that, I would say the behavior is undefined.
As a suggestion, you don't need to use std::enable_if_t in this case.
You don't have a set of functions from which to pick the best one up in your example.
A static_assert is just fine and error messages are nicer:
template <class S>
A(S const&) { static_assert(Convertible<S,T>, "!"); }
Here's a short example that reproduces this "no viable conversion" (an error for clang, but valid for g++) difference in compiler behavior.
#include <cstdlib>
#include <iostream>

struct A {
    int i;
};

#ifndef UNSCREW_CLANG
using cast_type = const A;
#else
using cast_type = A;
#endif

struct B {
    operator cast_type () const {
        return A{i};
    }
    int i;
};

int main () {
    A a{0};
    B b{1};
#ifndef CLANG_WORKAROUND
    a = b;
#else
    a = b.operator cast_type ();
#endif
    std::cout << a.i << std::endl;
    return EXIT_SUCCESS;
}
live at godbolt's
g++ (4.9, 5.2) compiles that silently; whereas clang++ (3.5, 3.7) compiles it
if
using cast_type = A;
or
using cast_type = const A;
// [...]
a = b.operator cast_type ();
are used,
but not with the defaulted
using cast_type = const A;
// [...]
a = b;
In that case clang++ (3.5) blames a = b:
testling.c++:25:9: error: no viable conversion from 'B' to 'A'
a = b;
^
testling.c++:3:8: note: candidate constructor (the implicit copy constructor)
not viable:
no known conversion from 'B' to 'const A &' for 1st argument
struct A {
^
testling.c++:3:8: note: candidate constructor (the implicit move constructor)
not viable:
no known conversion from 'B' to 'A &&' for 1st argument
struct A {
^
testling.c++:14:5: note: candidate function
operator cast_type () const {
^
testling.c++:3:8: note: passing argument to parameter here
struct A {
With reference to the 2011¹ standard: Is clang++ right about rejecting the defaulted code or is g++ right about accepting it?
Nota bene: This is not a question about whether that const qualifier on the cast_type makes sense. This is about which compiler works standard-compliant and only about that.
¹ 2014 should not make a difference here.
EDIT:
Please refrain from re-tagging this with the generic c++ tag.
I'd first like to know which behavior complies with the 2011 standard, and leave the committee's dedication not to break existing (pre-2011) code out of consideration for now.
So it looks like this is covered by this clang bug report rvalue overload hides the const lvalue one? which has the following example:
struct A{};
struct B{operator const A()const;};
void f(A const&);
#ifdef ERR
void f(A&&);
#endif
int main(){
B a;
f(a);
}
which fails with the same error as the OP's code. Richard Smith toward the end says:
Update: we're correct to choose 'f(A&&)', but we're wrong to reject
the initialization of the parameter. Further reduced:
struct A {};
struct B { operator const A(); } b;
A &&a = b;
Here, [dcl.init.ref]p5 bullet 2 bullet 1 bullet 2 does not apply,
because [over.match.ref]p1 finds no candidate conversion functions,
because "A" is not reference-compatible with "const A". So we fall
into [dcl.init.ref]p5 bullet 2 bullet 2, and copy-initialize a
temporary of type A from 'b', and bind the reference to that. I'm not
sure where in that process we go wrong.
but then comes back with another comment due to a defect report 1604:
DR1604 changed the rules so that
A &&a = b;
is now ill-formed. So we're now correct to reject the initialization.
But this is still a terrible answer; I've prodded CWG again. We should
probably discard f(A&&) during overload resolution.
So it seems like clang is technically doing the right thing based on the standard language today but it may change since there seems to be disagreement at least from the clang team that this is the correct outcome. So presumably this will result in a defect report and we will have to wait till it is resolved before we can come to a final conclusion.
Update
Looks like defect report 2077 was filed based on this issue.
Why am I getting a warning about initialization in one case, but not the other? The code is in a C++ source file, and I am using GCC 4.7 with -std=c++11.
struct sigaction old_handler, new_handler;
The above doesn't produce any warnings with -Wall and -Wextra.
struct sigaction old_handler={}, new_handler={};
struct sigaction old_handler={0}, new_handler={0};
The above produces warnings:
warning: missing initializer for member ‘sigaction::__sigaction_handler’ [-Wmissing-field-initializers]
warning: missing initializer for member ‘sigaction::sa_mask’ [-Wmissing-field-initializers]
warning: missing initializer for member ‘sigaction::sa_flags’ [-Wmissing-field-initializers]
warning: missing initializer for member ‘sigaction::sa_restorer’ [-Wmissing-field-initializers]
I've read through How should I properly initialize a C struct from C++?, Why is the compiler throwing this warning: "missing initializer"? Isn't the structure initialized?, and bug reports like Bug 36750. Summary: -Wmissing-field-initializers relaxation request. I don't understand why the uninitialized struct is not generating a warning, while the initialized struct is generating a warning.
Why are the uninitialized structs not generating a warning, and why are the initialized structs generating one?
Here is a simple example:
#include <iostream>

struct S {
    int a;
    int b;
};

int main() {
    S s { 1 }; // b will be automatically set to 0
               // and that's probably(?) not what you want
    std::cout << "s.a = " << s.a << ", s.b = " << s.b << std::endl;
}
It gives the warning:
missing.cpp: In function ‘int main()’:
missing.cpp:9:11: warning: missing initializer for member 'S::b' [-Wmissing-field-initializers]
The program prints:
s.a = 1, s.b = 0
The warning is just a reminder from the compiler that S has two members but you only explicitly initialized one of them, the other will be set to zero. If that's what you want, you can safely ignore that warning.
In such a simple example, it looks silly and annoying; if your struct has many members, then this warning can be helpful (catching bugs: miscounting the number of fields or typos).
Why is the uninitialized struct not generating a warning?
I guess it would simply generate too many warnings. After all, it is legal, and it is only a bug if you use the uninitialized members. For example:
int main() {
    S s;
    std::cout << "s.a = " << s.a << ", s.b = " << s.b << std::endl;
}
missing.cpp: In function ‘int main()’:
missing.cpp:10:43: warning: ‘s.S::b’ is used uninitialized in this function [-Wuninitialized]
missing.cpp:10:26: warning: ‘s.S::a’ is used uninitialized in this function [-Wuninitialized]
Even though it did not warn me about the uninitialized members of s, it did warn me about using the uninitialized fields. All is fine.
Why are the initialized structs generating a warning?
It warns you only if you explicitly but partially initialize the fields. It is a reminder that the struct has more fields than you enumerated. In my opinion, it is questionable how useful this warning is: it can indeed generate too many false alarms. Well, it is not on by default for a reason...
That's a defective warning. You did initialize all the members, but you just didn't have the initializers for each member separately appear in the code.
Just ignore that warning if you know what you are doing. I regularly get such warnings too, and I'm upset regularly. But there's nothing I can do about it but to ignore it.
Why is the uninitialized struct not giving a warning? I don't know, but most probably that is because you didn't try to initialize anything. So GCC has no reason to believe that you made a mistake in doing the initialization.
You're solving the symptom but not the problem. Per my copy of "Advanced Programming in the UNIX Environment, Second Edition" in section 10.15:
Note that we must use sigemptyset() to initialize the sa_mask member of the structure. We're not guaranteed that act.sa_mask = 0 does the same thing.
So, yes, you can silence the warning, and no this isn't how you initialize a struct sigaction.
The compiler warns that not all members are initialized when you initialize the struct. There is nothing to warn about when merely declaring an uninitialized struct. You should get the same warnings when you (partially) initialize the previously uninitialized structs:

    struct sigaction old_handler, new_handler;
    old_handler = {};
    new_handler = {};

So, that's the difference: your code that doesn't produce the warning is not an initialization at all. Why GCC warns about a zero-initialized struct at all is beyond me.
The only way to prevent that warning (or error, if you or your organization treats warnings as errors via -Werror) is to memset() the struct to an initial value. For example:
#include <stdio.h>
#include <memory.h>

typedef struct {
    int a;
    int b;
    char c[12];
} testtype_t;

int main(int argc, char* argv[]) {
    testtype_t type1;
    memset(&type1, 0, sizeof(testtype_t));
    printf("%d, %s, %d\n", argc, argv[0], type1.a);
    return 0;
}
It is not very clean; however, it seems that for the GCC maintainers there is only one way to initialize a struct, and code beauty is not at the top of their list.