class A
{
    virtual void foo();
};

class B : public A
{
    void foo() final;
};
Quote from C++11 standard § 9.2/8:
A virt-specifier-seq shall contain at most one of each virt-specifier. A virt-specifier-seq shall appear only in the declaration of a virtual member function (10.3)
The virt-specifiers are final and override.
Function foo in class B is a virtual member function (even though it is not declared virtual in B), because it overrides A::foo. So its declaration is legal according to the quote above.
But what about the following case:
class C
{
    virtual void bar() final;
};
According to the C++11 standard, class C should compile, even though the virtual and final keywords seem contradictory.
That is why §9.2/8 of the C++11 standard confuses me a little; it does not seem precise enough. I don't even know whether this really compiles, and whether its behaviour is well defined.
"though the virtual and final keywords are contrary". No. The statement you have quoted says only that you cannot use multiple override or multiple final keywords in the same declaration. virtual itself is optional and redundant if any of them are given. The virtual keyword in C as well as in B is optional, because the base class already declares that method virtual. A final method is always virtual, too. It is not useful (and probably illegal - but not sure about the standard) to use final on something that is not an overridden method from the base class.
In previous versions of the C++ standard, final and override did not exist, so it was customary to declare overrides virtual for readability. Now you have override, which not only makes it obvious that this is an override, but also generates a compiler error if it is not (e.g. because the method name in the overriding class has a typo). For backward compatibility, declaring virtual and override was kept legal.
virtual void bar() final;
                   ^^^^^
That is the virt-specifier-seq. It contains final, which is a virt-specifier, and there is one of them.
This is perfectly legal. As is this:
virtual void bar() final override;
                   ^^^^^^^^^^^^^^
It is in the declaration of a virtual member function, and contains at most one of each virt-specifier.
What is illegal is
virtual void bar() final final;
                   ^^^^^^^^^^^
Here the virt-specifier-seq contains final twice, which violates the rule "at most one of each virt-specifier".
#include <cstdio> // std::printf (the original <string> include was not needed)

struct T1 {
    int _mem1;
    int _mem2;
    T1() = default;
    T1(int mem2) : T1() { _mem2 = mem2; }
};

T1 getT1() { return T1(); }
T1 getT1(int mem2) { return T1(mem2); }

int main() {
    volatile T1 a = T1();
    std::printf("a._mem1=%d a._mem2=%d\n", a._mem1, a._mem2);
    volatile T1 b = T1(1);
    std::printf("b._mem1=%d b._mem2=%d\n", b._mem1, b._mem2);
    // Temporarily disabled
    if (false) {
        volatile T1 c = getT1();
        std::printf("c._mem1=%d c._mem2=%d\n", c._mem1, c._mem2);
        volatile T1 d = getT1(1);
        std::printf("d._mem1=%d d._mem2=%d\n", d._mem1, d._mem2);
    }
}
When I compile this with GCC 5.4, I get the following output:
g++ -std=c++11 -O3 test.cpp -o test && ./test
a._mem1=0 a._mem2=0
b._mem1=382685824 b._mem2=1
Why does the user-defined constructor, which delegates to the default constructor, fail to set _mem1 to zero for b, while a, which uses the default constructor directly, is zero-initialized?
Valgrind confirms this also:
==12579== Conditional jump or move depends on uninitialised value(s)
==12579== at 0x4E87CE2: vfprintf (vfprintf.c:1631)
==12579== by 0x4E8F898: printf (printf.c:33)
==12579== by 0x4005F3: main (in test)
If I change if (false) to if (true), then the output is as you would expect:
a._mem1=0 a._mem2=0
b._mem1=0 b._mem2=1
c._mem1=0 c._mem2=0
d._mem1=0 d._mem2=1
What is the compiler doing?
Short answer: for trivial types, the two distinct forms of "default construction" lead to two different initializations (see the sketch after this list):
T a; in which case the object is default-initialized. Its value is indeterminate and undefined behavior will soon happen (this is how b._mem1 is initialized and why Valgrind detects an error).
T a = T(); in which case the object is value-initialized and its entire memory is zeroed (this is what happens to a._mem1 and a._mem2).
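A minimal sketch of the distinction, using a hypothetical POD struct (names invented for illustration):

#include <cstdio>

struct Pod { int x; }; // trivial default constructor: performs no action

int main() {
    Pod a;         // default-initialized: a.x holds an indeterminate value
    Pod b = Pod(); // value-initialized: b.x is zeroed
    Pod c{};       // also value-initialized (C++11 list-initialization)
    std::printf("b.x=%d c.x=%d\n", b.x, c.x); // reading a.x here would be UB
    (void)a;
}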
Long answer: Actually, the default constructor of T1 is not the cause of the zero-initialization of a._mem1. a is first zero-initialized, while b is not, because of a particular rule of the standard that does not apply to b's initializer.
The definition volatile T1 a = T1(); causes a to be value-initialized (1). struct T1 has no user-provided default constructor (2). For such a struct, the entire object is zero-initialized, as stated by this rule of the C++11 standard, [dcl.init]/7.2:
if T is a (possibly cv-qualified) non-union class type without a user-provided constructor, then the object is zero-initialized and, if T's implicitly-declared default constructor is non-trivial, that constructor is called.
There is a subtle difference between C++11 and C++17 that makes the definition volatile T1 b = T1(1); undefined behavior in C++11 but not in C++17. In C++11, b is initialized by copying an object of type T1 that is itself initialized by the expression T1(1). This copy construction evaluates T1(1)._mem1, which holds an indeterminate value, and that is forbidden. In C++17, b is directly initialized from the prvalue expression T1(1).
The evaluation of this indeterminate value inside the printf is also undefined behavior, independently of the C++ standard version. This is why Valgrind complains and why you see inconsistent outputs when you toggle between if (false) and if (true).
(1) Strictly speaking, in C++11 a is copy-constructed from a value-initialized temporary.
(2) T1's default constructor is not user-provided because it is explicitly defaulted on its first declaration.
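Footnote (2) is easy to miss: if the constructor were defaulted outside the class, i.e. not on its first declaration, it would count as user-provided and T() would no longer zero-initialize the members. A minimal sketch (type names invented for illustration):

struct D1 {
    D1() = default; // defaulted on first declaration: not user-provided
    int x;
};

struct D2 {
    D2();           // declared here...
    int x;
};
D2::D2() = default; // ...defaulted later: this one is user-provided

int main() {
    D1 d1{};     // value-initialized: d1.x is zero
    D2 d2{};     // the user-provided ctor runs: d2.x stays indeterminate
    (void)d2;    // reading d2.x would be undefined behavior
    return d1.x; // well-defined: returns 0
}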
Short Answer
The default constructor in your code is trivial, and that kind of constructor performs no action, i.e. it leaves members uninitialized.
Longer answer
Trivial default constructor
The default constructor for class T is trivial (i.e. performs no
action) if all of the following is true:
The constructor is not user-provided (i.e., is implicitly-defined or defaulted on its first declaration)
T has no virtual member functions
T has no virtual base classes
T has no non-static members with default initializers (since C++11)
Every direct base of T has a trivial default constructor
Every non-static member of class type has a trivial default constructor
A trivial default constructor is a constructor that performs no action. All data types compatible with the C language (POD types) are trivially default-constructible. Unlike in C, however, objects with trivial default constructors cannot be created by simply reinterpreting suitably aligned storage, such as memory allocated with std::malloc: placement-new is required to formally introduce a new object and avoid potential undefined behavior.
http://www.enseignement.polytechnique.fr/informatique/INF478/docs/Cpp/en/cpp/language/default_constructor.html
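A minimal sketch of the placement-new point from that quote (using an invented POD type):

#include <cstdlib>
#include <new>

struct Pod { int x; }; // trivially default-constructible

int main() {
    void* storage = std::malloc(sizeof(Pod));
    if (!storage) return 1;
    // Merely reinterpreting the malloc'd storage as a Pod* would be UB;
    // placement-new formally begins the lifetime of the object:
    Pod* p = new (storage) Pod; // p->x is still indeterminate (trivial ctor)
    p->x = 42;
    p->~Pod();
    std::free(storage);
    return 0;
}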
With the struct definition given below...
struct A {
    virtual void hello() = 0;
};
Approach #1:
struct B : public A {
    virtual void hello() { ... }
};
Approach #2:
struct B : public A {
    void hello() { ... }
};
Is there any difference between these two ways to override the hello function?
They are exactly the same. There is no difference between them other than that the first approach requires more typing and is potentially clearer.
The 'virtualness' of a function is propagated implicitly; however, at least one compiler I use will generate a warning if the virtual keyword is not used explicitly, so you may want to use it, if only to keep the compiler quiet.
From a purely stylistic point-of-view, including the virtual keyword clearly 'advertises' the fact to the user that the function is virtual. This will be important to anyone further sub-classing B without having to check A's definition. For deep class hierarchies, this becomes especially important.
The virtual keyword is not necessary in the derived class. Here's the supporting documentation, from the C++ Draft Standard (N3337) (emphasis mine):
10.3 Virtual functions
2 If a virtual member function vf is declared in a class Base and in a class Derived, derived directly or indirectly from Base, a member function vf with the same name, parameter-type-list (8.3.5), cv-qualification, and ref-qualifier (or absence of same) as Base::vf is declared, then Derived::vf is also virtual (whether or not it is so declared) and it overrides Base::vf.
No, the virtual keyword on derived classes' virtual function overrides is not required. But it is worth mentioning a related pitfall: a failure to override a virtual function.
The failure to override occurs if you intend to override a virtual function in a derived class, but make an error in the signature so that it declares a new and different virtual function. This function may be an overload of the base class function, or it might differ in name. Whether or not you use the virtual keyword in the derived class function declaration, the compiler would not be able to tell that you intended to override a function from a base class.
This pitfall is, however, thankfully addressed by the C++11 explicit override language feature, which allows the source code to clearly specify that a member function is intended to override a base class function:
struct Base {
    virtual void some_func(float);
};

struct Derived : Base {
    virtual void some_func(int) override; // ill-formed - doesn't override a base class method
};
The compiler will issue a compile-time error and the programming error will be immediately obvious (perhaps the function in Derived should have taken a float as the argument).
See the Wikipedia article on C++11 for more on this.
Adding the "virtual" keyword is good practice as it improves readability , but it is not necessary. Functions declared virtual in the base class, and having the same signature in the derived classes are considered "virtual" by default.
There is no difference for the compiler whether you write virtual in the derived class or omit it.
But you need to look at the base class to get this information. Therefore I would recommend adding the virtual keyword in the derived class as well, if you want to make it obvious to the reader that the function is virtual.
The virtual keyword should be added to functions of a base class to make them overridable. In your example, struct A is the base class. virtual means nothing for using those functions in a derived class. However, if you want your derived class to also be a base class itself, and you want that function to be overridable, then you should write the virtual there as well (even though, strictly speaking, it is inherited automatically).
struct B : public A {
    virtual void hello() { ... }
};

struct C : public B {
    void hello() { ... }
};
Here C inherits from B, so B is both a base class (of C) and a derived class (of A), and C is the derived class.
The inheritance diagram looks like this:
A
^
|
B
^
|
C
So you should put virtual in front of functions inside potential base classes which may have children. virtual allows your children to override your functions. There is nothing wrong with putting virtual in front of functions inside derived classes, but it is not required. It is recommended, though, because if someone wanted to inherit from your derived class, they would not be pleased if method overriding did not work as expected.
So put virtual in front of functions in all classes involved in inheritance, unless you know for sure that the class will not have any children that need to override its functions. It is good practice.
There's a considerable difference when you have templates and start taking base class(es) as template parameter(s):
struct None {};

template<typename... Interfaces>
struct B : public Interfaces...
{
    void hello() const { ... }
};

struct A {
    virtual void hello() const = 0;
};

template<typename... Interfaces>
void t_hello(const B<Interfaces...>& b) // different code generated for each set of interfaces (a vtable-based clever compiler might reduce this to 2); both t_hello and b.hello() might be inlined properly
{
    b.hello(); // indirect, non-virtual call
}

void hello(const A& a)
{
    a.hello(); // indirect virtual call, inlining is impossible in general
}

int main()
{
    B<None> b; // OK, no vtable generated, empty base class optimization works, sizeof(b) == 1 usually
    B<None>* pb = &b;
    B<None>& rb = b;

    b.hello();   // direct call
    pb->hello(); // pb-relative non-virtual call (1 redirection)
    rb.hello();  // non-virtual call (1 redirection unless optimized out)
    t_hello(b);  // works as expected, one redirection
    // hello(b); // compile-time error: B<None> does not derive from A

    B<A> ba; // OK, vtable generated, sizeof(ba) >= sizeof(void*)
    A* pba = &ba;
    A& rba = ba;

    ba.hello();   // still can be a direct call, exact type of ba is deducible
    pba->hello(); // pba-relative virtual call (usually 3 redirections)
    rba.hello();  // virtual call (usually 3 redirections unless optimized out to 2)
    // t_hello(rba); // compile-time error (unless you add support for const A& in t_hello as well)
    hello(ba);
}
The fun part of it is that you can now decide whether a function is an interface function or a plain non-interface function after the classes have been defined. That is useful for interworking interfaces between libraries (don't rely on this as a standard design process within a single library). It costs you nothing to allow this for all of your classes - you might even typedef B to something if you'd like.
Note that, if you do this, you might want to declare copy / move constructors as templates, too: allowing construction from B instances with different interface sets lets you 'cast' between different B<> types.
It's questionable whether you should add support for const A& in t_hello(). The usual reason for this rewrite is to move away from inheritance-based specialization to a template-based one, mostly for performance reasons. If you continue to support the old interface, you can hardly detect (or discourage) old usage.
I would certainly include the virtual keyword for the child class, because:
i. Readability.
ii. The child class may be derived from further down the line, and you don't want the constructor of a further derived class to call this virtual function.
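Point ii seems to allude to how virtual dispatch behaves during construction; a minimal sketch of that behaviour (class names invented for illustration):

#include <cstdio>

struct Base {
    Base() { init(); } // calls Base::init, not an override: during this
                       // constructor the object's dynamic type is still Base
    virtual void init() { std::puts("Base::init"); }
    virtual ~Base() = default;
};

struct Derived : Base {
    void init() override { std::puts("Derived::init"); }
};

int main() {
    Derived d; // prints "Base::init"
    d.init();  // prints "Derived::init": normal virtual dispatch after construction
}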
I am curious why, in C++11, using "= default" on a derived virtual method does not select the base class implementation of the pure virtual function.
For example, the following test code produces the message "error: 'virtual void B::tst()' cannot be defaulted" from "g++ -std=c++11".
struct A {
    virtual ~A() = default;
    virtual void tst() = 0;
};

void A::tst() {}

struct B : public A {
    virtual void tst() = default;
};
We can of course provide a B::tst that invokes the base implementation, but one is concerned that this might be a higher-overhead implementation compared to a hypothetical "= default" based coding.
Sorry to ask about what might or might not have been in the minds of the C++ standards committee members, but perhaps someone here at Stack Overflow has some wisdom concerning the impracticality of using the default keyword in this way.
Thanks!
According to the standard §8.4.2/p1 Explicitly-defaulted functions [dcl.fct.def.default] (Emphasis Mine):
A function definition of the form:
attribute-specifier-seq(opt) decl-specifier-seq(opt) declarator virt-specifier-seq(opt) = default;
is called an explicitly-defaulted definition. A function that is
explicitly defaulted shall
(1.1) — be a special member function,
(1.2) — have the same declared function type (except for possibly differing ref-qualifiers and except that in
the case of a copy constructor or copy assignment operator, the parameter type may be “reference to
non-const T”, where T is the name of the member function’s class) as if it had been implicitly declared,
and
(1.3) — not have default arguments
Member function tst() is not a special member function. Thus, it cannot be defaulted.
Now, specifying a member function of a class (e.g., class A) as pure virtual entails that any class that inherits from it, and that you do not want to be abstract as well, must override that member function.
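As the question already notes, the working alternative is a plain override that forwards to the base implementation. A minimal sketch building on struct A from the question (the extra call is typically inlined away, though that is a quality-of-implementation matter, not a guarantee):

struct B : public A {
    void tst() override { A::tst(); } // explicit, non-virtual call to the base body
};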
The code below contains a reference to Enum::name (notice no type parameter).
public static <T extends Enum<T>> ColumnType<T, String> enumColumn(Class<T> klazz) {
    return simpleColumn((row, label) -> valueOf(klazz, row.getString(label)), Enum::name);
}

public static <T, R> ColumnType<T, R> simpleColumn(BiFunction<JsonObject, String, T> readFromJson,
                                                   Function<T, R> writeToDb) {
    // ...
}
Javac reports a warning during compilation:
[WARNING] found raw type: java.lang.Enum missing type arguments for generic class java.lang.Enum
Changing the expression to Enum<T>::name causes the warning to go away.
However Idea flags the Enum<T>::name version with a warning that:
Explicit type arguments can be inferred
In turn Eclipse (ECJ) doesn't report any problems with either formulation.
Which of the three approaches is correct?
On the one hand, raw types are rather nasty. If you try to put in some other type argument, e.g. Enum<Clause>::name, the compilation fails, so the warning offers some extra protection.
On the other hand, the above reference is equivalent to the lambda e -> e.name(), and that formulation doesn't require type arguments.
Environment:
Java 8u91
IDEA 15.0.3 Community
ECJ 4.5.2
There is no such thing as a “raw method reference”. Whilst raw types exist to help the migration of pre-Generics code, there can’t be any pre-Generics usage of method references, hence there is no “compatibility mode” and type inference is the norm. The Java Language Specification §15.13. Method Reference Expressions states:
If a method or constructor is generic, the appropriate type arguments may either be inferred or provided explicitly. Similarly, the type arguments of a generic type mentioned by the method reference expression may be provided explicitly or inferred.
Method reference expressions are always poly expressions
So while you may call the type before the :: a “raw type” when it refers to a generic class without specifying type arguments, the compiler will still infer the generic type signature according to the target function type. That’s why producing a warning about “raw type usage” makes no sense here.
Note that, e.g.
BiFunction<List<String>,Integer,String> f1 = List::get;
Function<Enum<Thread.State>,String> f2 = Enum::name;
can be compiled with javac without any warning (the specification names similar examples where the type should get inferred), whereas
Function<Thread.State,String> f3 = Enum::name;
generates a warning. The specification says about this case:
In the second search, if P1, ..., Pn is not empty and P1 is a subtype of ReferenceType, then the method reference expression is treated as if it were a method invocation expression with argument expressions of types P2, ..., Pn. If ReferenceType is a raw type, and there exists a parameterization of this type, G<...>, that is a supertype of P1, the type to search is the result of capture conversion (§5.1.10) applied to G<...>;…
So in the above example, the compiler should infer Enum<Thread.State> as the parametrization of Enum that is a supertype of Thread.State to search for an appropriate method and come to the same result as for the f2 example. It somehow does work, though it generates the nonsensical raw type warning.
Since apparently, javac only generates this warning when it has to search for an appropriate supertype, there is a simple solution for your case. Just use the exact type to search:
public static <T extends Enum<T>> ColumnType<T, String> enumColumn(Class<T> klazz) {
    return simpleColumn((row, label) -> valueOf(klazz, row.getString(label)), T::name);
}
This compiles without any warning.
I have these Extension methods:
public static void Replace<T>(this IList list, T newItem)
public static void Replace<T>(this IList<T> list, IEnumerable newItems)
public static void Replace<T>( this IList<T> list, IEnumerable<T> newItems )
I have a LINQ statement that produces an IList<IWell> called wells. (I confirm at runtime that wells is IEnumerable<IWell>.)
However, the statement
SelectedValues.Replace( wells );
always hits the first extension method, not the second or third. (I confirm at runtime that SelectedValues is IList<IWell>.)
Is it obvious what I am doing wrong?
What is the declared type of SelectedValues and of wells? Extension methods are bound at compile-time, not at runtime, so it is the compile-time types that matter.
Edit: Since you said that SelectedValues is declared as type IList, the only possible candidate for use as an extension method on SelectedValues of the three you provided is
public static void Replace<T>(this IList list, T newItem)
The compiler then realizes that it can consider wells as a T with T being the declared type of wells and then can invoke the method
public static void Replace<T>(this IList list, T newItem)
where SelectedValues fills in for the parameter list and wells fills in for the parameter newItem, and the declared type of wells fills in for the type parameter T. This is why that extension method is invoked.
Again, extension methods are bound at compile-time. If you want to invoke a different method, you need to use a different declared type for SelectedValues.
So, this is not a case of the compiler "Matching wrong Extension method," this is a case of the compiler matching the only possible extension method. This behavior is by design; it is a feature, not a bug.
I can't remember the rules for overload preference, but as a solution, specifying a (superfluous) cast usually fixes these:
SelectedValues.Replace((IEnumerable<IWell>)wells);
The interesting type here is not SelectedValues, but wells - which of T, IEnumerable, or IEnumerable<T> is it? If you always hit the first method, it is likely that wells, despite the plural form of its name, is just a single T instance.
Also, it would be nice to know exactly what type SelectedValues is - the fact that it implements IList<IWell> doesn't stop it from implementing IList as well.