What exactly is the in-class initializer? - C++11

I've read many texts that mention the in-class initializer and I've searched many questions on Stack Overflow; however, I didn't find any precise explanation of what the in-class initializer is. As far as I understand, a variable of built-in type declared outside any function will be default-initialized by the compiler; does the in-class initializer do the same thing for a declared variable?

Here is a simple example of in-class initialization. It's useful because it means less typing, especially when more than one constructor signature is available. It's recommended in the C++ Core Guidelines, too.
class Foo {
public:
    Foo() = default; // No need to initialize data members in the initializer list.
    Foo(bool) { /* Do stuff here. */ } // Again, data members already have values.
private:
    int bar = 42;
    //      ^^^^ in-class initialization
    int baz{};
    //     ^^ same, but requests zero initialization
};
As the data members are explicitly initialized, the second part of your question doesn't really apply to in-class initialization.
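To the second part of the question: a variable of built-in type at namespace scope is zero-initialized, but a non-static data member without an in-class initializer is left with an indeterminate value by a defaulted constructor; the in-class initializer is what gives the member a well-defined value. A minimal sketch (the names are illustrative):
int global_i; // namespace scope: zero-initialized, reads as 0

struct Widget {
    int with_init = 42; // in-class initializer: always 42 after default construction
    int without_init;   // no initializer: indeterminate after default construction of an automatic Widget
};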

Related

When does c++ right value destruct in this scenario?

Here is the code:
#include <string>

class SomeType {
public:
    SomeType() {}
    ~SomeType() {}
    std::string xxx;
};

bool function_ab() {
    SomeType(); // This is a temporary (an rvalue).
    // The temporary destructs here when I test the code. I want to make sure that it would always destruct here.
    int a = 0, b = 10;
    .... // other code
    return true;
}
Please tell me if you know the truth. Thank you!
What you have is called a temporary object. From §6.7.7,
Temporary objects are created
when a prvalue is converted to an xvalue
or, more specifically,
[Note 3: Temporary objects are materialized:
...
when a prvalue that has type other than cv void appears as a discarded-value expression ([expr.context]).
— end note]
and, regarding lifetime, the same section has this to say:
Temporary objects are destroyed as the last step in evaluating the full-expression ([intro.execution]) that (lexically) contains the point where they were created.
You can read more about the expression semantics, but in your case "full-expression" is fairly unambiguous.
SomeType();
The "full-expression" containing your constructor call is... the constructor call itself. So the destructor will be called immediately after evaluating the constructor. There are some exceptions to this rule (such as if the temporary object is thrown as an exception or is bound as a reference), but none of those apply here.
As noted in the comments, compilers are free to inline your constructor and destructor calls and then are free to notice that they do nothing and omit them entirely. Optimizers can do fun stuff with your code, provided it doesn't change the semantics. But a strict reading of the standard states that the destructor is called exactly where you suggested.
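A minimal sketch to observe this ordering (the Tracer type is purely illustrative):
#include <iostream>

struct Tracer {
    Tracer()  { std::cout << "constructed\n"; }
    ~Tracer() { std::cout << "destroyed\n"; }
};

int main() {
    Tracer(); // temporary object; destroyed at the end of this full-expression
    std::cout << "next statement\n";
}
This prints "constructed", then "destroyed", then "next statement", which matches the strict reading above.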

Why one may need a shared_from_this instead of directly using this pointer?

Look at the second answer here:
What is the need for enable_shared_from_this?
it says:
"Short answer: you need enable_shared_from_this when you need to use inside the object itself existing shared pointer guarding this object.
Out of the object you can simply assign and copy a shared_ptr because you deal with the shared_ptr variable as is."
and later down in the last line it says:
"And when and why one can need a shared pointer to this instead of just this it is quite other question. For example, it is widely used in asynchronous programming for callbacks binding."
Here in this post I want to ask exactly that other question: what is a use case for "enable_shared_from_this" and "shared_from_this"?
A simple use case would be to ensure this survives until the end of some asynchronous or delayed operation:
#include <future>
#include <memory>

class My_type : public std::enable_shared_from_this<My_type> {
public:
    void foo() {}

    void perform_foo() {
        auto self = shared_from_this();
        std::async(std::launch::async, [self, this]{ foo(); });
    }
};
boost::asio uses this technique a lot in their examples:
https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/example/cpp11/allocation/server.cpp
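A minimal usage sketch, assuming the My_type definition above. Note that shared_from_this() requires the object to already be owned by a std::shared_ptr, so it must be created through std::make_shared (or otherwise handed to a shared_ptr) first; since C++17, calling it on an unowned object throws std::bad_weak_ptr (before that it was undefined behavior).
#include <memory>

int main() {
    auto obj = std::make_shared<My_type>(); // must be owned by a shared_ptr
    obj->perform_foo();                     // the lambda's copy of self keeps *obj alive while the task runs
}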

Initializing GTest const class members

I want to achieve something like this:
class MyTest : public ::testing::Test {
public:
    const int myConstInt = 23;
};
TEST_F(MyTest, MyTest1) {... use myConstInt ...}
But recalling Item 4 of Effective C++, the initialization is not guaranteed in this manner and there is a chance of undefined behavior.
Let's say the above is Method 1.
I can think of two other methods to achieve this:
Method 2: Initialize myConstInt in the member initializer list of a MyTest constructor.
Method 3: Make it constexpr - since the value is set at compile time, I shouldn't face any initialization issues at runtime.
Which would be the correct way to go about this? Also, Effective C++ is a relatively old book - does the discussion of Item 4 still fully apply?
const int myConstInt = 23;
is a non-static data member with a default member initializer:
https://en.cppreference.com/w/cpp/language/data_members#Member_initialization
There is absolutely no risk of undefined behavior; the initialization is guaranteed.
After a discussion on the Cpplang Slack, I found that the best solution would be to use a static const for any integral/enum type. One can also use static constexpr, which is essentially the same, except that in C++17 static constexpr data members are also implicitly inline.
Additional useful reference: constexpr vs. static const: Which one to prefer?
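A sketch of Method 3 applied to the fixture (assuming <gtest/gtest.h>; the test name is illustrative):
#include <gtest/gtest.h>

class MyTest : public ::testing::Test {
public:
    static constexpr int myConstInt = 23; // implicitly inline since C++17
};

TEST_F(MyTest, UsesConstant) {
    EXPECT_EQ(23, myConstInt);
}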

Is Virtual destructor rule inherited or not? [duplicate]

With the struct definition given below...
struct A {
    virtual void hello() = 0;
};
Approach #1:
struct B : public A {
    virtual void hello() { ... }
};
Approach #2:
struct B : public A {
    void hello() { ... }
};
Is there any difference between these two ways to override the hello function?
They are exactly the same. There is no difference between them other than that the first approach requires more typing and is potentially clearer.
The 'virtualness' of a function is propagated implicitly, however at least one compiler I use will generate a warning if the virtual keyword is not used explicitly, so you may want to use it if only to keep the compiler quiet.
From a purely stylistic point-of-view, including the virtual keyword clearly 'advertises' the fact to the user that the function is virtual. This will be important to anyone further sub-classing B without having to check A's definition. For deep class hierarchies, this becomes especially important.
The virtual keyword is not necessary in the derived class. Here's the supporting documentation, from the C++ Draft Standard (N3337) (emphasis mine):
10.3 Virtual functions
2 If a virtual member function vf is declared in a class Base and in a class Derived, derived directly or indirectly from Base, a member function vf with the same name, parameter-type-list (8.3.5), cv-qualification, and ref-qualifier (or absence of same) as Base::vf is declared, then Derived::vf is also virtual (whether or not it is so declared) and it overrides Base::vf.
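A small sketch of what this paragraph means in practice (the Base/Derived names here are illustrative): the derived function is virtual even without the keyword, so dispatch through a base pointer still reaches it.
#include <iostream>

struct Base {
    virtual void vf() { std::cout << "Base\n"; }
};

struct Derived : Base {
    void vf() { std::cout << "Derived\n"; } // implicitly virtual; overrides Base::vf
};

int main() {
    Derived d;
    Base* p = &d;
    p->vf(); // prints "Derived"
}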
No, the virtual keyword on derived classes' virtual function overrides is not required. But it is worth mentioning a related pitfall: a failure to override a virtual function.
The failure to override occurs if you intend to override a virtual function in a derived class, but make an error in the signature so that it declares a new and different virtual function. This function may be an overload of the base class function, or it might differ in name. Whether or not you use the virtual keyword in the derived class function declaration, the compiler would not be able to tell that you intended to override a function from a base class.
This pitfall is, however, thankfully addressed by the C++11 explicit override language feature, which allows the source code to clearly specify that a member function is intended to override a base class function:
struct Base {
    virtual void some_func(float);
};

struct Derived : Base {
    virtual void some_func(int) override; // ill-formed - doesn't override a base class method
};
The compiler will issue a compile-time error and the programming error will be immediately obvious (perhaps the function in Derived should have taken a float as the argument).
Refer to WP:C++11.
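For contrast, a sketch of the corrected declaration, which compiles and genuinely overrides (assuming the intent really was a float parameter):
struct Base {
    virtual void some_func(float);
};

struct Derived : Base {
    void some_func(float) override; // matches Base::some_func, so the override is accepted
};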
Adding the virtual keyword is good practice as it improves readability, but it is not necessary. Functions declared virtual in the base class, and having the same signature in the derived classes, are considered virtual by default.
There is no difference for the compiler whether you write virtual in the derived class or omit it.
But a reader needs to look at the base class to get this information. Therefore I would recommend adding the virtual keyword in the derived class as well, if you want to show the human reader that this function is virtual.
The virtual keyword should be added to functions of a base class to make them overridable. In your example, struct A is the base class. virtual means nothing for merely using those functions in a derived class. However, if you want your derived class to also be a base class itself, and you want that function to be overridable, then you would put the virtual there as well (although, as noted below, the function stays virtual by inheritance either way).
struct B : public A {
    virtual void hello() { ... }
};

struct C : public B {
    void hello() { ... }
};
Here C inherits from B, so B is both a derived class (from A) and a base class (for C), and C is the most derived class.
The inheritance diagram looks like this:
A
^
|
B
^
|
C
So you should put the virtual in front of functions inside of potential base classes which may have children. virtual allows your children to override your functions. There is nothing wrong with putting the virtual in front of functions inside the derived classes, but it is not required. It is recommended, though, because if someone wants to inherit from your derived class, they would not be pleased if method overriding didn't work as expected.
So put virtual in front of functions in all classes involved in inheritance, unless you know for sure that the class will not have any children who would need to override the functions of the base class. It is good practice.
There's a considerable difference when you have templates and start taking base class(es) as template parameter(s):
struct None {};

template<typename... Interfaces>
struct B : public Interfaces...
{
    void hello() { ... }
};

struct A {
    virtual void hello() = 0;
};

template<typename... Interfaces>
void t_hello(const B<Interfaces...>& b) // different code generated for each set of interfaces (a vtable-based clever compiler might reduce this to 2); both t_hello and b.hello() might be inlined properly
{
    b.hello(); // indirect, non-virtual call
}

void hello(const A& a)
{
    a.hello(); // indirect virtual call, inlining is impossible in general
}

int main()
{
    B<None> b; // OK, no vtable generated, empty base class optimization works, sizeof(b) == 1 usually
    B<None>* pb = &b;
    B<None>& rb = b;
    b.hello();   // direct call
    pb->hello(); // pb-relative non-virtual call (1 redirection)
    rb.hello();  // non-virtual call (1 redirection unless optimized out)
    t_hello(b);  // works as expected, one redirection
    // hello(b); // compile-time error

    B<A> ba; // OK, vtable generated, sizeof(ba) >= sizeof(void*)
    A* pba = &ba;
    A& rba = ba;
    ba.hello();   // still can be a direct call, exact type of ba is deducible
    pba->hello(); // pba-relative virtual call (usually 3 redirections)
    rba.hello();  // rba-relative virtual call (usually 3 redirections unless optimized out to 2)
    // t_hello(rba); // compile-time error (unless you add support for const A& in t_hello as well)
    hello(ba);
}
The fun part of it is that you can now decide whether a function is an interface (virtual) function or a plain non-interface function after defining the class. That is useful for interworking interfaces between libraries (don't rely on this as a standard design process within a single library). It costs you nothing to allow this for all of your classes - you might even typedef B to something if you'd like.
Note that, if you do this, you might want to declare copy/move constructors as templates, too: allowing construction from different interfaces lets you 'cast' between different B<> types.
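A hedged sketch of what such a templated converting constructor could look like (this sketch is not from the original answer; the body is empty because this B has no non-interface state to copy):
template<typename... Interfaces>
struct B : public Interfaces...
{
    B() = default;

    // Converting constructor template: build a B with one set of interfaces
    // from a B with a different set, copying any non-interface state (none here).
    template<typename... OtherInterfaces>
    B(const B<OtherInterfaces...>&) {}

    void hello() {}
};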
It's questionable whether you should add support for const A& in t_hello(). The usual reason for this rewrite is to move away from inheritance-based specialization to a template-based one, mostly for performance reasons. If you continue to support the old interface, you can hardly detect (or discourage) old usage.
I will certainly include the virtual keyword for the child class, because
i. Readability.
ii. This child class may itself be derived further down the line; you don't want the constructor of that further-derived class to call this virtual function.

Uniform initialization behavior different for different types in vector

I came across this article in which I read this example by one of the posters. I have quoted that here for convenience.
#include <iostream>
#include <vector>

struct Foo
{
    Foo(int i) {} // #1
    Foo() {}
};

int main()
{
    std::vector<Foo> f {10};
    std::cout << f.size() << std::endl;
}
The above code, as written, emits “1” (10 is converted to a Foo by a constructor that takes an int, then the vector’s initializer_list constructor is called). If I comment out the line commented as #1, the result is “10” (the initializer_list cannot be converted, so the int constructor is used).
My question is: why does it emit 10 if the int constructor is removed?
I understand that uniform initialization works in the following order:
1. Calls the initializer list if available or possible
2. Calls the default constructor if available
3. Does aggregate initialization
In the above case, why does it create 10 items in the vector since 1, 2 and 3 are not possible? Does this mean that with uniform initialization a vector of items might always behave differently?
Borrowing a quote from Scott Meyers in Effective Modern C++ (emphasis in original):
If, however, one or more constructors declare a parameter of type std::initializer_list, calls using the braced initialization syntax strongly prefer the overloads taking std::initializer_lists. Strongly. If there's any way for compilers to construe a call using a braced initializer to be a constructor taking a std::initializer_list, compilers will employ that interpretation.
So when you have std::vector<Foo> f {10};, it will try to use the constructor of vector<Foo> that takes an initializer_list<Foo>. If Foo is constructible from an int, that is the constructor we're using - so we end up with one Foo constructed from 10.
Or, from the standardese, in [over.match.list]:
When objects of non-aggregate class type T are list-initialized (8.5.4), overload resolution selects the constructor
in two phases:
(1.1) — Initially, the candidate functions are the initializer-list constructors (8.5.4) of the class T and the
argument list consists of the initializer list as a single argument.
(1.2) — If no viable initializer-list constructor is found, overload resolution is performed again, where the
candidate functions are all the constructors of the class T and the argument list consists of the elements
of the initializer list.
If there is a viable initializer-list constructor, it is used. If you didn't have the Foo(int ) constructor, there would not be a viable initializer-list constructor, and overload resolution the second time around would find the constructor of vector that takes a size - and so you'd get a vector of 10 default-constructed Foos instead.
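A quick way to see both behaviors side by side (a sketch, assuming Foo keeps its Foo(int) constructor): braces select the initializer_list constructor, parentheses select the size constructor.
#include <iostream>
#include <vector>

struct Foo {
    Foo(int) {}
    Foo() = default;
};

int main() {
    std::vector<Foo> braces{10}; // initializer_list<Foo>: one element constructed from 10
    std::vector<Foo> parens(10); // size constructor: ten default-constructed elements
    std::cout << braces.size() << ' ' << parens.size() << '\n'; // prints "1 10"
}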
