ABI in pimpl idiom with unique_ptr - c++11

My goal is to provide ABI compatibility for my new library.
I am looking toward using unique_ptr instead of raw pointers. But I'm afraid that if I update the standard library, I may break the ABI. Is that true? Is there any guarantee of ABI stability for unique_ptr in future stdlib releases?

As you can see from this blog post, the problem is known and is being addressed. As things stand now, I'm afraid the best you can do is check with your compiler vendor whether they provide any guarantee (e.g. not to break the ABI in minor releases).

The problem arises when you use a custom deleter. The unique_ptr destructor (unlike shared_ptr's) requires the complete type of the object it destroys, and the deleter is part of the unique_ptr's type rather than being erased at construction, so you need to specify the deleter in the data member declaration:
class Foo {
private:
    std::unique_ptr<FooImpl> _pimpl;
};
When the pimpl is declared this way, you are constrained to using the default deleter. If you want a custom deleter, you need to spell it out in the declaration:
class Foo {
private:
    // the deleter is invoked with the raw pointer, hence void (FooImpl*)
    std::unique_ptr<FooImpl, std::function<void (FooImpl*)>> _pimpl;
};
This means you cannot keep the choice open: the declaration fixes whether the unique_ptr destructor calls the default delete or a custom deleter.
The second version is the more flexible one, but if you want to keep the default behavior with it, you must instantiate the unique_ptr with a deleter that is equivalent to the default delete.
IMO, this is a significant drawback of using unique_ptr for the pimpl idiom.
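A minimal sketch of the default-deleter variant (the file split is illustrative): the destructor is only declared in the header and defined in the source file, where FooImpl is a complete type, so the default deleter is instantiated against the complete type.

// Foo.h
#include <memory>

class FooImpl; // forward declaration suffices for the member

class Foo {
public:
    Foo();
    ~Foo(); // declared here, defined where FooImpl is complete
private:
    std::unique_ptr<FooImpl> _pimpl;
};

// Foo.cpp
class FooImpl { /* ... */ };

Foo::Foo() : _pimpl(new FooImpl) {}
Foo::~Foo() = default; // default deleter instantiated against the complete type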

Related

What is the best type for a callable object in a template method?

Every time I write a signature that accepts a templated callable, I always wonder what the best type for the parameter is. Should it be a value type or a const reference type?
For example,
template <class Func>
void execute_func(Func func) {
    /* ... */
}

// vs.

template <class Func>
void execute_func(const Func& func) {
    /* ... */
}
Is there any situation where the callable is larger than 64 bits (i.e. the size of a function pointer)? Maybe std::function behaves differently?
In general, I do not like passing callable objects by const reference, because it is not very flexible (e.g. it cannot be used with mutable lambdas). I suggest passing them by value. If you check the STL algorithm implementations (e.g. std::for_each), all of the callable objects are passed by value as well.
Done this way, users can still use std::ref(func) or std::cref(func) to avoid unnecessary copying of the callable object (via reference_wrapper), if desired.
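A short sketch of that escape hatch (BigFunctor is a made-up type for illustration): the algorithm still takes the callable by value, but the copied object is a cheap reference_wrapper rather than the functor itself.

#include <algorithm>
#include <functional>
#include <vector>

struct BigFunctor {
    std::vector<int> state; // potentially expensive to copy
    void operator()(int x) { state.push_back(x); }
};

int main() {
    std::vector<int> v{1, 2, 3};
    BigFunctor f;
    // for_each copies its argument, but here it only copies the wrapper:
    std::for_each(v.begin(), v.end(), std::ref(f));
}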
Is there any situation where the callable is greater than 64bits
From my experience working on CAD/CAE applications: a lot. Functors can easily hold more than 64 bits of data. More than two ints, more than one double, or more than one pointer is all it takes to exceed that limit in Visual Studio.
There is no single best type. What if you have a non-copyable functor? The first template will not work, since it will try to use the deleted copy constructor. You could move it instead, but then you will (probably) lose ownership of the object. It all depends on the intended use.
And yes, std::function can be much bigger than size_t. If you bind a member function, it is already two words (object pointer and function pointer); if you bind some arguments, it may grow further. The same goes for lambdas: every captured value is stored in the lambda, which is essentially a functor in this case.
A const reference will not work if your callable has a non-const operator(). Neither option is perfect for all uses. Sometimes the best approach is to provide a few different overloads so you can handle all cases; SFINAE is your friend here.
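To illustrate the non-const operator() point: a mutable lambda compiles with the by-value signature but not with the const-reference one (function names here are made up for the example).

template <class Func>
void execute_by_value(Func func) { func(); } // copy may have non-const operator()

template <class Func>
void execute_by_cref(const Func& func) { func(); } // requires const operator()

int main() {
    int n = 0;
    auto counter = [n]() mutable { ++n; }; // mutable: operator() is non-const
    execute_by_value(counter);    // OK: calls operator() on the copy
    // execute_by_cref(counter);  // error: non-const operator() on const ref
}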

why shared pointer has a virtual function

It is mentioned in Scott Meyers' book that part of the overhead caused by using shared pointers is that they need a virtual function to destroy the pointed-to object correctly. My question is: why? Is this not supposed to be the responsibility of the pointed-to object's class, by having a virtual destructor?
Is this not supposed to be the responsibility of the class of that pointed object to have a virtual destructor?
That would be one possible way to design a shared pointer, but std::shared_ptr allows you to do the following, even if Base does not have a virtual destructor:
std::shared_ptr<Base> p { new Derived{} };
It does this by capturing the correct deleter for the argument when the std::shared_ptr is constructed, then calls that when the reference count hits zero rather than just using delete (of course, you can pass your own custom deleter to use instead). This is commonly referred to as type erasure, and this technique is generally implemented using virtual function calls.
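A minimal sketch of the technique; this is not the actual std::shared_ptr implementation, just the general shape of the erased-deleter control block.

#include <iostream>

struct control_block_base {
    virtual void destroy() = 0; // virtual dispatch picks the right delete
    virtual ~control_block_base() = default;
};

template <class T>
struct control_block : control_block_base {
    T* ptr;
    explicit control_block(T* p) : ptr(p) {}
    void destroy() override { delete ptr; } // deletes through the original type
};

struct Base { ~Base() { std::cout << "~Base\n"; } }; // note: non-virtual dtor
struct Derived : Base { ~Derived() { std::cout << "~Derived\n"; } };

int main() {
    // The block is created where the concrete type is still known...
    control_block_base* cb = new control_block<Derived>(new Derived{});
    cb->destroy(); // ...so ~Derived runs although Base's dtor is non-virtual
    delete cb;
}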

making a shared_ptr weak

I've got a map of shared_ptrs:
std::unordered_map<uint64_t, std::shared_ptr<Target>> map;
Is there a way to make them weak_ptrs at some point, or do I have to make something like

std::unordered_map<uint64_t,
                   std::pair<std::shared_ptr<Target>,
                             std::weak_ptr<Target>>> map;

and swap them?
Thanks in advance
As people already stated in the comments, you cannot do that. A shared_ptr always owns a reference, while a weak_ptr never does. The API of the standard library is explicitly designed so that the type tells you whether you currently own a reference or not. The only thing you should do to access the pointee of a weak_ptr is lock() it and check the resulting shared_ptr for non-null-ness; that way you can be sure (even in a multi-threaded environment) that you own a reference yourself while working with the object.
What you could possibly do is have a map of weak_ptr all along, and store a shared_ptr elsewhere as long as you want to keep the object alive. Depending on the design or purpose, the place for the shared_ptr might even be a member variable of the object.
If you use a map of pairs, I would not swap() the pair members. Instead, start with a pair of a shared and a weak ptr referring to the same managed object, and just reset() the shared ptr when you decide to drop the strong reference, leaving the weak_ptr untouched.
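A small sketch of that approach (Target and the key values are placeholders): the map only holds weak references, and the strong reference lives elsewhere.

#include <cstdint>
#include <memory>
#include <unordered_map>

struct Target { /* ... */ };

int main() {
    std::unordered_map<uint64_t, std::weak_ptr<Target>> map;

    std::shared_ptr<Target> owner = std::make_shared<Target>();
    map[42] = owner; // the map stores a weak reference only

    if (std::shared_ptr<Target> p = map[42].lock()) {
        // p owns a reference for the duration of this block
    }

    owner.reset(); // drop the strong reference
    // map[42].lock() now returns an empty shared_ptr; the Target is gone
}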
You can also use the weak_ptr or shared_ptr itself as the key in the map, provided you use an owner-based comparator. So indeed:
std::map<std::weak_ptr<void>, std::string, std::owner_less<std::weak_ptr<void>>> information_map;
would be able to associate strings with any kind of weak ptr, regardless of the pointee type. Note that weak_ptr has no operator<, so the default std::less would not compile; std::owner_less provides the required strict weak ordering, based on the owning control block.
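For example, a heterogeneous lookup table along those lines might look like this (keys and strings are illustrative):

#include <map>
#include <memory>
#include <string>

int main() {
    std::map<std::weak_ptr<void>, std::string,
             std::owner_less<std::weak_ptr<void>>> information_map;

    auto a = std::make_shared<int>(1);
    auto b = std::make_shared<std::string>("x");

    information_map[a] = "an int";    // shared_ptr converts to weak_ptr<void>
    information_map[b] = "a string";

    auto it = information_map.find(a); // it->second == "an int"
}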
See also
Advanced Shared Pointer Programming Techniques
How to compare pointers?

Should I always use the override contextual keyword?

I know that the override contextual keyword was introduced to write safer code (by checking that a virtual function with the same signature exists in a base class), but I don't feel good about it, because writing override every time I override a virtual function seems redundant to me.
Is it bad practice not to use the override contextual keyword in 99% of cases? Why/when should I have to use it (isn't a compiler warning enough when we mistakenly hide a virtual function)?
EDIT: In other words, what is the advantage of using the override contextual keyword in C++11, when we always had a compiler warning in C++03 if we were hiding a virtual function by mistake (without using the override contextual keyword)?
The override keyword is totally useful and I would recommend using it all the time.
If you misspell the name of your virtual function, the code will still compile fine, but at runtime the program will call the wrong function: the base class function rather than your override.
It can be a really difficult bug to find:
#include <iostream>

class Base
{
public:
    virtual ~Base() {}

    virtual int func()
    {
        // do stuff for bases
        return 3;
    }
};

class Derived
    : public Base
{
public:
    virtual int finc() // WHOOPS MISSPELLED, override would prevent this
    {
        // do stuff for deriveds
        return 8;
    }
};

int main()
{
    Base* base = new Derived;
    std::cout << base->func() << std::endl;
    delete base;
}
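With override, the same misspelling becomes a compile-time error instead of a silent runtime bug. A minimal fix to the class above:

class Derived
    : public Base
{
public:
    int finc() override // error: 'finc' marked 'override' but does not override
    {
        return 8;
    }
};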
Contextual keywords like override serve as annotations: they make sure that anyone reading the code realizes this is a function that overrides a function in a base class or an interface.
The compiler can also give an error if the function you meant to override has been removed from the base class, in which case you might want to think about removing your function as well.
As far as I know, nothing bad happens if you omit the annotation. It's neither right nor wrong. As you stated correctly already: annotations were introduced to write safer code.
However, they won't change your code in any functional way.
If you work as a single programmer on your own project, it might not matter whether you use them or not. It is, however, good practice to stick to one style (i.e. either you use it or you don't; anything in between, like sometimes using it and sometimes not, only causes confusion).
If you work in a team, you should discuss the topic with your teammates and decide whether you all use it or not.
What is the advantage of using override contextual keyword in C++11 while we always had a compiler warning if we were hiding a virtual function mistakenly
Nearly none!?
But:
It depends on how strictly warnings are treated by your build rules. If your policy is that every warning MUST be fixed, you get the same effect, but only as long as you are using a compiler that actually gives you the warning.
We have decided to always use override and to remove virtual on overriding methods. So the "overhead" is zero, and the code is portable in the sense that it gives an error on misuse.
I personally like this new feature, because it makes the language clearer: if you say this is an override, it will be checked! And if we want to add a new method with a different signature, we will NOT get a false-positive warning, which is important in your scenario!

State of object after std::move construction

Is it legal/proper C++0x to leave an object that was moved from (for the purposes of move construction) in a state that can only be destroyed? For instance:
class move_constructible {...};

int main()
{
    move_constructible x;
    move_constructible y(std::move(x));
    // From now on, x can only be destroyed. Any other method will result
    // in a fatal error.
}
For the record, I'm trying to wrap in a C++ class a C struct with a pointer member that is always supposed to point to some allocated memory area; the whole C library API relies on this assumption. But this requirement prevents writing a truly cheap move constructor, since for x to remain a valid object after the move, it would need its own allocated memory area. I've written the destructor so that it first checks for a NULL pointer before calling the corresponding cleanup function from the C API, so that at least the struct can be safely destroyed after the move.
Yes, the language allows this; in fact, it was one of the purposes of move semantics. It is, however, your responsibility to ensure that no other methods get called and/or to provide proper diagnostics. Note that you can usually also use at least the assignment operator to "revive" your variable, as in the classical example of swapping two values.
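A sketch of the pattern described in the question (the C API names and their trivial definitions are made up for the example): the move constructor nulls out the source, the destructor checks before cleaning up, and move assignment can revive a moved-from object.

// Stand-in for the real C API:
struct c_handle { int dummy; };
c_handle* c_handle_create() { return new c_handle{}; }
void c_handle_destroy(c_handle* h) { delete h; }

class move_constructible {
    c_handle* handle_;
public:
    move_constructible() : handle_(c_handle_create()) {}

    // Cheap move: steal the handle and null out the source. Afterwards
    // the moved-from object may only be destroyed or assigned to.
    move_constructible(move_constructible&& other) noexcept
        : handle_(other.handle_) {
        other.handle_ = nullptr;
    }

    // Move assignment can "revive" a moved-from object.
    move_constructible& operator=(move_constructible&& other) noexcept {
        if (this != &other) {
            if (handle_) c_handle_destroy(handle_);
            handle_ = other.handle_;
            other.handle_ = nullptr;
        }
        return *this;
    }

    ~move_constructible() {
        if (handle_) // NULL check makes destroying a moved-from object safe
            c_handle_destroy(handle_);
    }
};

int main() {
    move_constructible x;
    move_constructible y(std::move(x)); // x is now only destructible...
    x = std::move(y);                   // ...or revivable via assignment
}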
See also this question
