This is a class which contains image data.
class MyMat
{
public:
int width, height, format;
uint8_t *data;
};
I want to design MyMat with automatic memory management. The image data could be shared among many objects.
Common APIs which I'm going to design:
+) C++ 11
+) Assignment : share data
MyMat a2(w, h, fmt);
.................
a2 = a1;
+) Accessing data should be simple and short.
Can use raw pointer directly.
In general, I want to design MyMat to be like OpenCV's cv::Mat.
Could you suggest a proper design?
1) Using std::vector<uint8_t> data
I have to write some code to remove the copy constructor and the assignment operator, because someone could call them and cause a memory copy.
The compiler must support copy elision and return value optimization.
Always having to use move assignment and pass by reference is inconvenient:
a2 = std::move(a1)
void test(MyMat &mat)
std::queue<MyMat> lists;
lists.push(std::move(a1))
..............................
2) Use shared_ptr<uint8_t> data
Following this guideline http://www.codingstandard.com/rule/17-3-4-do-not-create-smart-pointers-of-array-type/,
we shouldn't create smart pointers of array type.
3) Use shared_ptr< std::vector<uint8_t> > data
To access data, use (*a1.data)[0]; the syntax is very inconvenient.
4) Use raw pointer, uint8_t *data
Write proper constructor and destructor for this class.
To get automatic memory management, use a smart pointer:
shared_ptr<MyMat> mat
std::queue< shared_ptr<MyMat> > lists;
Matrix classes are normally expected to be value types with deep copying. So, stick with std::vector<uint8_t> and let the user decide whether a copy is expensive or not in their specific context.
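For illustration, a minimal sketch of that value-type design; the bytes_per_pixel parameter and the ptr() accessor are illustrative additions, not part of the question's interface:
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

class MyMat
{
public:
    int width = 0, height = 0, format = 0;
    std::vector<std::uint8_t> data;            // owns the pixels, deep-copies, moves cheaply

    MyMat() = default;
    MyMat(int w, int h, int fmt, std::size_t bytes_per_pixel)
        : width(w), height(h), format(fmt),
          data(static_cast<std::size_t>(w) * h * bytes_per_pixel) {}

    std::uint8_t*       ptr()       { return data.data(); }   // raw pointer access stays short
    const std::uint8_t* ptr() const { return data.data(); }
};

int main()
{
    MyMat a1(640, 480, 0, 3);
    MyMat a2 = a1;             // deep copy: the cost is explicit to the caller
    MyMat a3 = std::move(a1);  // cheap move: just steals the buffer
    (void)a2; (void)a3;
}
No destructor, copy constructor or assignment operator needs to be written by hand; the compiler-generated ones do the right thing.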
Instead of raw pointers for arrays prefer std::unique_ptr<T[]> (note the square brackets).
std::array - fixed length in-place buffer (beautified array)
std::vector - variable length buffer
std::shared_ptr - shared ownership data
std::weak_ptr - expiring view on shared data
std::unique_ptr - unique ownership
std::string_view, std::span, std::ref, &, * - reference to data with no assumption of ownership
The simplest design is to have a single owner and RAII-enforced lifetime, ensuring everything that needs to be alive at a certain time is alive and needs no other ownership; so generally I'd see if I could live with std::unique_ptr<T> before complicating things further (unless I can fit all my data on the stack, in which case I don't even need a unique_ptr).
On a side note - shared pointers are not free, they need dynamic memory allocation for the shared state (two allocations if done incorrectly :) ), whereas unique pointers are true "zero" overhead RAII.
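As a small illustration of that point (Widget here is just a placeholder type):
#include <memory>

struct Widget { int x = 0; };

int main()
{
    std::shared_ptr<Widget> a(new Widget);   // two allocations: the Widget and the separate control block
    auto b = std::make_shared<Widget>();     // one allocation holding the object and control block together
}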
Matrices should use value semantics, and they should be nearly free to move.
Matrices should support a view type as well.
There are two approaches for a basic Matrix that make sense.
First, a Matrix type that wraps a vector<T> with a stride field. This has an overhead of 3 instead of 2 pointers (or 1 pointer and a size) compared to a hand-rolled one. I don't consider that significant; the ease of debugging a vector<T> etc makes it more than worth that overhead.
In this case you'd want to write a separate MatrixView.
I'd use CRTP to create a common base class for both to implement operator[] and stride fields.
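A minimal sketch of that layout; this is not the answer's exact code, and the row-major storage, the member names and operator[] returning a row pointer are my own assumptions:
#include <cstddef>
#include <vector>

template<class Derived, class T>
struct matrix_base {
    // operator[] yields a pointer to the start of row r, so m[r][c] works for both types
    T* operator[](std::size_t r) {
        return self().data() + r * self().stride();
    }
    T const* operator[](std::size_t r) const {
        return self().data() + r * self().stride();
    }
private:
    Derived&       self()       { return static_cast<Derived&>(*this); }
    Derived const& self() const { return static_cast<Derived const&>(*this); }
};

template<class T>
struct Matrix : matrix_base<Matrix<T>, T> {
    Matrix(std::size_t rows, std::size_t cols)
        : m_rows(rows), m_cols(cols), m_data(rows * cols) {}
    T*          data()         { return m_data.data(); }
    T const*    data()   const { return m_data.data(); }
    std::size_t rows()   const { return m_rows; }
    std::size_t cols()   const { return m_cols; }
    std::size_t stride() const { return m_cols; }      // owning storage is densely packed
private:
    std::size_t m_rows, m_cols;
    std::vector<T> m_data;                              // value semantics, cheap to move
};

template<class T>
struct MatrixView : matrix_base<MatrixView<T>, T> {
    MatrixView(T* data, std::size_t rows, std::size_t cols, std::size_t stride)
        : m_data(data), m_rows(rows), m_cols(cols), m_stride(stride) {}
    T*          data()   const { return m_data; }
    std::size_t rows()   const { return m_rows; }
    std::size_t cols()   const { return m_cols; }
    std::size_t stride() const { return m_stride; }     // a view may cover a strided sub-block
private:
    T*          m_data;                                 // non-owning
    std::size_t m_rows, m_cols, m_stride;
};
An image type in the spirit of the question could then be Matrix<uint8_t> plus a format field, and a region of interest becomes a MatrixView without any copying.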
A distinct basic Matrix approach is to make your Matrix immutable. In this case, the Matrix wraps a std::shared_ptr<T const> and a std::shared_ptr<std::mutex> and (local, or stored with the mutex) width, height and stride field.
Copying such a Matrix just duplicates handles.
Modifying such a Matrix causes you to acquire the std::mutex, then check that shared_ptr<T const> has a use_count()==1. If it does, you cast-away const and modify the data referred to in the shared_ptr. If it does not, you duplicate the buffer, create a new mutex, and operate on the new state.
Here is a copy on write matrix buffer:
template<class T>
struct cow_buffer {
std::size_t rows() const { return m_rows; }
std::size_t cols() const { return m_cols; }
cow_buffer( T const* in, std::size_t rows, std::size_t cols, std::size_t stride ) {
copy_in( in, rows, cols, stride );
}
void copy_in( T const* in, std::size_t rows, std::size_t cols, std::size_t stride ) {
// allocate mutable storage; it is stored as T const in m_data, which matters below:
auto new_data = std::shared_ptr<T>( new T[rows*cols], std::default_delete<T[]>() );
for (std::size_t i = 0; i < rows; ++i )
    std::copy( in+i*stride, in+i*stride+cols, new_data.get()+i*cols );
m_data = new_data;
m_rows = rows;
m_cols = cols;
m_stride = cols;
m_lock = std::make_shared<std::mutex>();
}
template<class F>
decltype(auto) read( F&& f ) const {
return std::forward<F>(f)( m_data.get() );
}
template<class F>
decltype(auto) modify( F&& f ) {
auto lock = std::unique_lock<std::mutex>(*m_lock);
if (m_data.use_count()==1) {
return std::forward<F>(f)( const_cast<T*>(m_data.get()) );
}
auto old_data = m_data;
copy_in( old_data.get(), m_rows, m_cols, m_stride );
return std::forward<F>(f)( const_cast<T*>(m_data.get()) );
}
explicit operator bool() const { return m_data && m_lock; }
private:
std::shared_ptr<T const> m_data;
std::shared_ptr<std::mutex> m_lock;
std::size_t m_rows = 0, m_cols = 0, m_stride = 0;
};
something like that.
The mutex is required to ensure synchronization between multiple threads that each become the sole owner and modify m_data in turn; without it, the data from a previous write might not be synchronized with the current one.
OK, I've been muddling through Stack Overflow on the particulars of void*, plus books like The C Programming Language (K&R) and The C++ Programming Language (Stroustrup). What have I learned? That void* is a generic pointer with no type attached. It requires a cast to a concrete type, and printing a void* just yields the address.
What else do I know? A void* can't be dereferenced, and so far it remains the one thing in C/C++ about which I have found much written but little understanding imparted.
I understand that it must be cast, such as *(char*)ptr, but what makes no sense to me for a generic pointer is that I must somehow already know what type I need in order to grab a value. I'm a Java programmer; I understand generic types, but this is something I struggle with.
So I wrote some code
#include <cstdio>
#include <iostream>

typedef struct node
{
void* data;
node* link;
}Node;
typedef struct list
{
Node* head;
}List;
Node* add_new(void* data, Node* link);
void show(Node* head);
Node* add_new(void* data, Node* link)
{
Node* newNode = new Node();
newNode->data = data;
newNode->link = link;
return newNode;
}
void show(Node* head)
{
while (head != nullptr)
{
std::cout << head->data;
head = head->link;
}
}
int main()
{
List list;
list.head = nullptr;
list.head = add_new("My Name", list.head);
list.head = add_new("Your Name", list.head);
list.head = add_new("Our Name", list.head);
show(list.head);
fgetc(stdin);
return 0;
}
I'll handle the memory deallocation later. Assuming I have no understanding of the type stored in void*, how do I get the value out? It seems I already need to know the type, which tells me nothing about the generic nature of void*; I can follow what is here, but I still have no real understanding.
Why am I expecting void* to cooperate, and the compiler to automatically cast out the type of whatever is hidden away on the heap or stack?
I'll handle the memory deallocation later. Assuming I have no understanding of the type stored in void*, how do I get the value out?
You can't. You must know the valid types that the pointer can be cast to before you can dereference it.
Here are a couple of options for using a generic type:
If you are able to use a C++17 compiler, you may use std::any (a short sketch follows below).
If you are able to use the Boost libraries, you may use boost::any.
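A minimal sketch of the std::any option (C++17); the point is that the type is recorded for you and checked when you cast the value back out:
#include <any>
#include <iostream>
#include <string>

int main()
{
    std::any data = std::string("My Name");           // the type is stored alongside the value
    if (auto* s = std::any_cast<std::string>(&data))  // pointer form: returns nullptr on a type mismatch
        std::cout << *s << '\n';

    data = 42;                                        // now it holds an int instead
    std::cout << std::any_cast<int>(data) << '\n';    // reference form: throws std::bad_any_cast if wrong
}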
Unlike Java, you are working with memory pointers in C/C++. There is no encapsulation whatsoever. The void * type means the variable is an address in memory. Anything can be stored there. With a type like int * you tell the compiler what you are referring to. Besides, the compiler knows the size of the type (say 4 bytes for int), and the address will be a multiple of 4 in that case (granularity/memory alignment). On top of that, if you give the compiler the type, it will perform consistency checks at compilation time, not afterwards. None of this happens with void *.
In a nutshell, you are working close to the bare metal. Types are compile-time directives and do not hold runtime information, and nothing tracks the objects you are dynamically creating. What you allocate is merely a segment of memory in which you can eventually store anything.
The main reason to use void* is that different things may be pointed at. Thus, I may pass in an int* or Node* or anything else. But unless you know either the type or the length, you can't do anything with it.
But if you know the length, you can handle the memory pointed at without knowing the type. Casting to char* is used because a char is a single byte, so if I have a void* and a number of bytes, I can copy the memory somewhere else, or zero it out.
Additionally, if it is a pointer to a class but you don't know whether it is the base or a derived class, you may be able to assume one and inspect a flag inside the data which tells you which it is. But no matter what, when you want to do much beyond passing it to another function, you need to cast it to something; char* is just the easiest single-byte type to use.
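As a small sketch of that length-only handling (the function names are illustrative, and it assumes the pointed-to data is trivially copyable):
#include <cstddef>
#include <cstring>

// copy or zero a buffer knowing only its address and its length in bytes
void copy_buffer(void* dst, const void* src, std::size_t nbytes)
{
    std::memcpy(dst, src, nbytes);   // byte-wise copy; the element type is never needed
}

void zero_buffer(void* p, std::size_t nbytes)
{
    std::memset(p, 0, nbytes);
}

int main()
{
    int values[4] = {1, 2, 3, 4};
    int backup[4];
    copy_buffer(backup, values, sizeof values);
    zero_buffer(values, sizeof values);
}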
Your confusion stems from your habit of dealing with Java programs. Java code is a set of instructions for a virtual machine, where the role of RAM is played by a sort of database that stores the name, type, size and data of each object. The language you're learning now is meant to be compiled into instructions for the CPU, with the same organization of memory as the underlying OS. The memory model used by the C and C++ languages is an abstraction built on top of most popular OSes, designed so that compiled code works efficiently on that platform and OS. Naturally, that organization doesn't carry type information around, except for the famous RTTI in C++.
For your case RTTI cannot be used directly, unless you create a wrapper around your naked pointer that stores that information.
In fact, the C++ library contains a vast collection of container class templates that are usable and portable, as they are defined by the ISO standard; about three quarters of the standard is just a description of the library, often referred to as the STL. Using them is preferable to working with naked pointers, unless you mean to create your own container for some reason. For this particular task, only the C++17 standard introduced the std::any class, previously available in the Boost library. Naturally, it is possible to reimplement it or, in some cases, to replace it with std::variant.
Assuming I have no understanding of the type stored in void*, how do I get the value out
You don't.
What you can do is record the type stored in the void*.
In C, void* is used to pass around a chunk of data that points at something through one layer of abstraction, and to receive it at the other end, casting it back to the type that the receiving code knows it will be passed.
#include <cstdio>

void do_callback( void(*pfun)(void*), void* pdata ) {
pfun(pdata);
}
void print_int( void* pint ) {
printf( "%d", *(int*)pint );
}
int main() {
int x = 7;
do_callback( print_int, &x );
}
Here we forget the type of &x and pass it through do_callback.
It is later passed to code inside do_callback or elsewhere that knows that the void* is actually an int*. So it casts it back and uses it as an int.
The void* and the consumer void(*)(void*) are coupled. The above code is "provably correct", but the proof does not lie in the type system; instead, it depends on the fact we only use that void* in a context that knows it is an int*.
In C++ you can use void* similarly. But you can also get fancy.
Suppose you want a pointer to anything printable. Something is printable if it can be << to a std::ostream.
#include <cstddef>
#include <iostream>
#include <memory>
#include <type_traits>

struct printable {
void const* ptr = 0;
void(*print_f)(std::ostream&, void const*) = 0;
printable() {}
printable(printable&&)=default;
printable(printable const&)=default;
printable& operator=(printable&&)=default;
printable& operator=(printable const&)=default;
template<class T,std::size_t N>
printable( T(&t)[N] ):
ptr( t ),
print_f( []( std::ostream& os, void const* pt) {
T* ptr = (T*)pt;
for (std::size_t i = 0; i < N; ++i)
os << ptr[i];
})
{}
template<std::size_t N>
printable( char(&t)[N] ):
ptr( t ),
print_f( []( std::ostream& os, void const* pt) {
os << (char const*)pt;
})
{}
template<class T,
std::enable_if_t<!std::is_same<std::decay_t<T>, printable>{}, int> =0
>
printable( T&& t ):
ptr( std::addressof(t) ),
print_f( []( std::ostream& os, void const* pt) {
os << *(std::remove_reference_t<T>*)pt;
})
{}
friend
std::ostream& operator<<( std::ostream& os, printable self ) {
self.print_f( os, self.ptr );
return os;
}
explicit operator bool()const{ return print_f; }
};
What I just did is a technique called "type erasure" in C++ (vaguely similar to Java type erasure).
void send_to_log( printable p ) {
std::cerr << p;
}
Here we created an ad-hoc "virtual" interface to the concept of printing on a type.
The type need not support any actual interface (no binary layout requirements), it just has to support a certain syntax.
We create our own virtual dispatch table system for an arbitrary type.
This is used in the C++ standard library. In C++11 there is std::function<Signature>, and in C++17 there is std::any.
std::any is a void* that knows how to destroy and copy its contents, and if you know the type you can cast it back to the original type. You can also query it and ask whether it holds a specific type.
Mixing std::any with the above type-erasure technique lets you create regular types (that behave like values, not references) with arbitrary duck-typed interfaces.
The regular std::vector has emplace_back, which avoids an unnecessary copy. Is there a reason spsc_queue doesn't support this? Is it impossible to do emplace with lock-free queues for some reason?
I'm neither a Boost library implementer nor a maintainer, so the rationale for not including an emplace member function is beyond my knowledge, but it isn't too difficult to implement yourself if you really need it.
The spsc_queue has a base class of either compile_time_sized_ringbuffer or runtime_sized_ringbuffer depending on if the size of the queue is known at compilation or not. These two classes maintain the actual buffer used with the obvious differences between a dynamic buffer and compile-time buffer, but delegate, in this case, their push member functions to a common base class - ringbuffer_base.
The ringbuffer_base::push function is relatively easy to grok:
bool push(T const & t, T * buffer, size_t max_size)
{
const size_t write_index = write_index_.load(memory_order_relaxed); // only written from push thread
const size_t next = next_index(write_index, max_size);
if (next == read_index_.load(memory_order_acquire))
return false; /* ringbuffer is full */
new (buffer + write_index) T(t); // copy-construct
write_index_.store(next, memory_order_release);
return true;
}
The index of the location where the next item should be stored is read with a relaxed load (which is safe, since the intended use of this class is a single producer making the push calls), the appropriate next index is computed, and a bounds check is performed (with a load-acquire for proper synchronization with the thread that calls pop), but the main statement we're interested in is:
new (buffer + write_index) T(t); // copy-construct
This performs a placement-new copy construction into the buffer. There's nothing inherently thread-unsafe about instead passing along some parameters and constructing a T directly from viable constructor arguments. I wrote the following snippet and made the necessary changes throughout the derived classes to delegate the work up to the base class appropriately:
template<typename ... Args>
std::enable_if_t<std::is_constructible<T,Args...>::value,bool>
emplace( T * buffer, size_t max_size,Args&&... args)
{
const size_t write_index = write_index_.load(memory_order_relaxed); // only written from push thread
const size_t next = next_index(write_index, max_size);
if (next == read_index_.load(memory_order_acquire))
return false; /* ringbuffer is full */
new (buffer + write_index) T(std::forward<Args>(args)...); // emplace
write_index_.store(next, memory_order_release);
return true;
}
Perhaps the only difference is making sure that the arguments passed in Args... can actually be used to construct a T, and of course doing the emplacement via std::forward instead of a copy construction.
I have been using BDS 2006 Turbo C++ for a long time now, and some of my bigger projects (CAD/CAM, 3D gfx engines and astronomic computations) occasionally throw an exception (for example, once in 3-12 months of 24/7 heavy-duty usage). After extensive debugging I found this:
//code1:
struct _s { int i; }; // any struct
_s *s=new _s[1024]; // dynamic allocation
delete[] s; // free up memory
This code is usually inside a template where _s can also be a class, so delete[] should work properly; but delete[] does not work properly for structs (classes look OK). No exception is thrown and the memory is freed, but it somehow damages the memory manager's allocation tables, and after this any new allocation can be wrong (new can create allocations overlapping already allocated space or even unallocated space, hence the occasional exceptions).
I have found that if I add an empty destructor to _s then suddenly everything seems OK:
struct _s { int i; ~_s(){} };
Well, now comes the weird part. After applying this update to my projects, I found that the AnsiString class also has bad reallocations. For example:
//code2:
int i;
_s *dat=new _s[1024];
AnsiString txt="";
// setting of dat
for (i=0;i<1024;i++) txt+="bla bla bla\r\n";
// usage of dat
delete[] dat;
In this code dat contains some useful data; later the txt string is created by appending lines, so txt must be reallocated a few times, and sometimes the dat data is overwritten by txt (even though they do not overlap, I think the temporary AnsiString needed to reallocate txt overlaps with dat).
So my questions are:
Am I doing something wrong in code1 or code2?
Is there any way to avoid the AnsiString (re)allocation errors (while still using it)?
After extensive debugging (after posting question 2) I have found that AnsiString does not cause the problems; they only occur while using it. The real problem is probably in switching between OpenGL clients. I have Open/Save dialogs with previews for vector graphics. If I disable OpenGL usage for these VCL sub-windows, the AnsiString memory management errors disappear completely. I am not sure what the problem is (an incompatibility between MFC/VCL windows, or more likely I made some mistake in switching contexts; I will investigate further). The OpenGL windows concerned are:
main VCL Form + OpenGL inside Canvas client area
child of main MFC Open/Save dialog + docked preview VCL Form + OpenGL inside Canvas client area
P.S.
these errors depend on the number of new/delete/delete[] calls, not on the allocated sizes
both the code1 and code2 errors are repeatable (for example, I have a parser that loads a complex ini file, and the error occurs on the same line if the ini is not changed)
I detect these errors only in big projects (plain source code > 1 MB) with combined usage of AnsiString and templates with internal dynamic allocations, but it is possible that they are also present in simpler projects and just occur so rarely that I miss them.
Affected projects' specs:
win32 noinstall standalone (using Win7sp1 x64 but on XPsp3 x32 behaves the same)
does not matter whether GDI or OpenGL/GLSL is used
does not matter whether device driver DLLs are used or not
no OCX or nonstandard VCL components
no DirectX
1 Byte aligned compilation/link
do not use RTL,packages or frameworks (standalone)
Sorry for bad English/grammar ...
any help / conclusion / suggestion appreciated.
After extensive debugging I finally isolated the problem.
The memory management of BDS 2006 Turbo C++ becomes corrupted after you call any form of delete on an already deleted pointer, for example:
BYTE *dat=new BYTE[10],*tmp=dat;
delete[] dat;
delete[] tmp;
After this the memory management is not reliable ('new' can allocate already allocated space).
Of course, deleting the same pointer twice is a bug on the programmer's side, but I have found the real cause of all my problems, which produces this situation without any obvious bug in the source code. See this code:
//---------------------------------------------------------------------------
class test
{
public:
int siz;
BYTE *dat;
test()
{
siz=10;
dat=new BYTE[siz];
}
~test()
{
delete[] dat; // <- add breakpoint here
siz=0;
dat=NULL;
}
test& operator = (const test& x)
{
int i;
for (i=0;i<siz;i++) if (i<x.siz) dat[i]=x.dat[i];
for ( ;i<siz;i++) dat[i]=0;
return *this;
}
};
//---------------------------------------------------------------------------
test get()
{
test a;
return a; // here call a.~test();
} // here second call a.~test();
//---------------------------------------------------------------------------
void main()
{
get();
}
//---------------------------------------------------------------------------
In the function get(), the destructor for a is called twice: once for the real a and once for its copy, because I forgot to define a copy constructor
test::test(const test &x);
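For completeness, a minimal sketch of that missing constructor, added inside class test and assuming a plain deep copy is the intended behavior:
test(const test& x)
{
    siz = x.siz;
    dat = new BYTE[siz];
    for (int i = 0; i < siz; i++) dat[i] = x.dat[i];
}
With this in place the copy made when returning a owns its own buffer, so the two destructor calls no longer delete the same pointer.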
[Edit1] further upgrades of code
OK, I have refined the initialization code for classes, structs and even templates to fix even more bug cases. Add this code to any struct/class/template, and extend the functionality if needed:
T() {}
T(const T& a) { *this=a; }
~T() {}
T* operator = (const T *a) { *this=*a; return this; }
//T* operator = (const T &a) { ...copy... return this; }
T is the struct/class name
the last operator is needed only if T uses dynamic allocations inside it; if no allocations are used, you can leave it as is
This also resolves other compiler issues like this:
Too many initializers error for a simple array in bcc32
If anyone has similar problems, I hope this helps.
Also look at traceback a pointer in c++ code mmap if you need to debug your memory allocations...
I started studying smart pointers of C++11 and I don't see any useful use of std::weak_ptr. Can someone tell me when std::weak_ptr is useful/necessary?
std::weak_ptr is a very good way to solve the dangling pointer problem. By just using raw pointers it is impossible to know if the referenced data has been deallocated or not. Instead, by letting a std::shared_ptr manage the data, and supplying std::weak_ptr to users of the data, the users can check validity of the data by calling expired() or lock().
You could not do this with std::shared_ptr alone, because all std::shared_ptr instances share the ownership of the data which is not removed before all instances of std::shared_ptr are removed. Here is an example of how to check for dangling pointer using lock():
#include <iostream>
#include <memory>
int main()
{
// OLD, problem with dangling pointer
// PROBLEM: ref will point to undefined data!
int* ptr = new int(10);
int* ref = ptr;
delete ptr;
// NEW
// SOLUTION: check expired() or lock() to determine if pointer is valid
// empty definition
std::shared_ptr<int> sptr;
// takes ownership of pointer
sptr.reset(new int);
*sptr = 10;
// get pointer to data without taking ownership
std::weak_ptr<int> weak1 = sptr;
// deletes managed object, acquires new pointer
sptr.reset(new int);
*sptr = 5;
// get pointer to new data without taking ownership
std::weak_ptr<int> weak2 = sptr;
// weak1 is expired!
if(auto tmp = weak1.lock())
std::cout << "weak1 value is " << *tmp << '\n';
else
std::cout << "weak1 is expired\n";
// weak2 points to new data (5)
if(auto tmp = weak2.lock())
std::cout << "weak2 value is " << *tmp << '\n';
else
std::cout << "weak2 is expired\n";
}
Output
weak1 is expired
weak2 value is 5
A good example would be a cache.
For recently accessed objects, you want to keep them in memory, so you hold a strong pointer to them. Periodically, you scan the cache and decide which objects have not been accessed recently. You don't need to keep those in memory, so you get rid of the strong pointer.
But what if that object is in use and some other code holds a strong pointer to it? If the cache gets rid of its only pointer to the object, it can never find it again. So the cache keeps a weak pointer to objects that it needs to find if they happen to stay in memory.
This is exactly what a weak pointer does -- it allows you to locate an object if it's still around, but doesn't keep it around if nothing else needs it.
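A minimal sketch of the weak-pointer half of such a cache; Image, ImageCache and get() are illustrative names, not a real API:
#include <map>
#include <memory>
#include <string>

struct Image { std::string path; /* decoded pixels ... */ };

class ImageCache {
    std::map<std::string, std::weak_ptr<Image>> m_entries;   // weak: does not keep images alive
public:
    std::shared_ptr<Image> get(const std::string& path) {
        if (auto alive = m_entries[path].lock())              // still in memory because someone uses it?
            return alive;                                     // found it again without owning it
        auto loaded = std::make_shared<Image>(Image{path});   // otherwise (re)load it
        m_entries[path] = loaded;                             // remember it weakly for next time
        return loaded;
    }
};

int main() {
    ImageCache cache;
    auto img1 = cache.get("a.png");   // loaded
    auto img2 = cache.get("a.png");   // same object, found again through the weak_ptr
}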
Another answer, hopefully simpler (for fellow Googlers).
Suppose you have Team and Member objects.
Obviously it's a relationship: the Team object will have pointers to its Members, and it's likely that the Members will also have a back pointer to their Team object.
Then you have a dependency cycle. If you use shared_ptr, objects will no longer be automatically freed when you abandon your references to them, because they reference each other in a cyclic way. This is a memory leak.
You break this by using weak_ptr. The "owner" typically uses a shared_ptr and the "owned" holds a weak_ptr to its parent, converting it temporarily to a shared_ptr when it needs access to its parent.
Store a weak_ptr:
weak_ptr<Parent> parentWeakPtr_ = parentSharedPtr; // automatic conversion to weak from shared
then use it when needed
shared_ptr<Parent> tempParentSharedPtr = parentWeakPtr_.lock(); // on the stack, from the weak ptr
if( !tempParentSharedPtr ) {
// yes, it may fail if the parent was freed since we stored weak_ptr
} else {
// do stuff
}
// tempParentSharedPtr is released when it goes out of scope
Here's one example, given to me by @jleahy: Suppose you have a collection of tasks, executed asynchronously and managed by std::shared_ptr<Task>. You may want to do something with those tasks periodically, so a timer event may traverse a std::vector<std::weak_ptr<Task>> and give the tasks something to do. However, a task may simultaneously decide that it is no longer needed and die. The timer can thus check whether a task is still alive by making a shared pointer from the weak pointer and using that shared pointer, provided it isn't null.
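A minimal sketch of that pattern; Task, do_work() and on_timer() are illustrative names:
#include <memory>
#include <vector>

struct Task {
    void do_work() { /* ... */ }
};

std::vector<std::weak_ptr<Task>> pending;   // the timer's non-owning view of the tasks

void on_timer() {
    for (auto& weak : pending) {
        if (auto task = weak.lock())        // still alive? take temporary shared ownership
            task->do_work();
        // otherwise the task has already died and is simply skipped
    }
}

int main() {
    auto t = std::make_shared<Task>();      // owned elsewhere, e.g. by the executor
    pending.push_back(t);
    on_timer();                             // the task gets work
    t.reset();                              // the task dies on its own
    on_timer();                             // the expired entry is skipped
}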
When using pointers it's important to understand the different types of pointers available and when it makes sense to use each one. There are four types of pointers in two categories as follows:
Raw pointers:
Raw Pointer [ i.e. SomeClass* ptrToSomeClass = new SomeClass(); ]
Smart pointers:
Unique Pointers [ i.e. std::unique_ptr<SomeClass> uniquePtrToSomeClass ( new SomeClass() ); ]
Shared Pointers [ i.e. std::shared_ptr<SomeClass> sharedPtrToSomeClass ( new SomeClass() ); ]
Weak Pointers [ i.e. std::weak_ptr<SomeClass> weakPtrToSomeWeakOrSharedPtr ( weakOrSharedPtr ); ]
Raw pointers (sometimes referred to as "legacy pointers", or "C pointers") provide 'bare-bones' pointer behavior and are a common source of bugs and memory leaks. Raw pointers provide no means for keeping track of ownership of the resource and developers must call 'delete' manually to ensure they are not creating a memory leak. This becomes difficult if the resource is shared as it can be challenging to know whether any objects are still pointing to the resource. For these reasons, raw pointers should generally be avoided and only used in performance-critical sections of the code with limited scope.
Unique pointers are a basic smart pointer that 'owns' the underlying raw pointer to the resource and is responsible for calling delete and freeing the allocated memory once the object that 'owns' the unique pointer goes out of scope. The name 'unique' refers to the fact that only one object may 'own' the unique pointer at a given point in time. Ownership may be transferred to another object via the move command, but a unique pointer can never be copied or shared. For these reasons, unique pointers are a good alternative to raw pointers in the case that only one object needs the pointer at a given time, and this alleviates the developer from the need to free memory at the end of the owning object's lifecycle.
Shared pointers are another type of smart pointer that is similar to a unique pointer, but allows many objects to share ownership of the pointer. Like unique pointers, shared pointers are responsible for freeing the allocated memory once all objects are done pointing to the resource. They accomplish this with a technique called reference counting. Each time a new object takes ownership of the shared pointer, the reference count is incremented by one. Similarly, when an object goes out of scope or stops pointing to the resource, the reference count is decremented by one. When the reference count reaches zero, the allocated memory is freed. For these reasons, shared pointers are a very powerful type of smart pointer that should be used any time multiple objects need to point to the same resource.
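A small sketch of that reference counting, observable through use_count():
#include <iostream>
#include <memory>

int main()
{
    auto sp1 = std::make_shared<int>(42);
    std::cout << sp1.use_count() << '\n';   // 1: only sp1 owns the int
    {
        auto sp2 = sp1;                     // copying increments the count to 2
        std::cout << sp1.use_count() << '\n';
    }                                       // sp2 is destroyed: count drops back to 1
    std::cout << sp1.use_count() << '\n';
    // when sp1 is destroyed the count reaches 0 and the int is freed
}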
Finally, weak pointers are another type of smart pointer that, rather than pointing to a resource directly, point to another pointer (weak or shared). Weak pointers can't access an object directly, but they can tell whether the object still exists or whether it has expired. A weak pointer can be temporarily converted to a shared pointer to access the pointed-to object (provided it still exists). To illustrate, consider the following example:
You are busy and have overlapping meetings: Meeting A and Meeting B
You decide to go to Meeting A and your co-worker goes to Meeting B
You tell your co-worker that if Meeting B is still going after Meeting A ends, you will join
The following two scenarios could play out:
Meeting A ends and Meeting B is still going, so you join
Meeting A ends and Meeting B has also ended, so you can't join
In the example, you have a weak pointer to Meeting B. You are not an "owner" in Meeting B so it can end without you, and you do not know whether it ended or not unless you check. If it hasn't ended, you can join and participate, otherwise, you cannot. This is different than having a shared pointer to Meeting B because you would then be an "owner" in both Meeting A and Meeting B (participating in both at the same time).
The example illustrates how a weak pointer works and is useful when an object needs to be an outside observer, but does not want the responsibility of sharing ownership. This is particularly useful in the scenario that two objects need to point to each other (a.k.a. a circular reference). With shared pointers, neither object can be released because they are still 'strongly' pointed to by the other object. When one of the pointers is a weak pointer, the object holding the weak pointer can still access the other object when needed, provided it still exists.
They are useful with Boost.Asio when you are not guaranteed that a target object still exists when an asynchronous handler is invoked. The trick is to bind a weak_ptr into the asynchronous handler object, using std::bind or lambda captures.
void MyClass::startTimer()
{
std::weak_ptr<MyClass> weak = shared_from_this();
timer_.async_wait( [weak](const boost::system::error_code& ec)
{
auto self = weak.lock();
if (self)
{
self->handleTimeout();
}
else
{
std::cout << "Target object no longer exists!\n";
}
} );
}
This is a variant of the self = shared_from_this() idiom often seen in Boost.Asio examples, where a pending asynchronous handler will not prolong the lifetime of the target object, yet is still safe if the target object is deleted.
shared_ptr: holds the real object.
weak_ptr: uses lock() to connect to the real owner, or returns a null shared_ptr otherwise.
Roughly speaking, the weak_ptr's role is similar to that of a housing agency. Without agents, to find a house to rent we might have to check random houses in the city. The agents make sure that we visit only those houses which are still accessible and available for rent.
weak_ptr is also good for checking the correct deletion of an object, especially in unit tests. A typical use case might look like this:
std::weak_ptr<X> weak_x{ shared_x };
shared_x.reset();
BOOST_CHECK(weak_x.lock());
... //do something that should remove all other copies of shared_x and hence destroy x
BOOST_CHECK(!weak_x.lock());
Apart from the other valid use cases already mentioned, std::weak_ptr is an awesome tool in a multithreaded environment, because:
It doesn't own the object and so can't hinder deletion in a different thread
std::shared_ptr in conjunction with std::weak_ptr is safe against dangling pointers - in opposite to std::unique_ptr in conjunction with raw pointers
std::weak_ptr::lock() is an atomic operation (see also About thread-safety of weak_ptr)
Consider the task of loading all images of a directory (~10,000) simultaneously into memory (e.g. as a thumbnail cache). Obviously the best way to do this is a control thread, which handles and manages the images, and multiple worker threads, which load the images. Now this is an easy task. Here's a very simplified implementation (join() etc. is omitted; the threads would have to be handled differently in a real implementation, and so on):
// a simplified class to hold the thumbnail and data
struct ImageData {
std::string path;
std::unique_ptr<YourFavoriteImageLibData> image;
};
// a simplified reader fn
void read( std::vector<std::shared_ptr<ImageData>> imagesToLoad ) {
for( auto& imageData : imagesToLoad )
imageData->image = YourFavoriteImageLib::load( imageData->path );
}
// a simplified manager
class Manager {
std::vector<std::shared_ptr<ImageData>> m_imageDatas;
std::vector<std::unique_ptr<std::thread>> m_threads;
public:
void load( const std::string& folderPath ) {
std::vector<std::string> imagePaths = readFolder( folderPath );
m_imageDatas = createImageDatas( imagePaths );
const unsigned numThreads = std::thread::hardware_concurrency();
std::vector<std::vector<std::shared_ptr<ImageData>>> splitDatas =
splitImageDatas( m_imageDatas, numThreads );
for( auto& dataRangeToLoad : splitDatas )
m_threads.push_back( std::make_unique<std::thread>(read, dataRangeToLoad) );
}
};
But it becomes much more complicated if you want to interrupt the loading of the images, e.g. because the user has chosen a different directory, or even if you want to destroy the manager.
You'd need thread communication and would have to stop all loader threads before you could change your m_imageDatas field; otherwise the loaders would carry on until all images are loaded, even if they are already obsolete. In the simplified example that wouldn't be too hard, but in a real environment things can be much more complicated.
The threads would probably be part of a thread pool used by multiple managers, of which some are being stopped and some aren't, etc. The simple parameter imagesToLoad would be a locked queue into which those managers push their image requests from different control threads, with the readers popping the requests - in an arbitrary order - at the other end. So the communication becomes difficult, slow and error-prone. A very elegant way to avoid any additional communication in such cases is to use std::shared_ptr in conjunction with std::weak_ptr.
// a simplified reader fn
void read( std::vector<std::weak_ptr<ImageData>> imagesToLoad ) {
for( auto& imageDataWeak : imagesToLoad ) {
std::shared_ptr<ImageData> imageData = imageDataWeak.lock();
if( !imageData )
continue;
imageData->image = YourFavoriteImageLib::load( imageData->path );
}
}
// a simplified manager
class Manager {
std::vector<std::shared_ptr<ImageData>> m_imageDatas;
std::vector<std::unique_ptr<std::thread>> m_threads;
public:
void load( const std::string& folderPath ) {
std::vector<std::string> imagePaths = readFolder( folderPath );
m_imageDatas = createImageDatas( imagePaths );
const unsigned numThreads = std::thread::hardware_concurrency();
std::vector<std::vector<std::weak_ptr<ImageData>>> splitDatas =
splitImageDatasToWeak( m_imageDatas, numThreads );
for( auto& dataRangeToLoad : splitDatas )
m_threads.push_back( std::make_unique<std::thread>(read, dataRangeToLoad) );
}
};
This implementation is nearly as easy as the first one, doesn't need any additional thread communication, and could be part of a thread pool/queue in a real implementation. Since expired images are skipped and non-expired images are processed, the threads would never have to be stopped during normal operation.
You can always safely change the path or destroy your managers, since the reader fn checks whether the owning pointer has expired.
I see a lot of interesting answers that explain reference counting etc., but I am missing a simple example that demonstrates how you prevent a memory leak using weak_ptr. In the first example I use shared_ptr in cyclically referenced classes. When the objects go out of scope, they are NOT destroyed.
#include<iostream>
#include<memory>
using namespace std;
class B;
class A
{
public:
shared_ptr<B>bptr;
A() {
cout << "A created" << endl;
}
~A() {
cout << "A destroyed" << endl;
}
};
class B
{
public:
shared_ptr<A>aptr;
B() {
cout << "B created" << endl;
}
~B() {
cout << "B destroyed" << endl;
}
};
int main()
{
{
shared_ptr<A> a = make_shared<A>();
shared_ptr<B> b = make_shared<B>();
a->bptr = b;
b->aptr = a;
}
// put breakpoint here
}
If you run the code snippet you will see that the objects are created but never destroyed:
A created
B created
Now we change the shared_ptrs to weak_ptrs:
class B;
class A
{
public:
weak_ptr<B>bptr;
A() {
cout << "A created" << endl;
}
~A() {
cout << "A destroyed" << endl;
}
};
class B
{
public:
weak_ptr<A>aptr;
B() {
cout << "B created" << endl;
}
~B() {
cout << "B destroyed" << endl;
}
};
int main()
{
{
shared_ptr<A> a = make_shared<A>();
shared_ptr<B> b = make_shared<B>();
a->bptr = b;
b->aptr = a;
}
// put breakpoint here
}
This time, when using weak_ptr we see proper class destruction:
A created
B created
B destroyed
A destroyed
I see std::weak_ptr<T> as a handle to a std::shared_ptr<T>: It allows me
to get the std::shared_ptr<T> if it still exists, but it will not extend its
lifetime. There are several scenarios in which this point of view is useful:
// Some sort of image; very expensive to create.
std::shared_ptr< Texture > texture;
// A Widget should be able to quickly get a handle to a Texture. On the
// other hand, I don't want to keep Textures around just because a widget
// may need it.
struct Widget {
std::weak_ptr< Texture > texture_handle;
void render() {
if (auto texture = texture_handle.lock()) {
// do stuff with texture. Warning: `texture`
// is now extending the lifetime because it
// is a std::shared_ptr< Texture >.
} else {
// gracefully degrade; there's no texture.
}
}
};
Another important scenario is to break cycles in data structures.
// Asking for trouble because a node owns the next node, and the next node owns
// the previous node: memory leak; no destructors automatically called.
struct Node {
std::shared_ptr< Node > next;
std::shared_ptr< Node > prev;
};
// Asking for trouble because a parent owns its children and children own their
// parents: memory leak; no destructors automatically called.
struct Node {
std::shared_ptr< Node > parent;
std::shared_ptr< Node > left_child;
std::shared_ptr< Node > right_child;
};
// Better: break dependencies using a std::weak_ptr (but not best way to do it;
// see Herb Sutter's talk).
struct Node {
std::shared_ptr< Node > next;
std::weak_ptr< Node > prev;
};
// Better: break dependencies using a std::weak_ptr (but not best way to do it;
// see Herb Sutter's talk).
struct Node {
std::weak_ptr< Node > parent;
std::shared_ptr< Node > left_child;
std::shared_ptr< Node > right_child;
};
Herb Sutter has an excellent talk that explains the best use of language
features (in this case smart pointers) to ensure Leak Freedom by Default
(meaning: everything clicks in place by construction; you can hardly screw it
up). It is a must watch.
http://en.cppreference.com/w/cpp/memory/weak_ptr
std::weak_ptr is a smart pointer that holds a non-owning ("weak") reference to an object that is managed by std::shared_ptr. It must be converted to std::shared_ptr in order to access the referenced object.
std::weak_ptr models temporary ownership: when an object needs to be accessed only if it exists, and it may be deleted at any time by someone else, std::weak_ptr is used to track the object, and it is converted to std::shared_ptr to assume temporary ownership. If the original std::shared_ptr is destroyed at this time, the object's lifetime is extended until the temporary std::shared_ptr is destroyed as well.
In addition, std::weak_ptr is used to break circular references of std::shared_ptr.
There is a drawback of shared pointers:
shared_ptr can't handle a parent-child cyclic dependency. That is, if the parent class holds the child object through a shared pointer and, in the same file, the child class holds the parent object through a shared pointer, then the shared pointers fail to destroy all objects; in a cyclic dependency scenario the destructors are never called, because the reference counts can never drop to zero.
We can overcome this drawback by using weak_ptr.
When we do not want to own the object:
Ex:
class A
{
shared_ptr<int> sPtr1;
weak_ptr<int> wPtr1;
}
In the above class, wPtr1 does not own the resource it points to. If the resource is deleted, then wPtr1 is expired.
To avoid circular dependency:
class A --- shared_ptr<B> ---> class B
   ^                              |
   +------- shared_ptr<A> --------+
Now if we create the shared_ptrs to A and B as above, the use_count of both pointers is two.
When the shared_ptrs go out of scope, each count still remains 1, and hence the A and B objects do not get deleted.
#include <iostream>
#include <memory>
using namespace std;

class B;
class A
{
shared_ptr<B> sP1; // use weak_ptr instead to avoid CD
public:
A() { cout << "A()" << endl; }
~A() { cout << "~A()" << endl; }
void setShared(shared_ptr<B>& p)
{
sP1 = p;
}
};
class B
{
shared_ptr<A> sP1;
public:
B() { cout << "B()" << endl; }
~B() { cout << "~B()" << endl; }
void setShared(shared_ptr<A>& p)
{
sP1 = p;
}
};
int main()
{
shared_ptr<A> aPtr(new A);
shared_ptr<B> bPtr(new B);
aPtr->setShared(bPtr);
bPtr->setShared(aPtr);
return 0;
}
output:
A()
B()
As we can see from the output, the A and B objects are never deleted, hence the memory leak.
To avoid this issue, just use weak_ptr in class A instead of shared_ptr, which makes more sense; a sketch follows below.
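A minimal sketch of that change, replacing only class A from the snippet above; wherever A actually needs the B object it would call lock() on the weak pointer first:
class A
{
    weak_ptr<B> wP1;               // weak back-reference: does not keep B alive
public:
    A() { cout << "A()" << endl; }
    ~A() { cout << "~A()" << endl; }
    void setShared(shared_ptr<B>& p)
    {
        wP1 = p;                   // store weakly; the cycle no longer pins the reference counts
    }
};
With this change both reference counts can drop to zero when main() returns, so both ~B() and ~A() are printed.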
Inspired by @offirmo's response, I wrote this code and then ran the Visual Studio diagnostic tool:
#include <iostream>
#include <vector>
#include <memory>
using namespace std;
struct Member;
struct Team;
struct Member {
int x = 0;
Member(int xArg) {
x = xArg;
}
shared_ptr<Team> teamPointer;
};
struct Team {
vector<shared_ptr<Member>> members;
};
void foo() {
auto t1 = make_shared<Team>();
for (int i = 0; i < 1000000; i++) {
t1->members.push_back(make_shared<Member>(i));
t1->members.back()->teamPointer = t1;
}
}
int main() {
foo();
while (1);
return 0;
}
When the Member's pointer to the team is shared_ptr<Team> teamPointer, the memory is not freed after foo() is done, i.e. it stays at around 150 MB.
But if it is changed to weak_ptr<Team> teamPointer, then in the diagnostic tool you'll see a peak and memory usage then returns to about 2 MB.