As I understand it, every time I use new I must use delete to avoid memory leaks. In my example, in push_back(), I use new to re-allocate the dynamic memory for my vector. I think that every time I call push_back() I allocate a new block of memory and don't delete the old one. If I'm right, can someone explain how I should delete that memory while keeping the stored values (and their addresses) intact? If I'm wrong, please explain why.
class Vector {
    double* ptr;
    size_t size;
    size_t max_size;
public:
    Vector() : size{0}, max_size{1} {
        ptr = new double(max_size);
    }
    ~Vector() {
        delete ptr;
    }
    void push_back(const double elem) {
        if (size + 1 == max_size) {
            double* buf_ptr = ptr;
            max_size *= 2;
            ptr = new double(max_size);
        }
        ptr[size] = elem;
        size += 1;
    }
};
Three things:
You want to use new double[max_size] (allocate an array of doubles), not new double(max_size) (allocate a single double initialized to max_size).
After the new allocation succeeds, you want to copy the data from the old array (e.g. with std::copy), or you'll drop the old data on the floor.
Once you've successfully allocated the new array and copied the old data, delete[] buf_ptr to release the old array's memory.
All of that can be fixed by replacing this line:
ptr = new double(max_size);
with these lines:
ptr = new double[max_size];
std::copy(buf_ptr, buf_ptr + size, ptr);
delete[] buf_ptr;
Make sure to #include <algorithm> for std::copy at the top of your source file.
Note: you'd also need to adjust your constructor and destructor to use the matching array-based forms of new and delete, respectively.
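Putting the three changes together, one possible corrected version looks like this:

#include <algorithm>  // std::copy
#include <cstddef>    // size_t

class Vector {
    double* ptr;
    size_t size;
    size_t max_size;
public:
    Vector() : size{0}, max_size{1} {
        ptr = new double[max_size];   // array form of new
    }
    ~Vector() {
        delete[] ptr;                 // matching array form of delete
    }
    void push_back(const double elem) {
        if (size + 1 == max_size) {
            double* buf_ptr = ptr;
            max_size *= 2;
            ptr = new double[max_size];               // allocate the larger array
            std::copy(buf_ptr, buf_ptr + size, ptr);  // keep the old values
            delete[] buf_ptr;                         // release the old array
        }
        ptr[size] = elem;
        size += 1;
    }
};

(Note that the class still has no copy constructor or copy assignment operator, so copying a Vector would lead to a double delete; that's a separate issue from the leak, though.)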
This is a class which contains image data.
class MyMat
{
public:
    int width, height, format;
    uint8_t *data;
};
I want to design MyMat with automatic memory management. The image data could be shared among many objects.
Common APIs which I'm going to design:
+) C++ 11
+) Assignment : share data
MyMat a2(w, h, fmt);
.................
a2 = a1;
+) Accessing data should be simple and short.
It should be possible to use a raw pointer directly.
In general, I want to design MyMat to be like OpenCV's cv::Mat.
Could you suggest a proper design?
1) Using std::vector<uint8_t> data
I would have to write some code to remove the copy constructor and assignment operator, because someone could call them and cause a memory copy.
The compiler must support copy elision and return value optimization.
Always having to use move assignment and pass by reference is inconvenient:
a2 = std::move(a1)
void test(MyMat &mat)
std::queue<MyMat> lists;
lists.push(std::move(a1))
..............................
2) Use shared_ptr<uint8_t> data
Following this guideline http://www.codingstandard.com/rule/17-3-4-do-not-create-smart-pointers-of-array-type/,
we shouldn't create smart pointers of array type.
3) Use shared_ptr< std::vector<uint8_t> > data
To access the data you have to write (*a1.data)[0]; the syntax is very inconvenient.
4) Use raw pointer, uint8_t *data
Write proper constructor and destructor for this class.
To get automatic memory management, wrap it in a smart pointer:
shared_ptr<MyMat> mat
std::queue< shared_ptr<MyMat> > lists;
Matrix classes are normally expected to be a value type with deep copying. So, stick with std::vector<uint8_t> and let the user decide whether copy is expensive or not in their specific context.
Instead of raw pointers for arrays prefer std::unique_ptr<T[]> (note the square brackets).
std::array - fixed length in-place buffer (beautified array)
std::vector - variable length buffer
std::shared_ptr - shared ownership data
std::weak_ptr - expiring view on shared data
std::unique_ptr - unique ownership
std::string_view, std::span, std::ref, &, * - reference to data with no assumption of ownership
The simplest design is to have a single owner and a RAII-enforced lifetime, ensuring everything that needs to be alive at a certain time is alive and needs no other ownership; so generally I'd see if I could live with std::unique_ptr<T> before complicating things further (unless I can fit all my data on the stack, in which case I don't even need a unique_ptr).
On a side note: shared pointers are not free; they need a dynamic memory allocation for the shared state (two allocations if done incorrectly :) ), whereas unique pointers are true "zero overhead" RAII.
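To make the std::vector<uint8_t> suggestion concrete, a minimal value-type sketch might look like this (the constructor signature and the bytes-per-pixel parameter are my own assumptions, not part of the question):

#include <cstddef>
#include <cstdint>
#include <vector>

struct MyMat {
    int width = 0, height = 0, format = 0;
    std::vector<uint8_t> data;

    MyMat() = default;
    MyMat(int w, int h, int fmt, std::size_t bytes_per_pixel)
        : width(w), height(h), format(fmt),
          data(static_cast<std::size_t>(w) * h * bytes_per_pixel) {}

    // raw access stays simple and short
    uint8_t* ptr() { return data.data(); }
    const uint8_t* ptr() const { return data.data(); }
};

With this, a2 = a1 deep-copies the pixels, a2 = std::move(a1) hands the buffer over without copying, and the destructor needs no hand-written code; if sharing is really required, it can be layered on top, e.g. as shared_ptr<const MyMat>.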
Matrixes should use value semantics, and they should be nearly free to move.
Matrixes should support a view type as well.
There are two approaches for a basic Matrix that make sense.
First, a Matrix type that wraps a vector<T> with a stride field. This has an overhead of 3 instead of 2 pointers (or 1 pointer and a size) compared to a hand-rolled one. I don't consider that significant; the ease of debugging a vector<T> etc makes it more than worth that overhead.
In this case you'd want to write a separate MatrixView.
I'd use CRTP to create a common base class for both to implement operator[] and stride fields.
A distinct basic Matrix approach is to make your Matrix immutable. In this case, the Matrix wraps a std::shared_ptr<T const> and a std::shared_ptr<std::mutex> and (local, or stored with the mutex) width, height and stride field.
Copying such a Matrix just duplicates handles.
Modifying such a Matrix causes you to acquire the std::mutex, then check that shared_ptr<T const> has a use_count()==1. If it does, you cast-away const and modify the data referred to in the shared_ptr. If it does not, you duplicate the buffer, create a new mutex, and operate on the new state.
Here is a copy-on-write matrix buffer:
#include <algorithm>  // std::copy
#include <cstddef>    // std::size_t
#include <memory>     // std::shared_ptr, std::unique_ptr, std::make_shared
#include <mutex>      // std::mutex, std::unique_lock
#include <utility>    // std::forward

template<class T>
struct cow_buffer {
    std::size_t rows() const { return m_rows; }
    std::size_t cols() const { return m_cols; }

    cow_buffer( T const* in, std::size_t rows, std::size_t cols, std::size_t stride ) {
        copy_in( in, rows, cols, stride );
    }

    void copy_in( T const* in, std::size_t rows, std::size_t cols, std::size_t stride ) {
        // note the buffer isn't *really* const (it is allocated as a plain T[]),
        // which is what makes the const_cast in modify() legal:
        auto new_data = std::unique_ptr<T[]>( new T[rows*cols] );
        for (std::size_t i = 0; i < rows; ++i)
            std::copy( in + i*stride, in + i*stride + cols, new_data.get() + i*cols );
        m_data = std::shared_ptr<T const>( new_data.release(), std::default_delete<T[]>() );
        m_rows = rows;
        m_cols = cols;
        m_stride = cols;
        m_lock = std::make_shared<std::mutex>();
    }

    template<class F>
    decltype(auto) read( F&& f ) const {
        return std::forward<F>(f)( m_data.get() );
    }

    template<class F>
    decltype(auto) modify( F&& f ) {
        // keep the current mutex alive: copy_in() below replaces m_lock
        auto keep_alive = m_lock;
        auto lock = std::unique_lock<std::mutex>(*keep_alive);
        if (m_data.use_count() == 1) {
            // sole owner: modify in place
            return std::forward<F>(f)( const_cast<T*>(m_data.get()) );
        }
        // shared: copy first, then modify the fresh buffer
        auto old_data = m_data;
        copy_in( old_data.get(), m_rows, m_cols, m_stride );
        return std::forward<F>(f)( const_cast<T*>(m_data.get()) );
    }

    explicit operator bool() const { return m_data && m_lock; }

private:
    std::shared_ptr<T const> m_data;
    std::shared_ptr<std::mutex> m_lock;
    std::size_t m_rows = 0, m_cols = 0, m_stride = 0;
};
something like that.
The mutex is required to synchronize the threads that, at different times, become sole owners and modify m_data, so that the data from a previous write is properly synchronized with the current one.
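A small usage sketch, assuming the cow_buffer above is in scope (the pixel values are made up for illustration):

#include <cstdint>

int main() {
    uint8_t pixels[2 * 3] = {1, 2, 3, 4, 5, 6};      // 2 rows, 3 cols, stride 3
    cow_buffer<uint8_t> a(pixels, 2, 3, 3);
    cow_buffer<uint8_t> b = a;                        // cheap: both handles share one buffer

    b.modify([](uint8_t* p) { p[0] = 42; });          // use_count() > 1, so b copies before writing
    uint8_t first = a.read([](uint8_t const* p) { return p[0]; });  // still 1: a was not touched
    (void)first;
}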
I was looking at the Microsoft site about single inheritance. In the example given (the code is copied at the end), I am not sure how memory is allocated for Name. Memory is allocated for 10 objects, but Name is a pointer member of the class. I guess I can assign a constant string, something like
DocLib[i]->Name = "Hello";
But we cannot change that string. In such a situation, do I need to allocate memory for Name as well, using the new operator in the same for loop, something like
DocLib[i]->Name = new char[50];
The code from Microsoft site is here:
// deriv_SingleInheritance4.cpp
// compile with: /W3
struct Document {
    char *Name;
    void PrintNameOf() {}
};

class PaperbackBook : public Document {};

int main() {
    Document *DocLib[10];   // Library of ten documents.
    for (int i = 0; i < 10; i++)
        DocLib[i] = new Document;
}
In short, yes. Name is just a pointer to a char (or char array). Instantiating the structure does not allocate space for that char (or array); you have to allocate the space yourself and make the pointer (Name) point to it. In the following case
DocLib[i]->Name = "Hello";
the memory (for "Hello") is allocated in the read-only data section of the executable (on load), and your pointer just points to that location. That's why it's not modifiable.
Alternatively, you could use std::string objects instead of char pointers.
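A minimal sketch of both options (the buffer size and the strcpy call are only for illustration):

#include <cstring>
#include <string>

struct Document {
    char *Name;
    void PrintNameOf() {}
};

int main() {
    Document *doc = new Document;

    // Option 1: allocate a writable buffer yourself and release it later.
    doc->Name = new char[50];
    std::strcpy(doc->Name, "Hello");
    doc->Name[0] = 'J';      // fine: the buffer is writable
    delete[] doc->Name;
    delete doc;

    // Option 2: let std::string manage the memory
    // (this means changing the member from char* to std::string).
    std::string name = "Hello";
    name[0] = 'J';           // also fine
}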
This issue comes from my misunderstanding of how the standard library uses my custom allocator. I have a stateful allocator that keeps a vector of allocated blocks. This vector is pushed into when allocating and searched through during de-allocation.
From my debugging it appears that different instances of my allocator (the this pointers differ) are involved in de-allocation. For example, MyAllocator (this = 1) is called to allocate 20 bytes, then some time later MyAllocator (this = 2) is called to de-allocate those 20 bytes. Obviously the vector in MyAllocator (this = 2) doesn't contain the 20-byte block allocated by the other allocator, so it fails to de-allocate. My understanding was that C++11 allows stateful allocators; what's going on, and how do I fix this?
I already have my operator== set to return true only when this == &rhs.
pseudo-code:
template<typename T>
class MyAllocator
{
public:
    ptr allocate(int n)
    {
        ...make a block of size sizeof(T) * n
        blocks.push_back(block);
        return (ptr)block.start;
    }

    void deallocate(ptr start, int n)
    {
        /* This fails because the blocks vector is not the same,
           so it doesn't find the block it wants. */
        blocks.erase(std::remove_if(blocks.begin(), blocks.end(),
            [=](const MemoryBlocks& block)
            {
                return block.start >= (uint64_t)start &&
                       block.end <= (uint64_t)start + sizeof(T) * n;
            }), blocks.end());
    }

    bool operator==(const MyAllocator& rhs) const
    {
        // my attempt to make sure the internal states are the same
        return this == &rhs;
    }

private:
    std::vector<MemoryBlocks> blocks;
};
I'm using this allocator for a std::vector, on gcc, so as far as I know there is no weird rebind stuff going on.
As @Igor mentioned, allocators must be copyable. Importantly, though, they must share their state between copies, even after they have been copied from. In this case the fix was easy: as suggested, I made the blocks vector a shared_ptr, so after a copy all updates to that vector go to the same underlying vector, since every copy points to the same thing.
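A sketch of that fix, with the bookkeeping reduced to the bare minimum (MemoryBlock, the malloc-based allocation, and the member names here are placeholders, not the original code):

#include <cstddef>
#include <cstdlib>
#include <memory>
#include <vector>

struct MemoryBlock { void* start; std::size_t size; };

template<typename T>
class MyAllocator {
public:
    using value_type = T;

    MyAllocator() : blocks(std::make_shared<std::vector<MemoryBlock>>()) {}

    // copies (including rebound copies) share the same block list
    template<typename U>
    MyAllocator(const MyAllocator<U>& other) : blocks(other.blocks) {}

    T* allocate(std::size_t n) {
        void* p = std::malloc(sizeof(T) * n);
        blocks->push_back({p, sizeof(T) * n});
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t) {
        // every copy sees the same vector, so the block is always found
        for (auto it = blocks->begin(); it != blocks->end(); ++it) {
            if (it->start == p) {
                std::free(p);
                blocks->erase(it);
                return;
            }
        }
    }

    std::shared_ptr<std::vector<MemoryBlock>> blocks;
};

template<typename T, typename U>
bool operator==(const MyAllocator<T>& a, const MyAllocator<U>& b) {
    return a.blocks == b.blocks;   // equal exactly when they share state
}

template<typename T, typename U>
bool operator!=(const MyAllocator<T>& a, const MyAllocator<U>& b) {
    return !(a == b);
}

With this, a std::vector<int, MyAllocator<int>> can be copied or swapped freely; whichever allocator copy ends up calling deallocate finds the block in the shared list.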
The regular std::vector has emplace_back, which avoids an unnecessary copy. Is there a reason spsc_queue doesn't support this? Is it impossible to do emplace with lock-free queues for some reason?
I'm neither a Boost library implementer nor a maintainer, so the rationale for not including an emplace member function is beyond my knowledge, but it isn't too difficult to implement it yourself if you really need it.
The spsc_queue has a base class of either compile_time_sized_ringbuffer or runtime_sized_ringbuffer, depending on whether the size of the queue is known at compile time. These two classes maintain the actual buffer used, with the obvious differences between a dynamic and a compile-time buffer, but in this case they delegate their push member functions to a common base class, ringbuffer_base.
The ringbuffer_base::push function is relatively easy to grok:
bool push(T const & t, T * buffer, size_t max_size)
{
    const size_t write_index = write_index_.load(memory_order_relaxed);  // only written from push thread
    const size_t next = next_index(write_index, max_size);

    if (next == read_index_.load(memory_order_acquire))
        return false; /* ringbuffer is full */

    new (buffer + write_index) T(t); // copy-construct
    write_index_.store(next, memory_order_release);
    return true;
}
The index of the slot where the next item should be stored is obtained with a relaxed load (which is safe, since the intended use of this class is a single producer calling push), the appropriate next index is computed, and a bounds check is performed (with a load-acquire for appropriate synchronization with the thread that calls pop). But the main statement we're interested in is:
new (buffer + write_index) T(t); // copy-construct
Which performs a placement new copy construction into the buffer. There's nothing inherently thread-unsafe about passing around some parameters to use to construct a T directly from viable constructor arguments. I wrote the following snippet and made the necessary changes throughout the derived classes to appropriately delegate the work up to the base class:
template<typename ... Args>
std::enable_if_t<std::is_constructible<T, Args...>::value, bool>
emplace( T * buffer, size_t max_size, Args&&... args )
{
    const size_t write_index = write_index_.load(memory_order_relaxed);  // only written from push thread
    const size_t next = next_index(write_index, max_size);

    if (next == read_index_.load(memory_order_acquire))
        return false; /* ringbuffer is full */

    new (buffer + write_index) T(std::forward<Args>(args)...); // emplace
    write_index_.store(next, memory_order_release);
    return true;
}
Perhaps the only difference is making sure that the arguments passed in Args... can actually be used to construct a T, and of course doing the emplacement via std::forward instead of a copy construction.
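For completeness, here is what the producer side could look like; the emplace wrapper on spsc_queue itself is not part of stock Boost (it would have to forward to the patched ringbuffer_base above), so it is shown commented out next to the plain push that Boost does provide:

#include <boost/lockfree/spsc_queue.hpp>
#include <string>

struct Msg {
    int id = 0;
    std::string text;
    Msg() = default;
    Msg(int i, std::string t) : id(i), text(std::move(t)) {}
};

int main() {
    // single-producer/single-consumer queue with compile-time capacity
    boost::lockfree::spsc_queue<Msg, boost::lockfree::capacity<128>> q;

    // stock Boost: build a temporary, then copy-construct it into the ring buffer
    q.push(Msg(42, "hello"));

    // with the patched ringbuffer_base and a forwarding wrapper on spsc_queue,
    // the element could instead be constructed in place:
    // q.emplace(42, "hello");
}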
Given a map of maps like:
std::map<unsigned int, std::map<std::string, MyBase*>> m_allMyObjects;
What would be the most efficient way to insert/add/"emplace" an element into m_allMyObjects given an unsigned int and a std::string taking optimization into account (on modern compilers)?
What would be the most efficient way to retrieve an element then?
m_allMyObjects may potentially contain up to 100'000 elements in the future.
Common knowledge and folklore about how to efficiently insert into maps (typically telling you to avoid operator[] and to prefer the shiny new emplace) considers the costs of constructing and copying the values in the map. In your case, those values are plain pointers, which can be copied at virtually no expense, and copying pointers can be aggressively optimized by the compiler.
On the other hand, you actually do have an object that is expensive to handle, namely the key of type std::string. You need to watch out for copies (moved or copied) of the key to determine performance. Obviously, for the tree lookup, you already need the string object, even if you have it as char*, as there is no insertion function that is templated over the type of key. This means for looking up the place to insert, you use one certain std::string object, but once the map node gets created, the new std::string object inside the map is copy-initialized from it (possibly moved). Avoiding everything in excess of that single copy/move should be your goal.
Example time!
#include <map>
#include <cstdio>

struct noisy {
    noisy(int v) : val(v) {}
    noisy(const noisy& src) : val(src.val) { std::puts("copy ctor"); }
    noisy(noisy&& src) : val(src.val) { std::puts("move ctor"); }
    noisy& operator=(const noisy& src)
    { val = src.val; std::puts("copy assign"); return *this; }
    noisy& operator=(noisy&& src)
    { val = src.val; std::puts("move assign"); return *this; }
    int val;
};

bool operator<(const noisy& a, const noisy& b)
{
    return a.val < b.val;
}

int main(void)
{
    std::map<noisy,int> m;
    std::puts("Operator[]");
    m[noisy(1)] = 3;
    std::puts("insert/make_pair");
    m.insert(std::make_pair(noisy(2), 3));
    std::puts("insert/make_pair/ref");
    m.insert(std::make_pair<noisy&&,int>(noisy(3), 3));
    std::puts("insert/pair/ref");
    m.insert(std::pair<noisy&&,int>(noisy(4), 3));
    std::puts("emplace");
    m.emplace(noisy(5), 3);
}
Compiled with g++ 4.9.1, -std=c++11, -O2, the result is:
Operator[]
move ctor
insert/make_pair
move ctor
move ctor
insert/make_pair/ref
move ctor
move ctor
insert/pair/ref
move ctor
emplace
move ctor
Which shows: avoid everything that creates an intermediate pair containing a copy of the key! Be aware that std::make_pair never creates a pair that contains references, even though it can take its parameters by reference! Whenever you pass a pair containing a copy of the key, the key gets copied into the pair and later into the map.
The expression suggested by MarkMB, namely m[int_k][str_k] = ptr, is quite good, and likely produces optimal code. There is no reason for the first index (int_k) to not use [], as you want a default constructed sub-map if the index is not used yet, so there is no unnecessary overhead. As we have seen, indexing with the string gets away with a single copy, so you are fine. If you can afford to lose your string, m[int_k][std::move(str_k)] = ptr might be a win, though. As discussed in the beginning, using emplace instead of [] is only about the values, which are virtually free to handle in your case.
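For completeness, a small sketch of what insertion and lookup could look like for the map-of-maps in the question (int_k, str_k, and ptr follow the naming used in the discussion; MyBase is the question's type):

#include <map>
#include <string>
#include <utility>

struct MyBase {};

std::map<unsigned int, std::map<std::string, MyBase*>> m_allMyObjects;

void insertObject(unsigned int int_k, std::string str_k, MyBase* ptr) {
    // at most one string copy/move; move if the caller no longer needs str_k
    m_allMyObjects[int_k][std::move(str_k)] = ptr;
}

MyBase* findObject(unsigned int int_k, const std::string& str_k) {
    // find() avoids creating empty entries the way operator[] would
    auto outer = m_allMyObjects.find(int_k);
    if (outer == m_allMyObjects.end()) return nullptr;
    auto inner = outer->second.find(str_k);
    return inner == outer->second.end() ? nullptr : inner->second;
}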