How do I make the map CONTAINER be allocated on the heap? - c++11

Basically I have this map container:
map <int, double> r_t_plus_1;
I know that r_t_plus_1 itself is on the stack, while its elements are dynamically allocated.
I want to know whether I can get the container itself on the heap, through an allocator property, or whether there is a better way to do so.

Sure, just use new or make_unique() to allocate it on the heap, and store it in a smart pointer:
auto r_t_plus_1 = std::make_unique<std::map<int, double>>();
You could also use make_shared if you want a shared pointer.
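For example, here is a minimal sketch of using the heap-allocated map through the smart pointer (note that std::make_unique is C++14; with a strict C++11 compiler, construct the unique_ptr from new directly):
#include <iostream>
#include <map>
#include <memory>

int main() {
    // The map object itself lives on the heap; the unique_ptr on the stack
    // owns it and destroys it automatically at scope exit.
    auto r_t_plus_1 = std::make_unique<std::map<int, double>>();
    (*r_t_plus_1)[0] = 1.5;
    r_t_plus_1->emplace(1, 2.5);
    std::cout << r_t_plus_1->size() << '\n';   // prints 2
}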

Related

Program changes memory values when using calloc() vs make() for slices

I am trying to build a slice of pointers manually, using C.calloc() to allocate the array portion of the slice. I am able to do this successfully, but when I try to add pointers that I allocated with make(), some of the values (of what the pointers point to) get changed seemingly randomly. On the other hand, if I C.calloc() space for the pointers I will be adding, the values are not changed. Or, if I allocate the slices with make() and the pointers I add are allocated with make(), the values are not changed.
I do notice that the memory locations of the pointers when using C.calloc() vs make() are very different, but I don't see why this should cause the memory to be changed randomly. I am new to Go, so please forgive me if I am overlooking something very simple.
Here is the code I use for allocating my slices manually:
type caster struct {
    ptr *byte
    len int64
    cap int64
}

var temp caster
temp.ptr = (*byte)(C.calloc(C.ulong(size), 8))
temp.len = int64(size)
temp.cap = int64(size)
newTable.table = *(*[]*entry)(unsafe.Pointer(&temp))
This works if the entries I add are allocated as follows:
var temp caster
var e []entry
temp.ptr = (*byte)(C.calloc(C.ulong(ninserts), 8))
temp.len = int64(ninserts)
temp.cap = int64(ninserts)
e = *(*[]entry)(unsafe.Pointer(&temp))
for i := 0; i < ninserts; i++ {
    e[i].val = hint64(rand.Int63())
}
for i := 0; i < ninserts; i++ {
    ht.insert(&e[i])
}
though the memory of the entries gets randomly changed if they are allocated as follows:
var e []entry = make([]entry, ninserts)
for i := 0; i < ninserts; i++ {
    e[i].val = hint64(rand.Int63())
}
for i := 0; i < ninserts; i++ {
    ht.insert(&e[i])
}
Unless I build my slices normally as follows:
newTable.table = make([]*entry, size)
I am trying to build a slice of pointers manually and with C.calloc() for allocating the array portion of the slice.
This is explicitly forbidden.
To quote from the official cgo documentation:
Go is a garbage collected language, and the garbage collector needs to know the location of every pointer to Go memory. Because of this, there are restrictions on passing pointers between Go and C.
In this section the term Go pointer means a pointer to memory allocated by Go (such as by using the & operator or calling the predefined new function) and the term C pointer means a pointer to memory allocated by C (such as by a call to C.malloc). Whether a pointer is a Go pointer or a C pointer is a dynamic property determined by how the memory was allocated; it has nothing to do with the type of the pointer.
Note that values of some Go types, other than the type's zero value, always include Go pointers. This is true of string, slice, interface, channel, map, and function types. A pointer type may hold a Go pointer or a C pointer. Array and struct types may or may not include Go pointers, depending on the element types. All the discussion below about Go pointers applies not just to pointer types, but also to other types that include Go pointers.
The key word in that last quoted paragraph is slice: values of slice type always include Go pointers, which means you must not allocate any of these types via C's allocators.
It is possible to defeat this enforcement by using the unsafe package, and of course there is nothing stopping the C code from doing anything it likes. However, programs that break these rules are likely to fail in unexpected and unpredictable ways.
This bit of your own code:
newTable.table=*(*[]*entry)(unsafe.Pointer(&temp));
violates the rules, but defeats their enforcement. You have allocated C memory, and are now trying to use it as if it were Go memory, in the form of a slice.

unique_ptr heap and stack allocation

Raw pointers can point to objects allocated on the stack or on the heap.
Heap allocation example:
// heap allocation
int* rawPtr = new int(100);
std::cout << *rawPtr << std::endl; // 100
Stack allocation example:
int i = 100;
int* rawPtr = &i;
std::cout << *rawPtr << std::endl; // 100
Heap allocation using unique_ptr example:
int* rawPtr = new int(100);
std::unique_ptr<int> uPtr(rawPtr);
std::cout << *uPtr << std::endl; // 100
Stack allocation using unique_ptr example:
int i = 100;
int* rawPtr = &i;
std::unique_ptr<int> uPtr(rawPtr); // runtime error when uPtr goes out of scope
Are 'smart pointers' intended to be used to point to dynamically created objects on the heap? For C++11, are we supposed to continue using raw pointers for pointing to stack allocated objects? Thank you.
Smart pointers are usually used to point to objects allocated with new and deleted with delete. They don't have to be used this way, but that would seem to be the intent, if we want to guess the intended use of the language constructs.
The reason your code crashes in the last example is because of the "deleted with delete" part. When it goes out of scope, the unique_ptr will try to delete the object it holds a pointer to. Since that object was allocated on the stack, this fails. It is just as if you had written delete rawPtr; yourself.
Since one usually uses smart pointers with heap objects, there is a function to allocate on the heap and convert to a smart pointer all in one go. std::unique_ptr<int> uPtr = std::make_unique<int>(100); will perform the actions of the first two lines of your third example. There is also a matching std::make_shared for shared pointers.
It is possible to use smart pointers with stack objects. What you do is specify the deleter used by the smart pointer, providing one that does not call delete. Since it's a stack variable and nothing need be done to delete it, the deleter could do nothing. Which makes one ask, what's the point of the smart pointer then, if all it does is call a function that does nothing? Which is why you don't commonly see smart pointers used with stack objects. But here's an example that shows some usefulness.
{
    char buf[32];
    auto erase_buf = [](char *p) { memset(p, 0, sizeof(buf)); };
    std::unique_ptr<char, decltype(erase_buf)> passwd(buf, erase_buf);

    get_password(passwd.get());
    check_password(passwd.get());
}
// The deleter will get called since passwd has gone out of scope.
// This will erase the memory in buf so that the password doesn't live
// on the stack any longer than it needs to. This also works for
// exceptions! Placing memset() at the end wouldn't catch that.
The runtime error is due to the fact that delete was called on a memory location that was never allocated with new.
If an object was not created with dynamic storage duration (here it lives on the stack rather than the 'heap'), then a 'smart pointer' with the default deleter will not behave correctly, as demonstrated by the runtime error.
Are 'smart pointers' intended to be used to point to dynamically
created objects on the heap? For C++11, are we supposed to continue
using raw pointers for pointing to stack allocated objects?
As for what one is supposed to do, well, it helps to think of the storage duration and specifically how the object was created.
If the object has automatic storage duration (stack) then avoid taking the address and use references. The ownership does not belong with the pointer and a reference makes the ownership clearer.
If the object has dynamic storage duration (heap) then a smart pointer is the way to go as it can then manage the ownership.
So for the last example, the following would be better (pointer owns the int):
auto uPtr = std::make_unique<int>(100);
The uPtr will have automatic storage duration and will call the destructor when it goes out of scope. The int will have dynamic storage duration (heap) and will be deleted by the smart pointer.
One could generally avoid using new and delete and avoid using raw pointers. With make_unique and make_shared, new isn't required.
Are 'smart pointers' intended to be used to point to dynamically created objects on the heap?
They are intended for heap-allocated objects to prevent leaks.
The guideline for C++ is to use plain pointers to refer to a single object (but not own it). The owner of the object holds it by value, in a container or via a smart pointer.
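As a small sketch of that guideline (the Widget/Owner names here are purely illustrative):
#include <memory>

struct Widget { int value; };

// The owner holds the Widget via a smart pointer (or by value, or in a container).
struct Owner {
    std::unique_ptr<Widget> widget;
    Owner() : widget(new Widget()) {}
};

// Non-owning code takes a plain pointer and never deletes it.
void observe(Widget *w) {
    if (w) w->value += 1;
}

int main() {
    Owner owner;
    observe(owner.widget.get());   // hand out a raw pointer, ownership stays put
    return 0;
}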
Are 'smart pointers' intended to be used to point to dynamically created objects on the heap?
Yes, but that's just the default. Notice that std::unique_ptr has constructor overloads (numbers (3) and (4) on its reference page) which take a pointer that you have obtained somehow, plus a "deleter" that you provide. In this case the unique pointer will not do anything with the heap (unless your deleter does so).
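For instance, here is a minimal sketch of that constructor pointing a unique_ptr at a stack object with a deleter that deliberately does nothing:
#include <memory>

int main() {
    int i = 100;                                  // automatic (stack) storage
    auto no_op = [](int *) { /* nothing to free */ };
    std::unique_ptr<int, decltype(no_op)> p(&i, no_op);
    return *p;                                    // no delete happens at scope exit
}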
For C++11, are we supposed to continue using raw pointers for pointing to stack allocated objects? Thank you.
You should use raw pointers in code that does not "own" the pointer - does not need to concern itself with allocation or deallocation; and that is regardless of whether you're pointing into the heap or the stack or elsewhere.
Another place to use it is when you're implementing some class that has a complex ownership pattern, for protected/private members.
PS: Please, forget about std::auto_ptr... pretend it never existed :-)

D1's auto and scope difference in memory allocation

D's docs say that when you use scope for local variables, they will be allocated on the stack (even if you're allocating a class instance). But what about the auto keyword? Does it guarantee that the instance will be allocated on the stack?
void foo() { auto instance = new MyClass();}
void foo() { scope instance = new MyClass();}
So can I assume that these two statements are equal (in terms of allocation)?
No, auto only infers the type.
There's no point in using auto if you want it to be allocated on the stack; that's what scope is (was) for.
They've brilliantly (read: not so much) decided to remove scope, delete, etc. from the language, so it will probably allocate on the heap anyway. Your best bet is to use the function called scoped in one of the modules, to allocate on the stack.
To answer the second question: in D1 those two statements are not equal. The first one allocates on the heap; the second one is supposed to allocate on the stack.

Call to _freea really necessary?

I am developing on Windows with DevStudio, in unmanaged C/C++.
I want to allocate some memory on the stack instead of the heap because I don't want to have to deal with releasing that memory manually (I know about smart pointers and all those things. I have a very specific case of memory allocation I need to deal with), similar to the use of A2W() and W2A() macros.
_alloca does that, but it is deprecated. It is suggested to use _malloca instead. But the _malloca documentation says that a call to _freea is mandatory for each call to _malloca. That defeats my purpose for using _malloca; I might as well use malloc or new instead.
Anybody knows if I can get away with not calling _freea without leaking and what the impacts are internally?
Otherwise, I will end up just using the deprecated _alloca function.
It is always important to call _freea after every call to _malloca.
_malloca is like _alloca, but adds some extra security checks and enhancements for your protection. As a result, it's possible for _malloca to allocate on the heap instead of the stack. If this happens, and you do not call _freea, you will get a memory leak.
In debug mode, _malloca ALWAYS allocates on the heap, so it must be freed there as well.
Search for _ALLOCA_S_THRESHOLD for details on how the thresholds work, and why _malloca exists instead of _alloca, and it should make sense.
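For reference, a minimal usage sketch (MSVC-specific; the function name and element type are just illustrative):
#include <cstddef>
#include <malloc.h>   // _malloca / _freea (MSVC CRT)

void process(std::size_t n)
{
    // _malloca allocates on the stack for small requests and silently falls
    // back to the heap above _ALLOCA_S_THRESHOLD, so _freea is mandatory.
    int *buf = static_cast<int *>(_malloca(n * sizeof(int)));
    if (buf) {
        for (std::size_t i = 0; i < n; ++i)
            buf[i] = 0;
        _freea(buf);  // no-op for stack allocations, frees heap allocations
    }
}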
Edit:
There have been comments suggesting that the person just allocate on the heap, and use smart pointers, etc.
There are advantages to stack allocations, which _malloca will provide you, so there are reasons for wanting to do this. _alloca works the same way, but is much more likely to cause a stack overflow or other problem, and unfortunately does not provide nice exceptions, tending instead to just tear down your process. _malloca is much safer in this regard and protects you, but the cost is that you still need to free your memory with _freea, since it's possible (though unlikely in release mode) that _malloca will choose to allocate on the heap instead of the stack.
If your only goal is to avoid having to free memory, I would recommend using a smart pointer that will handle the freeing for you as it goes out of scope. This allocates memory on the heap, but is safe and spares you from having to release it manually. This will only work in C++, though; if you're using plain ol' C, this approach will not work.
If you are trying to allocate on the stack for other reasons (typically performance, since stack allocations are very, very fast), I would recommend using _malloca and living with the fact that you'll need to call _freea on your values.
Another thing to consider is using an RAII class to manage the allocation - of course that's only useful if your macro (or whatever) can be restricted to C++.
If you want to avoid hitting the heap for performance reasons, take a look at the techniques used by Matthew Wilson's auto_buffer<> template class (http://www.stlsoft.org/doc-1.9/classstlsoft_1_1auto__buffer.html). This will allocate on the stack unless your runtime size request exceeds a size specified at compile time, so you get the speed of no heap allocation for the majority of allocations (if you size the template right), but everything still works correctly if you exceed that size.
Since STLsoft has a whole lot of cruft to deal with portability issues, you may want to look at a simpler version of auto_buffer<> which is described in Wilson's book, "Imperfect C++".
I found it quite handy in an embedded project.
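The core idea is simple enough to sketch (this is a simplified illustration of the pattern, not the actual stlsoft code):
#include <cstddef>

// Simplified sketch of the auto_buffer<> pattern: small requests use an
// internal array (no heap allocation), larger ones fall back to new[].
template <typename T, std::size_t InternalSize = 256>
class small_buffer {
public:
    explicit small_buffer(std::size_t n)
        : size_(n), data_(n <= InternalSize ? internal_ : new T[n]) {}
    ~small_buffer() { if (data_ != internal_) delete[] data_; }

    small_buffer(const small_buffer &) = delete;
    small_buffer &operator=(const small_buffer &) = delete;

    T *data() { return data_; }
    std::size_t size() const { return size_; }

private:
    std::size_t size_;
    T internal_[InternalSize];
    T *data_;
};
Sized sensibly for the common case, most allocations never touch the heap, which is exactly the speed win being described.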
To allocate memory on the stack, simply declare a variable of the appropriate type and size.
I answered this before, but I'd missed something fundamental that meant that it only worked in debug mode. I moved the call to _malloca into the constructor of a class that would auto-free.
In debug this is fine, as it always allocates on the heap. However, in release, it allocates on the stack, and upon returning from the constructor, the stack pointer has been reset, and the memory lost.
I went back and took a different approach, resulting in a combination of using a macro (eurgh) to allocate the memory and instantiate an object that will automatically call _freea on that memory. As it's a macro, it's allocated in the same stack frame, and so will actually work in release mode. It's just as convenient as my class, but slightly less nice to use.
I did the following:
class EXPORT_LIB_CLASS CAutoMallocAFree
{
public:
    CAutoMallocAFree( void *pMem ) : m_pMem( pMem ) {}
    ~CAutoMallocAFree() { _freea( m_pMem ); }

private:
    void *m_pMem;

    CAutoMallocAFree();
    CAutoMallocAFree( const CAutoMallocAFree &rhs );
    CAutoMallocAFree &operator=( const CAutoMallocAFree &rhs );
};

#define AUTO_MALLOCA( Var, Type, Length ) \
    Type* Var = (Type *)( _malloca( ( Length ) * sizeof ( Type ) ) ); \
    CAutoMallocAFree __MALLOCA_##Var( (void *) Var );
This way I can allocate using the following macro call, and it's released when the instantiated class goes out of scope:
AUTO_MALLOCA( pBuffer, BYTE, Len );
Ar.LoadRaw( pBuffer, Len );
My apologies for posting something that was plainly wrong!
If you're using _malloca() then you must call _freea() to prevent a memory leak, because _malloca() can do the allocation either on the stack or on the heap. It resorts to allocating on the heap if the given size exceeds _ALLOCA_S_THRESHOLD. Thus it's safer to call _freea(), which won't do anything if the allocation happened on the stack.
If you're using _alloca(), which seems to be deprecated as of today, there is no need to call _freea(), as the allocation happens on the stack.
If your concern is having to free temp memory, and you know all about things like smart-pointers then why not use a similar pattern where memory is freed when it goes out of scope?
template <class T>
class TempMem
{
public:
    TempMem( size_t size )
    {
        mAddress = new T[size];
    }

    ~TempMem()
    {
        delete [] mAddress;
    }

    T* mAddress;
};

void foo( void )
{
    TempMem<int> buffer( 1024 );

    // alternatively you could override the T* operator..
    some_memory_stuff( buffer.mAddress );

    // temp-mem auto-freed when buffer goes out of scope
}

Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

Greetings,
I'm able to walk a process's memory map using logic like this:
MEMORY_BASIC_INFORMATION mbi;
void *lpAddress = (void *)0;

while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
    fprintf(fptr, "Mem base:%-10p start:%-10p Size:%-10llx Type:%-10lx State:%-10lx\n",
            mbi.AllocationBase,
            mbi.BaseAddress,
            (unsigned long long)mbi.RegionSize,
            mbi.Type, mbi.State);
    // advance by region size using byte-sized pointer arithmetic (64-bit safe)
    lpAddress = (void *)((char *)mbi.BaseAddress + mbi.RegionSize);
}
I'd like to know if a given segment is used for static allocation, stack, and/or heap and/or other?
Is there any way to determine that?
I'm curious, what do you plan on doing with this information?
There is a windbg extension, !address, which can get you this information, if you don't need code to do it. Scripting the debugger will probably be much more reliable to get this info.
VirtualQuery can't return this information to you on its own, since it has no idea why user mode code requested the memory. You need to use it with other information sources to get this info, and there may still be some error cases.
First, you should filter only by MEM_PRIVATE memory... heap, stack, and static allocations (provided they've been modified) should fall within that range.
Static allocations (globals, etc.) should be at an address with a loaded module. You can use PSAPI to determine if the address is within a loaded module, for example, calling EnumProcessModules and then GetModuleInformation.
For stack values, you can use the Toolhelp API to determine if the memory location is in a thread's stack: call CreateToolhelp32Snapshot with TH32CS_SNAPTHREAD to enumerate the threads in the target process, then GetThreadContext and check whether the resulting stack pointer falls within the segment.
I don't know of a good way to walk heaps from outside the process. Toolhelp snaps a heap list but doesn't give you a good set of bounds for the heap memory. From within the process, you can use GetProcessHeaps to walk the list of heaps, and then call HeapValidate to determine if the memory location is within a heap.
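As an illustration of the module check mentioned above, here is a sketch for the current process (the helper name is hypothetical; for a remote process you would open a handle with OpenProcess instead of GetCurrentProcess):
#include <windows.h>
#include <psapi.h>   // EnumProcessModules / GetModuleInformation; link with psapi.lib

// Returns true if addr lies inside any module loaded in the current process,
// i.e. it is likely a static/global allocation rather than heap or stack.
bool AddressIsInLoadedModule(const void *addr)
{
    HMODULE mods[1024];
    DWORD needed = 0;
    HANDLE proc = GetCurrentProcess();
    if (!EnumProcessModules(proc, mods, sizeof(mods), &needed))
        return false;
    DWORD count = needed / sizeof(HMODULE);
    if (count > 1024) count = 1024;
    for (DWORD i = 0; i < count; ++i) {
        MODULEINFO mi;
        if (GetModuleInformation(proc, mods[i], &mi, sizeof(mi))) {
            const char *base = static_cast<const char *>(mi.lpBaseOfDll);
            const char *p = static_cast<const char *>(addr);
            if (p >= base && p < base + mi.SizeOfImage)
                return true;
        }
    }
    return false;
}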
