I'm using VMMap from SysInternals to look at memory allocated by my Win32 C++ process on WinXP, and I see a bunch of allocations where portions of the allocated memory are reserved but not committed. As far as I can tell, from my reading and testing, all of the common memory allocators (e.g., malloc, new, LocalAlloc, GlobalAlloc) used in a C++ program always allocate fully committed blocks of memory.
Heaps are a common example of code that reserves memory but doesn't commit it until needed. I suspect that some of these blocks are Windows/CRT heaps, but there appears to be more of these types of blocks than I would expect for heaps. I see on the order of 30 of these blocks in my process, between 64k and 8MB in size, and I know that my code never intentionally calls VirtualAlloc to allocate reserved, uncommitted memory.
Here are a couple of examples from VMMap: http://www.flickr.com/photos/95123032#N00/5280550393/
What else would allocate such blocks of memory, where much of it is reserved but not committed? Would it make sense that my process has 30 heaps? Thanks.
I figured it out - it's the CRT heap that gets allocated by calls to malloc. If you allocate a large chunk of memory (e.g., 2 MB) using malloc, it allocates a single committed block of memory. But if you allocate smaller chunks (say 177kb), then it will reserve a 1 MB chunk of memory, but only commit approximately what you asked for (e.g., 184kb for my 177kb request).
When you free that small chunk, that larger 1 MB chunk is not returned to the OS. Everything but 4k is uncommitted, but the full 1 MB is still reserved. If you then call malloc again, it will attempt to use that 1 MB chunk to satisfy your request. If it can't satisfy your request with the memory that it's already reserved, it will allocate a new chunk of memory that's twice the previous allocation (in my case it went from 1 MB to 2 MB). I'm not sure if this pattern of doubling continues or not.
To actually return your freed memory to the OS, you can call _heapmin. I would think that this would make a future large allocation more likely to succeed, but it would all depend on memory fragmentation, and perhaps _heapmin already gets called when an allocation fails (I'm not sure). There would also be a performance hit, since _heapmin would release the memory (taking time) and malloc would then need to re-allocate it from the OS when needed again. This information is for 32-bit Windows XP; your mileage may vary.
UPDATE: In my testing, _heapmin did absolutely nothing. Also, the malloc heap is only used for blocks smaller than 512kb. Even if there are MBs of contiguous free space in the malloc heap, it will not use it for requests over 512kb. In my case, this freed, unused, yet reserved malloc memory chewed up huge parts of my process's 2GB address space, eventually leading to memory allocation failures. And since _heapmin doesn't return the memory to the OS, I haven't found any solution to this problem other than restarting my process or writing my own memory manager.
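Here is a rough repro sketch of what I described (block size, count and the pauses are arbitrary); it just stops between phases so the process can be inspected in VMMap after allocating, after freeing, and after _heapmin:
    #include <malloc.h>   // _heapmin (MSVC CRT)
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        const int    kCount = 50;
        const size_t kSize  = 177 * 1024;   // "small" sub-512 KB blocks
        void* blocks[kCount];

        for (int i = 0; i < kCount; ++i)    // allocate from the CRT small-block heap
            blocks[i] = malloc(kSize);
        std::printf("allocated - check VMMap, then press Enter\n");
        std::getchar();

        for (int i = 0; i < kCount; ++i)    // free everything; reservations remain
            free(blocks[i]);
        std::printf("freed - check VMMap, then press Enter\n");
        std::getchar();

        _heapmin();                         // ask the CRT to return unused heap memory to the OS
        std::printf("_heapmin called - check VMMap, then press Enter\n");
        std::getchar();
        return 0;
    }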
Whenever a thread is created in your application a certain (configurable) amount of memory will be reserved in the address space for the call stack of the thread. There's no need to commit all the reserved memory unless your thread is actually going to need all of that memory. So only a portion needs to be committed.
If the thread needs more than the committed amount, additional pages are committed on demand (up to the reserved limit) as the stack grows.
The practical consideration is that the reserved memory is a hard limit on the stack size that reduces address space available to the application. However, by only committing a portion of the reserve, we don't have to consume the same amount of memory from the system until needed.
Therefore it is possible for each thread to have a portion of reserved uncommitted memory. I'm unsure what the page type will be in those cases.
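For illustration, here is a minimal sketch (the 8 MB figure is arbitrary) of creating a thread whose stack reserve size is specified explicitly, while Windows commits the stack pages on demand via its guard-page mechanism:
    #include <windows.h>

    DWORD WINAPI worker(LPVOID)
    {
        // ... thread work ...
        return 0;
    }

    HANDLE start_worker()
    {
        // Reserve 8 MB of address space for the stack, but let Windows commit
        // pages only as the stack actually grows.
        return CreateThread(NULL,
                            8 * 1024 * 1024,                   // stack size parameter
                            worker,
                            NULL,
                            STACK_SIZE_PARAM_IS_A_RESERVATION, // treat the size as the reserve, not the commit
                            NULL);
    }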
Could they be the DLLs loaded into your process? DLLs (and the executable) are memory mapped into the process address space. I believe this initially just reserves space. The space is backed by the files themselves (at least initially) rather than the pagefile.
Only the code that's actually touched gets paged in. If I understand the terminology correctly, that's when it's committed.
You could confirm this by running your application in a debugger and looking at the modules that are loaded and comparing their locations and sizes to what you see in VMMap.
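If you'd rather do the comparison programmatically than in a debugger, a sketch along these lines (assuming PSAPI is available, as it is on XP) lists each module's base address and image size so you can match them against the regions VMMap shows:
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    #pragma comment(lib, "psapi.lib")

    void dump_modules()
    {
        HMODULE modules[512];
        DWORD bytesNeeded = 0;
        HANDLE process = GetCurrentProcess();

        if (!EnumProcessModules(process, modules, sizeof(modules), &bytesNeeded))
            return;
        if (bytesNeeded > sizeof(modules))
            bytesNeeded = (DWORD)sizeof(modules);   // clamp to our buffer

        DWORD count = bytesNeeded / sizeof(HMODULE);
        for (DWORD i = 0; i < count; ++i) {
            char path[MAX_PATH] = "";
            MODULEINFO info = {};
            GetModuleFileNameExA(process, modules[i], path, MAX_PATH);
            GetModuleInformation(process, modules[i], &info, sizeof(info));
            // Base address and image size should line up with the mapped-image
            // regions shown in VMMap.
            std::printf("%p %8lu KB %s\n",
                        info.lpBaseOfDll,
                        static_cast<unsigned long>(info.SizeOfImage / 1024),
                        path);
        }
    }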
In my program, I am using malloc to allocate large amounts of memory (several hundred MB, in chunks of, say, 25 MB to 75 MB at a time). I subsequently free some of the chunks, then allocate some more. My question is: when I use free() to free memory, does it immediately release the block of memory, or does it merely mark it for freeing later? If the latter, is there some standard C library function to force it to be released immediately?
I am actually required to develop my program to be portable between Linux and VxWorks. On VxWorks, with one library I am using (VSIPL), I find that free does not release the memory immediately on the call.
The answer depends upon the malloc and free implementation. However, I can safely say that in nearly any implementation the memory is not returned to the operating system. Instead it goes back into a pool from which it can be reused by the process.
If you are using large blocks of memory, it is usually better to use the operating system's memory functions and do your own memory management. However, it is not a good idea to map pages in and out of memory too frequently.
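For example, on Linux a large buffer can be obtained directly from the kernel with mmap and handed back immediately with munmap, bypassing malloc's pool entirely (a minimal sketch; VxWorks would need its own equivalent):
    #include <sys/mman.h>
    #include <cstddef>

    // Allocate 'size' bytes of anonymous memory directly from the kernel.
    void* os_alloc(std::size_t size)
    {
        void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }

    // Return the whole block to the kernel immediately.
    void os_free(void* p, std::size_t size)
    {
        munmap(p, size);
    }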
I have heard that in embedded systems we should use preallocated fixed-size memory chunks (like a buddy memory system?). Could somebody give me a detailed explanation why?
Thanks,
In embedded systems you have very limited memory. Therefore, if you occasionally lose even a single byte of memory (because you allocate it but don't free it), this will eat up the system memory pretty quickly (1 GB of RAM with a leak rate of one byte per hour will take its time; with 4 kB of RAM, not as long).
Essentially, avoiding dynamic memory is about avoiding the effects of bugs in your program. Since static memory allocation is fully deterministic (while dynamic memory allocation is not), using only static allocation lets you counteract such bugs. One important factor is that embedded systems are often used in safety-critical applications, where a few hours of downtime could cost millions or an accident could happen.
Furthermore, depending on the allocator, dynamic memory allocation can also take an indeterminate amount of time, which can lead to more bugs, especially in systems relying on tight timing (thanks to Clifford for mentioning this). This type of bug is often hard to test and reproduce because it relies on a very specific execution path.
Additionally, embedded systems usually don't have MMUs, so there is no memory protection. If you run out of memory and your code to handle that condition doesn't work, you could end up executing arbitrary memory as instructions (bad things can happen! However, this case is only indirectly related to dynamic memory allocation).
As Hao Shen mentioned, fragmentation is also a danger. Whether it occurs depends on your exact use case, but in embedded systems it is quite easy to lose 50% of your RAM to fragmentation. You can only avoid fragmentation if you allocate chunks that always have exactly the same size.
Performance also plays a role (it depends on the use case - thanks Hao Shen). Statically allocated memory is laid out by the compiler, whereas malloc() and similar allocators need to run on the device and therefore consume CPU time (and power).
Many embedded OSs (e.g. ChibiOS) support some kind of dynamic memory allocator. But using it only increases the possibility of unexpected issues to occur.
Note that these arguments are often circumvented by using smaller statically allocated memory pools. This is not a real solution, as one can still run out of memory in those pools, but it will only affect a small part of the system.
As pointed out by Stephano Sanfilippo, some systems don't even have enough resources to support dynamic memory allocation.
Note: Most coding standards, including the JPL coding standard and DO-178B (for critical avionics code - thanks Stephano Sanfilippo), forbid the use of malloc.
I also assume the MISRA C standard forbids malloc() because of this forum post -- however I don't have access to the standard itself.
The main reasons not to use dynamic heap memory allocation here are basically:
a) Determinism and, correlated,
b) Memory fragmentation.
Memory leaks are usually not a problem in those small embedded applications, because they will be detected very early in development/testing.
Memory fragmentation can however become non-deterministic, causing (best case) out-of-memory errors at random times and points in the application in the field.
It may also be non-trivial to predict the actual maximum memory usage of the application during development with dynamic allocation, whereas the amount of statically allocated memory is known at compile time and it is absolutely trivial to check if that memory can be provided by the hardware or not.
Allocating memory from a pool of fixed-size chunks has a couple of advantages over dynamic memory allocation: it prevents heap fragmentation and it is more deterministic.
With dynamic memory allocation, dynamically sized memory chunks are allocated from a fixed-size heap. The allocations aren't necessarily freed in the same order they were allocated, so over time the free portions of the heap become divided up between allocated portions. As this fragmentation occurs, it becomes harder to satisfy requests for larger allocations: the heap may have enough total free memory, but if no single contiguous free section is large enough, the allocation will fail. The possibility of malloc() failing due to heap fragmentation is undesirable in embedded systems.
One way to combat fragmentation is to rejoin smaller memory allocations into larger contiguous sections as they are freed. This can be done in various ways, but they all take time and can make the system less deterministic. For example, if the memory manager scans the heap when an allocation is freed, then the time it takes free() to complete can vary depending on what is adjacent to the allocation being freed. That is non-deterministic and undesirable in many embedded systems.
Allocating from a pool of fixed-size chunks does not cause fragmentation. As long as there are free chunks, an allocation won't fail, because every chunk is the right size. Plus, allocating and freeing from a pool of fixed-size chunks is simpler, so the allocate and free functions can be written to be deterministic.
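As an illustration, here is a minimal fixed-size block pool sketch (block size and count are arbitrary): every allocation is the same size, so fragmentation cannot occur and both allocation and freeing are constant-time:
    #include <cstddef>

    static const std::size_t BLOCK_SIZE  = 64;    // every allocation is this size
    static const std::size_t BLOCK_COUNT = 128;   // total capacity, fixed at build time

    // Backing storage; the union member forces pointer alignment for the blocks.
    static union { void* align; unsigned char bytes[BLOCK_SIZE * BLOCK_COUNT]; } pool_storage;
    static void* free_list = 0;

    void pool_init()
    {
        // Thread every block onto the free list; the "next" pointer is stored
        // in the first bytes of the free block itself.
        for (std::size_t i = 0; i < BLOCK_COUNT; ++i) {
            void* block = pool_storage.bytes + i * BLOCK_SIZE;
            *static_cast<void**>(block) = free_list;
            free_list = block;
        }
    }

    void* pool_alloc()
    {
        if (free_list == 0)
            return 0;                              // pool exhausted: deterministic failure
        void* block = free_list;
        free_list = *static_cast<void**>(block);   // pop the head of the free list
        return block;
    }

    void pool_free(void* block)
    {
        *static_cast<void**>(block) = free_list;   // push the block back onto the list
        free_list = block;
    }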
I am writing a C++ program that essentially works with very large arrays. On Windows, I am using VirtualAlloc to allocate memory for my arrays. Now I fully understand the difference between reserving and committing memory using VirtualAlloc; however, I am wondering whether there is any benefit in committing memory page-by-page to a reserved region. In particular, MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx) contains the following explanation for the MEM_COMMIT option:
Actual physical pages are not allocated unless/until the virtual addresses are actually accessed.
My experiments confirm this: I can reserve and commit several GB of memory without increasing the memory usage of my process (as shown in Task Manager); actual memory gets allocated only when I actually access it.
Now I have seen quite a few examples arguing that one should reserve a large portion of the address space and then commit memory page-by-page (or in some larger blocks, depending on the app's logic). As explained above, however, memory does not seem to be committed before one accesses it; thus, I'm wondering whether there is any real benefit in committing memory page-by-page. In fact, committing memory page-by-page might actually slow my program down due to the many system calls for actually committing memory. If I commit the entire region at once, I pay for just one system call, but the kernel seems to be smart enough to actually allocate only the memory I actually use.
I would appreciate it if someone could explain to me which strategy is better.
The difference is that commit "backs" the memory against the page file. To give an example:
Given 2GB of physical ram and 2GB of swap (assume fixed-size swap for this purpose).
Reserve 6GB - OK.
Commit first 2GB - OK.
Commit remaining 4GB - fails.
Extend swap file to 8GB
Commit remaining 4GB - succeeds.
The reason for using MEM_COMMIT up front would primarily be runtime error suppression (app stability). If you have a process that commits pages on demand, there's always a chance that a commit along the way could fail if it exceeds the amount of memory + swap available. Once memory has been backed by the page file, you have a strong guarantee that it is available for use from now until the point that you release it.
There's a number of reasons to go one way or the other, and I don't think there's any perfect science to deciding which. MEM_RESERVE alone is only needed for very large sparse array scenarios, ex: multi-gigabyte array which has at most 25-33% utilization (a popular technique for accelerating hash tables, etc).
Almost everything else is gray area where you could probably go either way -- MEM_COMMIT up-front would make your own app a little more stable and essentially give it priority to physical ram over competing apps that might allocate on-demand. (if you grab the ram first then your app will be the last left standing when physical memory is exhausted) At the same time, if you're not actually using all that ram then you may end up limiting the multi-tasking potential of your client's machine or causing unnecessary wasted disk space via a growing page file.
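To make the two strategies concrete, here is a minimal sketch (the sizes are illustrative, not taken from the question):
    #include <windows.h>

    void demo()
    {
        const SIZE_T total = 512 * 1024 * 1024;   // 512 MB working buffer

        // (a) Commit everything up front: one system call, and the pagefile
        //     backing is guaranteed, but it counts against commit charge now.
        void* all = VirtualAlloc(NULL, total,
                                 MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

        // (b) Reserve now, commit later in pieces: the address range is pinned,
        //     but commit charge grows only as the program actually needs it.
        void* base  = VirtualAlloc(NULL, total, MEM_RESERVE, PAGE_NOACCESS);
        void* chunk = VirtualAlloc(base, 16 * 1024 * 1024,
                                   MEM_COMMIT, PAGE_READWRITE);

        // ... use the memory ...

        VirtualFree(all, 0, MEM_RELEASE);
        VirtualFree(base, 0, MEM_RELEASE);
    }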
I understand that memory has to be reserved before being committed. And when it's reserved, no other process can use it. However reserved memory does not count against available RAM. But shouldn't it? Because if no one else can use it, then what good is it being "available"?
Or is there some bigger difference?
In the context of Win32, "reserved" means that the address space is allocated within the process that requested it. This may be used, for example, to reserve space for a large buffer that's not all needed right away, but when it's needed it would need to be contiguous.
Reserving memory does not interact with other processes at all, since each process has its own private address space. So the statement that "when it's reserved, no other process can use it" is meaningless, since processes can't normally allocate memory in the address space of another process anyway.
When the reserved pages are requested to be committed (backing store allocated for them), that operation can potentially fail due to lack of physical memory (or pagefile).
I like to view Reserved as booking the address space so that no one else can allocate it (but I can't use memory at that address because it is not yet available). And Committed as mapping that address space to the physical memory so that it can be used.
Why would I want to reserve? Why not just get committed memory? There are several reasons I have in mind:
Some application needs a specific address range, say from 0x400000 to 0x600000, but does not need the memory for storing anything; it is used to trap memory accesses. E.g., if some code accesses such an area, it will be caught. (Useful for some purposes.)
Some thread needs to store progressively expanding data. And the data needs to be in one contiguous chunk of memory. It is preferred not to commit large physical memory at one go because it is not needed and would be such a waste. The memory can be utilized by some other threads first. The physical memory is committed only on demand.
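The second case can be sketched roughly like this (sizes and names are illustrative): reserve the maximum size once, and commit pages only as the data actually grows:
    #include <windows.h>

    const SIZE_T MAX_SIZE    = 256 * 1024 * 1024;   // reserved address space
    const SIZE_T COMMIT_STEP = 64 * 1024;           // commit granularity used here

    static unsigned char* g_base      = 0;
    static SIZE_T         g_committed = 0;

    bool buffer_init()
    {
        g_base = static_cast<unsigned char*>(
            VirtualAlloc(NULL, MAX_SIZE, MEM_RESERVE, PAGE_NOACCESS));
        return g_base != 0;
    }

    // Make sure the first 'needed' bytes of the buffer are committed.
    bool ensure_committed(SIZE_T needed)
    {
        if (needed <= g_committed)
            return true;
        if (needed > MAX_SIZE)
            return false;                             // would exceed the reservation
        SIZE_T newCommit = ((needed + COMMIT_STEP - 1) / COMMIT_STEP) * COMMIT_STEP;
        if (newCommit > MAX_SIZE)
            newCommit = MAX_SIZE;
        // Re-committing already committed pages is harmless, so commit from the base.
        if (VirtualAlloc(g_base, newCommit, MEM_COMMIT, PAGE_READWRITE) == NULL)
            return false;                             // out of memory / pagefile
        g_committed = newCommit;
        return true;
    }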
A process's virtual memory (address space) and actual RAM are two different things: you can have 512 MB of physical RAM while your process can still address a 4 GB virtual address space (2 GB of user space on 32-bit Windows).
Every address in a process can be thought of as either free, reserved, or committed at any given time.
A process begins with all addresses free, meaning they are free to be committed to memory or reserved for future use. Before any free address may be used, it must first be allocated as reserved or committed; however, an address does not need to be reserved first in order to be committed.
Reserving memory means reserving virtual address space for future use; it is not associated with physical RAM (not mapped to RAM addresses). Committed memory, on the other hand, is associated with actual RAM (or the pagefile) so that you can store data in it.
http://msdn.microsoft.com/en-us/library/ms810627.aspx
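You can see all three states directly by walking the process address space with VirtualQuery; a small sketch (roughly what VMMap summarises for you):
    #include <windows.h>
    #include <cstdio>

    void dump_address_space()
    {
        MEMORY_BASIC_INFORMATION mbi;
        unsigned char* address = 0;

        // VirtualQuery describes one region at a time; step through them all.
        while (VirtualQuery(address, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            const char* state = (mbi.State == MEM_COMMIT)  ? "committed"
                              : (mbi.State == MEM_RESERVE) ? "reserved"
                                                           : "free";
            std::printf("%p %8lu KB %s\n",
                        mbi.BaseAddress,
                        static_cast<unsigned long>(mbi.RegionSize / 1024),
                        state);
            address = static_cast<unsigned char*>(mbi.BaseAddress) + mbi.RegionSize;
        }
    }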
I think the simplest answer is that:
you reserve memory in virtual address space, so that no other part of code within the same process gets it
you commit memory to physical RAM/swap so that no other process gets it
So, for example, if a process has a memory limit of 1 GB and reserves 1 GB of address space at startup, it cannot allocate any more, even though the process's memory usage at the OS level (committed memory) is nearly 0.
Here's my question: Does calling free or delete ever release memory back to the "system". By system I mean, does it ever reduce the data segment of the process?
Let's consider the memory allocator on Linux, i.e., ptmalloc.
From what I know (please correct me if I am wrong), ptmalloc maintains a free list of memory blocks and when a request for memory allocation comes, it tries to allocate a memory block from this free list (I know, the allocator is much more complex than that but I am just putting it in simple words). If, however, it fails, it gets the memory from the system using say sbrk or brk system calls. When a memory is free'd, that block is placed in the free list.
Now consider this scenario, on peak load, a lot of objects have been allocated on heap. Now when the load decreases, the objects are free'd. So my question is: Once the object is free'd will the allocator do some calculations to find whether it should just keep this object in the free list or depending upon the current size of the free list it may decide to give that memory back to the system i.e decrease the data segment of the process using sbrk or brk?
Documentation of glibc tells me that if the allocation request is much larger than page size, it will be allocated using mmap and will be directly released back to the system once free'd. Cool. But let's say I never ask for allocation of size greater than say 50 bytes and I ask a lot of such 50 byte objects on peak load on the system. Then what?
From what I know (correct me please), memory allocated with malloc will never be released back to the system until the process ends; the allocator simply keeps it in the free list when I free it. But then the question troubling me is: if I use a tool to see the memory usage of my process (I am using pmap on Linux; what do you use?), it should always show the memory used at peak load (since the memory is never given back to the system, except when allocated using mmap)? That is, the memory used by the process should never decrease (except for stack memory)? Is that right?
I know I am missing something, so please shed some light on all this.
Experts, please clear my concepts regarding this. I will be grateful. I hope I was able to explain my question.
There isn't much overhead for malloc, so you are unlikely to achieve any run-time savings. There is, however, a good reason to implement an allocator on top of malloc, and that is to be able to trace memory leaks. For example, you can free all memory allocated by the program when it exits, and then check to see if your memory allocator calls balance (i.e. same number of calls to allocate/deallocate).
For your specific implementation, there is no reason to call free(), since malloc won't release memory back to the system anyway; it would only be returned to your own allocator.
Another reason for using a custom allocator is that you may be allocating many objects of the same size (i.e., you have some data structure that you allocate a lot). You may want to maintain a separate free list for this type of object and allocate/free only from this special list. The advantage is that it avoids memory fragmentation.
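A minimal sketch of that idea, using class-level operator new/delete overloads and storing the free-list link inside each freed block (the Node type is just an illustration):
    #include <cstddef>
    #include <new>

    struct Node {
        double payload[4];   // example data; must be at least pointer-sized

        void* operator new(std::size_t size)
        {
            if (free_list != 0) {
                void* block = free_list;
                free_list = *static_cast<void**>(block);   // pop from the free list
                return block;
            }
            return ::operator new(size);                    // fall back to the heap
        }

        void operator delete(void* block)
        {
            if (block == 0)
                return;
            *static_cast<void**>(block) = free_list;        // push onto the free list
            free_list = block;
        }

    private:
        static void* free_list;
    };

    void* Node::free_list = 0;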
No.
It's actually a bad strategy for a number of reasons, so it doesn't happen, except, as you note, for large allocations that can be made directly in whole pages.
It increases internal fragmentation and therefore can actually waste memory. (You can only return whole aligned pages to the OS, so pulling aligned pages out of a block will usually create two blocks that are guaranteed to be small (smaller than a page, anyway) on either side of it. If this happens a lot, you end up with the same total amount of usefully-allocated memory plus lots of useless small blocks.)
A kernel call is required, and kernel calls are slow, so it would slow down the program. It's much faster to just throw the block back into the heap.
Almost every program will either converge on a steady-state memory footprint or it will have an increasing footprint until exit. (Or, until near-exit.) Therefore, all the extra processing needed by a page-return mechanism would be completely wasted.
It is entirely implementation dependent. On Windows VC++ programs can return memory back to the system if the corresponding memory pages contain only free'd blocks.
I think you have all the information you need to answer your own question. pmap shows the memory that is currently being used by the process. So, if you call pmap before the process reaches peak memory, it will not show peak memory. If you call pmap just before the process exits, it will show peak memory for a process that does not use mmap. If the process uses mmap, then calling pmap at the point of maximum memory usage will show peak usage, but that point may not be at the end of the process (it could occur anywhere).
This applies only to your current system (i.e., based on the documentation you have provided for free, mmap and malloc), but as the previous poster stated, the behavior of these is implementation dependent.
This varies a bit from implementation to implementation.
Think of your memory as a massive long block, when you allocate to it you take a bit out of your memory (labeled '1' below):
111
If I allocate more memory with malloc, it gets some from the system:
1112222
If I now free '1':
___2222
It won't be returned to the system, because '2' is in front of it (and memory is given back as a contiguous block). However, if the end of the memory is freed, then that memory is returned to the system. If I freed '2' instead of '1', I would get:
111
the bit where '2' was would be returned to the system.
The main benefit of freeing memory is that the freed block can then be reallocated, rather than getting more memory from the system, e.g.:
33_2222
I believe that the memory allocator in glibc can return memory back to the system, but whether it will or not depends on your memory allocation patterns.
Let's say you do something like this:
    #include <stdlib.h>

    int main(void) {
        void *pointers[10000];
        for (int i = 0; i < 10000; i++)
            pointers[i] = malloc(1024);    /* 10000 small allocations */
        for (int i = 0; i < 9999; i++)     /* free all but the last one */
            free(pointers[i]);
        return 0;
    }
The only part of the heap that can be safely returned to the system is the "wilderness chunk", which is at the end of the heap. This can be returned to the system using another sbrk system call, and the glibc memory allocator will do that when the size of this last chunk exceeds some threshold.
The above program would make 10000 small allocations, but only free the first 9999 of them. The last one should (assuming nothing else has called malloc, which is unlikely) be sitting right at the end of the heap. This would prevent the allocator from returning any memory to the system at all.
If you were to free the remaining allocation, glibc's malloc implementation should be able to return most of the pages allocated back to the system.
If you're allocating and freeing small chunks of memory, a few of which are long-lived, you could end up in a situation where you have a large chunk of memory allocated from the system, but you're only using a tiny fraction of it.
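In that situation you can at least ask glibc explicitly to trim whatever it can. This is a glibc extension, not standard C, so hedge accordingly on other platforms:
    #include <malloc.h>   /* glibc extension */

    void release_unused_heap(void)
    {
        /* Ask glibc to return free memory at the top of the heap to the
           kernel; returns 1 if anything was released, 0 otherwise. */
        malloc_trim(0);
    }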
Here are some "advantages" to never releasing memory back to the system:
Having already used a lot of memory makes it very likely you will do so again, and
when you release memory the OS has to do quite a bit of paperwork
when you need it again, your memory allocator has to re-initialise all its data structures in the region it just received
Freed memory that isn't needed gets paged out to disk where it doesn't actually make that much difference
Often, even if you free 90% of your memory, fragmentation means that very few pages can actually be released, so the effort required to look for empty pages isn't terribly well spent
Many memory managers can perform TRIM operations where they return entirely unused blocks of memory to the OS. However, as several posts here have mentioned, it's entirely implementation dependent.
But let's say I never ask for allocation of size greater than say 50 bytes and I ask for a lot of such 50 byte objects on peak load on the system. Then what?
This depends on your allocation pattern. Do you free ALL of the small allocations? If so, and if the memory manager has handling for small block allocations, then this may be possible. However, if you allocate many small items and then only free all but a few scattered items, you may fragment memory and make it impossible to TRIM blocks, since each block will have only a few straggling allocations. In this case, you may want to use a different allocation scheme for the temporary allocations and the persistent ones, so you can return the temporary allocations back to the OS.
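On Windows, one way to do that is to put the temporary allocations in their own private heap, which can be handed back to the OS wholesale; a minimal sketch (names are illustrative):
    #include <windows.h>

    void process_burst()
    {
        // A growable private heap just for short-lived allocations.
        HANDLE tempHeap = HeapCreate(0, 0, 0);
        if (tempHeap == NULL)
            return;

        void* scratch = HeapAlloc(tempHeap, 0, 256 * 1024);
        // ... temporary work using scratch and other blocks from tempHeap ...

        // Destroying the heap releases every page it committed in one call,
        // regardless of how fragmented the allocations inside it were.
        HeapDestroy(tempHeap);
    }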