I have used PageHeap for debugging heap corruptions for the last four years. Generally, I don't have any problems with it. But now I have run into some weird behavior.
After enabling PageHeap for my process on a Windows 7 SP1 x86 host using Global Flags (gflags) with the following options:
-Enable heap tail checking
-Enable heap free checking
-Enable Page Heap
I noticed crashes with out-of-memory exceptions.
The !address -summary command showed that ~90% of virtual memory was consumed by PageHeap.
This is really strange to me because, as far as I know, PageHeap should not cause such a large memory overhead.
Can someone please explain the reason for this behavior?
When running an application with full page heap enabled, two pages (4 KB each) are allocated for each malloc. When the memory is freed, these pages (or possibly just the first one) remain 'reserved': they don't occupy any physical memory or page-file space, but the virtual address range is made unavailable, and an access violation is raised on any attempt to access it. This makes it possible to catch read-after-free bugs. As a consequence, the virtual address space used by the application keeps growing even if you properly call free for every malloc.
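The layout described above can be sketched on POSIX, with mmap standing in for VirtualAlloc. This is a toy model of full page heap, not what verifier.dll actually does; the single-data-page limit and the helper names are assumptions made for brevity:

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Toy model of a full-page-heap allocation: the user block sits at the
 * end of a read/write page, immediately followed by an inaccessible
 * guard page, so a heap-tail overrun faults on the spot. */
void *guarded_malloc(size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    if (n == 0 || n > page)
        return NULL;                          /* one data page only, for brevity */
    unsigned char *base = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    mprotect(base + page, page, PROT_NONE);   /* the guard page */
    return base + page - n;                   /* data ends right at the guard */
}

/* On free, page heap keeps the range reserved but inaccessible, which is
 * what makes read-after-free fault -- and what makes address space grow. */
void guarded_free(void *p)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    uintptr_t base = (uintptr_t)p & ~((uintptr_t)page - 1);
    mprotect((void *)base, 2 * page, PROT_NONE);
}
```

After guarded_free, touching the pointer raises a fault instead of silently reading stale data; and since the two pages are never unmapped, every malloc/free pair permanently consumes a slice of address space, which is exactly the growth that !address -summary reports.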
Is there a way to detect if a memory allocation would cause the program to swap? Or if the last memory allocation causes swap to be used?
Basically, I'm debugging memory leaks in a VM, and I want the program to terminate instead of using swap, because swapping locks up the whole computer and I have to hard-reboot it.
I do not want to disable swap memory on the machine globally because that can have unintended consequences for legitimate users of swap memory.
Example code and names of API calls to perform this task on Windows and Linux would be appreciated.
I know Linux has a system call for locking a process's pages in memory, but that's not what I'm asking for. setrlimit isn't quite right either, because the amount of free physical RAM can change a lot during execution.
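On Linux, one way to approximate this check is sysinfo(2), which reports free RAM and free swap. A minimal sketch follows; it is inherently racy, since free memory can change between the check and the allocation, and the "compare against free physical RAM" policy is an assumption:

```c
#include <stddef.h>
#include <sys/sysinfo.h>

/* Return 1 if allocating `bytes` would likely exceed the machine's free
 * physical RAM (and therefore start eating into swap), 0 otherwise.
 * Linux-only; a heuristic, not a guarantee. */
int would_swap(size_t bytes)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0)
        return 0;                               /* can't tell; assume OK */
    unsigned long long free_phys =
        (unsigned long long)(si.freeram + si.bufferram) * si.mem_unit;
    return (unsigned long long)bytes > free_phys;
}
```

A leak-hunting program could call would_swap(size) before each large allocation and abort() rather than let the machine thrash. On Windows, the closest equivalent is GlobalMemoryStatusEx, whose ullAvailPhys field plays the role of freeram here.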
The application is being developed with Visual Studio on a 32-bit Windows system:
Let's say lots of other applications are running on my machine and have occupied almost all physical memory, leaving only 1 MB free. If my application (which has not yet allocated any memory) tries to allocate, say, 2 MB, will the call succeed?
My guess: in theory, each Windows application has 2 GB of virtual address space available, so I believe the call should succeed regardless of how much physical memory is free. But I am not sure about this, which is why I'm asking here.
Windows gives a rock-hard guarantee that this will always work. A process can only allocate virtual memory when Windows can commit space in the paging file for the allocation. If necessary, it will grow the paging file to make the space available. If that fails, for example when the paging file grows beyond the preset limit, then the allocation fails as well. Windows doesn't have the equivalent of the Linux "OOM killer", it does not support over-committing that may require an operating system to start randomly killing processes to find RAM.
Do note that the "always works" clause does have a sting. There is no guarantee on how long this will take. In very extreme circumstances the machine can start thrashing where just about every memory access in the running processes causes a page fault. Code execution slows down to a crawl, you can lose control with the mouse pointer frozen when Explorer or the mouse or video driver start thrashing as well. You are well past the point of shopping for RAM when that happens. Windows applies quotas to processes to prevent them from hogging the machine, but if you have enough processes running then that doesn't necessarily avoid the problem.
Of course. It would be lousy design if memory had to be wasted now in order to be used later. Operating systems constantly re-purpose memory to its most advantageous use at any moment. They don't have to waste memory by keeping it free just so that it can be used later.
This is one of the benefits of virtual memory with a page file. Because the memory is virtual, the system can allocate more virtual memory than physical memory. Virtual memory that cannot fit in physical memory is pushed out to the page file.
So the fact that your system may be using all of the physical memory does not mean that your program will not be able to allocate memory. In the scenario that you describe, your 2MB memory allocation will succeed. If you then access that memory, the virtual memory will be paged in to physical memory and very likely some other pages (maybe in your process, maybe in another process) will be pushed out to the page file.
Well, it will succeed as long as there's some memory for it - apart from physical memory, there's also the page file.
However, once you reach the limit of both RAM and the page file, you're done for and that's when the out of memory situation really starts being fun.
Now, systems like Windows Vista will try to use all of your available RAM, pretty much for caching. That's a good thing, and when there's a request for memory from an application, the cache will be thrown away as needed.
As for virtual memory, you can request much more than you have available, regardless of your RAM or page file size. Only when you commit the memory does it actually need some backing - either RAM or the page file. On 64-bit, you can easily request terabytes of virtual memory - that doesn't mean you'll get it when you try to commit it, though :P
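The reserve/commit distinction above can be sketched in POSIX terms, with mmap(PROT_NONE) playing the role of VirtualAlloc(MEM_RESERVE) and mprotect plus a first touch playing MEM_COMMIT; the sizes and the helper name are arbitrary assumptions:

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

/* Reserve a large span of address space without backing it, then commit
 * and touch a small slice of it. Returns 0 on success, -1 on failure. */
int reserve_then_commit(size_t reserve_bytes, size_t commit_bytes)
{
    /* "Reserve": the address range exists, but no RAM or page file is used. */
    unsigned char *base = mmap(NULL, reserve_bytes, PROT_NONE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return -1;

    /* "Commit": make a slice usable; backing store is still only promised. */
    if (mprotect(base, commit_bytes, PROT_READ | PROT_WRITE) != 0) {
        munmap(base, reserve_bytes);
        return -1;
    }

    memset(base, 0xAB, commit_bytes);   /* touching the pages faults them in */
    munmap(base, reserve_bytes);
    return 0;
}
```

Reserving, say, 1 GB this way succeeds on a 64-bit system even with far less free RAM, because nothing is charged against RAM or swap until pages are committed and touched.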
If your application is unable to allocate a physical memory (RAM) block to store information, the operating system takes over and 'pages' or stores sections that are in RAM on disk to free up physical memory so that your program is able to perform the allocation. This is done automatically and is completely invisible to your applications.
So, in your example, on a system that has 1MB RAM free, if your application tries to allocate memory, the operating system will page certain contents of physical memory to disk and free up RAM for your application. Your application will not crash in this case.
Obviously, it is much more complicated than that.
There are several ways to configure a page file on Windows (fixed size, variable size, and on which disk). If you run out of physical memory and out of hard-drive space (because your page file has grown very large due to excessive paging), or you reach the limit of a fixed-size paging file, then your allocations will fail with an out-of-memory exception. On today's systems with large local storage, however, this is a rare event.
Be sure to read about paging for the full picture. Check out:
http://en.wikipedia.org/wiki/Paging
In certain cases, you will notice that you have sufficient free physical memory, say 100 MB, yet your program fails to allocate a 10 MB block for a large object. This is caused by fragmentation of the process's address space: although the total free memory is 100 MB, there is no single contiguous free region of 10 MB in which to place the object. This results in an exception that needs to be handled in your code. If you allocate large objects, you may want to split the allocation into smaller blocks to make allocation easier, and then aggregate them in your code logic. For example, instead of a single 10 MB vector, you can declare an array of ten 1 MB vectors and allocate memory for each one individually.
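The chunking workaround from the last paragraph might look like this in C; the chunk count and size come from the 10 MB example above, and the helper names are made up for illustration:

```c
#include <stddef.h>
#include <stdlib.h>

enum { CHUNKS = 10, CHUNK_BYTES = 1 << 20 };  /* ten 1 MB chunks ~= one 10 MB block */

/* Allocate the buffer as independent 1 MB chunks, so no single 10 MB
 * contiguous region is required. Returns NULL on failure. */
unsigned char **alloc_chunked(void)
{
    unsigned char **chunks = malloc(CHUNKS * sizeof *chunks);
    if (!chunks)
        return NULL;
    for (int i = 0; i < CHUNKS; i++) {
        chunks[i] = malloc(CHUNK_BYTES);
        if (!chunks[i]) {                 /* roll back on partial failure */
            while (i--)
                free(chunks[i]);
            free(chunks);
            return NULL;
        }
    }
    return chunks;
}

/* Address the logical 10 MB buffer through the chunk table. */
unsigned char *chunked_at(unsigned char **chunks, size_t offset)
{
    return &chunks[offset / CHUNK_BYTES][offset % CHUNK_BYTES];
}

void free_chunked(unsigned char **chunks)
{
    for (int i = 0; i < CHUNKS; i++)
        free(chunks[i]);
    free(chunks);
}
```

The trade-off is one extra indirection per access (chunked_at) in exchange for needing only 1 MB of contiguous address space at a time.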
Sorry for my rather general question, but I could not find a definite answer to it:
Given that I have free swap memory left and I allocate memory in reasonable chunks (~1MB) -> can memory allocation still fail for any reason?
The smartass answer would be "yes, memory allocation can fail for any reason". That may not be what you are looking for.
Generally, whether your system has free memory left is not related to whether allocations succeed. Rather, the question is whether your process address space has free virtual address space.
The allocator (malloc, operator new, ...) first looks if there is free address space in the current process that is already mapped, that is, the kernel is aware that the addresses should be usable. If there is, that address space is reserved in the allocator and returned.
Otherwise, the kernel is asked to map new address space to the process. This may fail, but generally doesn't, as mapping does not imply using physical memory yet -- it is just a promise that, should someone try to access this address, the kernel will try to find physical memory and set up the MMU tables so the virtual->physical translation finds it.
When the system runs out of physical memory, the process is suspended while the kernel attempts to free physical memory by moving other processes' memory out to disk. The application does not notice this, except that executing a single assembly instruction apparently took a very long time.
Memory allocations in the process fail if there is no mapped free region large enough and the kernel refuses to establish a mapping. For example, not all virtual addresses are useable, as most operating systems map the kernel at some address (typically, 0x80000000, 0xc0000000, 0xe0000000 or something such on 32 bit architectures), so there is a per-process limit that may be lower than the system limit (for example, a 32 bit process on Windows can only allocate 2 GB, even if the system is 64 bit). File mappings (such as the program itself and DLLs) further reduce the available space.
A very general and theoretical answer would be: no, it cannot. One of the rare circumstances under which it could fail is some weird fragmentation of your available/allocatable memory. I wonder whether you're trying to get a (probably very minor) performance boost by skipping the "if pointer == NULL" kind of check, or whether you're just curious and want to discuss it, in which case you should probably use chat.
Yes, memory allocation often fails when you run out of address space in a 32-bit application (which can be 2, 3 or 4 GB depending on the OS version and settings), commonly because of a memory leak. It can also fail if the OS runs out of space in the swap file.
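Whichever limit is hit first, the failure shows up as malloc returning NULL, so the practical rule is to check every allocation. A small sketch that allocates in bounded steps and cleans up after itself (the chunk size and cap are arbitrary assumptions):

```c
#include <stddef.h>
#include <stdlib.h>

/* Allocate up to `max_chunks` blocks of `chunk` bytes each, stopping at
 * the first failure, then release everything. Returns the number of
 * blocks that succeeded. In a 32-bit process a loop like this runs into
 * the 2-4 GB address-space limit long before swap is exhausted. */
size_t alloc_until_failure(size_t chunk, size_t max_chunks)
{
    void **held = malloc(max_chunks * sizeof *held);
    if (!held)
        return 0;
    size_t n = 0;
    while (n < max_chunks) {
        void *p = malloc(chunk);
        if (!p)
            break;               /* out of address space or commit limit */
        held[n++] = p;
    }
    size_t got = n;
    while (n)                    /* leak-free: release everything we took */
        free(held[--n]);
    free(held);
    return got;
}
```

With a modest cap the loop simply succeeds; remove the cap in a 32-bit build and the NULL branch is where the "out of memory" condition actually surfaces.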
When will a program get an 'out of memory' exception? Is it when it does not have enough virtual address range, or when there is not enough physical memory?
My understanding is that it should happen only when not enough virtual address space is available, since physical storage can be made available by paging out unused sections.
Please clarify.
Thanks,
Suresh.
If you're seeing an OutOfMemoryException, this is presumably a .Net application. Ironically, the conditions you describe are pretty much never the source of an OutOfMemoryException in .Net.
In most cases, it's better to think of an OutOfMemoryException as being an OutOfSomeCriticalResourceButNotRAMIronicallyEnoughException. Or even worse: as one example, .Net throws an OutOfMemoryException when you attempt to open an invalid image file.
Total memory available = physical (RAM) plus page file(s).
When both are full, you get the exception on any further memory allocation requests.
On some systems this is qualified further by the fact that the kernel reserves a portion of physical RAM for itself, so user mode programs are left to compete for the rest.
When the program runs out of addressable space to access. That normally means the virtual address range; physical memory only becomes the limit if it (plus the page file) is smaller than the address space.
I am routinely seeing VirtualAlloc calls to reserve memory fail. I'm requesting 2 MB so that the allocations do not count against my per-process virtual memory and instead use system shared memory. At the time of failure, the system reports over 100 MB of available physical memory.
I'm running on a Windows Mobile 6.1 device. So far this is a device-specific problem: it works on many identical devices and fails on one. I would like to determine whether other processes on this device are reserving shared memory and thereby preventing me from doing so, but I'm not sure how to do that.
This is the doc I am relying on and I see nothing that would explain this problem:
http://msdn.microsoft.com/en-us/library/aa908768.aspx
Any ideas? Thanks.
I'm tempted to say that VirtualAlloc has run out of (contiguous) virtual address space, at least as far as your process is concerned.
I'd first try to establish which memory slot those previously successful VirtualAlloc chunks got mapped to, and based on that see whom I am fighting with for address space. You should be able to do this either programmatically or by using a tool from William J. Blanke (or other similar tools).