I have an Objective-C application that is designed to run for an extended period of time using WebKit views. Over time the application builds up a fair amount of memory, so I would like to accurately reveal the memory usage to the end user upon request. In Activity Monitor on OS X I see two memory values for my application: Real Memory and Private Memory. From what I have read, Private is everything that the process has and Real is an estimate. Which should I trust? Is there a specific formula I can use to calculate the exact usage rather than relying on what OS X reports?
Not my area of expertise.
If you are worried about memory leaks, you should be using the various tools provided by Apple to debug them.
Real memory is the actual physical memory in use by the process.
Private memory is the physical memory that is used by just that process.
Virtual memory is the size of the entire virtual memory of the process including those pages that are not currently resident in physical RAM.
It's actually quite difficult to tell by looking at those numbers if you have a leak. For example, a block that is malloced and then leaked will never be referenced again, so it'll eventually get swapped out. It'll be part of the virtual memory but not part of the resident memory. So if you have a leak, the virtual memory will gradually increase over time.
On the other hand, virtual memory will increase if malloc can't find an unused block of memory to allocate, but it won't decrease when free gives memory back. So if you malloc a huge amount of RAM, the VM will increase, but even if you then free it correctly, it'll never decrease again. If you also have a leak, it will take a long time for malloc to run out of the recycled VM, which means that you might not notice it.
So, use the purpose built leak detection tools.
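That said, if you just want a figure to show the user on request, here is a minimal sketch using the Mach task_info call (plain C, so it drops straight into an Objective-C app). Note that resident_size is close to, but not exactly, what Activity Monitor labels Real Memory, so treat this as an approximation rather than an exact formula:

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void) {
        // Ask the Mach kernel for this process's own memory figures.
        struct task_basic_info info;
        mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
        kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr != KERN_SUCCESS)
            return 1;
        // resident_size: physical pages currently mapped (close to Real Memory)
        // virtual_size:  the whole address space, including shared/reserved regions
        printf("resident: %lu bytes\n", (unsigned long)info.resident_size);
        printf("virtual:  %lu bytes\n", (unsigned long)info.virtual_size);
        return 0;
    }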
Related
At the start of a Windows user-mode process, what determines the initial size of its Virtual Address Space?
What makes Virtual Bytes fluctuate during the lifetime of a process?
As per Microsoft docs:
The total amount of virtual address space available to a process is limited by physical memory and the free space on disk available for the paging file.
Does that mean that Virtual Bytes can go down if the amount of free disk space goes down?
Can it go up just because more disk space became available?
Background
An ASP.NET Webforms website of mine that runs on a shared hosting platform started failing with 503 errors. After adding some diags, I saw that the app was restarting very frequently (every minute at busy times) until it eventually died (503).
Further debugging showed that the app starts with around 1.2-1.5 Gb of Process.VirtualMemorySize64. The hosting provider has Virtual Memory Limit set to 1.5 Gb in IIS App Pool settings. No wonder the process gets shot down in a matter of minutes.
Which led me to the questions above.
At the start of a Windows user-mode process, what determines the initial size of its Virtual Address Space?
Sections in a PE32 executable have virtual memory sizes, and the PE header also declares stack and heap reservations (SizeOfStackReserve, SizeOfHeapReserve); those, plus the DLLs the loader maps in, determine the initial footprint.
What makes Virtual Bytes fluctuate during the lifetime of a process?
Memory allocations, i.e. what happens when you new[] something (if you're familiar with libc, what malloc does under the hood; if you're from a Linux background, the Windows equivalent of sbrk).
The process can use syscalls to allocate more virtual memory; on modern systems, that does not automatically consume physical memory. It just makes the addresses valid for the process to use. The first access to a page that has no physical backing yet will (transparently to the process) fault; the NT kernel handles that exception, takes some physical memory, and adds it to the page table for that address range.
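A minimal Win32 sketch of that behaviour (the sizes are arbitrary): reserving address space raises Virtual Bytes without consuming RAM or page-file space, and only committed pages can be faulted in.

    #include <windows.h>

    int main(void) {
        // Reserve 256 MB of address space: Virtual Bytes rises,
        // but nothing is charged against RAM or the page file yet.
        char* base = (char*)VirtualAlloc(NULL, 256 * 1024 * 1024,
                                         MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL) return 1;

        // Commit the first 64 KB: now those pages count against the
        // commit charge and can be faulted into physical RAM.
        if (VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL)
            return 1;

        base[0] = 42;  // first touch raises the page fault that the
                       // kernel handles transparently, as described above
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }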
Does that mean that Virtual Bytes can go down if the amount of free disk space goes down?
Can it go up just because more disk space became available?
No. Virtual memory is an address space. That address space can be backed by actual RAM, it can be backed by nothing at all (i.e. not yet used for the first time), or its contents can have been written to a paging file ("swapped") on disk with the RAM it occupied marked unused. On the next attempt to access such an area of memory, the access transparently faults, and the kernel loads the data from disk into some currently unused RAM page, then maps that page to the virtual address in question.
You might want to revisit how virtual, logical and physical memory addresses are handled in general.
Further debugging showed that the app starts with around 1.2-1.5 Gb of Process.VirtualMemorySize64.
"b" is for bits, you probably mean "GB", Gigabytes; anyway, this is really a lot of memory that your runtime pre-allocated. Very few things in life use that much memory.
The hosting provider has Virtual Memory Limit set to 1.5 Gb in IIS App Pool settings. No wonder the process gets shot down in a matter of minutes.
Sounds reasonable. Unless you're actually implementing a large-scale database, a full browser, a 3D game or a thousand-user chat server in your process, that should suffice.
As Lex Li points out in their comment, your memory usage warrants suspicion. You might want to figure out where your memory usage stems from.
The application is being developed in Visual Studio on a 32-bit Windows system:
Let's say lots of other applications are running on my machine and they have occupied almost all of the physical memory, leaving only 1 MB free. If my application (which has not yet allocated any memory) tries to allocate, say, 2 MB, will the call be successful?
My guess: in theory, each Windows application has 2 GB of virtual memory available.
So I believe this call should succeed (regardless of how much physical memory is available). But I am not sure, which is why I'm asking here.
Windows gives a rock-hard guarantee that this will always work. A process can only allocate virtual memory when Windows can commit space in the paging file for the allocation. If necessary, it will grow the paging file to make the space available. If that fails, for example when the paging file grows beyond the preset limit, then the allocation fails as well. Windows doesn't have the equivalent of the Linux "OOM killer", it does not support over-committing that may require an operating system to start randomly killing processes to find RAM.
Do note that the "always works" clause does have a sting. There is no guarantee on how long this will take. In very extreme circumstances the machine can start thrashing where just about every memory access in the running processes causes a page fault. Code execution slows down to a crawl, you can lose control with the mouse pointer frozen when Explorer or the mouse or video driver start thrashing as well. You are well past the point of shopping for RAM when that happens. Windows applies quotas to processes to prevent them from hogging the machine, but if you have enough processes running then that doesn't necessarily avoid the problem.
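To make the guarantee concrete, here is a small sketch of the exact scenario from the question (the 2 MB figure is the asker's): the commit succeeds or fails based on available commit charge (RAM plus page file), not on how much physical memory happens to be free at the moment of the call.

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        // Ask Windows to commit 2 MB. Success depends only on whether
        // 2 MB can be charged against RAM + page file.
        SIZE_T size = 2 * 1024 * 1024;
        void* p = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                               PAGE_READWRITE);
        if (p == NULL) {
            printf("commit failed: %lu\n", GetLastError());
            return 1;
        }
        ZeroMemory(p, size);  // touching the pages faults them in; other
                              // pages may be trimmed to disk to make room
        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }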
Of course. It would be lousy design if memory had to be wasted now in order to be used later. Operating systems constantly re-purpose memory to its most advantageous use at any moment. They don't have to waste memory by keeping it free just so that it can be used later.
This is one of the benefits of virtual memory with a page file. Because the memory is virtual, the system can allocate more virtual memory than physical memory. Virtual memory that cannot fit in physical memory is pushed out to the page file.
So the fact that your system may be using all of the physical memory does not mean that your program will not be able to allocate memory. In the scenario that you describe, your 2MB memory allocation will succeed. If you then access that memory, the virtual memory will be paged in to physical memory and very likely some other pages (maybe in your process, maybe in another process) will be pushed out to the page file.
Well, it will succeed as long as there's some memory for it - apart from physical memory, there's also the page file.
However, once you reach the limit of both RAM and the page file, you're done for and that's when the out of memory situation really starts being fun.
Now, systems like Windows Vista will try to use all of your available RAM, pretty much for caching. That's a good thing, and when there's a request for memory from an application, the cache will be thrown away as needed.
As for virtual memory, you can request much more than you have available, regardless of your RAM or page file size. Only when you commit the memory does it actually need some backing - either RAM or the page file. On 64-bit, you can easily request terabytes of virtual memory - that doesn't mean you'll get it when you try to commit it, though :P
If your application is unable to allocate a physical memory (RAM) block to store information, the operating system takes over and 'pages' or stores sections that are in RAM on disk to free up physical memory so that your program is able to perform the allocation. This is done automatically and is completely invisible to your applications.
So, in your example, on a system that has 1MB RAM free, if your application tries to allocate memory, the operating system will page certain contents of physical memory to disk and free up RAM for your application. Your application will not crash in this case.
This, obviously, is more complicated in reality.
There are several ways to configure a page file on Windows (fixed size, variable size, and on which disk). If you run out of physical memory and out of hard drive space (because your page file has grown very large due to excessive paging), or you reach the limit of your paging file (if it has a static limit), then your applications will fail with an out-of-memory error. With today's systems with large local storage, however, this is a rare event.
Be sure to read about paging for the full picture. Check out:
http://en.wikipedia.org/wiki/Paging
In certain cases, you will notice that you have sufficient free physical memory, say 100 MB, yet your program fails to allocate a 10 MB block to store a large object. This is caused by fragmentation of the process's virtual address space: although the total free memory is 100 MB, there is no single contiguous 10 MB range of free addresses in which to store your object. This results in an exception that needs to be handled in your code. If you allocate large objects, you may want to split the allocation into smaller blocks to make it easier to satisfy, and then aggregate them back together in your code logic. For example, instead of a single 10 MB vector, you can declare ten 1 MB vectors in an array and allocate memory for each one individually, as in the sketch below.
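A rough sketch of that workaround, using the sizes from the example above:

    #include <cstddef>
    #include <vector>

    int main() {
        // One contiguous 10 MB block needs a single 10 MB hole:
        // std::vector<char> big(10 * 1024 * 1024);  // may throw bad_alloc

        // Ten 1 MB blocks can each land in a smaller free region
        // of a fragmented address space.
        const std::size_t kChunk = 1024 * 1024;
        std::vector<std::vector<char>> chunks(10);
        for (std::size_t i = 0; i < chunks.size(); ++i)
            chunks[i].resize(kChunk);

        // Element at logical offset i lives at chunks[i / kChunk][i % kChunk].
        chunks[9][0] = 1;
        return 0;
    }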
My system is Windows XP.
The Virtual Size displayed in Task Manager differs from the MEMORYSTATUSEX.ullAvailVirtual value I get from GlobalMemoryStatusEx.
When I create a lot of buffers and memory usage goes up, MEMORYSTATUSEX.ullAvailVirtual reflects the virtual size usage well. The two agree.
But when I delete the memory, the Virtual Size in Task Manager goes down, while MEMORYSTATUSEX.ullAvailVirtual stays very small. I don't know why.
I am totally confused.
You could be suffering from memory fragmentation (i.e. if you leak a few bytes between each large allocation, it effectively forces up the virtual bytes of your application).
You might find it more reliable to compare figures against perfmon - the counters I've always used in the past have been Private bytes (memory actually allocated) and Virtual bytes (memory address space allocated) - if those two counters diverge, then you have a memory fragmentation problem, which may be the result of a memory leak. The figures in Task Manager, whilst true and accurate, don't convey anything particularly useful.
When you delete allocated memory, the heap doesn't immediately return the underlying address space to the OS but keeps it reserved for the process, because the very same process might need the just-deleted memory a few milliseconds later; this improves performance.
To really free the deleted memory, you can call
SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
Maybe that will force GlobalMemoryStatusEx() to return the values you expect?
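For reference, a minimal sketch of the call in question; the key point is that ullAvailVirtual counts unreserved address space in the calling process, so address space the heap keeps reserved after a delete still counts as used:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        MEMORYSTATUSEX ms = { sizeof(ms) };  // dwLength must be set first
        if (!GlobalMemoryStatusEx(&ms))
            return 1;
        // Address space in use = total - available. Freed heap blocks whose
        // segments stay reserved by the allocator still count as used here.
        printf("virtual in use: %llu bytes\n",
               (unsigned long long)(ms.ullTotalVirtual - ms.ullAvailVirtual));
        return 0;
    }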
I have a long-running memory hog of an experimental program, and I'd like to know its actual memory footprint. The Task Manager says (in Windows 7 64-bit) that the app is consuming 800 MB of memory, but the total amount of memory allocated, also according to the Task Manager, is 3.7 GB. The sum of all the allocated memory does not equal 3.7 GB. How can I determine, on the fly, how much memory my application is actually consuming?
Corollary: What memory is the task manager actually reporting? It doesn't seem to be all the memory that's allocated to the app itself.
As I understand it, Task Manager shows the Working Set:
working set: The set of memory pages recently touched by the threads of a process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not being used. When free memory falls below a threshold, pages are trimmed from the working set.
via http://msdn.microsoft.com/en-us/library/cc432779(PROT.10).aspx
You can get Task Manager to show Virtual Memory as well.
I usually use perfmon (Start -> Run... -> perfmon) to track memory usage, using the Private Bytes counter. It reflects memory allocated by your normal allocators (new/HeapAlloc/malloc, etc).
Memory is a tricky thing to measure. An application might reserve lots of virtual memory but not actually use much of it. Some of the memory might be shared; that is, a shared DLL might be loaded in to the address space of several applications but it is only loaded in to physical memory once.
A good measure is the working set, which is the set of pages in its virtual address space that have been accessed recently. What the meaning of 'accessed recently' is depends on the operating system and its page replacement algorithm. In other words, it is the actual set of virtual pages that are mapped in to physical memory and are in use at the moment. This is what the task manager shows you.
The virtual memory usage is the number of virtual pages that have been reserved (note that not all of these will actually have been committed, that is, had physical backing store allocated for them). You can add this to the display in Task Manager by clicking View -> Select Columns.
The most important thing though: If you want to actually measure how much memory your program is using to see if you need to optimize some of it for space or choose better data structures or persist some things to disk, using the task manager is the wrong approach. You should almost certainly be using a profiler.
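If you do want an on-the-fly figure from inside the program rather than from a profiler, a minimal sketch using GetProcessMemoryInfo (link against psapi.lib) would look something like this:

    #include <windows.h>
    #include <psapi.h>   // link with psapi.lib
    #include <stdio.h>

    int main(void) {
        PROCESS_MEMORY_COUNTERS_EX pmc;
        pmc.cb = sizeof(pmc);
        if (!GetProcessMemoryInfo(GetCurrentProcess(),
                                  (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc)))
            return 1;
        // WorkingSetSize: pages currently resident (Task Manager's figure).
        // PrivateUsage:   committed memory private to this process.
        printf("working set:   %Iu bytes\n", pmc.WorkingSetSize);
        printf("private bytes: %Iu bytes\n", pmc.PrivateUsage);
        return 0;
    }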
That depends on what memory you are talking about. Unfortunately there are many different ways to measure memory. For instance ...
Physical Memory Allocated
Virtual Memory Allocated
Virtual Memory Reserved (but not committed)
Private Bytes
Shared Bytes
Which metric are you interested in?
I think most people tend to be interested in the "Virtual Memory Allocated" category.
The memory statistics displayed by Task Manager are not nearly all the statistics available, nor are they particularly well presented. I would use the great free tool from Microsoft Sysinternals, VMMap, to analyse the memory used by the application further.
If it is a long-running application, and the memory usage grows over time, it is going to be the heap that is growing. Parts of the heap may or may not be paged out to disk at any time, but you really need to optimize your heap usage. In this case you need to profile your application. If it is a .NET application then I can recommend Redgate's ANTS profiler. It is very easy to use. If it's a native application, then the Intel VTune profiler is pretty powerful. Neither tool needs the source code of the process you are profiling.
Both applications have a free trial. Good luck.
P.S. Sorry I didn't include more hyperlinks to the tools, but this is my first post, and stackoverflow limits first posts to one hyperlink :-(
I'm trying to get a better understanding of how Windows, 32-bit, calculates the virtual bytes for a program. I am under the impression that Virtual Bytes (VB) are the measure of how much of the user address space is being used, while the Private Bytes (PB) are the measure of actual committed and reserved memory on the system.
In particular, I have a server program I am monitoring which, when under heavy usage, will climb up to the 3GB limit for VBs. Around the same time the PB climb as well, but then quickly drop down to around 1 GB as the usage drops. The PB tend to then stay low, around the 1 GB mark, but the VB stay up around the 3 GB mark. I do not have access to the source code, so I am just using the basic Windows performance counters to monitor all of this. From a programming point of view, what memory concept do I not understand that makes this all possible? Is there a good reference to learn more about this?
What you're reporting is most likely caused by the process heap. There are two pieces to a memory allocation in Windows. The first piece is the contiguous address space in your application through which the memory is accessed. On a 32-bit system not running the /3GB switch, all your allocations must come out of the lower 2 GB of user address space. The second piece is the actual memory for the allocation, which can be either RAM or part of the paging file on the hard disk. The OS handles moving allocations between RAM and the paging file in the background.
Most likely your application is using a Windows heap to handle all memory allocations. When a heap is created it reserves 1 MB of address space for the memory it will allocate. Until it actually needs memory associated with this address space, no physical memory is used. If the heap needs more memory than 1 MB, it uses a doubling algorithm to reserve more address space, and then commits physical memory when it needs it. The important thing to note is that once a heap reserves address space, it never releases it.
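A small sketch that makes this behaviour visible; VirtualInUse is a hypothetical helper added here for illustration, and the exact figures will vary with the heap implementation:

    #include <windows.h>
    #include <stdio.h>

    // Hypothetical helper: address space currently in use by this process.
    static SIZE_T VirtualInUse(void) {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        GlobalMemoryStatusEx(&ms);
        return (SIZE_T)(ms.ullTotalVirtual - ms.ullAvailVirtual);
    }

    int main(void) {
        printf("start:       %Iu bytes\n", VirtualInUse());

        HANDLE heap = HeapCreate(0, 0, 0);   // growable private heap
        static void* blocks[10000];
        for (int i = 0; i < 10000; ++i)      // ~160 MB of small blocks
            blocks[i] = HeapAlloc(heap, 0, 16 * 1024);
        printf("after alloc: %Iu bytes\n", VirtualInUse());

        for (int i = 0; i < 10000; ++i)
            HeapFree(heap, 0, blocks[i]);
        // The heap's segments typically stay reserved, so this figure
        // stays close to the "after alloc" one rather than dropping.
        printf("after free:  %Iu bytes\n", VirtualInUse());

        HeapDestroy(heap);
        return 0;
    }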
Personally I found the following books and chapters useful when trying to understand memory management.
Advanced Windows Debugging - Chapter 6 This book has the most detailed look into the heap I have seen.
Windows Internals - Chapter 7 This book adds a bit of information not found in Advanced Windows Debugging; however, it does not give as good an overview.
It sounds to me like you have a garbage collector that's only kicking in once the memory pressure hits 1/3 (1 GB out of 3 GB).
As for the VB - don't worry! It's virtual! Honestly, nothing's been allocated, nothing's been committed. Focus on your private bytes - your real allocations.
There is such a thing as "Virtual Memory". It's a rather non-OS-specific concept in computer science, and Microsoft has also written about the Windows implementation of it.
Long story short, in Windows you can ask to reserve some memory without actually allocating any physical memory. It's like marking a range of memory addresses as reserved for future use. When you really need the memory, you allocate it physically (a.k.a. "commit" it).
I haven't needed this feature myself, so I don't know how it's used in real-life programs, but I know it's there. The idea might be something like keeping pointers to a reserved memory range and committing the memory when needed, without having to change what the pointers actually point to, as in the sketch below.
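One common use is a growable buffer: reserve a large range once, commit pages as the buffer grows, and existing pointers never move. An illustrative sketch with arbitrary sizes:

    #include <windows.h>

    int main(void) {
        const SIZE_T maxBytes = 256 * 1024 * 1024;  // reserve 256 MB up front
        char* base = (char*)VirtualAlloc(NULL, maxBytes,
                                         MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL) return 1;

        SIZE_T committed = 0;
        const SIZE_T chunk = 1024 * 1024;           // grow in 1 MB steps
        for (int step = 0; step < 4; ++step) {
            // Commit the next chunk in place. Pointers into earlier
            // chunks stay valid because the addresses never change.
            if (VirtualAlloc(base + committed, chunk,
                             MEM_COMMIT, PAGE_READWRITE) == NULL)
                return 1;
            committed += chunk;
            base[committed - 1] = 1;                // touch the new page
        }
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }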
Windows is notorious for having a variety of types of memory allocations, some of which are supersets of others. You've mentioned Private Bytes and Virtual Bytes. Private bytes, as you know, refers to memory allocated specifically to your process. Virtual bytes includes private bytes, as well as any shared memory, reserved memory, etc.
Even though in real life you only need to be concerned with private bytes and perhaps shared memory (Windows handles the rest, anyways), the Virtual Bytes count is often what users (and evaluators of your software) see and interpret as the memory usage of your process.
A good and up-to-date reference on the subject is the book titled Windows Via C/C++ by Jeffrey Richter, you should look for Chapter 13: "Windows Memory Architecture".