I work on a 32-bit end-user application that needs a lot of memory, up to several gigabytes. I recently switched our internal memory allocation strategy to use memory-mapped files without files, inspired by this Raymond Chen article. It works great.
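Concretely, the allocation looks roughly like this (a simplified sketch; the 256 MB size and read/write protection are illustrative, not what we actually use):

```cpp
// Rough sketch of a pagefile-backed mapping (no file on disk).
#include <windows.h>

void AllocateWithoutFile()
{
    const unsigned long long size = 256ull * 1024 * 1024;

    // INVALID_HANDLE_VALUE means the section is backed by the system paging
    // file rather than by a real file on disk.
    HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE,
                                        (DWORD)(size >> 32), (DWORD)size,
                                        nullptr);

    // Map a view into the 32-bit address space only while it is needed.
    void* view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, (SIZE_T)size);

    // ... use the memory ...

    UnmapViewOfFile(view);   // frees address space; the data stays in the section
    CloseHandle(section);    // releases the backing storage
}
```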
The only downside is this: if I allocate a gigabyte of memory this way, it does not show up anywhere in the performance counters. Of course, I know how much is allocated and how much of it is mapped into my address space, but I don't know how it's divided between physical memory and the page file. I would like to know, if for no other reason than to log it for debugging.
The solution was to monitor my application with Sysinternals' VMMap. It breaks down an application's memory usage by allocation type (my memory-mapped files are listed as "shared"), as well as by status (in memory or swapped out).
There's even a graphical memory fragmentation map!
Call QueryWorkingSet and count the number of pages that lie within your mapped range to determine how much of your memory is part of the working set. Keep in mind, though, that a page can be trimmed from the working set and still be resident in physical memory somewhere.
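A minimal sketch of that approach, assuming 4 KB pages and a fixed-size buffer (a robust version would check GetLastError and retry with a larger buffer); link against psapi.lib:

```cpp
// Count how many pages of [base, base + size) are currently in this
// process's working set.
#include <windows.h>
#include <psapi.h>
#include <vector>

size_t PagesInWorkingSet(void* base, size_t size)
{
    const ULONG_PTR pageSize = 4096;
    std::vector<char> buffer(sizeof(PSAPI_WORKING_SET_INFORMATION)
                             + 1000000 * sizeof(PSAPI_WORKING_SET_BLOCK));
    PSAPI_WORKING_SET_INFORMATION* info =
        reinterpret_cast<PSAPI_WORKING_SET_INFORMATION*>(&buffer[0]);

    if (!QueryWorkingSet(GetCurrentProcess(), info, (DWORD)buffer.size()))
        return 0;   // real code: check GetLastError() and retry with more space

    ULONG_PTR first = (ULONG_PTR)base / pageSize;
    ULONG_PTR last  = ((ULONG_PTR)base + size + pageSize - 1) / pageSize;

    size_t count = 0;
    for (ULONG_PTR i = 0; i < info->NumberOfEntries; ++i)
    {
        ULONG_PTR page = info->WorkingSetInfo[i].VirtualPage;
        if (page >= first && page < last)
            ++count;
    }
    return count;
}
```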
Related
The application is being developed with Visual Studio on a 32-bit Windows system:
Let's say lots of other applications are running on my machine and have occupied almost all of the physical memory, leaving only 1 MB free. If my application (which has not yet allocated any memory) tries to allocate, say, 2 MB, will the call succeed?
My guess: in theory, each Windows application has 2 GB of virtual address space available, so I believe the call should succeed regardless of how much physical memory is free. But I am not sure, which is why I'm asking here.
Windows gives a rock-hard guarantee that this will always work. A process can only allocate virtual memory when Windows can commit space in the paging file for the allocation. If necessary, it will grow the paging file to make the space available. If that fails, for example when the paging file grows beyond its preset limit, then the allocation fails as well. Windows doesn't have the equivalent of the Linux "OOM killer"; it does not support over-committing, which can require an operating system to start randomly killing processes to find RAM.
Do note that the "always works" clause does have a sting. There is no guarantee on how long this will take. In very extreme circumstances the machine can start thrashing where just about every memory access in the running processes causes a page fault. Code execution slows down to a crawl, you can lose control with the mouse pointer frozen when Explorer or the mouse or video driver start thrashing as well. You are well past the point of shopping for RAM when that happens. Windows applies quotas to processes to prevent them from hogging the machine, but if you have enough processes running then that doesn't necessarily avoid the problem.
Of course. It would be lousy design if memory had to be wasted now in order to be used later. Operating systems constantly re-purpose memory to its most advantageous use at any moment. They don't have to waste memory by keeping it free just so that it can be used later.
This is one of the benefits of virtual memory with a page file. Because the memory is virtual, the system can allocate more virtual memory than physical memory. Virtual memory that cannot fit in physical memory is pushed out to the page file.
So the fact that your system may be using all of the physical memory does not mean that your program will not be able to allocate memory. In the scenario that you describe, your 2MB memory allocation will succeed. If you then access that memory, the virtual memory will be paged in to physical memory and very likely some other pages (maybe in your process, maybe in another process) will be pushed out to the page file.
Well, it will succeed as long as there's some memory for it - apart from physical memory, there's also the page file.
However, once you reach the limit of both RAM and the page file, you're done for and that's when the out of memory situation really starts being fun.
Now, systems like Windows Vista will try to use all of your available RAM, pretty much for caching. That's a good thing, and when there's a request for memory from an application, the cache will be thrown away as needed.
As for virtual memory, you can request much more than you have available, regardless of your RAM or page file size. Only when you commit the memory does it actually need some backing - either RAM or the page file. On 64-bit, you can easily request terabytes of virtual memory - that doesn't mean you'll get it when you try to commit it, though :P
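A quick sketch of that distinction, assuming a 64-bit process; whether the commit succeeds depends entirely on available RAM plus page file, so treat this as illustrative rather than guaranteed behavior:

```cpp
#include <windows.h>

void ReserveVersusCommit()
{
    // Reserving only claims address space; no RAM or page file is consumed yet.
    SIZE_T oneTB = (SIZE_T)1 << 40;
    void* reserved = VirtualAlloc(nullptr, oneTB, MEM_RESERVE, PAGE_NOACCESS);
    // In a 64-bit process this will normally succeed even on a small machine.

    // Committing is what needs backing (RAM + page file), so this is the call
    // that can fail if the system cannot back the allocation.
    void* committed = VirtualAlloc(reserved, 64 * 1024 * 1024,
                                   MEM_COMMIT, PAGE_READWRITE);

    // ... use 'committed' if non-null, then release the whole reservation.
    VirtualFree(reserved, 0, MEM_RELEASE);
}
```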
If your application is unable to allocate a physical memory (RAM) block to store information, the operating system takes over and 'pages' or stores sections that are in RAM on disk to free up physical memory so that your program is able to perform the allocation. This is done automatically and is completely invisible to your applications.
So, in your example, on a system that has 1MB RAM free, if your application tries to allocate memory, the operating system will page certain contents of physical memory to disk and free up RAM for your application. Your application will not crash in this case.
Obviously, it is much more complicated than that.
There are several ways to configure a page file on Windows (fixed size, variable size, and which disk it lives on). If you run out of physical memory and out of hard drive space (because your page file has grown very large due to excessive paging), or you reach the limit of a fixed-size paging file, then your allocations will fail with an out-of-memory error. With today's systems and their large local storage, however, this is a rare event.
Be sure to read about paging for the full picture. Check out:
http://en.wikipedia.org/wiki/Paging
In certain cases, you will notice that you have sufficient free memory, say 100 MB, yet your program fails to allocate a 10 MB block for a large object. This is caused by address space fragmentation: although the total free memory is 100 MB, there is no single contiguous free block of 10 MB in which to store your object. This results in an allocation failure that needs to be handled in your code. If you allocate large objects, you may want to split the allocation into smaller blocks to make allocation easier, and then aggregate them back in your code logic. For example, instead of a single vector of 10 million elements, you can declare an array of ten vectors of 1 million elements each and allocate memory for each one individually.
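For example, something along these lines (the chunk size and the At helper are made up for illustration; each chunk only needs a small contiguous block of address space):

```cpp
#include <vector>
#include <array>
#include <cstddef>

// Ten 1M-element chunks instead of one contiguous 10M-element array.
typedef std::array<std::vector<int>, 10> ChunkedArray;

void AllocateChunked(ChunkedArray& chunks)
{
    for (std::size_t i = 0; i < chunks.size(); ++i)
        chunks[i].resize(1000000);   // each chunk needs only ~4 MB contiguous
}

// Hypothetical accessor that hides the chunking from the rest of the code.
inline int& At(ChunkedArray& chunks, std::size_t index)
{
    return chunks[index / 1000000][index % 1000000];
}
```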
I am writing a C++ program that essentially works with very large arrays. On Windows, I am using VirtualAlloc to allocate memory for my arrays. Now, I fully understand the difference between reserving and committing memory using VirtualAlloc; however, I am wondering whether there is any benefit in committing memory page-by-page to a reserved region. In particular, MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx) contains the following explanation for the MEM_COMMIT option:
Actual physical pages are not allocated unless/until the virtual addresses are actually accessed.
My experiments confirm this: I can reserve and commit several GB of memory without increasing the memory usage of my process (as shown in Task Manager); actual memory gets allocated only when I actually access it.
Now, I have seen quite a few examples arguing that one should reserve a large portion of the address space and then commit memory page-by-page (or in some larger blocks, depending on the app's logic). As explained above, however, memory does not seem to be consumed before one accesses it; thus, I'm wondering whether there is any real benefit in committing memory page-by-page. In fact, committing memory page-by-page might actually slow my program down due to the many system calls needed to commit memory. If I commit the entire region at once, I pay for just one system call, but the kernel seems to be smart enough to actually allocate only the memory that I actually use.
I would appreciate it if someone could explain to me which strategy is better.
The difference is that commit "backs" the memory against the page file. To give an example:
Given 2 GB of physical RAM and 2 GB of swap (assume a fixed-size swap file for this purpose):
Reserve 6 GB - OK.
Commit first 2 GB - OK.
Commit remaining 4 GB - fails.
Extend swap file to 8 GB - OK.
Commit remaining 4 GB - succeeds.
The reason for using MEM_COMMIT would primarily be for runtime error suppression (app stability). If you have a process that commits pages on demand, then there's always a chance that a commit along the way could fail if it exceeds the amount of memory + swap available. When the memory has been backed by the page file, you have a strong guarantee that it is available for use from now until the point that you release it.
There are a number of reasons to go one way or the other, and I don't think there's any perfect science to deciding which. MEM_RESERVE alone is only needed for very large sparse-array scenarios, e.g. a multi-gigabyte array which has at most 25-33% utilization (a popular technique for accelerating hash tables, etc.).

Almost everything else is a gray area where you could probably go either way -- MEM_COMMIT up front would make your own app a little more stable and essentially give it priority for physical RAM over competing apps that allocate on demand (if you grab the RAM first, your app will be the last one left standing when physical memory is exhausted). At the same time, if you're not actually using all that RAM, you may end up limiting the multi-tasking potential of your client's machine or causing unnecessary wasted disk space via a growing page file.
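For completeness, a rough sketch of the on-demand pattern being weighed here: reserve the whole range once, then commit chunks only as they are first needed. The GrowableBuffer class and its 1 MB commit granularity are hypothetical choices for illustration, not a recommendation:

```cpp
#include <windows.h>

class GrowableBuffer
{
public:
    explicit GrowableBuffer(SIZE_T maxBytes)
        : base_(VirtualAlloc(nullptr, maxBytes, MEM_RESERVE, PAGE_NOACCESS)),
          committed_(0), max_(maxBytes) {}   // base_ may be null if the reserve fails

    // Ensure at least 'bytes' are committed; returns false if the commit fails
    // (i.e. RAM + page file cannot back it) or the reservation is exceeded.
    bool EnsureCommitted(SIZE_T bytes)
    {
        if (bytes <= committed_) return true;
        if (bytes > max_ || base_ == nullptr) return false;
        SIZE_T newSize = ((bytes + kChunk - 1) / kChunk) * kChunk;  // round up
        if (newSize > max_) newSize = max_;
        if (!VirtualAlloc(static_cast<char*>(base_) + committed_,
                          newSize - committed_, MEM_COMMIT, PAGE_READWRITE))
            return false;
        committed_ = newSize;
        return true;
    }

    void* data() const { return base_; }

    ~GrowableBuffer() { if (base_) VirtualFree(base_, 0, MEM_RELEASE); }

private:
    static const SIZE_T kChunk = 1 << 20;   // 1 MB commit granularity
    void* base_;
    SIZE_T committed_;
    SIZE_T max_;
};
```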
So I'm in a computer architecture class, and I guess I'm having a hard time differentiating between caching and pages.
The only explanation I can come up with is that pages are the OS's way of tricking a program into thinking it's doing all its work in one specified region of memory, whereas a cache is the hardware's way of tricking the OS into thinking it's reading from one specified region of memory when it really isn't.

Does the OS tell the hardware that it needs a "new page", or is that handled when the OS tries to read an address that is "out of range" of the current cache "page" (for lack of a better term)?
Am I on the right track or am I completely crazy?
Caching and pages are orthogonal concepts.
A cache is a high-speed "memory" that acts to minimise the number of accesses to a large low-speed "memory". In the most general sense, the high-speed "memory" could be your hard disk acting to cache web pages fetched from the web (low-speed "memory"). Of course, in the context of computer architecture, the term "cache" is more likely to refer to physical RAM used to speed up access to slower RAM or disk.
Pages, OTOH, are simply a unit of management for the contents of RAM or disk.
These two concepts come together in implementing virtual memory systems. A process may allocate 500 MB of memory. This may be more than the physical RAM available to give to the process, so the operating system allocates blocks on disk, called pages, which will hold the contents of certain logical pages in the process's address-space.
When the process accesses a location in its address-space, and the associated page isn't currently mapped into physical memory, the CPU signals a page fault, and the OS responds by fetching the page from disk while the process is in a suspended state. Once the page is mapped, the process resumes and is able to access that memory location as if it was there all along.
The common view that virtual memory is a way of tricking the process into thinking it has tons of RAM isn't the only way to think about this. You could also think of a process's address-space as being logically stored on disk pages, with the OS-assisted mapping into RAM being just a way to cache the contents of those pages such that the process isn't continually accessing the hard drive. In this sense, caching and paged virtual memory are logically the same thing. Just keep in mind that, while this viewpoint may help to understand the relationship between the two concepts, it isn't entirely accurate, since it is possible to run without virtual memory at all, just physical memory (in fact, most embedded systems run this way).
I think paging is bringing instructions and data from disk or secondary memory into main memory, while caching is bringing instructions and data from main memory into the CPU.
I have 2 GB of RAM and am running a memory-intensive application; the system goes into a low-available-physical-memory state and stops responding to user actions, such as opening an application or invoking a menu.
How do I trigger or tell the system to swap memory out to the page file and free physical memory?
I'm using Windows XP.
If I run the same application on a 4 GB machine, this is not the case; system response is good. Once available physical memory gets tight, the system automatically swaps to the page file and frees physical memory, and it is not as bad as on the 2 GB system.
To overcome this problem (on the 2 GB machine) I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high and I have the same problem as above: little free physical memory.

Even when the memory-mapped file is not mapped into the process's virtual address space, the system cache is high. Why??? :(
Any help is appreciated.
Thanks.
If your data access pattern for using the memory mapped file is sequential, you might get slightly better page recycling by specifying the FILE_FLAG_SEQUENTIAL_SCAN flag when opening the underlying file. If your data pattern accesses the mapped file in random order, this won't help.
You should consider decreasing the size of your map view. That's where all the memory is actually consumed and cached. Since it appears that you need to handle files that are larger than available contiguous free physical memory, you can probably do a better job of memory management than the virtual memory page swapper since you know more about how you're using the memory than the virtual memory manager does. If at all possible, try to adjust your design so that you can operate on portions of the large file using a smaller view.
Even if you can't get rid of the need for full random access across the entire range of the underlying file, it might still be beneficial to tear down and recreate the view as needed to move the view to the section of the file that the next operation needs to access. If your data access patterns tend to cluster around parts of the file before moving on, then you won't need to move the view as often. You'll take a hit to tear down and recreate the view object, but since tearing down the view also releases all the cached pages associated with the view, it seems likely you'd see a net gain in performance because the smaller view significantly reduces memory pressure and page swapping system wide. Try setting the size of the view based on a portion of the installed system RAM and move the view around as needed by your file processing. The larger the view, the less you'll need to move it around, but the more RAM it will consume potentially impacting system responsiveness.
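Something along these lines is what I mean by a sliding view (the function name and sizes are made up for illustration; note that the view offset must be a multiple of the system allocation granularity, normally 64 KB, and error handling is omitted for brevity):

```cpp
#include <windows.h>

// Map a fixed-size window of a large file and move it as processing advances.
void ProcessLargeFile(const wchar_t* path)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING,
                              FILE_FLAG_SEQUENTIAL_SCAN,   // hint for sequential access
                              nullptr);
    LARGE_INTEGER fileSize;
    GetFileSizeEx(file, &fileSize);

    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);

    const ULONGLONG viewSize = 64ull * 1024 * 1024;   // 64 MB window (arbitrary)
    for (ULONGLONG offset = 0; offset < (ULONGLONG)fileSize.QuadPart; offset += viewSize)
    {
        ULONGLONG remaining = (ULONGLONG)fileSize.QuadPart - offset;
        SIZE_T bytes = (SIZE_T)(remaining < viewSize ? remaining : viewSize);
        void* view = MapViewOfFile(mapping, FILE_MAP_READ,
                                   (DWORD)(offset >> 32), (DWORD)offset, bytes);

        // ... process the 'bytes' bytes at 'view' ...

        UnmapViewOfFile(view);   // also releases the cached pages for this window
    }

    CloseHandle(mapping);
    CloseHandle(file);
}
```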
As I think you are hinting in your post, the slow response time is probably at least partially due to delays in the system while the OS writes the contents of memory to the pagefile to make room for other processes in physical memory.
The obvious solution (and possibly not practical) is to use less memory in your application. I'll assume that is not an option or at least not a simple option. The alternative is to try to proactively flush data to disk to continually keep available physical memory for other applications to run. You can find the total memory on the machine with GlobalMemoryStatusEx. And GetProcessMemoryInfo will return current information about your own application's memory usage. Since you say you are using a memory mapped file, you may need to account for that in addition. For example, I believe the PageFileUsage information returned from that API will not include information about your own memory mapped file.
If your application is monitoring the usage, you may be able to use FlushViewOfFile to proactively force data to disk from memory. There is also an API (EmptyWorkingSet) that I think attempts to write as many dirty pages to disk as possible, but that seems like it would very likely hurt performance of your own application significantly. Although, it could be useful in a situation where you know your application is going into some kind of idle state.
And, finally, one other API that might be useful is SetProcessWorkingSetSizeEx. You might consider using this API to give a hint on an upper limit for your application's working set size. This might help preserve more memory for other applications.
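A rough sketch of that kind of monitoring, wiring a few of the APIs above together; the 90% threshold and the decision to flush are my own assumptions, not something the APIs prescribe (link with psapi.lib):

```cpp
#include <windows.h>
#include <psapi.h>

// Periodically check memory pressure and proactively flush the mapped view.
void CheckMemoryPressure(void* mappedView, SIZE_T viewSize)
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);           // system-wide physical/virtual totals

    PROCESS_MEMORY_COUNTERS counters;
    counters.cb = sizeof(counters);
    GetProcessMemoryInfo(GetCurrentProcess(), &counters, sizeof(counters));
    // Note: counters.PagefileUsage will not account for the memory-mapped file.

    if (status.dwMemoryLoad > 90)            // more than ~90% of physical RAM in use
    {
        // Write the view's dirty pages to disk so the OS can reclaim them cheaply.
        FlushViewOfFile(mappedView, viewSize);
    }
}
```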
Edit: This is another obvious statement, but I forgot to mention it earlier. It also may not be practical for you, but it sounds like one of the best things you might do considering that you are running into 32-bit limitations is to build your application as 64-bit and run it on a 64-bit OS (and throw a little bit more memory at the machine).
Well, it sounds like your program needs more than 2GB of working set.
Modern operating systems are designed to use most of the RAM for something at all times, only keeping a fairly small amount free so that it can be immediately handed out to processes that need more. The rest is used to hold memory pages and cached disk blocks that have been used recently; whatever hasn't been used recently is flushed back to disk to replenish the pool of free pages. In short, there isn't supposed to be much free physical memory.
The principal difference between a normal memory allocation and a memory-mapped file is where the data gets stored when it must be paged out of memory. It doesn't necessarily have any effect on when the memory will be paged out, and will have little effect on the time it takes to page it out.
The real problem you are seeing is probably not that you have too little free physical memory, but that the paging rate is too high.
My suggestion would be to attempt to reduce the amount of storage needed by your program, and see if you can increase the locality of reference to reduce the amount of paging needed.
I have a long-running memory hog of an experimental program, and I'd like to know its actual memory footprint. The Task Manager says (in Windows 7 64-bit) that the app is consuming 800 MB of memory, but the total amount of memory allocated, also according to the Task Manager, is 3.7 GB. The sum of all the allocated memory does not equal 3.7 GB. How can I determine, on the fly, how much memory my application is actually consuming?
Corollary: What memory is the task manager actually reporting? It doesn't seem to be all the memory that's allocated to the app itself.
As I understand it, Task Manager shows the working set:

working set: The set of memory pages recently touched by the threads of a process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not being used. When free memory falls below a threshold, pages are trimmed from the working set.
via http://msdn.microsoft.com/en-us/library/cc432779(PROT.10).aspx
You can get Task Manager to show Virtual Memory as well.
I usually use perfmon (Start -> Run... -> perfmon) to track memory usage, using the Private Bytes counter. It reflects memory allocated by your normal allocators (new/HeapAlloc/malloc, etc).
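If you want that same number from inside the program, something like this should work; PrivateUsage in PROCESS_MEMORY_COUNTERS_EX corresponds to perfmon's Private Bytes counter (link with psapi.lib):

```cpp
#include <windows.h>
#include <psapi.h>

// Read the calling process's private bytes (what perfmon shows as "Private Bytes").
SIZE_T GetPrivateBytes()
{
    PROCESS_MEMORY_COUNTERS_EX counters;
    counters.cb = sizeof(counters);
    GetProcessMemoryInfo(GetCurrentProcess(),
                         reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&counters),
                         sizeof(counters));
    return counters.PrivateUsage;   // private commit charge, in bytes
}
```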
Memory is a tricky thing to measure. An application might reserve lots of virtual memory but not actually use much of it. Some of the memory might be shared; that is, a shared DLL might be loaded into the address space of several applications but is only loaded into physical memory once.

A good measure is the working set, which is the set of pages in its virtual address space that have been accessed recently. What "accessed recently" means depends on the operating system and its page replacement algorithm. In other words, it is the actual set of virtual pages that are mapped into physical memory and are in use at the moment. This is what the task manager shows you.

The virtual memory usage is the number of virtual pages that have been reserved (note that not all of these will have actually been committed, that is, had physical backing store allocated for them). You can add this to the display in task manager by clicking View -> Select Columns.
The most important thing though: If you want to actually measure how much memory your program is using to see if you need to optimize some of it for space or choose better data structures or persist some things to disk, using the task manager is the wrong approach. You should almost certainly be using a profiler.
That depends on what memory you are talking about. Unfortunately there are many different ways to measure memory. For instance ...
Physical Memory Allocated
Virtual Memory Allocated
Virtual Memory Reserved (but not committed)
Private Bytes
Shared Bytes
Which metric are you interested in?
I think most people tend to be interested in the "Virtual Memory Allocated" category.
The memory statistics displayed by Task Manager are not nearly all the statistics available, nor are they particularly well presented. I would use the great free tool from Microsoft Sysinternals, VMMap, to analyse the memory used by the application further.
If it is a long-running application and the memory usage grows over time, it is going to be the heap that is growing. Parts of the heap may or may not be paged out to disk at any time, but you really need to optimize your heap usage. In this case you need to profile your application. If it is a .NET application then I can recommend Redgate's ANTS profiler. It is very easy to use. If it's a native application, then the Intel VTune profiler is pretty powerful. You don't need the source code for the process you are profiling with either tool.
Both applications have a free trial. Good luck.