What is Peak Working Set in windows task manager - windows

I'm confused about the Windows Task Manager memory overview. In the general memory overview it shows "In use" 7.9 GB (in my sample).
I've used Process Explorer to sum up the used memory, and it shows me the following:
Since this is the nearest number to the 7.9 GB in Task Manager, I guess this is the value shown there.
Now my question:
What is the Peak working set?
If I hover over the column in Task Manager, it says:
and the Microsoft help says "Maximum amount of working set memory used by the process."
Is it the memory currently being used by all processes, or the maximum amount of memory that was ever used by all processes?

The number you refer to is "Memory used by processes, drivers and the operating system" [source].
This is an easy but somewhat vague description. A somewhat similar description would be the total amount of memory that is not free, or part of the buffer cache, or part of the standby list.
It is not the maximum memory used at some time ("peak"), it's a coincidence that you have roughly the same number there. It is the presently used amount (used by "everyone", that is all programs and the OS).
The peak working set is a different thing. The working set is the amount of memory in a process (or, if you consider several processes, in all these processes) that is currently in physical memory. The peak working set is, consequently, the maximum value so far seen.
A process may allocate more memory than it ever actually commits ("uses"), and most processes will commit more memory than they have in their working set at any one time. This is perfectly normal. Pages are moved in and out of working sets (and onto the standby list) to ensure that the computer, which has only a finite amount of memory, always has enough reserves to satisfy any memory needs.

The memory figures in question aren't actually a reliable indicator of how much memory a process is using.
A brief explanation of each of the memory relationships:
Private Bytes are what the process has allocated, including pagefile usage.
Working Set is the non-paged Private Bytes plus memory-mapped files.
Virtual Bytes are the Working Set plus paged Private Bytes and the standby list.
In answer to your question, the peak working set is the maximum amount of physical RAM that was ever assigned to the process in question.
~ Update ~
Available memory is defined as the sum of the standby list plus free memory. Total memory usage involves far more than the sum of all process working sets. Because of this, and due to memory sharing, that sum is generally not very useful.
The virtual size of a process is the portion of a process's virtual address space that has been allocated for use. There is no relationship between this and physical memory usage.
Private bytes is the portion of a process's virtual address space that has been allocated for private use. It does not include shared memory or memory used for code. There is no relationship between this value and physical memory usage either.
Working set is the amount of physical memory in use by a process. Due to memory sharing there will be some double counting in this value.
The terms mentioned above aren't really going to mean very much until you understand the basic concepts in Windows memory management. Have a look HERE for some further reading.

Related

What is the threshold used by the Windows memory manager to determine when to begin swapping pages to disk?

I see this comment in the Perfmon counter "Memory"-"Working Set" description:
If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets.
I haven't been able to find any documentation regarding this threshold value. Is it a percentage of available RAM? Is it when the commit charge consumes all available RAM? How does the system treat kernel pages vs. user-mode pages?
So my questions are:
What is that threshold?
Is there a way to detect it?
Do different versions of Windows have different behavior or thresholds?
The scenario is that my process will try to use as much memory as there is available physical RAM. Once that limit is reached I can deallocate and cache on disk certain chunks of memory to make room for new stuff. It does this to help alleviate page file thrashing when memory conditions are low. I'd like to perform the deallocation BEFORE the memory manager begins swapping pages to disk because the memory usage has passed the magic threshold.
I currently use the MEMORYSTATUSEX::ullAvailPhys value (filled by GlobalMemoryStatusEx) to identify the amount of available physical memory.
Windows uses as much RAM as possible, both for programs and for the disk cache, so that it doesn't have to swap heavily at some point.
If you want more RAM for running applications, you have to shrink the disk cache.
There is a Sysinternals tool for setting the disk-cache size:
cacheset.exe
You can find it here:
http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx

what's the difference between working set and commit size?

When debugging OOM bugs, what is the difference between working set and commit size? In particular, what is the exact meaning of commit size?
From here, the working set is:
... a count of physical memory (RAM) rather than virtual address space. It represents the subset of the process's virtual address space that is valid, meaning that it can be referenced without incurring a page fault.
The commit size is:
the total amount of pageable virtual address space for which no backing store is assigned other than the pagefile. On systems with a pagefile, it may be thought of as the maximum potential pagefile usage. On systems with no pagefile, it is still counted, but all such virtual address space must remain in physical memory (RAM) at all times.
So you can think of the working set as the amount of physical memory used, while the commit size indicates the amount of virtual memory used (minus things like DLLs or memory-mapped files, which can be backed by files other than the page file).
That said, these numbers are not generally useful when trying to find "memory leaks" in .NET. Instead, you should use third-party memory profilers.

What is private bytes, virtual bytes, working set?

I am trying to use the perfmon windows utility to debug memory leaks in a process.
This is how perfmon explains the terms:
Working Set is the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed they will then be soft-faulted back into the Working Set before leaving main memory.
Virtual Bytes is the current size, in bytes, of the virtual address space the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and by using too much, the process can limit its ability to load libraries.
Private Bytes is the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.
These are the questions I have:
Is it Private Bytes that I should measure to be sure the process has no leaks, since it does not involve any shared libraries, and any leaks, if they happen, will come from the process itself?
What is the total memory consumed by the process? Is it the Virtual Bytes or is it the sum of Virtual Bytes and Working Set?
Is there any relation between Private Bytes, Working Set and Virtual Bytes?
Are there any other tools that give a better idea of the memory usage?
The short answer to this question is that none of these values are a reliable indicator of how much memory an executable is actually using, and none of them are really appropriate for debugging a memory leak.
Private Bytes refer to the amount of memory that the process executable has asked for - not necessarily the amount it is actually using. They are "private" because they (usually) exclude memory-mapped files (i.e. shared DLLs). But - here's the catch - they don't necessarily exclude memory allocated by those files. There is no way to tell whether a change in private bytes was due to the executable itself, or due to a linked library. Private bytes are also not exclusively physical memory; they can be paged to disk or in the standby page list (i.e. no longer in use, but not paged yet either).
Working Set refers to the total physical memory (RAM) used by the process. However, unlike private bytes, this also includes memory-mapped files and various other resources, so it's an even less accurate measurement than the private bytes. This is the same value that gets reported in Task Manager's "Mem Usage" and has been the source of endless amounts of confusion in recent years. Memory in the Working Set is "physical" in the sense that it can be addressed without a page fault; however, the standby page list is also still physically in memory but not reported in the Working Set, and this is why you might see the "Mem Usage" suddenly drop when you minimize an application.
Virtual Bytes are the total virtual address space occupied by the entire process. This is like the working set, in the sense that it includes memory-mapped files (shared DLLs), but it also includes data in the standby list and data that has already been paged out and is sitting in a pagefile on disk somewhere. The total virtual bytes used by every process on a system under heavy load will add up to significantly more memory than the machine actually has.
So the relationships are:
Private Bytes are what your app has actually allocated, including pagefile usage;
Working Set is the non-paged Private Bytes plus memory-mapped files;
Virtual Bytes are the Working Set plus paged Private Bytes and standby list.
There's another problem here; just as shared libraries can allocate memory inside your application module, leading to potential false positives reported in your app's Private Bytes, your application may also end up allocating memory inside the shared modules, leading to false negatives. That means it's actually possible for your application to have a memory leak that never manifests itself in the Private Bytes at all. Unlikely, but possible.
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing and growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
One of the most effective tools for detecting/correcting memory leaks in Windows is actually Visual Studio (link goes to page on using VS for memory leaks, not the product page). Rational Purify is another possibility. Microsoft also has a more general best practices document on this subject. There are more tools listed in this previous question.
I hope this clears a few things up! Tracking down memory leaks is one of the most difficult things to do in debugging. Good luck.
The definition of the perfmon counters has been broken since the beginning and for some reason appears to be too hard to correct.
A good overview of Windows memory management is available in the video "Mysteries of Memory Management Revealed" on MSDN. It covers more topics than are needed to track memory leaks (e.g. working set management), but gives enough detail on the relevant topics.
To give you a hint of the problem with the perfmon counter descriptions, here is the inside story about private bytes from "Private Bytes Performance Counter -- Beware!" on MSDN:
Q: When is a Private Byte not a Private Byte?
A: When it isn't resident.
The Private Bytes counter reports the commit charge of the process. That is to say, the amount of space that has been allocated in the swap file to hold the contents of the private memory in the event that it is swapped out. Note: I'm avoiding the word "reserved" because of possible confusion with virtual memory in the reserved state which is not committed.
From "Performance Planning" on MSDN:
3.3 Private Bytes
3.3.1 Description
Private memory is defined as memory allocated for a process which cannot be shared by other processes. This memory is more expensive than shared memory when multiple such processes execute on a machine. Private memory in (traditional) unmanaged DLLs usually consists of C++ statics and is of the order of 5% of the total working set of the DLL.
You should not try to use perfmon, task manager or any tool like that to determine memory leaks. They are good for identifying trends, but not much else. The numbers they report in absolute terms are too vague and aggregated to be useful for a specific task such as memory leak detection.
A previous reply to this question has given a great explanation of what the various types are.
You ask about a tool recommendation:
I recommend Memory Validator. Capable of monitoring applications that make billions of memory allocations.
http://www.softwareverify.com/cpp/memory/index.html
Disclaimer: I designed Memory Validator.
There is an interesting discussion here: http://social.msdn.microsoft.com/Forums/en-US/vcgeneral/thread/307d658a-f677-40f2-bdef-e6352b0bfe9e/
My understanding of this thread is that freeing small allocations is not reflected in Private Bytes or the Working Set.
Long story short: if I call
p = malloc(1000);
free(p);
then Private Bytes reflects only the allocation, not the deallocation. If I call
p = malloc(n);  /* n > 512 KB */
free(p);
then Private Bytes correctly reflects both the allocation and the deallocation.

Help me understand these memory statistics from Process Explorer

I'm trying to do a very rough measurement of the amount of memory my large financial calculation requires in order to run. It's a very simple command-line tool which prices up a large number of financial instruments and then prints out a result.
I decided to use Process Explorer to view the memory requirements of the program. Can somebody kindly explain the difference between the two fields labeled a and b in the screenshot:
I currently believe that:
The value labeled "a" (Peak Private Bytes) is the largest amount of memory (both actual physical memory and virtual memory on disk) which was allocated to the process at any instantaneous moment.
The value labeled "b" (Peak Working Set) is the largest amount of physical memory allocated at any instant during the life of the process.
From here:
The working set is the set of memory pages that were touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process, even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If the pages are needed, they will be soft-faulted back into the working set before leaving main memory.
[Private bytes are] bytes, that this process has allocated that cannot be shared with other processes.
What "peak" means in that context should be obvious.
Random thoughts, based on observation and on what Process Explorer's display says.
Working Set is in the Physical Memory section of the display, so anyone saying it is virtual memory is confused. It also changes by odd amounts, as RAM usage normally would, so Working Set does look like physical memory.
Private Bytes, on the other hand, is listed as virtual memory, and watching it, it seems to change in multiples of 16 K, as virtual memory normally does when it swaps pages in and out rather than changing by random amounts. For some reason I thought this should be 64 K pages, but I suppose that depends on the machine and version of Windows.

How much memory is my windows app really using?

I have a long-running memory hog of an experimental program, and I'd like to know its actual memory footprint. Task Manager says (in Windows 7 64-bit) that the app is consuming 800 MB of memory, but the total amount of memory allocated, also according to Task Manager, is 3.7 GB. The sum of all the allocated memory does not equal 3.7 GB. How can I determine, on the fly, how much memory my application is actually consuming?
Corollary: What memory is the task manager actually reporting? It doesn't seem to be all the memory that's allocated to the app itself.
As I understand it, Task Manager shows the Working Set:
working set: The set of memory pages recently touched by the threads of a process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not being used. When free memory falls below a threshold, pages are trimmed from the working set.
via http://msdn.microsoft.com/en-us/library/cc432779(PROT.10).aspx
You can get Task Manager to show Virtual Memory as well.
I usually use perfmon (Start -> Run... -> perfmon) to track memory usage, using the Private Bytes counter. It reflects memory allocated by your normal allocators (new/HeapAlloc/malloc, etc).
Memory is a tricky thing to measure. An application might reserve lots of virtual memory but not actually use much of it. Some of the memory might be shared; that is, a shared DLL might be loaded in to the address space of several applications but it is only loaded in to physical memory once.
A good measure is the working set, which is the set of pages in its virtual address space that have been accessed recently. What the meaning of 'accessed recently' is depends on the operating system and its page replacement algorithm. In other words, it is the actual set of virtual pages that are mapped in to physical memory and are in use at the moment. This is what the task manager shows you.
The virtual memory usage is the amount of virtual pages that have been reserved (note that not all of these will actually have been committed, that is, had physical backing store allocated for them). You can add this to the display in Task Manager by clicking View -> Select Columns.
The most important thing though: If you want to actually measure how much memory your program is using to see if you need to optimize some of it for space or choose better data structures or persist some things to disk, using the task manager is the wrong approach. You should almost certainly be using a profiler.
That depends on what memory you are talking about. Unfortunately there are many different ways to measure memory. For instance ...
Physical Memory Allocated
Virtual Memory Allocated
Virtual Memory Reserved (but not committed)
Private Bytes
Shared Bytes
Which metric are you interested in?
I think most people tend to be interested in the "Virtual Memory Allocated" category.
The memory statistics displayed by Task Manager are not nearly all the statistics available, nor are they particularly well presented. I would use VMMap, a great free tool from Microsoft Sysinternals, to analyse the memory used by the application further.
If it is a long-running application and the memory usage grows over time, it is going to be the heap that is growing. Parts of the heap may or may not be paged out to disk at any time, but you really need to optimize your heap usage. In this case you need to profile your application. If it is a .NET application then I can recommend Redgate's ANTS profiler. It is very easy to use. If it's a native application, then the Intel VTune profiler is pretty powerful. You don't need the source code of the process you are profiling for either tool.
Both applications have a free trial. Good luck.
P.S. Sorry I didn't include more hyperlinks to the tools, but this is my first post, and stackoverflow limits first posts to one hyperlink :-(
