I've been tracking down a few memory leaks in my application. It's been a real pain, but I've finally tightened everything up. However, there's one bit of Windows memory management that is confusing me. Here is a printout of the app's memory usage over time...
Time      PrivateMemorySize64 (KB)   WorkingSet64 (KB)
20:00:36  47480                      50144
20:01:06  47480                      50144
20:01:36  47480                      50144
20:02:06  47480                      149540
20:02:36  47480                      149540
20:03:06  47480                      149540
The working set jumped from 49 MB to 146 MB in the space of 30 seconds. This happened overnight, while the application was doing essentially nothing.
The working set (which is what Task Manager shows me) can apparently be influenced by other applications, such as debuggers (as I learned while hunting for the leaks). Even after reading the documentation on what the working set is, I still don't have a good understanding of it.
Any help is appreciated.
Update: Thanks to some links from responders, as well as some additional searching, I have a better understanding of how a separate process can cause my process's working set to grow. Good to know that a spike in the working set is not necessarily an indication that your app is leaking... further reason not to rely on Task Manager for memory evaluation :)
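For anyone curious, that external influence is easy to reproduce yourself. Here is a minimal native sketch (my own illustration, not from any of the linked articles) that trims another process's working set the way a debugger or memory tool might; you pass the target PID on the command line:

```cpp
// trim_ws.cpp - illustration only: shrink another process's working set
// from the outside, the way a debugger or memory tool might.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstdlib>

#pragma comment(lib, "psapi.lib")

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: trim_ws <pid>\n"); return 1; }
    DWORD pid = std::strtoul(argv[1], nullptr, 10);

    // PROCESS_SET_QUOTA is the access right needed to alter working-set quotas.
    HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA,
                           FALSE, pid);
    if (!h) { std::printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    // EmptyWorkingSet evicts the target's pages from its working set.
    // Its private bytes are untouched; only the working-set number drops.
    if (!EmptyWorkingSet(h))
        std::printf("EmptyWorkingSet failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}
```

Running this against a process should make its WorkingSet64 number drop while PrivateMemorySize64 stays flat, which is exactly the kind of decoupling the table above shows.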
Helpful links:
A few words on memory usage or: working set vs. private working set
CyberNotes: Windows Memory Usage Explained
Simply put, the working set is the collection of memory pages currently owned by your process and not swapped out (i.e. resident in RAM). That is somewhat inaccurate, however; reality is a lot more complicated.
Windows maintains a minimum working set size and a maximum working set size for every process. The minimum working set is easy: it is what Windows will grant to every process (as long as physical limits allow).
The maximum working set is more dubious. If your program uses more memory than fits in its quota, Windows will drop some pages. However, while they are no longer in your working set, these pages are not necessarily "gone".
Rather, those pages are removed from your working set and moved to the pool of available pages. As a consequence, if some other program needs more memory and no free pages are left over, your pages will be repurposed and assigned to a different process. When you access them, they will have to be fetched from the page file again, possibly evicting other pages if you are still above the maximum working set size.
However, if nobody asked for more memory in the meantime (or if all demands could be satisfied by pages that were unused anyway), then accessing one of those pages will simply make it "magically reappear" and kick out another page in its stead.
Thus, your process can have more pages in RAM than are actually in its working set, but it does not "officially" own them.
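Incidentally, the two quotas this answer talks about can be queried directly. A small sketch for the current process:

```cpp
// ws_quota.cpp - read the working-set quota Windows maintains for us.
#include <windows.h>
#include <cstdio>

int main()
{
    SIZE_T minWs = 0, maxWs = 0;
    if (GetProcessWorkingSetSize(GetCurrentProcess(), &minWs, &maxWs))
    {
        // By default these are soft limits: Windows may trim below the
        // minimum or let the process grow past the maximum when RAM allows.
        std::printf("min working set: %zu KB\n", minWs / 1024);
        std::printf("max working set: %zu KB\n", maxWs / 1024);
    }
    else
    {
        std::printf("GetProcessWorkingSetSize failed: %lu\n", GetLastError());
    }
    return 0;
}
```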
The Resident Set/Working Set is the portion of the virtual address space which is currently resident in physical memory, and therefore isn't swapped out.
Related
We have a single-page application which runs well at the beginning but slows down sharply as time goes on. I am trying to find the root cause.
I used Chrome DevTools to record the timeline for the initial page load and a typical user operation. The JS Heap graph shows that memory usage looks OK: it goes up and down periodically (presumably due to garbage collection by the browser).
However, when I check the Chrome Task Manager, I find that my page uses 60 MB of memory initially, but after an hour (and some user operations) it has grown to 160 MB, while the JavaScript memory appears stable. I later observed that the memory usage never goes down.
I guess there may be a memory leak in our JavaScript code? But the JS heap seems OK. Does Chrome hold on to that memory and perhaps release it later (say, when another process needs more memory)?
Here is the Timeline recorded while I was operating: [screenshot]
I googled but could not find an explanation for this. Could anybody help? Thanks.
It is because of an interval (a setInterval timer) that is never cleared; it keeps calling a function too frequently. Clearing it with clearInterval once it is no longer needed stops the growth.
As I understand the creation of processes, every process has its own space in RAM for its heap, data, etc., which is allocated upon its creation. Many processes can share their data and storage space in some ways. But since terminating a process erases its allocated memory (and therefore also its caches), I was wondering whether it is possible for many (similar) processes to share a cache in memory that is not allocated to any specific process, so that it can still be used after those processes are terminated and other ones are created.
This is a theoretical question from a student's perspective, so I am merely interested in what operating systems can do in general, without adding new functionality to them to achieve it.
For example, think of a web server that uses only single-threaded processes (perhaps due to lack of multi-threading support), so that most of the processes created do similar jobs, like retrieving a certain page.
There are at least four ways in which what you describe can occur.
First, the system address space is shared by all processes. The operating system can save data there that survives the death of a process.
Second, processes can map logical pages to the same physical page frame. The termination of one process does not cause that page frame to be deallocated for the other processes.
Third, some operating systems have support for writable shared libraries.
Fourth, memory-mapped files (see the sketch after this list).
There are probably others as well.
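To make the second and fourth mechanisms concrete, here is a hedged sketch using a named, pagefile-backed section on Windows (the section name Local\StudentCache is made up). The physical frames backing the section outlive any single process, but only for as long as at least one process still holds a handle or a mapped view:

```cpp
// shared_cache.cpp - two processes can share one physical page frame
// through a named section; neither process "owns" the memory exclusively.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    // INVALID_HANDLE_VALUE => backed by the page file, not a real file.
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr,
                                     PAGE_READWRITE, 0, 4096,
                                     "Local\\StudentCache"); // hypothetical name
    if (!hMap) { std::printf("CreateFileMapping failed: %lu\n", GetLastError()); return 1; }

    // If another process already created the section, we attach to it.
    bool alreadyExisted = (GetLastError() == ERROR_ALREADY_EXISTS);

    char* view = static_cast<char*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096));
    if (!view) { CloseHandle(hMap); return 1; }

    if (alreadyExisted)
        std::printf("found cached data: %s\n", view); // written by a peer
    else
        std::strcpy(view, "cached result");           // first one in fills it

    UnmapViewOfFile(view);
    CloseHandle(hMap); // the frames are freed once the last handle closes
    return 0;
}
```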
I think so: when a process is terminated, its RAM is cleared. However, you're right that things such as web pages will be stored in the cache for when they're recalled. For example:
You open Google, then go to another tab and close the open Google page; when you next go to Google, it loads faster.
However, what I think you're asking is: if the entire program, e.g. Google Chrome or Safari, is closed, does the web page you just had open stay in the cache? No; when the program is closed, all of its related data is also discarded in order to fully close the program.
I guess this page has some info on it -
https://www.wikipedia.org/wiki/Shared_memory
I was trying to understand the following:
I know that page tables are built by the virtual memory manager at some point for translation between virtual and physical memory. Since there are many processes running on a system, even though only one process is active at a time per CPU, I was wondering whether the page tables for inactive processes are moved to the page file at any point. Given that the lower 2 GB area is reserved for Windows, it would make sense for Windows to keep the page tables for all processes on the system. Although it would also make sense that they are moved to the page file when the current process is switched out?
The same goes for writable (data) pages. Will Windows keep all the data pages for all processes in memory, or move them to the page file at some point? On my machine, Task Manager says 1.5 GB of 3 GB RAM is being utilized, and the Performance tab shows 1.5 GB as system cache, so my understanding is that data stays in physical memory for all applications. But would there be a time when it needs to be moved to the paging file?
I was wondering whether page tables for inactive process are moved to page file at any point of time?
Yes, page tables are pageable.
Will windows keep all the data pages for all the process in memory or move them to page file at some point.
As far as the Windows paging policy is concerned, there are two kinds of memory: pageable and non-pageable. It doesn't really matter which process it belongs to, or even whether it belongs to the O/S itself; if it's pageable, then it's subject to being paged out. So, yes, Windows will page out process data pages if necessary.
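You can see the pageable/non-pageable split from user mode, too: a process may ask that a region never be paged out. A minimal sketch (note that locked pages count against the working-set quota, so locking large regions can fail):

```cpp
// pin_pages.cpp - make a buffer non-pageable with VirtualLock.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T size = 64 * 1024;
    void* buf = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    if (!buf) return 1;

    // Locked pages stay in physical RAM: the memory manager will not
    // write them to the page file or trim them from the working set.
    if (VirtualLock(buf, size))
        std::printf("pinned %zu KB in RAM\n", size / 1024);
    else
        std::printf("VirtualLock failed: %lu\n", GetLastError());

    VirtualUnlock(buf, size);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```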
I suggest reading the memory management chapter in the Windows Internals book, it should cover all of this.
You are actually asking two questions here.
What's the paging policy regarding the page tables.
What's the paging policy for "writable data" pages (i.e. virtual memory with R/W permissions).
First I'll correct you a little.
"Given the fact that lower 2 GB area is reserved for windows, it would make sense that windows would keep page tables for all processes on the system"
To be exact, it's the upper 2 GB that is reserved for Windows; more precisely, that range may be accessed only in kernel mode, by the Windows kernel and drivers.
Now, this may surprise you, but kernel memory may be pageable too! So technically it's not important at all which portion of the 32-bit address space is visible in user or kernel mode; it's not related to paging.
Another correction: a virtual page may simultaneously be in physical memory and saved in the page file. There's a common belief that the OS frees physical storage by saving pages to the page file on demand. Wrong.
Actually, Windows saves memory pages to the page file before they need to be freed. In fact, it dumps memory pages to the page file in the background (apart from those that are backed by other files, such as mapped sections). There are two reasons for this:
During high load, the OS can free memory pages more quickly (since they're already saved).
In kernel mode, paging is not always possible. Drivers that run at high IRQL (i.e. serve the most time-critical events) may not access the storage drivers, hence paging is not possible for them.
So, the answers to your questions are:
Don't know for sure; it depends on the OS implementation details. I see no reason why per-process page tables may not be paged out: they are needed during a context switch and when modifying process virtual memory, and neither situation belongs to the time-critical events.
Definitely, "writable data" memory pages are saved to the page file. Are they removed from physical memory? Only on demand, under memory load, in least-recently-used order.
I run Windows 7 RC1, which uses the same Windows Task Manager (WTM) as Vista. When I look at the processes, there are some columns whose differences I'm not sure of:
Memory - working set
Memory - private working set
Memory - commit size
Can anyone tell me what they are?
From the following article, under the section Types of Memory Usage:
There are two main types of memory usage: working set and private working set. The private working set is the amount of memory used by a process that cannot be shared among other processes, while the working set also includes the memory shared with other processes.
That may sound confusing, so let's try to simplify it a bit. Let's pretend that there are two kids who are coloring, and both of the kids have 5 of their own crayons. They decide to share some of their crayons so that they have more colors to choose from. When each child is asked how many crayons they used, both of them say they used 7 crayons, because each borrowed 2 of the other's crayons.
The point of that metaphor is that one might assume there were a total of 14 crayons if they didn't know that the two kids were sharing, but in reality there were only 10 crayons available. Here is the rundown:
Working Set: This counts the shared crayons on both sides, so the total would be 14.
Private Working Set: This includes only the crayons that each child owns, and doesn't reflect how many were actually used in each picture. The total is therefore 10.
This is a really good comparison to how memory is measured. Many applications reuse code that you already have on your system, because in the end that helps reduce overall memory consumption. If you are looking at working-set memory usage you might get confused, because all of your running processes may actually add up to more than the amount of RAM you have installed; that is the same problem we had with the crayon metaphor above. Naturally, the working set will always be at least as large as the private working set.
Working set:
The working set is the subset of a process's virtual pages that are resident in physical memory; it covers only part of that process's pages.
Private working set:
The private working set is the amount of memory used by a process that cannot be shared among other processes.
Commit size:
Amount of virtual memory that is reserved for use by a process.
And at microsoft.com you can find more details about other memory types.
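If you would rather read these numbers programmatically than from Task Manager, here is a sketch using GetProcessMemoryInfo. One caveat: PrivateUsage corresponds to the commit size; the private working set is not exposed by this API (it is available through the "Working Set - Private" performance counter instead):

```cpp
// mem_columns.cpp - read the Task Manager style counters for ourselves.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

#pragma comment(lib, "psapi.lib")

int main()
{
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc)))
    {
        // WorkingSetSize -> the "Working Set" column (shared + private, in RAM)
        // PrivateUsage   -> the "Commit Size" column (private committed bytes)
        std::printf("working set: %zu KB\n", pmc.WorkingSetSize / 1024);
        std::printf("commit size: %zu KB\n", pmc.PrivateUsage / 1024);
    }
    return 0;
}
```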
'Working Set' is the amount of memory that the process currently has in physical RAM. In other words, accessing any pages in the 'Working Set' will not cause a page fault since the page is in RAM.
As for the other two, I'm not 100% sure; probably 'Working Set' includes shareable memory, such as memory-mapped files, and 'Private Working Set' includes only pages that this process alone can use and that are not shareable.
Have a look at this site and search for the speaker 'Dave Solomon'. There is an excellent webcast he gave that explains Windows memory, and he covers working sets, commit sizes, and other memory terms.
EDIT:
Those site links are indeed dead :(
Instead, you can search Google for
vimeo david solomon windows
Those same videos look to be available on Vimeo now, which is cool.
If you open the Resource Monitor from the WTM, hovering over the various column headings for the process of interest displays a pretty informative tooltip.
e.g.
Commit(KB): Amount of virtual memory reserved by the operating system for the process in KB.
etc.
This article at Microsoft seems to be the most detailed.
Edit Oct 2018: new link
Is there any way to set a system-wide memory limit that a process can use in Windows XP? I have a couple of unstable apps which work fine most of the time but can hit a bug that eats all available memory in a matter of seconds (or at least I suppose that's what happens). This results in a hard reset, because Windows becomes totally unresponsive and I lose my work.
I would like to be able to do something like /etc/limits on Linux: setting M90, for instance, to cap a single user's allocations at 90% of memory, so the system keeps the remaining 10% no matter what.
Use Windows Job Objects. Jobs are like process groups and can limit memory usage and process priority.
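As a sketch of that approach (the 512 MB cap and the flaky_app.exe name are placeholders), the job makes allocations inside the child start failing at the limit, instead of the whole machine grinding to a halt:

```cpp
// job_limit.cpp - cap a child process's committed memory with a Job Object.
#include <windows.h>

int main()
{
    HANDLE hJob = CreateJobObjectA(nullptr, nullptr);
    if (!hJob) return 1;

    // Per-process commit limit; JOB_OBJECT_LIMIT_JOB_MEMORY would cap the
    // whole job instead.
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 512 * 1024 * 1024; // arbitrary 512 MB cap
    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return 1;

    // Launch the flaky app suspended, put it in the job, then let it run.
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmd[] = "flaky_app.exe"; // hypothetical target
    if (CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                       CREATE_SUSPENDED, nullptr, nullptr, &si, &pi))
    {
        AssignProcessToJobObject(hJob, pi.hProcess);
        ResumeThread(pi.hThread);
        // Inside the job, VirtualAlloc/HeapAlloc fail once 512 MB is committed.
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hJob);
    return 0;
}
```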
Use the Application Verifier (AppVerifier) tool from Microsoft.
In my case I needed to simulate memory no longer being available, so I did the following in the tool:
Added my application
Unchecked Basic
Checked Low Resource Simulation
Changed TimeOut to 120000 - my application will run normally for 2 minutes before anything goes into effect.
Changed HeapAlloc to 100 - 100% chance of heap allocation error
Set Stacks to true - the stack will not be able to grow any larger
Save
Start my application
After 2 minutes my program could no longer allocate new memory, and I was able to see how everything was handled.
Depending on your applications, it might be easier to limit the memory the language runtime uses. For example, with Java you can set the maximum heap the JVM may allocate (e.g. java -Xmx512m).
Otherwise it is possible to set it per process with the Windows API:
SetProcessWorkingSetSize Function
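Note that the maximum passed to SetProcessWorkingSetSize is only a soft limit by default. To actually enforce a ceiling, you need the Ex variant with the hard-limit flag; a sketch for the current process (the 4 MB / 64 MB figures are arbitrary):

```cpp
// hard_ws_cap.cpp - enforce a hard working-set ceiling on the current process.
#include <windows.h>
#include <cstdio>

int main()
{
    // Without QUOTA_LIMITS_HARDWS_MAX_ENABLE the maximum is only advisory:
    // Windows lets the process grow past it when physical memory is free.
    if (SetProcessWorkingSetSizeEx(GetCurrentProcess(),
                                   4 * 1024 * 1024,   // 4 MB minimum
                                   64 * 1024 * 1024,  // 64 MB hard maximum
                                   QUOTA_LIMITS_HARDWS_MIN_DISABLE |
                                   QUOTA_LIMITS_HARDWS_MAX_ENABLE))
        std::printf("hard working-set cap installed\n");
    else
        std::printf("failed: %lu\n", GetLastError());
    return 0;
}
```

As the next answer points out, this caps resident memory, not total allocation: pages beyond the cap are simply paged out rather than refused.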
No way to do this that I know of, although I'm very curious to read if anyone has a good answer. I have been thinking about adding something like this to one of the apps my company builds, but have found no good way to do it.
The one thing I can think of (although not directly on point) is that I believe you can limit the total memory usage for a COM+ application in Windows. It would require the app to be written to run in COM+, of course, but it's the closest way I know of.
The working set stuff is good (Job Objects also control working sets), but that's not total memory usage, only real (paged-in) memory usage at any one time. It may work for what you want, but AFAIK it doesn't limit total allocated memory.
Per process limits
From an end-user perspective, there are some helpful answers (and comments) at the superuser question “Is it possible to limit the memory usage of a particular process on Windows”, including discussions of how to set recursive quota limits on any or all of:
CPU assignment (quantity, affinity, NUMA groups),
CPU usage,
RAM usage (both ‘committed’ and ‘working set’), and
network usage,
… mostly via the built-in Windows 'Job Objects' system (as mentioned in @Adam Mitz's answer and @Stephen Martin's comment above), using:
the registry (for persistence, when desired) or
free tools, such as the open-source Process Governor.
(Note: nested Job Objects may not have been available under all earlier versions of Windows, but the un-nested version appears to date back to Windows XP.)
Per-user limits
As far as overall per-user quotas:
??
It is possible that each user session is automatically assigned to a job group itself; if true, per-user limits could be applied to that job group. Update: nope; Job Objects can only be nested at the time they are created or associated with a specific process, and in some cases a child Job Object is allowed to 'break free' from its parent and become independent, so they can't facilitate 'per-user' resource limits.
(NTFS does support per-user file system storage quotas, though.)
Per-system limits
Besides simple BIOS or ‘energy profile’ restrictions:
VM hypervisor or Kubernetes-style container resource limit controls may be the most straightforward (in terms of end-user understandability, at least) option.
Footnotes, regarding per-process and other resource quotas / QoS for non-Windows systems:
‘Classic’ Mac OS (including ‘classic’ applications running on 2000s-era versions of Mac OS X): per-application memory limits can be easily set within the ‘Memory’ section of the Finder ‘Get Info’ window for the target program; as a system using a cooperative multitasking concurrency model, per-process CPU limits were impossible.
BSD: ? (probably has some overlap with Linux and non-proprietary macOS methods?)
macOS (aka ‘Mac OS X’): no user-facing interface; system support includes, depending on version, the ‘Multiprocessing Services API’, Grand Central Dispatch, POSIX threads / pthread, ‘operation objects’, and possibly others.
Linux: ‘Resource Manager’/limits.conf, control groups/‘cgroups’, process priority/‘niceness’/renice, others?
IBM z/OS and other mainframe-style systems: resource controls / allocation was built-in from nearly the beginning