Set Windows process (or user) memory limit

Is there any way to set a system-wide memory limit a process can use in Windows XP? I have a couple of unstable apps which work fine most of the time but can hit a bug which eats all available memory in a matter of seconds (or at least that's what I suppose happens). This results in a hard reset, as Windows becomes totally unresponsive and I lose my work.
I would like to be able to do something like /etc/limits on Linux: setting M90, for instance, to cap memory allocation for a single user at 90%, so the system keeps the remaining 10% no matter what.

Use Windows Job Objects. Jobs are like process groups and can limit memory usage and process priority.
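A minimal sketch of that approach (Win32 C++; the 512 MB cap and the use of notepad.exe as the target are only illustrative, and error handling is omitted):

#include <windows.h>

// Launch a child suspended, cap its committed memory via a Job Object,
// then let it run. If it exceeds the cap, its allocations start failing
// instead of the whole machine grinding to a halt.
int main() {
    HANDLE job = CreateJobObjectW(nullptr, nullptr);

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 512 * 1024 * 1024;  // bytes of commit charge
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"notepad.exe";  // stand-in for the unstable app
    CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                   CREATE_SUSPENDED, nullptr, nullptr, &si, &pi);

    AssignProcessToJobObject(job, pi.hProcess);  // the limit applies from here on
    ResumeThread(pi.hThread);
    return 0;
}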

Use the Application Verifier (AppVerifier) tool from Microsoft.
In my case I needed to simulate memory no longer being available, so I did the following in the tool:
Added my application
Unchecked Basic
Checked Low Resource Simulation
Changed TimeOut to 120000 - my application will run normally for 2 minutes before the simulation takes effect.
Changed HeapAlloc to 100 - 100% chance of heap allocation error
Set Stacks to true - the stack will not be able to grow any larger
Save
Start my application
After 2 minutes my program could no longer allocate new memory and I was able to see how everything was handled.
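For context, a sketch of the kind of allocation site such a simulation exercises (the function name is made up; the point is that HeapAlloc starts returning NULL under Low Resource Simulation and every caller must cope):

#include <windows.h>

// Under fault injection, HeapAlloc begins returning NULL; well-behaved code
// checks for that and degrades gracefully instead of crashing.
bool TryProcessChunk(size_t bytes) {
    void* buf = HeapAlloc(GetProcessHeap(), 0, bytes);
    if (buf == nullptr)
        return false;  // report failure upward; don't dereference
    // ... work with buf ...
    HeapFree(GetProcessHeap(), 0, buf);
    return true;
}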

Depending on your applications, it might be easier to limit the memory the language interpreter uses. For example, with Java you can cap the amount of RAM the JVM will allocate (e.g. with the -Xmx flag).
Otherwise, it is possible to set it per process with the Windows API
SetProcessWorkingSetSize Function
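A minimal sketch of that call (note it constrains resident pages, not total allocations; without the QUOTA_LIMITS_HARDWS_MAX_ENABLE flag of the SetProcessWorkingSetSizeEx variant, the maximum is only a soft target):

#include <windows.h>

// Ask Windows to keep this process's working set between 1 MB and 64 MB.
// Pages beyond the maximum get trimmed, but the process can still commit
// more virtual memory.
int main() {
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             1 * 1024 * 1024,    // minimum working set, bytes
                             64 * 1024 * 1024);  // maximum working set, bytes
    return 0;
}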

No way to do this that I know of, although I'm very curious to read if anyone has a good answer. I have been thinking about adding something like this to one of the apps my company builds, but have found no good way to do it.
The one thing I can think of (although not directly on point) is that I believe you can limit the total memory usage for a COM+ application in Windows. It would require the app to be written to run in COM+, of course, but it's the closest way I know of.
The working set stuff is good (Job Objects also control working sets), but that's not total memory usage, only real memory usage (paged in) at any one time. It may work for what you want, but afaik it doesn't limit total allocated memory.

Per-process limits
From an end-user perspective, there are some helpful answers (and comments) at the superuser question “Is it possible to limit the memory usage of a particular process on Windows”, including discussions of how to set recursive quota limits on any or all of:
CPU assignment (quantity, affinity, NUMA groups),
CPU usage,
RAM usage (both ‘committed’ and ‘working set’), and
network usage,
… mostly via the built-in Windows ‘Job Objects’ system (as mentioned in Adam Mitz’s answer and Stephen Martin’s comment above), using:
the registry (for persistence, when desired) or
free tools, such as the open-source Process Governor.
(Note: nested Job Objects may not have been available under earlier versions of Windows, but the un-nested version appears to date back to Windows XP.)
Per-user limits
As far as overall per-user quotas:
??
It is possible that each user session is automatically assigned to a job group itself; if true, per-user limits could then be applied to that job group. Update: nope; Job Objects can only be nested at the time they are created or associated with a specific process, and in some cases a child Job Object is allowed to ‘break free’ from its parent and become independent, so they can’t facilitate ‘per-user’ resource limits.
(NTFS does support per-user file system storage quotas, though.)
Per-system limits
Besides simple BIOS or ‘energy profile’ restrictions:
VM hypervisor or Kubernetes-style container resource limit controls may be the most straightforward (in terms of end-user understandability, at least) option.
Footnotes, regarding per-process and other resource quotas / QoS for non-Windows systems:
‘Classic’ Mac OS (including ‘classic’ applications running on 2000s-era versions of Mac OS X): per-application memory limits could easily be set in the ‘Memory’ section of the Finder ‘Get Info’ window for the target program; because the system used a cooperative multitasking concurrency model, per-process CPU limits were impossible.
BSD: ? (probably has some overlap with Linux and non-proprietary macOS methods?)
macOS (aka ‘Mac OS X’): no user-facing interface; system support includes, depending on version, the ‘Multiprocessing Services API’, Grand Central Dispatch, POSIX threads / pthread, ‘operation objects’, and possibly others.
Linux: ‘Resource Manager’/limits.conf, control groups/‘cgroups’, process priority/‘niceness’/renice, others?
IBM z/OS and other mainframe-style systems: resource controls / allocation were built in from nearly the beginning

Related

How to debug copy-on-write?

We have some code that relies on extensive use of fork. We have started to hit performance problems, and one of our hypotheses is that a lot of time is wasted when copy-on-write kicks in for the forked processes.
Is there a way to detect specifically when and how copy-on-write happens, to get detailed insight into this process?
My platform is OS X, but more general information is also appreciated.
There are a few ways to get this info on OS X. If you're satisfied with just watching information about copy-on-write behavior from the command-line, you can use the vm_stat tool with an interval. E.g., vm_stat 0.5 will print full statistics twice per second. One of the columns is the number of copy-on-write faults.
If you'd like to gather specific information in a more detailed way, but still from outside the actual running process, you can use the Instruments application that comes with OS X. This includes a set of tools for gathering information about a running process, the most useful of which for your case are likely to be the VM Tracker, Virtual Memory Trace, or Shared Memory instruments. These capture lots of useful information over the lifetime of a process. The application is not super intuitive, but it will do what you need.
If you'd like detailed information in-process, I think you'll need to use the (poorly documented) VM statistics API. You can request that the kernel fill a vm_statistics struct using the host_statistics routine. For example, running this code:
#include <mach/mach.h>  // host_statistics(), vm_statistics_data_t

mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
vm_statistics_data_t vmstats;
kern_return_t kr = host_statistics(mach_host_self(), HOST_VM_INFO,
                                   (host_info_t) &vmstats, &count);
will fill the vmstats structure with information such as cow_faults, which gives the number of faults triggered by copy-on-write behavior. Check out the headers /usr/include/mach/vm_*, which declare the types and routines for gathering this information.
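As a hedged sketch of how you might use this in-process, sampling the counter before and after a fork-heavy section (note the counter is host-wide, not per-process, and error handling is omitted):

#include <mach/mach.h>
#include <cstdio>

// Machine-wide count of copy-on-write faults since boot.
static natural_t cow_faults_now() {
    mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
    vm_statistics_data_t vmstats;
    host_statistics(mach_host_self(), HOST_VM_INFO,
                    (host_info_t) &vmstats, &count);
    return vmstats.cow_faults;
}

int main() {
    natural_t before = cow_faults_now();
    // ... fork() and touch memory in the children here ...
    natural_t after = cow_faults_now();
    std::printf("COW faults during section: %u\n", after - before);
    return 0;
}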

Can cgroups really guarantee processes do not interfere with each other?

I'm new to cgroups and I'm trying to use them to control two C++ processes on my Linux server.
I set the mem_limit of each process to 1G, which means each can consume at most 1 GB of memory, right?
But I think cgroups do not guarantee real isolation like a VM; for example, one process can still read (or write) the memory of another process.
There's also competition between the two processes to grab free memory blocks, as cgroups do not allocate anything to them up front.
Am I right?
What about the case of cpu_set?
What's the difference between cgroups and VMs as far as isolation goes?
I Googled it but only got a lot of "docker vs vm" results, which is really not what I want.
Any tips from the implementation of cgroups would be really helpful.
First of all, you have misunderstood what cgroups are. They are not an isolation tool; they are a resource-limiting tool that can cap memory, CPU, and I/O consumption through knobs like mem_limit.
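For reference, a hedged sketch of what such a cap looks like on a cgroup-v2 system (mem_limit in the question corresponds to v1's memory.limit_in_bytes; the "demo" group directory is assumed to have been created already):

#include <fstream>
#include <string>
#include <unistd.h>

// Cap everything in the "demo" group at 1 GiB, then move this process in.
// Any workload exec'd from here on inherits the limit.
int main() {
    const std::string cg = "/sys/fs/cgroup/demo";
    std::ofstream(cg + "/memory.max")   << (1ULL << 30);  // 1 GiB hard limit
    std::ofstream(cg + "/cgroup.procs") << getpid();      // enroll ourselves
    // ... exec the real workload here ...
    return 0;
}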
However, note that each process has its own unique address space, so when process 1 is running on the CPU, process 2's page tables are not in use, and process 1 cannot read a process 2 variable by simply dereferencing a pointer. Virtual memory is already an isolation technique.
There are some ways (usually used by debuggers) to access another process's memory in Linux:
/proc/PID/mem. If you check the permissions on that file, you will see that only the same user or root may access it.
The process_vm_{readv,writev} system calls. They check whether the user has the CAP_SYS_PTRACE capability (see the sketch below).
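A hypothetical sketch of that second path (the peek helper is made up; the call fails with EPERM unless the caller owns the target process or holds CAP_SYS_PTRACE, which is exactly the check described above):

#include <sys/types.h>
#include <sys/uio.h>   // process_vm_readv
#include <cerrno>
#include <cstdio>
#include <cstring>

// Try to read len bytes from another process's address space.
bool peek(pid_t pid, void* remote_addr, void* local_buf, size_t len) {
    iovec local  = { local_buf,   len };
    iovec remote = { remote_addr, len };
    if (process_vm_readv(pid, &local, 1, &remote, 1, 0) < 0) {
        std::fprintf(stderr, "process_vm_readv: %s\n", std::strerror(errno));
        return false;  // typically EPERM without CAP_SYS_PTRACE
    }
    return true;
}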
So there are several options for forbidding processes to access each other's memory:
Run the processes as different users that do not have CAP_SYS_PTRACE. Android does that.
Use kernel namespaces - a process will not even know the other exists; protection is performed at the PID level. LXC uses this, and Docker probably does too.
Bare-metal virtualization: Xen, KVM, etc. Not only are the process page tables isolated, but so is the kernel itself.
IMHO (1) is quite enough and (3) is for the paranoid ;)

Limiting memory of V8 Context

I have a script server that runs arbitrary JavaScript code on our servers. At any given time multiple scripts can be running, and I would like to prevent one misbehaving script from eating up all the RAM on the machine. I could do this by having each script run in its own process and having an off-the-shelf monitoring tool watch the RAM usage of each process, killing and restarting the ones that get out of hand. I don't want to do this because I would like to avoid the cost of restarting the binary every time one of these scripts goes crazy. Is there a way in V8 to set a per-context/isolate memory limit that I can use to sandbox the running scripts?
It should be easy to do now:
context.EstimatedSize() to get the estimated size of the context
isolate.TerminateExecution() when the context goes beyond acceptable memory/CPU usage/whatever
To regain control when there is an infinite loop (or something else blocking, like a heavy CPU calculation), I think you could use isolate.RequestInterrupt()
A single process can run multiple isolates; if you have a 1:1 isolate-to-context ratio, you can easily:
restrict memory usage per isolate
get heap stats
See some examples in this commit:
https://github.com/discourse/mini_racer/commit/f7ec907547e9a6ea888b2587e4edee3766752dd3
In particular you have:
v8::HeapStatistics stats;
isolate->GetHeapStatistics(&stats);  // then inspect stats.used_heap_size(), etc.
There are also fancy features like memory allocation callbacks you can use.
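A hedged sketch of how those pieces can fit together on a reasonably recent V8 (API names reflect modern embedder builds, not necessarily the version this answer originally targeted; the 128 MB figure is arbitrary):

#include <v8.h>

// Near-heap-limit callback: instead of letting the isolate abort the whole
// process on OOM, terminate the runaway script and grant brief headroom so
// the unwind can complete.
static size_t OnNearHeapLimit(void* data, size_t current_limit, size_t /*initial*/) {
    static_cast<v8::Isolate*>(data)->TerminateExecution();
    return current_limit + 1 * 1024 * 1024;
}

v8::Isolate* MakeLimitedIsolate(v8::ArrayBuffer::Allocator* allocator) {
    v8::Isolate::CreateParams params;
    params.array_buffer_allocator = allocator;
    // Cap this isolate's heap at roughly 128 MB.
    params.constraints.ConfigureDefaultsFromHeapSize(0, 128 * 1024 * 1024);
    v8::Isolate* isolate = v8::Isolate::New(params);
    isolate->AddNearHeapLimitCallback(OnNearHeapLimit, isolate);
    return isolate;
}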
This is not reliably possible.
All JavaScript contexts in this process share the same object heap.
WebKit/Chromium tries some tricks to disable contexts after a context OOMs.
http://code.google.com/searchframe#OAMlx_jo-ck/src/third_party/WebKit/Source/WebCore/bindings/v8/V8Proxy.cpp&exact_package=chromium&q=V8Proxy&type=cs&l=361
Sources:
http://code.google.com/p/v8/source/browse/trunk/src/heap.h?r=11125&spec=svn11125#280
http://code.google.com/p/chromium/issues/detail?id=40521
http://code.google.com/p/chromium/issues/detail?id=81227

Help understanding Windows memory - "Working Set"

I've been tracking down a few memory leaks in my application. It's been a real pain, but I've finally tightened everything up. However, there's one bit of Windows memory management that is confusing me. Here is a printout of the app's memory usage over time...
Time       PrivateMemorySize64   WorkingSet64   (values in KB)
20:00:36   47480                 50144
20:01:06   47480                 50144
20:01:36   47480                 50144
20:02:06   47480                 149540
20:02:36   47480                 149540
20:03:06   47480                 149540
The working set jumped from 49 MB to 146 MB over a span of 30 seconds. This happened overnight while the application was basically doing nothing.
The working set (which is what Task Manager shows me) can apparently be influenced by other applications, such as debuggers (as I learned while looking for memory leaks). After reading the documentation on what the working set is, I still don't have a good understanding.
Any help is appreciated.
Update: Thanks to some links from responders as well as some additional searching, I have a better understanding of how a separate process can cause my process's working set to grow. Good to know that a spike in the working set is not necessarily an indication that your app is leaking... Further reason not to rely on Task Manager for memory evaluation :)
Helpful links:
A few words on memory usage or: working set vs. private working set
CyberNotes: Windows Memory Usage Explained
Simply put, the working set is the collection of memory pages currently owned by your process and not swapped out (i.e. in RAM). That is somewhat inaccurate, however; reality is a lot more complicated.
Windows maintains a minimum working set size and a maximum working set size for every process. The minimum working set is easy: it is what Windows will grant to every process (as long as it can, within physical limits).
The maximum working set is fuzzier. If your program uses more memory than fits within its quota, Windows will drop some pages. However, while they are no longer in your working set, these pages are not necessarily "gone".
Rather, those pages are removed from your working set and moved to the pool of available pages. As a consequence, if some other program needs more memory and no zeroed pages are left over, your pages will be cleared and assigned to a different process. When you access them, they will have to be fetched from the pagefile again, possibly evicting other pages if you are still above the maximum working set size.
However, if nobody asked for more memory in the meantime (or if all demands could be satisfied by pages that were unused anyway), then accessing one of those pages will simply make it "magically" reappear and kick out another page in its stead.
Thus, your process can have more pages in RAM than are actually in its working set, but it does not "officially" own them.
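Those per-process bounds can be inspected directly; a small sketch querying the current process (error handling omitted):

#include <windows.h>
#include <cstdio>

// Print the minimum/maximum working-set sizes Windows maintains for us.
int main() {
    SIZE_T minWS = 0, maxWS = 0;
    GetProcessWorkingSetSize(GetCurrentProcess(), &minWS, &maxWS);
    std::printf("min working set: %zu bytes, max working set: %zu bytes\n",
                minWS, maxWS);
    return 0;
}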
The resident set/working set is the portion of the virtual address space which is currently resident in physical memory and therefore isn't swapped out.

How to interpret Windows Task Manager?

I run Windows 7 RC1, which uses the same WTM as Vista. When I look at the processes, there are some columns whose differences I'm not sure about:
Memory - working set
Memory - private working set
Memory - commit size
Can anyone tell me what they are?
From the following article, under the section Types of Memory Usage:
There are two main types of memory usage: working set and private working set. The private working set is the amount of memory used by a process that cannot be shared among other processes, while working set includes the memory shared by other processes.
That may sound confusing, so let's try to simplify it a bit. Let's pretend that there are two kids who are coloring, and each kid has 5 of their own crayons. They decide to share some of their crayons so that they have more colors to choose from. When each child is asked how many crayons they used, both say 7, because each borrowed 2 of the other's crayons.
The point of that metaphor is that one might assume that there were a total of 14 crayons if they didn’t know that the two kids were sharing, but in reality there were only 10 crayons available. Here is the rundown:
Working Set: This includes all of the shared crayons, so the total would be 14.
Private Working Set: This includes only the crayons that each child owns, and doesn’t reflect how many were actually used in each picture. The total is therefore 10.
This is a really good analogy for how memory is measured. Many applications reuse code that is already on your system, because in the end that helps reduce overall memory consumption. If you are viewing the working set memory usage, you might get confused because all of your running processes might actually add up to more than the amount of RAM you have installed, which is the same problem we had with the crayon metaphor above. Naturally, the working set will always be at least as large as the private working set.
Working set:
The working set is the subset of a process's virtual pages that are resident in physical memory; this will be only a portion of the pages from that process.
Private working set:
The private working set is the amount of memory used by a process that cannot be shared with other processes.
Commit size:
Amount of virtual memory that is reserved for use by a process.
And at microsoft.com you can find more details about other memory types.
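To see those counters programmatically, a small sketch using the Win32 PSAPI (link against psapi.lib; PrivateUsage corresponds to the commit size):

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Query this process's working set and commit ("private") usage, in bytes.
int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    GetProcessMemoryInfo(GetCurrentProcess(),
                         reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                         sizeof(pmc));
    std::printf("Working set: %zu bytes\n", pmc.WorkingSetSize);
    std::printf("Commit size: %zu bytes\n", pmc.PrivateUsage);
    return 0;
}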
'Working Set' is the amount of memory that the process currently has in physical RAM. In other words, accessing any pages in the 'Working Set' will not cause a page fault since the page is in RAM.
As for the other two, I'm not 100% sure: probably 'Working Set' includes shareable memory, such as memory-mapped files, while 'Private Working Set' contains only pages that are private to the process and not shareable.
Have a look at this site and search for the speaker 'Dave Solomon'. There is an excellent webcast that he gave which explains Windows memory, and he mentions working sets, commit sizes, and other memory terms.
EDIT:
Those site links are indeed dead :(
Instead, you can search Google for
vimeo david solomon windows
Those same videos look to be available on Vimeo now, which is cool.
If you open the Resource Monitor from the WTM, mousing over the various column headings of the interesting process displays a pretty informative tooltip.
e.g.
Commit(KB): Amount of virtual memory reserved by the operating system for the process in KB.
etc.
This article at Microsoft seems to be the most detailed.
Edit Oct 2018: new link
