When does an operating system wipe out the memory of a process? (Windows)

When a process terminates, successfully or abnormally, when does the OS decide to wipe out the memory (data, code, etc.) allocated to that process: at exit, or when it wants to allocate memory to a new process?
And is this memory-reclamation procedure the same on all operating systems (Windows XP, Windows 7, Linux, Mac)?
I understand that the page table maps the virtual addresses of a process to the actual physical addresses in memory.
Thanks.

How an OS reclaims process resources can (and generally does) vary by OS. On the Windows side of things, the NT-derived OSes behave similarly, so there should be little difference between Windows XP and Windows 7. Note that asking about 'memory' is an over-simplification in this context, because there are different types of memory. For example, a typical Windows application will have stack memory, heap memory (sometimes multiple heaps), instruction/static memory, and perhaps shared memory. Most of this memory is solely owned by the process, and Windows will reclaim it on process termination (even abnormal termination).
However, shared memory can (and often does) have multiple owners; it is tied to a Windows handle (a kernel-level object that can potentially be referenced from multiple processes). Handles have a reference count, and the associated resource is reclaimed if the reference count goes to zero. This means that shared memory can outlive a process that references it. Also, it is possible for a process to 'leak' a handle, and for the handle to never be reclaimed. It is the programmer's responsibility to make sure such handles are properly closed and don't leak; the possibility of abnormal termination complicates this responsibility.
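The handle reference-count behaviour has a close POSIX analogue that is easy to demonstrate: a file, like a Windows section object, survives as long as any open descriptor ("handle") references it, even after its name is removed. A minimal sketch, assuming a Linux/macOS system; the file path is arbitrary:

```python
import os
import tempfile

# Create a file and open a second, independent descriptor ("handle") to it.
path = os.path.join(tempfile.gettempdir(), "handle_demo")  # hypothetical path
fd1 = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
fd2 = os.open(path, os.O_RDWR)

os.unlink(path)               # remove the name; two descriptors still reference it
os.write(fd1, b"still alive")
os.close(fd1)                 # refcount drops to 1: the data remains reachable

os.lseek(fd2, 0, os.SEEK_SET)
data = os.read(fd2, 11)
print(data)                   # b'still alive'
os.close(fd2)                 # refcount 0: the kernel reclaims the storage
```

Forgetting the `unlink` (or the final `close`) is the POSIX equivalent of leaking a handle.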
On a side note, when Windows 'reclaims' memory, it simply means that memory is available for future allocations to other processes, etc. The actual 1s and 0s are usually going to sit there up until the OS allocates the memory and the new owner of the memory actively over-writes it. So 'reclaiming' doesn't mean the memory is immediately zeroed out or anything like that; scrubbing the memory in this manner is inefficient and often unnecessary. If you are asking out of security concerns, you shouldn't rely on the OS; you'll need to scrub the memory yourself before your process releases it back to the OS.
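A minimal sketch of the scrub-it-yourself idea, in Python for illustration: only mutable buffers such as `bytearray` can be overwritten in place (immutable `bytes`/`str` cannot), and the runtime may hold other copies of the data elsewhere, so this is best-effort rather than a guarantee:

```python
secret = bytearray(b"p4ssw0rd")   # hypothetical sensitive data

# ... use the secret ...

for i in range(len(secret)):      # overwrite in place before releasing it
    secret[i] = 0

scrubbed = bytes(secret)          # snapshot, just to demonstrate the result
del secret                        # now hand the buffer back to the allocator
print(scrubbed)                   # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```

In C you would use something like a volatile-qualified overwrite loop (or `SecureZeroMemory` on Windows) so the compiler cannot optimize the scrub away.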
If you want to know more about how modern Windows OSes handle memory, and don't mind doing some digging, the Windows API documentation on MSDN has a lot of information on the subject, but it is a little bit scattered. Good places to start would probably be Windows handles, and the load/unload library/process calls. Programming Applications for Microsoft Windows (Richter) probably has some decent information on this, if I remember right, but I don't have a copy on hand right now to check.
Hopefully, someone more knowledgeable about Linux internals can address that side of the question. This is OS-specific stuff, so there are likely to be differences. It might be worth noting that pre-NT Windows (e.g. Windows 95, 98, etc.) had a completely different model of process memory. Those differences tended to make it harder for the OS to reclaim memory in the case of abnormal termination; some users found a need to restart the OS frequently if they were running unstable applications, in order to clean up the accumulated memory leaks.

In Linux, resources are normally freed upon termination of the process. You can read about how Linux handles process termination here: http://www.informit.com/articles/article.aspx?p=370047&seqNum=4
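The reclaim-at-termination behaviour is observable from user space. A small sketch, Linux-specific because it relies on `/proc` and `fork`:

```python
import os

pid = os.fork()
if pid == 0:
    junk = bytearray(10 * 1024 * 1024)  # child allocates ~10 MB and never frees it
    os._exit(0)                          # terminate abruptly, no cleanup code runs

# Until the parent reaps it, the child (or its zombie) is still listed.
assert os.path.exists(f"/proc/{pid}")
os.waitpid(pid, 0)                       # reap it: the kernel now drops everything
print("child reclaimed:", not os.path.exists(f"/proc/{pid}"))
```

The child never calls `free`/`del`, yet after it is reaped its entire address space is gone; the kernel, not the application, tears down the page tables.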
There is also the OOM killer, which can kick in in extreme low-memory situations. I know in the embedded Android world things have been happening around this, but I haven't really kept on top of it; LWN.net probably has some coverage.

Related

Is there any mechanism where the kernel part of an OS in memory may also be swapped?

I'm recently learning the part of I/O buffering of operating system and according to the book I use,
When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation.
I understand how this method is able to avoid the swapping problem in non-buffering situation. But is it assumed that the OS buffering created for the process will never be swapped out?
To extend my question, I was wondering if there is any mechanism where the kernel portion of an OS in memory may also be swapped?
It is common for operating systems to page out parts of the kernel. The kernel has to define what parts may be paged out and which may not be paged out. For example, typically, there will be separate memory allocators for paged pool and non-paged pool.
Note that on most processors the page table format is the same for system pages as for user pages, thus supporting paging of the kernel.
Determining which parts of the kernel may be paged out is part of the system design and is done up front. You cannot page out the system interrupt table. You can, for the most part, page out system service code, but you cannot page out interrupt-handling code.
I was wondering if there is any mechanism where the kernel portion of an OS in memory may also be swapped?
IIRC, some old versions of AIX might have been able to swap (i.e. to page out) some kernel code. And probably older OSes too (perhaps even Multics).
However, it is practically useless today, because kernel memory is a tiny fraction of the RAM on current (desktop & server) computers. The total kernel memory is only a few dozen megabytes, while most computers have many gigabytes of RAM.
BTW, microkernel systems (e.g. GNU Hurd) run their servers as ordinary, pageable user processes.
See Operating Systems: Three Easy Pieces

How to solve the problems of lacking memory

Currently I'm reading books on operating systems design. However, I'm not quite clear on how operating systems behave when the number of free pages in memory is smaller than the working set of a process.
From the OS side, will it simply prevent the process from being loaded into memory?
And from the developer side, what can they do to improve the situation?
I suspect your confusion here is related to the meaning of the term "working set." That term has a different meaning in theoretical computer science and in operating system implementation.
In the former, a working set is the amount of memory a process requires at a given time.
In the latter, a working set is the amount of physical memory a process has at a given time (some systems may use a different definition).
A process gets what the operating system gives it, and the operating system can only give what it has available. If a process needs more physical memory, the OS either gives it free memory or takes memory from another process.
When a process starts up, its working set will be zero (or close to zero). A process only gets physical memory when it causes a page fault. If you monitor a program, you typically see a lot of page faults when the program starts, then the number starts to level off (unless the program uses a lot of virtual memory relative to the physical memory of the system).
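You can watch this happen with the fault counters the kernel keeps per process. A sketch, assuming Linux (or another POSIX system exposing `ru_minflt`); note that transparent huge pages can make the observed count much smaller than one fault per 4 KB page:

```python
import mmap
import resource

def minor_faults():
    # Soft page faults serviced without disk I/O, for this process.
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

buf = mmap.mmap(-1, 8 * 1024 * 1024)    # 8 MB anonymous mapping: no pages yet
mapped = minor_faults()

for off in range(0, len(buf), mmap.PAGESIZE):
    buf[off] = 1                         # first touch of each page faults it in

touched = minor_faults()
npages = len(buf) // mmap.PAGESIZE
print(f"saw {touched - mapped} minor faults while touching {npages} pages")
```

The mapping itself is nearly free; it is the first touch of each page that grows the working set, exactly the start-up fault burst described above.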

Which interpreters manage the memory of their threads inside their own process?

I'm just wondering which interpreters manage the memory of their threads inside their own process.
With VMware Virtual Machines, when memory is tight, VMware will inflate a "balloon" inside the VM. This allows the guest OS to "intelligently choose" what memory to swap to disk, allowing memory from that OS to be used by other VMs.
One issue is Java, where the OS cannot "see/understand" the memory inside the JRE; when the balloon inflates, the guest OS will effectively swap out memory at random, and could easily swap out critical JRE structures rather than being able to intelligently choose which bits of memory to evict.
My question is, what other interpreters behave in the same way as Java around memory management?
Is Microsoft .NET similar in this regard? Any other interpreters?
I am not sure you can selectively swap out memory used by certain threads: the main difference between a thread and a process is that threads share a common memory space. The reason to use threads over processes is that they are deemed more lightweight, precisely because you give up the process isolation managed by the OS.
So what you're really asking is, whether any interpreter implements its own algorithm for swapping data to disk. As far as I know, no interpreter designed in recent times does this - given the price of RAM nowadays, it's not a good use of engineering resources. (There is a school of thought, to which I subscribe, that says we should now consider throwing out operating system level disk swapping as well.)
Relational database systems do their own disk swapping of course, partly because they were designed when RAM was more expensive, and partly because they still sometimes deal with unusually large volumes of data.
And, my memory is rusty on this one, but I could almost swear at least one of the old MUD systems also implemented its own swapping code; a big MUD could run to tens of megabytes, and in those days might have had to run in only a few megabytes of memory.

How does the operating system know how much memory my app is using? (And why doesn't it do garbage collection?)

When my task manager (top, ps, taskmgr.exe, or Finder) says that a process is using XXX KB of memory, what exactly is it counting, and how does it get updated?
In terms of memory allocation, does an application written in C++ "appear" different to an operating system from an application that runs as a virtual machine (managed code like .NET or Java)?
And finally, if memory is so transparent - why is garbage collection not a function-of or service-provided-by the operating system?
As it turns out, what I was really interested in asking is why the operating system could not do garbage collection and defragment memory space, which I see as a step above "simply" allocating address space to processes.
These answers help a lot! Thanks!
This is a big topic that I can't hope to adequately answer in a single answer here. I recommend picking up a copy of Windows Internals, it's an invaluable resource. Eric Lippert had a recent blog post that is a good description of how you can view memory allocated by the OS.
Memory that a process is using is basically just address space that is reserved by the operating system that may be backed by physical memory, the page file, or a file. This is the same whether it is a managed application or a native application. When the process exits, the operating system deletes the memory that it had allocated for it - the virtual address space is simply deleted and the page file or physical memory backings are free for other processes to use. This is all the OS really maintains - mappings of address space to some physical resource. The mappings can shift as processes demand more memory or are idle - physical memory contents can be shifted to disk and vice versa by the OS to meet demand.
What a process is using according to those tools can actually mean one of several things - it can be total address space allocated, total memory allocated (page file + physical memory) or memory a process is actually using that is resident in memory. Task Manager has a separate column for each of these possibilities.
The OS can't do garbage collection since it has no insight into what that memory actually contains - it just sees allocated pages of memory, it doesn't see objects which may or may not be referenced.
Whereas the OS handles allocations at the virtual-address level, within the process itself there are other memory managers that take these large, page-sized chunks and break them up into something useful for the application. Windows will return memory allocated on 64 KB boundaries, but the heap manager then breaks it up into smaller chunks for each individual allocation the program makes via new. In .NET applications, the CLR allocates new objects on the garbage-collected heap, and when that heap reaches its limits, it performs a garbage collection.
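The same split between coarse OS-level allocation and a finer-grained in-process allocator exists in most runtimes. A sketch using Python's `mmap` and its small-object allocator to show the two granularities; the numbers are illustrative, not Windows' 64 KB:

```python
import mmap
import sys

# One whole page obtained (indirectly) from the OS.
page = mmap.mmap(-1, mmap.PAGESIZE)
print("OS hands out memory in units of", mmap.PAGESIZE, "bytes")

# A tiny object served from the runtime's own heap, not a fresh OS request.
small = object()
print("the runtime's allocator serves objects of", sys.getsizeof(small), "bytes")

page.close()
```

Thousands of such small objects fit in one OS page, which is exactly why runtimes interpose their own heap manager instead of asking the kernel for every allocation.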
I can't speak to your question about the differences in how the memory appears in C++ vs. virtual machines, etc., but I can say that applications are typically given a certain memory range to use upon initialization. Then, if the application ever requires more, it will request it from the operating system, and the operating system will (generally) grant it. There are many implementations of this: in some operating systems, other applications' memory is moved away so as to give yours a larger contiguous block, and in others, your application gets various chunks of memory. There may even be some virtual memory handling involved. It's all up to the abstracted implementation. In any case, the memory is still treated as contiguous from within your program; the operating system will handle that much at least.
With regard to garbage collection, the operating system knows the bounds of your memory, but not what is inside. Furthermore, even if it did look at the memory used by your application, it would have no idea what blocks are used by out-of-scope variables and such that the GC is looking for.
The primary difference is application management. Microsoft distinguishes this as Managed and Unmanaged. When objects are allocated in memory they are stored at a specific address. This is true for both managed and unmanaged applications. In the managed world the "address" is wrapped in an object handle or "reference".
When memory is garbage collected by a VM, it can safely suspend the application, move objects around in memory and update all the "references" to point to their new locations in memory.
In a Win32 style app, pointers are pointers. There's no way for the OS to know if it's an object, an arbitrary block of data, a plain-old 32-bit value, etc. So it can't make any inferences about pointers between objects, so it can't move them around in memory or update pointers between objects.
Because of the way references are handled, the OS can't take over the GC process; instead it's left up to the VM to manage the memory used by the application. For that reason, VM applications appear exactly the same to the OS: they simply request blocks of memory for use, and the OS gives it to them. When the VM performs GC and compacts its memory, it is able to free memory back to the OS for use by another app.

Is Virtual Memory still relevant in today's world of inexpensive RAM? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Virtual memory was introduced to help run more programs with limited memory. But in today's environment of inexpensive RAM, is it still relevant?
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve the performance and program response times?
Is there any essential requirement for virtual memory in windows apart from running more programs as stated above? Something windows internal not known to us.
Some pedantry: virtual memory is not just the pagefile. The term encompasses a whole range of techniques that give the program the illusion that it has one single contiguous address space, some of which is the program's code, some of which is data, and some of which are DLLs or memory-mapped files.
So to your lead-in question: yes, virtual memory is required. It's what makes modern OS's work.
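One of those techniques, memory-mapped files, is easy to see in action: the file itself (not the pagefile) backs the pages, and an ordinary memory write becomes file content. A sketch, assuming a POSIX-like system with a writable temp directory:

```python
import mmap
import os
import tempfile

# Create a one-page scratch file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * mmap.PAGESIZE)
    path = f.name

# Map it into our address space and write through memory.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    m[:5] = b"hello"       # a plain memory store...
    m.flush()
    m.close()

with open(path, "rb") as f:
    data = f.read(5)       # ...became file content
print(data)                # b'hello'
os.unlink(path)
```

No pagefile is involved here at all, which is the point: the pagefile is just one possible backing store among several that the virtual memory system manages.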
Don't disable virtual memory. 2GB is not nearly enough to even consider this. Regardless, you should always keep virtual memory on even if you do have enough since it will only ever be used when you actually need it. Much better to be safe than sorry since NOT having it active means you simply hit a wall, while having it active means your computer starts swapping to the hard drive but continues to run.
Yes, because it's the basis of all on-demand paging that occurs in a modern operating system, not just Windows.
Windows will always use all of your memory, if not for applications then for caching whatever you read from your hard drive. Because if that memory is not used, then you're throwing your investment in memory away. Basically, Windows uses your RAM as a big fat cache to your hard drives. And this happens all the time, as the relevant pages are only brought into main memory when you address the content of that page.
The question is really what is the use of a pagefile considering how much memory modern computers have and what's going on under the hood in the OS.
It's common for the Windows task manager to show not much physical memory being used while you're nevertheless getting many page faults. Win32 will never allocate all of its physical memory; it always saves some for new resource needs. With a big pagefile versus a small pagefile, Win32 will be slower to allocate physical memory to a process.
For a few days now I've been using a very small pagefile (200 MB, fixed) in Vista with 3 GB of addressable physical memory. I have had no crashes or problems, though I haven't tried things like large video editing or many different processes open at once. I wouldn't recommend running with no pagefile, since then the OS can never shuffle pages around in physical memory, which leads to the development of holes. A large pagefile is a fail-safe for people who wouldn't know how to manually increase the pagefile if a low-memory warning pops up or the OS crashes.
Some points:
The kernel will use some of the physical memory, and this will be shared through VM mapping with all other processes; other processes occupy the remaining physical memory. VM makes each process see a 4 GB address space, with the OS occupying the upper 2 GB. Each process will need much less than 4 GB of physical memory; this amount is its committed memory requirement. When programming, a malloc or new will reserve memory but not commit it; things like the first write to the memory commit it. Some memory is immediately committed by the OS for each process.
Your question is really about using a page file, and not virtual memory, as kdgregory said. Probably the most important use for virtual memory is so that the OS can protect one process's memory from another process's memory, while still giving each process the illusion of a contiguous, flat virtual address space. The actual physical addresses can and will become fragmented, but the virtual addresses will appear contiguous.
Yes, virtual memory is vital. The page file, maybe not.
Grrr. Disk space is probably always going to be cheaper than RAM. One of my lab computers has 512 MB of RAM. That used to be enough when I got it, but now it has slowed to a crawl from swapping, and I need to put more RAM in it. I am not running more programs now than I was then, but they have all gotten more bloated, and they often spawn more "daemon" programs that just sit there doing nothing but waiting for some event and using up memory. I look at my process list and the "in-memory" column for the file explorer is 40 MB. For Firefox it's 162 MB. Java's update scheduler jusched.exe uses another 3.6 MB. And that's just the physical memory; these numbers don't include swap space.
So it's really important to save the quicker, more expensive memory for what can't be swapped out. I can spare tens of GB on my hard drive for swap space.
Memory is seen as cheap enough that the OS and many programs don't try to optimize any more. On the one hand it's great because it makes programs more maintainable and debuggable and quicker to develop. But I hate having to keep putting in more RAM into my computer.
A good explanation at
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
Virtual memory is much more than simply an extension of RAM. In reality, virtual memory is a system that virtualizes access to physical memory. Applications are presented with a consistent environment that is completely independent of RAM size. This offers a number of important advantages quite apart from the increased memory availability. Virtual memory is an integral part of the OS and cannot possibly be disabled.
The pagefile is NOT virtual memory. Many sources have claimed this, including some Microsoft articles. But it is wrong. You can disable the pagefile (not recommended) but this will not disable virtual memory.
Virtual memory has been used in large systems for some 40 years now, and it is not going away anytime soon. The advantages are just too great. If virtual memory were eliminated, all current 32-bit applications (and 64-bit as well) would become obsolete.
Virtual memory is a safety net for situations when there is not enough RAM available for all running application. This was very common some time ago and today when you can have large amounts of system RAM it is less so.
Some say to leave the page file alone and let it be managed by Windows. Some people say that even if you have a lot of RAM, keeping a big pagefile cannot possibly hurt, because it will not be used. That is not true, since Windows does pre-emptive paging to prepare for spikes of memory demand. If that demand never comes, this is just wasted HDD activity, and we all know that the HDD is the slowest component of any system. Pre-emptive paging with big enough RAM is just pointless, and the only thing it does is slow down any other disk activity that happens at the same time, not to mention additional disk wear. Plus, a big page file means gigabytes of locked disk space.
A lot of people point to Mark Russinovich's article to back up their strong belief that the page file should not be disabled under any circumstances, and that so many clever people at Microsoft have thought it through so thoroughly that we, little developers, should never question the default Windows policy on page file size. But even Russinovich himself writes:
Set the paging file minimum to be that value (Peak Commit Charge) minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for).
So if you have a large amount of RAM and your peak commit charge is never more than 50% of your RAM, even when you open all your apps at once and then some, there is no need to have a page file at all. In those situations, 99.99% of the time you will never need more memory than your RAM.
Now, I am not advocating disabling the page file, but having one the size of your RAM or more is just a waste of space and unnecessary activity that can slow something else down. The page file gives you a safety net in those rare (with plenty of RAM) situations when the system does need more memory, preventing it from running out of memory, which would most likely make your system unstable and unusable.
The only real need for a page file is kernel dumps. If you need full kernel dumps, you need at least 400 MB of paging file; but if you are happy with minidumps, the minimum is 16 MB. So to have the best of both worlds, which is
virtually no page file
safety net of virtual memory
I would suggest configuring Windows for mini kernel dumps, setting the minimum page file size to 16 MB and the maximum to whatever you want. This way the page file would be practically unused, but would automatically expand after the first out-of-memory error to prevent your system from becoming unusable. If you do hit even one out-of-memory issue, you should of course reconsider your minimum size. If you really want to be safe, make the page file minimum size 1 GB. For servers, though, you should be more careful.
Unfortunately, it is still needed, because the Windows operating system has a tendency to 'overcache'.
Plus, as stated above, 2 GB isn't really enough to consider turning it off. Heck, I probably wouldn't turn it off until I had 8 GB or more.
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve the performance and program response times?
I'm not totally sure about other platforms, but I had a Linux machine where the swap space had been accidentally disabled. When a process used all available memory, the machine basically froze for 5 minutes, the load average went to absurd numbers, and the kernel OOM killer kicked in and terminated several processes. Re-enabling swap fixed this entirely.
I never experienced any unnecessary swapping to disk; it only happened when I used all the available memory. Modern OSes (even 5-10 year old Linux distros) deal with swap space quite intelligently, and only use it when required.
You can probably get by without swap space, since it's quite rare for a single process to reach 4 GB of memory usage, and with a 64-bit OS and, say, 8 GB of RAM it's even rarer. But there's really no point disabling swap space: you don't gain much (if anything), and when you run out of physical memory without it, bad things happen.
Basically, any half-decent OS should only use disk swap (or virtual memory) when required. Disabling swap only stops the OS from being able to fall back on it, which causes the OOM killer to strike (and thus data loss when processes are terminated).
