I am running a user-mode program at normal priority. My program does a brute-force search over an NP-hard problem and, as a result, uses up a lot of memory, which eventually spills into the swap file.
Then my mouse freezes up, and it takes forever for Task Manager to open and let me end the process.
What I want to know is how I can stop my Windows operating system from completely locking up from this, even though only 1 of my 2 cores is being used.
Edit:
Thanks for the replies.
I know that making it use less memory will help, but it just doesn't make sense to me that the whole OS should lock up.
The obvious answer is "use less memory". When your app uses up all the available memory, the OS has to page the Task Manager (etc.) out to make room for your app. When you switch programs, the OS has to page the other programs back in (as they are needed).
Disk reads are slower than memory reads, so everything appears to be going slower.
If you want to avoid this, have your app manage its own memory, or use a better algorithm than brute force. (There are genetic algorithms, simulated annealing, etc.)
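As a sketch of that last suggestion, here is a minimal simulated-annealing loop for a toy one-dimensional minimisation problem. The cost function, cooling schedule, and parameters are illustrative assumptions, not anything from the question:

```cpp
#include <cmath>
#include <random>

// Toy cost function with its minimum at x = 3 (an assumed stand-in for
// whatever the real search problem scores).
double cost(double x) { return (x - 3.0) * (x - 3.0); }

// Simulated annealing: random local moves, always accepting improvements
// and occasionally accepting worse moves, with that tolerance shrinking
// as the "temperature" cools.
double anneal(double start, int steps, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> step(-1.0, 1.0);
    std::uniform_real_distribution<double> accept(0.0, 1.0);
    double x = start;
    double temperature = 1.0;
    for (int i = 0; i < steps; ++i) {
        double candidate = x + step(rng);           // random neighbour
        double delta = cost(candidate) - cost(x);   // change in cost
        // Accept improvements unconditionally; accept worse moves with a
        // probability that decays as the temperature drops.
        if (delta < 0 || accept(rng) < std::exp(-delta / temperature))
            x = candidate;
        temperature *= 0.995;                       // geometric cooling
    }
    return x;
}
```

Unlike brute force, this explores only a tiny fraction of the search space, so its working set stays small regardless of problem size.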
The problem is that when another program (e.g. explorer.exe) is going to execute, all of its code and memory has been swapped out. To make room for the other program Windows has to first write data that your program is using to disk, then load up the other program's memory. Every new page of code that is executed in the other program requires disk access, causing it to run slowly.
I don't know the access pattern of your program, but I'm guessing it touches all of its memory pages a lot in a random fashion, which makes the problem worse because as soon as Windows evicts a memory page from your program, suddenly you need it again and Windows has to find some other page to give the same treatment.
To give other processes more RAM to live in, you can use SetProcessWorkingSetSize to reduce the maximum amount of RAM that your program may use. Of course this will make your program run more slowly because it has to do more swapping.
Another alternative you could try is to add more drives to the system, and distribute the swap files over those. You may have a dual-core CPU, but you have only a single drive. Distributing the swap file over multiple drives allows Windows to balance work across them (although I don't have first-hand experience of how well it does this).
I don't think there's a programming answer to this question, aside from "restructure your app to use less memory." The swapfile problem is most likely due to the bottleneck in accessing the disk, especially if you're using an IDE HDD or a highly fragmented swapfile.
It's a bit extreme, but you could always minimise your swap file so you don't get all the disk thrashing, and your program isn't allowed to allocate much virtual memory. Under Control Panel / Advanced / Advanced tab / Performance / Virtual memory, set the page file to a custom size and enter a value of 2 MB (the smallest allowed on XP). When an allocation fails, you should get an exception and be able to exit gracefully. It doesn't quite fix your problem, it just speeds it up ;)
Another thing worth considering: if you are on a 32-bit platform, port to a 64-bit system and get a box with much more addressable RAM.
Related
Suppose I open Notepad (not necessarily, just suppose) and write a 6 GB text file (again, suppose). I have no running processes other than Notepad itself, and the memory assigned to user processes has a limit of less than 6 GB. My disk space is sufficient, though.
What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how. Does virtual memory actually get involved? Either way, can you please explain what happens from an OS point of view?
Thanks
From the memory point of view, Notepad allocates a 6 GB buffer in memory to store the text you're seeing. The process consists of a data segment (which includes the buffer above, but not only that) and a code segment (the Notepad native code), so the total process space is going to be larger than 6 GB.
Now, it's all virtual memory as far as the process is concerned (yes, it is involved). If I understand your case right, the process won't fit into physical memory, so it's going to crash due to insufficient memory.
the memory assigned to the user processes has a limit of less than 6 GB.
If this is a hard limit enforced by the operating system, it may at its own discretion kill the process with some error message. It may also do anything else it wants, depending on its implementation. This part of the answer disregards virtual memory or any other way of swapping RAM to disk.
My disk space is sufficient though. What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how.
At this point, when your question starts involving the disk, we can start talking about virtual memory and swapping. If virtual memory is involved and the 6 GB limit is on RAM usage, not total virtual memory usage, parts of the file can be moved to disk. This could be parts of the file currently out of view on screen, or similar. The OS then manages which parts of the (more than 6 GB of) data are available in RAM, and swaps data in and out depending on what the program needs (i.e. where in the file you are working).
Does virtual memory actually get involved?
That depends on whether it is enabled in the OS in question, and how it is configured.
Yes, a lot of this depends on the OS in question, its implementation, and how it handles cases like this. If the OS is poorly written, it may even crash itself.
Before I give you a precise answer, let me explain a few things.
Open the Linux System Monitor or the Windows Task Manager, then launch some heavy software, like a game, Android Studio, IntelliJ, etc.
Go to the memory visualization tab. You will notice that each application (process) consumes a certain amount of memory.
Some machines, if not most, support virtual memory.
The concept is to set aside a certain amount of hard disk space as a backup plan: if an application (process) consumes a lot of memory but is not active at the moment, its pages get moved from main memory to this disk area, giving priority to the tasks that are currently busy.
Memory swapped out this way is slower than main memory, since it is located on the hard disk; however, it is still faster than re-reading the data from its original files on disk.
When launching an application, not all of it is loaded into memory; only the files that are required at that particular time are. That is why you can play a 60 GB game on a machine that has 4 GB of RAM.
To answer your question:
If you launch software that consumes all the memory resources of your machine, your machine will freeze. You will even hear its cooling system get louder and faster.
I hope I clarified this well.
I understand that delete returns memory to the heap that was allocated off the heap, but what is the point? Computers have plenty of memory, don't they? And all of the memory is returned as soon as you "X" out of the program.
Example:
Consider a server that allocates a Packet object for each packet it receives (this is bad design, for the sake of the example).
A server, by nature, is intended to never shut down. If you never delete the thousands of Packet objects your server handles per second, your system is going to get swamped and crash within minutes.
Another example:
Consider a video game that allocates particles for special effects every time a new explosion is created (and never deletes them). In a game like StarCraft (or other recent ones), after a few minutes of hilarity and destruction (and hundreds of thousands of particles), lag will be so huge that your game will turn into a PowerPoint slideshow, effectively making your player unhappy.
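The particle example can be sketched like this; the `Particle` type and its live-object counter are invented purely to make the leak visible:

```cpp
#include <memory>

// Hypothetical particle type. A static counter tracks how many objects
// are alive so the leak can be observed directly.
struct Particle {
    static int live;
    Particle()  { ++live; }
    ~Particle() { --live; }
    double x = 0, y = 0;
};
int Particle::live = 0;

// Leaky version: every "explosion" allocates and forgets. The objects
// stay alive until the process exits.
void spawn_leaky(int n) {
    for (int i = 0; i < n; ++i) new Particle();  // never deleted
}

// Fixed version: std::unique_ptr deletes each particle automatically
// when it goes out of scope at the end of the loop body.
void spawn_fixed(int n) {
    for (int i = 0; i < n; ++i) {
        auto p = std::make_unique<Particle>();
    }  // p destroyed here; the memory is returned to the heap
}
```

Run `spawn_leaky` frame after frame and the live count only grows; the fixed version returns every allocation before the next frame starts.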
Not all programs exit quickly.
Some applications may run for hours, days or longer. Daemons may be designed to run without cease. Programs can easily consume more memory over their lifetime than available on the machine.
In addition, not all programs run in isolation. Most need to share resources with other applications.
There are a lot of reasons why you should manage your memory usage, as well as any other computer resources you use:
What might start off as a lightweight program could soon become more complex; depending on your design, its areas of memory consumption may grow exponentially.
Remember you are sharing memory resources with other programs. Being a good neighbour allows other processes to use the memory you free up, and helps to keep the entire system stable.
You don't know how long your program might run for. Some people hibernate their session (or never shut their computer down) and might keep your program running for years.
There are many other reasons; I suggest researching memory allocation for more details on the do's and don'ts.
I see your point that computers have lots of memory, but you are wrong: as an engineer you have to create programs that use computer resources properly.
Imagine you made a program that runs the whole time the computer is on. It sometimes creates objects/variables with "new". After some time you don't need them anymore, but you don't delete them. Such a situation occurs from time to time, and each time you take more RAM out of stock. After a while the user has to terminate your program and launch it again. That is not terrible, but it is not comfortable either; what is more, your program may take a while to load. Because of this, the user suffers from your silly decision.
Another thing: when you use "new" to create an object, you call its constructor, and "delete" calls its destructor. Let's say you need to open some file, and the destructor closes it and makes it accessible to other processes again; in that case you would steal not only memory but also files from other processes.
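A minimal sketch of that point, using a hypothetical `FileHolder` class whose destructor releases the file handle:

```cpp
#include <cstdio>

// Hypothetical RAII wrapper: the constructor acquires a file handle,
// the destructor releases it. If the object is created with `new` and
// never deleted, the destructor never runs and the file stays open,
// holding the OS handle away from other processes.
class FileHolder {
    std::FILE* f_;
public:
    explicit FileHolder(const char* path) : f_(std::fopen(path, "w")) {}
    ~FileHolder() { if (f_) std::fclose(f_); }  // frees the OS handle
    bool open() const { return f_ != nullptr; }
};

bool write_and_close(const char* path) {
    FileHolder h(path);   // constructor opens the file
    return h.open();
}                         // destructor closes it here, automatically
```

With stack allocation (as here) the destructor is guaranteed to run; with a leaked `new FileHolder(...)` it never would.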
If you don't want to call "delete" yourself, you can use shared pointers, which free the object automatically through reference counting.
std::shared_ptr can be found in the STL. It has one disadvantage: Windows XP SP2 and older do not support it, so if you want to create something for the public you can use Boost, which also has boost::shared_ptr. To use Boost you need to download it and configure your development environment to use it.
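A small illustration of how `std::shared_ptr` reference counting frees the object; the `Widget` type and its counter are invented for the demo:

```cpp
#include <cassert>
#include <memory>

// Hypothetical type with a static counter so we can observe when the
// destructor actually runs.
struct Widget {
    static int live;
    Widget()  { ++live; }
    ~Widget() { --live; }
};
int Widget::live = 0;

int demo() {
    auto a = std::make_shared<Widget>();   // reference count == 1
    {
        auto b = a;                        // count == 2, same object
        assert(a.use_count() == 2);
    }                                      // b destroyed: count drops to 1
    assert(a.use_count() == 1);
    return Widget::live;                   // object is still alive here
}
```

When the last `shared_ptr` owning the object is destroyed, the count reaches zero and the destructor runs, with no explicit `delete` anywhere.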
I'm just wondering which interpreters manage the memory of their threads inside their own process.
With VMware Virtual Machines, when memory is tight, VMware will inflate a "balloon" inside the VM. This allows the guest OS to "intelligently choose" what memory to swap to disk, allowing memory from that OS to be used by other VMs.
One issue is Java, where the OS can not "see/understand" the memory inside the JRE, and when the balloon inflates, the guest OS will effectively randomly swap out memory to disk and could easily swap out critical JRE functions rather than being able to intelligently choose which bits of memory to swap out.
My question is, what other interpreters behave in the same way as Java around memory management?
Is Microsoft .NET similar in this regard? Any other interpreters?
I am not sure you can selectively swap out memory used by certain threads, seeing that the main difference between a thread and a process is that they share a common memory space, and the reason to use threads over processes is that they are deemed more lightweight, precisely because you are giving up on process isolation managed by the OS.
So what you're really asking is, whether any interpreter implements its own algorithm for swapping data to disk. As far as I know, no interpreter designed in recent times does this - given the price of RAM nowadays, it's not a good use of engineering resources. (There is a school of thought, to which I subscribe, that says we should now consider throwing out operating system level disk swapping as well.)
Relational database systems do their own disk swapping of course, partly because they were designed when RAM was more expensive, and partly because they still sometimes deal with unusually large volumes of data.
And, my memory is rusty on this one, but I could almost swear at least one of the old MUD systems also implemented its own swapping code; a big MUD could run to tens of megabytes, and in those days might have had to run in only a few megabytes of memory.
Generally my question is why Windows is constantly thrashing my HDD. I'm talking about Windows 7 to be exact (and of course indexing is turned off).
I have a feeling that this is related to the page file, because this phenomenon disappears when the page file is turned off.
More specifically, my question is why Windows ever makes use of the page file when the amount of RAM is not exhausted yet.
I have two possible explanations:
perhaps Windows tends to save memory pages of every running process to the page file "in the background", even before this is really needed. Then, when free RAM is eventually needed, it can be acquired more quickly.
In kernel mode paging is not always possible. Drivers that run at high IRQL can use only physical RAM. Hence the OS should have some reserve of RAM that drivers may wish to allocate dynamically.
Anyway, I'm talking about a reserve of gigabytes of free RAM, yet Windows is still thrashing the HDD.
Does anybody know what the exact Windows policy regarding the page file is, and whether it can be adjusted?
Of course I can turn the page file off, but I do want to use it, just only when RAM is actually exhausted.
BTW, I thought about buying an SSD as a system drive, but I'm afraid it'll be dead within a year under this kind of abuse.
There are multiple reasons for disk activity in Windows 7: indexing (turned off here), scheduled tasks (collection of usage information, check disk, etc.), disk defrag, antivirus, SuperFetch, paging and many more. You are correct to suspect that paging can happen even when there is sufficient RAM, and the reason you give is correct. It is called pre-emptive paging, and it prepares the system for spikes of memory demand. Once you have large amounts of RAM this seems unnecessary, and it is, 99% of the time. The other 1% are all those situations when you actually run out of memory and paging is the only option.
There is a solution to get the best of both worlds. You can practically eliminate paging and still have a page file by setting its minimum size to something very small. If you are fine with mini kernel dumps being written when the system crashes, you can set the page file minimum size to just 16 MB. If you want full kernel dumps, the minimum is 1 GB, but that would limit paging to a lesser degree. Set the maximum page file size to 4 GB or whatever you want.
This way Windows can always expand the page file and use it if it REALLY needs to (when out of virtual memory), but it should not use it under normal conditions. Probably never. You still keep the safety net of the page file, though.
Virtual memory was introduced to help run more programs with limited memory. But in today's environment of inexpensive RAM, is it still relevant?
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve performance and program response times?
Is there any essential requirement for virtual memory in Windows apart from running more programs as stated above? Something Windows-internal not known to us?
Some pedantry: virtual memory is not just the pagefile. The term encompasses a whole range of techniques that give the program the illusion that it has one single contiguous address space, some of which is the program's code, some of which is data, and some of which are DLLs or memory-mapped files.
So to your lead-in question: yes, virtual memory is required. It's what makes modern OS's work.
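To make the "more than just the pagefile" point concrete, here is a small memory-mapped file example using the POSIX `mmap` API (on Windows the same facility is exposed through `CreateFileMapping`/`MapViewOfFile`); the file name is arbitrary:

```cpp
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a file into the process's virtual address space, write to it
// through an ordinary pointer, and read it back. The OS pages the file
// in and out on demand: the same virtual-memory machinery that backs
// the page file.
bool roundtrip(const char* path) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return false;
    if (ftruncate(fd, 4096) != 0) { close(fd); return false; }

    void* m = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { close(fd); return false; }
    char* p = static_cast<char*>(m);

    std::strcpy(p, "hello");                  // write through the mapping
    bool ok = std::strcmp(p, "hello") == 0;   // read back through it

    munmap(m, 4096);
    close(fd);
    return ok;
}
```

No explicit read() or write() call touches the file contents; the virtual memory system translates the pointer accesses into page-level disk I/O.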
Don't disable virtual memory. 2GB is not nearly enough to even consider this. Regardless, you should always keep virtual memory on even if you do have enough since it will only ever be used when you actually need it. Much better to be safe than sorry since NOT having it active means you simply hit a wall, while having it active means your computer starts swapping to the hard drive but continues to run.
Yes, because it's the basis of all on-demand paging that occurs in a modern operating system, not just Windows.
Windows will always use all of your memory, if not for applications then for caching whatever you read from your hard drive. Because if that memory is not used, then you're throwing your investment in memory away. Basically, Windows uses your RAM as a big fat cache to your hard drives. And this happens all the time, as the relevant pages are only brought into main memory when you address the content of that page.
The question is really what is the use of a pagefile considering how much memory modern computers have and what's going on under the hood in the OS.
It's common for the Windows Task Manager to show not much physical memory being used, while you're still getting many page faults. Win32 will never allocate all of its physical memory; it always saves some for new resource needs. With a big page file versus a small page file, Win32 will be slower to allocate physical memory to a process.
For a few days now I've been using a very small page file (200 MB, fixed) in Vista with 3 GB of addressable physical memory. I have had no crashes or problems. I haven't tried things like large video editing or many different processes open at once. I wouldn't recommend having no page file at all, since the OS then can never shuffle pages around in physical memory, leading to the development of holes. A large page file is a fail-safe for people who wouldn't know how to manually increase the page file if a low-memory warning pops up or the OS crashes.
Some points:
The kernel will use some of the physical memory, and this will be shared through VM mapping with all other processes. Other processes will be in the remaining physical memory. VM makes each process see a 4 GB memory space, with the OS occupying the upper 2 GB. Each process will need much less than 4 GB of physical memory; this amount is its committed memory requirement. When programming, a malloc or new will reserve memory but not commit it. Things like the first write to the memory will commit it. Some memory is immediately committed by the OS for each process.
Your question is really about using a page file, and not virtual memory, as kdgregory said. Probably the most important use of virtual memory is so that the OS can protect one process's memory from another process's memory, while still giving each process the illusion of a contiguous, flat virtual address space. The actual physical addresses can and will become fragmented, but the virtual addresses will appear contiguous.
Yes, virtual memory is vital. The page file, maybe not.
Grrr. Disk space is probably always going to be cheaper than RAM. One of my lab computers has 512 MB of RAM. That used to be enough when I got it, but now it has slowed to a crawl from swapping, and I need to put more RAM in it. I am not running more programs now than I was then, but they have all gotten more bloated, and they often spawn more "daemon" programs that just sit there doing nothing but wait for some event and use up memory. I look at my process list and the "in-memory" column for the file explorer is 40 MB. For Firefox it's 162 MB. Java's "update scheduler" jusched.exe uses another 3.6 MB. And that's just the physical memory; these numbers don't include the swap space.
So it's really important to save the quicker, more expensive memory for what can't be swapped out. I can spare tens of GB on my hard drive for swap space.
Memory is seen as cheap enough that the OS and many programs don't try to optimize any more. On the one hand it's great because it makes programs more maintainable and debuggable and quicker to develop. But I hate having to keep putting in more RAM into my computer.
A good explanation at
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
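That sizing rule can be written as a few lines of arithmetic. A sketch, where the crash-dump floor used when the difference goes negative is an assumed parameter (16 MB corresponds to mini dumps):

```cpp
#include <cstdint>

// Result of the sizing rule, in megabytes.
struct PagefileSize { std::int64_t min_mb, max_mb; };

// minimum = peak commit charge minus installed RAM; if that is
// negative, fall back to the size required by your crash-dump setting.
// maximum = double the minimum, for breathing room.
PagefileSize size_pagefile(std::int64_t peak_commit_mb,
                           std::int64_t ram_mb,
                           std::int64_t dump_floor_mb = 16) {
    std::int64_t min_mb = peak_commit_mb - ram_mb;
    if (min_mb < 0) min_mb = dump_floor_mb;  // negative: use dump floor
    return { min_mb, 2 * min_mb };
}
```

For example, a 12 GB peak commit charge on an 8 GB machine gives a 4 GB minimum and 8 GB maximum, while a machine whose peak commit never exceeds its RAM falls back to the dump floor.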
Virtual memory is much more than simply an extension of RAM. In reality, virtual memory is a system that virtualizes access to physical memory. Applications are presented with a consistent environment that is completely independent of RAM size. This offers a number of important advantages quite apart from the increased memory availability. Virtual memory is an integral part of the OS and cannot possibly be disabled.
The pagefile is NOT virtual memory. Many sources have claimed this, including some Microsoft articles. But it is wrong. You can disable the pagefile (not recommended) but this will not disable virtual memory.
Virtual memory has been used in large systems for some 40 years now, and it is not going away anytime soon. The advantages are just too great. If virtual memory were eliminated, all current 32-bit applications (and 64-bit as well) would become obsolete.
Larry Miller
Microsoft MCSA
Virtual memory is a safety net for situations when there is not enough RAM available for all running applications. This was very common some time ago, and today, when you can have large amounts of system RAM, it is less so.
Some say to leave the page file alone and let it be managed by Windows. Some people say that even if you have a lot of RAM, keeping a big page file cannot possibly hurt, because it will not be used. That is not true, since Windows does pre-emptive paging to prepare for spikes of memory demand. If that demand never comes, this is just wasted HDD activity, and we all know that the HDD is the slowest component of any system. Pre-emptive paging with big enough RAM is just pointless, and the only thing it does is slow down any other disk activity that happens at the same time. Not to mention additional disk wear. Plus, a big page file means gigabytes of locked disk space.
A lot of people point to Mark Russinovich's article to back up their strong belief that the page file should not be disabled under any circumstances, and that so many clever people at Microsoft have thought it through so thoroughly that we, little developers, should never question the default Windows policy on page file size. But even Russinovich himself writes:
Set the paging file minimum to be that value (Peak Commit Charge) minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for).
So if you have a large amount of RAM and your peak commit charge is never more than 50% of your RAM, even when you open all your apps at once and then some, there is no need to have a page file at all. In those situations, 99.99% of the time you will never need more memory than your RAM.
Now, I am not advocating disabling the page file, but having one the size of your RAM or more is just a waste of space and unnecessary activity that can slow down something else. The page file gives you a safety net in those rare (with plenty of RAM) situations when the system does need more memory, and prevents it from running out of memory, which would most likely make your system unstable and unusable.
The only real need for a page file is kernel dumps. If you need full kernel dumps, you need at least 400 MB of paging file, but if you are happy with mini dumps, the minimum is 16 MB. So, to have the best of both worlds, which is
virtually no page file
the safety net of virtual memory
I would suggest configuring Windows for mini kernel dumps, setting the minimum page file size to 16 MB, and the maximum to whatever you want. This way the page file would be practically unused, but would automatically expand after the first out-of-memory error to prevent your system from becoming unusable. If you do hit an out-of-memory issue, you should of course reconsider your minimum size. If you really want to be safe, make the page file minimum size 1 GB. For servers, though, you should be more careful.
Unfortunately, it is still needed, because the Windows operating system has a tendency to "overcache".
Plus, as stated above, 2 GB isn't really enough to consider turning it off. Heck, I probably wouldn't turn it off until I had 8 GB or more.
G-Man
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve the performance and program response times?
I'm not totally sure about other platforms, but I had a Linux machine where the swap space had been accidentally disabled. When a process used all the available memory, the machine basically froze for 5 minutes, the load average went to absurd numbers, and the kernel OOM killer kicked in and terminated several processes. Re-enabling swap fixed this entirely.
I never experienced any unnecessary swapping to disk; it only happened when I used all the available memory. Modern OSes (even 5-10 year old Linux distros) deal with swap space quite intelligently, and only use it when required.
You can probably get by without swap space, since it's quite rare for a single process to reach 4 GB of memory usage. With a 64-bit OS and, say, 8 GB of RAM it's even more rare. But there's really no point disabling swap space: you don't gain much (if anything), and when you run out of physical memory without it, bad things happen.
Basically, any half-decent OS should only use disk swap (or virtual memory) when required. Disabling swap only stops the OS from being able to fall back on it, which causes the OOM killer to strike (and thus data loss when processes are terminated).