Memory mapping of files and system cache behavior in WinXP

Our application is memory intensive and reads a large number of disk files; the total load can be more than 3 GB.
A custom memory manager uses memory-mapped files to read this much data: the files are mapped into the process address space only when needed, which keeps process memory well under control. What we observe, however, is that with memory mapping the system cache keeps growing until it occupies all available physical memory, which slows down the entire system.
My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove the file buffering (using FILE_FLAG_NO_BUFFERING), but then read operations take a considerable amount of time and application performance suffers. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases?
I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining it would also be helpful.

I work on a file backup product, so we often run into a similar scenario: our own file access causes the cache manager to keep data around, which can make memory usage spike.
By default the Windows cache manager tries to be helpful by reading ahead and keeping file data around in case it's needed again.
There are several registry keys that let you tweak the cache behavior, and some of our customers have had good results with this.
XP is unusual in that it has some server capability, but by default it is optimized for desktop programs, not caching. You can enable System Cache Mode in XP, which sets aside more memory for caching. This might improve performance, or you may be doing this already and it's having a negative side effect! You can read about that here
I can't recommend a custom memory manager, but I do know that most heavyweight apps (Exchange, SQL Server) do their own caching. You can observe that by running Process Monitor.
If you want to completely prevent the cache manager from using memory to cache your files, you must disable both read and write caching:
FILE_FLAG_NO_BUFFERING and
FILE_FLAG_WRITE_THROUGH
There are other hints you can give the CM (random access, temporary file); read this document on Caching Behavior here
You can still get good read performance even if you disable caching, but you're going to have to emulate the cache manager's behavior by running your own background threads that do read-ahead.
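A background read-ahead of this sort might be sketched as follows (Python used for brevity; the chunk size, queue depth, and scratch file are placeholder choices, not the product's actual implementation):

```python
import os
import queue
import tempfile
import threading

CHUNK = 64 * 1024   # read-ahead unit; tune to your I/O pattern

def prefetch(path, depth=4):
    """Generator: a background thread reads ahead while the caller consumes."""
    q = queue.Queue(maxsize=depth)   # bounded queue limits memory held by read-ahead

    def reader():
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                q.put(chunk)
        q.put(None)                  # sentinel: end of file

    threading.Thread(target=reader, daemon=True).start()
    while True:
        chunk = q.get()
        if chunk is None:
            break
        yield chunk

# Demo with a scratch file of three chunks.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (3 * CHUNK))
os.close(fd)
total = sum(len(c) for c in prefetch(path))
print(total)  # 196608 (3 * 64 KiB)
os.remove(path)
```

The bounded queue is the important part: it caps how much memory the read-ahead can hold, which is exactly the control you lose when the system cache manager does the read-ahead for you.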
Also, I would recommend upgrading to a server-class OS; even Windows 2003 will give you more cache manager tuning options. And if you can move to Windows 7 / Server 2008, you will get even more performance improvement with the same physical resources, because of dynamic paged/non-paged pool sizing and working-set improvements. There is a nice article on that here

Type this into Notepad and save it as a .vbs file. Run it whenever you notice that system RAM is running low; the system cache gets cleared and the memory is returned to free RAM. I found it elsewhere on the net and am sharing it here in case it helps. Also, take care that the amount you free never exceeds half your actual RAM. So if you have 1 GB of RAM, start with the following line in your .vbs file (pick the one matching the amount you want to free):
FreeMem=Space(240000000)  ' to clear 512 MB of RAM
FreeMem=Space(120000000)  ' to clear 256 MB of RAM
FreeMem=Space(90000000)   ' to clear 128 MB of RAM
FreeMem=Space(48000000)   ' to clear 64 MB of RAM
FreeMem=Space(20000000)   ' to clear 52 MB of RAM

Related

Maximize memory for file cache or memory-based file system on Windows?

I have an application which needs to create many small files with maximum performance (less than 1% of them may be read later), and I want to avoid using the asynchronous file API to keep my code simple. The total size of the files written cannot be determined in advance, so I figured that to achieve maximum performance, I would need either:
1. Windows to utilize all unused RAM for cache (especially for file writes), with no regard for reliability or other issues. If I have more than 1 GB of unused RAM and I create one million 1 KB files, I expect Windows to report "DONE" immediately, even if it has written nothing to disk yet.
OR
2. A memory-based file system backed by a real on-disk file system. Whenever I write files, it should first write everything in memory only, and then update the on-disk file system in the background; no delay in synchronous calls unless there isn't enough free memory. Note this is different from tmpfs or RAM-disk implementations on Windows, since those require a fixed amount of memory and use the page file when there isn't enough RAM.
For option 1, I have tested VHD files; while they can offer as much as a 50% increase in total performance under some configurations, they still flush to disk and cause writes to wait unnecessarily. And it appears there is no way to disable journaling or further tweak the write-caching behavior.
For option 2, I have yet to find anything similar. Do such things exist?
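There doesn't seem to be an off-the-shelf Windows component for option 2, but the write-behind idea itself can be sketched at the application level. This is a minimal Python illustration under assumed names (a real implementation would need durability guarantees and error handling, which this deliberately omits):

```python
import os
import queue
import tempfile
import threading

class WriteBehindStore:
    """Synchronous in-memory writes; a background thread persists to disk later."""
    def __init__(self, directory):
        self.directory = directory
        self.mem = {}                       # name -> bytes, the authoritative copy
        self.q = queue.Queue()
        threading.Thread(target=self._flusher, daemon=True).start()

    def write(self, name, data):
        self.mem[name] = data               # returns immediately ("DONE")
        self.q.put(name)                    # disk write happens in the background

    def _flusher(self):
        while True:
            name = self.q.get()
            with open(os.path.join(self.directory, name), "wb") as f:
                f.write(self.mem[name])
            self.q.task_done()

    def sync(self):
        self.q.join()                       # wait until the backlog is on disk

store = WriteBehindStore(tempfile.mkdtemp())
for i in range(100):
    store.write(f"file{i}.dat", b"x" * 1024)   # returns without touching disk
store.sync()
ondisk = len(os.listdir(store.directory))
print(ondisk)  # 100
```

The caller sees in-memory latency on every write; the disk only gates `sync()`, which is the trade-off option 2 describes.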

APC cache fragmentation (endless dilemma)

It's a frequently asked question, I know. But I tried every suggested solution (apc.stat=0, increasing shared memory, etc.) without benefit.
Here's the screen with stats (you can see nginx and php5-fpm) and the parameters set in apc.ini:
APC is used for system and user-cache entries on multiple sites (8-9 WordPress sites and one with MediaWiki and SMF).
What would you suggest?
Each WordPress site is going to cache a healthy amount in the user cache. I've looked at this in depth myself, and the best 'guesstimate' I've found is: if you use the user cache in APC, keep fragmentation under 10%. This can sometimes mean reserving upwards of 10x the amount of memory you actually intend to use for caching, just to avoid fragmentation. Start where you are and keep increasing the allocated memory until fragmentation stays below 10% after the cache has run for a while.
BTW: the WordPress pages being cached are huge, so you'll probably need a lot of memory to avoid fragmentation.
Why 10% fragmentation? It's a bit of a black art, but I observed this is where performance starts to noticeably decline. I haven't found any good benchmarks (or run my own controlled tests), however.
The 10x figure may seem insane, but the root cause is that APC has no way to defragment other than a restart (which completely dumps the cache). Having a slab of 1 GB of memory when you only plan to use 100-200 MB gives a lot of space to fill without having to look for 'holes' to put things in. Think of the bad old FAT16/FAT32 disk performance under Windows 98: a constant need to manually defrag once the disk got over 50% full.
If you can't spare the extra memory, you might want to look at either memcached or plain old file caching for your user cache.
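"Plain old file caching" can be as little as hashing each key to a file name. Here is a rough, generic sketch (the class name and pickle serialization are illustrative choices, not any particular plugin's format):

```python
import hashlib
import os
import pickle
import tempfile

class FileCache:
    """Minimal disk-backed key/value cache: one file per entry."""
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def _path(self, key):
        # Hash the key so arbitrary strings map to safe file names.
        return os.path.join(self.directory, hashlib.sha256(key.encode()).hexdigest())

    def set(self, key, value):
        with open(self._path(key), "wb") as f:
            pickle.dump(value, f)

    def get(self, key, default=None):
        try:
            with open(self._path(key), "rb") as f:
                return pickle.load(f)
        except FileNotFoundError:
            return default

cache = FileCache(tempfile.mkdtemp())
cache.set("wp:front-page", {"html": "<h1>Hi</h1>"})
print(cache.get("wp:front-page"))  # {'html': '<h1>Hi</h1>'}
```

Unlike APC's shared-memory slab, a file cache never fragments; the OS page cache keeps hot entries in memory anyway, which is why this is a reasonable fallback.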

How much memory should a caching system use on Windows?

I'm developing a client/server application where the server holds large pieces of data, such as big images or video files, which are requested by the client, and I need to create an in-memory client-side caching system to hold a few of those large items to speed up the process. To be clear, each individual image or video is not that big, but the overall size of all of them can be really big.
But I'm faced with the "how much data should I cache" problem, and I wonder whether there are any golden rules on Windows about what strategy to adopt. The caching is done on the client; I do not need caching on the server.
Should I stay under x% of global memory usage at all times? And how much would that be? What happens if another program is launched and takes up a lot of memory; should I empty the cache?
Should I query how much free memory is available prior to caching and use a fixed percentage of that memory for my needs?
I hope I do not have to go there, but should I ask the user how much memory he is willing to allocate to my application? If so, how can I calculate the default value for that setting, and what about users who will never touch it?
Rather than creating your own caching algorithm, why don't you write the data to a file with the FILE_ATTRIBUTE_TEMPORARY attribute and make use of the client machine's own cache?
Although this approach appears to imply that you use a file, if there is memory available in the system then the file will never leave the cache and will remain in memory the whole time.
Some advantages:
You don't need to write any code.
The system cache takes account of all the other processes running. It would not be practical for you to take that on yourself.
On 64 bit Windows the system can use all the memory available to it for the cache. In a 32 bit Delphi process you are limited to the 32 bit address space.
Even if your cache is full and your files do get flushed to disk, local disk access is much faster than querying the database and then transmitting the files over the network.
It depends on what other software runs on the server. I would make it possible to configure it manually at first. Develop a system that can use a specific amount of memory; if you can, build it so that you can change that value while it is running.
Once you have those possibilities, you can do some tweaking to see what works best. I don't know any golden rules, but I'd figure you should be able to set a percentage of total memory, or of total available memory, with a specific minimum amount of memory kept free for the system at all times. If you reserve a minimum of, say, 500 MB for the server OS, you can use the rest, or 90% of the rest, for your cache. But those numbers depend on the version of the OS and the other applications running on the server.
I think it's best to make the numbers configurable from the outside and create a management tool that lets you set the values manually first. Then, once you have found out what works best, you can derive formulas to calculate those values and integrate them into your management tool. This tool should not be an integral part of the cache program itself (which will probably be a service without a GUI anyway).
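A byte-budgeted LRU cache with an externally configured limit (say, 90% of what is left after reserving 500 MB for the OS) might be sketched like this; the class and parameter names are illustrative, and the tiny budget is just for the demo:

```python
from collections import OrderedDict

class BoundedCache:
    """LRU cache that evicts by total byte size, not entry count.
    max_bytes would come from configuration, e.g. 90% of (RAM - 500 MB)."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.items = OrderedDict()   # key -> bytes, least recently used first

    def put(self, key, data):
        if key in self.items:
            self.used -= len(self.items.pop(key))
        self.items[key] = data
        self.used += len(data)
        while self.used > self.max_bytes:        # evict least recently used
            _, old = self.items.popitem(last=False)
            self.used -= len(old)

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)              # mark as recently used
        return self.items[key]

cache = BoundedCache(max_bytes=100)
cache.put("a", b"x" * 60)
cache.put("b", b"y" * 60)        # total would be 120 > 100, so "a" is evicted
print(cache.get("a"), len(cache.get("b")))  # None 60
```

Because `max_bytes` is a plain constructor argument, the management tool described above can set or recompute it without touching the cache logic.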
Questions:
Can one image be requested by multiple clients? Or can the same image be requested multiple times within a short interval?
How short is the interval?
Is the network speed really high? Higher than the speed of the hard drive? If you have a normal network, then the hard drive will be able to read the files from disk and deliver them over the network in real time, especially since Windows already does some good caching, so the most recent files are already in the cache.
Is the main purpose of the computer running the server app to run the server, or is it a normal computer also used for other tasks? In other words, is it a dedicated server or a normal workstation/desktop?
"but should I ask the user how much memory he is willing to allocate to my application?"
I would definitely go there!
If the user thinks the server application is not an important application, he will probably give it low priority (a small cache). If he thinks it is the most important running app, he will let it allocate all the RAM it needs at the expense of other, less important applications.
Just ship the application with that setting defaulted to an acceptable value (something like x% of the total amount of RAM). I would use around 70% of total RAM if the main purpose of the computer is to host this server application, and about 40-50% if its purpose is as a general-use computer.
A server application usually needs resources set aside for its own use by its administrator. I would not worry about other applications' behaviour; I would care about being a "polite" application, so the memory cache size and so on should be configurable by the administrator, who is the only one who knows how to configure his systems properly (usually...).
Default values should in any case take into account how much memory is available overall, especially on 32-bit systems with less than 4 GB of memory (as long as Delphi delivers only 32-bit apps), to leave something free for the operating system and avoid too-frequent swapping. Asking the user to select it at setup is also advisable.
If the application is the only one running on a server, a value between 40% and 75% of available memory could be OK (depending on how much memory is needed beyond the cache), but again, ask the user, because it's almost impossible to know what other running applications may need. You can also have a minimum and a maximum cache size: start by allocating the lower value, grow it when and if needed, and shrink it if necessary.
On a 32-bit system this is the kind of memory usage that could benefit from using PAE/AWE to access more than 3 GB of memory.
Update: you can also monitor cache hits/misses and calculate which cache size would best fit the user's needs (it could be too small, but too large as well), and then advise the user accordingly.
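Hit/miss monitoring of the kind suggested in the update can be as simple as two counters. A generic sketch, not tied to any particular framework:

```python
class MeteredCache:
    """Dictionary cache that tracks its hit ratio so an admin tool can
    advise whether the configured cache size fits the workload."""
    def __init__(self):
        self.data = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1
            return self.data[key]
        self.misses += 1
        return None

    def put(self, key, value):
        self.data[key] = value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = MeteredCache()
cache.put("img1", b"...")
cache.get("img1")   # hit
cache.get("img2")   # miss
print(cache.hit_ratio())  # 0.5
```

A persistently low hit ratio suggests the cache is too small (or the workload has no locality); a ratio near 1.0 with lots of unused budget suggests it could safely shrink.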
To be honest, the questions you ask would not be my main concern. I would be more concerned with how effective my cache would be. If your files are really that big, how many can you hold in the cache? And if your client server app has many users, what are the chances that your cache will actually cache something someone else will use?
It might be worth doing an analysis before you burn too much time on the fine details.

windows page file policy

Generally, my question is why Windows is constantly thrashing my HDD. I'm talking about Windows 7, to be exact (and of course indexing is turned off).
I have a feeling that this is related to the page file, because this phenomenon disappears when the page file is turned off.
More specifically, my question is why Windows ever makes use of the page file when the RAM is not yet exhausted.
I have two possible explanations:
Perhaps Windows tends to save the memory pages of every running process to the page file "in the background", even before this is really needed; then, when free RAM is eventually needed, it can be reclaimed more quickly.
In kernel mode paging is not always possible. Drivers that run at high IRQL can use only physical RAM. Hence the OS should have some reserve of RAM that drivers may wish to allocate dynamically.
Anyway, I'm talking about a reserve of gigabytes of free RAM, yet Windows is thrashing the HDD.
Does anybody know the exact Windows policy regarding the page file, and whether it can be adjusted?
Of course I can turn the page file off, but I do want to use it, just only when RAM is adequately exhausted.
BTW, I thought about buying an SSD as a system drive, but I'm afraid it would be dead within a year under this kind of abuse.
There are multiple reasons for disk activity in Windows 7: indexing (turned off here), scheduled tasks (collection of usage information, check disk, etc.), disk defrag, antivirus, SuperFetch, paging and many more. You are correct to suspect that paging can happen even when there is sufficient RAM, and the reason you suggest is correct: it is called pre-emptive paging, and it prepares the system for spikes in memory demand. Once you have large amounts of RAM this seems unnecessary, and it is, 99% of the time; the remaining 1% covers the situations when you actually run out of memory and paging is the only option.
There is a way to get the best of both worlds: you can practically eliminate paging and still have a page file by setting its minimum size to something very small. If you are fine with mini kernel dumps being written when the system crashes, you can set the page file's minimum size to just 16 MB; if you want full kernel dumps, the minimum is 1 GB, which limits paging to a lesser degree. Set the maximum page file size to 4 GB or whatever you want.
This way Windows can always expand the page file and use it if it REALLY needs to (when it is out of virtual memory), but it should not use it under normal conditions, probably never. You still keep the safety net of the page file, though.

Memory mapped files causes low physical memory

I have 2 GB of RAM and am running a memory-intensive application; the system reaches a low-available-physical-memory state and stops responding to user actions, like opening an application or invoking a menu.
How do I trigger or tell the system to swap memory out to the pagefile and free physical memory?
I'm using Windows XP.
If I run the same application on a 4 GB RAM machine, this is not the case; system response is good. Once available physical memory gets tight, that system automatically swaps to the pagefile and frees physical memory, so it is not as bad as the 2 GB system.
To overcome this problem (on the 2 GB machine) I attempted to use memory-mapped files for the large datasets the application allocates. In this case the virtual memory of the application (process) is fine, but the system cache grows large and the same problem as above appears: physical memory runs low.
Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why???!!! :(
Any help is appreciated.
Thanks.
If your data access pattern for using the memory mapped file is sequential, you might get slightly better page recycling by specifying the FILE_FLAG_SEQUENTIAL_SCAN flag when opening the underlying file. If your data pattern accesses the mapped file in random order, this won't help.
You should consider decreasing the size of your map view. That's where all the memory is actually consumed and cached. Since it appears that you need to handle files that are larger than available contiguous free physical memory, you can probably do a better job of memory management than the virtual memory page swapper since you know more about how you're using the memory than the virtual memory manager does. If at all possible, try to adjust your design so that you can operate on portions of the large file using a smaller view.
Even if you can't get rid of the need for full random access across the entire range of the underlying file, it might still be beneficial to tear down and recreate the view as needed to move the view to the section of the file that the next operation needs to access. If your data access patterns tend to cluster around parts of the file before moving on, then you won't need to move the view as often. You'll take a hit to tear down and recreate the view object, but since tearing down the view also releases all the cached pages associated with the view, it seems likely you'd see a net gain in performance because the smaller view significantly reduces memory pressure and page swapping system wide. Try setting the size of the view based on a portion of the installed system RAM and move the view around as needed by your file processing. The larger the view, the less you'll need to move it around, but the more RAM it will consume potentially impacting system responsiveness.
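For illustration, here is the moving-view idea using Python's cross-platform mmap module; the Win32 equivalent would use CreateFileMapping/MapViewOfFile with the same constraint that view offsets must be multiples of the allocation granularity. The window size and helper name are arbitrary choices for the sketch:

```python
import mmap
import os
import tempfile

GRAN = mmap.ALLOCATIONGRANULARITY    # view offsets must be multiples of this
VIEW = 4 * GRAN                      # window size; in practice a fraction of RAM

def read_at(path, offset, length):
    """Map a small window around the requested range instead of the whole file."""
    base = (offset // GRAN) * GRAN                  # round down to a mappable offset
    span = min(VIEW, os.path.getsize(path) - base)  # don't map past end of file
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), span, offset=base, access=mmap.ACCESS_READ) as view:
            delta = offset - base
            return view[delta:delta + length]

# Demo: a 256 KiB scratch file with a repeating 0..255 byte pattern.
fd, path = tempfile.mkstemp()
os.write(fd, bytes(range(256)) * 1024)
os.close(fd)
data = read_at(path, 70000, 4)
print(data)  # b'pqrs' (bytes 112..115 of the pattern)
os.remove(path)
```

Tearing down the view on every call is the worst case; a real implementation would keep the current window and only remap when an access falls outside it, exactly as the answer above suggests.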
As I think you are hinting in your post, the slow response time is probably at least partially due to delays in the system while the OS writes the contents of memory to the pagefile to make room for other processes in physical memory.
The obvious solution (and possibly not practical) is to use less memory in your application. I'll assume that is not an option or at least not a simple option. The alternative is to try to proactively flush data to disk to continually keep available physical memory for other applications to run. You can find the total memory on the machine with GlobalMemoryStatusEx. And GetProcessMemoryInfo will return current information about your own application's memory usage. Since you say you are using a memory mapped file, you may need to account for that in addition. For example, I believe the PageFileUsage information returned from that API will not include information about your own memory mapped file.
If your application is monitoring the usage, you may be able to use FlushViewOfFile to proactively force data to disk from memory. There is also an API (EmptyWorkingSet) that I think attempts to write as many dirty pages to disk as possible, but that seems like it would very likely hurt performance of your own application significantly. Although, it could be useful in a situation where you know your application is going into some kind of idle state.
And, finally, one other API that might be useful is SetProcessWorkingSetSizeEx. You might consider using this API to give a hint on an upper limit for your application's working set size. This might help preserve more memory for other applications.
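As a cross-platform illustration of proactively flushing a mapped view, Python's mmap.flush plays the role of FlushViewOfFile here (the scratch file and page-sized mapping are demo choices):

```python
import mmap
import os
import tempfile

# Create a one-page scratch file and map it writable.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * mmap.PAGESIZE)
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), mmap.PAGESIZE) as view:
        view[0:5] = b"dirty"
        view.flush()   # analogue of FlushViewOfFile: write dirty pages to disk now

# Read back through a normal handle to confirm the flush reached the file.
with open(path, "rb") as f:
    head = f.read(5)
print(head)  # b'dirty'
os.remove(path)
```

Flushing early converts a future, unpredictable write-back (done by the memory manager under pressure) into a controlled one done at a time the application chooses, such as an idle period.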
Edit: This is another obvious statement, but I forgot to mention it earlier. It also may not be practical for you, but it sounds like one of the best things you might do considering that you are running into 32-bit limitations is to build your application as 64-bit and run it on a 64-bit OS (and throw a little bit more memory at the machine).
Well, it sounds like your program needs more than 2GB of working set.
Modern operating systems are designed to use most of the RAM for something at all times, keeping only a fairly small amount free so that it can be immediately handed out to processes that need more. The rest is used to hold memory pages and cached disk blocks that have been used recently; whatever hasn't been used recently is flushed back to disk to replenish the pool of free pages. In short, there isn't supposed to be much free physical memory.
The principal difference between using a normal memory allocation and a memory-mapped file is where the data gets stored when it must be paged out of memory. It doesn't necessarily have any effect on when the memory will be paged out, and it will have little effect on the time it takes to page it out.
The real problem you are seeing is probably not that you have too little free physical memory, but that the paging rate is too high.
My suggestion would be to attempt to reduce the amount of storage needed by your program, and see if you can increase the locality of reference to reduce the amount of paging needed.
