How to find if file is in file system cache in Windows?

I guess NTFS (the Windows file system) has some cache. Suppose I have a file which is frequently accessed (read-only). How can I check if this file is in the file system cache? Can I increase the file system cache size?

Check
http://blogs.technet.com/b/askperf/archive/2010/08/13/introduction-to-the-new-sysinternals-tool-rammap.aspx
You can use RamMap, which gives you a dedicated view of how the current system is caching files.
Also note that caching isn't done per file, but per block/page.

There is no direct way from user space to detect if a file has been cached (partially or completely). In a multithreaded/multiprocessing environment, once you have received this information, it is instantly out of date.
There is no "limit" to caching in Windows that can be adjusted (although my data is Windows 7 and prior versions). The cache manager simply uses the memory manager to place data into memory and get callbacks when physical memory needs to be reclaimed (say, by an application's demands). The memory manager trades off file cache against memory demands of processes.

Related

Why are some CSS, JS and image files loaded from the disk cache and others not?

I'm running a WP website with a cache plugin enabled.
The site is running slow, so I decided to check which elements take the most time to load.
Straight to F12 (Chrome DevTools) and from there to the Network tab.
What I see and don't understand is why some of the files are loaded from the disk cache and others are not.
Please see the attached image (column "Size")
So, if you know the answer guys, please share it.
Thank you!
Memory cache - stores and loads resources from Random Access Memory (RAM). It is fast because loading resources from RAM is quick. These resources persist until you close the browser or manually clear them.
Disk cache - stores and loads resources from disk and is persistent. It does not contact the web server over the network to get the data. A disk cache is usually included as part of the hard disk.
I guess the browser decides the type of cache storage based on the type of the resources or on how frequently they are used.
Sometimes we use assets or resources from other (third-party) sites; those contents are transferred over the network, and their size is denoted in bytes (B).
It seems that all resources are loaded from cache. The difference is that some resources are read from the disk cache, some from the memory cache, and the rest come back as 304.
ETag and Cache-Control decide whether a resource should be read from the local disk/memory cache or needs to be revalidated (304). If the resource has expired, Chrome sends a request to the server to check whether the file needs to be updated. The size shown for a 304 is just the size of the transferred response, not the size of your source file.
If the resource has not expired, Chrome reads it from the memory/disk cache and does not send any request to the server.
It is unclear exactly how the browser decides on the cache type.
According to some documentation and what we have observed, Chrome prefers to save CSS files into the disk cache and image/font/JS files into the memory cache.

Node express app disk requirement

I have an API (Express.js driven) that doesn't do any disk operations; it only reads from and writes to a DB. Would there be a difference if the machine used an SSD rather than a standard disk?
Does it influence performance? I ask because I believe require loads files only once, not on every request.

Monitoring I/O requests

One of my Railo web applications generates too many I/O requests.
Since it's hosted on an Amazon EC2 instance, that directly affects my bill badly because of EBS disk activity (hundreds of millions of operations).
How can I monitor I/O requests? The perfect tool would allow me to find which template/component makes intensive I/O.
I'm already using FusionReactor and that's great for profiling memory spaces and so on, but it doesn't have anything for I/O.
You could start by using the operating system's monitoring tools to see whether you have mainly reads or writes. The next step is looking at memory, even though it appears to be a disk I/O issue: maybe your servers are low on memory and thrashing the drives as they swap pages in and out of memory.
If you have not done so, turn on the template cache; this will stop Railo checking the file system on every page request (provided you have the memory).
If you have plenty of memory (both for your OS and for the JVM) and you have template caching on, start looking for your busy pages in FusionReactor and check for cffile, cfdirectory and other such tags in those pages. Good luck.
Also, query of queries is often a culprit in high disk I/O, because internally a database is used which, if I remember correctly, pages large result sets to disk.

Cache for MKMapView

I am writing an application which overlays results on an MKMapView. The thing is, the system may not be online all the time, and the number of tiles cached by default is limited; i.e. the cache file has a limited maximum size. I need to know:
Location of the cache file
Whether I can change the default maximum cache file size
If possible, how to save different cache files at runtime and load them as the user requires (on loss of network).
I am not sure how many of these ideas violate the license agreement, but since I don't intend to publish it in the App Store, I am not worried about rejection.

Flush disk write cache

When the policy for a disk in Windows XP and Vista is set to enable write caching on the hard disk, is there a way to flush a file that has just been written, and ensure that it has been committed to disk?
I want to do this programmatically in C++.
Closing the file does perform a flush at the application level, but not at the operating system level. If the power is removed from the PC after closing the file, but before the operating system has flushed the disk write cache, the file is lost, even though it was closed.
.NET FileStream.Flush() will NOT flush the Windows cache for that file's content; Flush() only flushes the .NET internal file buffer. In .NET 4.0, Microsoft fixed this by adding an optional parameter to Flush() which, if set to true, causes the Win32 FlushFileBuffers function to be called. In .NET 3.5 and below your only choice is to call FlushFileBuffers via P/Invoke. See the community comment on MSDN's FileStream.Flush page for how to do this.
You should not fix this at the time you close the file. Windows will cache, unless you open the file passing FILE_FLAG_WRITE_THROUGH to CreateFile().
You may also want to pass FILE_FLAG_NO_BUFFERING; this tells Windows not to keep a copy of the bytes in cache.
This is more efficient than FlushFileBuffers(), according to the CreateFile documentation on MSDN.
See also file buffering and file caching on MSDN.
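A minimal C++ sketch of this approach, assuming a placeholder path and keeping error handling to a bare minimum:

#include <windows.h>
#include <cstdio>

int main()
{
    // Open the file so that writes go straight through the OS cache to disk.
    // Optionally OR in FILE_FLAG_NO_BUFFERING as well, but then buffer
    // addresses, sizes and file offsets must be sector-aligned.
    HANDLE h = CreateFileW(
        L"C:\\example\\data.bin",                          // placeholder path
        GENERIC_WRITE,
        0,                                                 // no sharing
        nullptr,
        CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
        nullptr);
    if (h == INVALID_HANDLE_VALUE)
    {
        std::printf("CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    const char data[] = "hello";
    DWORD written = 0;
    // With FILE_FLAG_WRITE_THROUGH the data is sent through to the disk
    // before WriteFile returns, so no separate FlushFileBuffers call is
    // needed for this handle (the drive's own hardware cache is a separate
    // concern, as noted further down).
    WriteFile(h, data, sizeof(data) - 1, &written, nullptr);

    CloseHandle(h);
    return 0;
}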
You haven't specified the development environment, so:
.Net
IO streams have a .Flush method that does what you want.
Win32 API
There is the FlushFileBuffers call, which takes a file handle as argument.
EDIT (based on a comment from the OP): FlushFileBuffers does not need administrative privileges; it does only if the handle passed to it is a handle to a volume, not to a single file.
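A minimal Win32/C++ sketch of the FlushFileBuffers route, again with a placeholder path and minimal error handling:

#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE h = CreateFileW(L"C:\\example\\log.txt",        // placeholder path
                           GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    const char line[] = "important record\r\n";
    DWORD written = 0;
    WriteFile(h, line, sizeof(line) - 1, &written, nullptr);

    // Ask Windows to write the buffered data for this file handle to disk.
    // No special privileges are needed for a file handle; only flushing a
    // whole volume handle requires them.
    if (!FlushFileBuffers(h))
        std::printf("FlushFileBuffers failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}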
You should also note that your data might not get flushed to the physical disk even when you invoke a flush method of your framework's API.
Calling the flush method only tells the kernel to flush its pages to disk. However, if the disk's write cache is turned on, the drive is allowed to delay the actual writing indefinitely.
To ensure that your data reaches the physical layer, you have to turn off the write cache in your operating system. This most often comes with a performance penalty of up to one or two orders of magnitude when dealing with a lot of small I/O operations.
Battery-backed support (a UPS) or disks that accept commands to flush the disk write cache are other options for dealing with this problem.
From the Microsoft documentation, you would use _flushall and link in COMMODE.OBJ to ensure that all buffers are committed to disk.
See here: https://jeffpar.github.io/kbarchive/kb/066/Q66052/
When you initially open your file using fopen, include the "c" mode option as the LAST OPTION:
fopen( path, "wc") // w - write mode, c - allow immediate commit to disk
Then when you want to force a flush to disk, call
_flushall()
We made this call before calling
fclose()
We experienced the exact issue you described and this approach fixed it.
Note that this approach does NOT require administrative rights, which FlushFileBuffers does require, as others have mentioned.
From the above site:
"Microsoft C/C++ version 7.0 introduces the "c" mode option for the fopen()
function. When an application opens a file and specifies the "c" mode, the
run-time library writes the contents of the file buffer to disk when the
application calls the fflush() or _flushall() function. "
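Putting that together, a minimal MSVC-specific sketch of this approach (the path is a placeholder; linking COMMODE.OBJ makes commit-on-flush the default for all streams, as noted above):

#include <stdio.h>

int main(void)
{
    /* w = write mode, c = commit contents to disk on flush (MSVC extension) */
    FILE* f = fopen("C:\\example\\output.txt", "wc");      /* placeholder path */
    if (!f)
        return 1;

    fputs("data that must survive a power loss\n", f);

    _flushall();   /* flush all streams; with "c" mode this commits the file to disk */
    fclose(f);
    return 0;
}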
You can open/create the file with the FileOptions.WriteThrough flag, which causes writes to go directly to the disk, bypassing any intermediate caches.
E.g.
var file = File.Open(
    "1.txt",
    new FileStreamOptions
    {
        Mode = FileMode.Create,          // set Mode/Access as needed
        Access = FileAccess.Write,
        Options = FileOptions.WriteThrough
    });

// - OR -

var file = new FileStream(
    "1.txt",
    FileMode.Create,
    FileAccess.Write,
    FileShare.None,
    4096,
    FileOptions.WriteThrough);
