I have a Windows host where, according to RAMMap, almost all memory is in mapped files. I'm trying to find out which file causes such a leak. All available guides suggest using the File Summary tab to see the connection between files and mapped-file memory. But there is no single file that occupies such an amount of mapped-file memory.
Is there a way to find out which file is to blame? I guess Sysinternals tools like RAMMap already use Windows API functions, so I won't find out more if I try to use functions like GetMappedFileNameA on my own.
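For what it's worth, here is a minimal sketch of how GetMappedFileNameA can be used from your own code: it answers "which file backs this mapped address in this process", which is per-address rather than system-wide. The notepad.exe path is just a placeholder for any existing file.
/* Minimal sketch: given an address inside a mapped view in the current
   process, GetMappedFileNameA returns the backing file's device path.
   Link with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    HANDLE hFile = CreateFileA("C:\\Windows\\notepad.exe", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    LPVOID view = hMap ? MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0) : NULL;

    char name[MAX_PATH];
    /* Prints something like \Device\HarddiskVolume1\Windows\notepad.exe */
    if (view && GetMappedFileNameA(GetCurrentProcess(), view, name, sizeof(name)))
        printf("Backed by: %s\n", name);

    if (view) UnmapViewOfFile(view);
    if (hMap) CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}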
I have 24 GB of mapped files on my 96 GB machine. It seems to me that this is simply the Windows file cache ("Smartdrv", if you know that from DOS times).
This is roughly the same amount as displayed in Task Manager as "Cached". Its tooltip reads:
Memory that contains cached data and code that is not actively in use.
So, this is nothing to worry about. In fact it's great, because Windows can read files from memory instead of disk. That makes stuff much faster.
I have two questions; both of them may be related, so I am asking them at once.
Linux has the /proc directory, which is a runtime data structure and gives information about running processes. Does Windows have any such directory where I can get runtime info about a process, like its layout and open handles? Please do not suggest tools like Process Explorer; it's good, but it is not part of the core Windows OS.
Secondly, it is said that in Windows not everything is a file; for example, a socket is not a file. Does that mean it is not a file you can see on your hard disk, but something created at runtime that would only have an entry in a /proc-like data structure?
Thanks.
While Windows has the ability to create virtual files (device drivers use this), there are no such files for process information.
Information about processes is available either through the process functions, the undocumented functions used by Process Explorer, or not at all.
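As a rough illustration of what "the process functions" look like, here is a minimal sketch that enumerates running processes with the documented Toolhelp API (it assumes an ANSI build, so szExeFile is a char string).
/* Sketch: list the running processes using the Toolhelp snapshot API. */
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);

    if (Process32First(snap, &pe)) {
        do {
            printf("%6lu  %s\n", pe.th32ProcessID, pe.szExeFile);
        } while (Process32Next(snap, &pe));
    }
    CloseHandle(snap);
    return 0;
}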
Not every file is stored on some disk.
Virtual files are essentially just some value in memory, or some callback function that generates the file contents dynamically when you're trying to read it.
This isn't necessarily a programming question, but I've hit a performance bottleneck with disk I/O and I'd like to try writing to and reading from RAM instead of the hard drive. I want to create my file in RAM and then run my application against it.
There are lots of tools for creating RAM drives, but none of them seem to work on Windows 2008 R2. Does anyone know if this is possible and, if so, how? Does anyone know of a tool that works?
Use Memory-Mapped Files to map the file into RAM (the memory is backed by the pagefile if the file is large, so be careful).
File mapping is the association of a file's contents with a portion of the virtual address space of a process. The system creates a file mapping object (also known as a section object) to maintain this association. A file view is the portion of virtual address space that a process uses to access the file's contents. File mapping allows the process to use both random input and output (I/O) and sequential I/O. It also allows the process to work efficiently with a large data file, such as a database, without having to map the whole file into memory.
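For concreteness, here is a hedged sketch of what the quote describes: create a file mapping (section) object for an existing file and read the file through a view in memory. The path is a placeholder.
/* Map an existing file read-only and access it through memory. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hFile = CreateFileA("C:\\Temp\\data.bin", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    /* Section object covering the whole file (size taken from the file). */
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!hMap) { CloseHandle(hFile); return 1; }

    /* View of the whole file; for huge files, map smaller windows instead. */
    const unsigned char *view = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
    if (view) {
        printf("first byte: 0x%02x\n", view[0]);
        UnmapViewOfFile(view);
    }
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}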
Whatever RAM disk you choose to purchase or write, remember to check whether the RAM disk driver responds to the SetDispositionInformationFile call.
I ended up finding and using Vsuit Ramdisk (Server Edition). It works great, but it's not free.
When I boot Linux Mint from a live CD, I am able to save files to the "File System". But where are these files being saved to? It can't be the disc, since it's a CD-R. I don't think they are stored in RAM, because RAM can only hold so much data and isn't really intended to be used as a "hard drive". The only other option is the hard drive... but it's certainly not saving to any partition on the hard drive I know about, since none of them are mounted. So where are my files being saved?
Believe it or not, it's a ramdisk :)
All live distros mount a temporary disk in RAM. The process is completely transparent to the user, all thanks to the magic of the Linux kernel.
The OS first allocates an area of your RAM as a virtual device, then mounts it as a regular hard drive in your file system.
Once you reboot, you lose all the data on that ramdrive.
A ramdrive is needed by almost all software running on live CDs. Almost all programs, desktop managers in particular, expect to write files, even temporary ones, during execution.
As an example, there are two ways to run KDE on a live CD: either modify its code deeply so that you cannot change the wallpaper etc. (the desktop settings are stored inside ~/.kde), or redeploy it onto a writable file system such as a ramdrive to avoid write failures on a read-only file system.
Obviously, you can mount your real HDD or any USB drive into the virtual file system and make all writes to them permanent, but by default no live distro mounts your drives into the root file system; instead they usually mount them under specific directories like /mnt, /media, or /windows.
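As a rough illustration of the underlying idea (not exactly what any particular distro does), a directory can be backed by RAM by mounting a tmpfs onto it. The mount point and size below are placeholders, and the call needs root.
/* Roughly equivalent to: mount -t tmpfs -o size=256m tmpfs /mnt/ram */
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    /* /mnt/ram must already exist as an empty directory. */
    if (mount("tmpfs", "/mnt/ram", "tmpfs", 0, "size=256m") != 0) {
        perror("mount");
        return 1;
    }
    puts("tmpfs mounted at /mnt/ram; files written there live in RAM");
    return 0;
}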
Hope this helps.
It does indeed emulate a disk using RAM; from Wikipedia:
It is able to run without permanent installation by placing the files that typically would be stored on a hard drive into RAM, typically in a RAM disk, though this does cut down on the RAM available to applications.
RAM. In Linux, and indeed most Unix systems, any kind of device is exposed as a file.
For example, to get memory info on Linux you use cat /proc/meminfo, where cat is used to read files. Then there's all sorts of strange stuff like /dev/random (to read random crap) and /dev/null (to throw away crap). ;-)
To make it persistent, use a USB device, properly formatted and with a special name. See here:
https://help.ubuntu.com/community/LiveCD/Persistence
My application writes some bytes of data to an alternate data stream. This works fine on all but one machine (Windows Server 2003 SP2).
Instead, CreateFile returns ERROR_DISK_FULL when I try to create an alternate data stream (on the root directory). I can't find the reason for this result, because:
There's plenty of space on that drive.
The drive is NTFS formatted (according to GetVolumeInformation).
The drive supports alternate data streams (according to GetVolumeInformation).
Edit: I can provide some more information about what the reason is not:
I added many streams on a test system that didn't show the error, to see whether the error would occur. It didn't. Instead, after about 2000 streams with long file names, another error occurred and persisted: 1450 (ERROR_NO_SYSTEM_RESOURCES).
EDIT: Here is an example of one of the file names used:
char szStreamFileName[] = "C:\\:abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnoqrstuvwxyz012345";
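For reference, here is a minimal sketch of the kind of call that fails; the stream name is shortened to a placeholder, and 112 is ERROR_DISK_FULL.
/* Create a named stream on the root directory and report any Win32 error. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "C:\:name" addresses an alternate data stream of the root directory. */
    HANDLE h = CreateFileA("C:\\:teststream", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD written;
    WriteFile(h, "some bytes", 10, &written, NULL);
    CloseHandle(h);
    return 0;
}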
EDIT: Our customer uses corporate antivirus software from Avira on this server. Maybe this is the reason (alternate data streams can be abused by malware).
After opening a support ticket with MS, I know that there was a read-only flag set which can only be set (and reset) with undocumented Windows functions. Nobody knows who set this flag or why, but I sent them an image of the drive (after I got the machine from our customer), and that is how they figured it out. We only have a workaround in our application (we use another location if we detect this error). Meanwhile we know that some of our customers have this problem.
Are there any compressed/sparse files or alternate data streams?
Often backup applications receive ERROR_DISK_FULL errors attempting to back up compressed files and this causes quite a bit of confusion when there are still several gigabytes of free space on the drive. Other issues may also occur when copying compressed files. The goal of this blog is to give the reader a more thorough understanding of what really happens when you compress NTFS files.
From Understanding NTFS Compression
Just another possibility...
Did you check the number of currently open files in your OS?
The OS supports a maximum number of file handles; beyond that it reports ERROR_DISK_FULL or ERROR_NO_SYSTEM_RESOURCES.
And second possibility...
The root directory is limited in the number of files it can hold; as I remember, it was 512 files on older systems. But NTFS supports an unlimited number of files in the root!
You might want to see what something like Sysinternals' Process Monitor utility captures when trying to create this file; it shows the return codes of the various APIs involved in the I/O stack, and one of them might give you a clue as to why 112 is being returned to you. Hopefully the level of detail in ProcMon is enough; if not, I imagine there are other, more detailed I/O trace facilities for Windows (but I don't know of them off the top of my head).
The filename you give is
char szStreamFileName[] = "C:\\:abcdefghijklm...
it starts with
C:\\:
Is that a typo in the post, or is there really a colon after the slash? I think that's an illegal filename.
If you try to copy a file larger than the file system's size limit (2 GB for FAT16, 4 GB for FAT32) from another file system (NTFS) to FAT/FAT32, you may see this error.
Just a blind shot, but are the rights set properly?
What tools or techniques can I use to remove cached file contents to prevent my performance results from being skewed? I believe I need to either completely clear, or selectively remove cached information about file and directory contents.
The application that I'm developing is a specialised compression utility, and is expected to do a lot of work reading and writing files that the operating system hasn't touched recently, and whose disk blocks are unlikely to be cached.
I wish to remove the variability I see in IO time when I repeat the task of profiling different strategies for doing the file processing work.
I'm primarily interested in solutions for Windows XP, as that is my main development machine, but I can also test using Linux, so I'm interested in answers for that environment too.
I tried Sysinternals CacheSet, but clicking "Clear" doesn't result in a measurable increase (i.e. a restoration to cold-boot timing) in the time to re-read files I've just read a few times.
Use SysInternal's RAMMap app.
The Empty / Empty Standby List menu option will clear the Windows file cache.
For Windows XP, you should be able to clear the cache for a specific file by opening the file using CreateFile with the FILE_FLAG_NO_BUFFERING flag and then closing the handle. This isn't documented, and I don't know if it works on later versions of Windows, but I used this long ago when writing test code to compare file compression libraries. I don't recall whether read or write access affected this trick.
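A minimal sketch of that trick, assuming a helper name and placeholder path of my own choosing:
/* Open the file unbuffered and close it again, which (unofficially)
   evicts that file's pages from the cache. */
#include <windows.h>

BOOL PurgeFileFromCache(const char *path)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;
    /* No reads are needed; closing the unbuffered handle is the point. */
    return CloseHandle(h);
}

int main(void)
{
    return PurgeFileFromCache("C:\\Temp\\testdata.bin") ? 0 : 1;
}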
A command line utility can be found here.
From the source:
EmptyStandbyList.exe is a command line tool for Windows (Vista and above) that can empty:
process working sets,
the modified page list,
the standby lists (priorities 0 to 7), or
the priority 0 standby list only.
Usage:
EmptyStandbyList.exe workingsets|modifiedpagelist|standbylist|priority0standbylist
A quick Google search gives these options for Linux:
Unmount and mount the partition holding the files
sync && echo 1 > /proc/sys/vm/drop_caches
#include <fcntl.h>
int posix_fadvise(int fd, off_t offset, off_t len, int advice);
with advice option POSIX_FADV_DONTNEED:
The specified data will not be accessed in the near future.
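Putting that prototype to use, here is a minimal sketch that drops one file's cached pages; the path is a placeholder, and if the file was just written you would fsync() it first so dirty pages can actually be discarded.
/* Ask the kernel to drop a file's cached pages with POSIX_FADV_DONTNEED. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd = open("/tmp/testdata.bin", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* offset 0, len 0 means "the whole file" */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (rc != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));

    close(fd);
    return 0;
}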
I've found one technique (other than rebooting) that seems to work:
Run a few copies of MemAlloc
With each one, allocate large chunks of memory a few times
Use Process Explorer to observe the System Cache size reducing to very low levels
Quit the MemAlloc programs
It isn't selective though. Ideally I'd like to be able to clear the specific portions of memory being used for caching the disk blocks of files that I no longer want cached.
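A rough stand-in for MemAlloc, in case you want to script the same effect; the block size and count below are arbitrary and should be tuned to your machine.
/* Allocate and touch large blocks so the OS repurposes standby (cache)
   pages; quit the program to release the memory again. */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE (256u * 1024u * 1024u)   /* 256 MB per block */
#define BLOCKS     8                        /* ~2 GB total */

int main(void)
{
    void *blocks[BLOCKS] = {0};
    int i;
    for (i = 0; i < BLOCKS; i++) {
        blocks[i] = malloc(BLOCK_SIZE);
        if (!blocks[i]) break;
        /* Touch every page so the memory is actually committed. */
        memset(blocks[i], 0xA5, BLOCK_SIZE);
        printf("allocated block %d\n", i + 1);
    }
    printf("holding memory; press Enter to release\n");
    getchar();
    for (i = 0; i < BLOCKS; i++) free(blocks[i]);
    return 0;
}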
For a much better view of the Windows XP file system cache, try ATM by Tim Murgent. It allows you to see both the file system cache Working Set size and the Standby List size in a more detailed and accurate view. For Windows XP you need the old version 1 of ATM, which is available for download here, since V2 and V3 require Server 2003, Vista, or higher.
You will observe that although Sysinternals CacheSet will reduce the "Cache WS Min", the actual data continues to exist in the form of standby lists, from where it can be used until it has been replaced with something else. To replace it with something else, use a tool such as MemAlloc, flushmem by Chad Austin, or Consume.exe from the Windows Server 2003 Resource Kit Tools.
As the question also asked for Linux, there is a related answer here.
The command line tool vmtouch allows for adding and removing files and directories from the system file cache, amongst other things.
There's a Windows API call, SetSystemFileCacheSize (https://learn.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-setsystemfilecachesize), that can be used to flush the file system cache. It can also be used to limit the cache size to a very small value. It looks perfect for these kinds of tests.
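A hedged sketch of how such a flush might look; my reading of the documentation is that passing (SIZE_T)-1 for both sizes flushes the cache, and the call needs administrator rights plus SeIncreaseQuotaPrivilege (link with advapi32.lib).
/* Enable SeIncreaseQuotaPrivilege, then flush the system file cache. */
#include <windows.h>
#include <stdio.h>

static BOOL EnableIncreaseQuotaPrivilege(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;
    BOOL ok;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValueA(NULL, "SeIncreaseQuotaPrivilege",
                               &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return FALSE;
    }

    ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL)
         && GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

int main(void)
{
    if (!EnableIncreaseQuotaPrivilege()) {
        fprintf(stderr, "could not enable SeIncreaseQuotaPrivilege\n");
        return 1;
    }
    if (!SetSystemFileCacheSize((SIZE_T)-1, (SIZE_T)-1, 0)) {
        fprintf(stderr, "SetSystemFileCacheSize failed: %lu\n", GetLastError());
        return 1;
    }
    puts("file cache flushed");
    return 0;
}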