How to create a RAM drive in Windows 7 (Windows Server 2008 R2)?

This isn't necessarily a programming question, but I've hit a performance bottleneck with disk I/O and I'd like to try reading and writing from RAM instead of the hard drive. I want to create my file in RAM and then run my application against it.
There are lots of tools for creating RAM drives, but none of them seem to work on Windows Server 2008 R2. Does anyone know if this is possible and, if so, how? Does anyone know of a tool that works?

Use memory-mapped files to map the file into RAM (note that a large mapping may be backed by the pagefile, so be careful).
File mapping is the association of a file's contents with a portion of the virtual address space of a process. The system creates a file mapping object (also known as a section object) to maintain this association. A file view is the portion of virtual address space that a process uses to access the file's contents. File mapping allows the process to use both random input and output (I/O) and sequential I/O. It also allows the process to work efficiently with a large data file, such as a database, without having to map the whole file into memory.
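As a rough sketch of the Win32 flow (the file path below is a placeholder and error handling is minimal):

    #include <windows.h>

    int main(void)
    {
        // Open the existing data file (placeholder path).
        HANDLE file = CreateFileA("C:\\data\\bigfile.bin",
                                  GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        // Create a file mapping object covering the whole file.
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        // Map a view into this process's address space. Access through
        // this pointer goes through the cache manager rather than
        // explicit disk I/O calls.
        unsigned char *view = (unsigned char *)
            MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        if (view) {
            view[0] ^= 0;              // touch a page
            UnmapViewOfFile(view);
        }

        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

Once the view is mapped, repeated access to hot pages is served from RAM, and dirty pages are written back lazily by the memory manager.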

Whatever RAM disk you choose to purchase or write, remember to check whether the RAM disk driver responds to the SetDispositionInformationFile call.
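One user-mode way to probe that, assuming the RAM disk is mounted as a drive letter (R: below is a placeholder): create a scratch file on it and mark it delete-on-close via SetFileInformationByHandle, which reaches the driver as a set-disposition-information request:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Placeholder path on the RAM disk under test.
        HANDLE h = CreateFileA("R:\\probe.tmp", DELETE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) { printf("create failed\n"); return 1; }

        // TRUE = delete the file when the last handle closes.
        FILE_DISPOSITION_INFO info = { TRUE };
        BOOL ok = SetFileInformationByHandle(h, FileDispositionInfo,
                                             &info, sizeof(info));
        printf("disposition request %s\n", ok ? "accepted" : "rejected");

        CloseHandle(h);
        return 0;
    }

If the driver rejects the request, software that relies on delete-on-close semantics (temporary files, some installers) will misbehave on that RAM disk.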

I ended up finding and using Vsuit Ramdisk (Server Edition). It works great, but it's not free.

Related

Get name of memory mapped file

I have a Windows host where, according to RAMMap, almost all memory is in mapped files. I am trying to find out which file is causing such a leak. All available guides suggest using the File Summary tab to trace the connection between files and mapped-file memory. But no single file there occupies that amount of mapped-file memory.
Is there a way to find out which file is to blame? I guess Sysinternals tools like RAMMap already use Windows API functions, so I won't find out more if I try to use functions like GetMappedFileNameA on my own.
I have 24 GB of mapped files on my 96 GB machine. It seems to me that this is simply the Windows file cache ("Smartdrv", if you know that from DOS times).
This is roughly the same amount as displayed in Task Manager as "cached". The tooltip for that reads:
Memory that contains cached data and code that is not actively in use.
So, this is nothing to worry about. In fact it's great, because Windows can read files from memory instead of disk. That makes stuff much faster.
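That said, if you still want to probe the mappings yourself as the question suggests, a minimal sketch (link against psapi.lib; the handle needs PROCESS_QUERY_INFORMATION access) is to walk a process's address space with VirtualQueryEx and ask GetMappedFileNameA for the file behind each mapped region:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        // Pass a PID on the command line, or default to ourselves.
        DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : GetCurrentProcessId();
        HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);
        if (!proc) return 1;

        MEMORY_BASIC_INFORMATION mbi;
        unsigned char *addr = NULL;
        char name[MAX_PATH];

        while (VirtualQueryEx(proc, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            // MEM_MAPPED = data file mappings (MEM_IMAGE would be EXEs/DLLs).
            // Names come back as NT device paths, e.g. \Device\HarddiskVolume1\...
            if (mbi.Type == MEM_MAPPED &&
                GetMappedFileNameA(proc, mbi.BaseAddress, name, MAX_PATH))
                printf("%p %8lu KB %s\n", mbi.BaseAddress,
                       (unsigned long)(mbi.RegionSize / 1024), name);
            addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        }

        CloseHandle(proc);
        return 0;
    }

Note this only finds file views mapped into some process; cache-only standby pages, which is what the 24 GB here mostly are, don't appear in any process's address space.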

boost shared_memory_object stores content on disk?

I'm training multiple networks based on a single database.
So to speed things up and reduce disk reads, I use the shared_memory_object class provided by Boost. Since the lab workstation is currently unavailable, I migrated my code to my personal computer.
On the lab workstation, the host program successfully reads all data to memory. But on my PC, strangely it creates a file on system drive rather than storing the data in memory. The whole database is about 3.7 GB; the lab workstation has 32 GB memory and runs Windows Server 2008 R2; my PC has 8 GB memory and runs Windows 7.
There should be enough memory to store the data, so why does this happen? Is there a way to force the program to keep all data in memory?
shared_memory_object uses a memory-mapped file as the backing of the shared memory, so a physical file on disk is necessary on either machine. The OS still does extensive caching of that file's contents, so in practice it may keep it cached entirely in RAM if space is available.
If you don't like seeing a filename physically existing on disk, you can try windows_shared_memory instead. It uses space taken from the system pagefile as the backing of the shared memory.
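A minimal sketch of that variant (the segment name and size here are placeholders; note that a windows_shared_memory segment vanishes once the last process attached to it exits):

    #include <boost/interprocess/windows_shared_memory.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    int main()
    {
        using namespace boost::interprocess;

        // Pagefile-backed segment: no file ever appears on disk.
        windows_shared_memory shm(create_only, "MyDataset",
                                  read_write, 64 * 1024 * 1024);

        // Map the whole segment into this process's address space.
        mapped_region region(shm, read_write);

        // Use it like ordinary memory.
        std::memset(region.get_address(), 0, region.get_size());
        return 0;
    }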

Cheating exclusive-access file locks in Windows (7)

I am currently on a mission loading files into the pagecache, and I want to load locked files too. The goal is nothing more than proactively keeping a dataset in RAM, reducing loading times within third-party applications.
Shadow copies were my first thought on this, but unfortunately they seem to have separate pagecaches.
So is there any way to cheat around the exclusive lock mechanism? Like fetching the file fragments' locations on disk, opening the whole disk, and reading directly (which I fear goes through another, separate pagecache anyway)?
Or is there a very different approach to directing the pagecache, e.g. some Windows API that can be told to load a specific file into the pagecache?
You can access locked files in Windows from a kernel-mode driver, or using our RawDisk product. But for your task (speeding up DB file access) this won't work well, as Windows' filesystem cache size is limited (it won't accommodate GBs of data).
In general, if I were to develop this as a large software project (for a small application the amount of work needed is just enormous), I'd do the following: create a virtual drive backed by in-memory storage, present the DB file to the application via that virtual disk, and asynchronously flush drive contents to the physical disk on change. All of this should be done in kernel mode (this is where development time grows to 12-15 man-months of work).
In theory the same can be done using one of our Virtual Storage products, but going back into user mode for callback handling would eliminate all that you gain from moving the data into RAM.

Where is data on a non-persistent Live CD stored?

When I boot up Linux Mint from a Live CD, I am able to save files to the "File System". But where are these files being saved to? It can't be the disc, since it's a CD-R. I don't think they're stored in RAM, because it can only hold so much data and isn't really intended to be used as a "hard drive". The only other option is the hard drive... but it's certainly not saving to any partition on the hard drive I know about, since none of them are mounted. So where are my files being saved?
Believe it or not, it's a ramdisk :)
All live distros mount a temporary disk in RAM. The process is completely transparent to the user, thanks to the magic of the Linux kernel.
The OS first allocates an area of your RAM as a virtual device, then mounts it as a regular hard drive in your file system.
Once you reboot, you lose all your data from that ramdrive.
A ramdrive is needed by almost all software running on live CDs. Almost all programs, desktop managers in particular, are designed to write files, even temporary ones, during their execution.
As an example, there are two ways to run KDE on a Live CD: either modify its code deeply so that you cannot change the wallpaper etc. (the desktop settings are stored inside ~/.kde), or deploy it onto a writable file system such as a ramdrive to avoid write failures on a read-only file system.
Obviously, you can mount your real HDD or any USB drive into the virtual file system and make all writes to them permanent, but by default no live distro mounts your drives into the root file system; instead they usually mount them into specific subdirectories like /mnt, /media, or /windows.
Hope this helps.
It does indeed emulate a disk using RAM; from Wikipedia:
It is able to run without permanent installation by placing the files that typically would be stored on a hard drive into RAM, typically in a RAM disk, though this does cut down on the RAM available to applications.
RAM. In Linux, and indeed most Unix systems, any kind of device is exposed as a file.
For example, to get memory info on Linux you use cat /proc/meminfo, where cat is a tool for reading files. Then there's all sorts of strange stuff like /dev/random (to read random data) and /dev/null (to throw data away). ;-)
To make it persistent, use a USB device, properly formatted and with a special name. See here:
https://help.ubuntu.com/community/LiveCD/Persistence

Improving filesystem access on a remote fileserver

I have a large file server machine which contains several terabytes of image data that I generally access in chunks. I'm wondering if there is anything special that I can do to hint to the OS that a specific set of documents should be preloaded into memory to improve the access time for that subset of files when they are loaded over a file share.
I can supply a parent directory that contains all of the files that comprise a given chunk before I start to access them.
The first thing that comes to mind is to simply write a service that will iterate through the files in the specified path, load them into process memory and then free the memory in hopes that the OS filesystem cache holds on to them, but I was wondering if there is a more explicit way to do this.
It would save a lot of work if I could reuse the existing file-share access paradigm rather than requiring access to these files to go through a memory-caching layer.
The files in question will almost always be accessed in a readonly manner.
I'm working on Windows Server 2003/2008
Two approaches come to mind:
1) Set the server to be optimized for file serving. This used to be in the properties for file & printer sharing, but seems to have gone away in Windows 2008. This is set via the registry in:
    HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache = 1
    HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size = 3
See http://technet.microsoft.com/en-us/library/cc784562.aspx for reference.
2) Ensure that both endpoints are either Windows 2008/Windows 2008 or Windows 2008/Vista. SMB 2.0 and the improved IP stack bring significant performance gains. This may not be an option due to cost, organizational constraints, or procurement lead time, but I thought I'd mention it.
See http://technet.microsoft.com/en-us/library/bb726965.aspx for reference.
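As for the cache-warming service idea from the question itself, a minimal sketch is to enumerate a chunk's directory and read each file once so the Windows cache manager holds the pages (the directory path is a placeholder; FILE_FLAG_SEQUENTIAL_SCAN hints the cache manager to read ahead):

    #include <windows.h>
    #include <stdio.h>

    // Read a file once, discarding the data; we only want the
    // side effect of populating the filesystem cache.
    static void warm_file(const char *path)
    {
        HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        if (h == INVALID_HANDLE_VALUE) return;

        static char buf[1 << 20];   // 1 MB scratch buffer
        DWORD got;
        while (ReadFile(h, buf, sizeof(buf), &got, NULL) && got > 0)
            ;
        CloseHandle(h);
    }

    int main(void)
    {
        // Placeholder for the parent directory of one chunk of images.
        const char *dir = "D:\\images\\chunk42";
        char pattern[MAX_PATH], path[MAX_PATH];
        _snprintf(pattern, MAX_PATH, "%s\\*", dir);

        WIN32_FIND_DATAA fd;
        HANDLE find = FindFirstFileA(pattern, &fd);
        if (find == INVALID_HANDLE_VALUE) return 1;

        do {
            if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
                _snprintf(path, MAX_PATH, "%s\\%s", dir, fd.cFileName);
                warm_file(path);
            }
        } while (FindNextFileA(find, &fd));

        FindClose(find);
        return 0;
    }

Whether the cache actually retains the data depends on memory pressure and the LargeSystemCache setting above; as far as I know there is no supported user-mode API on 2003/2008 to pin a file in the cache outright.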
