I'm training multiple networks based on a single database.
To speed things up and reduce disk reads, I use the shared_memory_object class provided by Boost. Since the lab workstation is currently unavailable, I migrated my code to my personal computer.
On the lab workstation, the host program successfully reads all the data into memory. But on my PC, strangely, it creates a file on the system drive rather than storing the data in memory. The whole database is about 3.7 GB; the lab workstation has 32 GB of memory and runs Windows Server 2008 R2; my PC has 8 GB of memory and runs Windows 7.
There should be enough memory to store the data, so why does this happen? Is there a way to force the program to keep all the data in memory?
shared_memory_object uses a memory-mapped file as the backing for the shared memory, so a physical file on disk is necessary on either machine. The OS still does extensive caching of that file's contents, so it may well keep the whole thing cached in RAM if space is available.
If you don't like seeing a file physically existing on disk, you can try windows_shared_memory instead. It will use space taken from the system swap file as the backing of the shared memory.
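A minimal sketch of the host side with windows_shared_memory, assuming Boost.Interprocess and a 64-bit build (the segment name "MyDataset" is a placeholder):

    #include <boost/interprocess/windows_shared_memory.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>
    #include <iostream>

    namespace bip = boost::interprocess;

    int main()
    {
        // Backed by the system pagefile: no file appears on disk.
        const std::size_t size = std::size_t(3700) << 20;  // ~3.7 GB

        bip::windows_shared_memory shm(bip::create_only, "MyDataset",
                                       bip::read_write, size);
        bip::mapped_region region(shm, bip::read_write);

        // Load the database into region.get_address() here.
        std::memset(region.get_address(), 0, region.get_size());

        // Keep this process alive while the clients run: the segment
        // is destroyed when the last attached process exits.
        std::cin.get();
        return 0;
    }

Each training process would then attach with bip::windows_shared_memory shm(bip::open_only, "MyDataset", bip::read_only); and map it with its own mapped_region.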
I work on the side doing computer repair. Standard operating procedure is to pop out the HDD/SSD, mount it on a backup machine, and pull the client's data (i.e., in case the drive fails or something goes horribly wrong, their data is protected). More and more often, my office is seeing SSDs soldered directly to the motherboard, making this technique impossible.
I was wondering if any of you knew of some method that would allow direct disk access without drive removal. An analogue would be mounting a phone in Mass Storage Device mode, I suppose. This may already be possible by doing something with a Linux live USB, but I'm not sure how. Booting from a live USB and transferring files over the network is unacceptably slow given the volume of computers we see and the amount of data involved.
On Apple computers this is simple: plug in a Thunderbolt/FireWire connector and use Target Disk Mode to pull directly from the drive.
tl;dr: how do I make a backup of a Windows computer without opening it up?
Boot a live Linux from a large USB 3 HDD, and use the same disk to copy the client's data to, at roughly USB 3 speed.
I am currently on a mission loading files into the page cache, and I want to load locked files, too. The goal is nothing more than proactively keeping a dataset in RAM, reducing loading times within third-party applications.
Shadow copies were my first thought on this, but unfortunately they seem to have separate page caches.
So is there any way of cheating around the exclusive-lock mechanism? Something like fetching the file's fragment locations on disk, then opening the whole disk and reading directly (which I fear goes through yet another separate page cache anyway)?
Or is there a very different approach to directing the page cache, e.g. some Windows API that can be told to load a specific file into the page cache?
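For a file that isn't exclusively locked, the naive approach already works; roughly this sketch, assuming a 64-bit build and a 4 KB page stride:

    #include <windows.h>

    // Warm the Windows file cache by mapping a file and touching every
    // page. Returns false for files I cannot open, which is exactly
    // the locked-file case this question is about.
    bool WarmFileCache(const wchar_t* path)
    {
        HANDLE file = CreateFileW(path, GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE,
                                  nullptr, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE)
            return false;  // e.g. sharing violation on a locked file

        LARGE_INTEGER size;
        if (!GetFileSizeEx(file, &size) || size.QuadPart == 0)
        {
            CloseHandle(file);
            return false;
        }

        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY,
                                            0, 0, nullptr);
        if (!mapping)
        {
            CloseHandle(file);
            return false;
        }

        const char* view = static_cast<const char*>(
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
        if (view)
        {
            volatile char sink = 0;
            for (LONGLONG i = 0; i < size.QuadPart; i += 4096)
                sink ^= view[i];  // pull each page into the cache
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return view != nullptr;
    }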
You can access locked files in Windows from a kernel-mode driver, or by using our RawDisk product. But for your task (speeding up access to a DB file) this won't work well, as the size of Windows' filesystem cache is limited (it won't accommodate gigabytes of data).
In general, if I were to tackle this, it would be a large software project (for such a small application the amount of work needed is just enormous). I'd do the following: create a virtual drive backed by in-memory storage, present the DB file to the application via that virtual disk, and flush the drive contents to the physical disk asynchronously on change. All of this should be done in kernel mode (which is where development time grows to 12-15 man-months of work).
In theory the same can be done using one of our Virtual Storage products, but going back into user mode for callback handling would eliminate everything you gain from moving the data into RAM.
This isn't necessarily a programming question, but I've hit a performance bottleneck with disk IO and I'd like to try writing and reading from RAM instead of the hard drive. I want to create my file in RAM and then run my application against it.
There are lots of tools for creating RAM drives, but none of them seem to work on Windows 2008 R2. Does anyone know if this is possible and, if so, how? Does anyone know of a tool that works?
Use memory-mapped files to map the file into RAM (bearing in mind that a large mapping is backed by the pagefile, so be careful).
File mapping is the association of a file's contents with a portion of the virtual address space of a process. The system creates a file mapping object (also known as a section object) to maintain this association. A file view is the portion of virtual address space that a process uses to access the file's contents. File mapping allows the process to use both random input and output (I/O) and sequential I/O. It also allows the process to work efficiently with a large data file, such as a database, without having to map the whole file into memory.
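A minimal sketch of the idea, with a placeholder path and most error handling trimmed:

    #include <windows.h>

    int main()
    {
        // Placeholder path; substitute the file your application uses.
        HANDLE file = CreateFileW(L"C:\\data\\workfile.bin",
                                  GENERIC_READ | GENERIC_WRITE, 0,
                                  nullptr, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        // Size arguments of 0 mean "map the entire file".
        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READWRITE,
                                            0, 0, nullptr);
        if (!mapping)
        {
            CloseHandle(file);
            return 1;
        }

        char* data = static_cast<char*>(
            MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0));
        if (data)
        {
            // Reads and writes through this pointer go via the system
            // cache: hot pages stay in RAM, dirty pages flush lazily.
            data[0] = 42;
            UnmapViewOfFile(data);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }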
Whatever RAM disk you choose to purchase or write, remember to check whether the RAM-disk driver responds to the SetDispositionInformationFile call.
I ended up finding and using Vsuit Ramdisk (Server Edition). It works great, but it's not free.
Hi, we have a server with 32 cores and 256 GB of RAM, running SQL Server 2008 Enterprise on Windows 2008 R2 Enterprise.
Currently Windows has automatically allocated a 256 GB swap file, which seems excessive. Is it advisable to hard-limit the swap file to something smaller, like 32 GB, to force it to use the physical RAM?
Is it the swap file or is it the hibernate file?
The answer depends on the work the machine is expected to do. You might find that Windows doesn't touch the swap file much because you have adequate physical memory available. One approach would be to cut the swap file allocation in half, then use the built-in performance monitoring tools to make sure it is still running OK; after a period of stable running, look to halve the swap allocation again.
But is it really a problem? With a machine like that you probably have a good chunk of hard drive space available, and I doubt they would be slow old 5400 rpm drives :)
An ideally set-up OLTP SQL Server should never need to use the swap file, but it depends on what you are using this server for.
Unless you are short of disk space, I wouldn't worry too much. 32 GB sounds like a better size, though.
Two applications share memory via a memory-mapped file (MMF).
A creates the MMF (about 1 GB); B opens that MMF by name.
In Windows Task Manager, A shows 1 GB of memory.
But after closing and relaunching the B app several times (or perhaps after a day? I'm not sure how to reproduce it), A's memory in Windows Task Manager drops below 1 KB.
My guess is that, because the A app doesn't do anything after creating the MMF, Windows decides the MMF belongs to the B app (just a guess).
My OS is Windows 2003 Enterprise x64, SP2.
Does anybody know the reason?
Thanks in advance.
A memory-mapped file is still part of your virtual address space. Use Perfmon to get reliable counters instead of Task Manager, which changes with each release of Windows. The Perfmon counter Process | Virtual Bytes (total VAS) is the most interesting one.
My understanding is that 1GB is reserved in the virtual address space, but memory is only actually allocated for pages that are touched. Memory mapped files are implemented parallel to the Virtual Memory API, and both build upon the NT Virtual Memory Manager. See this article and diagram for an explanation.
Did you fill your entire file with data, or did you just allocate 1GB?
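To illustrate the distinction, a rough sketch of what process A might look like, assuming a 64-bit build (the section name "Local\\SharedArea" is a placeholder): if the touching loop is skipped, Task Manager shows almost nothing for A even though the full 1 GB of address space is reserved.

    #include <windows.h>

    int main()
    {
        const unsigned long long size = 1ull << 30;  // 1 GB

        // Pagefile-backed named section that process B opens by name.
        HANDLE mapping = CreateFileMappingW(
            INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
            static_cast<DWORD>(size >> 32),
            static_cast<DWORD>(size & 0xFFFFFFFF),
            L"Local\\SharedArea");
        if (!mapping)
            return 1;

        char* view = static_cast<char*>(
            MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0));

        // Physical pages are only charged to A once they are written.
        for (unsigned long long i = 0; view && i < size; i += 4096)
            view[i] = 0;

        Sleep(INFINITE);  // keep the section alive for process B
        return 0;
    }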
UPDATE:
Which column are you viewing in Task Manager?
The default column, Memory (Private Working Set), represents physically allocated memory.
You can add the column Commit Size to see the total amount of virtual address space allocated to the process.
Here is a summary of the various memory statistics you can see in Task Manager and what they mean.
It turned out to be working-set minimization: Windows trimmed A's working set, which is what Task Manager's default column reports.
Thanks, everyone. :)