I see many articles suggesting that huge files should not be memory-mapped, so that the virtual address space isn't consumed entirely by the mapping.
How does that change with a 64-bit process, where the address space increases dramatically?
If I need to randomly access a file, is there a reason not to map the whole file at once? (A file of dozens of GBs.)
On 64-bit, go ahead and map the file.
One thing to consider, based on Linux experience: if the access is truly random and the file is much bigger than you can expect to cache in RAM (so the chances of hitting a page again are slim), it can be worth passing MADV_RANDOM to madvise to stop the accumulated file pages from steadily and pointlessly swapping out other, actually useful data. I have no idea what the Windows equivalent API is, though.
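A minimal sketch of that advice on Linux (the file name is made up and error handling is kept short):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        const char* path = "huge.dat";   // hypothetical multi-GB file
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st{};
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
        size_t len = static_cast<size_t>(st.st_size);

        void* p = mmap(nullptr, len, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        // Tell the kernel the access pattern is random: read-ahead is disabled,
        // so pages touched once are less likely to crowd out more useful data.
        if (madvise(p, len, MADV_RANDOM) != 0) perror("madvise");

        // ... random reads through p ...

        munmap(p, len);
        close(fd);
        return 0;
    }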
There's a reason to think carefully about using memory-mapped files, even on a 64-bit platform (where virtual address space size is not an issue). It's related to the (potential) error handling.
When reading the file "conventionally", any I/O error is reported by the appropriate function's return value. The rest of the error handling is up to you.
On the other hand, if the error arises during the implicit I/O (resulting from a page fault and the attempt to load the needed portion of the file into the appropriate memory page), the error-handling mechanism depends on the OS.
On Windows, error handling is performed via SEH, so-called "structured exception handling". The exception propagates to user mode (the application's code), where you have a chance to handle it properly. Proper handling requires you to compile with the appropriate exception-handling settings (to guarantee the invocation of destructors, if applicable).
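A minimal sketch of what that looks like with MSVC's SEH extensions (the view pointer is assumed to come from MapViewOfFile; EXCEPTION_IN_PAGE_ERROR is the exception code raised when the implicit I/O fails):

    #include <windows.h>
    #include <cstdio>

    // Returns the first byte of the mapped view, or -1 if the backing I/O fails.
    // Note: a function using __try may not contain C++ objects with destructors.
    int ReadFirstByte(const unsigned char* view) {
        __try {
            return view[0];   // the page fault here may trigger the implicit file I/O
        } __except (GetExceptionCode() == EXCEPTION_IN_PAGE_ERROR
                        ? EXCEPTION_EXECUTE_HANDLER
                        : EXCEPTION_CONTINUE_SEARCH) {
            std::fprintf(stderr, "I/O error while touching the mapped view\n");
            return -1;
        }
    }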
I don't know how the error handling is performed on Unix/Linux, though.
P.S. I'm not saying don't use memory-mapped files; I'm saying use them carefully.
One thing to be aware of is that memory mapping requires a big contiguous chunk of virtual address space when the mapping is created; on a 32-bit system this particularly sucks because on a loaded system, getting long runs of contiguous address space is unlikely and the mapping will fail. On a 64-bit system this is much easier, as the upper bound of 64-bit is... huge.
If you are running code in controlled environments (e.g. 64-bit server environments that you are building yourself and know will run this code just fine), go ahead and map the entire file and just deal with it.
If you are trying to write general-purpose code that will be in software that could run on any number of configurations, you'll want to stick to a smaller chunked-mapping strategy. For example, map large files into collections of 1 GB chunks and have an abstraction layer that takes operations like read(offset) and converts them to an offset in the right chunk before performing the operation.
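A rough sketch of such an abstraction layer (POSIX mmap shown for brevity; the class name, the 1 GiB chunk size, and the decision to skip reads that straddle a chunk boundary are all illustrative choices, and on Windows the equivalent would be MapViewOfFile with per-chunk offsets):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    class ChunkedFile {
        static constexpr uint64_t kChunkSize = 1ull << 30;   // 1 GiB per mapping
        int fd_ = -1;
        uint64_t size_ = 0;
        std::vector<void*> chunks_;                          // lazily created views

    public:
        explicit ChunkedFile(const char* path) {
            fd_ = open(path, O_RDONLY);
            struct stat st{};
            fstat(fd_, &st);
            size_ = static_cast<uint64_t>(st.st_size);
            chunks_.assign((size_ + kChunkSize - 1) / kChunkSize, nullptr);
        }

        ~ChunkedFile() {
            for (size_t i = 0; i < chunks_.size(); ++i)
                if (chunks_[i]) munmap(chunks_[i], chunkLen(i));
            if (fd_ >= 0) close(fd_);
        }

        // read(offset): copy len bytes starting at offset into dst, mapping the
        // owning chunk on first use. Reads that cross a chunk boundary are left
        // out to keep the sketch short.
        void read(uint64_t offset, void* dst, size_t len) {
            size_t idx = static_cast<size_t>(offset / kChunkSize);
            if (!chunks_[idx])
                chunks_[idx] = mmap(nullptr, chunkLen(idx), PROT_READ, MAP_SHARED,
                                    fd_, static_cast<off_t>(idx * kChunkSize));
            std::memcpy(dst,
                        static_cast<char*>(chunks_[idx]) + offset % kChunkSize, len);
        }

    private:
        size_t chunkLen(size_t idx) const {
            return static_cast<size_t>(
                std::min<uint64_t>(kChunkSize, size_ - idx * kChunkSize));
        }
    };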
Hope that helps.
I'm working with memory-mapped files (MMFs) with very large datasets (depending on the input file), where each file is ~50 GB and around 40 files are open at the same time. Of course this varies; I can also have smaller files, but I can also have larger files, so the system should scale itself.
The MMF is acting as a backing buffer, so as long as I have enough free memory no paging should occur. The problem is that the Windows memory manager and my application act independently of each other. In good conditions everything works fine, but the memory manager is obviously too slow once I enter low-memory conditions: the memory is full and the system starts to page (which is good), but I'm still allocating memory, because I don't get any information about the paging.
In the end I enter a state where the system stalls: the memory manager pages and I keep allocating.
So I came to the point where I need to advise the memory manager, check the current memory conditions and invoke the paging myself. For that reason I wanted to use GetWriteWatch to inspect the memory regions I can flush.
Interestingly, GetWriteWatch does not work in my situation; it returns -1 without filling in the structures. So my question is: does GetWriteWatch work with MMFs?
Does GetWriteWatch work with Memory-Mapped Files?
I don't think so.
GetWriteWatch only accepts memory allocated via the VirtualAlloc function with the MEM_WRITE_WATCH flag.
File mappings are mapped using the MapViewOfFile* functions, which do not have this flag.
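A minimal sketch showing the kind of allocation GetWriteWatch does work with (the region size is illustrative, and the page size is assumed to be 4 KB for brevity rather than queried via GetSystemInfo):

    #include <windows.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const SIZE_T size = 1 << 20;   // 1 MiB region, for illustration
        void* base = VirtualAlloc(nullptr, size,
                                  MEM_RESERVE | MEM_COMMIT | MEM_WRITE_WATCH,
                                  PAGE_READWRITE);
        if (!base) return 1;

        static_cast<char*>(base)[0] = 42;   // dirty the first page

        std::vector<void*> dirty(size / 4096);
        ULONG_PTR count = dirty.size();
        DWORD granularity = 0;
        // Returns 0 on success. For memory that was not allocated with
        // MEM_WRITE_WATCH -- such as a MapViewOfFile view -- it fails instead.
        UINT rc = GetWriteWatch(0, base, size, dirty.data(), &count, &granularity);
        std::printf("rc=%u, dirty pages=%llu\n",
                    rc, static_cast<unsigned long long>(count));

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }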
My application needs lots of memory and big data structures in order to perform its work.
Often the application needs more than 1 GB of memory, and in some cases my customers really need to use the 64-bit version of the application because they have several gigabytes of memory.
In the past, I could easily explain to the user that if memory usage reached 1.6 to 1.7 GB, it was 'out of memory' or really close to an 'out of memory' situation, and that they needed to reduce their memory usage or move to the 64-bit version.
Over the last year I have noticed that the application often runs out of memory at only about 1 GB. After some investigation it seemed that the cause of this problem is memory fragmentation. I used VMMap (a Sysinternals utility) to look at the memory usage of my application and saw something like this:
The orange areas are memory allocated by my application. The purple areas are executable code.
As you can see in the bottom half of the image, the purple areas (which are the DLLs) are loaded at many different addresses, causing my memory to be fragmented. This isn't really a problem if my customer does not have a lot of data, but if my customer has data sets that take more than 1 GB, and a part of the application needs a big block of memory (e.g. 50 MB), it can result in a memory allocation failure, causing my application to crash.
Most of my data structures are STL-based and often don't require big chunks of contiguous memory, but in some cases (e.g. very big strings), it is really needed to have a contiguous block of memory. Unfortunately, it is not always possible to change the code so that it doesn't need such a contiguous block of memory.
The questions are:
How can I influence where DLLs are loaded in memory, without explicitly running REBASE on all the DLLs on the customer's computer, and without loading all DLLs explicitly?
Is there a way to specify the load addresses of DLLs in your own application's manifest file?
Or is there a way to tell Windows (via the manifest file?) not to scatter the DLLs around (I think this scattering is called ASLR)?
Of course the best solution is one that I can influence from within my application's manifest file, since I rely on the automatic/dynamic loading of DLLs by Windows.
My application is a mixed mode (managed+unmanaged) application, although the major part of the application is unmanaged.
Any suggestions?
First, virtual address space fragmentation does not necessarily cause an out-of-memory condition. It would only be a problem if your application had to allocate contiguous memory blocks of the appropriate size. Otherwise the impact of the fragmentation should be minor.
You say most of your data is "STL-based", but if for example you allocate a huge std::vector you'll need a contiguous memory block.
AFAIK there is no way to specify the preferred mapping address of a DLL when it is loaded. So there are only two options: either rebase it (the DLL file), or implement the DLL loading yourself (which is not trivial, of course).
Usually you don't need to rebase the standard Windows API DLLs; they are loaded into your address space very tightly packed. Fragmentation may come from third-party DLLs (such as Windows hooks, antivirus injections, etc.).
You cannot do this with a manifest; it must be done with the linker's /BASE option (Linker + Advanced + Base address in the IDE). The most flexible way is to use the /BASE:@filename,key syntax so that the linker reads the base address from a text file.
The best way to get the text file filled is from the Debug + Windows + Modules window. Get the Release build of your program loaded in the debugger and load up the whole shebang. Debug + Break All, bring up the window and copy-paste it into the text file. Edit it to match the required format, calculating load addresses from the Address column. Leave enough space between the DLLs so that you don't constantly have to tweak the text file.
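For reference, the base-address file is plain text, one line per module: a key, a base address, and a maximum size. The keys, addresses, and sizes below are invented purely for illustration (check your linker version's /BASE documentation for the exact format it expects); each DLL project is then linked with /BASE:@coffbase.txt,<its key>:

    mydll_core     0x60000000   0x00800000
    mydll_ui       0x60800000   0x00400000
    mydll_network  0x60C00000   0x00400000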
If you are able to execute some of your own code before the libraries in question are loaded, you could reserve yourself a nice big chunk of address space ahead of time to allocate from.
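A minimal sketch of that reservation on Windows (the helper names are made up; MEM_RESERVE claims address space without committing physical memory):

    #include <windows.h>

    // Reserve a large contiguous address range early, before later DLL loads
    // can land in the middle of it. Nothing is committed yet, so no physical
    // memory or pagefile space is consumed.
    void* ReserveBigBlock(SIZE_T bytes) {
        return VirtualAlloc(nullptr, bytes, MEM_RESERVE, PAGE_NOACCESS);
    }

    // Later, commit a piece of the reservation when it is actually needed.
    void* CommitPiece(void* reservedBase, SIZE_T offset, SIZE_T bytes) {
        return VirtualAlloc(static_cast<char*>(reservedBase) + offset, bytes,
                            MEM_COMMIT, PAGE_READWRITE);
    }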
Otherwise, you need to identify the DLLs responsible, to determine why they are being loaded. For example, are they part of .NET, the language's runtime library, your own code, or third party libraries you are using?
For your own code the most sensible solution is probably to use static instead of dynamic linking. This should also be possible for the language runtime and may be possible for third party libraries.
For third party libraries, you can change from using implicit loading to explicit loading, so that loading only takes place after you've reserved your chunk of address space.
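A sketch of the switch from implicit to explicit loading ("thirdparty.dll" and "DoWork" are made-up names standing in for the real library and entry point):

    #include <windows.h>

    typedef int (__stdcall *DoWorkFn)(int);

    // Call into the third-party DLL, loading it only on first use -- i.e. after
    // the big address-space reservation has already been made.
    int CallThirdParty(int arg) {
        static HMODULE mod = LoadLibraryW(L"thirdparty.dll");
        if (!mod) return -1;
        DoWorkFn doWork =
            reinterpret_cast<DoWorkFn>(GetProcAddress(mod, "DoWork"));
        return doWork ? doWork(arg) : -1;
    }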
I don't know if there's anything you can do about .NET libraries; but since most of your code is unmanaged, it might be possible to eliminate the managed components in order to get rid of .NET. Or perhaps you could split the .NET parts into a separate process.
I wanted to know why the code segment is common to different instances of the same program.
For example: consider program P1.exe running; if another copy of P1.exe is started, the code segment will be common to both running instances. Why is that?
If the code segment in question is loaded from a DLL, it might be the operating system being clever and re-using the already loaded library. This is one of the core points of using dynamically loaded library code, it allows the code to be shared across multiple processes.
Not sure if Windows is clever enough to do this with the code sections of regular EXE files, but it would make sense if possible.
It could also be virtual memory fooling you; two processes can look like they have the same thing at the same address, but that address is virtual, so they are really just showing mappings of physical memory.
Code is typically read-only, so it would be wasteful to make multiple copies of it.
Also, Windows (at least; I can't speak for other OSes at this level) uses the paging infrastructure to page code in and out directly from the executable file, as if it were a paging file. Since you are dealing with the same executable, it is paging from the same location to the same location.
Self-modifying code is effectively no longer supported by modern operating systems. Generating new code is possible (by setting the correct flags when allocating memory) but this is separate from the original code segment.
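For the "generating new code" case, a hedged sketch of the usual allocation dance on Windows (the write-then-protect two-step is one common choice; a single PAGE_EXECUTE_READWRITE allocation also works):

    #include <windows.h>

    // Newly generated code must live in memory explicitly allocated for the
    // purpose; it is not part of the shared, read-only code segment.
    void* AllocCodeBuffer(SIZE_T bytes) {
        // Writable first, so the code bytes can be emitted into it.
        return VirtualAlloc(nullptr, bytes, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    }

    bool MakeExecutable(void* p, SIZE_T bytes) {
        DWORD oldProtect = 0;
        // Flip to executable and make sure the CPU sees the fresh instructions.
        return VirtualProtect(p, bytes, PAGE_EXECUTE_READ, &oldProtect) &&
               FlushInstructionCache(GetCurrentProcess(), p, bytes);
    }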
The code segment is (supposed to be) static (does not change) so there is no reason not to use it for several instances.
Just to start at a basic level: segmentation is just a way to implement memory isolation and partitioning. Paging is another way to achieve this. For the most part, anything you can achieve via segmentation, you can achieve via paging. As such, most modern operating systems on x86 forgo using segmentation at all, relying completely on the paging facilities instead.
Because of this, all processes will usually be running under the trivial segment of (Base = 0, Limit = 4GB, Privilege level = 3), which means the code/data segment registers play no real part in determining the physical address, and are just used to set the privilege level of the process. All processes will usually be run at the same privilege, so they should all have the same value in the segment register.
Edit
Maybe I misinterpreted the question. I thought the question author was asking why both processes have the same value in the code segment register.
I need to store large amounts of data on disk in blocks of approximately 1 KB. I will be accessing these objects in a way that is hard to predict, but where patterns probably exist.
Is there an algorithm or heuristic I can use that will rearrange the objects on disk based on my access patterns to try to maximize sequential access, and thus minimize disk seek time?
On modern OSes (Windows, Linux, etc) there is absolutely nothing you can do to optimise seek times! Here's why:
You are in a pre-emptive multitasking system. Your application and all its data can be flushed to disk at any time - the user switches task, the screen saver kicks in, the battery runs out of charge, etc.
You cannot guarantee that the file is contiguous on disk. Doing Aaron's first bullet point will not ensure an unfragmented file. When you start writing the file, the OS doesn't know how big the file is going to be so it could put it in a small space, fragmenting it as you write more data to it.
Memory-mapping the file only works as long as the file size is less than the available address range in your application. On Win32, the amount of address space available is about 2 GB minus the memory used by the application. Mapping larger files usually involves un-mapping and re-mapping portions of the file, which won't be the best of things to do.
Putting data in the centre of the file is no help as, for all you know, the central portion of the file could be the most fragmented bit.
To paraphrase Raymond Chen, if you have to ask about OS limits, you're probably doing something wrong. Treat your filesystem as an immutable black box, it just is what it is (I know, you can use RAID and so on to help).
The first step you must take (and must be taken whenever you're optimising) is to measure what you've currently got. Never assume anything. Verify everything with hard data.
From your post, it sounds like you haven't actually written any code yet, or, if you have, there is no performance problem at the moment.
The only real solution is to look at the bigger picture and develop methods to get data off the disk without stalling the application. This would usually be through asynchronous access and speculative loading. If your application is always accessing the disk and doing work with small subsets of the data, you may want to consider reorganising the data to put all the useful stuff in one place and the other data elsewhere. Without knowing the full problem domain it's not possible to be really helpful.
Depending on what you mean by "hard to predict", I can think of a few options:
If you always seek based on the same block field/property, store the records on disk sorted by that field. This lets you use binary search for O(log n) efficiency (see the sketch after this list).
If you seek on different block fields, consider storing an external index for each field. A b-tree gives you O(log n) efficiency. When you seek, grab the appropriate index, search it for your block's data file address and jump to it.
Better yet, if your blocks are homogeneous, consider breaking them down into database records. A database gives you optimized storage, indexing, and the ability to perform advanced queries for free.
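A sketch of the first option, assuming fixed-size records sorted by a 64-bit key (the record layout is made up; POSIX pread is used here, but ReadFile with an offset in an OVERLAPPED structure would do the same on Windows):

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdint>

    struct Record {                 // illustrative 1 KB record
        uint64_t key;
        char     payload[1016];
    };

    // Binary search over records kept sorted by key: O(log n) disk reads.
    // Returns the file offset of the matching record, or -1 if absent.
    long long FindRecord(int fd, uint64_t key, uint64_t recordCount) {
        Record rec;
        uint64_t lo = 0, hi = recordCount;
        while (lo < hi) {
            uint64_t mid = lo + (hi - lo) / 2;
            off_t pos = static_cast<off_t>(mid * sizeof rec);
            if (pread(fd, &rec, sizeof rec, pos) != static_cast<ssize_t>(sizeof rec))
                return -1;          // short read or I/O error
            if (rec.key == key) return static_cast<long long>(pos);
            if (rec.key < key) lo = mid + 1; else hi = mid;
        }
        return -1;
    }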
Use memory-mapped file access rather than the usual open-seek-read/write pattern. This technique works on Windows and Unix platforms.
In this way the operating system's virtual memory system will handle the caching for you. Accesses of blocks that are already in memory will result in no disk seek or read time. Writes from memory back to disk are handled automatically and efficiently and without blocking your application.
Aaron's notes are good too as they will affect initial-load time for a chunk that's not in memory. Combine that with the memory-mapped technique -- after all it's easier to reorder chunks using memcpy() than by reading/writing from disk and attempting swapouts etc.
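A minimal sketch of the memory-mapped approach on Windows (the struct name and the ~1 KB block size are illustrative; error cleanup is trimmed):

    #include <windows.h>
    #include <cstdint>

    struct MappedBlocks {
        HANDLE file = INVALID_HANDLE_VALUE;
        HANDLE mapping = nullptr;
        const uint8_t* base = nullptr;

        bool open(const wchar_t* path) {
            file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
            if (file == INVALID_HANDLE_VALUE) return false;
            mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
            if (!mapping) return false;
            base = static_cast<const uint8_t*>(
                MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));   // map whole file
            return base != nullptr;
        }

        // A ~1 KB block is just a pointer into the view; the OS pages it in on
        // first touch and keeps hot blocks cached.
        const uint8_t* block(uint64_t index, size_t blockSize = 1024) const {
            return base + index * blockSize;
        }

        void close() {
            if (base) UnmapViewOfFile(base);
            if (mapping) CloseHandle(mapping);
            if (file != INVALID_HANDLE_VALUE) CloseHandle(file);
        }
    };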
The simplest way to solve this is to use an OS which solves it for you under the hood, like Linux. Give it enough RAM to hold 10% of the objects and it will try to keep as many of them in the cache as possible, reducing the load time to zero. The recent server versions of Windows might work too (some of them didn't for me, which is why I'm mentioning this).
If this is a no go, try this algorithm:
Create a very big file on the hard disk. It is very important that you write this in one go so the OS will allocate contiguous space on disk.
Write all your objects into that file. Make sure that each object is the same size (or give each the same space in the file and note the length in the first few bytes of each chunk). Use an empty hard disk, or a disk which has just been defragmented.
In a data structure, keep the offsets of each data chunk and how often it is accessed. When it is accessed very often, swap its position in the file with a chunk that is closer to the start of the file and which has a lesser access count.
[EDIT] Access this file with the memory-mapped API of your OS to allow the OS to effectively cache the most used parts to get best performance until you can optimize the file layout next time.
Over time, heavily accessed chunks will bubble to the top. Note that you can collect the access patterns over some time, analyze them and do the reorder over night when there is little load on your machine. Or you can do the reorder on a completely different machine and swap the file (and the offset table) when that's done.
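A sketch of the bookkeeping that algorithm needs (the disk I/O is abstracted behind a caller-supplied swap function; the names are illustrative):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Offset table plus per-chunk access counters; all chunks are the same size.
    struct ChunkTable {
        std::vector<uint64_t> offset;   // offset[i] = where chunk i currently lives
        std::vector<uint64_t> hits;     // hits[i]   = how often chunk i was accessed

        void recordAccess(size_t i) { ++hits[i]; }

        // During the reorder pass: move a hot chunk toward the start of the file
        // by swapping it with a colder chunk that currently sits earlier.
        // swapOnDisk(a, b) stands in for exchanging the two chunks' bytes on disk.
        template <class SwapFn>
        void promote(size_t i, SwapFn swapOnDisk) {
            for (size_t j = 0; j < offset.size(); ++j) {
                if (offset[j] < offset[i] && hits[j] < hits[i]) {
                    swapOnDisk(offset[i], offset[j]);
                    std::swap(offset[i], offset[j]);
                    break;
                }
            }
        }
    };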
That said, you should really rely on a modern OS where a lot of clever people have thought long and hard to solve these issues for you.
That's an interesting challenge. Unfortunately, I don't know how to solve this out of the box, either. Corbin's approach sounds reasonable to me.
Here's a little optimization suggestion, at least: place the most-accessed items at the center of your disk (or unfragmented file), not at the start or end. That way, seeking to the lesser-used data will be closer on average. Err, that's pretty obvious, though.
Please let us know if you figure out a solution yourself.
I am using VB6 and the Win32 API to write data to a file. This functionality is for exporting data, so write performance to the disk is the key factor in my considerations. As such, I am using the FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH options when opening the file with a call to CreateFile.
FILE_FLAG_NO_BUFFERING requires that I use my own buffer and write data to the file in multiples of the disk's sector size. This is generally no problem, apart from the last part of the data: if it is not an exact multiple of the sector size, the file ends up padded with zero bytes. How do I set the file size, once the last block is written, so that it does not include these zero bytes?
I can use SetEndOfFile; however, this requires me to close the file and re-open it without FILE_FLAG_NO_BUFFERING. I have seen someone mention NtSetInformationFile, but I cannot find how to declare and use it in VB6. SetFileInformationByHandle can do exactly what I want, but it is only available on Windows Vista and later, and my application needs to be compatible with previous versions of Windows.
I believe SetEndOfFile is the only way.
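For what it's worth, a sketch of that step in C++ rather than the asker's VB6: reopen without FILE_FLAG_NO_BUFFERING after the final padded sector has been written, then trim the file to its real length:

    #include <windows.h>

    // Truncate the exported file to its real length, removing the zero padding
    // that the sector-aligned, unbuffered write appended to the last block.
    bool TrimTo(const wchar_t* path, LONGLONG realLength) {
        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;

        LARGE_INTEGER pos;
        pos.QuadPart = realLength;
        bool ok = SetFilePointerEx(h, pos, nullptr, FILE_BEGIN) && SetEndOfFile(h);
        CloseHandle(h);
        return ok;
    }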
And I agree with Mike G. that you should bench your code with and without FILE_FLAG_NO_BUFFERING. Windows file buffering on modern OS's is pretty darn effective.
I'm not sure, but are YOU sure that setting FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH give you maximum performance?
They'll certainly result in your data hitting the disk as soon as possible, but that sort of thing doesn't actually help performance - it just helps reliability for things like journal files that you want to be as complete as possible in the event of a crash.
For a data export routine like you describe, allowing the operating system to buffer your data will probably result in BETTER performance, since the writes will be scheduled in line with other disk activity, rather than forcing the disk to jump back to your file every write.
Why don't you benchmark your code without those options? Leave in the zero-byte padding logic to make it a fair test.
If it turns out that skipping those options is faster, then you can remove the 0-padding logic, and your file size issue fixes itself.
For a one-gigabyte file, Windows buffering will indeed probably be faster, especially if you are doing many small I/Os. If you're dealing with files which are much larger than available RAM, and doing large-block I/O, the flags you were setting WILL produce much better throughput (up to three times faster for heavily threaded and/or random large-block I/O).