boost managed shared memory construction ends up with "bus error" - c++11

I have code that creates a large segment (about 2 GB) in managed shared memory using C++ Boost. If the machine doesn't have enough memory, the program gets a bus error.
The error actually happens when I try to write to the shared memory using the construct function; creating the segment itself doesn't produce any error. I've already checked the segment's size and free size, and both report values as if there were enough memory to allocate (get_size returns 2000000000!), even when the machine has less than that!
I know the OS makes the program think there is enough memory, but I have to run the code on different machines and it must work on all of them. It MUST NOT crash when there isn't enough memory; a proper exception should be thrown in that case instead, whether or not enough memory actually exists. So there must be a way to find this out programmatically.
So, I was wondering: is there any way to check whether the requested memory actually exists, USING BOOST?
Here is what I want (or at least have in mind!):

// Suppose we create a 2 GB shared memory segment on a machine that only has 1 GB of RAM.
#include <boost/interprocess/managed_shared_memory.hpp>
using namespace boost::interprocess;

managed_shared_memory segment(open_or_create, "name", 2000000000);
if (real_allocated_memory < actual_need) // <-- this is the check I'm looking for
    throw std::overflow_error("Not enough memory");
segment.find_or_construct<HugeObject>("huge")(); // here is where I receive the bus error

Related

metadata of heap block corrupted, user accessible part not. Is it a overrun?

I got memory corruption on the heap when running my 32-bit app on Windows Server 2008, 64-bit. When I checked the corrupted heap block, I found that the metadata of the heap block was not corrupted, but the user-accessible part was (at least the first 4 bytes were corrupted, according to my analysis).
You know, there are a lot of possibilities that can lead to heap corruption: memory overrun/underrun, use of a wild pointer, mismatched heap handles, use of uninitialized memory, etc.
But since the metadata and the first 4 bytes of the user-accessible part are adjacent, I think the possibility of a memory overrun/underrun is very low: if it were an overrun or underrun, the metadata would very likely be corrupted as well.
I am not sure whether my understanding is correct. Can anyone give me a hint here?

Benefits of reserving vs. committing+reserving memory using VirtualAlloc on large arrays

I am writing a C++ program that essentially works with very large arrays. On Windows, I am using VirtualAlloc to allocate memory for my arrays. Now, I fully understand the difference between reserving and committing memory using VirtualAlloc; however, I am wondering whether there is any benefit to committing memory page-by-page to a reserved region. In particular, MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx) contains the following explanation for the MEM_COMMIT option:
Actual physical pages are not allocated unless/until the virtual addresses are actually accessed.
My experiments confirm this: I can reserve and commit several GB of memory without increasing the memory usage of my process (as shown in Task Manager); actual memory gets allocated only when I actually access it.
Now I have seen quite a few examples arguing that one should reserve a large portion of the address space and then commit memory page-by-page (or in some larger blocks, depending on the app's logic). As explained above, however, memory does not seem to be committed before one accesses it; thus, I'm wondering whether there is any real benefit in committing memory page-by-page. In fact, committing memory page-by-page might actually slow my program down due to the many system calls needed to commit memory. If I commit the entire region at once, I pay for just one system call, and the kernel seems smart enough to allocate physical pages only for the memory I actually use.
I would appreciate it if someone could explain to me which strategy is better.
The difference is that commit "backs" the memory against the page file. To give an example:
Given 2GB of physical RAM and 2GB of swap (assume a fixed-size swap file for this purpose):
Reserve 6GB - OK.
Commit first 2GB - OK.
Commit remaining 4GB - fails.
Extend swap file to 8GB.
Commit remaining 4GB - succeeds.
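A minimal sketch of that sequence in code (the sizes are illustrative, and a 64-bit build is assumed so the 6GB reservation fits in the address space):

#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T reserveSize = 6ULL * 1024 * 1024 * 1024; // 6GB of address space
    const SIZE_T commitSize  = 2ULL * 1024 * 1024 * 1024; // commit the first 2GB

    // MEM_RESERVE takes address space only: no physical pages, no pagefile charge yet.
    void* base = VirtualAlloc(nullptr, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (base == nullptr) {
        std::printf("reserve failed: %lu\n", GetLastError());
        return 1;
    }

    // MEM_COMMIT charges the pagefile; this is the step that can fail
    // (ERROR_COMMITMENT_LIMIT) once RAM + swap is exhausted.
    if (VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE) == nullptr) {
        std::printf("commit failed: %lu\n", GetLastError());
    }

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}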
The reason for using MEM_COMMIT is primarily runtime error suppression (app stability). If you have a process that commits pages on demand, there's always a chance that a commit along the way will fail if it exceeds the amount of memory + swap available. Once memory has been backed by the page file, you have a strong guarantee that it is available for use from then until the point you release it.
There are a number of reasons to go one way or the other, and I don't think there's any perfect science to deciding which. MEM_RESERVE alone is only needed for very large sparse-array scenarios, e.g. a multi-gigabyte array with at most 25-33% utilization (a popular technique for accelerating hash tables, etc.).
Almost everything else is a gray area where you could probably go either way. MEM_COMMIT up front makes your own app a little more stable and essentially gives it priority for physical RAM over competing apps that allocate on demand (if you grab the RAM first, your app will be the last one left standing when physical memory is exhausted). At the same time, if you're not actually using all that RAM, you may end up limiting the multitasking potential of your client's machine, or causing unnecessary wasted disk space via a growing page file.

Win32/MFC: How to find free memory (RAM) available?

Any suggestions/hints/links/tutorials would be appreciated! :)
There really is no answer to this. Under normal circumstances, the OS keeps something in essentially all of the memory on the system. Basically, once it has read something into memory, it keeps a copy there until something else needs the memory, at which point the first thing gets kicked out. There are a number of functions that can get you information about memory, but none of them even attempts to return an amount of memory that's completely unused. The closest I'm aware of is GlobalMemoryStatusEx, which does return a number for the amount of memory that's "available".
That means whatever is currently in that memory is both in memory and on disk, so the copy in memory can be thrown away without having to write it to disk first. For example, if you run a program, most of its code will stay in memory (until something else wants that memory), in case you decide to run it again. Since it's just a copy of the program on disk, it can be thrown away, and (if necessary) reloaded from disk when needed.
If you want more detail, you can use things like VirtualQueryEx to get it -- but it'll usually overload you with information, telling you about each block of memory used in a given process, instead of giving a nice, simple number saying "x bytes free".
GlobalMemoryStatus/GlobalMemoryStatusEx
http://msdn.microsoft.com/en-us/library/aa366586(VS.85).aspx
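For reference, a minimal sketch of calling GlobalMemoryStatusEx, the function both answers above point at:

#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status); // must be set before the call
    if (GlobalMemoryStatusEx(&status)) {
        std::printf("available physical memory: %llu MB\n",
                    status.ullAvailPhys / (1024 * 1024));
        std::printf("memory load: %lu%%\n", status.dwMemoryLoad);
    }
    return 0;
}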
That's pretty easy to answer: free RAM is always sufficiently close to 0 to consider it zero and not bother. Unused RAM is always used by the file system cache; you can see this in Taskmgr.exe, Performance tab.
If you actually mean "free virtual memory", the only number you'd really care about, then the answer is "not really possible". You'd need HeapWalk(), a very awkward and dangerous function to use, and only HeapWalk can detect blocks in the heap that are marked free but are still mapped. The number you'd arrive at is meaningless anyway: a program never runs out of free virtual memory blocks, it always runs out of large-enough blocks first.
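For illustration only, a minimal sketch of such a walk over the default process heap (with the caveats above: it covers just one heap, and the resulting number is of questionable value):

#include <windows.h>
#include <cstdio>

int main() {
    HANDLE heap = GetProcessHeap();
    PROCESS_HEAP_ENTRY entry = {}; // lpData == NULL starts the walk at the beginning
    unsigned long long freeBytes = 0;

    HeapLock(heap); // the walk is only safe while the heap is locked
    while (HeapWalk(heap, &entry)) {
        if (!(entry.wFlags & PROCESS_HEAP_ENTRY_BUSY))
            freeBytes += entry.cbData; // a free-but-still-mapped block
    }
    HeapUnlock(heap);

    std::printf("free bytes inside the default heap: %llu\n", freeBytes);
    return 0;
}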
Detecting this condition is easy enough: malloc returns NULL, and the new operator throws std::bad_alloc. Dealing with the condition is not easy. Solving it takes less than two hundred bucks, roughly the license fee for a 64-bit version of Windows.
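A minimal sketch of that detection (the oversized requests are deliberate so both failure paths fire):

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

int main() {
    // malloc reports failure by returning NULL...
    void* p = std::malloc(SIZE_MAX / 2);
    if (p == nullptr)
        std::puts("malloc returned NULL");

    // ...while operator new reports it by throwing std::bad_alloc.
    try {
        char* q = new char[SIZE_MAX / 2];
        delete[] q;
    } catch (const std::bad_alloc&) {
        std::puts("new threw std::bad_alloc");
    }

    std::free(p);
    return 0;
}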

Memory mapped files causes low physical memory

I have a machine with 2GB of RAM running a memory-intensive application. The system gets into a low-available-physical-memory state and stops responding to user actions, like opening an application or invoking a menu.
How do I trigger or tell the system to swap memory out to the pagefile and free up physical memory?
I'm using Windows XP.
If I run the same application on a machine with 4GB of RAM, this doesn't happen; system response stays good. Once available physical memory gets low, the system automatically swaps to the pagefile and frees physical memory, and it is not nearly as bad as on the 2GB system.
To overcome this problem (on the 2GB machine), I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high, and the same problem as above occurs: physical memory runs low.
Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why??? :(
Any help is appreciated.
Thanks.
If your data access pattern for using the memory mapped file is sequential, you might get slightly better page recycling by specifying the FILE_FLAG_SEQUENTIAL_SCAN flag when opening the underlying file. If your data pattern accesses the mapped file in random order, this won't help.
You should consider decreasing the size of your map view. That's where all the memory is actually consumed and cached. Since it appears that you need to handle files that are larger than available contiguous free physical memory, you can probably do a better job of memory management than the virtual memory page swapper since you know more about how you're using the memory than the virtual memory manager does. If at all possible, try to adjust your design so that you can operate on portions of the large file using a smaller view.
Even if you can't get rid of the need for full random access across the entire range of the underlying file, it might still be beneficial to tear down and recreate the view as needed to move the view to the section of the file that the next operation needs to access. If your data access patterns tend to cluster around parts of the file before moving on, then you won't need to move the view as often. You'll take a hit to tear down and recreate the view object, but since tearing down the view also releases all the cached pages associated with the view, it seems likely you'd see a net gain in performance because the smaller view significantly reduces memory pressure and page swapping system wide. Try setting the size of the view based on a portion of the installed system RAM and move the view around as needed by your file processing. The larger the view, the less you'll need to move it around, but the more RAM it will consume potentially impacting system responsiveness.
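As a concrete illustration of the sliding-view idea (and of the FILE_FLAG_SEQUENTIAL_SCAN hint from the previous answer), here is a minimal sketch; the file name and the 64MB window size are placeholders:

#include <windows.h>

int main() {
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING,
                              FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (mapping == nullptr) {
        CloseHandle(file);
        return 1;
    }

    LARGE_INTEGER fileSize;
    GetFileSizeEx(file, &fileSize);

    // A 64MB window; view offsets must be multiples of the 64KB
    // allocation granularity, which multiples of viewSize are.
    const ULONGLONG viewSize = 64ULL * 1024 * 1024;

    for (ULONGLONG offset = 0; offset < (ULONGLONG)fileSize.QuadPart; offset += viewSize) {
        ULONGLONG remaining = (ULONGLONG)fileSize.QuadPart - offset;
        SIZE_T bytes = (SIZE_T)(remaining < viewSize ? remaining : viewSize);

        void* view = MapViewOfFile(mapping, FILE_MAP_READ,
                                   (DWORD)(offset >> 32),
                                   (DWORD)(offset & 0xFFFFFFFF),
                                   bytes);
        if (view == nullptr)
            break;

        // ... process this window of the file ...

        // Unmapping releases the cached pages for this window, which is
        // what keeps system-wide memory pressure down.
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}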
As I think you are hinting in your post, the slow response time is probably at least partially due to delays in the system while the OS writes the contents of memory to the pagefile to make room for other processes in physical memory.
The obvious solution (and possibly not practical) is to use less memory in your application. I'll assume that is not an option or at least not a simple option. The alternative is to try to proactively flush data to disk to continually keep available physical memory for other applications to run. You can find the total memory on the machine with GlobalMemoryStatusEx. And GetProcessMemoryInfo will return current information about your own application's memory usage. Since you say you are using a memory mapped file, you may need to account for that in addition. For example, I believe the PageFileUsage information returned from that API will not include information about your own memory mapped file.
If your application is monitoring the usage, you may be able to use FlushViewOfFile to proactively force data to disk from memory. There is also an API (EmptyWorkingSet) that I think attempts to write as many dirty pages to disk as possible, but that seems like it would very likely hurt performance of your own application significantly. Although, it could be useful in a situation where you know your application is going into some kind of idle state.
And, finally, one other API that might be useful is SetProcessWorkingSetSizeEx. You might consider using this API to give a hint on an upper limit for your application's working set size. This might help preserve more memory for other applications.
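A minimal sketch of the monitoring idea, assuming SetProcessWorkingSetSizeEx is available on the target OS; the helper name, threshold, and limits are made up for illustration:

#include <windows.h>
#include <psapi.h> // GetProcessMemoryInfo; link against psapi.lib

// If this process's working set has grown past thresholdBytes, flush the
// given mapped view to disk and hint a working-set ceiling.
void TrimIfNeeded(void* mappedView, SIZE_T viewBytes, SIZE_T thresholdBytes) {
    PROCESS_MEMORY_COUNTERS pmc = {};
    if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        return;

    if (pmc.WorkingSetSize > thresholdBytes) {
        // Write the view's dirty pages to disk so they can later be
        // discarded cheaply instead of being paged out.
        FlushViewOfFile(mappedView, viewBytes);

        // With these flags the limits are soft hints, not hard caps.
        SetProcessWorkingSetSizeEx(GetCurrentProcess(),
                                   16 * 1024 * 1024, // minimum: 16MB
                                   thresholdBytes,   // maximum: the threshold
                                   QUOTA_LIMITS_HARDWS_MIN_DISABLE |
                                   QUOTA_LIMITS_HARDWS_MAX_DISABLE);
    }
}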
Edit: This is another obvious statement, but I forgot to mention it earlier. It also may not be practical for you, but it sounds like one of the best things you might do considering that you are running into 32-bit limitations is to build your application as 64-bit and run it on a 64-bit OS (and throw a little bit more memory at the machine).
Well, it sounds like your program needs more than 2GB of working set.
Modern operating systems are designed to use most of the RAM for something at all times, only keeping a fairly small amount free so that it can be immediately handed out to processes that need more. The rest is used to hold memory pages and cached disk blocks that have been used recently; whatever hasn't been used recently is flushed back to disk to replenish the pool of free pages. In short, there isn't supposed to be much free physical memory.
The principal difference between using normal memory allocation and memory-mapped files is where the data gets stored when it must be paged out of memory. It doesn't necessarily have any effect on when the memory will be paged out, and has little effect on the time it takes to page it out.
The real problem you are seeing is probably not that you have too little free physical memory, but that the paging rate is too high.
My suggestion would be to attempt to reduce the amount of storage needed by your program, and see if you can increase the locality of reference to reduce the amount of paging needed.

How could VirtualAlloc fail (no mem) despite plenty of phys memory on WinMobile?

I am routinely seeing VirtualAlloc calls to reserve memory fail. I'm requesting 2MB so that the allocations do not count against my per-process virtual memory and instead use system shared memory. At the time of failure, the system reports having over 100MB of physical memory available.
I'm running on a Windows Mobile 6.1 device. So far this is a device-specific problem: it works on many identical devices and fails on one. I would like to determine whether other processes on this device are reserving shared memory and thereby preventing me from doing so, but I'm not sure how to do that.
This is the doc I am relying on and I see nothing that would explain this problem:
http://msdn.microsoft.com/en-us/library/aa908768.aspx
Any ideas? Thanks.
I'm tempted to say that VirtualAlloc has run out of (contiguous) virtual address space, at least as far as your process is concerned.
I'd first try to establish which memory slot those previously successful VirtualAlloc chunks got mapped to, and based on that see whom I am fighting with for address space. You should be able to do this either programmatically or by using a tool from William J. Blanke (or other similar tools).
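A minimal sketch of the programmatic route (the helper name is made up; it just logs where each reservation lands so you can see which slot or shared region it maps into):

#include <windows.h>
#include <stdio.h>

void* ReserveAndLog(SIZE_T size) {
    void* p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (p != NULL)
        printf("reserved %u bytes at %p\n", (unsigned)size, p);
    else
        printf("reserve of %u bytes failed, GetLastError() = %lu\n",
               (unsigned)size, GetLastError());
    return p;
}

// usage, matching the question's 2MB requests:
// ReserveAndLog(2 * 1024 * 1024);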
