Although it won't happen often, there are a couple of cases where my Cocoa application will allocate very large amounts of memory, enough to make me worry about malloc failing. What is the best way to handle this sort of failure in a Cocoa application? I've heard that exceptions are generally discouraged in this development environment, but is this a case where they would be useful?
If you have an allocation fail because you are out of memory, more likely than not there has already been an allocation failure in some framework somewhere that has left the app in an indeterminate state.
Even if that isn't the case, you can't do anything that'll allocate memory, and that leaves you with very few options.
Even freeing memory in an attempt to "fix" the problem isn't going to consistently work, not even to "fix" it by showing a nice error message and exiting cleanly.
You also don't want to try and save data from this state. Or, at least, not without writing all the code necessary to deal with corrupt data on read (because it is quite possible that a failed allocation meant some code somewhere corrupted memory).
Treat allocation failures as fatal: log and exit.
It is extremely uncommon for a correctly written application to run out of memory. More likely, when an app runs out of memory, the user's system is going to be paging like hell and, thus, performance will have degraded significantly long before the allocation failure.
Your return on investment for focusing on optimizing and reducing memory use will be orders of magnitude greater than trying to recover from an allocation failure.
(Alan's original answer was accurate, as was his edit.)
If you're running into memory allocation errors, you shouldn't try to handle them, and instead rethink how your application uses memory.
I'm not sure what the Cocoa idioms are, but for C++ and C# at least, out-of-memory exceptions are a sign of larger problems and are best left to the user/OS to deal with.
Say your memory allocation fails; what else can your system do? How much memory is left? Is it enough to show a dialog/print a message before shutting down? Will throwing an exception succeed? Will cleaning up resources cause cascading memory exceptions?
If malloc fails, you will get NULL back, so if that's the case, can your application continue without the memory? If not, treat the condition as a fatal error and exit with a helpful message to the user.
If you run out of memory there is usually not much you can do short of terminating your app. Even showing a notification could fail because there is not enough memory for that.
The standard in C applications is to write a void *xmalloc(size_t size); function that checks the return value of malloc and, if it is NULL, prints an error to stderr and then calls abort(). That way you just use xmalloc throughout your code and don't think about it. If you run out of memory, bad luck, and your app will die.
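A minimal sketch of such a wrapper (the name xmalloc is just the conventional one; any real version would live in your own utility code):

#include <stdio.h>
#include <stdlib.h>

/* Allocate memory or die trying: never returns NULL. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "fatal: out of memory (requested %zu bytes)\n", size);
        abort();
    }
    return p;
}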
As per the documentation (https://www.linuxjournal.com/article/6930):
Flag           Description
__GFP_REPEAT   The kernel repeats the allocation if it fails.
__GFP_NOFAIL   The kernel can repeat the allocation.
So, both of them may cause the kernel to repeat the allocation operation.
How can I choose between them?
What are the major differences?
That isn't really "documentation", but just an article on LinuxJournal. Granted, the author (Robert Love) is surely knowledgeable on the subject, but nonetheless those descriptions are quite imprecise and outdated (the article is from 2003).
The __GFP_REPEAT flag was renamed to __GFP_RETRY_MAYFAIL in kernel version 4.13 (see the relevant patchwork) and its semantics were also modified.
The original meaning of __GFP_REPEAT was (from include/linux/gfp.h kernel v4.12):
__GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
_might_ fail. This depends upon the particular VM implementation.
The name and semantics of this flag were somewhat unclear, and the new __GFP_RETRY_MAYFAIL flag has a much clearer name and description (from include/linux/gfp.h kernel v5.7.2):
%__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
procedures that have previously failed if there is some indication
that progress has been made else where. It can wait for other
tasks to attempt high level approaches to freeing memory such as
compaction (which removes fragmentation) and page-out.
There is still a definite limit to the number of retries, but it is
a larger limit than with %__GFP_NORETRY.
Allocations with this flag may fail, but only when there is
genuinely little unused memory. While these allocations do not
directly trigger the OOM killer, their failure indicates that
the system is likely to need to use the OOM killer soon. The
caller must handle failure, but can reasonably do so by failing
a higher-level request, or completing it only in a much less
efficient manner.
If the allocation does fail, and the caller is in a position to
free some non-essential memory, doing so could benefit the system
as a whole.
As for __GFP_NOFAIL, you can find a detailed description in the same file:
%__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
cannot handle allocation failures. The allocation could block
indefinitely but will never return with failure. Testing for
failure is pointless.
New users should be evaluated carefully (and the flag should be
used only when there is no reasonable failure policy) but it is
definitely preferable to use the flag rather than opencode endless
loop around allocator.
Using this flag for costly allocations is _highly_ discouraged.
In short, the difference between __GFP_RETRY_MAYFAIL and __GFP_NOFAIL is that the former will retry allocating memory only a finite number of times before eventually reporting failure, while the latter will keep trying indefinitely until memory is available and will never report failure to the caller, because it assumes that the caller cannot handle allocation failure.
Needless to say, the __GFP_NOFAIL flag must be used with care, and only in scenarios in which no other option is feasible. It is useful in that it avoids explicitly calling the allocator in a loop until a request succeeds (e.g. while (!kmalloc(...));), and thus it's more efficient.
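For illustration, this is roughly how the two flags would look at an allocation site; a hedged sketch in which both helper functions and the sizes are made up for the example:

#include <linux/slab.h>
#include <linux/gfp.h>

/* Hypothetical helper: allocate a 4 KiB work buffer, tolerating failure.
 * __GFP_RETRY_MAYFAIL retries a bounded number of times and may return NULL
 * when memory is genuinely scarce, so the caller must check the result. */
static void *alloc_work_buffer(void)
{
        return kmalloc(4096, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
}

/* Hypothetical helper: allocate a small control block the caller cannot
 * survive without. __GFP_NOFAIL retries forever and never returns NULL,
 * so testing the result for failure is pointless. */
static void *alloc_ctl_block(void)
{
        return kmalloc(64, GFP_KERNEL | __GFP_NOFAIL);
}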
V8 developer is needed.
I've noticed that the following code leaks mapped memory (mmap, munmap); specifically, the number of mapped regions shown in cat /proc/<pid>/maps grows continuously and hits the system limit (/proc/sys/vm/max_map_count) pretty quickly.
#include <libplatform/libplatform.h>
#include <v8.h>
#include <memory>

void f() {
  auto platform = v8::platform::CreateDefaultPlatform();
  v8::Isolate::CreateParams create_params;
  create_params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  v8::V8::InitializePlatform(platform);
  v8::V8::Initialize();
  for (;;) {
    // Each iteration creates an isolate and immediately disposes it when the
    // shared_ptr goes out of scope, yet the number of mapped regions grows.
    std::shared_ptr<v8::Isolate> isolate(
        v8::Isolate::New(create_params),
        [](v8::Isolate* i) { i->Dispose(); });
  }
  v8::V8::Dispose();
  v8::V8::ShutdownPlatform();
  delete platform;
  delete create_params.array_buffer_allocator;
}
I've played a little bit with the platform-linux.cc file and found that the UncommitRegion call just remaps the region with PROT_NONE but does not release it. Probably that's somehow related to the problem.
There are several reasons why we recreate isolates during program execution.
The first is that creating a new isolate and discarding the old one is more predictable in terms of GC. Basically, I found that doing
auto remoteOldIsolate = std::async(
    std::launch::async,
    [](decltype(this->_isolate) isolateToRemove) { isolateToRemove->Dispose(); },
    this->_isolate
);
this->_isolate = v8::Isolate::New(cce::Isolate::_createParams);
is more predictable and faster than a call to LowMemoryNotification. So we monitor memory consumption using GetHeapStatistics and recreate the isolate when it hits the limit. It turns out we cannot treat GC activity as part of code execution; that leads to a bad user experience.
The second reason is that having an isolate per code allows us to run several codes in parallel; otherwise v8::Locker will block the second code for that particular isolate.
It looks like at this stage I have no choice and will rewrite the application to have a pool of isolates and a persistent context per code. Of course, this way code#1 may affect code#2 by doing many allocations, and GC will run on code#2 even though it makes no allocations at all, but at least it will not leak.
PS. I've mentioned that we use GetHeapStatistics for memory monitoring. I want to clarify that part a little bit.
In our case it's a big problem when GC runs during code execution. Each code has an execution timeout (100-500 ms). GC activity during code execution blocks the code, and sometimes we get timeouts on just an assignment operation. GC callbacks don't give you enough accuracy, so we cannot rely on them.
What we actually do is specify --max-old-space-size=32000 (32 GB). That way GC doesn't want to run, because it sees that plenty of memory is available. And using GetHeapStatistics (along with the isolate recreation I mentioned above) we do manual memory monitoring.
PPS. I also mentioned that sharing an isolate between codes may affect users.
Say you have user#1 and user#2. Each of them has their own code, and the two are unrelated. code#1 has a loop with tremendous memory allocation; code#2 is just an assignment operation. Chances are GC will run during code#2 and user#2 will receive a timeout.
V8 developer is needed.
Please file a bug at crbug.com/v8/new. Note that this issue will probably be considered low priority; we generally assume that the number of Isolates per process remains reasonably small (i.e., not thousands or millions).
have a pool of isolates
Yes, that's probably the way to go. In particular, as you already wrote, you will need one Isolate per thread if you want to execute scripts in parallel.
this way code#1 may affect code#2 by doing many allocations and GC will run on code2 with no allocations at all
No, that can't happen. Only allocations trigger GC activity. Allocation-free code will spend zero time doing GC. Also (as we discussed before in your earlier question), GC activity is split into many tiny (typically sub-millisecond) steps (which in turn are triggered by allocations), so in particular a short-running bit of code will not encounter some huge GC pause.
sometimes we have timeouts just for assignment operation
That sounds surprising, and doesn't sound GC-related; I would bet that something else is going on, but I don't have a guess as to what that might be. Do you have a repro?
we specify --max-old-space-size=32000 (32GB). That way GC don't want to run, cuz it should see that a lot of memory exists. And using GetHeapStatistics (along with isolate recreation I've mentioned above) we have manual memory monitoring.
Have you tried not doing any of that? V8's GC is very finely tuned by default, and I would assume that side-stepping it in this way is causing more problems than it solves. Of course you can experiment with whatever you like; but if the resulting behavior isn't what you were hoping for, then my first suggestion is to just let V8 do its thing, and only interfere if you find that the default behavior is somehow unsatisfactory.
code#1 has a loop with tremendous memory allocation, code#2 is just an assignment operation. Chances are GC will run during code#2 and user#2 will receive timeout.
Again: no. Code that doesn't allocate will not be interrupted by GC. And several functions in the same Isolate can never run in parallel; only one thread may be active in one Isolate at the same time.
My C program is running on a bare-metal Raspberry Pi 3B+. It works fine except that I get random freezes that are reported as a Prefetch Abort by the CPU itself. The device may work fine for hours, then suddenly crash. It's doing nothing special before it crashes, so it's not predictable.
The fault status register (FSR) is set to 0xD when this error happens, which indicates a permission error: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0087e/Cihhhged.html
Other registers: FAR is 0xE80000B6, LR is 0xFFFFFFFF, PC is 0xE80000B6, PSR is 0x200001F1.
My program uses FIQ and IRQ interrupts, and uses all four CPU cores.
I'm not asking for specific debugging help here since it would be too complicated to dive into the details, but are you aware of common causes of prefetch aborts?
Given that your code is multi-threaded (multi-core, indeed) and the crash is not predictable, I'd say that the prefetch abort is almost certainly being caused by memory corruption due to a race.
It might help if you can find out where the abort is being generated. Bugs like this can be extremely hard to track down though; if the code always crashes in the same place then that could help, but even if it does and you can find out which address is suffering corruption, monitoring that address for rogue writes without affecting the timing of the program (and hence the manifestation of the bug) is essentially impossible.
It is quite likely that the root cause is a buffer overrun, especially given your comments above. Really you should know in advance how big your buffers will need to be, and then make them that size. If whatever algorithm you're using can't guarantee a limit on the amount of buffer it uses, you should add code that performs a runtime check on the buffer and responds appropriately (perhaps a nicely reported error so you know which buffer is overflowing). Using the heap is ok but declaring a large buffer as static is faster and leak-free, providing the function containing the buffer is non-reentrant.
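As an illustration of such a runtime check, here is a minimal sketch; the buffer name, its size, and the push helper are all made up for the example:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define RX_BUF_SIZE 512                      /* hypothetical capacity */
static uint8_t rx_buf[RX_BUF_SIZE];          /* static: fast and leak-free */
static size_t rx_len;

/* Append one byte, refusing to overrun and reporting which buffer overflowed. */
int rx_buf_push(uint8_t byte)
{
    if (rx_len >= RX_BUF_SIZE) {
        printf("ERROR: rx_buf overflow (capacity %u)\n", (unsigned)RX_BUF_SIZE);
        return -1;                           /* caller decides how to recover */
    }
    rx_buf[rx_len++] = byte;
    return 0;
}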
If there are data access races in the mix too, note that you will need more than data barrier instructions to solve these. The data barrier instructions only address consistency problems related to pending memory transactions. They don't prevent register caching of shared data (you need the volatile keyword for that) or simultaneous read-modify-write races (you need mutual exclusion mechanisms for that, either as provided by whatever framework you're using or home-brewed using the STREX and LDREX instructions on armv7).
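As an illustration of the home-brewed mutual exclusion mentioned above, here is a minimal sketch using C11 atomics, which the compiler lowers to LDREX/STREX pairs on armv7; the lock and counter names are made up for the example:

#include <stdatomic.h>

static atomic_flag counter_lock = ATOMIC_FLAG_INIT;
static volatile unsigned int shared_counter; /* shared between cores */

void increment_shared_counter(void)
{
    /* Spin until the lock is acquired; the read-modify-write is performed
       with an LDREX/STREX pair on armv7. */
    while (atomic_flag_test_and_set_explicit(&counter_lock, memory_order_acquire))
        ;                                    /* busy-wait */

    shared_counter++;                        /* critical section */

    atomic_flag_clear_explicit(&counter_lock, memory_order_release);
}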
Any suggestions/hints/links/tutorials would be appreciated! :)
There really is no answer to this. Under normal circumstances, the OS will keep something in essentially all the memory on the system. Basically, once it has read something into memory, it keeps a copy of it in memory until something else needs memory, at which point the first thing gets kicked out. There are a number of functions that can give you information about memory, but none of them even attempts to return an amount of memory that's completely unused. The closest one I'm aware of is GlobalMemoryStatusEx, which does return a number for the amount of memory that's available.
That means whatever is currently in that memory is both in memory and on disk, so the copy in memory can be thrown away without having to write it to disk first. For example, if you ran a program, most of its code will stay in memory (until something else wants the memory) in case you decide to run it again. Since it's just a copy of the program on disk, it can be thrown away and, if necessary, reloaded from disk when needed.
If you want more detail, you can use things like VirtualQueryEx to get it -- but it'll usually overload you with information, telling you about each block of memory used in a given process, instead of giving a nice, simple number saying "x bytes free".
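If it helps, here is a minimal sketch of walking a process's address space with VirtualQueryEx and totalling the committed bytes (a rough illustration, with error handling omitted):

#include <windows.h>

/* Walk the target process's address space and total the committed bytes. */
SIZE_T total_committed(HANDLE process)
{
    MEMORY_BASIC_INFORMATION info;
    SIZE_T committed = 0;
    unsigned char *addr = NULL;

    while (VirtualQueryEx(process, addr, &info, sizeof(info)) != 0) {
        if (info.State == MEM_COMMIT)
            committed += info.RegionSize;
        addr = (unsigned char *)info.BaseAddress + info.RegionSize;
    }
    return committed;
}

For the current process you would call it as total_committed(GetCurrentProcess()).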
GlobalMemoryStatus/GlobalMemoryStatusEx
http://msdn.microsoft.com/en-us/library/aa366586(VS.85).aspx
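For reference, a minimal sketch of calling GlobalMemoryStatusEx from a console program (the fields used are part of the documented MEMORYSTATUSEX structure):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);        /* must be set before the call */

    if (GlobalMemoryStatusEx(&status)) {
        printf("Memory load:        %lu%%\n", status.dwMemoryLoad);
        printf("Total physical:     %llu MB\n", status.ullTotalPhys / (1024 * 1024));
        printf("Available physical: %llu MB\n", status.ullAvailPhys / (1024 * 1024));
        printf("Available virtual:  %llu MB\n", status.ullAvailVirtual / (1024 * 1024));
    }
    return 0;
}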
That's pretty easy to answer: free RAM is always sufficiently close to zero to consider it zero and not bother. Unused RAM is always put to work as the file system cache; you can see this in Taskmgr.exe on the Performance tab.
If you actually mean "free virtual memory", the number you'd really care about, then the answer is "not really possible". You'd need HeapWalk(), a very awkward and dangerous function to use. Only HeapWalk can detect blocks in the heap that are marked free but are still mapped. The number you'd arrive at is meaningless anyway. A program never runs out of free virtual memory blocks; it always runs out of large-enough blocks first.
Detecting this condition is easy enough: malloc returns NULL, and the new operator throws std::bad_alloc. Dealing with the condition is not easy. Solving it takes less than two hundred bucks, roughly the license fee for a 64-bit version of Windows.
Note: I am aware of the question Memory management in memory intensive application; however, that question appears to be about applications that make frequent memory allocations, whereas my question is about applications intentionally designed to consume as much physical memory as is safe.
I have a server application that uses large amounts of memory in order to perform caching and other optimisations (think SQL Server). The application runs on a dedicated machine, and so can (and should) consume as much memory as it wants / is able to in order to speed up and increase throughput and response times without worry of impacting other applications on the system.
The trouble is that if memory usage is underestimated, or if load increases, it's possible to end up with nasty failures as memory allocations fail. In this situation, obviously the best thing to do is to free up memory in order to prevent the failure, at the expense of performance.
Some assumptions:
The application is running on a dedicated machine
The memory requirements of the application exceed the physical memory on the machine (that is, if additional memory were available to the application, it would always be able to use that memory to improve response times or throughput in some way)
The memory is effectively managed in a way such that memory fragmentation is not an issue.
The application knows what memory can be safely freed, and what memory should be freed first for the least performance impact.
The app runs on a Windows machine
My question is - how should I handle memory allocations in such an application? In particular:
How can I predict whether or not a memory allocation will fail?
Should I leave a certain amount of memory free in order to ensure that core OS operations remain responsive (and don't adversely impact the application's performance in that way), and how can I find out how much memory that is?
The core objective is to prevent failures as a result of using too much memory, while at the same time using up as much memory as possible.
I'm a C# developer, however my hope is that the basic concepts for any such app are the same regardless of the language.
In Linux, memory usage (as a percentage) is roughly divided into the following levels.
0 - 30% - no swapping
30 - 60% - swap dirty pages only
60 - 90% - also swap clean pages, based on LRU policy.
90%+ - invoke the OOM (out-of-memory) killer and kill the process consuming the most memory.
Check this: http://linux-mm.org/OOM_Killer
I think Windows might have a similar policy, so you can check the memory stats and make sure you never reach the max threshold.
One way to stop consuming more memory is to go to sleep and give more time for memory cleanup threads to run.
That is a very good question, and bound to be subjective as well, because the fundamental nature of C# is that all memory management is done by the runtime, i.e. the Garbage Collector. The Garbage Collector is a non-deterministic entity that manages and sweeps memory for reclamation; depending on how fragmented memory gets, the GC will kick in, so knowing in advance is not an easy thing to do.
Properly managing memory sounds tedious, but common sense applies, such as the using clause to ensure an object gets disposed. You could put in a single handler to trap OutOfMemoryException, but that is an awkward approach: if the program has run out of memory, does it just seize up and bomb out, or should it wait patiently for the GC to kick in? Again, determining that is tricky.
System load can adversely affect the GC's job, almost to the point of a denial of service where everything just grinds to a halt. Again, since the specifications of the machine and the nature of its workload are unknown, I cannot answer fully, but I'll assume it has loads of RAM.
In essence, while this is an excellent question, I think you should not worry about it and should leave it to the .NET CLR to handle memory allocation and fragmentation, as it seems to do a pretty good job.
Hope this helps,
Best regards,
Tom.
Your question reminds me of an old discussion, "So what's wrong with 1975 programming?". The architect of varnish-cache argues that instead of telling the OS to get out of the way and managing all memory yourself, you should cooperate with the OS and let it know what you intend to do with memory.
For example, instead of simply reading data from disk, you should use memory-mapped files. This allows the OS to apply its LRU algorithm to write data back to disk when memory becomes scarce. At the same time, as long as there is enough memory, your data will stay in memory. Thus, your application may potentially use all available memory without risking getting killed.
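Since the application in question runs on Windows, the equivalent would be CreateFileMapping/MapViewOfFile. A minimal sketch, where the file name is made up for the example:

#include <windows.h>

int main(void)
{
    /* "cache.dat" is a hypothetical data file used only for illustration. */
    HANDLE file = CreateFileA("cache.dat", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Map the whole file; the OS pages it in on demand and writes dirty
       pages back to disk when physical memory becomes scarce. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    unsigned char *view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (view == NULL) { CloseHandle(mapping); CloseHandle(file); return 1; }

    view[0] ^= 1;    /* touch the mapping; the page is loaded lazily */

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}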