I'd like to know how to force Valgrind to show only memory leaks!
With --leak-check=full it shows memory leaks (which is good), but also uninitialized-value problems and/or errors about conditional jumps depending on uninitialized values.
thanks!
You can suppress the uninitialized-value reads, including the ones reported for conditional jumps, with --undef-value-errors=no. I don't know whether you can disable other kinds of errors, such as heap corruption or double frees.
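As a minimal sketch of the difference (the file name and program are made up for illustration), a program with one deliberate leak and one uninitialized read behaves like this under the two invocations shown in the comments:

    // leaky.cpp (hypothetical example) - build without optimisation: g++ -g leaky.cpp -o leaky
    //
    //   valgrind --leak-check=full ./leaky
    //       reports the leak AND a "Conditional jump or move depends on
    //       uninitialised value(s)" error.
    //
    //   valgrind --leak-check=full --undef-value-errors=no ./leaky
    //       reports only the leak.
    #include <cstdio>
    #include <cstdlib>

    int main() {
        int* leaked = static_cast<int*>(std::malloc(sizeof(int)));  // never freed: definite leak
        int uninitialised;                   // read before it is ever written
        if (uninitialised > 0)               // conditional jump on an uninitialised value
            std::puts("positive");
        (void)leaked;
        return 0;
    }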
I am building an app in Xcode and am now deep into the memory-management portion of the project. When I use Allocations and Leaks I seem to get entirely different results from what I see in Xcode's debug panel: in particular, the debug panel seems to show much higher memory usage than Allocations does, and it also highlights leaks that, as far as I can tell, (1) do not exist and (2) are confirmed not to exist by the Leaks tool. Is this thing useless, or even worse, misleading?
Here was a new one: today it told me I was using >1 GB of memory but its little memory meter read significantly <1 GB (and was still wrong if the Allocations data is accurate). Picture below.
UPDATE: I ran VM Tracker in a 38-minute session and it does appear that virtual memory accounts for the difference between Allocations/Leaks and the memory gauge. Picture below. I'm not entirely sure how to think about this yet. Our game uses a very large number of textures that are swapped. I imagine this is common in most games of our scale (11 boards, 330 levels; each board and map screen has unique artwork).
You are probably using the Memory Gauge while running in the Simulator using a Debug build configuration. Both of those will give you misleading memory results. The only reliable way to know how memory is being managed is to run on a device using a Release build. Instruments uses the Release build configuration, so it's already going to be better than just running and using the Memory Gauge.
Moreover, it is a known flaw that the Xcode built-in memory tools, such as the Memory Debugger, can generate false positives for leaks.
However, Instruments has its flaws as well. My experience is that, for example, it fails to catch leaks generated during app startup. Another problem is that people don't always understand how to read its output. For example, you say:
the debug panel seems to show much higher memory usage than what I see in Allocations
Yes, but Allocations is not the whole story. You are probably failing to look at the VM allocations. Those are shown separately and often constitute the reason for high memory use (because they include the backing stores for images and the view rendering tree). The Memory Gauge does include virtual memory, so this alone might account for the "difference" you think you're seeing.
So, the answer to your question is: No, the Memory Gauge is not useless. It gives a pretty good idea of when you might need to be alert to a memory issue. But you are then expected to switch to Instruments for a proper analysis.
I am new to Swift and coding in general. I have made my first OS X app over the last few days. It is a simple ticker app that lives in the menu bar.
My issue is that over the space of 3 hours, my app goes from 10 MB of RAM being used to over 1 GB. It slowly uses more and more. I noticed that after about 6 hours the app stops working; I can only assume that OS X has killed the process because it's hogging too much memory?
Anyway, I have looked online and I have used Xcode's Instruments to try to find a memory leak, but I don't know exactly how to pinpoint it. Can anyone give me some general good ways to find memory leaks and sources of bugs when using Xcode? Any general practices are good too.
If the memory loss is not due to a leak (run Leaks and the Analyzer), the loss is due to inadvertently retained and unused memory.
Use Instruments to check for leaks and for memory loss due to retained but not leaked memory. The latter is unused memory that is still pointed to. Use Mark Generation (Heapshot) in the Allocations instrument in Instruments.
For a how-to on using Heapshot to find memory creep, see: bbum blog
Basically the method is to run the Allocations instrument, take a heapshot, run an iteration of your code and take another heapshot, repeating 3 or 4 times. This will indicate memory that is allocated but not released during the iterations.
To interpret the results, disclose the heapshots to see the individual allocations.
If you need to see where retains, releases and autoreleases occur for an object, use Instruments:
Run the app in Instruments and, in Allocations, turn "Record reference counts" on (for Xcode 5 and lower you have to stop recording to set the option). Exercise the app, stop recording, drill down, and you will be able to see where all the retains, releases and autoreleases occurred.
When confronted with a memory leak, the first thing I do is look at where variables are created and destroyed, especially if they are defined inside looping logic (which is generally not a good idea anyway).
Generally most memory leaks come from there. I would venture a guess that the leak occurs somewhere in the logic that tracks your timed iterations.
Good luck!
I have some C++ code and I am playing with Intel's VTune. I ran the General Exploration analysis and have no idea how to interpret the results. It flags the number of retire stalls as an issue.
On its own, that is enough to confuse me, because I'm probably in over my head. But the functions it lists as having an abnormal number of retire stalls are _int_malloc and malloc_consolidate, both in libc. So it's not something I can look for in my own code and try to figure out, and it's not something I can really begin to change.
Is there a way to use that information to improve my own code? Or does it really just mean that I should find ways to allocate less or less often?
(Note: the specific code at hand isn't the issue; I'm looking for strategies to interpret the data and improve things when the hotspots, the stalls, or whatever the "problem" may be occur in code outside my control.)
Is there a way to use that information to improve my own code? Or does it really just mean that I should find ways to allocate less or less often?
Yes, it pretty much sounds like you should make changes in your code so that malloc gets called less often.
Is the heap allocation really necessary?
Is there a buffer that you can reuse?
Is using a memory pool an option?
Can you do stack allocation instead? For example, if those are arrays, do you happen to know the maximum size of those arrays at compile time? (See the sketch after this list.)
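To make the buffer-reuse and stack-allocation points concrete, here is a minimal sketch; the buffer size of 1024 and the work done per iteration are made up for illustration:

    #include <cstddef>
    #include <vector>

    // Moving the allocation out of the loop: the heap is touched once instead of
    // once per iteration (compare with constructing the vector inside the loop,
    // which would call malloc every pass).
    double sum_with_reused_buffer(int iterations) {
        std::vector<double> scratch(1024);                     // allocated once, reused every pass
        double total = 0.0;
        for (int i = 0; i < iterations; ++i) {
            for (std::size_t j = 0; j < scratch.size(); ++j)
                scratch[j] = static_cast<double>(i) + j;       // fill the same buffer each time
            total += scratch[0] + scratch.back();
        }
        return total;
    }

    // If the maximum size is known at compile time, a stack array avoids the
    // heap entirely.
    double sum_with_stack_buffer(int iterations) {
        double scratch[1024];                                  // no malloc at all
        double total = 0.0;
        for (int i = 0; i < iterations; ++i) {
            for (int j = 0; j < 1024; ++j)
                scratch[j] = static_cast<double>(i) + j;
            total += scratch[0] + scratch[1023];
        }
        return total;
    }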
Depending on your application, memory allocation can be expensive. I once made a program 20x faster by removing memory allocations from a tight loop. The application wasn't that slow on Linux but it was a disaster on Windows. After my changes, it was also OK on Windows.
Know which lines of code call malloc the most.
Avoid repeated allocation and deallocation.
Potentially use thread-local storage together with the previous point.
Write your own allocator that only returns memory when you tell it to and otherwise keeps freed memory blocks in a list (use list::splice to move list elements from one list into another); a rough sketch follows this list.
Use allocators from Boost, which can potentially do the same as the previous point.
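A rough sketch of the last two points, i.e. an allocator-like class that keeps freed blocks on a free list and only gives memory back when explicitly asked; the class and member names here are mine, not from Boost or any other library:

    #include <cstddef>
    #include <list>
    #include <vector>

    // Hypothetical block recycler: recycle() never frees memory, it just moves
    // the block back onto an internal free list; memory is only returned to the
    // system when release_all() is called.  std::list::splice moves nodes
    // between the two lists without any allocation.
    class BlockRecycler {
    public:
        using Block  = std::vector<char>;
        using Handle = std::list<Block>::iterator;

        explicit BlockRecycler(std::size_t block_size) : block_size_(block_size) {}

        Handle acquire() {
            if (free_.empty())
                free_.emplace_back(block_size_);            // allocate only when the free list is empty
            in_use_.splice(in_use_.begin(), free_, free_.begin());
            return in_use_.begin();
        }

        void recycle(Handle h) {
            free_.splice(free_.begin(), in_use_, h);        // keep the block for later reuse
        }

        void release_all() {                                // the one place memory is really freed
            free_.clear();
            in_use_.clear();
        }

    private:
        std::size_t block_size_;
        std::list<Block> free_;     // blocks available for reuse
        std::list<Block> in_use_;   // blocks currently handed out
    };

A caller would acquire() a block, use its bytes through the handle, and call recycle() instead of free; keeping one instance per thread would also cover the thread-local-storage point.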
I have used PageHeap for debugging heap corruption for the last four years. Generally I don't have any problems with it, but now I am seeing some weird behavior.
I enabled PageHeap for my process on a Win7-SP1-x86 host using Global Flags (gflags), with the following flags:
- Enable heap tail checking
- Enable heap free checking
- Enable Page Heap
I noticed crashes with out-of-memory exceptions.
The !address -summary command said that ~90% of the virtual memory was consumed by PageHeap.
This is really strange to me because, as far as I know, PageHeap should not cause such a large memory overhead.
Can someone please explain the reason for this behavior?
When running an application with full page heap enabled, at least two pages (4 KB each) are allocated for each malloc: one for the allocation itself and one guard page. When the memory is freed, these pages (or maybe only the first one) remain 'reserved': they don't occupy any physical or page-file memory, but the virtual address range is made unavailable, and an access violation is raised when you try to access that memory. This is what allows use-after-free bugs to be caught. As a consequence, the virtual address space of the application keeps growing even if you properly call free for each malloc.
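A sketch of the kind of allocation pattern that runs a 32-bit process out of address space under full page heap; the numbers are illustrative only:

    #include <cstdlib>

    int main() {
        // Under the normal heap this loop is harmless: the same few bytes are
        // reused over and over.  Under full page heap, assuming the behaviour
        // described above (the address range stays reserved after free), each
        // malloc takes roughly two 4 KB pages of virtual address space, so one
        // million iterations reserve on the order of 8 GB of address range -
        // far beyond the ~2 GB available to a 32-bit process, which is why the
        // out-of-memory crashes appear.
        for (int i = 0; i < 1000000; ++i) {
            void* p = std::malloc(16);
            std::free(p);
        }
        return 0;
    }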
Although it won't happen often, there are a couple of cases where my Cocoa application will allocate very large amounts of memory, enough to make me worry about malloc failing. What is the best way to handle this sort of failure in a Cocoa application? I've heard that exceptions are generally discouraged in this development environment, but is this a case where they would be useful?
If an allocation fails because you are out of memory, more likely than not there has already been an allocation error in some framework somewhere that has left the app in an indeterminate state.
Even if that isn't the case, you can't do anything that'll allocate memory and that leaves you with very few options.
Even freeing memory in an attempt to "fix" the problem isn't going to consistently work, not even to "fix" it by showing a nice error message and exiting cleanly.
You also don't want to try and save data from this state. Or, at least, not without writing all the code necessary to deal with corrupt data on read (because it is quite possible that a failed allocation meant some code somewhere corrupted memory).
Treat allocation failures as fatal, log and exit.
It is extremely uncommon for a correctly written application to run out of memory. More likely, when an app runs out of memory, the user's system is going to be paging like hell, and thus performance will have degraded significantly long before the allocation failure.
Your return on investment for focusing on optimizing and reducing memory use will be orders of magnitude greater than trying to recover from an allocation failure.
(Alan's original answer was accurate as well as his edit).
If you're running into memory allocation errors, you shouldn't try to handle them, and instead rethink how your application uses memory.
I'm not sure what the Cocoa idioms are, but for C++ and C# at least, out of memory exceptions are a sign of larger problems and are best left to the user/OS to deal with.
Say your memory allocation fails: what else can your system do? How much memory is left? Is it enough to show a dialog or print a message before shutting down? Will throwing an exception succeed? Will cleaning up resources cause cascading memory exceptions?
If malloc fails you will get NULL back, so if that's the case, can your application continue without that memory? If not, treat the condition as a fatal error and exit with a helpful message to the user.
If you run out of memory there is usually not much you can do short of terminating your app. Even showing a notification could fail because there is not enough memory for it.
The standard in C applications is to write a void *xmalloc(size_t size); function that checks the return value of malloc and, if it is NULL, prints an error to stderr and then calls abort(). That way you just use xmalloc throughout your code and don't think about it. If you run out of memory, bad luck, and your app will die.
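A minimal sketch of that wrapper (written as C-style code that also compiles as C++; xmalloc is just the conventional name, not a standard function):

    #include <stdio.h>
    #include <stdlib.h>

    /* Never returns NULL: on allocation failure it reports the error on stderr
       and aborts, so callers can use the result without checking it. */
    void *xmalloc(size_t size) {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "fatal: out of memory (requested %zu bytes)\n", size);
            abort();
        }
        return p;
    }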