I have written an app and was testing it for memory leaks when I noticed that the "All Allocations" category in Instruments (running against the simulator) keeps increasing in size whenever I open and close a sub-view.
I initially thought it was a memory leak, but it does not show up as a leak in the Leaks tab.
Is this normal?
It depends which column of the table you are looking at.
The 'Overall' and 'Overall Bytes' figures will always go up, since they are a running count of allocations made with no account of deallocations.
However, the 'Live Bytes' and '# Living' figures should go up when an object or block of memory is allocated, and should go down when it is deallocated.
Repeatedly opening and closing a sub-view should (subject to image or data caching) hover around a fixed number of live bytes and living objects / memory blocks.
Instruments sometimes gets a bit confused, however, as you can see from the screenshot. The whole '# Transitory' column is showing '0', which is obviously incorrect. A transitory object is just one that has been allocated and subsequently deallocated, i.e. it's a non-living object.
(# Living + # Transitory == # Overall)
Whenever Instruments gives me that column of zeros, I quit the current run and start a new one.
As for the Leaks Instrument, it will only show those objects or memory blocks that no longer have any pointers pointing to them. If a program continually allocates more and more objects / memory blocks but retains pointers to them, the Leaks Instrument won't show them.
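For example, a pattern like the following hypothetical C-style sketch (not from the question's code) grows without bound, yet a leak detector reports nothing, because every block stays reachable:

#include <stdlib.h>

/* A list that only ever grows. Every node stays reachable through
 * `head`, so a leak detector that searches for unreferenced blocks
 * reports nothing, yet live memory climbs on every call. */
struct node { struct node *next; char payload[1024]; };
static struct node *head = NULL;

void remember_something(void)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL) return;
    n->next = head;
    head = n;   /* retained forever: unbounded growth, but not a "leak" */
}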
That would make sense, would it not? Every time you do something in the app, something is probably allocated, such as your different subviews, so total allocations will increase. It's just a record of the total allocations.
Related
I'm currently studying OS memory management via a video lecture. The instructor says,
In fact, you may have, and it is quite often the case that there may
be several parts of the process memory, which are not even accessed at
all. That is, they are neither executed, loaded or stored from memory.
I don't understand this, since even in a simple C program we access the whole of its address space. Don't we?
#include <stdio.h>

int main()
{
    printf("Hello, World!");
    return 0;
}
Could you elucidate what he means? If possible, could you provide an example program in which several parts of the process memory are not accessed at all when it is run?
Imagine you have a large and complicated utility (e.g. a compiler), and the user asks it for help (e.g. they type gcc --help instead of asking it to compile anything). In this case, how much of the utility's code and data is used?
Most programs have various optional parts that aren't used (e.g. maybe something that works with graphics will have some code for 16 bits per pixel and other code for 32 bits per pixel, and will determine which code to use and never touch the other). Most heap allocators are "eager" (e.g. they'll ask the OS for 20 MiB of space and then might only malloc() 2 MiB of it). Sometimes a program will memory-map a huge file but then only access a small part of it.
Even for your trivial "hello world" example code, the virtual address space probably contains a huge (several MiB) shared library to support lots of C standard library functions (e.g. puts(), fprintf(), sprintf(), ...), and your program only uses a small part of that shared library; and your program probably reserves a conservative amount of space for its stack (e.g. maybe 20 KiB) and then only uses a few hundred bytes of it.
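To make this concrete, here is a minimal sketch (Linux-specific, since it reads /proc/self/status; other systems behave similarly but report memory differently) showing that a large malloc() barely changes the resident set until the memory is actually touched:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print this process's resident set size by scanning
 * /proc/self/status for the VmRSS line (Linux-specific). */
static void print_rss(const char *label)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f) return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s %s", label, line);
    fclose(f);
}

int main(void)
{
    size_t size = 100 * 1024 * 1024;   /* 100 MiB */

    print_rss("at startup:   ");

    /* The allocator reserves address space, but a demand-paging OS
     * doesn't back it with physical memory yet. */
    char *p = malloc(size);
    if (p == NULL) return 1;
    print_rss("after malloc: ");

    /* Touch a single page: only that page gets faulted in. */
    p[0] = 1;
    print_rss("after 1 page: ");

    /* Touch every page: now the resident set grows by roughly 100 MiB. */
    memset(p, 1, size);
    print_rss("after memset: ");

    free(p);
    return 0;
}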
In a virtual memory system, the address space of the process is created in secondary storage at startup. Little or nothing gets placed in memory. For example, the operating system may use the executable file as the page file for the code and static data: it just sets up an internal structure that says some range of memory is mapped to these blocks in the executable file. The same goes for shared libraries. The other data gets mapped to the page file.
As your program runs it starts page faulting rapidly because nothing is in memory and the operating system has to load it from secondary storage.
If there is something that your program does not reference, it never gets loaded into memory.
If you had a global variable declared like

char somedata[1045];

and your program never references that variable, it will never get loaded into memory. The same goes for code: if you have pages of code that don't get executed (e.g. error handling code), they do not get loaded. If you link to shared libraries, you will likely be including a lot of functions that you never use; likewise, they will not get loaded if you do not execute them.
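As a hypothetical illustration, the following program carries 64 MiB of initialized data in its executable file, but because that data is never accessed at run time, a demand-paging OS never faults it into memory:

#include <stdio.h>

/* 64 MiB of initialized data stored in the executable file. It is
 * referenced only on a branch that never runs, so its pages are
 * mapped from the binary but never faulted into physical memory. */
const char big_table[64 * 1024 * 1024] = { 1 };

int main(int argc, char **argv)
{
    (void)argv;
    if (argc > 99)                /* never true in normal use */
        return big_table[argc];   /* keeps big_table from being optimized away */
    printf("Hello, World!\n");
    return 0;
}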
To begin with, not all of the address space is backed by physical memory at all times, especially if your address space covers 2^48+ bytes, which your computer doesn't have (which is not to say you can't map most of the address space to a single physical page of memory, which would be of very little utility for anything).
And then some portions of the address space may be purposefully permanently inaccessible, like a few pages near virtual address 0 (to catch NULL pointer dereferences).
And as has been pointed out in the other answers, with on-demand loading of programs, you may have some portions of the address space reserved for your program, but if the program doesn't happen to need any of its code or data there, nothing needs to be loaded there either.
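The NULL-page case is easy to demonstrate with a deliberately crashing one-liner: the OS never maps the page at address 0, so the load faults immediately:

int main(void)
{
    int *p = (int *)0;   /* NULL: points into the permanently unmapped low pages */
    return *p;           /* faults here; address 0 is never backed by memory */
}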
My app crashes on devices with 0.5GB of memory. However, when profiling memory usage in Xcode, it rarely goes above 140MB. I've used Instruments to check for leaks and there are none that are significant.
However, when I run my app, the memory used by 'Other Processes' is always very high, even in the resting state just after launching.
I added a 1 second delay in each cycle of a loop in my code, and discovered that on each loop, 'Other Processes' increases its memory usage by about 3MB per object, until, on 0.5GB devices, the memory runs out and the app crashes.
This question suggests that these are other apps using that memory, but I have closed every other app and the usage directly correlates with my looping code.
What could be using memory in other processes, that is actually running in my app? And why is my 'Other Processes' using up so much memory?
To give an idea of what I'm doing: I'm pulling data from Parse, then looping through each of the objects returned and creating an SKNode subclass object from the data. I add this node to an array (for reference) and to the scene. Here's the code I'm running on the main thread, with the delay added. NB the line:
[self drawRelationships:[_batches objectAtIndex:_index] forMini:_playerMini];
is a BFTask and so asynchronous. I'm dividing the array into smaller batches so I can see incremental memory usage as each batch is drawn. If I try to draw the whole lot at once, OOM occurs immediately...
- (void)drawNewRelationships
{
    _batches = [NSMutableArray array];
    _index = 0;
    [_playerMini fetchInBackgroundWithBlock:^(PFObject *object, NSError *error) {
        [ParseQuery getNewRelationshipsForMini:_playerMini current:_miniRows.relationshipIds withBlock:^(NSMutableArray *newRelationships) {
            _batches = [self batchArrays:3 fromArray:newRelationships];
            _index = 0;
            [self drawBatches];
        }];
    }];
}

- (void)drawBatches
{
    if ([_batches objectAtIndex:_index]) {
        [self drawRelationships:[_batches objectAtIndex:_index] forMini:_playerMini];
        _index++;
        if (_index < [_batches count]) {
            [self performSelector:@selector(drawBatches) withObject:nil afterDelay:1];
        }
    }
}
The node contains other data (a couple of arrays, a custom object) and I've tried running the app with all that data removed. I've tried running on the main thread and on background threads. I've tried using BFTask to do things asynchronously. Everything I've tried ends up with the same behaviour: creating these SKNode objects eats up memory in 'Other Processes', until, on low-memory devices, it crashes.
It might be worth noting that this behaviour has only started to occur since iOS 9.
Basically, what can be using all this memory in 'other processes' and how can I free it?
Update
I've tried running the Sprite Kit sample app, and even that uses ~550MB in other processes when it launches. Could this be a major Sprite Kit bug?
Well, it turned out to be a rather specific issue. The memory allocated to other processes was in fact memory leaking from my app. It occurred when I flattened a node with many children but didn't nil an NSDictionary that contained references to all the pre-flattened nodes. For some reason, this memory leak didn't show up when profiling.
I also found a very good blog post: http://battleofbrothers.com/sirryan/memory-usage-in-sprite-kit on reducing your app's memory footprint. Worth a read if you're trying to optimise.
I want to provide a solution for those who are not necessarily using SpriteKit, but are facing issues with Other Processes taking up more and more memory - meaning there is a leak. This is the best way I've found to debug leaks in 'Other processes' so far.
Open Instruments and select the Activity Monitor template.
Reproduce the steps in your application and see which process is taking ownership of the leak. For example, if it is mediaserverd, you probably have a leak revolving around encoding/decoding or something media related, so things like releasing buffers or releasing the decompression session might not be working as planned.
Now that you have an idea of where to look, open Xcode and use the Debug Memory Graph tool, and look for the potential leaking instances; you should see all the strong references towards them. For myself, working a lot in Objective-C++, it turns out to be missing autoreleasepools quite often.
I have a service written in Go that takes 6-7G of memory at runtime (RES in top). So I used the pprof tool to try to figure out where the problem is.
go tool pprof --pdf http://<service>/debug/pprof/heap > heap_prof.pdf
But there is only about 1-2G of memory in the result ('Total MB' in the pdf). Where's the rest?
And I've tried profiling my service with GOGC=off; as a result, the 'Total MB' is exactly the same as RES in top. It seems that memory that has been GCed but not yet returned to the kernel doesn't get profiled.
Any idea?
P.S. I've tested with both 1.0.3 and 1.1rc3.
This is because Go currently does not give the memory of GCed objects back to the operating system; to be precise, this holds only for objects smaller than a predefined limit (32KB). Instead, the memory is cached to speed up future allocations (see Go's malloc implementation). Also, it seems that this is going to be fixed in the future (there is a TODO in the source to that effect).
Edit:
New GC behavior: if memory is not used for a while (about 5 minutes), the runtime will advise the kernel to remove the physical mappings from the unused virtual ranges. This process can be forced by calling debug.FreeOSMemory() (from the runtime/debug package).
At one point in my program I am calling GlobalFree() to release a memory buffer I allocated with GlobalAlloc() using the GMEM_FIXED flag. There is nothing that could be locking this block. However, when I call GlobalFree() after referencing the data (and all of the internal data is still the same as it was), the program stops and says it has encountered a user breakpoint inside GlobalFree().
Any ideas what could cause this?
The heap functions typically call DebugBreak(), which implements a user breakpoint, when they detect that the heap structures have been damaged.
The implication is you have written past the end (or the beginning) of the allocated area.
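As a hedged illustration (Win32; the exact diagnostics depend on the debug heap), writing even one byte past the end of a block corrupts the heap's bookkeeping, and the check typically fires when the block is freed:

#include <windows.h>

int main(void)
{
    /* With GMEM_FIXED, GlobalAlloc() returns a directly usable pointer. */
    char *buf = (char *)GlobalAlloc(GMEM_FIXED, 16);
    if (buf == NULL) return 1;

    buf[16] = 'X';   /* BUG: writes one byte past the 16-byte block */

    /* Under a debugger, the heap validates the block here, detects
     * the overwrite, and raises a user breakpoint. */
    GlobalFree((HGLOBAL)buf);
    return 0;
}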
After creating large objects and running out of RAM, I will try and delete the objects in my current environment using
rm(list=ls())
When I check my RAM usage, nothing has changed. Even after calling gc() nothing has changed. I can only replenish my RAM by quitting R.
Anybody have advice for dealing with memory-intensive objects within R?
Memory for deleted objects is not released immediately. R uses a technique called "garbage collection" to reclaim memory for deleted objects. Periodically, it cycles through the list of accessible objects (basically, those that have names and have not been deleted and can therefore be accessed by the user), and "tags" them for retention. The memory for any untagged objects is returned to the operating system after the garbage-collection sweep.
Garbage collection happens automatically, and you don't have any direct control over this process. But you can force a sweep by calling the command gc() from the command line.
Even then, on some operating systems garbage collection might not reclaim memory (as reported by the OS). Older versions of Windows, for example, could increase but not decrease the memory footprint of R. Garbage collection would only make space for new objects in the future, but would not reduce the memory use of R.
On Windows, the technique you describe works for me. Try the following example.
Open the Windows Task Manager (CTRL+SHIFT+ESC).
Start RGui. RGui.exe mem usage is 27 460K.
Type
gcinfo(TRUE)
x <- rnorm(1e8)
RGui.exe mem usage is now 811 100K.
Type rm("x"). RGui.exe mem usage is still 811 100K.
Type gc(). RGui.exe mem usage is now 28 332K.
Note that gc() should be called automatically if you have removed objects from your workspace and then try to allocate more memory for new variables.
My impression is that multiple forms of gc() are tried before R reports failed memory allocation. I'm not aware of a solution for this at present, other than restarting R as you suggest. It appears that R does not defragment memory.
An old question, I realize, but I've found that (on macOS Mojave) invoking pryr::mem_used() in the R session causes the Activity Monitor to immediately update the reported memory usage to reflect only the objects retained in the R environment.