Okay, so this is my issue, and I apologize if it's a duplicate. I searched but couldn't find anything I considered relevant.
When I run Instruments from Xcode and begin testing my application for memory leaks or allocations, my iMac eventually begins to run very, very slowly.
This led me to run Activity Monitor while using Instruments, and I noticed that for every second Instruments is open it takes up more and more real memory - roughly 100 MB a second.
It doesn't take long for it to consume all of my iMac's free memory (2 GB), and then it starts to lag.
Anyway, this doesn't occur with every application. I've done the same test with some applications/projects I downloaded, and Instruments only seems to use about 250 MB and doesn't dramatically increase.
Is there something obvious that I'm doing incorrectly? Any help would be appreciated.
Thanks.
Instruments consumes a lot of memory.
Depending on what you are recording, you may be able to reduce its memory usage. For example, you can often specify what (or what not) to record, or lower sampling frequencies (if applicable).
100 MB/s is unusually high. Can you give a more exact description of what you are recording in that time (the instruments you use, what the process you record is doing, etc.)?
Xcode 3 used a lot less memory - not sure if that's also the case for Instruments.
You can reduce the memory usage somewhat by running the toolset as 32 bit processes.
Lastly, 2 GB of physical memory is nothing for Xcode + Instruments + the iOS Simulator. FWIW, I regularly exhaust physical memory with 8 or more GB. Boo. Fortunately, memory is cheap when you want 4 or 8 GB.
Update
I tried using Instruments for Allocations, Leaks, and Zombies
You can run these tests individually, if you must.
Allocations
By itself, the Allocations instrument should not consume a ton of memory unless your app creates a very large number of allocations.
To reduce memory with this instrument, you can disable some options you are not interested in:
do not record each ref count operation
only track active allocations
disable zombie detection
do not identify C++ objects
Leaks
Implies the Allocations instrument only if you want the history of leaked allocations.
Leak detection itself can consume a lot of memory because it scans memory, basically cloning your allocations. Say you have 100 MB allocated: Leaks will periodically pause the process, clone the memory, and scan it for patterns. This could consume more memory than your app itself. IIRC, it's executed as a subprocess in Instruments.
Zombies
Implies the Allocations instrument.
Zombie detection usually implies ref-count recording. When debugging zombies, it's most effective to never free them. If you do free them, you may only detect transient zombies (not sure if there's an option for that in Instruments...). Never freeing Objective-C allocations will obviously consume more memory, and running Leaks on the process will then consume even more memory because your heap sizes will be larger - leaks and zombies should not be combined.
You should be able to reduce total consumption by disabling some of these options and testing for them individually.
Notes
The bleeding edge Developer Tools releases can be really shaky. If you are having problems, it helps to stick to the official releases.
I can run an OS X unit test (primarily C/C++ APIs) with Allocations alone, and it consumes about 1 MB/s while recording. Something seems wrong, but perhaps that indicates an issue in your program (many transient allocations?).
Changing the way the data is displayed and/or the charge/focus settings can require a lot of memory. E.g., "Restore All" can require a few GB to process a large sample.
If 100 MB/s is an accurate number, I'd file a bug. I know Instruments consumes a lot of memory, but that's very high for recording an idle app.
Good luck.
Related
I am new to swift and coding in general. I have made my first OS X app over the last few days. It is a simple ticker app that lives in the menu bar.
My issue is that over the space of 3 hours, my app goes from 10 MB of RAM being used to over 1 GB. It slowly uses more and more. I noticed that after about 6 hours the app stops working; I can only assume that OS X has stopped the process because it's hogging too much memory?
Anyway, I have looked online and I have used Xcode Instruments to try and find a memory leak, but I don't know exactly how to pinpoint it. Can anyone give me some generally good ways to find memory leaks and sources of bugs when using Xcode? Any general practices are good too.
If the memory loss is not due to a leak (run Leaks and the Analyzer), the loss is due to inadvertently retained and unused memory.
Use Instruments to check for leaks and for memory loss due to retained-but-not-leaked memory. The latter is unused memory that is still pointed to. Use Mark Generation (Heapshot) in the Allocations instrument in Instruments.
For how to use Heapshot to find memory creep, see: bbum's blog.
Basically the method is to run the Allocations instrument, take a heapshot, run an iteration of your code, and take another heapshot, repeating 3 or 4 times. This will indicate memory that is allocated and not released during the iterations.
To figure out the results, disclose the heapshots to see the individual allocations.
If you need to see where retains, releases and autoreleases occur for an object use instruments:
Run in Instruments; in Allocations, set "Record reference counts" on (for Xcode 5 and lower you have to stop recording to set the option). Cause the app to run, stop recording, drill down, and you will be able to see where all retains, releases, and autoreleases occurred.
When confronted with a memory leak, the first thing I do is look at where variables are created and destroyed; especially if they are defined in looping logic (although generally not a good idea).
Generally most memory leaks come from there. I would venture a guess that the leak occurs somewhere in the logic that tracks your timed iterations.
Good luck!
So in Xcode, the Debug Navigator shows CPU usage and memory usage. When you click on Memory, it says 'Memory Utilized'.
In my app I am using the latest RestKit (0.20.x), and every time I make a GET request using getObjectsAtPath (which doesn't even return a very large payload), the memory utilized increases by about 2 MB. So if I refresh my app 100 times, the memory utilized will have grown by over 200 MB.
However, when I run the Leaks tool, the live bytes remain fairly small and do not increase with each new request. Live bytes remain below 10 MB the whole time.
So do I have a memory issue or not? Memory Utilized grows like crazy, but Live Bytes suggests everything is okay.
You can use Heapshot Analysis to evaluate the situation. If that shows no growth, then the memory consumption may be virtual memory that (for example) resides in a cache/store which supports eviction and recreation - so you should also identify growth in virtual memory regions.
If you keep making requests (e.g. try 200 refreshes), the memory will likely decrease at some point - or you will have memory warnings and ultimately allocation requests may fail. Determine how memory is reduced, if this is the case. Otherwise, you will need to determine where it is created and possibly referenced.
Also, test on a device in this case. The simulator is able to utilise much more memory than a device simply because it has more to work with. Memory constraints are not simulated.
OK, just to be clear, I understand that the Task Manager is never a good way to monitor memory consumption of a program. Now that I've cleared the air...
I've used SciTech's .NET Memory Profiler for a while now, but almost exclusively on the .NET 3.5 version of our app. Typically, I'd run an operation, collect a baseline snapshot, run the operation again, collect a comparison snapshot, and attack leaks from there. Generally, the Task Manager would mimic the rise and fall of memory (within a range of accuracy and within a certain period of time).
In our .NET 4.0 app, our testing department reported a memory leak when performing a certain set of operations (which I'll call operation A). Within the profiler, I'd see a sizable change in live bytes (usually denoting a leak). If I immediately collect another snapshot, the memory is reclaimed (regardless of how long I waited to collect the initial comparison snapshot). I thought the finalizer might be getting stuck, so I manually injected the following calls:
GC.Collect();                   // force a full collection
GC.WaitForPendingFinalizers();  // block until the finalizer queue has drained
GC.Collect();                   // collect anything freed by those finalizers
When I do that, I don't see the initial leak in the profiler, but my working set in the Task Manager is still ridiculously high (operation A involves loading images). I thought the issue might be a stuck finalizer (and that SciTech was able to do some voodoo magic when the profiler collects its snapshots), but for all the hours I spent using WinDbg and MDbg, I couldn't ever find anything that suggested the finalizer was getting stuck. If I just let my app sit for hours, the memory in the working set would never reduce. However, if I proceeded to perform other operations, the memory would drop substantially at random points.
MY QUESTION
I know the GC changed substantially with CLR 4.0, but did it affect how the OS sees allocated memory? My computer has 12 GB RAM, so when I run my app and ramp up my memory usage, I still have TONS free so I'm hypothesizing that it just doesn't care to reduce what it's already allocated (as portrayed in the Task Manager), even if the memory has been "collected". When I run this same operation on a machine with 1GB RAM, I never get an out of memory exception, suggesting that I'm really not leaking memory (which is what the profiler also suggests).
The only reason I care what the Task Manager shows is that it's how our customers monitor our memory usage. If there is a change in the GC that would affect this, I just want to be able to show them documentation that says it's Microsoft's fault, not ours :)
In my due diligence, I've searched a bunch of other SO threads for an answer, but all I've found are articles about the concurrency of the generational cleanup and other unrelated, yet useful, facts.
You cannot expect to see changes in the use of managed heap immediately reflected in process memory. The CLR essentially acts as a memory manager on behalf of your application, so it allocates and frees segments as needed. As allocating and freeing segments is an expensive operation the CLR tries to optimize this. It will typically not free an empty segment immediately as it could be used to serve new managed allocations. If you want to monitor managed memory usage, you should rely on the .NET specific counters for this.
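To see this difference for yourself, you can compare what the managed heap reports against what the OS reports for the process. This is only a rough sketch using standard .NET calls (GC.GetTotalMemory and Process.WorkingSet64); the numbers, and the gap between them, will vary by runtime version and workload:

using System;
using System.Diagnostics;

class ManagedVsProcessMemory
{
    static void Main()
    {
        // Bytes the CLR currently believes are allocated on the managed heap
        // (no collection is forced first, so this includes uncollected garbage).
        long managedBytes = GC.GetTotalMemory(forceFullCollection: false);

        // Roughly what Task Manager shows: physical memory mapped into the process,
        // including empty heap segments the CLR keeps around to serve future allocations.
        long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"Managed heap: {managedBytes / (1024 * 1024)} MB");
        Console.WriteLine($"Working set:  {workingSetBytes / (1024 * 1024)} MB");
    }
}

If the managed number falls after a collection but the working set stays flat, that is the CLR holding on to segments, not your code leaking.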
Note: I am aware of the question Memory management in memory intensive application, however that question appears to be about applications that make frequent memory allocations, whereas my question is about applications intentionally designed to consume as much physical memory as is safe.
I have a server application that uses large amounts of memory in order to perform caching and other optimisations (think SQL Server). The application runs on a dedicated machine, and so can (and should) consume as much memory as it wants / is able to in order to speed up and increase throughput and response times without worry of impacting other applications on the system.
The trouble is that if memory usage is underestimated, or if load increases, it's possible to end up with nasty failures as memory allocations fail - in this situation, obviously the best thing to do is to free up memory in order to prevent the failure, at the expense of performance.
Some assumptions:
The application is running on a dedicated machine
The memory requirements of the application exceed the physical memory on the machine (that is, if additional memory were available to the application, it would always be able to use that memory in some way to improve response times or throughput)
The memory is effectively managed in a way such that memory fragmentation is not an issue.
The application knows what memory can be safely freed, and what memory should be freed first for the least performance impact.
The app runs on a Windows machine
My question is - how should I handle memory allocations in such an application? In particular:
How can I predict whether or not a memory allocation will fail?
Should I leave a certain amount of memory free in order to ensure that core OS operations remain responsive (and don't in that way adversely impact the application's performance), and how can I find out how much memory that is?
The core objective is to prevent failures as a result of using too much memory, while at the same time using up as much memory as possible.
I'm a C# developer, however my hope is that the basic concepts for any such app are the same regardless of the language.
In Linux, the memory usage percentage is divided into the following levels.
0 - 30% - no swapping
30 - 60% - swap dirty pages only
60 - 90% - swap clean pages also based on LRU policy.
90% - invoke the OOM (out-of-memory) killer and kill the process consuming the most memory.
Check this: http://linux-mm.org/OOM_Killer
I think Windows might have a similar policy, so you can check the memory stats and make sure you never get to the max threshold.
One way to stop consuming more memory is to go to sleep and give more time for memory cleanup threads to run.
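If you go the route of checking memory stats before a big operation, .NET has a couple of building blocks for this. The sketch below is just one possible approach, not a complete policy: MemoryFailPoint asks the CLR up front whether an operation of a given size is likely to succeed, and the "Available MBytes" performance counter (Windows-only) shows how much physical memory is left system-wide. The 512 MB figure is an arbitrary example.

using System;
using System.Diagnostics;
using System.Runtime;

class HeadroomCheck
{
    static void Main()
    {
        // Windows-only: how much physical memory is left for the whole machine.
        using (var available = new PerformanceCounter("Memory", "Available MBytes"))
        {
            Console.WriteLine($"Available physical memory: {available.NextValue()} MB");
        }

        // Ask the CLR whether a ~512 MB operation is expected to succeed.
        // InsufficientMemoryException is thrown up front, before any allocation happens,
        // which gives the application a chance to shrink its caches instead of failing mid-way.
        try
        {
            using (new MemoryFailPoint(512))
            {
                // perform the memory-hungry operation here
            }
        }
        catch (InsufficientMemoryException)
        {
            // back off: evict cache entries, then retry with a smaller request
        }
    }
}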
That is a very good question, and it's bound to be subjective as well, because a fundamental aspect of C# is that all memory management is done by the runtime, i.e. the garbage collector. The GC is a non-deterministic entity that manages and sweeps memory for reclaiming; when it kicks in depends on factors such as how fragmented the memory gets, so knowing in advance is not an easy thing to do.
Properly managing memory sounds tedious, but common sense applies, such as the using clause to ensure an object gets disposed. You could put in a single handler to trap OutOfMemoryException, but that is an awkward approach: if the program has run out of memory, does it just seize up and bomb out, or should it wait patiently for the GC to kick in? Again, determining that is tricky.
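As a trivial illustration of the using point (LoadImage and the file path are made-up placeholders, not anything from the question): the stream is disposed as soon as the block exits, instead of whenever the GC happens to finalize it.

using System.IO;

class DeterministicDisposal
{
    static byte[] LoadImage(string path)
    {
        // Dispose the file handle deterministically when the block exits,
        // rather than waiting for the garbage collector and finalizer.
        using (var stream = File.OpenRead(path))
        using (var memory = new MemoryStream())
        {
            stream.CopyTo(memory);
            return memory.ToArray();
        }
    }
}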
The load of the system can adversely affect the GC's job, almost to the point of a denial of service, where everything just grinds to a halt. Again, since the specifications of the machine and the nature of that machine's job are unknown, I cannot answer fully, but I'll assume it has loads of RAM.
In essence, while an excellent question, I think you should not worry about it and leave it to the .NET CLR to handle the memory allocation/fragmentation as it seems to do a pretty good job.
Hope this helps,
Best regards,
Tom.
Your question reminds me of an old discussion, "So what's wrong with 1975 programming?". The architect of varnish-cache argues that instead of telling the OS to get out of the way and managing all memory yourself, you should rather cooperate with the OS and let it understand what you intend to do with memory.
For example, instead of simply reading data from disk, you should use memory-mapped files. This allows the OS to apply its LRU algorithm to write-back data to disk when memory becomes scarce. At the same time, as long as there is enough memory, your data will stay in memory. Thus, your application may potentially use all memory, without risking getting killed.
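A minimal C# sketch of that idea, assuming a local file named data.bin exists (the file name is just a placeholder): the file is mapped into the address space rather than copied into private buffers, so the OS can page it in on demand and evict clean pages when memory gets scarce.

using System;
using System.IO.MemoryMappedFiles;

class MappedRead
{
    static void Main()
    {
        // Map the file instead of reading it into our own buffers; the OS owns the paging.
        using (var mmf = MemoryMappedFile.CreateFromFile("data.bin"))
        using (var accessor = mmf.CreateViewAccessor())
        {
            byte first = accessor.ReadByte(0);
            Console.WriteLine($"First byte: {first}");
        }
    }
}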
Applications like Microsoft Outlook and the Eclipse IDE consume a lot of RAM - as much as 200 MB. Is it OK for a modern application to consume that much memory, given that a few years back we had only 256 MB of RAM? Also, why is this happening? Are we taking the resources for granted?
Is it acceptable when most people have 1 or 2 gigabytes of RAM on their PCS?
Think of this: although your 200 MB is small and nothing to worry about given a 2 GB limit, everyone else also has apps that take masses of RAM. Add them together and you find that the 2 GB I have very quickly gets all used up. End result: your app appears slow, resource hungry, and takes a long time to start up.
I think people will start to rebel against resource-hungry applications unless they get 'value for RAM'. You can see this starting to happen on servers as virtualised systems gain popularity - people are complaining about resource requirements and corresponding server costs.
As a real-world example, I used to code with VC6 on my old 512 MB, 1.7 GHz machine, and things were fine - I could open 4 or 5 copies along with Outlook, Word, and a web browser, and my machine was responsive.
Today I have a dual-processor 2.8 GHz server box with 3 GB of RAM, but I cannot realistically run more than 2 copies of Visual Studio 2008. They both take ages to start up (as all that RAM still has to be copied in and set up, along with all the other startup costs we now have), and even Word takes ages to load a document.
So if you can reduce memory usage you should. Don't think that you can just use whatever bloated framework/library/practice you want with impunity.
http://en.wikipedia.org/wiki/Moore%27s_law
also:
http://en.wikipedia.org/wiki/Wirth%27s_law
There are a couple of things you need to think about.
1/ Do you have 256 MB now? I wouldn't think so - my smallest-memory machine has 2 GB, so a 200 MB application is not much of a problem.
2a/ That 200 MB you talk about might not be "real" memory. It may just be address space, in which case it might not all be in physical memory at once. Some bits may only be pulled into physical memory when you choose to do esoteric things.
2b/ It may also be shared with other processes (such as a DLL). This means it could be held in physical memory as only one copy but be present in the address space of many processes. That way, the usage is amortized over those many processes. Both 2a and 2b depend on where your figure of 200 MB actually came from (which I don't know and, running Linux, I'm unlikely to find out without you telling me :-).
3/ Even if it is physical memory, modern operating systems aren't like the old DOS or Windows 3.1 - they have virtual memory where bits of applications can be paged out (data) or thrown away completely (code, since it can always reload from the executable). Virtual memory gives you the ability to use far more memory than your actual physical memory.
Many modern apps will take advantage of the existence of more memory to cache more. Some, like Firefox and SQL Server, have explicit settings for how much memory they will use. In my opinion, it's foolish not to use available memory - what's the point of having 2 GB of RAM if your apps all sit around at 10 MB, leaving 90% of your physical memory unused? Of course, if your app does use caching like this, it had better be good at releasing that memory if page-file thrashing starts, or it should allow the user to limit the cache size manually.
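For example, in .NET you could put such a cache behind System.Runtime.Caching.MemoryCache and give it an explicit ceiling, so it trims itself instead of growing until the page file thrashes. This is only an illustrative sketch; the cache name, the 256 MB limit, and RenderPage are made up.

using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class BoundedCacheExample
{
    static string RenderPage(int number) => "rendered page " + number;  // stand-in for expensive work

    static void Main()
    {
        // Cap the cache at roughly 256 MB; MemoryCache evicts entries to stay under the limit.
        var config = new NameValueCollection { { "CacheMemoryLimitMegabytes", "256" } };
        using (var cache = new MemoryCache("documentCache", config))
        {
            cache.Set("page:1", RenderPage(1), DateTimeOffset.Now.AddMinutes(10));
            var page = (string)cache.Get("page:1");  // second access: served from memory
            Console.WriteLine(page);
        }
    }
}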
You can see the advantage of this by running a decent-sized query against SQL server. The first time you run the query, it may take 10 seconds. But when you run that exact query again, it takes less than a second - why? The query plan was only compiled the first time and cached for use later. The database pages that needed to be read were only loaded from disk the first time - the second time, they were still cached in RAM. If done right, the more memory you use for caching (until you run into paging) the faster you can re-access data. You'll see the same thing in large documents (e.g. in Word and Acrobat) - when you scroll to new areas of a document, things are slow, but once it's been rendered and cached, things speed up. If you don't have enough memory, that cache starts to get overwritten and going to the old parts of the document gets slow again.
If you can make good use of the RAM, it is your responsibility to use it.
Yes, it is perfectly normal. Also, a lot has changed since 256 MB was normal... and do not forget that before that, 640 KB was supposed to be enough for everybody!
Now most software solutions are built with a garbage collector: C#, Java, Ruby, Python... Everybody loves them because development can certainly be faster; however, there is one glitch.
The same program can be memory-leak free with either manual or automatic memory deallocation. However, in the second case it is likely for memory consumption to grow. Why? In the first case, memory is deallocated and kept clean immediately after something becomes useless (garbage). However, it takes time and computing power to detect that automatically, so most collectors (except for reference counting) wait for garbage to accumulate in order to make the cost of the exploration worthwhile. The longer you wait, the more garbage you can sweep in one pass, but more memory is needed to accumulate that garbage. If you try to force the collector constantly, your program will spend more time exploring memory than working on your problems.
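You can watch this accumulation directly with GC.GetTotalMemory: read it once without forcing a collection (live objects plus garbage that has not been swept yet) and once with a forced collection (only what is really live). A rough sketch - the exact numbers will vary, and the GC may well run on its own during the loop:

using System;

class GarbageAccumulation
{
    static void Main()
    {
        // Create plenty of short-lived garbage.
        for (int i = 0; i < 100000; i++)
        {
            var temp = new byte[1024];
            temp[0] = (byte)i;  // touch it so the allocation is not optimized away
        }

        // Heap size including garbage the collector has not bothered to sweep yet.
        long beforeCollect = GC.GetTotalMemory(forceFullCollection: false);

        // Heap size after a forced full collection: only the truly live memory.
        long afterCollect = GC.GetTotalMemory(forceFullCollection: true);

        Console.WriteLine($"Before forced collection: {beforeCollect / 1024} KB");
        Console.WriteLine($"After forced collection:  {afterCollect / 1024} KB");
    }
}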
You can be completely sure that as long as programmers get more resources, they will sacrifice them for heavier tools in exchange for more freedom, abstraction, and faster development.
A few years ago 256 MB was the norm for a PC; back then Outlook consumed about 30-35 MB of memory, which is around 10% of the available memory. Now PCs have 2 GB or more as the norm, and Outlook consumes 200 MB of memory, which is also about 10%.
The 1st conclusion: as more memory is available, applications use more of it.
The 2nd conclusion: no matter what time frame you pick, there are applications that are true memory hogs (like Outlook) and applications that are very memory efficient.
The 3rd conclusion: the memory consumption of an app doesn't go down over time; otherwise 640 KB would have been enough even today.
It completely depends on the application.