I have the following function:
(defn- clock-process [every f]
  (while true
    (f)
    (Thread/sleep every)))
And I'm spinning it off into its own thread, to avoid blocking the main thread:
(future
  (clock-process 3600000 ;; fire every hour
                 #(intensive-fn ...)))
On a Heroku dyno, total memory use grows substantially at each hourly interval:
Where the memory drops off is where the application is restarted.
Can anyone help me understand what's happening?
I don't know enough to help you understand why this is happening, but I can suggest some options for mitigating the problem.
tl;dr: Lowering your maximum heap size is probably the best thing to do.
First, set up the Memory logging agent. This will periodically print some messages to the logs like:
source=web.1 measure.mem.jvm.heap.used=33M measure.mem.jvm.heap.committed=376M measure.mem.jvm.heap.max=376M
source=web.1 measure.mem.jvm.nonheap.used=19M measure.mem.jvm.nonheap.committed=23M measure.mem.jvm.nonheap.max=219M
source=web.1 measure.threads.jvm.total=21 measure.threads.jvm.daemon=11 measure.threads.jvm.nondaemon=1 measure.threads.jvm.internal=9
From this, you'll be able to determine where growth occurs (i.e. heap or non-heap).
If you find that the growth is all heap, you should try setting your maximum heap size (i.e. -Xmx) lower than the default. Something like -Xmx300m should be sufficient, but you can probably go lower. You could also generate some heap dumps and analyze them with a tool like Eclipse MAT if you want to pinpoint the source.
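Depending on your buildpack, the JVM options typically come from a config var (JAVA_OPTS for the Java buildpack, JVM_OPTS for Clojure apps); a hedged sketch of capping the heap and capturing a dump on OOME with standard HotSpot flags:

heroku config:set JVM_OPTS="-Xmx300m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"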
If you find that growth is in non-heap, then use some of the other steps described in the troubleshooting guide to determine if it's Metaspace or something else. If it is not in Metaspace, then you may have a native memory leak, which can result from leaving file handles or buffers open.
Have you tried to reproduce the problem locally? If you do so, it will be easier to dig a little deeper using tools like VisualVM and jmap.
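For example, with the app running locally you can take a heap dump with the standard JDK tools and open it in VisualVM or Eclipse MAT:

jps -l                                           # find the JVM's pid
jmap -dump:live,format=b,file=heap.hprof <pid>   # write a heap dump of live objects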
Finally, you can open a support ticket with Heroku. They can look at things like smaps to help with non-heap/native-memory leaks.
My program detects a memory leak when closed and I'd like to resolve it. I've replaced my only usage of new with a smart pointer, but I am using quite a few bare char[] and can't really afford the time to go through each pointer I'm using in the project.
Can I identify the cause of a memory leak from this heap snapshot?
Is there a way to download this file and search through it for that particular memory location?
Am I wasting my time trying to diagnose this issue because Windows will clean up the memory anyways?
Can I identify the cause of a memory leak from this heap snapshot?
Typically not. You would need at least two snapshots and compare them to find the difference. You would then walk through the code that ran between those two snapshots and see where the difference comes from.
This requires a lot of discipline and closely following the steps of memory analysis:
1. Perform the steps in question once. This ensures that any lazy initialization is done.
2. Take a snapshot.
3. Perform the steps in question again. Make sure you return to the same point as before.
4. Take a snapshot.
5. Compare the snapshots.
The result may still include false positives, e.g. output in the log window (20 lines before, 40 lines after).
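If you build with the Visual C++ debug CRT, the same two-snapshot workflow can also be driven from code; a minimal sketch, assuming a debug build (run_scenario is a hypothetical stand-in for the steps in question):

#include <crtdbg.h>

void run_scenario();   // hypothetical: the steps under investigation

void compare_snapshots()
{
    _CrtMemState before, after, diff;

    run_scenario();               // warm-up pass, so lazy initialization is done
    _CrtMemCheckpoint(&before);   // snapshot 1
    run_scenario();               // the steps in question, back to the same point
    _CrtMemCheckpoint(&after);    // snapshot 2

    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);   // dump what grew between the snapshots
}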
Is there a way to download this file and search through it for that particular memory location?
I don't know if that's built into Visual Studio. But you could of course create a crash dump file with full memory and then look it up there.
Am I wasting my time trying to diagnose this issue because Windows will clean up the memory anyways?
IMHO you're not wasting your time. An application with a memory leak may run out of address space, crash, and cause data loss for the user. You'll find the problem and become a better programmer, and you'll learn something that will also help you in other programming languages.
But other than that, you're right: Windows will free the memory when the process terminates. So restarting the program will likely help, and that may serve as a workaround for customers for a few months.
I'm looking for an option similar to -Xmx in Java, that is, a way to cap the maximum runtime memory my Go application can use. I was checking the runtime package, but I'm not entirely sure that is the way to go.
I tried setting something like this with SetMaxStack() (likely very stupid):
debug.SetMaxStack(5000000000) // bytes (note: this caps a single goroutine's stack, not the process heap)
model.ExcelCreator()
The reason I want to do this is that there is currently an ample amount of RAM available, but the application won't consume more than 4-6% of it. I might be wrong here, but this could be forcing GC to happen much more often than needed, leading to performance issues.
What I'm doing
Getting a large dataset from an RDBMS and processing it to write out an Excel file.
Another reason why I am looking for such an option is to limit the maximum usage of RAM on the server where it will be ultimately deployed.
Any hints on this would be greatly appreciated.
The current stable Go (1.10) has only a single knob that can be used to trade memory for lower CPU usage by the garbage collection the Go runtime performs.
This knob is called GOGC, and its description reads:
The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package's SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
So basically setting it to 200 would supposedly double the amount of memory the Go runtime of your running process may use.
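For illustration, a minimal sketch of changing it at run time (the same effect as launching the process with GOGC=200 in the environment):

package main

import "runtime/debug"

func main() {
	// SetGCPercent returns the previous setting, in case you want to restore it.
	old := debug.SetGCPercent(200)
	_ = old
	// ... run the workload ...
}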
Having said that, I'd note that the Go runtime actually tries to adjust the behaviour of its garbage collector to the workload of your running program and the CPU processing power at hand.
I mean that normally there's nothing wrong with your program not consuming lots of RAM: if the collector happens to sweep the garbage fast enough without hampering performance in a significant way, I see no reason to worry. Go's GC is one of the most intensely fine-tuned parts of the runtime, and it works very well in fact.
Hence you may try to take another route:
1. Profile the memory allocations of your program.
2. Analyze the profile and try to figure out where the hot spots are, and whether (and how) they can be optimized. You might start here and continue with the gazillion other intros to this stuff.
3. Optimize. Typically this amounts to making certain buffers reusable across different calls to the same function(s) consuming them, preallocating slices instead of growing them gradually, using sync.Pool where deemed useful, etc. (see the sketch below). Such measures may actually increase the memory truly used (that is, by live objects, as opposed to garbage), but they may lower the pressure on the GC.
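(For the profiling step, the standard tooling is the runtime/pprof package and go tool pprof.) Here's a minimal sketch of the last point, assuming an Excel-ish row-rendering workload; renderRow and process are hypothetical names:

package main

import (
	"bytes"
	"sync"
)

// bufPool reuses scratch buffers across calls instead of allocating
// a fresh one per row, lowering the pressure on the GC.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func renderRow(row []string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset()
	for _, cell := range row {
		buf.WriteString(cell)
		buf.WriteByte('\t')
	}
	return buf.String()
}

func process(rows [][]string) []string {
	out := make([]string, 0, len(rows)) // preallocate instead of growing gradually
	for _, r := range rows {
		out = append(out, renderRow(r))
	}
	return out
}

func main() {
	_ = process([][]string{{"a", "b"}, {"c", "d"}})
}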
So I have autoreleased/released every object that I alloc/init/copy, and the Allocations instrument seems to show minimal leaks. However, my program's memory usage does not stop increasing. I have included a screenshot of my Allocations run (I have run Allocations for longer, but it remains relatively constant; it certainly does not compare to the amount the program gains when actually running). When actually running, my program will double in memory over the course of about 10 hours. The memory drastically increases in the first 5 minutes, however (2-3 MB), and just keeps on going. I don't understand why allocations would remain constant when running in Instruments but my program would just keep gaining memory when actually run.
Since I can't post images yet...here is the link to the screenshot:
allocations run
UPDATE: Here are some screenshots from my memory heapshot analysis. I am not allocating these objects explicitly and don't really know where they are coming from. Almost all of them have a source similar to the details shown on the right of the second screenshot (lots of HTTPs and URLs in the call tree). Does anybody know where these are coming from? I know I've read about some NSURLConnection leaks, but I have tried all of the cache clearing those suggest, to no avail. Thanks for all the help so far!
memory heap analysis 1
memory heap analysis 2
Try heapshots.
Are you running with different environment variables when you run in different environments?
For example, you could have NSZombie enabled when you launch your app (causing all your objects to never be freed) but not when you run in Instruments?
Just for a sanity check - how are you determining memory usage? You say that memory usage keeps going up, but not when you run in Instruments. Given that Instruments is a reliable way of measuring memory usage (the most reliable way?), this sounds a little odd; a bit like saying memory keeps going up except when I try to measure it.
If you are using autoreleased objects (like [NSString stringWithFormat:]) in a loop the pool won't be drained until that loop is exited and the program is allowed to complete the main event loop, at which point the autorelease pool is drained and a new one is instantiated.
If you have code like this the solution is to instantiate a new auto release pool before entering your loop, then draining it periodically during your loop (and reinstantiating the auto release pool after you drain it).
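A minimal pre-ARC sketch of that pattern (items and the per-item work are placeholders):

#import <Foundation/Foundation.h>

void processAll(NSArray *items)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSUInteger i = 0;
    for (id item in items) {
        NSString *desc = [NSString stringWithFormat:@"%@", item]; // autoreleased
        (void)desc; // placeholder for real per-item work
        if (++i % 1000 == 0) {
            [pool drain];                             // drain periodically mid-loop
            pool = [[NSAutoreleasePool alloc] init];  // and reinstantiate
        }
    }
    [pool drain]; // final drain
}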
You can use Instruments to find out where your allocations originate. When running Instruments in the Allocation mode:
1. Move your mouse over the Category field in the Object Summary.
2. Click on the grey circle with an arrow that appears next to the field name.
This will bring up a list of the locations where the objects in that category have been instantiated, along with stats on how many allocations each has made.
If your memory usage is rising (but not leaking) you should be able to see where that memory was created, and then track down why it is hanging around.
This tool is also very useful in reducing your memory profile for mobile applications.
I have a J2EE project running on JBoss with a maximum heap size of 2048m, which is giving strange results under load testing. I've benchmarked the heap and CPU usage and received the following results (series 1 is heap usage, series 2 is CPU usage):
It seems as if the heap is being used and garbage collected properly around A. When it gets to B, however, there appears to be some kind of bottleneck: there is heap space available, but it never breaks that imaginary line. At the same time, at C, the CPU usage drops dramatically. During this period we also receive an "OutOfMemoryError (GC overhead limit exceeded)", which does not make much sense to me as there is heap space available.
My guess is that there is some kind of bottleneck, but what exactly I can't even imagine. How would you suggest going about finding the cause of the issue? I've profiled the memory usage and noticed that there are quite a few instances of the one class (around a million), but the total size of these instances is fairly small (around 50MB if I remember correctly).
Edit: The server is dedicated to this application, and the CPU usage given is only for the JVM (there should not be any significant CPU usage outside of the JVM). The memory usage is only for the heap; it does not include the permgen space. This problem is reproducible. My main concern is the limit encountered around B, for which I have not found a plausible explanation yet.
Conclusion: Turns out this was caused by a bunch of long-running SQL queries being called concurrently. The returned ResultSets were also very large, possibly explaining the OOME. I still have no reasonable explanation for why there appears to be some limit at B.
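For what it's worth, one common way to keep large ResultSets from being materialized on the heap all at once is to stream them with a forward-only cursor and a fetch-size hint. A hedged Java sketch (dataSource, the query, and handleRow are placeholders, and whether setFetchSize actually streams depends on the JDBC driver):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StreamingQuery {
    static void streamRows(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT ...", // placeholder query
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            stmt.setFetchSize(500); // hint the driver to fetch rows in chunks
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    handleRow(rs); // placeholder: consume and discard each row
                }
            }
        }
    }

    static void handleRow(ResultSet rs) { /* placeholder */ }
}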
From the error message it appears that the JVM is using the parallel scavenger algorithm for garbage collection. The message is dumped along with an OOME error when a lot of time is spent on GC, but not a lot of the heap is recovered.
The document from Sun does not specify if the 98% of the total time consumed is to be read as 98% of the CPU utilization of the process or that of the CPU itself. In either case, I have to draw the following inferences (with limited information):
The garbage collector or the JVM process does not have enough CPU utilization, most likely due to other processes consuming CPU at the same time.
The garbage collector does not have enough CPU utilization since it is a low priority thread, and another memory intensive (but not CPU intensive) thread in the JVM is doing work at the same time, which results in the failure to de-allocate memory.
Based on the above inferences (all, one or none of them could be true), it would be worthwhile to correlate the graph you've obtained with the runtime behavior of the application as far as users are concerned. In other words, you might find it useful to determine whether other processes are kicked off when the problem occurs, or which part of the application is in operation when it occurs.
In any case, the page referenced above does give an option to disable the GC overhead limit used by the GC algorithm.
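That option is the UseGCOverheadLimit flag; disabling it only suppresses the early OOME, it does not fix the underlying GC thrashing:

java -XX:-UseGCOverheadLimit -Xmx2048m -jar yourapp.jar   # yourapp.jar is a placeholder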
EDIT: If the problem occurs periodically and can be reproduced, it might turn out to be a memory leak; otherwise (i.e. if it occurs sporadically), you are better off tuning the GC algorithm or even changing it.
If I want to know where the "bottlenecks" are, I just get a few stackshots. There's no need to wonder and guess and play detective. They will just tell you.
Usually memory problems and performance problems go hand in hand, so if you fix the performance problems, you will also fix the memory problems (not for certain, though).
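On a JVM, a low-tech way to take such stackshots is repeated thread dumps under load, using the tools that ship with the JDK:

jps -l          # find the JVM's pid
jstack <pid>    # dump every thread's stack; take several while the app is slow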
I know that memory usage is a very complex issue on Windows.
I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application.
One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that after a while of running, memory allocated and then freed by the heap was never returned in the ullAvailVirtual number. After hours of intensive work, the ullAvailVirtual number would no longer accurately report the amount of memory available.
However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that".
So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space adding up all regions that weren't MEM_FREE and returned a value for GetMappedFileNameA(). My thinking was that the PageFileUsage was essentially 'private bytes' so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using.
This second method seems to (sorta) work, at least it doesn't cause crashes like the heap walker method. However, when both methods are enabled, the values are not the same. So one of the methods is wrong.
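For reference, a stripped-down sketch of that second walk (error handling and the GetMappedFileNameA() classification are omitted for brevity):

#include <windows.h>

// Sum the sizes of all regions in the target's address space
// that are not MEM_FREE (i.e. committed or reserved).
SIZE_T SumUsedRegions(HANDLE process)
{
    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T total = 0;
    ULONG_PTR addr = 0;

    while (VirtualQueryEx(process, (LPCVOID)addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State != MEM_FREE)
            total += mbi.RegionSize;
        addr = (ULONG_PTR)mbi.BaseAddress + mbi.RegionSize;
    }
    return total;
}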
So, StackOverflow world...how would you implement this?
Which method is more promising, or do you have a third, better method?
Should I go back to the original method, and further debug the odd errors?
Should I stay away from walking the heap every 5-10 seconds?
Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color based system (red, yellow, green, which I could base on more factors than just a single number)
Yes, the Windows memory manager was optimized to fulfill requests for memory as quickly and efficiently as possible; it was not optimized to make it easy to measure how much space is used. The first downfall is that heap blocks that are released are rarely unmapped. They are simply marked as "free", to be used by the next allocation. That's why VirtualQueryEx() cannot work.
The problem with HeapWalk is that you have to lock the heap (HeapLock) so that it can walk it without the heap allocation changing. That lock can have very detrimental side-effects. Quoting:
Walking a heap may degrade performance, especially on symmetric multiprocessing (SMP) computers. The side effects may last until the process ends.
Even then, the number you get back is pretty meaningless. A program never runs out of free space; it runs out of a large enough contiguous chunk of memory to fulfill the request. No happy answers, I'm afraid. Except one: a 64-bit operating system costs less than two hundred bucks.
The place to start is probably GetProcessMemoryInfo(). This fills in a structure for you that has, among other things, the current working set in bytes.
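A minimal example of reading it (link against psapi.lib):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;

    // Working set and pagefile usage for the current process.
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("working set:   %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
        printf("pagefile used: %lu KB\n", (unsigned long)(pmc.PagefileUsage / 1024));
    }
    return 0;
}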
Have a look at the following article: .NET and running processes.
It uses WMI to check the memory usage of processes, specifically using the System.Diagnostics.Process class.
And here is another link on how to use WMI in C#: WMI Made Easy for C#.
Hope this helps.