Go GC strange behavior

We have several services written in Go. After running for several weeks, we find that occasionally the heap size keeps increasing. We added some instrumentation and found that this is due to a reduction in GC frequency.
We are using Go 1.8. I am not sure whether this is a bug in the Go GC or whether something we have written is triggering this behavior. Any pointers or tips would be helpful.

Never guess at the cause of a performance problem before profiling. The Go diagnostics documentation lists the available instruments: https://golang.org/doc/diagnostics.html#profiling. Running with GODEBUG=gctrace=1 prints a summary line for every collection (heap sizes and timings, so you can see how often the GC actually runs), and the pprof heap profiler shows what is keeping the heap alive; together they should tell you whether GC frequency really dropped and what is growing.

Related

Profiling a Java EE Application

What does a Profiler do exactly?
I ran the JProbe profiler on my Java EE Application.
For now I selected Performance Analysis only. When I investigated the code, it showed how many times each method was called and how much time each took, which gave me a clear view of those things.
Now my question is: what, in general, does a profiler do exactly? The only thing it seems to do is show how many times a method is called and how much time each method took.
Does profiling a Java EE application really mean only this (as far as performance analysis is concerned)?
A profiler can tell you lots of useful things, in addition to traces and method timings:
The state of the heap and its generations in real time: perm, eden, etc.
Created threads and their states
CPU usage
Number of instances of each class
I like to use VisualVM 1.3.3 with all the plugins installed. I use the Oracle/Sun JVMs, so it works for me.
Just to add: profiling is a general concept, not restricted to Java or Java EE.
It is a dynamic analysis of your program.
What does it do? You can see for yourself by checking any profiler application.
How does it help? It helps you optimize your program and troubleshoot cases such as out-of-memory errors and deadlocks, which is not possible with routine debugging techniques (printlns and a debugger).

Side Effects of running the JVM in debug mode

I'd like to release a Java application in debug mode to allow for easier debugging when random or hard-to-reproduce problems occur on the customer side.
However, I want a heads-up on the potential side effects of doing this. From the Java HotSpot documentation it seems that there should be no performance penalty.
From the link
Full Speed Debugging
The Java HotSpot VM now uses full-speed debugging. In previous version of the VM, when debugging was enabled, the program executed using only the interpreter. Now, the full performance advantage of HotSpot technology is available to programs, even with compiled code. The improved performance allows long-running programs to be more easily debugged. It also allows testing to proceed at full speed. Once there is an exception, the debugger launches with full visibility to code sources.
Is this accurate, or are there hidden caveats? What about the memory footprint, and are there any other gotchas when using debug mode?
PS: I found this article from AMD which confirmed my initial suspicion that the original article from Oracle doesn't show the full story.
I can't speak for HotSpot, and won't officially for IBM, but I will say there are certainly legal kinds of optimization that aren't possible to undo fully should a decompilation be required in the middle of them, and thus aren't enabled when debug is being asked for in the production JVMs you are likely to use.
Imagine a situation where the optimizer discovers that a part of the program is provably not required and, by the various language rules (including JSR 133), is legal to remove; the JVM will want to get rid of it. The one wrinkle is debug: removing the code will look odd to the human stepping through it (variables not updating, possibly not stopping on lines when stepping), so the choice is to disable said optimizations in those cases. The same might also be true for opts like stack-allocated objects, etc., so while the JVM says it's "full speed", it's actually closer to "nearly full speed, with some of the funkier opts that can't quite be undone removed".
This question is old but came up while I was searching for any performance impact if you just leave -agentlib:jdwp... on but are not actively debugging.
Summary: Starting with debugging options but not connecting shouldn't impact the speed now (Java 7+).
Before Java 6 (ish) you used -Xdebug, and this had a definite impact: it shut off the JIT!
In Java 6 they changed it to -agentlib and made it better. There were some bugs, though, that did cause a performance penalty. Here is one of the bugs that was filed against OpenJDK; my guess is that there were similar problems with the Oracle/Sun version: https://bugs.openjdk.java.net/browse/JDK-6902182
Note however that the stated goal is that simply enabling debugging by opening the port should not cause any performance penalty.
It looks like, at least in OpenJDK, the bugs were cleaned up by Java 7. I didn't see anything about performance impacts after that.
If you research this further and find negative results, take note of the Java version the testing was done under; everything I saw referred to versions before 7.
I'd love to hear if anyone encountered performance problems in a recent VM just leaving the port enabled.
If you plan to run the app with remote debugging enabled, it can affect security also. Remote debugging leaves a port open on your machine, and by connecting to it, I can do all sorts of fun things with your application.
The program definitely does a lot more than simply run when in debugging mode, so it is obvious that performance cannot be the same. However, if you read the statement carefully, it says that the new release can run fully optimized code even in debugging mode, which was not possible earlier. Thus the new JVM is much faster than the previous one, which could only run in interpreted mode with no optimization.

Can you start and stop BoundsChecker (DevPartner)?

I'm trying to use BoundsChecker to analyze a rather complex program. Running the program with BoundsChecker is almost too slow for it to be of any use, since it takes me almost a day to run the program to the point in the code where I suspect the issue exists. Can anyone give me some ideas for how to inspect only certain parts of my software using BoundsChecker (DevPartner) in Visual Studio 2005?
Thanks in advance for all your help!
I last used BoundsChecker a few years ago, and had the same problems. With large projects, it makes everything run so slowly that it is useless. We ended up ditching it.
But we still needed some of its functionality, though like you, not for the whole program. So we had to do it ourselves.
In our case, we mainly used it to try and track down memory leaks. If that's your objective as well, there are other options.
1. Visual Studio does a pretty good job of telling you about memory leaks when your program exits.
2. It reports leaks in the order that they were created.
3. It will tell you exactly where the leaked memory was created if your source files have this at the top:
#ifdef _DEBUG
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#define new DEBUG_NEW   // MFC's DEBUG_NEW records file/line for each allocation, so the leak dump can show where it came from
#endif
Those help a lot, but they're often not enough. Adding that snippet everywhere isn't always feasible. If you use factory classes, knowing where memory was allocated doesn't help at all. So when all else fails, we take advantage of #2.
Add something like the following:
#define LEAK(str) { char* s = (char*)malloc(100); strcpy(s, str); }   // deliberately leaks a small tagged block
Then, pepper your code with LEAK("leak1"); or whatever. Run the program, and exit it. Your new leaked strings will display in Visual Studio's leak dump, surrounding the existing leaks. Keep adding/moving your LEAK statements and re-running the program to narrow your search until you've pinpointed the exact location. Then fix the leak, remove your debugging leaks, and you're all set!
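One more trick that pairs well with the above, sketched here on the assumption that you're using the MSVC debug CRT (the allocation number 152 below is made up for illustration): the leak dump prints each leaked block's allocation number in braces, e.g. {152}, and _CrtSetBreakAlloc makes the debugger break the next time that exact allocation happens, as long as allocation order is deterministic between runs.
#include <crtdbg.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    // Debug builds only: dump leaked CRT-heap blocks (with their allocation numbers) at exit.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    // Hypothetical: a previous run's leak dump showed "{152} normal block at ...".
    // Break into the debugger when allocation number 152 happens again.
    _CrtSetBreakAlloc(152);

    char* s = (char*)malloc(100);   // deliberate leak so the dump has something to report
    strcpy(s, "leak1");
    return 0;
}
This complements the LEAK() trick: LEAK() brackets the region of code, and break-on-allocation then drops you on the exact allocation.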
BoundsChecker tracks all memory allocations and releases in extreme detail. It knows, for instance, that such and such a memory allocation was done from the C runtime heap, which in turn was taken from a Win32 heap, which in turn started life as memory allocated by VirtualAlloc. If the application was instrumented (FinalCheck), it also has detailed information as to which pointers reference the memory.
This is one reason (of many) why the thing is slow.
If BC were to connect to an application late, it would have none of this data built up, and would have to either (1) dig it all up at once, or (2) start guessing about things. Neither solution is very practical.
One way to lighten up BoundsChecker is to exclude from instrumentation all but the few modules you are interested in. I know that's not great, because if you knew where the leak was you wouldn't need BoundsChecker. What I usually recommend is that you use BC's Active Check mode first with only Memory Tracking enabled. You miss the API Validations, but you can always rerun that separately. After you run Active Check and get clues about which modules tend to be problematic, only then do you enable instrumentation for the module or modules of interest and their dependencies.
We know Final Check is annoyingly slow, but as Mistiano correctly states, with Final Check BC keeps not only a graph of all allocated blocks but also all pointers and contexts to them. Therein lies the magic in how Final Check can nail leaks and corruptions at the point of occurrence, not just on application shutdown or fault.
Shameless plug: I work on the DevPartner team. We are releasing DPS 10.5 on February 4, 2011 with x64 application support in BC. Unlike the relatively ancient and undersold BC64 for Itanium, which only provided Active Check, DPS 10.5 provides full Final Check support for x64 applications, both for pure C++ and for native modules running in .NET processes. See microfocus.com under MF Developer for details.

ReSharper sluggishness

I like ReSharper, but it is a total memory hog. It can quickly swell up and consume a half-gig of RAM without too much effort and bog down the IDE. Does anybody know of any way to configure it to be not as slow?
Turn off the on-the-fly compilation (which, unfortunately, is one of its best features)
The next release, 4.5, is going to be based around performance and memory footprint.
See Ilya Ryzhenkov's blog.
ReSharper 4.5 has been released.
From my experience it is less of a memory hog, but I can still run out of memory.
I had an issue where it was taking upwards of 10 minutes to load a solution of 100+ projects. Once loaded, VS performance would be OK, though it would oddly flutter back and forth between OK and really bad.
The short answer: Eliminating Resharper warnings seems to improve overall VS/R# performance.
The biggest problem ultimately was that we had a number of files of binary data (encrypted stuff) being included as embedded resources, which happened to have .xml extensions. Resharper was trying really really hard to analyze those files. Eventually it'd get through but would generate 100K+ errors in the process. Changing the extension to one Resharper did not automatically analyze (.bin in this case) solved the problem.
We still have about 10 files which when they or a file they depend on is edited performance tanks for a while. These files are the partial parts of a single class definition where each file averages 3000 LOC. Yes, that's right, it's about a 30K line class. It also happens to be rather poor code for other reasons, many of which Resharper flags making the right hand gutter bar practically a solid orange line. Editing often causes Resharper to reanalyze the whole thing. While that analysis runs, performance is noticeably affected.
I've come to the conclusion that the fewer errors/warnings there are for R# to identify, the better it performs. My anecdotal evidence, gathered while cleaning up/refactoring this project, seems to support that.
A lot of folks complain of perf problems with Resharper. If you have even a few big ugly code files with lots of Resharper warnings, then a little time spent cleaning that code up might yield better performance overall. It has for us.
Not sure how big your solutions are, but I stopped using 4.5 for the same reasons I stopped using all previous versions, memory usage.
Code analysis and unit test support were the main reasons I bought it; turning them off means the rationale for using it is gone.
Workstation has 4GB of memory, and I can easily kill it with ReSharper when running our end-to-end stack in debuggers.
You can see how much memory ReSharper uses:
ReSharper -> General -> Show managed memory usage in status bar.
If you are working on large source files, ReSharper does get sluggish (I'm on version 5.0 at the time of writing this).
You can view the memory usage of Resharper by clicking on Resharper options -> General -> Show memory use in status bar.
When I first did this, I noticed ReSharper had clocked up hundreds of megabytes of memory usage! However, the next step worked for me in (temporarily) fixing the sluggishness:
Right-click the memory usage and select "Collect garbage"; this seemed to fix the sluggishness for me straight away.
Regarding memory hogging - I've found that my VS2008 memory footprint grows every time I close one solution and open another. This is true even if I close a solution and re-open that same solution.
The new ReSharper 4.5 works a lot better than the previous 4.x releases. I would recommend you try that one.
In previous versions I had the same problem; when 4.0 came out these problems seemed to go away. Now with 4.1 I do not feel the huge slowdown I used to have. My IDE does not freeze up anymore.
Have you tried upgrading?
Try the 4.5 beta. 4.1 was killing my 2GB dev machine, but it's back to running incredibly smoothly with the beta. Others have had the opposite experience, though, so YMMV.
Yes, 4.5 works much better. My understanding is that 4.5 was to address the performance issues.
My colleagues and I are also having huge performance issues with ReSharper; just now my ReSharper took 1.1 GB of memory. Visual Studio slows down especially when writing JavaScript; it's unbearable. You can turn off the on-the-fly compilation, but it's the best feature it has...
Edit: Everybody in this thread seems to have ReSharper 4.x; my version is 6.0.

Finding GDI/User resource usage from a crash dump

I have a crash dump of an application that is supposedly leaking GDI. The app is running on XP and I have no problems loading the dump into WinDbg to look at it. Previously we have used the Gdikdx.dll extension to look at GDI information, but this extension is not supported on XP or Vista.
Does anyone have any pointers for finding GDI object usage in WinDbg?
Alternatively, I do have access to the failing program (and its stress testing suite) so I can reproduce on a running system if you know of any 'live' debugging tools for XP and Vista (or Windows 2000 though this is not our target).
I've spent the last week working on a GDI leak finder tool. We also perform regular stress testing, and it never lasted longer than a day without stopping due to User/GDI object handle overconsumption.
My attempts have been pretty successful as far as I can tell. Of course, I spent some time beforehand looking for an alternative and quicker solution. It is worth mentioning that I had some previous, semi-lucky experience with the GDILeaks tool from the MSDN article mentioned above, and that I had to solve a few problems before putting it to work; this time it just didn't give me what I wanted, or the way I wanted it. The downside of their approach is the heavyweight debugger interface (it slows down the target under investigation by orders of magnitude, which I found unacceptable). Another downside is that it did not work all the time; on some runs I simply could not get it to report or compute anything. Its complexity (judging by the amount of code) was another scare-away factor. I'm not a big fan of GUIs, as it is my belief that I'm more productive with no windows at all ;o). I also found it hard to make it find and use my symbols.
One more tool I used before setting on to write my own, was the leakbrowser.
Anyways, I finally settled on an iterative approach to achieve the following goals:
minor performance penalties
implementation simplicity
non-invasiveness (used for multiple products)
relying as much as possible on what was already available
I used Detours (non-commercial use) for the core functionality (it is an injectable DLL). I put JavaScript to use for automatic code generation (a 15K script to generate 100K of source code; no way would I code this manually, and no C preprocessor is involved!), plus a WinDbg extension for data analysis and snapshot/diff support.
To make a long story short: after I was finished, it was a matter of a few hours to collect information during another stress test, and another hour to analyze it and fix the leaks.
I'll be more than happy to share my findings.
P.S. I did spend some time trying to improve on the previous work. My intention was to minimize false positives (I've seen just about too many of those while developing), so it also checks for allocation/release consistency and avoids taking into account allocations that are never leaked.
Edit: Find the tool here
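For a rough flavor of what a Detours-based hook boils down to, here is a minimal sketch (my own illustration, not the actual tool; names such as HookCreatePen and g_liveGdiObjects are placeholders, and a real leak finder would hook many more GDI entry points and record a call stack per allocation): detour a create/delete pair from an injected DLL and keep a live count.
#include <windows.h>
#include <detours.h>

// Pointers to the original GDI entry points; Detours rewrites these to trampolines.
static HPEN (WINAPI *TrueCreatePen)(int, int, COLORREF) = CreatePen;
static BOOL (WINAPI *TrueDeleteObject)(HGDIOBJ) = DeleteObject;

static volatile LONG g_liveGdiObjects = 0;   // crude live-object counter

static HPEN WINAPI HookCreatePen(int style, int width, COLORREF color)
{
    InterlockedIncrement(&g_liveGdiObjects);
    // A real leak finder would also record CaptureStackBackTrace() here, keyed by the returned handle.
    return TrueCreatePen(style, width, color);
}

static BOOL WINAPI HookDeleteObject(HGDIOBJ handle)
{
    BOOL ok = TrueDeleteObject(handle);
    if (ok)
        InterlockedDecrement(&g_liveGdiObjects);
    return ok;
}

BOOL APIENTRY DllMain(HMODULE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreatePen, (PVOID)HookCreatePen);
        DetourAttach(&(PVOID&)TrueDeleteObject, (PVOID)HookDeleteObject);
        DetourTransactionCommit();
    }
    else if (reason == DLL_PROCESS_DETACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueCreatePen, (PVOID)HookCreatePen);
        DetourDetach(&(PVOID&)TrueDeleteObject, (PVOID)HookDeleteObject);
        DetourTransactionCommit();
    }
    return TRUE;
}
Diffing that counter, or better, the recorded per-handle stacks, between two snapshots during a stress run is what narrows a leak down to a call site.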
There was an MSDN Magazine article from several years ago that talked about GDI leaks. It points to several different places with good information.
In WinDbg, you may also try the !poolused command for some information.
Finding resource leaks from a crash dump (post-mortem) can be difficult: if it was always the same place, using the same variable, that leaks the memory, and you're lucky, you could see the last place where it will be leaked, etc. It would probably be much easier with a live program running under the debugger.
You can also try using Microsoft Detours, but the license doesn't always work out. It's also a bit more invasive and advanced.
I have created a WinDbg script for that. Look at the answer to:
Command to get GDI handle count from a crash dump
To track the allocation stack, you could set a ba (Break on Access) breakpoint past the last allocated GDICell object, to break just at the point when another GDI allocation happens. That could be a bit complex because the address changes, but it could be enough to find pretty much any leak.
