Profiling a Java EE Application - performance

What does a Profiler do exactly?
I ran the JProbe profiler on my Java EE Application.
For now I selected Performance Analysis only. When I investigated the code, it showed how many times each method was called and how much time each one took, and it gave me a clear view of those things.
Now my question is: what, in general, does a profiler do exactly? The only thing it seems to do is show how many times a method is called and how much time each method took.
Does profiling a Java EE application really only mean this? (With regard to performance analysis only.)

A profiler can tell you lots of useful things, in addition to traces and method timings:
The state of the heap and its generations in real time: perm, eden, etc.
Created threads and their states
CPU usage
Number of instances of each class
I like to use VisualVM 1.3.3 with all the plugins installed. I use the Oracle/Sun JVMs, so it works for me.
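Much of this same data is also exposed programmatically through the standard java.lang.management MXBeans, which is what many of these tools build on. A minimal sketch using only the standard JDK API (not tied to JProbe or VisualVM):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class RuntimeSnapshot {
        public static void main(String[] args) {
            // Heap usage broken down by memory pool (eden, survivor, old/tenured, perm/metaspace)
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("%-25s used=%,d max=%,d%n",
                        pool.getName(), usage.getUsed(), usage.getMax());
            }
            // Live threads and their current states
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                System.out.println(info.getThreadName() + " -> " + info.getThreadState());
            }
        }
    }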

Just to add: profiling is a general concept, not restricted to Java or Java EE.
It is a dynamic analysis of your program.
What does it do? It observes the running program and collects data about its behaviour (you can see this in any profiler application).
How does it help? It helps you optimize your program and troubleshoot cases such as out-of-memory errors and deadlocks, which is not possible with the routine debugging techniques (printlns and a debugger).
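For example, a deadlock that printlns won't reveal is exactly the kind of thing a profiler's thread view can detect, and the same check is available through the standard ThreadMXBean API. A small sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // Returns the ids of threads deadlocked on monitors or ownable synchronizers,
            // or null if there are none; this is the same data JConsole/VisualVM display.
            long[] ids = threads.findDeadlockedThreads();
            if (ids == null) {
                System.out.println("No deadlocked threads detected");
            } else {
                for (ThreadInfo info : threads.getThreadInfo(ids)) {
                    System.out.println("Deadlocked: " + info.getThreadName()
                            + " waiting on " + info.getLockName());
                }
            }
        }
    }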

Related

Why do Java-based serverless functions have a cold start if the JVM uses a JIT compiler?

Late Friday night thoughts after reading through material on how Cloudflare's V8-based "no cold start" Workers function - in short, because of the V8 engine's just-in-time compilation of JavaScript code - I'm wondering why this kind of no-cold-start serverless function seems to exist only for JavaScript.
Is this just because, architecturally, when AWS Lambda / Azure Functions were launched they were designed as a kind of even more simplified Kubernetes model, where each function exists in its own container? I would assume that was a simpler model for keeping different clients' code separate than whatever magic sauce V8 isolates provided under the hood.
So given that Java is compiled into bytecode for the JVM, which uses JIT compilation (when it doesn't outright optimise and compile certain high-usage functions to machine code), is it therefore also technically possible to have no-cold-start Java serverless functions? As long as there is some way to load in each client's bytecode as it is invoked, on the cloud provider's server.
What are the practical challenges for this to become a reality? I'm not a big expert on all this, but can imagine perhaps:
The compiled bytecode isn't designed to be loaded in this way - it expects to be the only code being executed in a JVM
JVM optimisations aren't written to support loading multiple short-lived functions, and treat all loaded code as one massive program
The JVM, once started, doesn't support loading additional bytecode.
In principle, you could probably develop a Java-centric serverless runtime in which individual functions are dynamically loaded on-demand, and you might be able to achieve pretty good cold-start time this way. However, there are two big reasons why this might not work as well as JavaScript:
While Java is designed for JIT compiling, it has not been optimized for startup time nearly as intensely as V8 has. Today, the JVM is most commonly used in large always-on servers, where startup speed is not that important. V8, on the other hand, has always focused on a browser environment where code is downloaded and executed while a user is waiting, so minimizing startup latency is critical. (It might actually be interesting to look at an alternative Java runtime like Android's Dalvik, which has had much more reason to prioritize startup speed. Maybe it could be the basis of a really fast Java serverless environment!)
Security. V8 and other JavaScript runtimes have been designed with hostile code in mind from the beginning, and have had a huge amount of security research done on them. Java tried to target this too, in the very early days, with "applets", but that usage of Java never caught on. These days, secure sandboxing is not a major concern of Java. Because of this, it is probably too risky to run multiple Java apps that don't trust each other within the same container. And so, you are back to starting a separate container for each application.
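For what it's worth, the "load each client's bytecode on demand" part of the question is not the obstacle: the JVM has supported dynamic class loading from the start. A minimal sketch of that mechanism (the jar path, class name, and handle method are hypothetical placeholders); as the answer above notes, the hard part is isolating untrusted tenants, not loading their code:

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class FunctionLoader {
        public static void main(String[] args) throws Exception {
            // Hypothetical: each client's function ships as a jar containing a class
            // with a static handle(String) method. Names and paths are placeholders.
            URL jar = new URL("file:///functions/client-a.jar");
            try (URLClassLoader loader = new URLClassLoader(new URL[] { jar })) {
                Class<?> handler = loader.loadClass("com.example.clienta.Handler");
                Method handle = handler.getMethod("handle", String.class);
                Object result = handle.invoke(null, "{\"event\":\"ping\"}");
                System.out.println(result);
            }
        }
    }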

Java 7 vs Java 5 garbage collection

We are planning to migrate our enterprise application, currently running on a Java 5 stack, to a Java 7 stack. We are having issues with implicit GC calls (mainly major GCs) causing the system to be unstable for a short time (ranging from 5 to 30 minutes). After analyzing the GC stats, we found that the compact phase takes quite a long time to complete compared to the mark and sweep phases. I understand compaction is quite complex and time-consuming, but it is impacting the customer-facing app server, and a few connections get dropped during this phase.
Now, my question is as we are migrating to Java7, is there a better garbage collection process compared to Java5?
App servers are provided with decent system resources.
Each app server has 32 CPU cores
and 64 GB RAM
The app server is IBM WebSphere
Operating system: 64-bit IBM AIX
As said earlier, GC is triggered implicitly by the JVM; there are no explicit System.gc() calls invoking it.
Now, my question is as we are migrating to Java7, is there a better garbage collection process compared to Java5?
Generally yes, though as @Pushkar said, you should really be migrating to Java 8.
With respect to the specifics of your application(s), it sounds like you need to tune (or retune) the garbage collection on Java 5. If you are periodically experiencing 5 to 30 >>minutes<< of instability due to GC, there is something rather wrong. The current behavior may be due to your application or WebSphere (e.g. memory leaks, excessive caching, etc.), or it may be due to poor GC tuning.
In short, switching to Java 7 (or 8) might make things better "out of the box", but it is likely that you will need to put in more effort to address the underlying cause of your problems.
Finally, I'd advise the obvious things.
Implement the changes in small steps. Don't upgrade your app, WebSphere version, Java version, etc. all at the same time.
Do the upgrades of your servers one at a time. Have a roll-back plan in case you get unacceptable performance.
If possible test it all first ... including performance / load testing.
By default, Java 7 uses the parallel GC on server-class machines. If you are using JDK 7 update 4 or a later version, switch to the G1 garbage collector, which might give you better performance. But as @the8472 suggested, it would be good to know what settings you used in Java 5 and now in your current environment.
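If you do try G1, the usual HotSpot flags are -XX:+UseG1GC (optionally with a pause target such as -XX:MaxGCPauseMillis=200); note that the IBM JVM used by WebSphere has its own -Xgcpolicy options instead, so treat this as a HotSpot-only sketch. You can confirm which collectors the running JVM actually picked with a quick check like this:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcInfo {
        public static void main(String[] args) {
            // Lists the collectors the running JVM selected, e.g. "G1 Young Generation" /
            // "G1 Old Generation" when -XX:+UseG1GC is in effect on HotSpot.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + ": collections=" + gc.getCollectionCount()
                        + ", totalTimeMs=" + gc.getCollectionTime());
            }
        }
    }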
Java 7 reached end of life around April 2015. Why not migrate to 1.8?
GC performance usually improves with major Java releases (and in some cases with minor releases).
You should take a look at the different GC tuning flags; the following links may help you:
http://www.oracle.com/technetwork/articles/java/vmoptions-jsp-140102.html
http://stas-blogspot.blogspot.com/2011/07/most-complete-list-of-xx-options-for.html

Side Effects of running the JVM in debug mode

I'd like to release a Java application in debug mode to allow for easier debugging when random or hard-to-reproduce problems occur on the customer side.
However, I want a heads-up on the potential side effects of doing this. From the Java HotSpot documentation it seems that there should be no performance penalty.
From the link
Full Speed Debugging
The Java HotSpot VM now uses full-speed debugging. In previous versions of the VM, when debugging was enabled, the program executed using only the interpreter. Now, the full performance advantage of HotSpot technology is available to programs, even with compiled code. The improved performance allows long-running programs to be more easily debugged. It also allows testing to proceed at full speed. Once there is an exception, the debugger launches with full visibility to code sources.
Is this accurate, or are there hidden caveats? What about the memory footprint, and are there any other hidden gotchas when using debug mode?
PS: I found this article from AMD which confirmed my initial suspicion that the original article from Oracle doesn't show the full story.
I can't speak for HotSpot, and won't officially for IBM, but I will say there are certainly legal kinds of optimization that aren't possible to undo fully should a decompilation be required in the middle of them, and thus aren't enabled when debug is being asked for in the production JVMs you are likely to use.
Imagine a situation where the optimizer discovers that a part of the program is provably not required and, by the various language rules (including JSR 133), is legal to remove; the JVM will want to get rid of it. The one wrinkle is debug: removing the code will look odd to the human stepping through it (variables not updating, possibly not stopping on lines when stepping), so the choice is to disable said optimizations in those cases. The same might also be true for opts like stack-allocated objects, etc., so while the JVM says it's "full speed" it's actually closer to "nearly full speed, with some of the funkier opts that can't quite be undone removed".
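As a concrete (made-up) illustration of the kind of code a JIT may legally remove but a person stepping through the source expects to see:

    public class DebugVsOptimize {
        static int compute(int x) {
            int unused = x * 31;            // provably unused: the JIT may eliminate it entirely,
                                            // but someone stepping here expects to watch it
            Object scratch = new Object();  // never escapes: eligible for scalar replacement /
                                            // stack allocation, which a debugger view can't show
            return x + 1;
        }

        public static void main(String[] args) {
            System.out.println(compute(41));
        }
    }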
This question is old but came up while I was searching for any performance impact if you just leave -agentlib:jdwp... on but are not actively debugging.
Summary: Starting with debugging options but not connecting shouldn't impact the speed now (Java 7+).
Before Java 6 (ish) you used -Xdebug, and this had a definite impact: it shut off the JIT!
In Java 6 they changed it to -agentlib and made it better. There were some bugs, though, that did cause a performance penalty. Here is one of the bugs that was filed against OpenJDK; my guess is that there were similar problems with the Oracle/Sun version: https://bugs.openjdk.java.net/browse/JDK-6902182
Note however that the stated goal is that simply enabling debugging by opening the port should not cause any performance penalty.
It looks like, at least in OpenJDK, the bugs were cleaned up by Java 7. I didn't see anything about performance impacts after that.
If you research this further and find negative results, take note of the Java version the testing was done under - everything I saw was referring to versions before 7.
I'd love to hear if anyone encountered performance problems in a recent VM just leaving the port enabled.
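For reference, the scenario discussed here (agent enabled, nobody attached) is typically set up on Java 7+ with something like the following; suspend=n means the JVM starts normally and just listens on the port, and yourapp.jar is a placeholder:

    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar yourapp.jar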
If you plan to run the app with remote debugging enabled, it can affect security also. Remote debugging leaves a port open on your machine, and by connecting to it, I can do all sorts of fun things with your application.
The program definitely does a lot more than simply run when in debugging mode, so it is obvious that performance cannot be the same. However, if you read the statement carefully, it says that the new release can run fully optimized code even in debugging mode, which was not possible earlier. Thus the new JVM is much faster than the previous one, which could only run in interpreted mode with no optimization.

What rarely used debugging tools have you found useful?

What rarely used debugging tools have you found useful?
My recent debugging situation in Visual Studio required trapping a breakpoint in a freshly built 32-bit DLL, which was loaded by a GUI-less executable, which was spawned by a COM+ server on a remote x64 machine, which was called through RPC from the actual GUI. As usual, everything worked well on an all-32-bit machine, but kept failing on any "machine other than the development one". So remote debugging was inevitable.
So after two days of scratching my head (and beating it against the wall), I added a 10-second delay to the DLL attach entry point and used the Microsoft Remote Debugger, which I had never used before. It saved my day.
Another favorite: the Java JMX console as a performance "debugging" tool. You can see all threads and a memory chart, and take a snapshot of any thread's stack any time you click. Clicking several times helps you find exactly what is slow in a J2EE application.
Process Monitor and other Mark Russinovich tools.
A logic analyzer plugged into the CPU pins and able to disassemble the executed code. I tracked down a bug in the boot sequence of an embedded system with it.
I find printf to be the most useful.
These - in my experience at least - do not seem to be the intuitive first choice for many when debugging apps that access a database (i.e. the majority), though perhaps they should be:
SQL Profiler (SQL Server)
TKPROF (Oracle)
Another interesting combination was using Eclipse running in a virtual machine, accessing a remote server and attaching to the Tomcat process there; and doing it from two different machines to debug two different packages simultaneously.
An all-time favorite is depends.exe (Dependency Walker), for finding out why a DLL or EXE is not starting: http://dependencywalker.com/
For performance, at my former job we used to have really simple-to-use C++ macros that collected statistics on runtime function calls. This is so much better than a profiler, because you can use it from your regular IDE, and it allows you to zoom in on the code you are optimizing.
In my new job, I wrote a C# version of the same idea.
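Those versions were C++ and C#; a rough Java sketch of the same idea (all names made up, single-threaded for brevity) might look like:

    import java.util.HashMap;
    import java.util.Map;

    // Tiny call-statistics helper: counts invocations and accumulates elapsed
    // nanoseconds per label, roughly what the macros described above did.
    public final class CallStats {
        private static final Map<String, long[]> STATS = new HashMap<String, long[]>(); // {count, nanos}

        public static long start() {
            return System.nanoTime();
        }

        public static void stop(String label, long startNanos) {
            long elapsed = System.nanoTime() - startNanos;
            long[] s = STATS.get(label);
            if (s == null) {
                s = new long[2];
                STATS.put(label, s);
            }
            s[0]++;
            s[1] += elapsed;
        }

        public static void dump() {
            for (Map.Entry<String, long[]> e : STATS.entrySet()) {
                System.out.printf("%-30s calls=%d totalMs=%.2f%n",
                        e.getKey(), e.getValue()[0], e.getValue()[1] / 1e6);
            }
        }
    }

Usage is just long t = CallStats.start(); doWork(); CallStats.stop("doWork", t); with a CallStats.dump() at the end of the run.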
WinDbg and other lower level debuggers are the ultimate weapon if you know the tricks and tips.
For Windows/.NET development I am always using DebugView and ILDASM.

Finding GDI/User resource usage from a crash dump

I have a crash dump of an application that is supposedly leaking GDI objects. The app is running on XP and I have no problems loading the dump into WinDbg to look at it. Previously we have used the Gdikdx.dll extension to look at GDI information, but this extension is not supported on XP or Vista.
Does anyone have any pointers for finding GDI object usage in WinDbg?
Alternatively, I do have access to the failing program (and its stress testing suite) so I can reproduce on a running system if you know of any 'live' debugging tools for XP and Vista (or Windows 2000 though this is not our target).
I've spent the last week working on a GDI leak finder tool. We also perform regular stress testing, and the app never lasted longer than a day's worth without stopping due to user/GDI object handle overconsumption.
My attempts have been pretty successful as far as I can tell. Of course, I spent some time beforehand looking for an alternative and quicker solution. It is worth mentioning that I had some previous, semi-lucky experience with the GDILeaks tool from the MSDN article mentioned in another answer. Not to mention that I had to solve a few problems before putting it to work, and even then it just didn't give me what I wanted, or how I wanted it. The downside of their approach is the heavyweight debugger interface (it slows down the target under investigation by orders of magnitude, which I found unacceptable). Another downside is that it did not work all the time - on some runs I simply could not get it to report or compute anything! Its complexity (judging by the amount of code) was another scare-away factor. I'm not a big fan of GUIs, as it is my belief that I'm more productive with no windows at all ;o). I also found it hard to make it find and use my symbols.
One more tool I used before setting out to write my own was the leakbrowser.
Anyways, I finally settled on an iterative approach to achieve the following goals:
minor performance penalties
implementation simplicity
non-invasiveness (used for multiple products)
relying as much as possible on what was already available
I used Detours (non-commercial use) for the core functionality (it is an injectable DLL), put JavaScript to use for automatic code generation (a 15K script to generate 100K of source code - no way I'd code this manually, and no C preprocessor involved!), plus a WinDbg extension for data analysis and snapshot/diff support.
To tell the long story short - after I was finished, it was a matter of a few hours to collect information during another stress test and another hour to analyze and fix the leaks.
I'll be more than happy to share my findings.
P.S. I did spend some time trying to improve on the previous work. My intention was to minimize false positives (I've seen just about too many of those while developing), so it also checks for allocation/release consistency and avoids taking into account allocations that are never leaked.
Edit: Find the tool here
There was an MSDN Magazine article from several years ago that talked about GDI leaks. It points to several different places with good information.
In WinDbg, you may also try the !poolused command for some information.
Finding resource leaks from a crash dump (post-mortem) can be difficult - if it was always the same place, using the same variable, that leaks the memory, and you're lucky, you could see the last place where it gets leaked, etc. It would probably be much easier with a live program running under the debugger.
You can also try using Microsoft Detours, but the license doesn't always work out. It's also a bit more invasive and advanced.
I have created a WinDbg script for that. See my answer to
Command to get GDI handle count from a crash dump
To track the allocation stack you could set a ba (Break on Access) breakpoint past the last allocated GDICell object, to break just at the point when another GDI allocation happens. That could be a bit complex because the address changes, but it could be enough to find pretty much any leak.
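For example (the address is a hypothetical placeholder that will differ on every run), a 4-byte write breakpoint just past the last allocated GDICell could look like:

    ba w4 0523f8e0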
