I have Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) deployed on a Linux box. I am able to run Java Mission Control (JMC), but I am not able to run Flight Recorder from within JMC. I get a popup with this message:
Commercial features are not enabled. In JDK7u4 and above, the JVM must be started with -XX:+UnlockCommercialFeatures -XX:+FlightRecorder.
I checked my jmc.ini file, which resides in the same $JAVA_HOME/bin directory as the JMC application itself, and it already has these two flags:
-XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
What could be the problem with the Flight Recorder?
Thank you in advance.
Those parameters need to be added to the JVM you wish to start recordings on. (They are already added for JMC itself, since we want people to be able to record the JMC client, should it be needed for support reasons. In fact, more recent versions of JMC always start with a recording running; that way, even if the JVM crashes, there is always information about what was going on in the runtime.)
Simply add the parameters to the start-up of the JVM you wish to do recordings on. Here is more info:
http://hirt.se/blog/?p=370
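For example, if the target application is started from the command line (com.example.MyApp and the recording path below are placeholders, not something from your setup), the startup would look roughly like this; once the JVM runs with these flags you can start and stop recordings from JMC:
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder com.example.MyApp
You can also have a recording started at launch by additionally passing something like -XX:StartFlightRecording=duration=60s,filename=/tmp/myrecording.jfr (the duration and file name are just examples).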
If it fails when connecting to the JMC application itself, that is strange. Otherwise, you must add the command line parameters to the JVM you want to monitor.
I am currently using JDK Flight Recorder with JDK 11 and ran into some trouble on our CI/CD platform. Unfortunately, there is not much documentation on the new Flight Recorder; most of what exists covers the older version that shipped with earlier JDKs.
When I try to start tests directly from the IDE, everything works fine and I get my recording files.
When I try to do the same thing automatically on the CI/CD platform, it times out with a lot of different, nondescript failures: trouble creating the file, the file not being written at all, etc.
The JVM commands I used are the following (I put extra spaces for better readability):
-XX:+FlightRecorder
-XX:StartFlightRecording= name="UiTestServer", settings="profile", dumponexit=true, filename=""+System.getenv("CI_PROJECT_DIR") + "flightRecording/javaFlightRecorder.jfr"
The commands are the same ones that the IDE uses automatically when starting a flight recording by right-clicking on the specified test.
Does anybody know whether Flight Recorder has problems with such systems, or with specific services that might run in parallel with it? I have heard of some profiling tools that are unable to run on CI platforms.
If you need more detail, just ask me, though it may happen that I cannot share anything related to the project.
Bit late as an answer, but JFR can definitely run in CI/CD environments. I have successfully attached JFR to our JMH microbenchmarks and published the results as artifacts in Atlassian Bamboo. Our Bamboo agents are running on AWS, so JFR itself should be good for most cloud environments.
JFR has been built to work in production systems, but if you want guarantees of low overhead (<1%), you should use the default settings, not 'profile'.
The 'profile' setting is intended for shorter periods of time, e.g. 10 minutes, where some additional overhead may be acceptable in exchange for more insight.
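As a sketch (the duration and file name are only illustrative), a time-bounded profiling recording on JDK 11 could be started like this:
$ java -XX:StartFlightRecording=settings=profile,duration=600s,filename=/tmp/profile.jfr ...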
This is what I would recommend, for JDK 11 and later:
$ java -XX:StartFlightRecording=filename=/path
There is no need to set dumponexit=true if a filename has been specified.
-XX:+FlightRecorder is only needed before JDK 8u40.
You can set a name if you like, but it's typically not needed; even if you want to use jcmd to dump a recording, the name can be omitted.
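If you do want to dump a recording from a running JVM instead of waiting for exit, a jcmd invocation might look roughly like this (the PID and file path are placeholders):
$ jcmd <pid> JFR.dump filename=/tmp/dump.jfr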
I am trying to install Elasticsearch in a Linux VM but I am not able to start the service, even though Java is installed. I get the following message when the elasticsearch script runs.
[xxxx#ABCWCW0ASMGNJ01A bin]$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (novell-2.5.1.2.el6_5-x86_64 u65-b17)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[xxxx#ABCWCW0ASMGNJ01A bin]$ ./elasticsearch
Error occurred during initialization of VM
Too small initial heap for new size specified
I have downloaded 2.3.2 from the Elasticsearch website. After some initial googling I set ES_HEAP_SIZE=1g in .bash_profile, but still no luck. Can you throw some light on what the issue could be?
Thanks
It seems you don't have enough heap space to start Elasticsearch. Please see the question "increase the java heap size permanently?" and adjust the heap size accordingly.
Depending on the flavor of Linux you are using, you may need to use a text editor to update the following file to increase the heap size.
/etc/sysconfig/elasticsearch
(uncomment the line 'ES_HEAP_SIZE=' and set it to half of the RAM allocated to the VM). Based on: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
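As an illustration (the 1g value is only an example; size it to roughly half of the VM's RAM), the uncommented line in /etc/sysconfig/elasticsearch would then read:
ES_HEAP_SIZE=1g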
I will caution you to read this as well afterwards if you see something like "Unable to lock JVM memory" in your elasticsearch application log after starting it up:
https://github.com/elastic/elasticsearch/issues/9357
I ran into problems myself when trying to use the environment variable method on CentOS 6 and 7.
You may also want to try using the Kopf plugin (open source) to get some simple visibility:
https://github.com/lmenezes/elasticsearch-kopf
Using the following general instructions of course:
https://www.elastic.co/guide/en/elasticsearch/plugins/current/installation.html
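For Elasticsearch 2.x, installing Kopf is typically a one-liner using the bundled plugin script, run from the Elasticsearch home directory (check the Kopf README for the exact version suffix to use):
$ ./bin/plugin install lmenezes/elasticsearch-kopf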
If you don't know where certain things are located for elasticsearch on your system, please use the below defaults listing as guidance:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html
The root cause of the problem you are experiencing is most likely that the startup script used to bring up your Elasticsearch instance on the VM is not picking up the environment variable as expected. I hope you don't mind the extra information; I'm just trying to help you save some time.
To get crash dumps I used the registry settings below on a Windows 7 machine, and I also tried gflags.exe.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
"DumpFolder"=hex(2):[path goes here in hex value]
"DumpType"=dword:00000002
"DumpCount"=dword:0000000a
This works well in most cases and I am able to get the crash dump when my software crashes. But in one case, when I use my software in integration with another custom application (software2), I am not able to get a crash dump.
I did multiple tests and confirmed that whenever software2 is running along with the main software, the crash dumps are not generated; the registry settings do not help. And we need to have software2 running along with the main software.
Is there any alternative way (other than the registry settings or gflags.exe), or other software, to generate crash dumps in this scenario?
I can't debug it because the issue occurs on the deployed machine.
Since none of the utilities are helping, I am using Task Manager to get the crash dump. When my application crashes, Windows displays the error window; while that window is shown, I generate the crash dump manually from Task Manager. For a 32-bit application, use the Task Manager from the SysWOW64 folder.
I'm using NetBeans IDE 6.9.1. I have a JSP web application using Spring 3.0.2 and Hibernate Tools 3.2.1.GA. Slowly and gradually it has been growing in size, yet it's not a very big application, though I have added many external class libraries as and when required, such as Hibernate Validator.
Performance has degraded and building the application takes a considerable amount of time. When changes are saved, the application is often deployed endlessly by NetBeans' auto-deploy feature; it never finishes and I have to restart the IDE, and the procedure begins all over again from scratch. Sometimes the application is stopped automatically and I have to restart the Tomcat server (6.0.26), because an attempt to restart just the application usually doesn't succeed.
Often (every half an hour or so), the application ends with the following exception:
java.lang.OutOfMemoryError: PermGen space
and I have to restart the system itself!
While working with JPA along with EJB and JSF as the front end (GlassFish Server 3), this often wasn't the case, even with heavily loaded applications, on the same version of the NetBeans IDE and exactly the same platform, if I remember correctly.
Are there some ways to improve the performance?
Try overriding the JVM options to give it more memory, if you can:
export JAVA_OPTS="-Xms64m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=756m"
Here you can find a bit more about the JAVA_OPTS parameters: http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/reference/JavaOptsSummary.html
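For a standalone Tomcat 6 installation, a common place to put such options (a sketch; the file name and values below are assumptions based on the settings above, adjust them to your machine) is a bin/setenv.sh script, which catalina.sh picks up at startup:
# $CATALINA_HOME/bin/setenv.sh (create it if it doesn't exist)
export JAVA_OPTS="-Xms64m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=756m"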
I understand you are using NetBeans. A simple solution would be to go to Tools -> Servers -> (select the server, in your case Tomcat) -> Platform, and then in the VM Options field paste these settings:
-Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m
That should solve your problem. Good luck!
I am not an expert on JAVA_OPTS, but I am getting an error in my Grails app related to PermGen space. I received a recommendation from the Grails blog to set JAVA_OPTS to this value:
JAVA_OPTS="-client -Xmx256M $JAVA_OPTS"
I understand the other values, but not '-client'. What does it really mean? I can't find its significance in books.
The -client and -server options are intended to optimize performance for client and server applications; the default varies by platform, where typically client-oriented platforms (Windows, MacOS) get the client VM by default, and typically server-oriented platforms (Linux, Windows Server) get the server VM by default. More information is available here: http://download.oracle.com/javase/6/docs/technotes/guides/vm/index.html
Basically, the client VM is optimized to start up quickly and use less memory, while the server VM is designed for maximum performance after start-up.
Usually, there are -server and -client variants;
-client starts faster than -server.
Nowadays, in some builds, like the 64-bit (AMD64) version, the flag does nothing: only the server VM is available.
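A quick way to check which VM a given JDK actually uses is to pass the flag to java -version and look at the last line of the output, which reports either "Client VM" or "Server VM" (on 64-bit JDKs both typically report the Server VM, matching the point above):
$ java -client -version
$ java -server -version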