I am running Eclipse v4.7.0 and just installed STS v3.9.0.20170706.
I tried to run my Spring Boot app through the Spring Boot console.
I have -Xmx2g -Xms1g specified as VM arguments.
Running it displays the following error:
Invalid initial heap size: -Xms1g-noverify
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
How do I stop it from appending -noverify?
Prior to this STS plugin update I never had this problem.
This looks like a bug that has already been fixed in the current development branch (see commit Fix fast startup vm args addition […]).
So I guess it will work again with the next update…
As a workaround in the meantime you can add something like -noverify -Ddummy= at the end of your VM Arguments, so the final command line becomes … -noverify -Ddummy=-noverify …. Or you can disable Fast startup…
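To make that concrete, a hedged sketch of what the VM arguments field would then contain (the heap settings are the ones from the question):

-Xmx2g -Xms1g -noverify -Ddummy=

When STS appends its own -noverify, it ends up as the harmless value of the -Ddummy system property instead of being glued onto -Xms1g.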
Related
I have Windows 10 and I followed this guide to install Spark and make it work on my OS, as well as using the Jupyter Notebook tool. I used this command to start the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker was actually started by following the aforementioned guide, and my tasks were really slow. So, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager (because it was run by Docker), I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master was correctly started, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
This returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error above. I guess I messed up some Java stuff, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be the Java version.
I installed Java, Python, Spark, etc., all the latest versions, and followed the steps in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
I got the same error as you.
I then downloaded Java SE 8 from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
I downloaded jdk-8u281-windows-x64.exe and reset JAVA_HOME.
I started spark-shell and it opened perfectly, without any issues.
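A hedged sketch of what resetting JAVA_HOME can look like from a Windows command prompt (the install directory below is an assumption; use wherever the JDK 8 installer actually put the files):

rem persist for new command prompts
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_281"
rem and for the current one
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_281
set PATH=%JAVA_HOME%\bin;%PATH%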
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It simply worked for me, so I am sharing the same solution here. :)
Thanks,
Karun
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade to a different version.
I ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
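On macOS, a hedged sketch of pointing the shell at the newly installed JDK 11 (this assumes the Temurin build registers itself with the standard /usr/libexec/java_home tool):

export JAVA_HOME=$(/usr/libexec/java_home -v 11)
java -version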
Easiest solution:
The latest Java (JDK) versions are not supported by Spark.
Try installing JDK version 8; this should solve the error.
I just installed a fresh copy of WebLogic Server and OSB.
After successfully installing WebLogic 10.3.6, I tried to configure a domain from the Quick Start screen; however, the screen does not proceed any further and shows the error below.
Also, in Eclipse, when I try to add the server, it prompts me to create a domain, but that does not work either.
The error I am getting in the console is:
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Unrecognized VM option 'UseSpinning'
It looks like Java 8 is being picked up from elsewhere on your system.
Check whether you have Java 8 installed and look at your PATH environment variable.
The warning you are getting is from Java 8; WebLogic 10.3.6 would use Java 5 or 6.
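A quick, hedged way to check which Java wins on your PATH (from a Windows command prompt):

where java
java -version
echo %JAVA_HOME%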
I am facing a strange issue with SonarQube 5.0.1; on one of the machines it is not starting. Here is the error log (sonar.log):
--> Wrapper Started as Daemon
Launching a JVM...
Unable to start JVM: No such file or directory (2)
JVM exited while loading the application.
JVM Restarts disabled. Shutting down.
<-- Wrapper Stopped
The machine is x86_64 GNU/Linux (CentOS 5.1).
The box has Java installed:
$java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
The same SonarQube package works on another machine.
Any idea what could be the issue here?
Thanks.
The issue was in wrapper.conf, where the Java wrapper command was not getting resolved. It worked once I gave the absolute path:
wrapper.java.command=/path/to/my/jdk/bin/java
This could be an issue with the environment on that host; I'm not sure.
A few things that helped me in troubleshooting this:
changing the log level to DEBUG in wrapper.conf
the comments given in wrapper.conf itself
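For reference, a hedged sketch of the relevant wrapper.conf lines (the log-level property names are the standard Java Service Wrapper ones; the comments in your own wrapper.conf document the exact keys it supports):

wrapper.java.command=/path/to/my/jdk/bin/java
wrapper.console.loglevel=DEBUG
wrapper.logfile.loglevel=DEBUG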
Thanks all for chiming in! Appreciate your inputs.
1. Close all running JVM processes from Task Manager.
2. Change the port of the Sonar runner in the properties file.
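For step 2, a hedged sketch of what a port change could look like in conf/sonar.properties, assuming the conflict is on the SonarQube web port (9000 is the default; the new value is only an example):

sonar.web.port=9100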
I had the same symptoms (wrapper starts then immediately stops).
I tried these steps and finally succeeded (on a Windows 10 PC):
1) in wrapper.conf, specified the java command:
wrapper.java.command=C:\Program Files\Java\jdk1.7...\bin\java.exe
That did not help.
2) Finally, this fixed the problem: in Windows Services, open the Sonar service and then open the Log On tab.
I changed the Log On account to my own user.
I was facing the same issue on Sonar startup. After reading this post, I modified the JDK path in the file below and it works.
Modify the JDK path in wrapper.conf
wrapper.java.command=%JAVA_HOME%/bin/java
Install JDK 11:
sudo yum install java-11-openjdk -y
sudo alternatives --config java
Select the JDK 11 version.
Set the JDK 11 path in wrapper.conf:
vi /opt/sonar/conf/wrapper.conf
wrapper.java.command=/usr/lib/jvm/java-11-openjdk-11.0.13.0.8-3.el8_5.x86_64/bin/java
Could you verify the Java version on the machine that fails to start?
Java 6 is no longer supported (see http://docs.sonarqube.org/display/SONAR/Requirements#Requirements-Prerequisite), but from your error message I can't tell whether this is the problem you are hitting.
Solution 1
Set the Java path globally.
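A hedged sketch of what that could look like on Linux (the JDK location is an assumption; use your actual install path):

export JAVA_HOME=/usr/java/jdk-11
export PATH=$JAVA_HOME/bin:$PATH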
Solution 2
Go to the sonarqube-{version}/conf directory.
Edit the wrapper.conf file.
Replace wrapper.java.command=java with wrapper.java.command={path-to-your-java-bin-directory}/java
e.g. wrapper.java.command=/usr/java/bin/java
Try using a relative path if your Sonar folder is located in the same root folder as your JDK. For me, Sonar and the JDK are both under "Program Files", which has restrictive permissions, hence the error.
E.g:
wrapper.java.command=../../../Java/jdk-11.0.4/bin/java
I don't have any other issues with Java, and STS starts up fine, but when I try to run my app as "Run as Spring Boot App" (or any of the samples), the console is empty for up to 5 minutes before I get the familiar "Spring Boot" ASCII art. Then it works fine.
It turned out there was an issue resolving the network host. I fixed it by executing this command from the terminal:
scutil --set HostName "localhost"
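If you want to see what the machine currently resolves to before changing anything, these read-only macOS commands might help:

scutil --get HostName
hostname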
It must be something with your environment. You may try running the app in another IDE like IntelliJ; I presume it's STS causing the problem. You could also try running it in a fresh STS installation. I'm using the latest OS X and IntelliJ and have no problems.
If you want to play around with this, you could also analyse a Java core dump to see what's happening inside your JVM: http://www.javacodegeeks.com/2013/02/analysing-a-java-core-dump.html
When I add the following Java options to enable debugging:
JAVA_OPTS="$JAVA_OPTS -noverify -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
I get the following error whenever I try to shut down Tomcat:
ERROR: transport error 202: bind failed: Address already in use ["transport.c",L41]
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510) ["debugInit.c",L500]
JDWP exit error JVMTI_ERROR_INTERNAL(113): No transports initialized
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=JVMTI_ERROR_INTERNAL(113)
Thank you for a nice short explanation, PHeath! Following your advice, I found the best way to solve the problem is simply to use CATALINA_OPTS instead of JAVA_OPTS.
Looking into catalina.sh, one can see CATALINA_OPTS is only used by the "start" and "start-security" commands, whereas JAVA_OPTS is also used by the "stop" command (at least with Tomcat 6.0.33 on openSUSE 12.1).
At least if you have Tomcat installed on Linux via a package manager, modifying the CATALINA_OPTS variable in /etc/tomcat6/tomcat6.conf (or whatever the path is in your distribution) is cleaner than changing the catalina.sh script directly: the package manager assumes that the user changes only configuration files, and breaking this assumption may cause problems when upgrading the Tomcat packages (e.g. lost settings because catalina.sh is overwritten).
I think one should prefer CATALINA_OPTS over JAVA_OPTS not only for JDWP but for many other options as well: e.g. if one uses the heap size option -Xmx..., it would be reasonable to put it into CATALINA_OPTS, as the "stop" command does not need much heap.
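A hedged sketch of that separation (the exact file depends on your installation, e.g. /etc/tomcat6/tomcat6.conf or a bin/setenv.sh; the JDWP options are the ones from the question):

# picked up by "start"/"run" only, so the shutdown JVM never sees the debug port
CATALINA_OPTS="$CATALINA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
# keep JAVA_OPTS for settings that every invocation, including "stop", really needs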
You are trying to debug Tomcat on startup, so it binds to port 5005 when the JVM starts.
When you run catalina.sh stop, it starts up another JVM, which also tries to bind to port 5005.
You need to move the debug args to the run and start commands (in catalina.sh) of Tomcat; putting them straight into JAVA_OPTS is the cause of the issue you're having.
The problem is that your Tomcat is still running on the debug port (5005), or some other service is running on the same port.
If Tomcat is still running, you can kill it.
In a Linux environment, run ps -ef | grep java to identify its process id, then kill the process with sudo kill -9 <pid>.
In a Windows environment, go to Task Manager and kill the Tomcat and Java processes.
Now you should be able to start the server in debug mode without any problem.
This can also happen when debugging a unit test through a tool (Eclipse) after it has been executed through Maven. To solve this you can follow the same process:
first close Eclipse, kill the Java process as well, and then start it again.
This is because both applications are listening on the same port number (i.e. 8000) while running in debug mode.
One quick solution is to change the debug port to 8001 in startup.bat:
SET DEBUGPORT=8001
It seems that port 5005 is already in use. Check open ports with the netstat command.
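A hedged sketch of the check (Linux syntax; on Windows the rough equivalent would be netstat -ano | findstr 5005):

netstat -tlnp | grep 5005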
This may be because you already started Tomcat. Check your processes.
It appears you are starting Tomcat with the debugger enabled, which causes the JVM to open the debugging transport. However, in catalina.sh there is a case statement for start, stop, restart, and so on; issuing the stop command still adds the debug options, since they are part of your global JAVA_OPTS, and tries to start the debugger listening on the same port for the shutdown command. If you remove the debug settings (address=5005) from your JAVA_OPTS, or use the jpda start command to start the VM with the debugger, this will fix your problem.
Look at the default catalina.sh in the latest Tomcat distribution if you need a clean copy. It sounds like someone has made invalid changes inside yours that cause JPDA to run on start, stop, and any other command issued.
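A hedged sketch of the jpda route mentioned above (JPDA_TRANSPORT and JPDA_ADDRESS are the variables catalina.sh already understands; the values simply mirror the question):

export JPDA_TRANSPORT=dt_socket
export JPDA_ADDRESS=5005
./catalina.sh jpda start   # debugger attached only for start
./catalina.sh stop         # stop runs without the JDWP options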
Set JPDA_ADDRESS=8001 (i.e. the debug port) in catalina.bat,
and change all three ports in server.xml.
In my case (Tomcat installed from a tarball) I had those debug options unintentionally set in my environment. This fixed the error:
$ unset JAVA_OPTS