Embedded Tomcat Would Not Start - spring

I got this error when running a JHipster project with JRebel. I tried to increase my Java heap size to 512m by adding the lines below to the VM Arguments field on the Arguments tab, but that did not solve it. What is the cause of this error and how can I fix it?
${jrebel_args}
-Xms512m -Xmx1024m
[ERROR] org.springframework.boot.context.embedded.tomcat.ServletContextInitializerLifecycleListener - Error starting Tomcat context: org.springframework.beans.factory.BeanCreationException
Exception in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
Exception in thread "process reaper" Exception in thread "process reaper"

I solved this problem by adding -XX:MaxPermSize=512m to the VM Arguments (Eclipse).
Thanks to ZT (ZeroTurnaround) Support.

A JVM running with JRebel will use up to 50% more memory; if it uses more than that, there is a problem. Just in case, please try doubling the max limit with -Xmx2048m. That should avoid the OutOfMemoryError. Afterwards you can reduce it again once you have observed the actual memory usage.
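Putting the two answers together, the full VM Arguments block might look roughly like this (the heap and PermGen values are examples to tune for your machine, not prescribed sizes):
${jrebel_args}
-Xms512m
-Xmx2048m
-XX:MaxPermSize=512m
Note that -XX:MaxPermSize only applies to Java 7 and earlier; on Java 8+ the equivalent limit is -XX:MaxMetaspaceSize.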

Related

Facing Runtime Exception at Nutch Injector

I am trying to crawl a website using Apache Nutch but I am getting the following exception. I have tried in both the NetBeans and IntelliJ IDEs. Your suggestions on this issue would be of great help.
Exception in thread "main" java.lang.RuntimeException: Injector job did not succeed, job status: FAILED, reason: NA
    at org.apache.nutch.crawl.Injector.inject(Injector.java:365)
    at com.yegor256.nutch.MainTest.main(MainTest.java:82)
Process finished with exit code 1

How to increase Java heap space for JMeter

While running my test, after some time I got the error below: "java.lang.OutOfMemoryError: Java heap space".
Could someone please help me with how to increase the Java heap space for JMeter?
2018-08-16 18:57:07,765 ERROR o.a.j.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: Java heap space
2018-08-16 18:57:14,745 INFO o.a.j.t.JMeterThread: Thread finished: Thread Group 1-6
2018-08-16 18:57:14,745 ERROR o.a.j.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: Java heap space
For JMeter 4.0 the default settings are:
-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m
On Windows you can double them like this:
set HEAP="-Xms2g -Xmx2g -XX:MaxMetaspaceSize=512m" && jmeter.bat
On Linux/Unix/MacOSX:
export HEAP="-Xms2g -Xmx2g -XX:MaxMetaspaceSize=512m" && ./jmeter.sh
Also make sure you're following JMeter Best Practices and recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
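Alternatively, the JMeter startup scripts also honour the JVM_ARGS environment variable, which is appended to the JVM options, so you can raise the heap for a single run without editing anything (the 2g values below are just an example):
JVM_ARGS="-Xms2g -Xmx2g" ./jmeter.sh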

How to solve Sonarqube java.lang.OutOfMemoryError: Java heap space

I'm using the SonarQube Community Edition and I'm getting the following error:
Exception in thread "LOG_FLUSHER" Exception in thread "CHECKPOINT_WRITER" java.lang.OutOfMemoryError: Java heap space
at java.util.ArrayList.iterator(ArrayList.java:840)
at java.util.Collections$SynchronizedCollection.iterator(Collections.java:2031)
at com.persistit.Persistit.pollAlertMonitors(Persistit.java:2285)
at com.persistit.Persistit$LogFlusher.run(Persistit.java:192)
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap$Values.iterator(HashMap.java:968)
at com.persistit.Persistit.earliestDirtyTimestamp(Persistit.java:1439)
at com.persistit.CheckpointManager.pollFlushCheckpoint(CheckpointManager.java:271)
at com.persistit.CheckpointManager.runTask(CheckpointManager.java:301)
at com.persistit.IOTaskRunnable.run(IOTaskRunnable.java:144)
at java.lang.Thread.run(Thread.java:748)
WARNING: WARN: [JOURNAL_FLUSHER] WARNING Journal flush operation took 7,078ms last 8 cycles average is 884ms
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 1:17.852s
ERROR: Error during SonarQube Scanner execution
ERROR: Java heap space
ERROR:
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "CLEANUP_MANAGER"
INFO: Final Memory: 40M/989M
INFO: ------------------------------------------------------------------------
The SonarQube Scanner did not complete successfully
I have changed the sizes in sonar.properties, but I'm still facing the same problem. How can I solve this?
sonar.web.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
What you've changed are the settings that allocate memory to SonarQube itself.
What you need to change is the setting that allocates memory to the analysis process. You haven't said which analyzer you're using, so the details will vary a little, but:
for the SonarQube Scanner: export SONAR_SCANNER_OPTS="-Xmx512m"
for the SonarQube Scanner for Maven: export MAVEN_OPTS="-Xmx512m"
Large files in the project can also cause this problem; in my case a 50MB XML file produced this error, and the file was not important to the analysis. I excluded it in the configuration file (SonarQube.Analysis.xml) and the problem was solved.
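For reference, a minimal sketch of such an exclusion in SonarQube.Analysis.xml (the file pattern is a placeholder for whatever you need to skip):
<?xml version="1.0" encoding="utf-8" ?>
<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.exclusions">**/large-generated-file.xml</Property>
</SonarQubeAnalysisProperties>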

worker is getting restarted continuously with closedchannel exception in supervisor

A worker on one of the supervisors keeps getting restarted with a ClosedChannelException. But if I run the same topology on another Storm cluster in a different environment, it runs without any errors.
Below is the error I can see in the Storm UI.
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
    at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
    at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
    at org.apache.storm.daemon.executor$fn__7990$fn__8005$fn__8036.invoke(executor.clj:648)
    at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
    at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94)
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
    ... 6 more
Can anyone please help me find the exact issue? Please let me know if you need any more information.
I faced this issue, and the problem was that the ZooKeeper host names were not being resolved from the worker host.
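A quick way to check this from the affected worker host (zk1.example.com is a placeholder for your actual ZooKeeper host name) is:
# run on the worker host; if resolution fails, add the ZooKeeper entries to DNS or /etc/hosts
getent hosts zk1.example.com
nslookup zk1.example.com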

Unable to deploy application on Websphere 8.0 due to OutOfMemory error

When I start to deploy my application, the build is successful but I get the following error while installing:
[exec] JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2013/07/09 13:55:51 - please wait.
[exec] JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
[exec] WASX7017E: Exception received while running file "deployStartApp_DEV1.py"; exception information: com.ibm.websphere.management.application.client.AppDeploymentException: com.ibm.websphere.management.application.client.AppDeploymentException: [Root exception is java.lang.OutOfMemoryError: Java heap space]
[exec] java.lang.OutOfMemoryError: java.lang.OutOfMemoryError: Java heap space
I tried increasing the heap size, but it didn't help. Can anybody help me out? I could not find a solution for deploying on WebSphere 8.0.
Since I understand you are using wsadmin to do your deployment, you might want to use something like:
wsadmin.bat -javaoption -Xms256m -javaoption -Xmx768m
when calling the wsadmin command.
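For example, a full invocation for the deployment script from the log above might look roughly like this (the heap values are illustrative, and -lang jython assumes the script is Jython):
wsadmin.bat -lang jython -javaoption -Xms256m -javaoption -Xmx768m -f deployStartApp_DEV1.py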
