How to avoid OutOfMemoryErrors when processing big analysis reports? - sonarqube

Running SonarQube 5.6.6 from Jenkins on CentOS 7.3, I got the following error:
2017.09.01 19:05:16 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AV485bp0qXlQ-QPWWE9A
java.lang.OutOfMemoryError: Java heap space
2017.09.01 19:05:17 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=PP::Symphony3M | type=REPORT | id=AV485bp0qXlQ-QPWWE9A | time=74089ms
sonar.ce.javaOpts is set as follows:
sonar.ce.javaOpts=-Xmx60g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
How much heap space should I give to SonarQube, when analyzing a one million LOC project? Or is there another way of avoiding Java heap space issues?

The maximum heap you can allocate depends on the free RAM on your server; the free command can help you check the stats. Based on the free RAM, you can set your -Xmx value.
By the way, make sure the code compiles on the server. Only if you can compile but not scan will increasing the heap help.
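For example, a minimal sketch (the 8g figure below is a placeholder; pick a value comfortably under the free RAM that free reports on your server):

free -m
# then, in <sonarqube>/conf/sonar.properties, raise the Compute Engine heap:
sonar.ce.javaOpts=-Xmx8g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true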

Related

Spark - Container is running beyond physical memory limits

I have a cluster of two worker nodes.
Worker_Node_1 - 64GB RAM
Worker_Node_2 - 32GB RAM
Background Summary:
I am trying to execute spark-submit on yarn-cluster to run Pregel on a graph to calculate the shortest-path distances from one source vertex to all other vertices and print the values on the console.
Experiment:
For a small graph with 15 vertices, execution completes with application final status: SUCCEEDED.
My code works perfectly and prints the shortest distance for a 241-vertex graph with a single source vertex, but there is a problem.
Problem :
When I dig into the log file, the task completes successfully in 4 minutes and 26 seconds, but the terminal keeps showing the application status as Running, and after approximately 12 more minutes the task execution terminates, saying:
Application application_1447669815913_0002 failed 2 times due to AM Container for appattempt_1447669815913_0002_000002 exited with exitCode: -104 For more detailed output, check application tracking page:http://myserver.com:8088/proxy/application_1447669815913_0002/
Then, click on links to logs of each attempt.
Diagnostics: Container [pid=47384,containerID=container_1447669815913_0002_02_000001] is running beyond physical memory limits. Current usage: 17.9 GB of 17.5 GB physical memory used; 18.7 GB of 36.8 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447669815913_0002_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 47387 47384 47384 47384 (java) 100525 13746 20105633792 4682973 /usr/lib/jvm/java-7-oracle-cloudera/bin/java -server -Xmx16384m -Djava.io.tmpdir=/yarn/nm/usercache/cloudera/appcache/application_1447669815913_0002/container_1447669815913_0002_02_000001/tmp -Dspark.eventLog.enabled=true -Dspark.eventLog.dir=hdfs://myserver.com:8020/user/spark/applicationHistory -Dspark.executor.memory=14g -Dspark.shuffle.service.enabled=false -Dspark.yarn.executor.memoryOverhead=2048 -Dspark.yarn.historyServer.address=http://myserver.com:18088 -Dspark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.shuffle.service.port=7337 -Dspark.yarn.jar=local:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/lib/spark-assembly.jar -Dspark.serializer=org.apache.spark.serializer.KryoSerializer -Dspark.authenticate=false -Dspark.app.name=com.path.PathFinder -Dspark.master=yarn-cluster -Dspark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class com.path.PathFinder --jar file:/home/cloudera/Documents/Longest_Path_Data_1/Jars/ShortestPath_Loop-1.0.jar --arg /home/cloudera/workspace/Spark-Integration/LongestWorstPath/configFile --executor-memory 14336m --executor-cores 32 --num-executors 2
|- 47384 47382 47384 47384 (bash) 2 0 17379328 853 /bin/bash -c LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native::/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native /usr/lib/jvm/java-7-oracle-cloudera/bin/java -server -Xmx16384m -Djava.io.tmpdir=/yarn/nm/usercache/cloudera/appcache/application_1447669815913_0002/container_1447669815913_0002_02_000001/tmp '-Dspark.eventLog.enabled=true' '-Dspark.eventLog.dir=hdfs://myserver.com:8020/user/spark/applicationHistory' '-Dspark.executor.memory=14g' '-Dspark.shuffle.service.enabled=false' '-Dspark.yarn.executor.memoryOverhead=2048' '-Dspark.yarn.historyServer.address=http://myserver.com:18088' '-Dspark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' '-Dspark.shuffle.service.port=7337' '-Dspark.yarn.jar=local:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/lib/spark-assembly.jar' '-Dspark.serializer=org.apache.spark.serializer.KryoSerializer' '-Dspark.authenticate=false' '-Dspark.app.name=com.path.PathFinder' '-Dspark.master=yarn-cluster' '-Dspark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' '-Dspark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'com.path.PathFinder' --jar file:/home/cloudera/Documents/Longest_Path_Data_1/Jars/ShortestPath_Loop-1.0.jar --arg '/home/cloudera/workspace/Spark-Integration/LongestWorstPath/configFile' --executor-memory 14336m --executor-cores 32 --num-executors 2 1> /var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001/stdout 2> /var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
Things I tried:
yarn.scheduler.maximum-allocation-mb = 32 GB
mapreduce.map.memory.mb = 2048 (previously it was 1024)
Tried varying --driver-memory up to 24g
Could you please shed more light on how I can configure the Resource Manager so that large graphs (> 300K vertices) can also be processed? Thanks.
Just increasing the default value of spark.driver.memory from 512m to 2g solved this error in my case.
You may set the memory higher if it keeps hitting the same error. Then you can keep reducing it until it hits the same error again, so that you know the optimum driver memory to use for your job.
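As a sketch of where that setting lives (the 2g value is just the figure from this answer, not a general recommendation), you can pass it on the command line or persist it in spark-defaults.conf:

# one-off, on the command line:
spark-submit --master yarn-cluster --driver-memory 2g \
    --class com.path.PathFinder ShortestPath_Loop-1.0.jar

# or persistently, in conf/spark-defaults.conf:
spark.driver.memory    2g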
The more data you are processing, the more memory is needed by each Spark task. And if your executor is running too many tasks then it can run out of memory. When I had problems processing large amounts of data, it usually was a result of not properly balancing the number of cores per executor. Try to either reduce the number of cores or increase the executor memory.
One easy way to tell that you are having memory issues is to check the Executor tab on the Spark UI. If you see a lot of red bars indicating high garbage collection time, you are probably running out of memory in your executors.
I solved the error in my case by increasing spark.yarn.executor.memoryOverhead, which stands for off-heap memory.
When you increase the amount of driver-memory and executor-memory, do not forget this config item.
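A hedged illustration of how the two settings add up (the sizes are the ones from the question's logs, not tuned values): YARN sizes each executor container as heap plus off-heap overhead, so the sum must fit in what the cluster can allocate:

spark-submit --master yarn-cluster \
    --executor-memory 14g \
    --conf spark.yarn.executor.memoryOverhead=2048 \
    --class com.path.PathFinder ShortestPath_Loop-1.0.jar
# container request per executor is roughly 14g heap + 2g overhead = 16g,
# which must fit under yarn.scheduler.maximum-allocation-mb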
I had a similar problem:
Key error info:
exitCode: -104
'PHYSICAL' memory limit
Application application_1577148289818_10686 failed 2 times due to AM Container for appattempt_1577148289818_10686_000002 exited with exitCode: -104
Failing this attempt. Diagnostics: [2019-12-26 09:13:54.392] Container [pid=18968,containerID=container_e96_1577148289818_10686_02_000001] is running 132722688B beyond the 'PHYSICAL' memory limit. Current usage: 1.6 GB of 1.5 GB physical memory used; 4.6 GB of 3.1 GB virtual memory used. Killing container.
Increasing both spark.executor.memory and spark.executor.memoryOverhead didn't take effect.
Then increasing spark.driver.memory solved it.
Spark jobs ask the resource manager for resources in a different way from MapReduce jobs. Try to tune the number of executors and the memory/vcores allocated to each executor. Follow http://spark.apache.org/docs/latest/submitting-applications.html
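For instance, a sketch of the per-executor request (the counts and sizes are placeholders echoing the flags already used in the question; lowering --executor-cores follows the earlier advice about too many tasks sharing one heap):

spark-submit --master yarn-cluster \
    --num-executors 2 \
    --executor-cores 8 \
    --executor-memory 14g \
    --class com.path.PathFinder ShortestPath_Loop-1.0.jar
# fewer cores per executor = fewer concurrent tasks sharing one heap;
# more executor memory gives each of those tasks more room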

Too small initial heap error - stanford parser

I am trying my hand at the Stanford dependency parser. I tried running the parser from the command line on Windows to extract the dependencies using this command:
java -mx100m -cp "stanford-parser.jar" edu.stanford.nlp.trees.EnglishGrammaticalStructure -sentFile english-onesent.txt -collapsedTree -CCprocessed -parserFile englishPCFG.ser.gz
I am getting the following error:
Error occurred during initialization of VM
Too small initial heap
I changed the memory size to -mx1024, -mx2048, as well as -mx4096. It didn't change anything, and the error persisted.
What am I missing?
Type -Xmx1024m instead of -mx1024.
See https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html
It should be -mx1024m; I had skipped the m.
One more thing: in the -cp, the model jar should also be included.
... -cp "stanford-parser.jar;stanford-parser-3.5.2-models.jar"...
(assuming you are using the latest version).
Otherwise an IO Exception will be thrown.
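Putting both fixes together, the corrected invocation would look like this (the model jar name assumes version 3.5.2, as above):

java -Xmx1024m -cp "stanford-parser.jar;stanford-parser-3.5.2-models.jar" edu.stanford.nlp.trees.EnglishGrammaticalStructure -sentFile english-onesent.txt -collapsedTree -CCprocessed -parserFile englishPCFG.ser.gz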
There may be some arguments preexisting in the IDE.
In Eclipse:
Go to Run As -> Run Configurations -> Arguments,
then delete the arguments that were used previously.
Restart Eclipse.
Worked for me!

gmaven plugin is giving object heap space error

I am getting an error due to the gmaven-plugin: Groovy reports that the Groovy classpath could not find enough object heap space. I am using a 32-bit system and the IntelliJ IDEA IDE. I have tried various options but could not resolve it.
What I have tried:
<configuration>
<argLine>-Xmx1024m</argLine>
</configuration>
-Xmx256M -XX:MaxPermSize=512m
For 32 bit
-Xms1336m -Xmx1336m
I got the same error and resolved it by configuring run.conf.bat.
JBoss 5.x runs the JVM with the configuration from run.conf.bat.
If the free memory you are requesting in the statement is not available, then make changes in run.conf.bat:
set "JAVA_OPTS=-Xms512m -Xmx512m -XX:MaxPermSize=256m"
<configuration>
<maxmemory>1024M</maxmemory>
</configuration>
According to this IBM document about the Java heap size (along with some hints about setting the right heap size) the limits for Windows are:
• maximum possible heap size on 32-bit Java: 1.8 GB
• recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
MAVEN_OPTS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m"
Could anyone help me out in getting rid of this?

Intellij - Cannot start maven service - ExecutionException

I'm using IntelliJ 13.1.5 on Ubuntu 14.04 (amd64), Maven 3.0.5, Java Oracle 1.7.0_72.
I noticed some irregularities with Maven whilst using IntelliJ, namely dependencies added and removed were not reflected in the module or in the External Libraries listing.
Then, when I ran IntelliJ from the shell, I saw this exception:
[ 14649] WARN - #org.jetbrains.idea.maven - Cannot open index /home/sbotting/.IntelliJIdea13/system/Maven/Indices/Index0
org.jetbrains.idea.maven.indices.MavenIndexException: Cannot open index /home/sbotting/.IntelliJIdea13/system/Maven/Indices/Index0
at org.jetbrains.idea.maven.indices.MavenIndex.open(MavenIndex.java:164)
at org.jetbrains.idea.maven.indices.MavenIndex.<init>(MavenIndex.java:139)
at org.jetbrains.idea.maven.indices.MavenIndices.load(MavenIndices.java:59)
at org.jetbrains.idea.maven.indices.MavenIndices.<init>(MavenIndices.java:47)
at org.jetbrains.idea.maven.indices.MavenIndicesManager.ensureInitialized(MavenIndicesManager.java:107)
at org.jetbrains.idea.maven.indices.MavenIndicesManager.getIndicesObject(MavenIndicesManager.java:91)
at org.jetbrains.idea.maven.indices.MavenIndicesManager.ensureIndicesExist(MavenIndicesManager.java:164)
at org.jetbrains.idea.maven.indices.MavenProjectIndicesManager$3.run(MavenProjectIndicesManager.java:120)
at com.intellij.util.ui.update.MergingUpdateQueue.execute(MergingUpdateQueue.java:320)
at com.intellij.util.ui.update.MergingUpdateQueue.execute(MergingUpdateQueue.java:310)
at com.intellij.util.ui.update.MergingUpdateQueue$2.run(MergingUpdateQueue.java:254)
at com.intellij.util.ui.update.MergingUpdateQueue.flush(MergingUpdateQueue.java:269)
at com.intellij.util.ui.update.MergingUpdateQueue.flush(MergingUpdateQueue.java:227)
at com.intellij.util.ui.update.MergingUpdateQueue.run(MergingUpdateQueue.java:217)
at com.intellij.util.concurrency.QueueProcessor.runSafely(QueueProcessor.java:238)
at com.intellij.util.Alarm$Request$1.run(Alarm.java:327)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at com.intellij.util.concurrency.QueueProcessor$RunnableConsumer.consume(QueueProcessor.java:298)
at com.intellij.util.concurrency.QueueProcessor$RunnableConsumer.consume(QueueProcessor.java:295)
at com.intellij.util.concurrency.QueueProcessor$2$1.run(QueueProcessor.java:110)
at com.intellij.util.concurrency.QueueProcessor.runSafely(QueueProcessor.java:238)
at com.intellij.util.concurrency.QueueProcessor$2.consume(QueueProcessor.java:107)
at com.intellij.util.concurrency.QueueProcessor$2.consume(QueueProcessor.java:104)
at com.intellij.util.concurrency.QueueProcessor$3$1.run(QueueProcessor.java:215)
at com.intellij.util.concurrency.QueueProcessor.runSafely(QueueProcessor.java:238)
at com.intellij.util.concurrency.QueueProcessor$3.run(QueueProcessor.java:212)
at com.intellij.openapi.application.impl.ApplicationImpl$8.run(ApplicationImpl.java:419)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at com.intellij.openapi.application.impl.ApplicationImpl$1$1.run(ApplicationImpl.java:149)
Caused by: java.lang.RuntimeException: Cannot reconnect.
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:111)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper.createIndex(MavenIndexerWrapper.java:61)
at org.jetbrains.idea.maven.indices.MavenIndex.createContext(MavenIndex.java:305)
at org.jetbrains.idea.maven.indices.MavenIndex.access$500(MavenIndex.java:40)
at org.jetbrains.idea.maven.indices.MavenIndex$IndexData.<init>(MavenIndex.java:611)
at org.jetbrains.idea.maven.indices.MavenIndex.doOpen(MavenIndex.java:185)
at org.jetbrains.idea.maven.indices.MavenIndex.open(MavenIndex.java:161)
... 33 more
Caused by: java.rmi.RemoteException: Cannot start maven service; nested exception is:
com.intellij.execution.ExecutionException: -Xmixed mixed mode execution (default)
-Xint interpreted mode execution only
-Xbootclasspath:<directories and zip/jar files separated by :>
set search path for bootstrap classes and resources
-Xbootclasspath/a:<directories and zip/jar files separated by :>
append to end of bootstrap class path
-Xbootclasspath/p:<directories and zip/jar files separated by :>
prepend in front of bootstrap class path
-Xdiag show additional diagnostic messages
-Xnoclassgc disable class garbage collection
-Xincgc enable incremental garbage collection
-Xloggc:<file> log GC status to a file with time stamps
-Xbatch disable background compilation
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
-Xprof output cpu profiling data
-Xfuture enable strictest checks, anticipating future default
-Xrs reduce use of OS signals by Java/VM (see documentation)
-Xcheck:jni perform additional checks for JNI functions
-Xshare:off do not attempt to use shared class data
-Xshare:auto use shared class data if possible (default)
-Xshare:on require using shared class data, otherwise fail.
-XshowSettings show all settings and continue
-XshowSettings:all
show all settings and continue
-XshowSettings:vm show all vm related settings and continue
-XshowSettings:properties
show all property settings and continue
-XshowSettings:locale
show all locale related settings and continue
The -X options are non-standard and subject to change without notice
at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:124)
at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:65)
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.getOrCreateWrappee(RemoteObjectWrapper.java:41)
at org.jetbrains.idea.maven.server.MavenServerManager$5.create(MavenServerManager.java:387)
at org.jetbrains.idea.maven.server.MavenServerManager$5.create(MavenServerManager.java:383)
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.getOrCreateWrappee(RemoteObjectWrapper.java:41)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper.getRemoteId(MavenIndexerWrapper.java:159)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper.access$100(MavenIndexerWrapper.java:37)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper$1.execute(MavenIndexerWrapper.java:64)
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:105)
... 39 more
Caused by: com.intellij.execution.ExecutionException: -Xmixed mixed mode execution (default)
-Xint interpreted mode execution only
-Xbootclasspath:<directories and zip/jar files separated by :>
set search path for bootstrap classes and resources
-Xbootclasspath/a:<directories and zip/jar files separated by :>
append to end of bootstrap class path
-Xbootclasspath/p:<directories and zip/jar files separated by :>
prepend in front of bootstrap class path
-Xdiag show additional diagnostic messages
-Xnoclassgc disable class garbage collection
-Xincgc enable incremental garbage collection
-Xloggc:<file> log GC status to a file with time stamps
-Xbatch disable background compilation
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
-Xprof output cpu profiling data
-Xfuture enable strictest checks, anticipating future default
-Xrs reduce use of OS signals by Java/VM (see documentation)
-Xcheck:jni perform additional checks for JNI functions
-Xshare:off do not attempt to use shared class data
-Xshare:auto use shared class data if possible (default)
-Xshare:on require using shared class data, otherwise fail.
-XshowSettings show all settings and continue
-XshowSettings:all
show all settings and continue
-XshowSettings:vm show all vm related settings and continue
-XshowSettings:properties
show all property settings and continue
-XshowSettings:locale
show all locale related settings and continue
The -X options are non-standard and subject to change without notice.
at com.intellij.execution.rmi.RemoteProcessSupport.acquire(RemoteProcessSupport.java:142)
at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:121)
... 48 more
I've tried deleting ~/.IntelliJIdea13/system/Maven/ with no result.
My M2_HOME is properly set to /usr/share/maven
I assume that the solution lies in finding the underlying cause for the ExecutionException although the diagnostic doesn't really help with this.
I've tried reverting to IntelliJ 12.0.4, and it works fine in terms of Maven (although sadly it doesn't support svn 1.8).
Also, the output seems to come from the java -X command, which gives you a printout of all the -X options.
Any suggestions?
I had this problem all of a sudden (after an update + Windows Update). I tried a lot of things, and in the end, set Settings > Build, Execution, Deployment > Build Tools > Maven > Importing > JDK for Importer to Use JAVA_HOME (which I've set up as an env var pointing to a JDK install). That seems to work.
Go to File and select the project from the menu structure.
Then change the JDK version from the list of JDKs.
In Settings > Maven > Importing, in the "VM options for importer" box, I had -Xmx512m -X.
For some reason there was a loose -X; I removed it, and everything is now working fine.
My hosts file lacked the line:
127.0.0.1 localhost
which led to this problem.
I had this issue, and it was due to me changing the JDK for importing in IntelliJ.
I had changed it from using the internal JRE to Java 1.8.0_74.
I changed this back to use the internal JRE of IntelliJ, and it worked.
I didn't need to change anything in my .m2 settings.

Sonar - OutOfMemoryError: Java heap space

I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m to increase the heap size in the sonar-runner bat file.
Set the "sonar.findbugs.effort" parameter to "Min" in the Sonar global parameters.
But neither of the above methods worked for me.
I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers/comments. With 10 million lines of code (that's more code than is in an F-16 fighter jet), if you have 100 characters per line (a crazy size), you could load the whole code base into 1 GB of memory. I set it to 8 GB of memory and it still failed. Why?
Answer: because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300 MB of pure code but really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print $1 if m/\.([^.\\]+)$/' | sort -u | grep c
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
This worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.
I had the same problem when running sonar with maven. In my case it helped to call sonar separately:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: Because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.
What you can do is create your own quality profile with just some FindBugs rules at first, and then progressively add more and more until you reach this OutOfMemoryError. There's probably a single rule that makes all this fail because your code violates it, and if you deactivate this rule, it will certainly work.
I know this thread is a bit old, but this info might help someone.
For me, the problem was not, as suggested by the top answer, with the C++ plugin.
Instead, my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML);
after I deactivated it, the analysis worked again.
You can solve this issue by increasing the maximum memory allocated to the appropriate process, i.e., by raising the -Xmx setting for the corresponding Java process in your sonar.properties file,
under SonarQube/conf/sonar.properties.
Uncomment the lines below and increase the memory as needed:
For Web: sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
For Elasticsearch: sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
For Compute Engine: sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
The problem is on FindBugs side. I suppose you're analyzing a large project that probably has many violations. Take a look at two threads in Sonar's mailing list having the same issue. There are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html
I know this is old, but I am just posting my answer anyway. I realized I was using the 32-bit JDK (version 8), and after uninstalling it and then installing the 64-bit JDK (version 12), the problem disappeared.
