Sonar - OutOfMemoryError: Java heap space - sonarqube

I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m in the sonar-runner .bat file to increase the heap size.
Set the "sonar.findbugs.effort" parameter to "Min" in the Sonar global parameters.
Neither of the above worked for me.

I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers / comments. With 10 million lines of code (that's more code than is in an F16 fighter jet), at 100 characters per line (a generous size), you could load the whole code base into 1 GB of memory. I set it to 8 GB of memory and it still failed. Why?
Answer: because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300 MB of pure code but that really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print "$1\n" if m/\.([^.\\]+)$/' | sort -u | grep c
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...

This has worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.

I had the same problem when running Sonar with Maven. In my case it helped to call Sonar in a separate Maven invocation:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.

What you can do is create your own quality profile with just a few FindBugs rules at first, and then progressively add more and more until you hit the OutOfMemoryError. There's probably a single rule that makes it all fail because your code violates it so often - and if you deactivate that rule, the analysis will most likely succeed.

I know this thread is a bit old, but this info might help someone.
For me the problem was not the C++ plugin suggested in the top answer.
Instead, my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML).
After I deactivated it, the analysis worked again.

You can solve this issue by increasing the maximum memory allocated to the appropriate process, i.e. by raising the -Xmx setting for the corresponding Java process in your sonar.properties file,
under SonarQube/conf/sonar.properties.
Uncomment the lines below and increase the memory as needed (the sonar.web.javaOpts and sonar.search.javaOpts property names are filled in here for completeness; they follow the same pattern as the Compute Engine line):
For Web: sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
For ElasticSearch: sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
For Compute Engine: sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError

The problem is on the FindBugs side. I suppose you're analyzing a large project that has many violations. Take a look at these two threads on Sonar's mailing list about the same issue; there are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html

I know this is old, but I am just posting my answer anyway. I realized I was using the 32-bit JDK (version 8), and after uninstalling it and then installing the 64-bit JDK (version 12), the problem disappeared.

Related

File.exists() sometimes wrong on Windows 10 with Java 8.191

I have a bunch of unit tests which contain code like:
File file = new File("src/main/java/com/pany/Foo.java");
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
This test is suddenly failing when running it with Maven Surefire and -DforkCount=0. With -DforkCount=1, it works.
Things I tried so far:
The file does exist. Windows Explorer, command line (copy & paste), text editors, Cygwin can all find it and show the contents. That's why I think it's not a permission problem.
It's not modified by the unit tests or anything else. Git shows no modifications for the last two months.
I've checked the file system, it's clean.
I've tried other versions of Java 8, namely 8u171 and 8u181. Same problem.
I've run Maven from within Cygwin and the command prompt. Same result.
Reboot :-) No effect :-(
More details:
When I see this problem, I also start to see "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?" in other projects. That's why I tried forkCount=0, which often helps in this case to find out why the forked VM crashed.
This has started recently, maybe around the October 2018 update of Windows 10. Before that, the builds were rock solid for about three years. My machine was switched to Windows 10 late 2017, I think.
I'm using Maven 3.6 and can't easily try an older version because of an important bug that was fixed in it. I did see the VM crash above with Maven 3.5.2 as well.
It's always the same files which fail (so it's stable).
ulimit (from Cygwin) says:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2032
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
I'm wondering if the "open files" limit of 256 applies only to Cygwin processes or whether it's something Cygwin reads from Windows.
Let me know if you need anything else. I'm running out of ideas what I could try.
Update 1
Bernhard asked me to print absolute names. My answer was that I was already using absolute names, but I was wrong. The actual code was:
File file = new File("src/main/java/com/pany/Foo.java");
if (!file.exists()) {
log.debug("Missing file {}", file.getAbsolutePath());
... fail ...
}
... do something with file...
I have now changed this to:
File file = new File("src/main/java/com/pany/Foo.java").getAbsoluteFile();
if (!file.exists()) {
log.debug("Missing file {}", file);
}
and that fixed the problem. I just can't understand why.
When Maven creates a forked VM to run the tests with Surefire, it can change the current directory. So in this case, it makes sense that the tests work when forked but fail when running in the same VM (since that VM was created in the root folder of the multi-module build). But why does making the path absolute before calling exists() fix the issue?
Some background: each process has a notion of a "current directory". When started from the command line, it is the directory in which the command was executed. When started from the UI, it's usually the folder in which the program (the .exe file) resides.
In the command prompt or BASH, you can change this folder with cd for the process which runs the command prompt.
When Maven builds a multi-module project, it has to change this for each module (so that the relative path src/main/java/ always points to the right place). Unfortunately, Java doesn't have a "set current directory" method anywhere. You can only specify one when creating a new process and you can modify the system property user.dir.
That's why new File("a").exists() and new File("a").getAbsoluteFile().exists() work differently.
The latter will use new File(System.getProperty("user.dir"), "a") to determine the path. The former will use the Windows API function _wgetdcwd (docs), which in turn reads a field of the Windows process to get the current directory - in our case, that's always the folder in which Maven was originally started, because Java doesn't update the field in the process when someone changes user.dir, and Maven can only change this property to "simulate" changing folders.
WinNTFileSystem_md.c calls fileToNTPath(). That's defined in io_util_md.c and calls pathToNTPath(). For relative paths, it will call currentDirLength() which calls currentDir() which calls _wgetdcwd().
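To make the difference tangible, here is a minimal sketch (my own illustration, not code from the thread; the some-module/pom.xml layout is hypothetical):

import java.io.File;

public class UserDirDemo {
    public static void main(String[] args) {
        // Mimic what Surefire does: rewrite user.dir while the OS-level
        // current directory of the JVM process stays at the launch directory.
        // Assumption: a subdirectory "some-module" containing pom.xml exists.
        System.setProperty("user.dir", new File("some-module").getAbsolutePath());

        File f = new File("pom.xml");
        // exists() resolves the relative path against the process's real
        // current directory (on Windows via _wgetdcwd), i.e. the launch dir.
        System.out.println("relative: " + f.exists());
        // getAbsoluteFile() resolves against user.dir first, so this checks
        // some-module/pom.xml instead - hence the different result.
        System.out.println("absolute: " + f.getAbsoluteFile().exists());
    }
}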
See also:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/WinNTFileSystem_md.c
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/io_util_md.c
and here is the place where the Surefire plugin modifies the Property user.dir: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1060
When not forking, it's copied into the current VM's System properties: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1133
So I checked by printing out the system properties in some simple tests.
During tests run via the maven-surefire-plugin, user.dir is changed to the root of the appropriate module in a multi-module build.
But as I already mentioned, there is a system property basedir which can be used to correctly handle the location in tests that need to access files via java.io.File. The basedir property points to the location of the pom.xml of the appropriate module.
Unfortunately, the basedir property is not set by IntelliJ IDEA during test runs.
But this can be solved by a setup like this:
private String basedir;

@Before
public void before() {
    this.basedir = System.getProperty("basedir",
            System.getProperty("user.dir", "Need to think about the default value"));
}

@Test
public void testX() {
    File file = new File(this.basedir, "src/main/java/com/pany/Foo.java");
    assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
}
This works in Maven Surefire with -DforkCount=0 as well as -DforkCount=1, and in IDEs (checked only with IntelliJ IDEA).
And yes, this is an issue in the Maven Surefire plugin: it changes user.dir.
Perhaps we can convince IDEs to support the basedir property as well?
Aaron, we develop Surefire. We can help you if you provide the path for this:
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
Please post the actual path, the expected path, and the basedir where your POM resides.
Theory alone won't help here. We test across the whole spectrum of JDKs 7-12, but we do not have the Cygwin+Windows combination, which must be considered.
The code in Surefire that sets user.dir, which you mentioned, has existed for a decade.

Starting a Payara 5 server has encountered a problem

I have built a very simple hello-world project with
Payara 5 (5.181)
JSF 2.3
JDK 1.8
CDI 2.0
Maven
and encountered a problem
Unable to start server due following issues: Launch process failed with exit code 1
In the console it throws this error:
Error: Could not find or load main class server\payara5\glassfish.lib.grizzly-npn-bootstrap.jar
[PIC] Payara 5 Error
It seems that the Payara Tools for Eclipse suffer by several bugs that may cause this. In my case, the following workarounds helped:
The Payara installation path should not contain spaces (e.g. Program Files\Payara)
It seems that only Java 8 is supported at this time.
Open the domain.xml configuration file for the domain you are trying to start (typically payara_install_path/glassfish/domains/domain1/config/domain.xml) and search for "Xbootclasspath". You should find a couple of lines like
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
<jvm-options>[1.8.0u191|1.8.0u500]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
Depending on your installed Java version (check with java -version), choose the appropriate line (most likely the last one). Remove the remaining lines, and remove the [...] part at the beginning of the chosen line, so you get something like
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
After this, the tools seem to start normally.
The problem is with the Java version. The grizzly-npn-bootstrap-1.8.1.jar is placed on the bootclasspath, which is why it requires the proper Java version to start the Payara server. So remove the unnecessary bootstrap jar entries from domain.xml.
In Windows:
1) Go to C:\Users\xxxx\payara5\glassfish\domains\domain1\config\domain.xml
2) According to my Java version (java version "1.8.0_191") I deleted the following lines from domain.xml, so delete whichever lines do not match your Java version:
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
3) Remove the [1.8.0u191|1.8.0u500] part from the remaining jvm-options line and edit it (w.r.t. java -version) as shown below:
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
4) Restart your server.
As Radkovo said, "The Payara installation path should not contain spaces (e.g. Program Files\Payara)", so I moved Payara to the Documents folder.
Problem solved!

How to avoid OutOfMemoryErrors when processing big analysis reports?

Running SonarQube 5.6.6 from Jenkins on CentOS 7.3, I got the following error:
2017.09.01 19:05:16 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AV485bp0qXlQ-QPWWE9A
java.lang.OutOfMemoryError: Java heap space
2017.09.01 19:05:17 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=PP::Symphony3M | type=REPORT | id=AV485bp0qXlQ-QPWWE9A | time=74089ms
sonar.ce.javaOpts is set like below:
sonar.ce.javaOpts=-Xmx60g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
How much heap space should I give to SonarQube, when analyzing a one million LOC project? Or is there another way of avoiding Java heap space issues?
The maximum heap you can allocate depends on the free RAM on your server; the free command can help identify the stats. Based on the free RAM, you can set your -Xmx value.
By the way, make sure the code compiles on the server. Only if you are able to compile but not scan will increasing the heap help.
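For example, a quick way to see the available memory in megabytes before picking an -Xmx value (a standard Linux command; the figures will of course vary per server):
free -m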

Too small initial heap error - stanford parser

I am trying my hand at the Stanford dependency parser. I tried running the parser from the command line on Windows to extract dependencies using this command:
java -mx100m -cp "stanford-parser.jar" edu.stanford.nlp.trees.EnglishGrammaticalStructure -sentFile english-onesent.txt -collapsedTree -CCprocessed -parserFile englishPCFG.ser.gz
I am getting the following error:
Error occurred during initialization of VM
Too small initial heap
I changed the memory size to -mx1024, -mx2048 as well as -mx4096. It didn't change anything and the error persists.
What am I missing?
Type -Xmx1024m instead of -mx1024.
See https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html
It should be -mx1024m; I had skipped the m.
One more thing: the model jar should also be included in the -cp:
... -cp "stanford-parser.jar;stanford-parser-3.5.2-models.jar"...
(assuming you are using the latest version).
Otherwise an IOException will be thrown.
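Putting both fixes together, the full invocation would look something like this (the models jar name assumes version 3.5.2, as above):
java -mx1024m -cp "stanford-parser.jar;stanford-parser-3.5.2-models.jar" edu.stanford.nlp.trees.EnglishGrammaticalStructure -sentFile english-onesent.txt -collapsedTree -CCprocessed -parserFile englishPCFG.ser.gz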
There may be some preexisting arguments in the IDE.
In Eclipse:
Go to Run As -> Run Configurations -> Arguments,
then delete the arguments that were used previously.
Restart Eclipse.
Worked for me!

Travis-ci boost log compilation with biicode time-out

I am using travis-ci and biicode to build my project, which depends on Boost.Log. But Boost.Log compile times are longer than 10 minutes, so I get this message:
No output has been received in the last 10 minutes, this potentially indicates a
stalled build or something wrong with the build itself.
The build has been terminated
The build is working correctly; it's just that Boost.Log takes very long to compile with limited resources (I tried to compile it on a VM with 1 CPU and 2 GB of RAM and it took more than 15 minutes).
I know this is happening because there is not enough verbose output, so I tried the following:
>bii cpp:build -- VERBOSE=1
In the CMakeLists.txt, set BII_BOOST_VERBOSE ON as mentioned here
Set BOOST_LOG_COMPILE_FAST_ON as explained here
Using travis_wait
Actually, travis_wait seems to be the correct solution, but when I put it in my .travis.yml like this
script: travis_wait bii cpp:build
it doesn't output logs as usual and just times out after 20 minutes. I don't think the actual building is taking place.
What is the correct way to handle this problem?
This is a known issue, Boost.Log takes a long time to compile.
You can use travis_wait to call bii cpp:configure, but I'm with you, I need log feedback (no pun intended). However, I have tried that too, and it led to a >50 min build, which means Travis aborts the build on free accounts :( Of course, my repo does not build Boost.Log only.
Just as a note, here's part of the settings.py file from the boost-biicode repo:
#Boost.Log takes so much time to compile, leads to timeouts on Travis CI
#It was tested on Windows and linux, works 'ok' (Be careful with linking settings)
if args.ci: del packages['examples/boost-log']
I'm currently working on a solution that launches asynchronous builds while printing progress. Check this issue. It should be ready this week :)
To speed up your build, try playing with the BII_BOOST_BUILD_J variable to set the number of threads you want for building the Boost components. Here's an example:
script:
- bii cpp:configure -DBII_BOOST_BUILD_J=4
Be careful: more threads means more RAM needed to compile at a time. Be sure you don't make the Travis job VM run out of memory. A combined sketch follows below.
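Putting the thread count and travis_wait together, a .travis.yml script section might look like this (the 30-minute budget and the -j4 value are illustrative, not from the thread):
script:
  - bii cpp:configure -DBII_BOOST_BUILD_J=4
  - travis_wait 30 bii cpp:build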
