gmaven plugin is giving object heap space error - maven

I am getting an error from the gmaven-plugin: Groovy fails, saying it could not get enough object heap space for the classpath. I am using a 32-bit system and the IntelliJ IDEA IDE. I have tried various options but could not resolve it.
What I have tried:
<configuration>
  <argLine>-Xmx1024m</argLine>
</configuration>
-Xmx256M -XX:MaxPermSize=512m
For 32-bit:
-Xms1336m -Xmx1336m
I got the same error and resolved it by configuring run.conf.bat.
Run the JVM with the settings configured in run.conf.bat in JBoss 5.x.
If the free memory you are requesting in that statement is not available, then make changes in run.conf.bat:
set "JAVA_OPTS=-Xms512m -Xmx512m -XX:MaxPermSize=256m"
<configuration>
  <maxmemory>1024M</maxmemory>
</configuration>
According to this IBM document about the Java heap size (along with some hints about setting the right heap size), the limits for Windows are:
• maximum possible heap size on 32-bit Java: 1.8 GB
• recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
MAVEN_OPTS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m"
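Note that -Xmx2048m exceeds the 32-bit limits quoted above, so on a 32-bit system the JVM may refuse to start at all. A sketch that stays within those limits (values are illustrative; set before invoking Maven on Windows):
rem keep -Xmx under ~1.5 GB on 32-bit Java; MaxPermSize as in the earlier attempts
set MAVEN_OPTS=-Xmx1024m -XX:MaxPermSize=256m
mvn clean install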
Could anyone help me out in getting rid of this?

Related

How to avoid OutOfMemoryErrors when processing big analysis reports?

Running SonarQube 5.6.6 from Jenkins on CentOS 7.3, I got the following error:
2017.09.01 19:05:16 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AV485bp0qXlQ-QPWWE9A
java.lang.OutOfMemoryError: Java heap space
2017.09.01 19:05:17 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=PP::Symphony3M | type=REPORT | id=AV485bp0qXlQ-QPWWE9A | time=74089ms
sonar.ce.javaOpts is currently set as follows:
sonar.ce.javaOpts=-Xmx60g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
How much heap space should I give to SonarQube, when analyzing a one million LOC project? Or is there another way of avoiding Java heap space issues?
The maximum heap you can allocate depends on the free RAM on your server; the free command can help identify the stats. Based on the free RAM you can set your Xmx value.
BTW, make sure the code compiles on the server. If the code compiles but the scan still fails, then increasing the heap will help.
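As a rough sketch of that approach (the 8g figure is an illustration, not a sizing recommendation; pick a value that fits the free RAM reported):
free -h
# then, in conf/sonar.properties, size the Compute Engine accordingly:
sonar.ce.javaOpts=-Xmx8g -Xms1g -XX:+HeapDumpOnOutOfMemoryError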

How to run two versions of ElasticSearch with custom heap size on same server?

I need to run two ElasticSearch instances (version 1.7 and version 5.2.2) side by side on the same server (Windows 2012 R2). When I try to run the newer version, I receive an error:
PS C:\Program Files\Elasticsearch\elasticsearch-5.2.2\bin> .\elasticsearch.bat
Error: encountered environment variables that are no longer supported
Use jvm.options or ES_JAVA_OPTS to configure the JVM
ES_HEAP_SIZE=8g: set -Xms8g and -Xmx8g in jvm.options or add "-Xms8g -Xmx8g" to ES_JAVA_OPTS
This is caused by the fact that there was a breaking change (described here) in the way the heap size is set. In the previous version of ElasticSearch (1.7) it was set by an environment variable:
ES_HEAP_SIZE = 8g
I tried setting up another env variable:
ES_JAVA_OPTS = -Xms8g -Xmx8g
and I also edited jvm.options file by adding
-Xms8g
-Xmx8g
but I'm still getting the same error.
Is there a way to configure heap size in ElasticSearch 5.2.2 without deleting ES_HEAP_SIZE environment variable (which I need to keep version 1.7 up and running)? If not, is it possible to set heap size in the old version in a way that would also allow the new version to run?
Edit: Given that jvm.options is not an option, the only thing I see is modifying your elasticsearch/bin/elasticsearch script, mostly the line:
ES_JAVA_OPTS="$(parse_jvm_options "$ES_JVM_OPTIONS") $ES_JAVA_OPTS" #default
to:
ES_JAVA_OPTS="-Xms myXmsValue -Xmx myXmxValue -someOtherOptions someValue"
With the other options and value according to what you want.
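For example, to mirror the 8 GB target from the question, the replaced line could simply read (hard-coding the heap instead of parsing jvm.options; illustrative only):
ES_JAVA_OPTS="-Xms8g -Xmx8g"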
Here is what I found out:
It turns out that the error I was struggling with was raised not by ElasticSearch itself, but by the batch script that launches ElasticSearch (bin\elasticsearch.bat). The problematic lines are:
if not "%ES_HEAP_SIZE%" == "" set bad_env_var=1
(...)
if %bad_env_var% == 1 (
echo Error: encountered environment variables that are no longer supported
echo Use jvm.options or ES_JAVA_OPTS to configure the JVM
(...)
if not "%ES_HEAP_SIZE%" == "" echo ES_HEAP_SIZE=%ES_HEAP_SIZE%: set -Xms%ES_HEAP_SIZE% and -Xmx%ES_HEAP_SIZE% in jvm.options or add "-Xms%ES_HEAP_SIZE% -Xmx%ES_HEAP_SIZE%" to ES_JAVA_OPTS
(...)
exit /b 1
)
As far as I can see, this check is there to help people migrating from an older version to a newer one, pointing them to the new way of setting the heap size. Apart from that, the environment variable ES_HEAP_SIZE is not mentioned in the script, so its existence should not affect the ElasticSearch 5.2.2 instance. Based on these observations, the easiest fix seems to be simply commenting out the check:
rem if not "%ES_HEAP_SIZE%" == "" set bad_env_var=1
I tried it and both instances of ElasticSearch now run side by side without issues.
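To summarize the working setup (a sketch; the 8g values are from the question, adjust to taste):
rem ES 1.7 still reads the environment variable:
set ES_HEAP_SIZE=8g
rem ES 5.2.2, with the bad_env_var check commented out, reads config\jvm.options instead:
rem   -Xms8g
rem   -Xmx8g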
One additional trap to avoid: do not set up the heap size in both jvm.options and ES_JAVA_OPTS, which leads to a "duplicate min heap size settings found" error when deploying. Just stick to the file config.
Huge thanks to @asettouf, whose answer led me to the correct solution!

How to increase available memory size in GroovyConsole?

I'm running scripts inside GroovyConsole 2.4.5 on Windows 7 64-bit and they are crashing due to an out-of-memory error. Runtime.getRuntime().maxMemory() shows 247 MB while my PC has 32 GB of RAM. What is the way to increase the memory available to GroovyConsole and the underlying JVM?
I tried editing the startGroovy.bat file with:
set GROOVY_OPTS="-Xmx2g -Xms1g"
and other values, but it had no effect.
I'm not on Windows, so I can't test, but you should be able to use JAVA_OPTS instead of GROOVY_OPTS, i.e.:
set JAVA_OPTS="-Xmx1G"
before you run groovyConsole.
You're already doing it correctly by editing startGroovy.bat; simply try a lowercase g when setting GROOVY_OPTS:
set GROOVY_OPTS="-Xmx1g"
After some tries I see the following effect: if I use quotes (") to set GROOVY_OPTS, it only works with one parameter. If I want to use two parameters (-Xmx1g -Xms512m), I have to remove the quotes, otherwise it doesn't work. So you can try:
set GROOVY_OPTS=-Xmx1g -Xms512m
Instead of
set GROOVY_OPTS="-Xmx1g -Xms512m"
Hope it helps.
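Either way, you can verify that the setting took effect from inside the console, using the same call as in the question:
println "Max heap: ${Runtime.getRuntime().maxMemory() / (1024 * 1024)} MB" // should now report roughly your -Xmx value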

Sonar - OutOfMemoryError: Java heap space

I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m to increase the heap size in the sonar-runner .bat file.
Set the "sonar.findbugs.effort" parameter to "Min" in the Sonar global parameters.
But neither of the above methods worked for me.
I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers/comments. With 10 million lines of code (that's more code than is in an F-16 fighter jet), if you have 100 characters per line (a crazy size), you could load the whole code base into 1 GB of memory. I set it to 8 GB of memory and it still failed. Why?
Answer: because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300 MB of pure code but that really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print $1 if m/\.([^^.\\\\]+)$/' | sort -u | grep c
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
Another option is to raise the values directly on the Java invocation line in sonar-runner.bat:
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
This has worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.
I had the same problem when running Sonar with Maven. In my case it helped to call Sonar separately:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.
What you can do is create your own quality profile with just some FindBugs rules at first, and then progressively add more and more until you hit the OutOfMemoryError. There's probably only a single rule that makes all this fail because your code violates it; if you deactivate that rule, it will certainly work.
I know this thread is a bit old, but this info might help someone.
For me the problem was not the C++ plugin, as suggested by the top answer.
Instead, my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML);
after I deactivated it, the analysis worked again.
You can solve this issue by increasing the maximum memory allocated to the appropriate process, i.e. the -Xmx setting for the corresponding Java process in your sonar.properties file,
under SonarQube/conf/sonar.properties.
Uncomment the lines below and increase the memory as you want:
For Web: sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
For ElasticSearch: sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
For Compute Engine: sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
The problem is on the FindBugs side. I suppose you're analyzing a large project that probably has many violations. Take a look at two threads in Sonar's mailing list about the same issue; there are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html
I know this is old, but I am posting my answer anyway. I realized I was using the 32-bit JDK (version 8); after uninstalling it and installing the 64-bit JDK (version 12), the problem disappeared.

JDeveloper: Could not reserve enough space for object heap

Hi, I am encountering the following error when deploying a project from my JDeveloper Studio.
[scac] Error occurred during initialization of VM
[scac] Could not reserve enough space for object heap
Can anyone advise on how to resolve this issue?
In case you have enough free RAM on your computer:
go to the jdev.conf file (~/Oracle/middleware/jdeveloper/jdev/bin) and allow the JVM more memory there.
I haven't checked, but you could add:
AddVMOption -XX:MaxHeapSize=512m
or whatever value you want.
See \jdeveloper\bin\ant-sca-compile.xml.
Change the Xmx value on the <jvmarg> line specified there; your system can't reserve enough memory.
Reducing the -Xmx value in \jdeveloper\bin\ant-sca-compile.xml worked for me:
<target name="scac" description="Compile and validate a composite">
  <scac input="${scac.input}" outXml="${scac.output}" error="${scac.error}" appHome="${scac.application.home}" failonerror="true" displayLevel="${scac.displayLevel}">
    <jvmarg value="-Xms128m"/>
    <!-- was: <jvmarg value="-Xmx1024m"/> -->
    <jvmarg value="-Xmx700m"/>
    <jvmarg value="-XX:PermSize=32m"/>
    <jvmarg value="-XX:MaxPermSize=256m"/>
    <!-- jvmarg value="-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"/ -->
  </scac>
</target>
If you change jdev.conf you may experience the error:
Unable to create instance of the Virtual Java Machine Located at Path:
C:\Program Files(x86)\Java\jdk1.6.0_45\jre\bin\client\jvm.dll
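That message typically means the 32-bit JVM cannot reserve the contiguous heap requested in jdev.conf. As with the ant-sca-compile.xml fix above, lowering the requested size (the value below is only an example) usually resolves it:
AddVMOption -XX:MaxHeapSize=512m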
