I am using Sonar Runner 2.2 and set SONAR_RUNNER_OPTS=-Xmx8000m, but I am getting the following error:
Final Memory: 17M/5389M
INFO: ------------------------------------------------------------------------
ERROR: Error during Sonar runner execution
ERROR: Unable to execute Sonar
ERROR: Caused by: Java heap space
How can this be?
I had the same problem and found a very different solution, perhaps because I don't believe any of the previous answers/comments. With 10 million lines of code (that's more code than is in an F-16 fighter jet), at 100 characters per line (a generous size), you could load the whole code base into 1 GB of memory. Simple math. Why would 8 GB of memory fail?
Answer: Because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. It runs out of memory because it's trying to read some file that it thinks is 300 MB of pure code but that really should be ignored.
Solution: Find the extensions used by all of the files in your project.
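One way to get that list (a sketch; the one-liner assumes a POSIX shell on Linux/macOS, and on Windows you can start from the raw listing produced by dir /s /b):
# count files per extension under the project root
find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn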
Then add the extensions that aren't source code as exclusions in your sonar-project.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts:
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
Allowing the heap to grow up to 8000m does not mean that you will always have enough physical memory to get there, since other processes running on your operating system also consume memory. For instance, if you have "only" 8 GB of RAM on your machine, it's likely that the heap will never be able to reach the maximum you've set.
BTW, I don't know what you're trying to analyse, but I've never seen anyone need that much memory to analyse a project.
I faced the same problem while running test cases. With the help of VisualVM, I analysed the minimum and maximum memory allocated to PermGen during test execution and found that only 80 MB was being allocated to it. This can be managed through the <properties> section of the pom.xml as follows.
<properties>
<argLine>-XX:PermSize=256m -XX:MaxPermSize=256m</argLine>
</properties>
The <argLine> tag can be used either in the <maven-surefire-plugin> configuration or in <properties>. The advantage of putting it in <properties> is that the same configuration is picked up by both the test run and Sonar.
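For comparison, the plugin-level variant would look roughly like this (a sketch; plugin version omitted):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- same JVM flags, but now visible only to Surefire -->
    <argLine>-XX:PermSize=256m -XX:MaxPermSize=256m</argLine>
  </configuration>
</plugin>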
In Maven jobs we have the Maven options to set JVM parameters. At present I am using a stack size of -Xss30M in some jobs because we hit this error with anything lower:
[ERROR] error: java.lang.StackOverflowError
Most of our jobs run fine with -Xss10M or -Xss15M. I have tried pushing the value further for testing purposes and found that it can go up to -Xss200M.
What is the significance of this stack value?
Does a larger stack have a likelihood of crashing the build instance?
Can a large -Xss requirement be traced back to memory leaks or other source code issues?
I am using a t2.small machine with 1 CPU and 1 executor.
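For context, this is roughly how the value gets passed to the JVM that runs the build (an illustration, not our exact job configuration; the variable name and the other flags are assumptions):
# Jenkins Maven job on a Linux agent: thread stack size and heap passed via MAVEN_OPTS
export MAVEN_OPTS="-Xss30m -Xmx1024m"
mvn clean install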
I'm trying to run my application after compiling it with AdaCore's GPS (GNAT Programming Studio).
I get the run time error
Exception name: STORAGE_ERROR
Message: EXCEPTION_STACK_OVERFLOW
I get these run time errors despite setting the stack size in the binder options using
-d65535 (task stack size) and
-D65535 (secondary stack size)
(I have also tried 65535k for both, as well as 655m.)
The application runs well when compiled with the Aonix ObjectAda compiler. In the Aonix compiler I set the
- stack size to 65535,
- the secondary stack size to 65535
- and the Task stack size to 46345.
My main aim is to port the application to the GNAT Ada compiler.
I notice that -d sets the task stack size and -D the secondary stack size, but I can't see where to set the main stack size. I assume this is the issue with the application, but please correct me if I am looking in the wrong direction.
Any pointers would be greatly appreciated.
Bearslumber
If the problem is indeed the main task, a workaround is to move the main procedure to the body of a helper task.
First, compile for debug (-g); there may be other relevant options (posting wrong information is the fastest way to find them ;-), and you should get more information: the source file and line that raised the exception, or a stack trace which you can analyze via addr2line.
That should help you understand why it is raising...
Are you allocating hundreds of MB on the stack? I've got away with about 200MB in the past...
Is the raise within one of the container classes or part of the RTS?
Is the message actually misleading and a new() heap allocation failed? Other things than the stack can raise Storage_Error, and I'm not clear how or if the default handler distinguishes the cause...
We can't proceed further down this path without further information: edit it into the question and I'll take another look.
Setting the stack size for the environment task is not directly possible within GNAT. It's part of gcc's interaction with the OS, and is supposed to use the system's ulimit settings for the resulting executable (on Linux; other OSes may vary)...
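If the ulimit route works on your system, it looks something like this (a sketch for bash on Linux; the value is in kilobytes and the program name is illustrative):
ulimit -s unlimited      # or e.g. ulimit -s 1048576 for a ~1 GB main stack
./my_program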
Unfortunately, around the gcc/GNAT 4.5 timeframe I found evidence that these were being ignored; that may have been corrected, but I haven't revisited the problem.
I have seen Alex's answer posted elsewhere as a viable workaround if the debug trace and ulimit settings don't provide the answer, or if you need to press on instead of spending time debugging. To keep the codebase clean, I would suggest a wrapper, simply creating the necessary task and calling your current main procedure. For Aonix you simply don't need to include the wrapper file in your build.
Our PVCS service goes down once the physical memory usage of the server gets high. Once the server is restarted (not recommended), the service comes back up. Is there any permanent fix for this?
I resolved this issue by increasing the heap size parameters... :-)
1. On the server system, open the following file in a text editor:
Windows as of VM 8.4.6: VM_Install\vm\common\bin\pvcsrunner.bat
Windows prior to VM 8.4.6: VM_Install\vm\common\bin\pvcsstart.bat
UNIX/Linux: VM_Install/vm/common/bin/pvcsstart.sh
2. Find the following line:
set JAVA_OPTS=
And set the value of the following parameters as needed:
-Xmsvaluem -Xmxvaluem
3. If you are running a VM release prior to 8.4.3, make sure -Dpvcs.mx= is followed by the same value shown after -Xmx.
4. Save the file and restart the server.
The following is a rule of thumb when increasing the values for -Xmx:
• 256m -> 512m
• 512m -> 1024m
• 1024m -> 1280m
As Riant points out above, adjusting the heap size is your best course of action here. I actually supported PVCS for nine years, until this time in 2014 when I jumped ship. Riant's numbers are exactly what I would recommend.
I would actually counsel a lot of customers to set -Xms and -Xmx to the same value (basically start it at 1024) because if your PDBs and/or your user community are large you're going to hit the ceiling quicker than you might realize.
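For example, on Windows that might look like this (values illustrative; size them to what the server can actually spare):
set JAVA_OPTS=-Xms1024m -Xmx1024m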
So, after the job has run for a percent or so I get an error that says "Error: Java heap space" and then something along the lines of "Application container killed".
I am literally running an empty map and reduce job. However, the job does take an input that is roughly 100 GB. For whatever reason, I run out of heap space, even though the job does nothing.
I am using the default configuration on a single machine, running Hadoop 2.2 on Ubuntu. The machine has 4 GB of RAM.
Thanks!
Note: Got it figured out.
It turns out I was configuring a different terminating token/string as the record delimiter. The format of the data had changed, so that token no longer existed, and the job was trying to load all 100 GB into RAM for a single key.
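For anyone hitting the same thing: the delimiter in question is the textinputformat.record.delimiter property, so make sure whatever you set actually occurs in the data. Roughly how it gets overridden (jar name, class, delimiter and paths are all illustrative, and passing -D on the command line assumes the driver uses ToolRunner):
hadoop jar myjob.jar MyJob -D textinputformat.record.delimiter='</record>' input/ output/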
I have a VS 2005 application written in C++. It basically imports a large XML file of around 9 GB into the application. After running for more than 18 hours it gave the exception 0xc0000006 (in-page error). The virtual memory consumed was 2.6 GB (I have set the 3GB flag).
Does anyone have a clue as to what caused this error and what the solution could be?
Instead of loading the whole file into memory, you can use a SAX parser to load only part of the file into memory at a time.
9 GB seems overly large to read in; I would say that even 3 GB is too much in one go.
Is your OS 64-bit?
What is the maximum pagefile size set to?
How much RAM do you have?
Were you running this in debug or release mode?
I would suggest that you try reading the XML in smaller chunks.
Why are you trying to read in such a large file in one go?
I would imagine that your application took so long to run before failing because it started to copy the file into virtual memory, which is basically a large file on the hard disk. The OS was thus reading the XML from the disk and writing it back to a different area of the disk.
Edit - added text below:
Having had a quick peek at the Expat XML parser, it does look as if you're running into problems with stack or event handling; most likely you are adding too much to the stack.
Do you really need 3 GB of data on the stack? At a guess I would say that you are trying to process an XML database file, but I can't imagine that you have a table row that large.
I think you should really be using it to search for the key areas you need and discarding what is not wanted.
I know nothing about the Expat XML parser beyond what I have just read, but I would suggest that you are not using it in the most efficient manner.
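For what it's worth, here is a rough sketch of chunked parsing with Expat, reading the file 64 KB at a time so that only the current slice, plus whatever you explicitly copy out, is ever in memory (untested; the "row" element name, file name and buffer size are illustrative):

// Minimal sketch: stream a huge XML file through Expat in fixed-size chunks.
// Only the data you copy out in the handlers stays in memory.
#include <cstdio>
#include <cstring>
#include <expat.h>

static void XMLCALL startElement(void *userData, const XML_Char *name, const XML_Char **atts)
{
    (void)userData; (void)atts;
    if (std::strcmp(name, "row") == 0) {
        // pull out only the fields you need here, then let them go
    }
}

static void XMLCALL endElement(void *userData, const XML_Char *name)
{
    (void)userData; (void)name;
}

int main()
{
    std::FILE *f = std::fopen("huge.xml", "rb");
    if (!f) return 1;

    XML_Parser parser = XML_ParserCreate(NULL);
    XML_SetElementHandler(parser, startElement, endElement);

    char buf[64 * 1024];               // parse 64 KB at a time, not 9 GB at once
    std::size_t len;
    int ok = 1;
    do {
        len = std::fread(buf, 1, sizeof buf, f);
        int isFinal = len < sizeof buf;
        if (XML_Parse(parser, buf, (int)len, isFinal) == XML_STATUS_ERROR) {
            std::fprintf(stderr, "parse error at line %lu\n",
                         (unsigned long)XML_GetCurrentLineNumber(parser));
            ok = 0;
            break;
        }
    } while (len == sizeof buf);

    XML_ParserFree(parser);
    std::fclose(f);
    return ok ? 0 : 1;
}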