I have an Ant build.xml file that executes with no problems on my machine (Ubuntu), but throws the following error:
/var/lib/hudson/workspace/myproject/build.xml:254: Error running /var/lib/hudson/tools/java_6/bin/javac compiler
at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:525)
(...)
Caused by: java.io.IOException: Cannot run program "/var/lib/hudson/tools/java_6/bin/javac": java.io.IOException: error=7, Argument list too long
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at java.lang.Runtime.exec(Runtime.java:593)
at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:862)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:481)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:495)
at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:522)
... 19 more
Caused by: java.io.IOException: java.io.IOException: error=7, Argument list too long
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
... 24 more
The argument list is quite big: in fact, it contains all the jar files from WEB-INF/lib and is 231650 characters long!
Any suggestions on how to fix it?
With a command that long you are likely running into the ARG_MAX limit of the shell.
This will report a good estimate of the available length:
expr `getconf ARG_MAX` - `env|wc -c` - `env|wc -l` \* 4 - 2048
A nice article about command argument lists and their length limits can be found here.
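For example, on the build machine you can compare the raw kernel limit with the overhead already consumed by the environment (a quick sketch; the numbers are whatever your system reports):
getconf ARG_MAX    # limit on argv + environment, in bytes
env | wc -c        # bytes already taken up by the environment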
Run ant -d. This will produce copious amounts of output, but it will also show your entire compile line, which may help you understand why it is so long.
Are you using Jenkins/Hudson and that's where the error occurs?
Try the following:
Disable the build.
Log into your build server, AS YOUR JENKINS USER, and find the workspace directory where Jenkins/Hudson is attempting the build.
You may have to change $PATH or set $JAVA_HOME to point to the JDK that Hudson/Jenkins is using.
Now, run ant -d <target> just as Jenkins/Hudson would. Pipe this output through tee into a file. Then take a look and see what Hudson/Jenkins is doing and why javac gets too many arguments.
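A session on the build server might look like this (a sketch only; the user name, JDK path, workspace path, and target name are assumptions based on the paths in the stack trace above):
sudo -u hudson -i
export JAVA_HOME=/var/lib/hudson/tools/java_6
cd /var/lib/hudson/workspace/myproject
ant -d compile 2>&1 | tee ant-debug.log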
Use the apply task with your fileset in your build.xml, e.g.
<?xml version="1.0" encoding="UTF-8"?>
<project default="build">
    <fileset id="myfiles" dir="${basedir}">
        <include name="**/*.java"/>
        <exclude name="**/Resources/**"/>
        <modified>
            <param name="cache.cachefile" value="${basedir}/cache.${project}.fileset.myfiles.properties"/>
        </modified>
    </fileset>
    <target name="execute-some-command">
        <apply executable="javac" dir="${basedir}" failonerror="true">
            <fileset refid="myfiles"/>
        </apply>
    </target>
</project>
By default, the command will be executed once for every file.
If you use parallel="true" to run the command only once, then use maxparallel to limit the amount of parallelism, passing at most this many source files at once (e.g. set it to 1000 to pass a thousand files per run). For example:
<apply executable="javac" parallel="true" maxparallel="1000" dir="${basedir}">
    <fileset refid="myfiles"/>
</apply>
To see how many files you've got in total, check the content of the cache file (look for cache.cachefile in the example above).
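Since the oversized argument in the original question is the jar classpath from WEB-INF/lib rather than the list of source files, another option is a "pathing jar": Ant's manifestclasspath task writes the long classpath into a jar manifest, so javac only ever sees one short -classpath entry. A minimal sketch, in which the compile.classpath reference and the build.dir/src.dir/classes.dir properties are assumptions:
<path id="compile.classpath">
    <fileset dir="${basedir}/WEB-INF/lib" includes="*.jar"/>
</path>
<!-- Compute the manifest Class-Path relative to where the pathing jar will live -->
<manifestclasspath property="manifest.classpath" jarfile="${build.dir}/pathing.jar">
    <classpath refid="compile.classpath"/>
</manifestclasspath>
<jar destfile="${build.dir}/pathing.jar">
    <manifest>
        <attribute name="Class-Path" value="${manifest.classpath}"/>
    </manifest>
</jar>
<!-- javac now receives a single jar on its command line -->
<javac srcdir="${src.dir}" destdir="${classes.dir}" classpath="${build.dir}/pathing.jar"/>
The Class-Path entries are resolved relative to the pathing jar's own location, which is why manifestclasspath needs the jarfile attribute.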
Related
This is the gschema.xml code for my app:
<?xml version="1.0" encoding="UTF-8"?>
<schemalist>
    <schema path="/com/github/Suzie97/epoch"
            id="com.github.Suzie97.epoch"
            gettext-domain="com.github.Suzie97.epoch">
        <key name="pos-x" type="i">
            <default>360</default>
            <summary>Most recent x position of Epoch</summary>
            <description>Most recent x position of Epoch</description>
        </key>
        <key name="pos-y" type="i">
            <default>360</default>
            <summary>Most recent y position of Epoch</summary>
            <description>Most recent y position of Epoch</description>
        </key>
    </schema>
</schemalist>
This is the meson.build file to install the gschema:
install_data(
    'gschema.xml',
    install_dir: join_paths(get_option('datadir'), 'glib-2.0', 'schemas'),
    rename: meson.project_name() + '.gschema.xml'
)
When I compile, this error is displayed:
Settings schema 'com.github.Suzie97.epoch' is not installed
This is the post_install.py script:
#!/usr/bin/env python3
import os
import subprocess
install_prefix = os.environ['MESON_INSTALL_PREFIX']
schemadir = os.path.join(install_prefix, 'share/glib-2.0/schemas')
if not os.environ.get('DESTDIR'):
    print('Compiling the gsettings schemas ... ')
    subprocess.call(['glib-compile-schemas', schemadir])
Why is this happening?
There are various issues at play:
the name of the schema file should match your application's identifier; in this case, it would be com.github.Suzie97.epoch.gschema.xml
you should not rename the file on installation; just installing it under the glib-2.0/schemas data directory is enough
you should call glib-compile-schemas $datadir/glib-2.0/schemas to "compile" all the schemas once you have installed your application; this is typically done as a post-installation script in Meson, using meson.add_install_script().
GSettings does not use the XML per se: the glib-compile-schemas tool will generate a cache file that will be shared by all applications using GSettings, and will be fast to load. This is why GSettings will warn you that the schema is not installed: you are missing that last compilation step.
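Putting those points together, a minimal sketch of the corrected meson.build (assuming the schema file has been renamed to com.github.Suzie97.epoch.gschema.xml in the source tree, and reusing the post_install.py from the question):
# Install the schema under its application-id name, without renaming
install_data(
    'com.github.Suzie97.epoch.gschema.xml',
    install_dir: join_paths(get_option('datadir'), 'glib-2.0', 'schemas')
)
# Compile all installed schemas; post_install.py already skips this when DESTDIR is set
meson.add_install_script('post_install.py')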
I have been facing this problem for a while:
make[1]: *** read jobs pipe: Resource temporarily unavailable. Stop.
I have a top-level makefile that "calls" additional cmake and make builds based on its rules.
Target A is relatively small C/C++ code; target B is much bigger.
The issue happens mainly while building target B, but not always. I use make 3.82.
I have also tried make 4.3, and with it the error occurs every time.
Here is a skeleton of my build system:
MAKEFLAGS = --jobs=10 --max-load=20

<target A>:
	+. /usr/share/Modules/init/bash && \
	module load <target> && \
	cmake --build <project buildsystem> --target <target A>

<target B>: <target A>
	+. /usr/share/Modules/init/bash && \
	module load <target> && \
	make -C <path> <target B> && \
	cmake --build <project buildsystem> --target <target B>
/usr/share/Modules/init/bash
The Modules package and the module command are initialized when a shell-specific initialization script is sourced into the shell. The script creates the module command as either an alias or function and creates Modules environment variables.
https://modules.readthedocs.io/en/latest/module.html
(--max-load is equal to the number of cores on the build server. --jobs is half of it.)
I have verified that MAKEFLAGS is passed down correctly (I have checked the number of gcc/g++ processes during the build) and I don't get this warning:
warning: -jN forced in submake: disabling jobserver mode.
Host OS: RedHat 7.6
cmake: 3.16.4
How can I solve this issue?
I had this exact same error on nested makes. The issue ended up being hardware: I added a CPU temperature throttling background job to keep the CPU cores below 85C, and the errors went away.
My system is a Core i9-10850K with a very small water cooler. During the heavy parts of the compile, the cores were throttled from the stock 5GHz down to about 3.8GHz. Without the throttle, core temps actually hit 100C.
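On Linux, one crude way to impose such a cap (a sketch, assuming the lm-sensors and cpupower tools are installed; the 3.8GHz figure is just what my machine sustained):
# Watch core temperatures while the build runs
watch -n 2 sensors
# Cap the maximum core frequency so the package stays below the throttle point
sudo cpupower frequency-set --max 3.8GHz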
Hi, I am editing my Android Docker image, which builds my Android APK.
I want to add a Checkstyle step which should cause an abort if any warnings occur.
I have it working in that it runs Checkstyle, but it just outputs warnings. I do not see a way of turning these into errors or of halting the operation like Lint does. What should I add to my Dockerfile?
java -jar ./styleguide/checkstyle-7.7-all.jar -c ./styleguide/rules/google_checks.xml .
As I do not follow the Google indentation style, I get 18k errors that look like:
[WARN] pathstuff/./app/src/testRelease/java/com/app/BuildConfigReleaseTest.java:41: 'method def rcurly' has incorrect indentation level 4, expected level should be 2. [Indentation]
Audit done.
These are what I want to abort on. Preferably it would list all of them, but even just reporting that Checkstyle needs to be run would be enough.
Thanks!
I have it working in that it runs Checkstyle, but it just outputs warnings.
This is being overridden inside the google_checks.xml file. Checkstyle, by default, will print everything as errors. If anything else comes up, then the configuration is overriding it.
I do not see a way of turning these into errors
Open up google_checks.xml and look for the line similar to: <property name="severity" value="warning"/>
Change warning to error in the value attribute and it will print violations as errors.
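With the severity set to error, the Checkstyle CLI exits with a non-zero code when violations are found, so a plain RUN step in the Dockerfile (reusing the command from the question) aborts the image build by itself:
RUN java -jar ./styleguide/checkstyle-7.7-all.jar -c ./styleguide/rules/google_checks.xml .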
I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m to increase the heap size in sonar-runner bat file.
Put "sonar.findbugs.effort" parameter as "Min" in Sonar global parameters.
But neither of the above methods worked for me.
I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers/comments. With 10 million lines of code (that's more code than is in an F-16 fighter jet), if you have 100 characters per line (a crazy size), you could load the whole code base into 1GB of memory. I set it to 8GB of memory and it still failed. Why?
Answer: because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300MB of pure code but that really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print $1 if m/\.([^.\\]+)$/' | sort -u | grep c
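If you're in a Unix-style shell instead of cmd.exe, an equivalent one-liner (my own sketch, not from the original answer) would be:
find . -type f | perl -ne 'print "$1\n" if m/\.([^.\/]+)$/' | sort -u | grep c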
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
This has worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.
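On Linux, the same setting can be exported in the environment before invoking the runner (the value is copied from above; the launcher script picks up SONAR_RUNNER_OPTS):
export SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
sonar-runner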
I had the same problem when running sonar with maven. In my case it helped to call sonar separately:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.
What you can do is create your own quality profile with just some FindBugs rules at first, and then progressively add more and more until you reach the OutOfMemoryError. There's probably only a single rule that makes all this fail because your code violates it, and if you deactivate this rule, it will certainly work.
I know this thread is a bit old, but this info might help someone.
For me the problem was not the C++ plugin, as suggested by the top answer.
Instead my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML).
After I deactivated it, the analysis worked again.
You can solve this issue by increasing the maximum memory allocated to the appropriate process: raise the -Xmx memory setting for the corresponding Java process in your sonar.properties file,
under SonarQube/conf/sonar.properties.
Uncomment the lines below and increase the memory as you want:
# For the Web server:
sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
# For ElasticSearch:
sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
# For the Compute Engine:
sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
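After editing sonar.properties, restart the SonarQube server so the new limits take effect; on Linux the bundled control script can be used (the platform directory name varies by install):
./bin/linux-x86-64/sonar.sh restart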
The problem is on FindBugs side. I suppose you're analyzing a large project that probably has many violations. Take a look at two threads in Sonar's mailing list having the same issue. There are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html
I know this is old, but I am just posting my answer anyway. I realized I was using the 32-bit JDK (version 8), and after uninstalling it and then installing the 64-bit JDK (version 12), the problem disappeared.
Hi, I am encountering the following error when deploying a project from my JDeveloper Studio.
[scac] Error occurred during initialization of VM
[scac] Could not reserve enough space for object heap
Can anyone advise on how to resolve this issue?
In case you have enough free RAM on your computer:
go to the jdev.conf file (~/Oracle/middleware/jdeveloper/jdev/bin) and give the JVM more memory there.
I haven't checked, but you could add:
AddVMOption -XX:MaxHeapSize=512m
or whatever you want
more help here
Look in \jdeveloper\bin\ant-sca-compile.xml.
Change the Xmx value of the line indicated by JDev. Your system can't reserve enough memory.
Reducing the -Xmx value in \jdeveloper\bin\ant-sca-compile.xml worked for me:
<target name="scac" description="Compile and validate a composite">
    <scac input="${scac.input}" outXml="${scac.output}" error="${scac.error}" appHome="${scac.application.home}" failonerror="true" displayLevel="${scac.displayLevel}">
        <jvmarg value="-Xms128m"/>
        <!-- <jvmarg value="-Xmx1024m"/> -->
        <jvmarg value="-Xmx700m"/>
        <jvmarg value="-XX:PermSize=32m"/>
        <jvmarg value="-XX:MaxPermSize=256m"/>
        <!-- jvmarg value="-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"/ -->
    </scac>
</target>
If you change jdev.conf you may experience the error:
Unable to create instance of the Virtual Java Machine Located at Path:
C:\Program Files(x86)\Java\jdk1.6.0_45\jre\bin\client\jvm.dll