System info
Software info
OS:
Java: OpenJDK 12.0.2
Gradle: 5.6.2
The issue
Building a Gradle multi-project build with parallel execution enabled consumes almost all of the available CPU time; the PC is not usable during the build.
Steps to reproduce
1. git clone --recursive https://github.com/vividus-framework/vividus.git
2. cd vividus
3. ./gradlew build
In your gradle.properties file (or GRADLE_OPTS environment variable), try setting org.gradle.priority=low. On my machine it has a noticeable effect with parallel enabled, but I've also heard from some of my co-workers with older machines that this setting didn't help them too much.
You can also experiment with setting org.gradle.workers.max. It defaults to the number of available (logical) processors, so try setting it to that number minus one.
If it still stops you from interacting with your computer during the build, you should probably just disable parallel execution and let Gradle work on a single processor.
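For reference, a minimal gradle.properties sketch combining these suggestions (the worker count of 3 is just an example; pick a value that suits your machine):

# gradle.properties (project root or GRADLE_USER_HOME)
# Run the daemon and its workers at a lower OS priority
org.gradle.priority=low
# Cap the number of worker processes (example for a 4-core machine)
org.gradle.workers.max=3
# Last resort: disable parallel project execution entirely
# org.gradle.parallel=false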
Related
We were seeing mysterious failures in our CI environment when running tests of a Kotlin code base.
gradle test compiled the code and the tests just fine. The tests ran and all appeared to pass. But then Gradle exited with code 137 (indicating it was killed with SIGKILL) and the CI system thus thought the build had failed.
Limiting the size of the JVM for Gradle didn't help. Nor did the --no-daemon and --max-workers options.
This is using Kotlin 1.2.40 and Gradle 4.3 on the Java 8 JVM.
The culprit in this case turned out to be the Kotlin compiler.
By default, the Kotlin compiler spawns a daemon in the background so that subsequent compilation jobs are faster. For a Kotlin code base of nontrivial size, this process can end up eating significant memory.
Its presence was causing Gradle to hit the memory limit on the CI container, and the Linux OOM killer was killing the Gradle process.
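If you suspect the same thing, one way to confirm it (assuming you have shell access to the CI host or container) is to check the kernel log for traces of the OOM killer:

dmesg | grep -i "killed process"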
The solution: Tell the Kotlin compiler not to spawn the background process. This turns out to be a simple matter of setting an environment variable:
GRADLE_OPTS=-Dkotlin.compiler.execution.strategy=in-process
With that variable in place, Kotlin compilation runs inline in the Gradle JVM, and its data can thus be garbage-collected when Gradle moves on to running the tests.
It would also work to pass the option to the gradle command rather than setting it in the environment.
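For example, either of the following should have the same effect (test is just the task from the build above):

GRADLE_OPTS=-Dkotlin.compiler.execution.strategy=in-process gradle test
gradle test -Dkotlin.compiler.execution.strategy=in-process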
Every build takes an extra 3-4 seconds, pausing immediately after the log output prints the following.
[LIFECYCLE] [org.jetbrains.kotlin.gradle.plugin.KotlinGradleBuildServices] Forcing System.gc()
Why is it "forcing" this? How do I avoid this and speed up my build?
I've looked into this, and it is a consequence of having Gradle's debug-level logging enabled (e.g. gradle --debug assemble).
Run Gradle without debug logging enabled (e.g. gradle --info assemble) and this should no longer occur.
References: libraries/tools/kotlin-gradle-plugin/src/main/kotlin/org/jetbrains/kotlin/gradle/plugin/KotlinGradleBuildServices.kt
The Kotlin Gradle plugin calls System.gc() before and after a build only when debug logging is enabled (i.e. Gradle is run with the -d or --debug command-line argument).
Users do not normally run Gradle with debug logging enabled because it is extremely noisy and slow, so forcing a GC is a relatively minor issue.
Historically this behaviour was added to test for memory leaks when the Gradle daemon is enabled. The idea was to log the difference in used memory before and after a build, run a few builds consecutively in a test, and assert that the difference does not exceed a threshold.
I think calling System.gc should be avoided unless the test KotlinGradleIT#testKotlinOnlyDaemonMemory is running, so I've created an issue in the Kotlin bug tracker: https://youtrack.jetbrains.com/issue/KT-17960
I've been trying to fiddle with SonarQube and am now learning about the incremental mode. My understanding is that it should analyze only the changed files.
So my first test is simply to run SonarQube twice on our project without any change. I run SonarQube (5.1.2) installed locally on a Windows 7 64-bit machine with an SSD drive and an i7 CPU. We use Java 1.7 and Maven 3.3.3. Our project is fairly big (~570 Maven modules), most of them Java code. After running the JaCoCo prepare-agent goal along with my unit tests, I understand it's time to run sonar:sonar and create a report.
So what I try is:
mvn sonar:sonar -Dsonar.analysis.mode=incremental -Dsonar.host.url=http://localhost:9000 -Dsonar.java.coveragePlugin=jacoco
This runs for 20 minutes. OK, now I run the same command again without making any change, and it still takes the same 20 minutes.
So my question is: can someone explain to me how to use the incremental mode correctly? I have a hard time understanding what I'm doing wrong; in my understanding the second run should be much faster, otherwise I don't see any advantage over the preview mode here.
Thanks Mark
The incremental mode will analyze only the files changed since the latest "regular" analysis on the server. So in your case you should first run a normal (now called "publish") analysis:
mvn sonar:sonar -Dsonar.java.coveragePlugin=jacoco
Then you can use the incremental mode:
mvn sonar:sonar -Dsonar.analysis.mode=incremental -Dsonar.java.coveragePlugin=jacoco
mvn clean install produces the following output and then stalls until I kill the process. This only happens as part of a much larger build on a Bamboo server; when I build locally, the build doesn't stall.
[INFO] --- gwt-maven-plugin:2.4.0:compile (default) @ alerts ---
[WARNING] Don't declare gwt-dev as a project dependency. This may introduce complex dependency conflicts
[INFO] Compiling module com....alerts.Alerter
What can I do to gain better insight into the hang?
What are the likely causes of the hang?
This is likely a memory problem. If you are bound by 32-bit (x86) limits, as I am in this case, you can use the localWorkers setting (gwt.compiler.localWorkers below) to reduce the memory footprint. Fewer workers means less parallelism and a longer build, but also lower peak memory. Increasing memory may prevent the problem. Increasing the logLevel may expose the nature of the stall.
The following got me past my stall.
<gwt.compiler.localWorkers>1</gwt.compiler.localWorkers>
<gwt.logLevel>TRACE</gwt.logLevel>
<gwt.extraJvmArgs>-Xmx1024m -Djava.io.tmpdir=target</gwt.extraJvmArgs>
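These are user properties of the gwt-maven-plugin, so presumably they belong in the <properties> section of the pom; a minimal sketch (values as above):

<properties>
  <gwt.compiler.localWorkers>1</gwt.compiler.localWorkers>
  <gwt.logLevel>TRACE</gwt.logLevel>
  <gwt.extraJvmArgs>-Xmx1024m -Djava.io.tmpdir=target</gwt.extraJvmArgs>
</properties>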
Suggestions for improving reproducibility
- Compare environment settings between the build server and your local machine so you can reproduce the problem more reliably; JAVA_OPTS and MAVEN_OPTS may be important
- Ensure you use identical build commands in both contexts
- Try running with -pl :module-artifact-name on the build server to reduce the time to failure (see the example after this list)
- mvn -X will provide some additional debug output
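For example (using the alerts module that appears in the log above as an illustrative artifactId; add -am if you also need its dependencies built):

mvn -X clean install -pl :alerts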
Scenario
While using the Maven Ant Task artifact:deploy, I'm encountering the error java.lang.OutOfMemoryError: Java heap space.
I'm only getting the error if the size of the file being deployed is greater than 25 MB. My artifacts are not greater than 50 MB in size.
What could the reason be? And, what can I do to fix it?
Code snippet
<artifact:deploy file="@{app.name}.jar">
<pom file="@{pom.file}"/>
<remoteRepository url="http://xxx.com:xxx/xxx-webapp/content/repositories/xxx-releases/">
<authentication username="xxx" password="xxx" />
</remoteRepository>
</artifact:deploy>
Existing solutions
Most online results indicate that it's something to do with the JVM's default heap size and that it can be fixed by setting the appropriate environment variables.
However, I want the Ant scripts to run on any computer and not to depend on environment variables.
Is there a way to configure these settings in the Ant scripts or the POM file?
EDIT
The install-provider task (http://maven.apache.org/ant-tasks/examples/install-deploy.html) seems to work for some people. I keep getting download errors when I use it.
Answer
It turns out that I'm not getting the Java heap error when I run my Maven Ant task on a different machine (which probably has more memory allocated to the JVM heap). Hence, I haven't attempted the solution mentioned by @Attila, though it seems to be going in the right direction.
Once Ant is running, you cannot change the heap size of the JVM running Ant. So your only option is to run the task that consumes a large amount of memory in a separate JVM, specifying enough heap space. Note that this relies on the task allowing you to fork a new JVM to execute it.
Update: I could not find a way to make the Maven (deploy) task fork, but this page describes how you can define a macro to run Maven using the java task (note that this relies on Maven being installed and properly configured on the machine); see the "Using the Java Task" section.
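A rough sketch of that approach, adapted from the linked page (which targets the Maven 2 launcher; newer Maven versions use a different boot class, and maven.home is assumed to point at a local Maven install). The essential parts are fork="true" and the larger heap:

<property environment="env"/>
<property name="maven.home" value="${env.M2_HOME}"/>

<macrodef name="maven">
  <attribute name="goal"/>
  <attribute name="basedir" default="${basedir}"/>
  <sequential>
    <!-- Fork a separate JVM for Maven so it gets its own, larger heap -->
    <java classname="org.codehaus.classworlds.Launcher" fork="true"
          dir="@{basedir}" failonerror="true" maxmemory="1024m">
      <classpath>
        <fileset dir="${maven.home}/boot">
          <include name="*.jar"/>
        </fileset>
      </classpath>
      <sysproperty key="classworlds.conf" value="${maven.home}/bin/m2.conf"/>
      <sysproperty key="maven.home" value="${maven.home}"/>
      <arg line="--batch-mode @{goal}"/>
    </java>
  </sequential>
</macrodef>

<!-- Usage: run the deploy goal in its own 1 GB JVM -->
<maven goal="deploy"/>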
Please try increasing the JVM memory, e.g. -Xmx512m.
If you are using Ant, you can add it to the ANT_OPTS environment variable: ANT_OPTS="-Xmx512m"
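For example, in a Unix-like shell it could be set just before invoking Ant (the target name here is illustrative):

export ANT_OPTS=-Xmx512m
ant deploy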