I have a Gradle build which consists of nearly thirty included builds. Recently I encountered a hang with this build. I took a thread dump and saw that there were more than five hundred waiting threads. Their names make me think that Gradle creates a worker thread per CPU core for each included build, so with 30 included builds and 12 cores there would be 30x12 worker threads.
I tried to use the org.gradle.workers.max property to resolve the problem, but it didn't work.
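For reference, this is roughly what I tried in gradle.properties (the value is illustrative):

```properties
# gradle.properties: attempt to cap the shared worker pool (value illustrative)
org.gradle.workers.max=12
```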
Is there a best practice for using Gradle included builds the right way?
Related
I have a multi-module Maven project that consists of ~30 modules. Four of these modules contain a large number of JUnit tests in separate test cases, so when I run the tests, all modules except these execute quickly, but each of these four, run separately, can take several minutes. What I want to achieve is:
When the project is compiled, I want to use as many threads as there are CPUs available (-T1C).
When tests are running, I want to use as many threads as there are CPUs, and I want to run test cases across all modules as parallel as possible, but with no more threads than CPUs. If I just run the tests with -T1C, Maven assigns a thread per module, so the tests inside the long-running modules do not run in parallel. But if I set forkCount=1C in the long-running modules, each of them occupies as many threads as there are CPUs, so when all of them run at once, the CPUs are oversubscribed. I know that I can set something like forkCount=0.5C, but then I would have to adjust these numbers manually (see the pom.xml sketch below).
So, is there a way to specify a build-level limit while keeping everything else as parallel as possible?
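For reference, the per-module fork setting looks roughly like this in each long-running module's pom.xml (0.5C is the value I would have to tune by hand):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- "1C" = one forked test JVM per core; with several such modules
         built in parallel under -T1C this oversubscribes the CPUs -->
    <forkCount>0.5C</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```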
Thanks.
I have a large project with quite a few Gradle modules (around 90+), and we're seeing some strange build behaviour, especially after migrating to a Java 11 build (from Java 8).
Specifically our test execution times are... wonky.
Running the test target for the entire project, this module takes 4min 55s to execute the tests.
Running the test target for this single module on its own, it takes merely 1min 15s for the exact same tests with the same results.
Note: before both of these Gradle runs, a clean of the entire project was executed. These times are reproducible with minor differences.
As a reference: running the tests for the module in IntelliJ I'm at 1min 18s.
I have played around with maxParallelForks and forkEvery, but I did not notice any significant changes.
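For reference, this is roughly how I configured those knobs (a sketch in the Groovy DSL; the values are just examples I tried):

```groovy
// build.gradle: applied to all Test tasks; values are examples
tasks.withType(Test).configureEach {
    // half the available cores, but at least one fork
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    forkEvery = 100  // recycle each test JVM after 100 test classes
}
```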
I'm at a loss as to what I could try and would appreciate any ideas and suggestions.
I have a Java microservices project with 37 different modules (and growing). Whenever my versions change, IntelliJ reimports the Gradle build info and rebuilds all of them, including downloading the dependencies. This takes almost TWO hours! What can be done to speed this up?
I gave the compiler 3 GB of RAM hoping that would help, but the difference is minimal. Are there any other settings I can tweak?
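For reference, the memory setting went into gradle.properties like this (a sketch, assuming the 3 GB is given to the Gradle daemon via org.gradle.jvmargs):

```properties
# gradle.properties: heap for the Gradle daemon (what I set to 3g)
org.gradle.jvmargs=-Xmx3g
```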
Just wanted to add that I'm on a 64-bit, 6-core box, so I should certainly be able to build faster than this.
I am pretty new to GitLab. I've set up pipelines and stages via .gitlab-ci.yml and they seem to work, but I've just discovered that some of my assumptions were wrong.
I have a large, multi-project Gradle setup, producing many artifacts. We are in the process of setting up GitLab and I really wanted to make use of the GitLab UI to show the progress of the build. The idea was to nicely indicate to developers and reviewers how far the build got before it failed, something like:
1. Got its dependencies
2. Compiled main code, YAY!
3. Compiled test code, yippie!
4. Passed unit tests, we rock!
5. Passed integration tests, awesome!
6. Passed various static code analysis tests. We're almost good to go!
7. Generated documentation - can we ship it?
I've set up each of these as individual jobs in their individual stages, (incorrectly) assuming that Gradle would be able to do its incremental build magic and that this would be almost as quick as running it all as a single step.
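A simplified sketch of what that looks like (job and stage names are illustrative):

```yaml
# .gitlab-ci.yml: one job per stage (names and tasks illustrative)
stages: [compile, compile-test, unit-test, integration-test, analysis, docs]

compile:
  stage: compile
  script: ./gradlew compileJava

unit-test:
  stage: unit-test
  script: ./gradlew test
```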
Then I noticed that each stage causes what seems to be a Docker container reinitialization. This also means that the Gradle daemon has to restart and has no knowledge of the past, so it has to fetch all the dependencies again. I think I could cache these, but it seems they would be cached separately for each job. My assumption that serialized jobs would execute inside the same container instance was proven wrong: because their predecessors' output isn't available to them, subsequent jobs generally have to repeat what the jobs before them already did, significantly increasing the build time.
I think I understand that I could declare artifacts for each job and make them available to dependent jobs that way, but that does not eliminate all of the overhead and adds some of its own: copying the artifacts out to "somewhere" and then back, while also hitting the limits on how much I can pass along. In fact, my unit test job is now failing and I can't see why because of the log size limit, but it seems to have to do only with artifacts (the report), as the unit tests pass nicely when I run them outside GitLab.
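For reference, declaring artifacts and a cache looks roughly like this (a sketch; paths are illustrative):

```yaml
compile:
  stage: compile
  variables:
    GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"  # make the dep cache cacheable
  script: ./gradlew classes
  artifacts:
    paths:
      - build/    # copied out to GitLab and back in for dependent jobs
  cache:
    paths:
      - .gradle/  # dependency cache; still restored separately per job
```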
I also think I understand that the idea behind jobs is to be able to run them in parallel on separate runners. That is a very fine feature, and I probably can use it for the later stages, but not for (1)-(5), as they rely heavily on the output of at least some of the previous jobs.
I could merge (1)-(5) into a single job (and a single stage) for performance reasons, but then there would be no indication in the UI (that I know of) of how far the build got, and the logs would be even longer and nastier to figure out, even if the log limit were lifted.
Do any of you have suggestions as to what I am missing / should do here?
After further research, I found that this is not possible (yet). Jobs are meant to be units of (potentially) concurrent execution and can only communicate by copying artifacts, obviously.
What I would be interested in is steps smaller than jobs that would be indicated in the UI and that could post their artifacts when they (the steps) complete, but before the entire job is done. This would eliminate the 1-2 minutes of job startup overhead that I am facing now.
I have a Maven project on Jenkins that runs tests. It usually takes 18 minutes to run more than 1000 tests, but now my build with 500 tests has already lasted an hour and it hasn't finished yet.
Does anyone have a suggestion?
Thanks.
It is hard to guess what is causing the delay based on the question. In general, you can improve the build time by launching it in parallel.
You can launch Maven with the -T 1C flag. This attempts a parallel build with at most one thread per core. Based on the dependency tree, Maven can determine which modules can be built in parallel.
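For example (the goals are illustrative):

```sh
# one build thread per available CPU core
mvn -T 1C clean install
```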
Check "Maven: how to do parallel builds?" for more info.