Continue where an interrupted task left off - Gradle

I have an application for which the testing is quite extensive. Essentially, we must run the application a few hundred thousand times on different inputs. So I have built a custom Gradle task which manages forking processes and reaping finished processes. Each of the thousands of test runs generates a file that goes in a results directory. Full testing can take about a week when distributed across 10 cluster nodes.
The issue is, if I need to stop the testing for whatever reason, there is currently no way for me to start back up where I left off. Gradle's incremental build and caching features (to my understanding) really only work when tasks finish, and Gradle will just rerun the entire task from scratch if the previous invocation was interrupted (ctrl-c).
I could build in some detection of the results files and only rerun segments for which there is no results file. However, this will not work properly when the application is rebuilt, and then testing legitimately must start from scratch.
So how can I reliably detect which testing segments are up to date when the previous task invocation was interrupted?

Annotated tasks
For any Gradle task, if its output files exist and its inputs (including predecessor tasks) are all up-to-date, Gradle will treat that task as up-to-date and not run it again. You tell Gradle about inputs and outputs by annotating properties of the class you write to define the task.
You can make use of this by breaking your custom Gradle testing task into a number of smaller test tasks, and having each of those task definitions declare annotated outputs. The per-segment test reports are probably the most suitable outputs. Test tasks that already have a report will then not have to be re-run if you stop the build halfway.
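A minimal sketch of what that could look like in the Gradle Kotlin DSL is below. The task class name, the segment count, the `jar` dependency and all file paths are illustrative assumptions, not taken from your build:

```kotlin
// build.gradle.kts -- a sketch, not a drop-in replacement for your custom task.
abstract class RunTestSegment : DefaultTask() {

    // The application under test: when it is rebuilt, this input changes and
    // every segment becomes out of date again, which is the behaviour you want.
    @get:InputFile
    abstract val applicationJar: RegularFileProperty

    // The slice of test inputs this segment is responsible for.
    @get:InputFiles
    abstract val testInputs: ConfigurableFileCollection

    // The per-segment results directory. If it exists and the inputs above are
    // unchanged, Gradle marks the task UP-TO-DATE and skips it.
    @get:OutputDirectory
    abstract val resultsDir: DirectoryProperty

    @TaskAction
    fun runSegment() {
        val outDir = resultsDir.get().asFile
        outDir.mkdirs()
        testInputs.files.forEach { input ->
            // Placeholder for the real fork-and-reap logic: run the application
            // on `input` and write its result file into the results directory.
            outDir.resolve("${input.name}.result").writeText("ran ${input.name}")
        }
    }
}

// Register many small segment tasks plus one aggregate task that depends on them all.
val segmentTasks = (1..100).map { i ->
    tasks.register<RunTestSegment>("testSegment$i") {
        applicationJar.set(layout.buildDirectory.file("libs/app.jar"))      // assumption
        testInputs.from(layout.projectDirectory.dir("test-inputs/segment-$i"))
        resultsDir.set(layout.buildDirectory.dir("test-results/segment-$i"))
        dependsOn("jar")  // stand-in for whatever task actually produces the application
    }
}

tasks.register("runAllSegments") {
    dependsOn(segmentTasks)
}
```

Stopping the build and re-running `runAllSegments` then only executes the segments whose results directories are missing or whose inputs changed.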
A whole application rebuild will always need all tests to be re-run
However, if your whole application is rebuilt then those test tasks will no longer be up-to-date as their predecessor build tasks will not be up-to-date. This makes sense of course: a new application build needs its tests to be run again to check it still works as intended.
Multimodule builds may mean only part of an application needs rebuilding
It may be that there are parts of the application that are not being rebuilt, and test tasks that depend solely on those intact parts. If every predecessor task of a previously completed test task is still up-to-date, then Gradle will not re-run those tests either.
This is more likely to be the case for more of your test tasks if the application, where appropriate, is separated into different Gradle subprojects in a multimodule build. Each subproject then has its own chain of tasks, which may not need to be re-run if only part of the application's code or other inputs has changed.
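As a rough illustration (the module names here are invented for the example, not taken from your project), a test subproject that depends only on :parser keeps its whole task chain up-to-date when a change is confined to :core:

```kotlin
// settings.gradle.kts -- illustrative module names
rootProject.name = "my-app"
include(":core", ":parser", ":parser-tests")
```

```kotlin
// parser-tests/build.gradle.kts
plugins {
    java
}

dependencies {
    // Only :parser is an input to these tests, so rebuilding :core alone
    // does not invalidate the test tasks in this subproject.
    testImplementation(project(":parser"))
}
```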

Related

Is it possible to run two independent gradle tasks from same subproject in parallel?

In the project I work on (based on Gradle) there is one very big module (Gradle subproject).
During the CI build, two tasks from this subproject are executed sequentially, and this leads to significant execution time.
The project uses org.gradle.parallel=true, but when I created a simple project to check how independent tasks from the same subproject are executed with this property, I found out that the tasks are executed sequentially.
My question:
Is it possible to execute two independent tasks from the same Gradle subproject in parallel to shorten their execution time? (Assuming that they don't produce output in the same place and don't use any shared state.)
From the documentation (see Parallel execution):
Most builds consist of more than one project and some of those projects are usually independent of one another. Yet Gradle will only run one task at a time by default, regardless of the project structure (this will be improved soon). By using the --parallel switch, you can force Gradle to execute tasks in parallel as long as those tasks are in different projects.
I think the most important part here is "as long as those tasks are in different projects": if your two long-running tasks belong to the same subproject, you won't be able to make them run in parallel (not in the current Gradle version).
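The practical workaround that follows from the quoted documentation is to move the two long-running tasks into separate subprojects. A minimal sketch (module and task names are made up for illustration):

```kotlin
// settings.gradle.kts
include(":module-a", ":module-b")

// module-a/build.gradle.kts
tasks.register("longTaskA") {
    doLast { println("long work A") }
}

// module-b/build.gradle.kts
tasks.register("longTaskB") {
    doLast { println("long work B") }
}

// With org.gradle.parallel=true (or the --parallel flag),
//   gradle longTaskA longTaskB
// can execute the two tasks concurrently, because they now live in different projects.
```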

How to set up GitLab CI to run multiple build steps efficiently while indicating at which step it is?

I am pretty new to GitLab. I've set up pipelines and stages via .gitlab-ci.yml and they seem to work but I've just discovered that some of my assumptions were wrong.
I have a large, multi-project Gradle setup, producing many artifacts. We are in the process of setting up GitLab and I really wanted to make use of the GitLab UI to show the progress of the build. The idea was to nicely indicate to developers and reviewers how far the build got before it failed, something like:
Got its dependencies
Compiled main code, YAY!
Compiled test code, yippie!
Passed unit tests, we rock!
Passed integration tests, awesome!
Passed various static code analysis tests. We're almost good to go!
Generated documentation - can we ship it?
I've set up each of these as individual jobs in their own stages, (incorrectly) assuming that Gradle would be able to do its incremental build magic and that this would be almost as quick as running it all as a single step.
Then I noticed that each stage causes what seems to be a Docker container reinitialization. This also means that the Gradle daemon has to restart and has no knowledge of the past, so it has to fetch all the dependencies again. I think I could cache these, but it seems they would be cached separately for each job. My assumption that serialized jobs would execute inside the same container instance was proven wrong: because the output of earlier jobs isn't available to them, each of the subsequent jobs generally has to repeat what the jobs before it already did, significantly increasing the build time.
I think I understand that I could declare artifacts of each job and make them available to dependent jobs that way, but that does not eliminate all of the overhead and adds some of its own - copying the artifacts out to "somewhere" and then back, while also hitting the limits of how much I can pass on. In fact, my unit test job is now failing and I can't see why because of the log size limit, but it seems it has to do only with artifacts (the report), as the unit tests pass nicely when I run them outside GitLab.
I also think I understand that the idea behind jobs was to be able to run them in parallel on separate runners. That is a very fine feature and I probably can use them for later stages, but not for (1)-(5) as they heavily rely on a lot of output of at least some of the previous jobs.
I could merge (1)-(5) into a single job (and a single stage) for performance reasons, but then there is no indication in the UI (that I know of) as to how far the build got ... and the logs would be even longer and nastier to figure out even if the log limit got lifted.
Do any of you have any suggestions as to what I am missing / should do here?
After further research, I found that this is not possible (yet). Jobs are meant to be units of (potentially) concurrent execution and can only communicate by copying artifacts, obviously.
What I would be interested in is steps lesser than jobs that would be indicated in the UI and that could post their artifacts when they (steps) complete but before the entire job is done. This would eliminate 1-2 minutes of job startup overhead that I am facing now.

Trigger Snapshot just before build in TeamCity

We need to set up a snapshot dependency in TeamCity that runs just before the dependent build starts.
Currently we have the problem that the snapshot may be taken 30 minutes before the build starts. (The build starts later due to other factors.) Ideally we want the snapshot to be taken and then the dependent build to start immediately.
Is this possible in TeamCity? Is there another way to achieve a similar thing?
Reading between the lines, I'm guessing the problem here is that the dependent build isn't always run immediately when its snapshot dependency finishes because an agent isn't always available. If that's the case, then my suggestion would be to merge the two builds into a single build configuration, as that will ensure that all the steps occur with no delay in between.

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and only builds the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and tells about the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only do the job if the core feature is updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question: I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, F1, which is built on 2 dependent features DF1.1 and DF1.2, each with their own build jobs.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
