A while back, Gradle was known to cache Java modules during incremental builds, meaning that with modules A, B, and C, a change made only to A would require rebuilding A alone.
However, reading the current documentation, I see no mention of module caching; the mechanism seems to be more general than that.
Can anyone confirm whether module caching is a thing of the past? And do modules still help reduce times with incremental builds nowadays?
Yes, tasks provided by Gradle itself (including by built-in plugins like java) generally try to be incremental, i.e., tasks with up-to-date outputs on disk won't re-run. See also this doc section, which says:
Most tasks provided by Gradle take part in incremental build because they have been defined that way.
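To make this concrete, here is a minimal sketch (not from the question; the task and file names are invented) of why Gradle tasks can be incremental: declaring a task's inputs and outputs lets Gradle skip it when nothing relevant has changed.

    // build.gradle -- hypothetical task that takes part in incremental build.
    // Because its input and output files are declared, Gradle skips the task
    // when neither has changed since the last run.
    task stampVersion {
        def versionFile = file('version.txt')                      // declared input
        def stampFile = file("$buildDir/stamp/version.properties") // declared output
        inputs.file versionFile
        outputs.file stampFile
        doLast {
            stampFile.parentFile.mkdirs()
            stampFile.text = "version=${versionFile.text.trim()}\n"
        }
    }

Running gradle stampVersion twice in a row prints ":stampVersion UP-TO-DATE" the second time; touching version.txt (or deleting the output) makes it run again. Module-level "caching" falls out of the same mechanism: in a multi-module build, the tasks of unchanged modules are reported UP-TO-DATE and skipped.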
What are some good patterns for using Go in Concourse CI tasks? For example, should I build locally with all dependencies and check cross-compiled binaries into the repo? Should I build on Concourse before running the task?
Examples of what people do here would be great. Public repos of pipelines/tasks even better.
The way I see it there are currently 3 options for handling go builds:
Use vendoring
Explicitly declare the dependencies as concourse resources
Maintain a docker image with the required dependencies already included
All options have pros and cons. The first option is currently my favorite, since responsibility for handling dependencies rests with the project maintainers and there is a very clear way to see which versions/revisions of the dependencies are being used (just check the vendoring tool's config), but it does force you to keep all dependency code in the project's repo.
The second option follows the Go "philosophy" of always tracking master, but it may lead to slower builds (Concourse will need to check every single resource regularly) and to sudden breakage caused by changes in dependencies.
The third option lets you implicitly pin the revisions of the dependencies in the Docker image; in that respect it is similar to the first. However, it requires maintaining Docker images (not necessarily one per project, but possibly more than one, depending on how many projects use this option and whether there are conflicting dependency versions between them).
I'm migrating our build system over to TeamCity and, because we have quite long build times, I'm trying to make good use of parallelism in build configurations.
If two configs can run in parallel, they are obviously not dependent on each other. However, there are cases where, if two parallel builds are serialised (due to a lack of available agents), I would prefer one to run ahead of the other (for example, one is a set of regression tests whose results I'd like to see before a packaging step runs; but if resources are available, I'd like them both to run concurrently).
I can't find an explicit way to specify the ordering of logically independent builds. However, I've observed that the build order tends to be lexicographical, although I'm not sure whether that's based on the config name or the ID.
I could experiment but would prefer a more definite answer, if possible.
This used to be available as a plugin, but has since been bundled into the product.
Go to the build queue and click on Configure Build Priorities
If you add a priority class with a high number, you can then associate it with the build you'd like to run first.
Managing Build Priorities - TeamCity documentation
Hope this helps
I've tried building it with:
parallel (helps a bit)
daemon=true
preDexLibraries true/false (no big difference)
incremental true
offline
It takes >40s EVERY time. It doesn't matter if I change anything or not. Building it after "gradle clean" takes >50s.
I hope that I'm doing something wrong. Eclipse/ant could build consecutive builds WITH CHANGES in <10s.
Any help on how to bring this below 10s would be appreciated.
Yes, we have a lot of modules, some jar libraries, some Maven dependencies. I still don't get why it would take >40s for two consecutive builds with no changes.
Gradle used: 2.2.1
Android Studio: not really relevant; I usually build only with Gradle.
EDIT: adding some profiling logs.
:app:dexProjDebug 22.541s
:app:shrinkProjDebugMultiDexComponents 4.279s
:app:compileProjDebugJava 3.478s
:app:packageProjDebug 2.591s
:app:processProjDebugResources 2.590s
:app:packageAllProjDebugClassesForMultiDex 2.536s
:app:createProjDebugMainDexClassList 2.126s
You need to properly define inputs and outputs, even for library projects with source code (see the Gradle documentation). Do you get an UP-TO-DATE message when you rebuild from the second time onwards?
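As a sketch of that advice (the task and file names here are hypothetical), any ad-hoc task in a library module should declare what it reads and writes; otherwise Gradle has no way to know it can be skipped and re-runs it on every build:

    // build.gradle of a library module -- hypothetical code-generation task.
    // Without the inputs/outputs declarations Gradle would run it on every
    // build; with them, an unchanged second build reports it as UP-TO-DATE.
    task generateBuildInfo {
        inputs.file file('buildinfo-template.txt')
        outputs.dir file("$buildDir/generated/buildinfo")
        doLast {
            copy {
                from 'buildinfo-template.txt'
                into "$buildDir/generated/buildinfo"
            }
        }
    }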
If you have no changes to the other modules, you can build the app itself using gradlew :app:build. This will eliminate the time Gradle spends on building the other modules. You can build any module separately like this as well, using gradlew :moduledirectoryname:build. If you need to build one module and the app, you can execute both tasks at once to avoid going through Gradle's configuration stage twice: gradlew :app:build :moduledirectoryname:build
If possible, update to Gradle 2.4. It's significantly faster (the claimed improvement is 20-40%).
One of the most significant build time reducers is the preDex task. Check whether preDex is running in your build. It increases build time the first time you run it, but dramatically reduces build times in subsequent builds, as most of your SDKs and libraries will already be dexed. Note however that it won't run under some conditions, e.g. if you use multidex.
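For reference, here is a sketch of where these knobs typically live in an Android Gradle project of that era; the module configuration below is illustrative, not taken from the question, and org.gradle.parallel/org.gradle.daemon belong in gradle.properties rather than in the build script:

    // build.gradle (app module) -- illustrative values; assumes the Android
    // Gradle plugin is already on the buildscript classpath.
    apply plugin: 'com.android.application'

    android {
        compileSdkVersion 22
        buildToolsVersion '22.0.1'

        dexOptions {
            preDexLibraries = true // pre-dex libraries so later builds reuse them
        }
    }

    // In gradle.properties (a separate file, not Groovy):
    //   org.gradle.parallel=true
    //   org.gradle.daemon=true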
By the way, Google is well aware of the build times with Gradle and Android Studio and is going to introduce significant improvements in upcoming releases, including Jack and Jill, which will allow compiling from source code directly to dex. You can already experiment with them; see here: http://tools.android.com/tech-docs/jackandjill.
Also worth watching this video from recent Google I/O: https://youtu.be/f7ihSQ44WO0?t=327
Download the Genymotion emulator. It is very fast and good for running Android apps, and it will save you a lot of time.
Here is the link: GenyMotionDownload
We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and only builds the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and tells about the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, its job can only run after the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update to them is possible)
if changed, build the feature:
1. build the dependent features, if necessary (i.e. check whether the corresponding jobs need to run)
2. build the feature itself
if the build is successful, start the feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update to them is possible)
if changed, build the feature:
1. build the dependent features, if necessary (i.e. check whether the corresponding jobs need to run)
2. build the feature itself
if the build is successful, start the feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you need multiple jobs:
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs; that allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, built on two dependent features, DF1.1 and DF1.2, each with its own build job.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
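As one hedged example of such custom put/get steps: if your Jenkins is new enough for Pipeline, the Copy Artifact plugin's copyArtifacts step can pull from several upstream jobs. The job names below follow the F1/DF1.1/DF1.2 example above, and the build script path is hypothetical.

    // Jenkinsfile for F1 -- a sketch, assuming DF1.1 and DF1.2 archive their
    // outputs with archiveArtifacts and the Copy Artifact plugin is installed.
    node {
        // Fetch artifacts from the latest successful dependent-feature builds.
        copyArtifacts projectName: 'DF1.1', selector: lastSuccessful(), target: 'deps/df1.1'
        copyArtifacts projectName: 'DF1.2', selector: lastSuccessful(), target: 'deps/df1.2'

        checkout scm          // F1's own sources
        sh './build.sh deps'  // hypothetical build step that consumes the artifacts

        // Publish F1's outputs so features depending on F1 can do the same.
        archiveArtifacts artifacts: 'dist/**'
    }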
Consider a Maven project with multiple interdependent modules: let's say, three jar modules A, B, and C, which are dependencies for a war module Z. I have a separate Hudson build for each of these modules, so that only modules that have changed are re-built.
My issue is that if I commit a changeset that changes both module A and module Z, Z may be built before A and fail, before A completes and triggers a rebuild of Z, which then passes. Allowing builds to fail regularly for reasons of build ordering rather than "real" failures desensitizes us to real failures; we end up ignoring builds that have legitimately broken because we are used to assuming they will eventually flip back.
I have been managing this through the use of quiet periods, blocking when upstream builds are running, etc. But in practice, my build has more modules than the example I've given, many of which take a while to build and test. I also have a small horde of diligent developers making frequent commits.
This means my jar modules are constantly building, only rarely leaving a gap for my war module(s) to build. So the war doesn't build very frequently, meaning it takes a long time to find out when we've broken it, and also takes longer to identify which change broke it.
Also, the constant running of builds means that if I commit a change that touches jars A and B, the war file Z may be built once for jar A (which builds quickly), and then again for jar B (which takes longer). This makes it hard to understand the results of a given commit.
I've considered using the join plugin, but this appears to require all of the modules to build every time. Since I actually have quite a few jar modules, I really don't want to have to build them all every time, I only want to build the ones that have changed for a given commit.
Are there any better ways to handle this?
Thanks
This is always a difficult problem (and I've re-written this answer more than once!)
In terms of a technical solution, you want something that waits until the builds of several different jobs are no longer running before it starts. If the requirement is difficult to quantify, it's going to be difficult to put in place. I'll be very interested to see what technical solutions are suggested in this thread.
I guess you have to look at why your jobs are being run, and how often. If there's any code in your WAR that requires unit testing, could you move it out into its own module? That way you can run only integration tests against the war every hour/30 minutes and not worry about where and when the individual modules are.
You may want to also look at what your modules contain. Do they ALL have to be modules? Can you perhaps reduce the fragmentation - it might help reduce the complexity of what you are attempting to schedule :)
I understand and applaud your efforts to get as much as possible tested as soon as possible, but sometimes a smoke test is all you can do if there's a constant churn of code.
The approach we're now looking at is combining some Maven modules into single Hudson jobs, rather than having a one to one mapping of modules to jobs.
Specifically, if a war module's dependencies are fairly small and quick to build on their own, building them in the same job with the war ensures that all of the code from a single commit is built together, at least for that given war file.
This does result in duplication - we have multiple war files using the same jars, so the jars are essentially rebuilt for every war, rather than once only. But in practice, the jars are quick to build, and this makes the war files conceptually cleaner.
This would be less attractive if the jars took a while to build and test, since the combined jars + war job would then be quite long, giving us long feedback loops for problems within the jars. Getting the balance right is important.
So my takeaway: don't assume that one Hudson/Jenkins job per module is the best way to go, and don't be afraid to rebuild the same code in multiple jobs.