What is the difference between allprojects and subprojects - gradle

On a multi-project Gradle build, can someone tell me what exactly the difference is between the "allprojects" section and the "subprojects" one? Just the parent directory? Does anyone use both? If so, do you have general rules that determine what typically goes in each one?
Related question: what is the difference between the two syntaxes (really for allprojects AND subprojects):
subprojects { ...
}
and
configure(subprojects) { ...
}
When would you use one over the other?

In a multi-project Gradle build, you have a rootProject and the subprojects. The combination of both is allprojects. The rootProject is where the build starts from. A common pattern is that the rootProject has no code and the subprojects are Java projects, in which case you apply the Java plugin only to the subprojects:
subprojects {
    apply plugin: 'java'
}
This would be equivalent to a Maven aggregate POM project that just builds the sub-modules.
Concerning the two syntaxes, they do exactly the same thing. The first one just looks better.

Adding to Ryan's answer, the configure method becomes important when you want to configure custom subsets of objects. For example configure([project(":foo"), project(":bar")]) { ... } or configure(tasks.matching { it.name.contains("foo") }) { ... }.
When to use allprojects vs. subprojects depends on the circumstances, and often you'll use both. For example, code-related plugins like the Java plugin are typically applied to subprojects, because in many builds the root project doesn't contain any code. The Eclipse and IDEA plugins, on the other hand, are typically applied to allprojects. If in doubt, look at examples and other builds and/or experiment. The general goal is to avoid irrelevant configuration. In that sense, subprojects is better than allprojects as long as it gives the expected results.
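To make that concrete, here is a small sketch of the typical split in a root build.gradle, a hedged illustration using only the standard plugin ids mentioned above:
// root build.gradle
allprojects {
    apply plugin: 'idea'   // IDE metadata is useful for the root project too
}

subprojects {
    apply plugin: 'java'   // only the subprojects contain code
}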

Related

How do I include a project dependency with a classifier

I have a sub-project with a classifier, test-fixtures, that I want to include in an adjacent sub-project:
dependencies {
    implementation(project(":producer"))
}
I assumed that would be a trivial task, but I can't seem to find out how. Is that possible?
I found that this is not a classifier in the Maven sense. The variant (as Gradle calls it) is created by the java-test-fixtures plugin; see the user guide section on testing.
It is used by importing the dependency like this:
dependencies {
    testFixtures(project(":producer"))
}
I had a small problem with this, since it takes away my freedom to select the configuration I need in order to include this jar in an EAR file. I found a way around it by adding the dependency manually, the same way the earlib(...) method does under the hood:
dependencies {
    add("earlib", testFixtures(project(":producer")))
}
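For reference, a minimal sketch of the producer side of this setup, assuming :producer is a plain Java project (only the two core plugin ids below come from Gradle itself; the directory note is just the plugin's convention):
// build.gradle of :producer
plugins {
    id 'java'
    id 'java-test-fixtures'   // adds the testFixtures source set and its outgoing variant
}
// fixture classes then live under src/testFixtures/java and can see the main sources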

Recommended way of plugin configuration with "tasks.withType(Foo) {...}" or "foo {...}"?

I'm learning Gradle and am confused to see two styles of how plugins are configured, depending on which tutorial/book I read:
checkstyle {
    ignoreFailures = true
}

tasks.withType(Checkstyle) {
    ignoreFailures = true
}
The first one looks cleaner but the second one would also apply to custom tasks that inherit from "Checkstyle". I suspect that the latter makes it easier for the IDE to guess the type and allow proper auto completion, is that right?
Is there a general trend towards one or the other that I should follow?
The two are slightly different.
checkstyle {...} configures the single object named "checkstyle" on the project. With the Checkstyle plugin applied, that is the checkstyle extension, which supplies the defaults for the Checkstyle tasks; it fails if nothing named "checkstyle" exists, i.e. if the plugin has not been applied.
tasks.withType(Checkstyle) {...} configures any tasks in the project of type Checkstyle. This could result in zero, one or multiple task instances being configured, including custom tasks of that type that you add yourself.
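As a rough sketch of how the two blocks are typically combined (this assumes the checkstyle plugin is applied; the property values are just examples):
apply plugin: 'checkstyle'

// Extension: project-wide defaults read by the Checkstyle plugin
checkstyle {
    toolVersion = '10.12.4'
    ignoreFailures = true
}

// Task rule: configures every task of type Checkstyle, including custom ones
tasks.withType(Checkstyle) {
    showViolations = true
}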

Logic in gradle/groovy scripting

I am new to the Groovy/Gradle world, and right now I am taking multiple online courses, but I am missing something here.
I'll give you an example. This creates a Jar file:
apply plugin: 'java' // 1. Apply the Java plugin to the project

sourceSets {
    main {
        java {
            srcDir 'java' // 3. Add 'java' directory as a source directory
        }
    }
}

jar {
    manifest {
        attributes 'Implementation-Version': '1.0' // 2. Add manifest attribute
    }
}
You can find this solution everywhere, but not a clear explanation.
Now, the docs say you can apply a plugin with Plugin.apply(T).
I assume that Plugin is an object instance, apply is its method, and T is an argument.
So what is apply plugin: 'java'?
There is also a thing called sourceSets. It may be a method that takes a Closure as an argument, or it may be a property that takes a Closure because of the default getter generated by Groovy.
I cannot tell, because in Groovy the equals sign is optional and parentheses are optional. --- VERY INNOVATIVE!!!
And finally there is a thing called main. I cannot find what it is, and I've been looking for it everywhere, even here: https://docs.gradle.org/current/dsl/org.gradle.api.tasks.SourceSet.html
And this 'main' thing contains a thing called java (which looks like an instance of SourceDirectorySet), which contains a method srcDir that takes a string as an argument.
Does it make sense to you?
How do I extract information from https://docs.gradle.org/current/dsl/ and use it in a build?
What am I missing here?
I'm on my mobile so it's difficult to explain all of the Gradle magic, but this should help you on your way.
There is an implicit Project instance in scope, and much of what you see in a Gradle script delegates to it.
E.g.
apply plugin: 'java'
Is equivalent to
getProject().apply([plugin: 'java'])
You can read more in the writing a custom plugin section, but there's a properties file that maps "java" to JavaPlugin.
When you apply the "java" plugin, it mixes in the JavaPluginConvention to the project (so you can now call getSourceSets() in Java, which can be shortened to sourceSets in Groovy).
One feature that gets a lot of use in Gradle's DSL is Groovy's methodMissing(...). So
sourceSets.main { ... }
will actually get dynamically delegated to
sourceSets.getByName('main').configure { ... }
Another feature which is a source of "magic" is that in groovy you can write
def a = new A()
a.foo = 'bar'
a.baz('x')
As
def a = new A()
a.with {
    foo = 'bar'
    baz('x')
}
Hopefully you can see that all property/method references in the closure are delegated to the "a" instance. This style of configuration helps gradle scripts remain succinct.
See the with method
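Putting all of that together, here is a rough hand-expanded form of the snippet from the question, with the delegation spelled out. This is only an illustration of the mechanics under the assumptions above, not something you would normally write:
// roughly what the original snippet resolves to once the Groovy sugar is expanded
project.apply(plugin: 'java')
project.sourceSets.getByName('main').java.srcDir('java')
project.tasks.getByName('jar').manifest.attributes('Implementation-Version': '1.0')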

Child build.gradle not evaluating at afterEvaluate event

I have a project with 21 subprojects. One of those subprojects has its own build.gradle file because it's a little obscure.
Now, I have a configuration setting in the build that is needed during the configuration phase. So, in my plugin I have:
project.afterEvaluate {
    if (project == project.rootProject) {
        project.allprojects.each { proj ->
            MetaExtension config = proj.getExtensions().getByType(MetaExtension)
            if (config.inventoryHash == null) {
                throw new ProjectHashNotSetException(proj.name)
            }
        }
    }
}
Now, if I have everything in one build.gradle file, it all works perfectly. But as soon as I break the 21st subproject out into its own build.gradle file, the value always comes back as null. Copy and paste it back into one build.gradle file and it works fine; with two build.gradle files, it fails.
Why would this be?
afterEvaluate is a method of the project, and its action runs after that project is evaluated, not after all projects are evaluated.
You need to restructure your logic. You could e.g. add an evaluationDependsOnChildren() or evaluationDependsOn '21st-project', then you don't need the afterEvaluate at all.
Actually, if the shown code is your actual code, it is a bit strange anyway. You add an afterEvaluate action to all projects, but only execute it if it is the root project. You could just as well add the action only to the root project and leave out the condition. Or, maybe even better, add an afterEvaluate to each project that checks exactly that project, like
project.afterEvaluate {
    MetaExtension config = extensions.getByType(MetaExtension)
    if (!config.inventoryHash) {
        throw new ProjectHashNotSetException(name)
    }
}
Then the evaluation order is not important, as each project is checked individually. This is also better for project decoupling if you e.g. ever want to use configure-on-demand, which would not be possible with your approach.
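As an alternative the answer above does not mention, Gradle also has a gradle.projectsEvaluated callback that fires once after every project has been configured; a minimal sketch, reusing MetaExtension and ProjectHashNotSetException from the question:
// runs once, after every project in the build has been evaluated
gradle.projectsEvaluated {
    rootProject.allprojects.each { proj ->
        MetaExtension config = proj.extensions.getByType(MetaExtension)
        if (!config.inventoryHash) {
            throw new ProjectHashNotSetException(proj.name)
        }
    }
}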

Referencing the outputs of a task in another project in Gradle

Consider the following setup
rootProject
|--projectA
|--projectB
There is a task taskB in projectB and I would like to reference the outputs of that task in a copy task taskA in projectA. For example, taskA might look something like:
task taskA(type: Copy) {
    dependsOn ':projectB:taskB'
    from ':projectB:taskB.outputs'
    into 'someFolder'
}
Of course the above example doesn't actually work. While it's okay to reference the task as :projectB:taskB as a dependency, :projectB:taskB.outputs doesn't seem to mean anything to Gradle. I've tried reading through the Gradle docs but didn't find anything that referenced what I'm trying to do.
The accepted answer used to be the only and recommended way to solve this problem. However, building up this kind of project dependency and reaching from one project into another is now discouraged by the Gradle team. Instead, projects should only interact with each other using publication variants. So the idiomatic (but sadly, at the moment, more verbose) way would be:
On the producing side (projectB), define a configuration that is not resolvable but consumable by other projects and that creates a new variant (called taskB-variant):
configurations.create("taskElements") {
    isCanBeResolved = false
    isCanBeConsumed = true
    attributes {
        attribute(Usage.USAGE_ATTRIBUTE, project.objects.named(Usage::class, "taskB-variant"))
    }
    outgoing.artifact(taskB.outputs)
}
On the consuming side (projectA), define a configuration that is resolvable but not consumable for the same variant, and declare a dependency on projectB:
val taskOutputs by configurations.creating {
    isCanBeResolved = true
    isCanBeConsumed = false
    attributes {
        attribute(Usage.USAGE_ATTRIBUTE, project.objects.named(Usage::class, "taskB-variant"))
    }
}
dependencies {
    taskOutputs(project(":projectB"))
}

tasks.register<Copy>("taskA") {
    from(taskOutputs)
    into("someFolder")
}
This way you decouple how the outputs are produced and the publication variant (called "taskB-variant") becomes the interface between projectA and projectB. So whenever you change the way the output is created you only need to refactor projectB but not projectA as long as you make sure the outputs end up in the taskElements configuration.
At the moment this is still pretty verbose but hopefully Gradle will get more powerful APIs to describe this kind of project relationships in the future.
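For completeness, a minimal hypothetical taskB on the producing side whose registered output the taskElements configuration above could expose (Groovy DSL; the output file name and contents are made up for illustration):
// build.gradle of :projectB -- illustrative producer task
task taskB {
    def outFile = layout.buildDirectory.file('taskB/output.txt')
    outputs.file(outFile)                  // register the file as a task output
    doLast {
        def f = outFile.get().asFile
        f.parentFile.mkdirs()
        f.text = 'generated by taskB'      // placeholder content
    }
}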
projectA build.gradle should be:
evaluationDependsOn(':projectB')

task taskA(type: Copy, dependsOn: ':projectB:taskB') {
    from tasks.getByPath(':projectB:taskB').outputs
    into 'someFolder'
}
