Copying files based on a pattern that is determined after the configuration phase has completed - gradle

I am currently assessing Gradle as an alternative to Maven for a homegrown, convention-based Ant+Ivy build. The Ant+Ivy build is designed to provide a standard environment for a wide range of J2SE apps, and it supports the following conventional layout for app config:
conf/
    fooPROD.properties
    fooUAT.properties
    bar.properties
    UK/
        bazPROD.properties
        bazUAT.properties
If I choose to do a build for UAT then I get
conf/
    foo.properties
    bar.properties
    UK/
        baz.properties
i.e. it copies the files suffixed with the target environment (UAT in this case), stripping the suffix, as well as anything that has no such suffix. There are a variety of other things that happen alongside this to make it rather more complicated, but this is the core of my current problem.
I've been playing around with various Gradle features while transcribing this, as opposed to just getting it working. My current approach is to allow the target environment to be provided on the fly, like so:
tasks.addRule("Pattern: make<ID>") { String taskName ->
    // only react to names matching the pattern
    if (taskName.startsWith('make')) {
        task(taskName).dependsOn tasks['make']
    }
}
The make task deals with the various copying/filtering/transforming of conf files from src into the build area. To do this, it has to work out what the target environment is, which I am currently doing after the DAG has been created:
gradle.taskGraph.whenReady { taskGraph ->
    def makeTasks = taskGraph.getAllTasks().findAll {
        it.name.startsWith('make') && it.name != 'make'
    }
    if (makeTasks.size() == 1) {
        project.targetEnv = makeTasks[0].name - 'make'
    } else {
        // TODO work out how to support building n configs at once
    }
}
(it feels like there must be a quicker, more idiomatic way to do this, but I digress)
I can then run it like gradle makeUAT
My problem is that setting targetEnv in this way means it is not yet set at configuration time. Therefore, if I have a copy task like:
task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
it doesn't do what I want because $project.targetEnv hasn't been set yet. Naively, I changed this to
task prepareEnvSpecificDist(type: Copy) << {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
once I understood what was going on. This then fails like
Skipping task ':prepareEnvSpecificDist' as it has no source files.
because I haven't configured the copy task to tell it what the inputs and outputs are.
The question is: how does one deal with task configuration based on properties that only become concrete after the configuration phase has completed?
NB: I realise I could pass a system property in and do something like gradle -Dtarget.env=UAT make, but that's relatively verbose, and I want to work out what is going on anyway.
Cheers
Matt

Building for a particular target environment is a cross-cutting concern and does not really fit the nature of a task. Using a system property (-D) or project property (-P) is a natural way of dealing with this.
If you absolutely want to save a few characters, you can query and manipulate gradle.startParameter.taskNames to implement an environment switch that looks like a task name. However, this is a non-standard solution.
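For illustration, a hedged sketch of that non-standard switch (it assumes invocations like gradle makeUAT, and that it runs early enough to take effect, e.g. in settings.gradle or at the very top of the root build script; behavior varied across Gradle versions):

// Non-standard sketch: treat a pseudo-task name as an environment switch
def requested = gradle.startParameter.taskNames
def envTask = requested.find { it.startsWith('make') && it != 'make' }
if (envTask) {
    // run the real 'make' task instead of the pseudo-task...
    gradle.startParameter.taskNames = (requested - envTask) + 'make'
    // ...and make the environment available at configuration time
    project.ext.targetEnv = envTask - 'make'
}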
How does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
This is a special case of the more general problem that a configuration value gets written after it has been read. Typical solutions are:
- Avoid it if you can.
- Some task/model properties accept a closure that will then get evaluated lazily. This needs to be looked up in the corresponding task/plugin documentation (see the sketch just after this list).
- Perform the configuration in a global hook like gradle.projectsEvaluated or gradle.taskGraph.whenReady (depending on the exact needs).
- Perform the configuration in a task action (at execution time). As you have already experienced, this does not work in all cases, and is generally discouraged (but sometimes tolerable).
- Plugins use convention mapping to lazily bind model values to task properties. This is an advanced technique that should not be used in build scripts, but is necessary for writing plugins that extend the build language.
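For the second option, a minimal sketch against the copy task from the question (it assumes project.targetEnv gets set before the task executes, e.g. in a taskGraph.whenReady hook; both the Spec-based include and the rename closure are evaluated per file at execution time, which is late enough to see the value):

task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    // evaluated per element at execution time, when targetEnv is known;
    // directories are let through so the tree is still traversed
    include { element ->
        element.directory || element.name ==~ /.*${project.targetEnv}\..*/
    }
    // rename closures are likewise applied at execution time
    rename { name ->
        name.replaceFirst(/(.*)${project.targetEnv}\.(.*)/, '$1.$2')
    }
}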
As a side note, keep in mind that Gradle allows you to introduce your own abstractions. For example, you could add a method that lets you write:
environment("uat") {
// special configuration for UAT environment
}
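Such a method is not built in; a hypothetical sketch of how it could be implemented (all names are illustrative):

// collect environment blocks, apply the active one later
ext.environmentConfigs = [:]

def environment(String name, Closure config) {
    // assumes targetEnv values are upper case, e.g. 'UAT' from 'makeUAT'
    environmentConfigs[name.toUpperCase()] = config
}

environment("uat") {
    // special configuration for UAT environment
}

// once targetEnv is known, e.g. inside gradle.taskGraph.whenReady:
// environmentConfigs[project.targetEnv]?.call()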

Related

What does create method do in gradlePlugin.plugins?

In the Gradle docs (https://docs.gradle.org/current/userguide/custom_plugins.html#sec:custom_plugins_standalone_project), the code block is:
gradlePlugin {
    plugins {
        create("simplePlugin") {
            id = "org.example.greeting"
            implementationClass = "org.example.GreetingPlugin"
        }
    }
}
I noticed it calls the create method, so I looked at the source code. It says:
Creates a new item with the given name, adding it to this container,
then configuring it with the given action.
What does that mean? Is the name actually used anywhere? Or can it be any name, i.e. does it not really matter?
gradlePlugin.plugins is a NamedDomainObjectContainer, which Gradle uses to hold multiple objects, each with a name.
The documentation on plugin development goes into more detail on the usage of NamedDomainObjectContainers.
Sometimes you might want to expose a way for users to define multiple, named data objects of the same type.
[...]
It’s very common for a plugin to post-process the captured values within the plugin implementation e.g. to configure tasks.
Since the elements of a NamedDomainObjectContainer can be of any type, there's no single specific usage. Generally they are used to configure the Gradle project, for example by creating specific tasks or configuring source code locations.
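As a hedged illustration of how a plugin (or build script) might expose such a container, the Environment type and the 'environments' name below are made up for the example:

// hypothetical element type: must have a constructor taking the name
class Environment {
    final String name
    String configDir
    Environment(String name) { this.name = name }
}

// register the container as an extension
project.extensions.add('environments', project.container(Environment))

// users can then declare named elements, just like gradlePlugin.plugins:
environments {
    create('uat') {
        configDir = 'conf/uat'
    }
}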

Confused about Gradle delete task

I'm currently learning Gradle, so this is probably a simple question, but I can't seem to figure it out.
I need to create a task in my Gradle build that deletes a set of intermediate files. After a bunch of Googling, I tried the following:
task deleteTest(type: Delete) {
    doLast {
        delete fileTree('src/main/gen') {
            include '**/*'
        }
    }
}
This has no effect: when I run the task, all of the files in the 'src/main/gen' directory still exist. From reading various websites, this seemed like the correct approach, but it just doesn't work.
Just for grins, I tried:
task deleteTest(type: Delete) {
    delete fileTree('src/main/gen') {
        include '**/*'
    }
}
This seems to work; all of the files get removed from the directory (although it leaves empty sub-directories, which I also don't understand). But from what I read, this is not the correct way to go, since it executes during configuration, not during execution.
Can someone please explain this to me? There's apparently something I'm just not grokking with respect to Gradle in general and this problem in particular.
The short answer:
If you just want to delete the folder src/main/gen and everything inside, use something like this:
task deleteTest(type: Delete) {
    delete 'src/main/gen'
}
Your second example is fine, too. It preserves directories because a fileTree is used, which only collects files.
The long answer:
Your first example mixes the two ways to delete files in Gradle. The first is to use a task of type Delete; the second is to invoke the delete method of the type Project. But how do they differ, and why are they mixed in your example?
Gradle is based on its task system, which allows you to define and configure tasks that are only run if necessary. Whether a task is required for the build is determined from task dependencies (dependsOn). This is the reason why Gradle distinguishes between the configuration phase and the execution phase. During the configuration phase, the whole build script gets executed, except the actual task actions (not visible in the build script) and code wrapped in doFirst / doLast closures. During the execution phase, each required task gets run by Gradle. This involves executing the doFirst closures of the task, then the actual task actions, and finally the doLast closures.

For a Delete task like the one above, this means that the code in the configuration closure (delete 'src/main/gen') gets executed during the configuration phase, but the actual deletion of the files (the task action) happens later on, during the execution phase.
The problem with this approach arises when it's required to delete files directly or all the time (e.g. in a plugin or another scenario). It would be too complicated to create a task, set up the dependencies and so on. Here the delete method of the type Project comes to the rescue. It provides the same interface for configuration as the task type Delete, but executes directly. It can be called via the project instance (e.g. project.delete 'src/main/gen') anywhere in your script and runs instantly; and because the project instance is used as the scope of the whole script, just using delete is often sufficient, too. Well, it is not always sufficient: if the current scope provides a method called delete (with the same signature), that method will be used instead. This is the case inside a task of type Delete, and this is the reason why your first script does not work:
Your task of type Delete gets configured in the doLast closure, which runs after the actual deletion should have taken place. If you remove the type: Delete, the delete method will no longer configure the task, but instead delete the files instantly, because it is no longer the delete method of the task type Delete but the delete method of the type Project. This works fine, but using a real task should be preferred.
If you remove the type: Delete from your second example, the same thing will happen. Instead of configuring the task, the files will be deleted instantly (now during the configuration phase). You do not want this behavior, because the files would then be deleted every time Gradle is invoked, making the task obsolete. This is what you mentioned as a possible problem.
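A hedged sketch of the doLast variant, using an explicit project.delete so the call cannot be mistaken for task configuration:

task deleteTest {
    doLast {
        // Project.delete, called explicitly, runs right here at execution time.
        // Passing the directory itself (rather than a fileTree) also removes
        // the empty sub-directories.
        project.delete('src/main/gen')
    }
}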

Recommended way of plugin configuration with "tasks.withType(Foo) {...}" or "foo {...}"?

I'm learning Gradle and am confused to see two styles of how plugins are configured, depending on which tutorial/book I read:
checkstyle {
    ignoreFailures = true
}

tasks.withType(Checkstyle) {
    ignoreFailures = true
}
The first one looks cleaner, but the second one would also apply to custom tasks that inherit from Checkstyle. I suspect that the latter also makes it easier for the IDE to infer the type and offer proper auto-completion. Is that right?
Is there a general trend towards one or the other that I should follow?
The two are slightly different:
- checkstyle { ... } configures the single object named checkstyle, and fails if no such object exists. With the Checkstyle plugin applied, this is the checkstyle extension (the plugin's tasks are named checkstyleMain, checkstyleTest, etc.), and the extension's values serve as defaults for all Checkstyle tasks.
- tasks.withType(Checkstyle) { ... } configures every task in the project of type Checkstyle. This could result in zero, one or multiple task instances being configured, and it also covers custom tasks that inherit from Checkstyle.
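A hedged sketch of how the two interact (it assumes the checkstyle plugin is applied; the custom task name and version value are illustrative):

checkstyle {
    toolVersion = '8.45'   // extension value: a default for all Checkstyle tasks
}

tasks.withType(Checkstyle) {
    ignoreFailures = true  // reaches checkstyleMain, checkstyleTest and any
                           // custom task of type Checkstyle
}

// a custom task: picked up by withType, untouched by the name-based block
task checkstyleGenerated(type: Checkstyle) {
    source 'src/generated/java'
    classpath = files()
}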

How to set dynamic input dependency on gradle task

I'm working on a Gradle plugin that has a task that generates compilable Java source code. As input for the code generator it takes a "static" directory (a property value) which should contain files in a special language (with a specific file extension). In addition, if a particular configuration property is set to true, it will also search for files with the same extension in the entire classpath (or better, in a specific configuration).
I want to make sure that the task re-runs if any of its input dependencies have changed.
It's easy enough to add @InputDirectory to the property definition for the "static" location, but I'm unsure how to handle the "dynamic" input dependency.
I have a property that defines the name of the configuration that would be used to search for additional files with that extension. We'll call that "searchConfiguration". This property is optional; if it's not set, it will use "compile". I also have the property that specifies whether we will search for additional files in the first place. We'll call that "inspectDependencies".
I think I could write an @Input-annotated method that returns essentially the "configurations.searchConfiguration.files" list. We'll call that "getDependencies". I think that is the basic idea. However, I don't understand what to do about "inspectDependencies". I could easily make "getDependencies" return an empty list if "inspectDependencies" is false, but is that truly the correct thing to do? It seems likely that if someone changed "inspectDependencies" from "true" to "false" after a build, the next build should run the task again.
Well, this is tentative, but I asked about this on the Gradle Forum, and Mark Vieira convinced me that it really should be this simple, although it requires @InputFiles instead of @Input. My particular method looks like this:
@InputFiles
def getOptionalYangClasspath() {
    return inspectDependencies ? project.configurations[yangFilesConfiguration] : Collections.emptyList()
}
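As a hedged addition (not from the original answer): exposing the flag itself as an @Input means a change to inspectDependencies is tracked directly, even when the returned file collection happens to look the same either way:

// sketch: the flag becomes part of the task's input snapshot, so flipping
// it from true to false also causes the task to re-run
@Input
boolean inspectDependencies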

Yaml properties as a Map in Spring Boot

We have a spring-boot project and are using application.yml files. This works exactly as described in the spring-boot documentation. spring-boot automatically looks in several locations for the files, and obeys any environment overrides we use for the location of those files.
Now we want to also expose those YAML properties as a Map. According to the documentation this can be done with YamlMapFactoryBean. However, YamlMapFactoryBean wants me to specify which YAML files to use via its resources property. I want it to use the same YAML files and processing hierarchy that were used when creating the properties, so that I can still take advantage of "magical" features such as placeholder resolution in property values.
I didn't see any documentation on whether this is possible.
I was thinking of writing a MapFactoryBean that looks at the Environment and simply reverses the "flattening" performed by the YamlProcessor when creating the properties representation of the file.
Any other ideas?
The ConfigFileApplicationListener contains the logic for searching for files in various locations, and PropertySourcesLoader loads a file (Resource) into property sources. Neither is really designed for standalone use, but you could easily duplicate them if you want more control. The PropertySourcesLoader delegates to a collection of PropertySourceLoaders, so you could add one of the latter that delegates to your YamlMapFactoryBean.
A slightly awkward but workable solution would be to use the existing machinery to collect the YAML on startup. Add a new PropertySourceLoader to your META-INF/spring.factories and let it create new property sources, then post-process the Environment to extract the source map(s).
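For reference, registering such a loader would look something like this (the implementation class name is hypothetical):

# META-INF/spring.factories
org.springframework.boot.env.PropertySourceLoader=com.example.YamlMapPropertySourceLoader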
Beware, though: creating a single Map from multiple YAML files, or even from a single file with multiple documents (let alone multiple files with multiple documents), isn't as easy as you might think. You have a map-merge problem, and someone has to define the merge algorithm. The flattening done in YamlPropertiesFactoryBean and the merge in YamlMapFactoryBean are just two choices out of a (probably) larger set of possibilities.
