Gradle custom task action order

I'm looking at a simple example of a custom task in a Gradle build file from Mastering Gradle by Mainak Mitra (page 70). The build script is:
println "Working on custom task in build script"
class SampleTask extends DefaultTask {
    String systemName = "DefaultMachineName"
    String systemGroup = "DefaultSystemGroup"

    @TaskAction
    def action1() {
        println "System Name is " + systemName + " and group is " + systemGroup
    }

    @TaskAction
    def action2() {
        println 'Adding multiple actions for refactoring'
    }
}
task hello(type: SampleTask)
hello {
systemName='MyDevelopmentMachine'
systemGroup='Development'
}
hello.doFirst {println "Executing first statement "}
hello.doLast {println "Executing last statement "}
If I run the build script with gradle -q :hello, the output is:
Executing first statement
System Name is MyDevelopmentMachine and group is Development
Adding multiple actions for refactoring
Executing last statement
That is as expected: the doFirst action executes first, the two custom actions execute in the order in which they were defined, and the doLast action executes last. If I comment out the lines adding the doFirst and doLast actions, the output is:
Adding multiple actions for refactoring
System Name is MyDevelopmentMachine and group is Development
The custom actions now execute in the reverse of the order in which they were defined. I'm not sure why.

I think it's simply a case that the ordering is not deterministic, and you get different results depending on how you further configure the task.
Why do you want two separate @TaskAction methods as opposed to a single one that calls methods in a deterministic sequence, though? I can't think of a particular advantage of doing it that way (I realize it's from a sample given in a book).
Most other samples I find only have a single method
@TaskAction
void execute() {...}
which I think makes more sense and is more predictable.
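For comparison, the book's example rewritten with a single @TaskAction that calls plain methods in an explicit order might look like this (a sketch; the method names are made up):

```groovy
class SampleTask extends DefaultTask {
    String systemName = "DefaultMachineName"
    String systemGroup = "DefaultSystemGroup"

    @TaskAction
    def runActions() {
        // deterministic: plain method calls, not separately registered actions
        printSystemInfo()
        printRefactoringNote()
    }

    private void printSystemInfo() {
        println "System Name is $systemName and group is $systemGroup"
    }

    private void printRefactoringNote() {
        println 'Adding multiple actions for refactoring'
    }
}
```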

Patrice M. is correct: the order in which those methods are executed is nondeterministic.
In detail
@TaskAction-annotated methods are processed by AnnotationProcessingTaskFactory.
First, though, the task action methods are fetched with DefaultTaskClassInfoStore and the results are stored in TaskClassInfo.
You can see that Class.getDeclaredMethods() is used to fetch all methods and check whether they carry the @TaskAction annotation.
The Javadoc for public Method[] getDeclaredMethods() throws SecurityException contains the following note:
The elements in the returned array are not sorted and are not in any
particular order.
Link to the Gradle discussion forum with a topic about @TaskAction

It doesn't guarantee the order.
For reference, here is one more link, to an issue I raised a few years ago; Gradle should either warn about this or replace it with a better solution:
https://github.com/gradle/gradle/issues/8118

Related

How to determine if a Gradle task was instantiated (Configuration Avoidance API)

I am trying to improve the performance of our Gradle builds and discovered the Gradle Task Configuration Avoidance API (https://docs.gradle.org/current/userguide/task_configuration_avoidance.html). It allows you to postpone the creation and configuration of a task until it is really needed. This might save a lot of startup time, and as we call Gradle multiple times during a build, this could amount to considerable time savings.
We developed some plugins for internal use and I put an effort into changing how I define the tasks to avoid creation when not needed. I want to test if my changes are successful and the task instantiation is delayed.
Simple example of how to create a task without instantiating it:
class MyTask extends DefaultTask {
}
TaskProvider customTask = tasks.register("customAction", MyTask) {
    println "task configured!" // configuration time output
    doLast {
        println "action 1!" // execution time output
    }
}
// configuration like this avoids task instantiation
tasks.named("customAction") {
    doLast {
        println "action 2!"
    }
}
tasks.withType(MyTask).configureEach {
    doLast {
        println "action 3!"
    }
}
Executing gradle help does not print the "task configured!" message, while gradle customAction does.
To make sure I do not accidentally trigger task instantiation, I would like to write tests for our plugins. But I could not find a way to determine if a task is instantiated or not.
I know about Build Scans (https://guides.gradle.org/creating-build-scans/), but our corporate guidelines are strict and clearance is pending, so I can not use it for now. Also, I do not see a way to use it in tests.
Is there a way to
get a list of created/instantiated tasks from the Gradle project?
or is there any property on Task or TaskProvider showing whether the task has been created/instantiated?
or can buildscans be used offline somehow?
It would be cool, if the solution could be used in the plugin's test code, but manual evaluation would also be valuable.
configureEach is only called when a task is actually created (realized), so you can write something like this to get a list of all configured tasks:
def tasks = []
project.allprojects { Project sp ->
    sp.tasks.configureEach { Task t ->
        tasks.add(t.path)
    }
}
project.gradle.buildFinished {
    println tasks
}
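For use in a plugin's test code, the same idea can be sketched with Gradle's ProjectBuilder (assuming gradleApi() or gradleTestKit() is on the test classpath; the task name here is made up):

```groovy
import org.gradle.testfixtures.ProjectBuilder

def project = ProjectBuilder.builder().build()
def realized = []
// configureEach only fires when a task is actually realized
project.tasks.configureEach { realized << it.name }

def provider = project.tasks.register("customAction") { }
assert !realized.contains("customAction")   // registered but not yet created

provider.get()                              // forces realization
assert realized.contains("customAction")
```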
This is not exactly what OP asked for, but this will give you some stats on how many of each type of task were configured:
gradlew :help -Dorg.gradle.internal.tasks.stats

Propagating logs in shared library to jenkins job console

I am trying to write a shared library consisting of global variables and shared functions to automate the build and deployment tasks for our project.
The project layout is as below:
The project has two major parts:
Global shared variables which are placed in vars folder
Supporting groovy scripts to abstract logics which in turn will be called in the global variable.
Inside the Groovy class, I am using println to log debugging information,
but it never gets printed when invoked through a Jenkins pipeline job.
The log console of the Jenkins job is as below:
Can someone show me how to propagate logs from a Groovy class to the Jenkins job console? Only the output of println calls in the global shared script is shown in the log console.
I just found a way to do it by calling println step that is available in the jenkins job
Basically I create a wrapper function as below in Groovy class PhoenixEurekaService:
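(The original snippet is a screenshot; a minimal sketch of the wrapper described, with a made-up method name, might look like:)

```groovy
class PhoenixEurekaService implements Serializable {
    def steps

    PhoenixEurekaService(steps) {
        this.steps = steps
    }

    // hypothetical wrapper: delegate logging to the pipeline's println step
    def log(msg) {
        steps.println msg
    }
}
```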
The steps object is actually the Jenkins job context, passed into the Groovy class via its constructor. This way we can call any step available in the Jenkins job from within the Groovy class.
In global groovy script PhoenixLib.groovy
I am not sure whether there is another way to do that...
The pipeline steps/DSL (e.g. println, sh, bat, checkout, etc.) can't be accessed directly from a shared library class.
ref: https://jenkins.io/doc/book/pipeline/shared-libraries/.
You can access steps by passing them to shared library.
//your library
package org.foo

class Utilities implements Serializable {
    def steps
    Utilities(steps) { this.steps = steps }
    def mvn(args) {
        steps.println "Hello world"
        steps.echo "Hello echo"
        //steps.sh "${steps.tool 'Maven'}/bin/mvn -o ${args}"
    }
}
jenkinsfile
@Library('utils') import org.foo.Utilities
def utils = new Utilities(this)
node {
    utils.mvn '!!! so this how println can be worked out or any other step!!'
}
I am not 100% sure if this is what you are looking for, but printing things in a shared library can be achieved by passing steps and using echo. See "Accessing steps" in https://jenkins.io/doc/book/pipeline/shared-libraries/
This answer addresses those who need to log something from deep in the call stack. It might be cumbersome to pass pipeline "steps" object all the way in the stack of the shared library especially when the call hierarchy gets complex. One might therefore create a helper class with a static Closure that holds some logic to print to the console. For example:
class JenkinsUtils {
    static Closure<Void> log = { throw new RuntimeException("Logger not configured") }
}
In the steps Groovy file, this needs to be initialized (ideally in a @NonCPS method), for example in your Jenkinsfile (or vars file):
@NonCPS
def setupLogging() {
    JenkinsUtils.log = { String msg -> println msg }
}
def call() {
    ...
    setupLogging()
    ...
}
And then, from any arbitrary shared library class, one can print to the console simply like:
class RestClient {
    void doStuff() {
        JenkinsUtils.log("...")
    }
}
I know this is still a hacky workaround, but I could not find a better working solution even though I spent quite some time researching.
Posted this also as a gist to my github profile

What's the best choice between tasks or methods to organize your Gradle build logic?

I'm currently migrating some old huge Maven 1 script to Gradle.
As a consequence, I need to adapt the old Maven 1 / Ant goal logic to Gradle.
After having read the Gradle User Guide, and some articles on Gradle tasks and methods, I am quite confused about the way to write my script.
In the official Gradle User Guide, §6.1, it is explained that a Gradle task:
represents some atomic piece of work which a build performs
In §6.11, it is also explained that we can use methods to organize our build logic.
So, my question is: what's the correct way to use each of them?
I am creating a build script, so, in my opinion:
tasks should only be what the user is allowed to see, and thus to call from the command line.
For example, gradle doSomeInternalTechnicalWork is not correct to me, as the user should not even know that doSomeInternalTechnicalWork exists.
Again, in my opinion, it should NOT be a task.
methods should be used to organize the build logic and should NOT be visible to the user.
With that logic, I run into problems when my methods need to call Gradle tasks (like the JAR creation of the Java plugin).
I know that I should not call a task from another task (and likewise a task from a method), but have a look at this example:
task independentTask << {
    // initialization stuff
    println "doing a lot of initialization"
    // using methods to organize build logic, good or not?
    doComplexThingsThatTheUserShouldNeverDoHimself()
}
task dependentTask(dependsOn: 'independentTask') << {
    println "now that 'independentTask' is done, I can continue to do complex things..."
}
void doComplexThingsThatTheUserShouldNeverDoHimself() {
    println "doing really complex things"
    // I really need to create my JAR here and not somewhere else
    // And I know it's not a good thing to directly call the Action.execute
    jar.execute()
    println "doing other really complex things"
}
In this case, what would be a correct build logic?
Should doComplexThingsThatTheUserShouldNeverDoHimself be converted in 1 or more tasks, so as to be able to dependsOn the JAR task?
But that would mean having a lot of tasks callable by the user when that should not be the case.
After having searched quite a lot, I concluded that, when you need to call a task from another task, you have little choice but to rely on task relationships (dependsOn, mustRunAfter, finalizedBy).
Which means that methods can't be used to organize the build logic in the same way that they are used to structure a program in Java, Groovy & Co.
As a consequence, you can't prevent the user from seeing (and using) some internal tasks, that should normally only be used as dependency by some other ones.
A "Gradle correct" version of the former build script would hence be:
task dependentTask(dependsOn: 'doComplexThingsThatTheUserShouldNeverDoHimselfPart2') << {
    println "now that 'independentTask' is done, I can continue to do complex things..."
}
task doComplexThingsThatTheUserShouldNeverDoHimselfPart2(dependsOn: ['doComplexThingsThatTheUserShouldNeverDoHimselfPart1', 'jar']) << {
    println "doing other really complex things"
}
task doComplexThingsThatTheUserShouldNeverDoHimselfPart1(dependsOn: 'independentTask') << {
    println "doing really complex things"
}
task independentTask << {
    // initialization stuff
    println "doing a lot of initialization"
}
Or, with tasks relationships declared separately:
task dependentTask << {
    println "now that 'independentTask' is done, I can continue to do complex things..."
}
task independentTask << {
    // initialization stuff
    println "doing a lot of initialization"
}
task doComplexThingsThatTheUserShouldNeverDoHimselfPart1 << {
    println "doing really complex things"
}
task doComplexThingsThatTheUserShouldNeverDoHimselfPart2 << {
    println "doing other really complex things"
}
// we declare all task relationships separately
dependentTask.dependsOn doComplexThingsThatTheUserShouldNeverDoHimselfPart2
doComplexThingsThatTheUserShouldNeverDoHimselfPart2.dependsOn doComplexThingsThatTheUserShouldNeverDoHimselfPart1, jar
doComplexThingsThatTheUserShouldNeverDoHimselfPart1.dependsOn independentTask
Personally, I prefer the last one, the relationship block being more readable.
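One partial mitigation for the visibility concern: by default, gradle tasks only lists tasks that have a group (ungrouped tasks only appear with --all), so internal helper tasks can at least be kept out of the default task report. A sketch (task names are made up):

```groovy
task userFacingTask {
    group = 'build'                        // listed by `gradle tasks`
    description = 'User-facing entry point'
    doLast { println "doing complex things..." }
}

task internalHelperTask {
    // no group: hidden from the default `gradle tasks` report
    doLast { println "doing really complex things" }
}

userFacingTask.dependsOn internalHelperTask
```

This doesn't prevent the user from calling internal tasks directly, but it keeps them out of sight.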

Custom gradle task classes: is there a "post-construction" hook?

Is there some sort of "post-construction hook" available on custom task classes, so I can call methods like inputs and outputs in class-specific logic?
Let's say I'm defining a custom Gradle task class like
class ExampleTask extends DefaultTask {
    def exFile = null
}
Now, I'd like to instantiate it via
task('ex', type: ExampleTask) {
    exFile = file("some-example.json")
}
... and I'd like to automatically run the equivalent of inputs(exFile) on the instance. Where does the logic go to handle this kind of configuration? I see that I could add an @InputFiles annotation on a method in my custom task class, like
@InputFiles
def getFiles() {
    file(exFile)
}
... but this doesn't seem very general. I'd rather just use the existing inputs() functionality, rather than rewriting portions of it. But I can't figure out where to call it from.
If necessary, you can do these initializations in the zero-argument constructor of the task class. Default property values are often set by a plugin (especially if a default value depends on information from outside the task class). Input/output annotations should be preferred over the input/output API. (The latter exists for ad-hoc tasks that don't have their own task class.)
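A sketch of the constructor-based approach (using the runtime inputs API the question mentions; TaskInputs.file accepts a closure, so evaluation is deferred until after exFile has been configured):

```groovy
class ExampleTask extends DefaultTask {
    def exFile = null

    ExampleTask() {
        // registered at construction time, but the closure is only
        // resolved when Gradle evaluates the task's inputs
        inputs.file({ exFile })
    }
}
```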
I need the exact same thing, and to my understanding, the answers are more or less - no, that is currently not possible.
See https://discuss.gradle.org/t/custom-task-with-extensions/12491

Trying to understand gradle project properties

Clearly I don't understand what's going on here.
I guess prop2 and prop3 can't be accessed because they are variables instead of "project properties".
The question arose because I would like the variables prop2 and prop3 to be visible from within the doTheThing() method, but I don't want to have to pass them in. I want the variables to be globally accessible to tasks, methods, and classes (but only from within the build script itself), and I want them to be typed (which is why the definition of prop1 is not acceptable).
Really, though - I guess what I'm asking for is some help understanding what a Gradle project property is and what the syntax 'prop1 = "blah"' is actually doing.
I have read the Gradle user guide and also Gradle in Action; if they already explain this concept, please point me to the right section (maybe I glossed over it at the time, not understanding what it was saying).
prop1 = "blah"
String prop2 = "bleah"
def prop3 = "blargh"
task testPropAccess << {
    println "1: $prop1"
    println "2: $prop2"
    println "3: $prop3"
    doTheThing()
}
private void doTheThing() {
    println "4: $prop1"
    println "5: $prop2" // error: Could not find property 'prop2' on root project 'script'
    println "6: $prop3" // error: Could not find property 'prop3' on root project 'script'
}
When you declare a variable at the outermost level (as in your second and third statement), it becomes a local variable of the script's run method. This is really just Groovy behavior, and nothing that Gradle can easily change.
If you want the equivalent of a global variable, just assign a value to an unbound variable (as in your first statement). This adds a dynamic property to Gradle's Project object, which is visible throughout the build script (unless shadowed). In other words, prop1 = "blah" is equivalent to project.prop1 = "blah".
If you want the equivalent of a typed global variable, you'll have to wait until Gradle upgrades to Groovy 1.8, which makes this possible with the @Field annotation. Or you can write a plugin that mixes a convention object into the Project object (but that's not suitable for ad-hoc scripting).
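For what it's worth, on Gradle versions that bundle Groovy 1.8 or later, the @Field approach works in a build script. A sketch adapting the example above:

```groovy
import groovy.transform.Field

@Field String prop2 = "bleah"   // a typed, script-level field instead of a local variable

task testPropAccess << {
    println "2: $prop2"
    doTheThing()
}

private void doTheThing() {
    println "5: $prop2"   // now resolves: prop2 is a field of the script class
}
```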
