"Up To Date" Gradle task status when it has no output - gradle

How can you correctly mark a Gradle task as "up to date" when the task doesn't produce any output? The task should remain "up to date" provided the last run was successful and the inputs haven't changed since then. The Gradle user guide states the following, just before section 15.9.2:
"A task with no defined outputs will never be considered up-to-date."
How is it possible to mark tasks as up to date in this case? It appears that Gradle would need to know the time of the last successful run and compare that to the last-modified time of the inputs. As a workaround, the script could create/touch an empty file to mark the task as complete. Are there any other suggested workarounds?

To just think through the different scenarios...
Tasks without any inputs or outputs. These run all the time. This might be just wrapping an existing "do something" executable.
Tasks with inputs and outputs. These run when either the inputs or outputs change. This might be a compiler.
Tasks with just outputs. These run only when the outputs have changed/don't exist. This might be something that downloads something. (I think these are pretty rare in reality; I'd count the URL to download as an input. See the sketch after this list.)
Tasks with inputs and no outputs. I haven't run into these in practice.
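For the download case above, a minimal sketch of such a task, with the URL declared as an input and the downloaded file as an output (the URL and paths are made up):
task downloadTool {
    def url = 'https://example.com/tool.zip'          // illustrative URL
    def dest = file("${buildDir}/downloads/tool.zip") // illustrative path
    inputs.property 'downloadUrl', url
    outputs.file dest
    doLast {
        dest.parentFile.mkdirs()
        new URL(url).withInputStream { input ->
            dest.withOutputStream { out -> out << input }
        }
    }
}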
Like you've said, you could cheat the up-to-date checks with an output file that is just empty. The built-in Gradle Test task is most similar to what you're describing and it has a "report" as its output. I think you would probably have something similar too. It could be as simple as capturing the stdout/stderr of the task and putting that into a file. That's not too useful for when everything passes, but it would be useful for when things fail.
Of course, any of these could be supplemented with a custom upToDateWhen check; e.g., you might have a task that starts a webserver, and it's "up-to-date" when the webserver is already running. I don't think that's a good fit with what you're describing here.
To start out with, I'd try:
outputs.files file("${buildDir}/reports/${name}.out")
I think that'd work with or without actually putting something in the file.
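Put together, a sketch of how that might look, with declared inputs and the report file as the only output (task and file names are illustrative):
task runChecks {
    inputs.files fileTree('src/checks')  // whatever the task actually reads
    outputs.file file("${buildDir}/reports/runChecks.out")
    doLast {
        // do the real work here, then record that it succeeded
        def report = file("${buildDir}/reports/runChecks.out")
        report.parentFile.mkdirs()
        report.text = 'OK'
    }
}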

You can supply your own upToDateWhen check on the task's outputs.
task createDist(type: Zip) {
    outputs.upToDateWhen {
        return true
    }
}
So with true it is always up-to-date, and with false always out-of-date. You can define any kind of custom logic to determine whether it is up to date.
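For instance, picking up the webserver example from the earlier answer, a sketch of a non-trivial check (the host and port are illustrative):
task startServer {
    outputs.upToDateWhen {
        // treat the task as up-to-date if something is already
        // listening on the port
        try {
            new Socket('localhost', 8080).close()
            return true
        } catch (IOException ignored) {
            return false
        }
    }
    doLast {
        // start the server here
    }
}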

Tasks with inputs and no outputs
Unless you explicitly declare outputs, Gradle will treat the task as out-of-date in order to keep the build safe.
You should always explicitly declare both inputs and outputs.
Use outputs.upToDateWhen { true } to simultaneously declare outputs and indicate that your outputs are always up-to-date.
For example (in Kotlin):
task ("dataFileHandler") {
inputs.file ("data-file.txt")
outputs.upToDateWhen { true }
doLast {
// do something with "data-file.txt"
}
}
This task will be executed once (on the first build) and will remain “UP-TO-DATE” until “data-file.txt” is changed.

Related

How do I debug Ansible includes and dependencies?

I've joined a project which has a large number of playbooks and roles, and which makes heavy use of include (often in a nested fashion) in order to include playbooks/roles within existing playbooks/roles. (Whether this is good or bad practice should be considered out of scope of this question, because it's not something I can immediately change. Note also that include_role is not used because these playbooks were written well before 2.2 was out, and are still in the process of being updated.)
Normally when running ansible-playbook, the output just shows each task being run, but it does not show the includes which pull in extra tasks. This makes it hard to see how the overall flow jumps around between playbooks. In contrast, include_vars tasks are included in the output. I'm guessing this is because it's an Ansible module, whereas include isn't really a module.
So without having to modify the playbooks, is there a way to run playbooks which shows the following?
when include directives are triggering, and
(ideally) also the exact files which are being included, since it's not always obvious how relative paths are converted into absolute paths
I've found lots of advice on various ways to debug playbooks, but nothing which achieves this. Bonus points if it also shows when roles are being included via meta role dependencies!
I'm aware that there are tools such as ansigenome which do static analysis of playbooks, but I'm hoping for something which can output the information at playbook run-time, for any playbook I choose to invoke.
If it's not currently possible, would it be a reasonable feature request?
Try executing ansible-playbook -vv; it shows the "task path" for every executed task, like this:
TASK [debug] *********************************************
task path: /path/to/task/file.yml:5
ok: [localhost] => {
    "msg": "aaa"
}
So you can easily track the actual file path (included or not) and the line number.
As for includes, there are different types of includes in current Ansible versions (2.2, 2.3): static and dynamic.
Static includes happen during parse time and information about them is printed (with -vv verbosity) at the very beginning of playbook run.
Dynamic includes happen in runtime and you can see cyan "included" lines in the output.
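To make the distinction concrete, a sketch in 2.2/2.3 syntax (file names are made up; the static keyword controls which behaviour you get):
- include: tasks/setup.yml   # static: resolved at parse time, reported
  static: yes                # at the very beginning of the run

- include: "tasks/{{ variant }}.yml"  # dynamic: resolved at runtime,
  static: no                          # shown as cyan "included" lines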

Gradle: task's configuration depends on another task's execution

My Gradle build has two tasks:
findRevision(type: SvnInfo)
buildWAR(type: MavenExec, dependsOn: findRevision)
Both tasks are configuration based, but the buildWAR task depends on a project property that is only defined in the execution phase of the findRevision task.
This breaks the process, as Gradle cannot find said property at the time it tries to configure the buildWAR task.
Is there any way to delay binding or configuration until another task has executed?
In this specific case I can make use of the mavenexec method instead of the MavenExec task type, but what should be done in similar scenarios where no alternative method exists?
Depending on what configuration option exactly you want to change, you might change it in the execution phase of the task with buildWAR.doFirst { }. But generally this is a really bad idea. If you, for example, change something that influences the result of the UP-TO-DATE checks, like input files, the task might execute though it would not be necessary, or, even worse, not execute though it would be necessary. You can of course make the task always execute to overcome this with outputs.upToDateWhen { false }, but there might be other problems, and this way you also disable one of Gradle's biggest strengths.
It is a much better idea to redesign your build so that this is not necessary, for example by determining the revision at configuration time already. Depending on how much time the task needs, this might or might not be a viable solution. Also, depending on what you want to do with the revision, you might consider the suggestion of @LanceJava and make your findRevision task generate a file with the revision in it that is then packaged into the WAR and used at runtime.
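A sketch of the first suggestion, resolving the revision at configuration time instead of during findRevision's execution (assumes the svn command-line client is available; --show-item needs SVN 1.9+):
// runs at configuration time, so the value exists when buildWAR is configured
def revision = ['svn', 'info', '--show-item', 'revision'].execute().text.trim()

buildWAR {
    // how the revision is consumed depends on your MavenExec setup;
    // storing it as an extra property is just illustrative
    ext.buildRevision = revision
}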

Run task only if another task wasn't UP-TO-DATE

I'm new to Gradle, but I can't find this issue addressed anywhere in the way I describe here:
1) I have a task versionUpdate that increments a build number counter in a number of files. (The task is arbitrary; my question is about defining complex graph dependencies.)
2) I only want versionUpdate to execute if the compile task is not UP-TO-DATE. This is a multi-project build, and it needs to happen if any subproject builds, and only once.
3) versionUpdate should happen before compile (as it must reflect the current build number), but if and only if compile was added to the graph. That is, not all tasks should be invoking versionUpdate, and even compile is conditional.
Currently, I just have the following:
[compileJava, compileTestJava]*.dependsOn versionUpdate
TL;DR How can I ask the compile task if it's UP-TO-DATE and modify the task graph based upon this information?
You can achieve what you want by adding a doFirst block to the compileJava task:
compileJava.doFirst {
    // your code to update versions goes here
}
This will only be executed if compileJava actually has work to do, i.e. when it is not UP-TO-DATE.
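For the multi-project, at-most-once aspect of the question, a sketch of wiring that hook up from the root build script (the guard flag is illustrative and not safe with --parallel):
def versionUpdated = false
subprojects {
    tasks.matching { it.name in ['compileJava', 'compileTestJava'] }.all { t ->
        t.doFirst {
            if (!versionUpdated) {
                versionUpdated = true
                // increment the build number here, exactly once per build
            }
        }
    }
}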

Setting System Properties for Gradle Tests

I have an application in which each test should run in a VM whose configuration has been conditioned by a test-specific System property (which can just be based on the test class name).
Something like this sort of works:
test {
    forkEvery 1
    maxParallelForks Runtime.runtime.availableProcessors()
    beforeSuite { TestDescriptor descriptor ->
        systemProperty('test.class.name', descriptor.getClassName())
    }
}
But it doesn't quite work. The names the forked JVMs see do change, but they aren't the ones I expect, and I suspect they aren't even deterministic. It seems as though there is one shared JavaForkOptions item, and calls to beforeSuite aren't linked in a deterministic, exclusive way to the forking of the process for that suite, so the name a process gets might not match the one set in "its" beforeSuite call.
Is there a better way to do this, or some way to get more precise control over the forking process so that System properties could be set on a fork-specific data structure?
Thanks for any help!

Better to use task dependencies or task.doLast in Gradle?

After building my final output file with Gradle I want to do 2 things: update a local version.properties file and copy the final output file to a specific directory for archiving. Let's assume I already have 2 methods implemented that do exactly what I just described, updateVersionProperties() and archiveOutputFile().
I'm now wondering what's the best way to do this...
Alternative A:
assembleRelease.doLast {
    updateVersionProperties()
    archiveOutputFile()
}
Alternative B:
task myBuildTask(dependsOn: assembleRelease) << {
    updateVersionProperties()
    archiveOutputFile()
}
And here I would call myBuildTask instead of assembleRelease as in alternative A.
Which one is the recommended way of doing this and why? Is there any advantage of one over the other? Would like some clarification please... :)
Whenever you can, model new activities as separate tasks. (In your case, you might add two more tasks.) This has many advantages:
Better feedback as to which activity is currently executing or failed
Ability to declare task inputs and outputs (reaping all benefits that come from this)
Ability to reuse existing task types
More possibilities for Gradle to execute tasks in parallel
Etc.
Sometimes it isn't easily possible to model an activity as a separate task. (One example is when it's necessary to post-process the outputs of an existing task in-place. Doing this in a separate task would result in the original task never being up-to-date on subsequent runs.) Only then should the activity be attached to an existing task with doLast.
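For example, a sketch of modeling the two activities from the question as tasks of their own (task names and the archive directory are illustrative):
task updateVersionProperties {
    dependsOn assembleRelease
    doLast {
        // update the local version.properties file here
    }
}

task archiveOutputFile(type: Copy) {
    dependsOn assembleRelease
    from assembleRelease.outputs.files
    into "${buildDir}/archive"
}

task myBuildTask(dependsOn: [updateVersionProperties, archiveOutputFile])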
