Can a gradle task depend on the failure of another task?
For example, I have an auxiliary task that opens the test report in a browser. I want the report to appear only when the "test" task fails, not when all tests pass, as it does now.
task viewTestReport(dependsOn: 'test') << {
    def testReport = project.testReportDir.toString() + "/index.html"
    "powershell ii \"$testReport\"".execute()
}
You can try to set the task's finalizedBy property, like:
task taskX << {
    throw new GradleException('This task fails!')
}

task taskY << {
    if (taskX.state.failure != null) {
        // here is what should be executed if taskX fails
        println 'taskX failed!'
    }
}
taskX.finalizedBy taskY
You can find the explanation in the Gradle user guide, chapter 14.11 "Finalizer tasks". In short, per the docs:
Finalizer tasks will be executed even if the finalized task fails.
So you just have to check the state of the finalized task via TaskState and, if it failed, do what you wanted.
Update:
Unfortunately, because configuration is always executed for all tasks, it doesn't seem possible to create some custom task that sets a flag to show the report from within the script. It's not possible in the execution phase either, because the task will not be called if a previously run task has failed. But you can do what you wanted by providing a build script argument, like:
task viewTestReport << {
    if (project.hasProperty("showReport") && test.state.failure != null) {
        // here is what should be executed if the test task fails
        println 'test failed!'
    }
}
test.finalizedBy(viewTestReport)
In that case, you have to provide the -PshowReport argument when you call any Gradle task if you want the report to be shown on test task failure. For example, if you call:
gradle test -PshowReport
then the report will be shown if the test task fails, but if you call:
gradle test
no report will be shown in either case.
Related
I have 2 Gradle tasks, one depending on the other.
The first task gets a db dump from a remote server using an sh script. The second task parses the dump from the previous task.
The problem is that the second task starts earlier than the dump script from the first task finishes, so there is no file yet when the second task starts to work.
Tasks:
task getDumpFromRemotePostgres(type: Exec) {
    executable = "/bin/sh"
    println 'started task getDumpFromRemotePostgres'
    def dumpScript = './createSqliteDb/scripts/dump_db_script.sh'
    def dbDumpFileName = 'dump.sql'
    args += [dumpScript]
    println 'finished task getDumpFromRemotePostgres'
}
task patchPostgresDumpFile(type: Exec) {
    dependsOn getDumpFromRemotePostgres
    executable = "/bin/sh"
    println 'started task patchPostgresDumpFile'
    def dbDumpFileName = 'dump.sql'
    File dumpFile = file(dbDumpFileName)
    def line
    dumpFile.withReader { reader ->
        while ((line = reader.readLine()) != null) {
            //parse and modify
        }
        println 'finished task patchPostgresDumpFile'
    }
}
Script dump_db_script.sh:
echo "dump script started"
pg_dump --data-only --inserts --dbname=postgresql://user:pas@server/base_name > dump.sql
echo "dump script finished"
The console log is the following (with file-access lines removed):
started task getDumpFromRemotePostgres
finished task getDumpFromRemotePostgres
started task patchPostgresDumpFile
finished task patchPostgresDumpFile
> Task :getDumpFromRemotePostgres
dump script started
dump script finished
Any idea how to solve the problem?
I tried with doLast { ... } but it came to nothing.
It seems it's all due to the different build lifecycle phases. You can read more about them here.
First of all, when you create a task of type Exec, or any kind of task without <<, everything in its body is task configuration and gets executed during the configuration phase of the build. That's why messages like started task getDumpFromRemotePostgres come first in your output.
The second thing is that the executable you are running is executed during the execution phase, after all task configuration is already done. That is why dump script started appears after all the configuration output.
In your case, you don't need to declare patchPostgresDumpFile as an Exec task, because you don't actually call any executable; you just need to run some logic. For that, move the logic into a doLast closure so it runs in the execution phase. Something like this:
task patchPostgresDumpFile() {
    dependsOn getDumpFromRemotePostgres
    doLast {
        println 'started task patchPostgresDumpFile'
        def dbDumpFileName = 'dump.sql'
        File dumpFile = file(dbDumpFileName)
        def line
        dumpFile.withReader { reader ->
            while ((line = reader.readLine()) != null) {
                //parse and modify
            }
        }
        println 'finished task patchPostgresDumpFile'
    }
}
And note that your message started task getDumpFromRemotePostgres doesn't actually mean that the task is running, only that it's being configured. If you want messages before and after the execution, move them into doFirst and doLast closures within the task configuration closure.
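For example, a minimal sketch based on the task from the question (same script path and messages, only moved into the execution-phase closures):

task getDumpFromRemotePostgres(type: Exec) {
    executable = "/bin/sh"
    args = ['./createSqliteDb/scripts/dump_db_script.sh']
    doFirst {
        //runs just before the executable is launched (execution phase)
        println 'started task getDumpFromRemotePostgres'
    }
    doLast {
        //runs after the executable has finished
        println 'finished task getDumpFromRemotePostgres'
    }
}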
What is the difference between these two tasks? Only the task with << in its definition is shown in the output of ./gradlew tasks.
task greet(type: GreetingToFileTask) {
    destination = { project.greetingFile }
}

task sayGreeting(dependsOn: greet) << {
    println file(greetingFile).text
}
The lines above are from the Gradle documentation here.
The << is a shortcut for the doLast action of a task definition. I.e. the following two declarations are equivalent:
task hello << {
    println 'Hello world!'
}

and:

task hello {
    doLast {
        println 'Hello world!'
    }
}
(example taken from Gradle documentation here).
Now, in the first code snippet you just define a task and configure its destination property. However, the task will only be executed if needed.
In the second code snippet, however, you are defining an action, which runs only when the task is actually executed; any code outside of an action is configuration and gets executed during the configuration phase, regardless of the tasks targeted for execution (cite from here):
A task has both configuration and actions. When using the <<, you are
simply using a shortcut to define an action. Code defined in the
configuration section of your task will get executed during the
configuration phase of the build regardless of what task was targeted.
We have an optional Gradle task docker that depends on task war and, if executed, needs a war file generated with an extra file in it. This extra file can be added to the resources within the processResources task (or potentially directly in the war task). However, the corresponding code block must not run if task docker has not been requested and will not run.
We need a correct condition in the following block checking if task docker is in the pipeline:
processResources {
    if (/* CONDITION HERE: task docker is requested */) {
        from ("${projectDir}/docker") {
            include "app.properties"
        }
    }
}
task docker(type: Dockerfile) {
    dependsOn build
    ...
Clarification: processResources is a standard dependency of the war task, and the latter is a standard dependency of the build task. processResources is always executed on build, with or without the docker task, to collect resources for assembling the war, and may not be fully disabled in this case. One could move the code in question to a separate task, dependent on docker and working on the output directory of processResources before war is run; however, such a construct results in much less clarity for such a simple thing.
You can simply add an additional dependency to your docker task, making it rely not only on the build task but also on processResources. In that case, your processResources task will be called only if docker should be executed.
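A minimal sketch of that wiring (assuming the docker and processResources task names from the question):

//make docker rely on processResources in addition to build
docker.dependsOn processResources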
Another solution is to use the TaskExecutionGraph. This lets you initialize some variable that tells you whether or not some task will be executed. But you have to understand that the graph is prepared only after all the configuration is done, so you can rely on it only during the execution phase. Here is a short example of how it could be used:
//some flag, whether or not some task will be executed
def variable = false

//task with some logic executed during the execution phase
task test1 << {
    if (variable) {
        println 'task2 will be executed'
    } else {
        println 'task2 will not be executed'
    }
}

//test1's behavior differs according to whether
//or not this task will run
task test2(dependsOn: test1) {
}

//add some logic according to the task execution graph
gradle.taskGraph.whenReady { taskGraph ->
    //set the flag only if test2 will actually be executed
    if (taskGraph.hasTask(test2)) {
        variable = true
        println 'task test2 will be executed'
    }
}
Furthermore, you can configure your custom task to be disabled, by setting its enabled property to false if the docker task is not in the execution graph. In that case you don't have to provide flags and logic in the execution phase. Like:
task test1 {
    //some logic for execution
    doLast {
        println "execute some logic"
    }
}

task test2(dependsOn: test1) {
}

gradle.taskGraph.whenReady { taskGraph ->
    if (!taskGraph.hasTask(test2)) {
        //disable test1 if test2 will not run
        test1.enabled = false
    }
}
But it won't be possible to run this custom task separately without some additional configuration.
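Applied back to the processResources question above, a hedged sketch could look like this (whenReady fires after configuration but before any task executes, so the extra from block can still be added to the copy spec at that point):

gradle.taskGraph.whenReady { taskGraph ->
    if (taskGraph.hasTask(docker)) {
        //add the extra resource only when docker is in the pipeline
        processResources.from("${projectDir}/docker") {
            include "app.properties"
        }
    }
}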
I want to automatically add a serverRun task when doing functional tests in Gradle, so I add a dependency :
funcTestTask.dependsOn(serverRun)
Which results in the serverRun task running whether or not funcTestTask actually does any work:
:compile
:serverRun
:funcTestTask (and associate compile tasks... etc)
:serverStop
OR
:compile UP-TO-DATE
:serverRun <-- unnecessary
:funcTestTask UP-TO-DATE
:serverStop
The cost of starting the server is pretty high and I only want it to start if funcTestTask isn't UP-TO-DATE. I'd like to do something like:
if (!funcTestTask.isUpToDate) {
    funcTestTask.dependsOn(serverRun)
}
So I know I can't know the up-to-date status of funcTestTask until all its inputs/outputs are decided, BUT can I inherit its up-to-date checker?
serverRun.outputs.upToDateWhen(funcTestTask.upToDate)
The alternative is to "doFirst" the ServerRun in the FuncTest, which I believe is generally frowned upon?
funcTestTask.doFirst { serverRun.execute() }
Is there a way to conditionally run a task before another?
UPDATE 1
Tried setting the inputs/outputs to be the same:
serverRun.inputs.files(funcTestTask.inputs.files)
serverRun.outputs.files(funcTestTask.outputs.files)
and this seems to rerun the server on recompiles (good) and skip reruns after successful unchanged functional tests (also good), but it won't rerun the tests after a failed test, like the following:
:compile
:serverRun
:funcTestTask FAILED
then
:compile UP-TO-DATE
:serverRun UP-TO-DATE <-- wrong!
:funcTestTask FAILED
Having faced the same problem, I found a very clean solution. In my case I want an Eclipse project setup to be generated when the build is run, but only when a new jar is generated. No project setup should be executed when the jar is up to date. Here is how one can accomplish that:
tasks.eclipse {
    onlyIf {
        !jar.state.upToDate
    }
}

build {
    dependsOn tasks.eclipse
}
Since the task is a dependent task of the one you are trying to control, you can try:
tasks {
    onlyIf {
        dependsOnTaskDidWork()
    }
}
I ended up writing to a 'failure file' and making that an input on the serverRun task:
File serverTrigger = project.file("${buildDir}/trigger")

project.gradle.taskGraph.whenReady { TaskExecutionGraph taskGraph ->
    // make the serverRun task have the same inputs/outputs + extra trigger
    serverRun.inputs.files(funcTestTask.inputs.files, serverTrigger)
    serverRun.outputs.files(funcTestTask.outputs.files)
}

project.gradle.taskGraph.afterTask { Task task, TaskState state ->
    if (task.name == "funcTestTask" && state.failure) {
        // touch the trigger file so serverRun is out of date on the next run
        serverTrigger << new Date()
    }
}
With information from an answer to my question on the Gradle forums:
http://forums.gradle.org/gradle/topics/how-can-i-start-a-server-conditionally-before-a-functionaltestrun
I had the same problem, but the solution I came up with is much simpler.
This starts up the server only if testing is necessary:
test {
    doFirst {
        exec {
            executable = 'docker/_ci/run.sh'
            args = ['--start']
        }
    }
    doLast {
        exec {
            executable = 'docker/_ci/run.sh'
            args = ['--stop']
        }
    }
}
Assuming you have task1, and task2 which depends on task1, and you need to run task2 only when task1 is not up-to-date, the following example may be used:
task task1 {
    // task1 definition
}

task task2(dependsOn: task1) {
    onlyIf { task1.didWork }
}
In this case, task2 will run only when task1 is not up-to-date. It's important to use didWork only for tasks which are defined in dependsOn, in order to ensure that didWork is evaluated after that task (task1 in our example) has had a chance to run.
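For illustration, on a second run where task1 is up to date, the console output would look roughly like this (task2 is skipped because its onlyIf predicate evaluates to false):

> gradle task2
:task1 UP-TO-DATE
:task2 SKIPPED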
I have a very simple build script like so
task hello {
    println("hello World")
}

task bye {
    println("bye")
}
On the command line I run gradle hello and I get the following output:
hello World
bye
:hello UP-TO-DATE
Why is it executing the task "bye" (I'm assuming it gets executed since "bye" gets printed)? Thanks.
It's a common pitfall:
task hello {
    println("Any code in here is about *configuring* the task. " +
            "By default, all tasks always get configured.")
    doLast {
        println("Any code in here is about *executing* the task. " +
                "This code only gets run if and when Gradle decides to execute the task.")
    }
}
The distinction between the configuration phase and the execution phase is probably the single most important concept to understand in Gradle. It can be confusing at first, and may go away in the future. A kind of analogue in the Ant/Maven world is that these tools first parse XML build scripts and build an object model (perhaps resolving some properties along the way), and only then execute the build.
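Applied to the script from the question, moving both println calls into doLast makes them run only when their task is actually executed:

task hello {
    //runs only when the hello task executes
    doLast { println("hello World") }
}

task bye {
    //runs only when the bye task executes
    doLast { println("bye") }
}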
Adding to Peter's answer: if you want certain tasks to be executed by default, you can specify the defaultTasks list.
defaultTasks 'clean', 'run'

task clean {
    doLast {
        println 'Default Cleaning!'
    }
}

task run {
    doLast {
        println 'Default Running!'
    }
}

task other {
    doLast {
        println "I'm not a default task!"
    }
}
Output of gradle -q:
> gradle -q
Default Cleaning!
Default Running!
More details can be found here: https://docs.gradle.org/current/userguide/tutorial_using_tasks.html