Is there a way to get the task graph or even just the task names during the Gradle configuration phase?
We have some code in buildSrc, and it would be convenient to take some actions based on the tasks that will be run. If I try to get the task graph in our buildSrc code, I receive an exception: "Task information is not available, as this task execution graph has not been populated." Is there any way to get an idea of which tasks are to be executed prior to the execution graph being populated?
I was thinking of parsing the Gradle command line to check for task names there but that is brittle and seems less than ideal.
You should rely as much as possible on Gradle and not try to reinvent the wheel when it comes to figuring out which tasks will run.
From a programmatic point of view:
To list all possible tasks in the project: project.tasks
To obtain the task graph: gradle.taskGraph
For example, at the end of the configuration phase, you could call methods that are in your buildSrc:
gradle.taskGraph.whenReady { taskGraph ->
// Call your methods here using the task graph
}
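Building on that hook, here is a short sketch of inspecting the populated graph — listing everything that will run, or checking for a specific task (the ':build' task path is just an example):

```groovy
gradle.taskGraph.whenReady { taskGraph ->
    // List every task that will actually execute in this invocation
    println taskGraph.allTasks.collect { it.path }

    // ':build' is just an example task path
    if (taskGraph.hasTask(':build')) {
        println 'build is part of this run'
    }
}
```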
Related
There are multiple questions related to viewing Gradle task dependencies, that is, given Task X, what other tasks will be triggered.
Is there a plugin to perform the opposite calculation, that is, given Task X, list all of the tasks which would cause Task X to be executed?
I have 5 different tasks that must be executed in parallel. These tasks are implemented in 5 different classes, and I now need to invoke those 5 classes in parallel. Also, the number of times each task is executed will differ per invocation.
Let's say I have a ProcessExecutor class that will provide the list of tasks that need to be executed.
//This list will change dynamically each invocation
List<MyTask> myTaskList = new ArrayList<MyTask>();
Based on some property value in MyTask I need to invoke corresponding TaskClass and collect the results.
I am using Spring Boot 1.2.4 and Java 1.6.
You just need to send your tasks as payloads of messages to an ExecutorChannel and then gather their results using an Aggregator component.
All the necessary info you can find in the Reference Manual:
https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/messaging-channels-section.html#executor-channel
https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/messaging-routing-chapter.html#aggregator
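The answer's recommendation is Spring Integration's ExecutorChannel plus an Aggregator; as a hedged, plain-JDK sketch of the same fan-out/gather pattern (the ProcessExecutorSketch and MyTask names are made up for illustration, and this is not the Spring Integration API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ProcessExecutorSketch {

    // Hypothetical stand-in for the five task classes.
    interface MyTask extends Callable<String> {}

    // Runs every task in parallel and gathers the results in task order,
    // playing the role of the ExecutorChannel + Aggregator pair.
    public static List<String> runAll(List<MyTask> tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(tasks.size());
        try {
            List<Future<String>> futures = pool.invokeAll(tasks);
            List<String> results = new ArrayList<String>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that task is done
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static List<String> runDemo() throws Exception {
        // The task list can change on every invocation.
        List<MyTask> myTaskList = new ArrayList<MyTask>();
        for (int i = 1; i <= 5; i++) {
            final int id = i;
            myTaskList.add(new MyTask() {
                public String call() { return "result-" + id; }
            });
        }
        return runAll(myTaskList);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // prints the five results in task order
    }
}
```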
I've got Jenkins job A that triggers job B and afterwards executes a shell script.
Inside this shell script of Jenkins job A I want to use a variable set by Jenkins Job B.
How can I do this?
This can be accomplished in many ways. One way would be to configure Job A to have a build step that triggers Job B and fetches the variables from a file after Job B has finished. Then Job A can read those variables and use them in later steps.
There are several things to consider here, though. First of all, this requires Job B to finish before Job A can/should continue, so if you are thinking of parallel job execution this isn't ideal. Secondly, when dealing with env variables you will need a plugin to make variables available outside of the build step (exporting isn't enough); check out the EnvInject plugin. And thirdly, if the job configuration is becoming complex, there probably is a better way of doing it. With Jenkinsfile and the earlier pipeline plugins, job orchestration has improved a lot, and passing parameters around is much easier in this new, shiny world. That being said, here is an example of something that works like what you are asking about.
Job A
As a build step, trigger Job B, and let Job A halt while Job B finishes
As the next build step, copy an artifact from another build (Job B latest stable), using the Copy Artifact Plugin.
Do something with the file, for example just print its content; it's now accessible in Job A.
Job B
Export a variable and save it to a file
Archive the written file and make it accessible to Job A.
This isn't pretty, but it works at least.
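As a rough sketch of those steps (the file name env.properties and the BUILD_LABEL variable are made up for illustration), Job B could write the variable to a properties file, and Job A, after the Copy Artifact step, could source it:

```shell
# --- In Job B's build step: save the variable to a file, then archive it ---
BUILD_LABEL="release-42"                       # hypothetical value computed by Job B
echo "BUILD_LABEL=${BUILD_LABEL}" > env.properties

# --- In Job A: after the Copy Artifact step has fetched env.properties ---
. ./env.properties                             # load the variable into this shell
echo "Job B produced: ${BUILD_LABEL}"
```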
P.s. I'd recommend checking out the Jenkinsfile (https://jenkins.io/doc/book/pipeline/jenkinsfile/) options, it simplifies a lot after the initial learning curve.
Is there any difference when creating a Gradle task using the << operator?
I see some tasks created with it and without, for example:
task task1 << {}
task task2 {}
thanks in advance :)
Yes, big difference.
The latter executes the body in the configuration phase, the former in the execution phase. Read the user guide PDF in the distribution to learn what those mean.
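A minimal illustration of the difference (note that in later Gradle versions `<<` was deprecated in favour of doLast):

```groovy
task task1 << {
    // Runs only during the execution phase, and only if task1 is requested
    println 'executing task1'
}

task task2 {
    // Runs during the configuration phase of every build,
    // even when task2 itself is never requested
    println 'configuring task2'
}
```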
Gradle generally determines task order itself, which is often fine, but sometimes you need task ordering.
Based on the User Guide and other discussions here, it appears there are two basic ways to order tasks:
Using dependencies, i.e. taskB depends on taskA, so taskA will execute first.
Adding a new parent task, and using mustRunAfter as a constraint to sequence the subtasks.
However there are some situations where both approaches seem troublesome.
Example 1:
In a daily task run by some continuous build system, you might have a sequence:
setupTask1 setupTask2 build test sendReport
Now, you can actually script this sequence externally, by passing these tasks on the gradle command line, i.e.
gradle setupTask1 setupTask2 build test sendReport
But if you want to write (and document) this sequence in the build file itself (rather than externally), are you required to create a new task, then add four new ordering constraints?
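Concretely, something like this sketch (assuming the five tasks are already defined):

```groovy
task dailyBuild {
    // Aggregate task: pulls in all five tasks...
    dependsOn setupTask1, setupTask2, build, test, sendReport
}

// ...and four ordering constraints sequence them
setupTask2.mustRunAfter setupTask1
build.mustRunAfter setupTask2
test.mustRunAfter build
sendReport.mustRunAfter test
```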
Example 2:
Now imagine we have
startServer1 startServer2 integrationTest tearDownServer2 tearDownServer1
In this case, we want sequencing, but also control over failures, since if integrationTest fails, we still want the teardown steps to occur. Then we not only need mustRunAfter, we also have to add finalizedBy. And do we add the finalizer on integrationTest? What if we add more tests that need such a harness? What if we want tearDownServer1 to be sequenced relative to tearDownServer2?
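In other words, presumably something like this sketch, with the finalizers attached to integrationTest and the teardown tasks ordered relative to each other:

```groovy
startServer2.mustRunAfter startServer1
integrationTest.dependsOn startServer1, startServer2

// The finalizers run even if integrationTest fails
integrationTest.finalizedBy tearDownServer1, tearDownServer2
tearDownServer1.mustRunAfter tearDownServer2
```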
In summary, sequencing seems very clumsy, at least to a Gradle novice. In some cases, it might be more elegant simply to sequence things directly in a more imperative way. Am I missing something? Is there a better approach here?
It is (highly) advised to use the already-named solutions, dependsOn and mustRunAfter.
If you are fine with invoking tasks rather than depending on tasks, you could try the following possible solution for example one. Create a task that executes the tasks in sequence. You would get something like this:
task dailyCIJob << {
    // This list will change dynamically on each invocation
    [setupTask1, setupTask2, build, test, sendReport].each {
        it.execute()
    }
}
There are two solutions for example two, though neither is fully satisfying:
Use the --continue Gradle command line argument. This will make Gradle continue if a task fails.
Also, you seem to be able to set an ignoreFailures boolean property per (test) task.
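For instance, a sketch for a Test task; this keeps the build going past failing tests so later tasks such as a teardown still run:

```groovy
test {
    ignoreFailures = true  // a failing test no longer aborts the build
}
```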