How does a Gradle task explicitly mark itself as having altered its output, or as being up to date, for tasks that depend on it?

I am creating a rather custom task that processes a number of input files and outputs a different number of output files.
I want to check the dates of the input files against the existing output files, and possibly also look at the content of the input files, to determine whether the task is up to date or needs to be invoked to become up to date. What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor see the right state and either force dependents to build or not, as appropriate?
Also, is there any documentation on standard library utilities for things like checking file dates, getting lists of files, and so on, that are as easy to use as in Ruby's Rake?
How do I specify the inputs and outputs of the task, especially since the outputs will not be known until the source is parsed and the output directory is scanned for what already exists?
A sample that does this in a larger project that has tasks dependent on it would be really nice :)

What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor see the right state and either force dependents to build or not, as appropriate?
Ideally this should be done as a custom task type. None of this logic should be in any of the Gradle files at all. Either have the logic in a dedicated plugin project that gets published somewhere and can then be referenced in the project, or have the logic in buildSrc.
What you are trying to develop is what is known as an incremental task: https://docs.gradle.org/current/userguide/custom_tasks.html#incremental_tasks
These are used heavily throughout Gradle which makes the incremental build of Gradle possible: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:up_to_date_checks
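As a minimal sketch of what such an incremental task type might look like in buildSrc (the class, property, and file names here are invented for illustration and assume a reasonably recent Gradle version): the inputs and an output directory are declared, so Gradle's own fingerprinting replaces any hand-rolled date comparison, and only the inputs that changed are handed to the action.

import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.file.DirectoryProperty
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.OutputDirectory
import org.gradle.api.tasks.PathSensitive
import org.gradle.api.tasks.PathSensitivity
import org.gradle.api.tasks.TaskAction
import org.gradle.work.Incremental
import org.gradle.work.InputChanges

// buildSrc/src/main/groovy/GenerateOutputs.groovy (hypothetical name)
abstract class GenerateOutputs extends DefaultTask {

    // Declared inputs: Gradle fingerprints their contents for up-to-date
    // checks, so there is no need to compare file dates by hand.
    @Incremental
    @PathSensitive(PathSensitivity.RELATIVE)
    @InputFiles
    abstract ConfigurableFileCollection getSourceFiles()

    // Declared output: a directory is enough; the individual files written
    // into it do not need to be known until the sources have been parsed.
    @OutputDirectory
    abstract DirectoryProperty getOutputDir()

    @TaskAction
    void generate(InputChanges changes) {
        changes.getFileChanges(sourceFiles).each { change ->
            // Only the inputs that changed since the last run arrive here.
            logger.lifecycle("processing ${change.file.name} (${change.changeType})")
            // ...parse the input and write whatever files it implies into outputDir...
        }
    }
}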
How do I specify the inputs and outputs of the task, especially since the outputs will not be known until the source is parsed and the output directory is scanned for what already exists?
Once you have your tasks defined and whatever else you need, in your main Gradle files you would configure them as you would any other plugin or task.
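For instance (again only a sketch, with invented task and path names), registering the task in build.gradle and wiring a downstream task to its output property lets Gradle infer the dependency and decide on its own whether dependents need to rebuild:

tasks.register('generateOutputs', GenerateOutputs) {
    sourceFiles.from(fileTree('src/main/definitions'))   // hypothetical input location
    outputDir = layout.buildDirectory.dir('generated')
}

// A dependent task: wiring it to the producer's output property makes Gradle
// add the task dependency and re-run it only when the outputs have changed.
tasks.register('packageOutputs', Zip) {
    from tasks.named('generateOutputs', GenerateOutputs).flatMap { it.outputDir }
    archiveFileName = 'outputs.zip'
    destinationDirectory = layout.buildDirectory.dir('dist')
}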
The two links above should be enough to help get you started.
As for a small example, I developed a Gradle plugin that generates files based on some input that is not known until it is configured. The 'custom' task type just extends the provided JavaExec. The custom task is Wsdl2java. Then, based on user configuration, tasks get registered here using the input file from the user. Since I reused built-in task types, I know for sure that no extra work will be done and I can rely on Gradle doing the heavy lifting. There's also a test to ensure that configuration cache works as expected: ConfigurationCacheFunctionalTests.
As I mentioned earlier, the links above should be enough to get you started.

Related

Copy all Gradle dependencies without pre-registered custom task

Use Case
The use case for grabbing all dependencies (without the definition of a custom task in build.gradle) is to perform policy violation and vulnerability analysis on each of them via a templated pipeline. We are using Nexus IQ to do the evaluation.
Example
This can be done simply with Maven, by specifying the local repository to download all dependencies into and then supplying a pattern to Nexus IQ to scan. In the example below, we would supply maven-dependencies/* as the scan target to Nexus IQ after rounding up all the dependencies.
mvn -B clean verify -Dmaven.repo.local=${WORKSPACE}/maven-dependencies
In order to do something similar in Gradle it seems the most popular method is to introduce a custom task into build.gradle. I'd prefer to do this in a way that doesn't require developers to implement custom tasks; it's preferred to keep those files as clean as possible. Here's one way I thought of making this happen:
Set GRADLE_USER_HOME to ${WORKSPACE}/gradle-user-home.
Run find ${WORKSPACE}/gradle-user-home -type f -wholename '*/caches/modules*/files*/**/*.*' to grab the location of all dependency resources (I'm fine with picking up non-archive files).
Copy all files found in the previous step to a gradle-dependencies folder.
Supply gradle-dependencies/* as the scan target to Nexus IQ.
Results
I'm super leery about doing it this way, as it seems very hacky and doesn't seem like the most sustainable solution. Is there another way that I should consider?
UPDATE #1: I've adjusted my question to allow answers that have custom tasks, just not pre-registered. Pre-registered means the custom task is already in the build.gradle file. I'll also provide my answer shortly after this update.
I'm uncertain if Gradle has the ability to register external, custom tasks, but this is how I'm making ends meet. I create a custom task in a file called copyAllDependencies.gradle, append the contents of that file (after replacing all newlines and instances of two or more spaces with a single space) to build.gradle when the pipeline runs, and then run gradlew copyAllDependencies. I then pass gradle-dependencies/* as the scan target to Nexus IQ.
task copyAllDependencies(type: Copy) {
    def allConfigurations = [];
    configurations.each {
        if (it.canBeResolved) {
            allConfigurations += configurations."${it.name}"
        }
    };
    from allConfigurations
    into "gradle-dependencies"
}
I can't help but feel that this isn't the most elegant solution, but it suits my needs for now.
UPDATE #1: Ultimately, I decided to go with requiring development teams to specify this custom task in their build.gradle file. There were too many nuances with echoing script contents into another file (hence the need to include ; when defining allConfigurations and when iterating over all configurations). However, I am still open to answers that address the original question.

Run Rake task in parallel using different parameters

I have a scenario where data has to be loaded from different input files, so my current approach is to execute the loader script using Selenium Grid on 10 different systems. Each system will have its own input files, and other information, like the PORT and IP_ADDRESS for the grid, will also be passed in the Rake task itself. This information will be saved in an Excel file, and code has to be written to build n Rake tasks with different environment variables and then execute them all together.
I'm unable to come up with a way in which all the tasks will be created automatically and executed as well.
I know it has to be done using the 'parallel_test' gem or Rake's multitask feature, but I don't know exactly how this can be achieved. Any other approach is also welcome.

Including a log4j.properties file in my jar, but different properties file at execution time (optionally)

I want to include a log4j.properties file in my Maven build, but be able to use a different properties file at execution time (using cron on Unix).
Any ideas?
You want to be able to change properties per environment.
There are a number of approaches to address this issue.
1. Create a directory in each environment that contains the environment-specific files (log4j.properties in your example). Add this directory to the classpath in each environment.
2. Use Maven's filtering ability together with profiles in order to populate log4j.properties with the correct values at build time.
3. Use a build server (Jenkins, for example), which essentially does point 2 for you.
Each of these approaches has its own drawbacks. I am currently using a somewhat weird combination of 2 and 3 because of Jenkins limitations.

Hudson - how to trigger a build via file using the filename and file contents

Currently I'm working on a continuous integration server solution using Hudson.
Now I'm looking for a build job that will be triggered every time it finds a file in a specific directory.
I've found some plugins that allow Hudson to watch and poll files from a directory (File Found Trigger, FSTrigger and SCM File Trigger), but none of them allows me to get the filename and contents of the file found and use those values during the build execution (my idea is to pass these values to a shell script).
Do you know if this is possible to do via any other Hudson plugin, or am I maybe missing something?
Thanks,
Davi
Two valid solutions:
As suggested by Christopher, read the values from the file via shell/batch commands at the beginning of your build script. (The downside is that Hudson will not be aware of those values in any way.)
Use the Envfile Plugin to read the content of the file and interpret it as a set of key-value pairs.
Note that if the File Found Trigger "eats" the flag file, you may need to create two files: one to hold the key-value pairs and another to serve as a flag for the File Found Trigger.

Referencing a file from another Hudson job

What I have is two jobs, A and B, and I'd like job B to use a file from A's last stable build.
Seems like the Hudson UI is able to display all of the information, so I am hoping that there is some way, in Job B, to access that information.
There is probably a solution to copy the file, post build, to a shared location and use it from there, but I don't want to have to worry about Job A starting to build and attempting to whack the file while Job B has it in use.
Ah, but I guess I really do need to copy Job A's file somewhere, and probably put it in a directory named with the build number. Okay, so the new question is: how do I get Job A's last stable build number from Job B?
Notes:
Windows environment
Use the 'archive the artifacts' feature to archive the file you want in job A. Then in job B, pull down the file via the permalink to the last successful build.
Something like:
http://localhost:8080/job/A/lastSuccessfulBuild/artifact/myartifact.txt
but replace 'A' with your job name, and 'myartifact.txt' with the path to your artifact
I would like to mention the Parameterized Trigger Plugin:
http://wiki.hudson-ci.org/display/HUDSON/Parameterized+Trigger+Plugin
Ideally, I believe the best solution would be to have this plugin trigger build B with the file from build A. However, as the current status page says, file parameters ("for passing files from build to build") are still listed under future support.
Until that support is added, what I do is copy the artifact from job A to a share, then use the Parameterized Trigger Plugin to trigger job B and give it the name (a unique name so there are no conflicts) of the file on the share. I put the file name in a "properties file" (see plugin documentation) in order to trigger job B. Job B can then grab the file and run.
