Copy all Gradle dependencies without pre-registered custom task - gradle

Use Case
The use case for grabbing all dependencies (without defining a custom task in build.gradle) is to perform policy-violation and vulnerability analysis on each of them via a templated pipeline. We are using Nexus IQ to do the evaluation.
Example
This can be done simply with Maven by specifying a local repository into which all dependencies are downloaded, and then supplying a pattern for Nexus IQ to scan. In the example below we would supply maven-dependencies/* as the scan target to Nexus IQ after rounding up all the dependencies.
mvn -B clean verify -Dmaven.repo.local=${WORKSPACE}/maven-dependencies
In order to do something similar in Gradle it seems the most popular method is to introduce a custom task into build.gradle. I'd prefer to do this in a way that doesn't require developers to implement custom tasks; it's preferred to keep those files as clean as possible. Here's one way I thought of making this happen:
Set GRADLE_USER_HOME to ${WORKSPACE}/gradle-user-home.
Run find ${WORKSPACE}/gradle-user-home -type f -wholename '*/caches/modules*/files*/**/*.*' to grab the location of all dependency resources (I'm fine with picking up non-archive files).
Copy all of the files found in the previous step to a gradle-dependencies folder (see the sketch after these steps).
Supply gradle-dependencies/* as the scan target to Nexus IQ.
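For reference, the rounding-up in steps 2 and 3 could look roughly like the standalone Groovy sketch below. The ${WORKSPACE}/gradle-user-home and gradle-dependencies paths and the caches/modules*/files* layout are assumptions carried over from the find command above; Gradle's cache layout is not a public API.

// Roughly mirrors: find ${WORKSPACE}/gradle-user-home -type f -wholename '*/caches/modules*/files*/**/*.*'
def workspace = System.getenv('WORKSPACE')
def cacheRoot = new File(workspace, 'gradle-user-home')
def target = new File(workspace, 'gradle-dependencies')
target.mkdirs()

cacheRoot.eachFileRecurse { f ->
    // Only copy regular files that live under the dependency file store (caches/modules*/files*).
    if (f.file && f.path.contains('/caches/modules') && f.path.contains('/files')) {
        // Note: identical file names from different modules will overwrite each other here.
        new File(target, f.name).bytes = f.bytes
    }
}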
Results
I'm super leery about doing it this way, as it seems very hacky and doesn't seem like the most sustainable solution. Is there another way that I should consider?
UPDATE #1: I've adjusted my question to allow answers that have custom tasks, just not pre-registered. Pre-registered means the custom task is already in the build.gradle file. I'll also provide my answer shortly after this update.

I'm uncertain whether Gradle has the ability to register external, custom tasks, but this is how I'm making do. I've created a custom task in a file called copyAllDependencies.gradle, appending the contents of that file (after replacing all newlines and instances of two or more spaces with a single space) to build.gradle when the pipeline runs, and then running gradlew copyAllDependencies. I then pass gradle-dependencies/* as the scan target to Nexus IQ.
task copyAllDependencies(type: Copy) {
    def allConfigurations = [];
    configurations.each {
        if (it.canBeResolved) {
            allConfigurations += configurations."${it.name}"
        }
    };
    from allConfigurations
    into "gradle-dependencies"
}
I can't help but feel that this isn't the most elegant solution, but it suits my needs for now.
UPDATE #1: Ultimately, I decided to go with requiring development teams to specify this custom task in their build.gradle file. There were too many nuances with echoing script contents into another file (hence the need to include ; when defining allConfigurations and iterating over all configurations). However, I am still open to answers that address the original question.

Related

How does a Gradle task explicitly mark itself as having altered its output, or as up to date, for tasks that depend on it?

I am creating a rather custom task that processes a number of input files and outputs a different number of output files.
I want to check the dates of the input files against the existing output files, and possibly also look at the content of the input files, to determine whether the task is up to date or needs to run to become up to date. What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor have the right state to either force dependent tasks to build or not, as appropriate?
Also, is there any documentation on standard library utilities for things like checking file dates and getting lists of files, the kind of thing that is easy in Ruby's Rake?
How do I specify the inputs and outputs of the task, especially as the outputs will not be known until the source is parsed and the output directory is scanned for what already exists?
A sample that does this in a larger project with tasks that depend on it would be really nice :)
What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor have the right state to either force dependent tasks to build or not, as appropriate?
Ideally this should be done as a custom task type. None of this logic should be in any of the Gradle files at all. Either have the logic in a dedicated plugin project that gets published somewhere which you can then reference in the project, or have the logic in buildSrc.
What you are trying to develop is what is known as an incremental task: https://docs.gradle.org/current/userguide/custom_tasks.html#incremental_tasks
These are used heavily throughout Gradle itself and are what make Gradle's incremental builds possible: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:up_to_date_checks
How do I specify the inputs and outputs of the task, especially as the outputs will not be known until the source is parsed and the output directory is scanned for what already exists?
Once you have your tasks defined and whatever else you need, in your main Gradle files you would configure them as you would any other plugin or task.
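As a rough sketch of what that can look like (class, task, and path names here are made up for illustration, not taken from a real plugin), a custom task type with annotated inputs and outputs is all Gradle needs for its up-to-date checking and for re-running dependents when the outputs change:

// buildSrc/src/main/groovy/GenerateReports.groovy
import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.file.DirectoryProperty
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.OutputDirectory
import org.gradle.api.tasks.TaskAction

abstract class GenerateReports extends DefaultTask {
    // Declared inputs: Gradle fingerprints these to decide whether the task is up to date.
    @InputFiles
    abstract ConfigurableFileCollection getSourceFiles()

    // Declared output: tasks consuming this directory are re-run when its contents change.
    @OutputDirectory
    abstract DirectoryProperty getOutputDir()

    @TaskAction
    void generate() {
        // Parse the inputs and write however many outputs are needed into outputDir.
        sourceFiles.each { src ->
            new File(outputDir.get().asFile, src.name + '.out').text = src.text.toUpperCase()
        }
    }
}

// build.gradle -- configure it like any other task
tasks.register('generateReports', GenerateReports) {
    sourceFiles.from(fileTree('src/reports'))
    outputDir = layout.buildDirectory.dir('generated-reports')
}

Because the inputs and outputs are declared, Gradle decides for itself whether the task and anything wired to its output directory need to run again.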
The two links above should be enough to help get you started.
As for a small example, I developed a Gradle plugin that generates files based on some input that is not known until it's configured. The 'custom' task type just extends the provided JavaExec. The custom task is Wsdl2java. Then, based on user configuration, tasks get registered here using the input file from the user. Since I reused built-in task types, I know for sure that no extra work will be done and can rely on Gradle doing the heavy lifting. There's also a test to ensure that the configuration cache works as expected: ConfigurationCacheFunctionalTests.
As I mentioned earlier, the links above should be enough to get you started.

How can I tell when a gradle plugin property's evaluation will be deferred?

I'm using the docker compose plugin from avast. Below is the relevant stanza. How can I tell if mandatoryDockerWebTag() will be called during the configuration phase? Is the only way to inspect the plugin code to figure out when the closures will be called?
Many times I have information that I only want to provide if a task is in the task graph, but that information may be expensive to get, unavailable, or need to validate a project parameter when it's fetched. For instance, I don't want someone bringing up the preprod docker image instance of our stack with the "latest" tag, so mandatoryDockerWebTag() throws an exception if it's "latest"; otherwise it returns the current tag.
dockerCompose {
    preprod {
        useComposeFiles = ['docker-compose.yml']
        environment.putAll([
            WEB_DOCKER_IMAGE_VERSION: mandatoryDockerWebTag()
        ])
        tcpPortsToIgnoreWhenWaiting = [33333]
    }
}
How can I tell if mandatoryDockerWebTag() will be called during the configuration phase?
I do not believe there is a way to explicitly tell how or when a task or configuration is called in Gradle without doing one of the following:
Examine the source of the plugin you are using.
Examine the build scan report.
For instance I don't want someone bringing up the preprod docker image instance of our stack
Unfortunately, you do not have control over what a plugin author does to your Gradle configuration. They have free/complete access to your project and can configure/alter at will as far as I know.
Good/effective plugin authors (IMO) utilize configuration avoidance. It applies not only to tasks, but to configurations as well.
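For illustration, a minimal sketch of keeping such a call out of the configuration phase. This is generic Gradle, not the avast plugin's documented API, and the 'preprodComposeUp' task name is an assumption based on the 'preprod' block; check the task names your plugin version actually registers.

// 1) Wrap the expensive call in a Provider; the closure only runs when the value is queried
//    (whether the plugin's 'environment' accepts a Provider depends on the plugin itself).
def webTag = providers.provider {
    logger.lifecycle('mandatoryDockerWebTag() evaluated now')   // makes the timing visible
    mandatoryDockerWebTag()
}

// 2) Or wait for the task graph and only compute/validate when the compose task will actually run.
gradle.taskGraph.whenReady { graph ->
    if (graph.hasTask(':preprodComposeUp')) {
        def tag = mandatoryDockerWebTag()   // throws here if someone tries 'latest'
        logger.lifecycle("preprod will use WEB_DOCKER_IMAGE_VERSION=${tag}")
    }
}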

Idiomatic approach to a Go plugin-based system

I have a Go project I would like to open source, but certain elements are not suitable for OSS, e.g. company-specific logic.
I have conceived of the following approach:
interfaces are defined in the core repository.
Plugins can then be standalone repositories whose types implement the interfaces defined in core. This allows the plugins to be housed in completely separate modules and therefore have their own CI jobs etc.
Plugins are compiled into the final binary via symlinks.
This would result in a directory structure something like the following:
|- $GOPATH
   |- src
      |- github.com
         |- jabclab
         |  |- core-system
         |     |- plugins   <-----|
         |- xxx                   |
         |  |- plugin-a   ------->| ln -s
         |- yyy                   |
            |- plugin-b   ------->|
With an example workflow of:
$ go get git@github.com:jabclab/core-system.git
$ go get git@github.com:xxx/plugin-a.git
$ go get git@github.com:yyy/plugin-b.git
$ cd $GOPATH/src/github.com
$ ln -s ./xxx/plugin-a/*.go ./jabclab/core-system/plugins
$ ln -s ./yyy/plugin-b/*.go ./jabclab/core-system/plugins
$ cd jabclab/core-system
$ go build
The one issue I'm not sure about is how to make the types defined in plugins available at runtime in core. I'd rather not use reflect but can't think of a better way at the moment. If I was doing the code in one repo I would use something like:
package plugins

type Plugin interface {
	Exec(chan<- string) error
}

// Initialized here so the per-plugin init() functions below can register into it;
// a bare `var Registry map[string]Plugin` would be nil and panic on assignment.
var Registry = make(map[string]Plugin)

// plugin_a.go
func init() { Registry["plugin_a"] = PluginA{} }

// plugin_b.go
func init() { Registry["plugin_b"] = PluginB{} }
In addition to the above question would this overall approach be considered idiomatic?
This is one of my favorite issues in Go. I have an open source project that has to deal with this as well (https://github.com/cpg1111/maestrod); it has pluggable DB and runtime (Docker, k8s, Mesos, etc.) clients. Prior to the plugin package that is in the master branch of Go (so it should be coming to a stable release soon), I just compiled all of the plugins into the binary and let configuration decide which to use.
As of the plugin package, https://tip.golang.org/pkg/plugin/, you can use dynamic linking for plugins, similar to C's dlopen() in its loading, and the behavior of Go's plugin package is pretty well outlined in the documentation.
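For illustration, a minimal sketch of the plugin-package approach. The .so path and the exported Exec symbol are assumptions chosen to mirror the Plugin interface in the question, not anything mandated by the package, and this only works on platforms the plugin package supports.

package main

import (
	"log"
	"plugin"
)

func main() {
	// Each plugin is built separately, e.g.: go build -buildmode=plugin -o plugin_a.so
	p, err := plugin.Open("plugin_a.so")
	if err != nil {
		log.Fatal(err)
	}

	// Look up an exported symbol; here we assume each plugin exports
	// func Exec(chan<- string) error, mirroring the interface above.
	sym, err := p.Lookup("Exec")
	if err != nil {
		log.Fatal(err)
	}

	exec, ok := sym.(func(chan<- string) error)
	if !ok {
		log.Fatalf("plugin_a.so: Exec has an unexpected signature")
	}

	out := make(chan string, 1)
	if err := exec(out); err != nil {
		log.Fatal(err)
	}
	log.Println(<-out)
}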
Additionally I recommend taking a look at how Hashicorp addresses this by doing RPC over a local unix socket. https://github.com/hashicorp/go-plugin
The added benefit of running a plugin as a separate process like Hashicorp's model does, is that you get great stability in the event that the plugin fails but the main process is able to handle that failure.
I should also mention that Docker does its plugins in Go similarly, except Docker uses HTTP instead of RPC. Additionally, a Docker engineer has posted in the past about embedding a JavaScript interpreter for dynamic logic: http://crosbymichael.com/category/go.html.
The issue I wanted to point out with the sql package's pattern, which was mentioned in the comments, is that it is not really a plugin architecture: you're still limited to whatever is in your imports. You could have multiple main.go files, but that's not a plugin; the point of a plugin is that the same program can run either one piece of code or another. What you get with things like the sql package is flexibility, where a separate package determines which DB driver to use. Nonetheless, you end up modifying code to change which driver you are using.
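To make that concrete, a small sketch of the sql-package pattern (lib/pq is just an example driver, and the connection string is made up): the driver registers itself from an init(), but it still has to appear in your imports at compile time, which is why swapping drivers means editing code.

package main

import (
	"database/sql"

	_ "github.com/lib/pq" // blank import: compiled in, its init() registers the "postgres" driver
)

func open() (*sql.DB, error) {
	// The driver is selected by name at runtime, but only among drivers compiled in above.
	return sql.Open("postgres", "postgres://user:pass@localhost/db?sslmode=disable")
}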
I want to add that with all of these plugin patterns, aside from compiling into the same binary and using configuration to choose, each plugin can have its own build, test, and deployment (i.e. its own CI/CD), though it does not have to.

Including a log4j.properties file in my jar, but a different properties file at execution time (optionally)

I want to include a log4j.properties file in my Maven build, but be able to use a different properties file at execution time (using cron on Unix).
Any ideas?
You want to be able to change properties per environment.
There are a number of approaches to address this issue.
Create a directory in each environment that contains the environment-specific files (log4j.properties in your example). Add these directories to your classpath in each environment.
Use Maven's filtering ability together with its profile ability to populate log4j.properties with the correct values at build time.
Use a build server (Jenkins, for example), which essentially does the same as option 2.
Each of these approaches has its own drawbacks. I am currently using a somewhat weird combination of 2 & 3 because of Jenkins limitations.

Referencing a file from another Hudson job

What I have is two jobs, A and B, and I'd like job B to use a file from A's last stable build.
Seems like the Hudson UI is able to display all of the information, so I am hoping that there is some way, in Job B, to access that information.
There is probably a solution to copy the file, post build, to a shared location and use it from there, but I don't want to have to worry about Job A starting to build and attempting to whack the file while Job B has it in use.
Ah, but I guess I really do need to copy Job A's file somewhere, and probably put it in a directory named with the build number. Okay, so the new question is: how do I get Job A's last stable build number from Job B?
Notes:
Windows environment
Use the 'archive the artifacts' feature to archive the file you want in job A. Then in job B, pull down the file via the permalink to the last successful build.
Something like:
http://localhost:8080/job/A/lastSuccessfulBuild/artifact/myartifact.txt
but replace 'A' with your job name, and 'myartifact.txt' with the path to your artifact.
I would like to mention the Parameterized Trigger Plugin:
http://wiki.hudson-ci.org/display/HUDSON/Parameterized+Trigger+Plugin
Ideally, I believe the best solution would be to have this plugin trigger build B with the file from build A. However, as the current status says: "Future: Support file parameters (for passing files from build to build)".
Until that support is added, what I do is copy the artifact from job A to a share, then use the Parameterized Trigger Plugin to trigger job B and give it the name (a unique name so there are no conflicts) of the file on the share. I put the file name in a "properties file" (see plugin documentation) in order to trigger job B. Job B can then grab the file and run.
