In the gradle doc: https://docs.gradle.org/current/userguide/custom_plugins.html#sec:custom_plugins_standalone_project
The code block is:
gradlePlugin {
    plugins {
        create("simplePlugin") {
            id = "org.example.greeting"
            implementationClass = "org.example.GreetingPlugin"
        }
    }
}
I noticed it calls the create method, so I looked at the source code. It says:
Creates a new item with the given name, adding it to this container,
then configuring it with the given action.
What does that mean? Is the name actually used anywhere, or can it be any name at all?
gradlePlugin.plugins is a NamedDomainObjectContainer, which Gradle uses to hold multiple objects of the same type, each with a unique name.
The documentation on plugin development goes into more detail on the usage of NamedDomainObjectContainers.
Sometimes you might want to expose a way for users to define multiple, named data objects of the same type.
[...]
It’s very common for a plugin to post-process the captured values within the plugin implementation e.g. to configure tasks.
Since the elements of a NamedDomainObjectContainer can be of any type, there's no single specific usage for one. Generally they are used to configure the Gradle project, for example creating specific tasks or configuring source code locations.
Related
I am completely new to Groovy, trying to learn it, but stymied because I can't parse the syntax well enough to even know where to look in the documentation. I am using Groovy in Gradle. There are many places where examples are given, but no explanation on what it means, so I just need a few pointers.
publishing {
    publications {
        mavenJava(MavenPublication) {
            groupId = 'com.xxx.yyy'
            artifactId = 'zzz'
            from components.java
        }
    }
    repositories {
        mavenLocal()
    }
}
The main build code is referring to things on the project class. On that class, I can find a property called publishing, and it is a class PublishingExtension. It appears then that the curly brace starts a closure with code in it. The documentation says this syntax:
publishing { }
configures the PublishingExtension. What I want to understand is what it means (i.e. what is actually happening) when I specify what looks like a property and follow it with a Closure. In the Groovy documentation I could not find any syntax like this, nor an explanation. I'm sure it is something simple, but I don't know enough to even know what to look for.
If I visit the Project Class API Docs there is no method there named publishing. Nor is there a property defined by the method getPublishing. Apparently this magic capability is enabled by the publishing plugin. If I visit the Publishing Plugin API Doc there is no description of this publishing property either or how it modifies the base project.
Similarly, drilling down a little more, that closure starts with the symbol publications, and in the documentation for the PublishingExtension I find a property of type PublicationContainer which is read only. I also find a method named publications which does not accept a closure, but instead an argument of type Action<? super PublicationContainer>. Again, I don't know how the contents of the curly braces are converted to an Action instance. Action is an interface whose only method is execute, yet it is completely unclear how this Action gets constructed.
The block that defines the Action starts with the symbol mavenJava, which looks like a method, but actually that first symbol is declaring the name of a new object of type MavenPublication called mavenJava. Either this is magically constructed (I don't know the rules) or a method is being called, but which method? What is it about PublicationContainer that allows it to know that an arbitrary mavenJava "command" is supposed to create an object instance? And the curly braces that follow it: is that a closure, a configuration, or something even more exotic?
So as you can see, I am missing a little info on how Groovy works. So far I can't find documentation that explains this syntax, however it might be there if I knew what to look for. Can anyone explain what the syntax is really doing, or refer me to a site that can explain it?
publishing is called to configure the PublishingExtension.
In PublishingExtension there is a publications method accepting an Action, which is usually coerced from a Closure: Groovy automatically converts a Closure to any interface with a single abstract method (SAM coercion).
mavenJava is not a real method; the Gradle DSL builder intercepts the unknown name and forwards it to the create method of PublicationContainer:
publishing.publications.create('mavenJava', MavenPublication) {
    // Configure the Maven publication here
}
groupId and artifactId are properties of MavenPublication and are being set here.
from is the from(component) method of MavenPublication, written using Groovy's simplified method-call syntax without parentheses.
In general Gradle uses a root DSL builder which calls the nested DSL builders provided by plugins. Hence it is sometimes difficult (also for the IDE) to find the proper references for all parts of the build.gradle file.
I am working with a Go-based piece of software that allows the use of several plugins.
A plugin can't be used twice (by choice) => a plugin is either enabled or disabled
Plugin names are unique
All plugins are configured with a plugin-specific configuration defined as JSON-serializable struct
The use of plugins is controlled using a single configuration. Consider the following simplified example of the configuration struct:
type PluginConfig struct {
    PluginA *PluginA `json:"pluginA,omitempty"`
    PluginB *PluginB `json:"pluginB,omitempty"`
    PluginC *PluginC `json:"pluginC,omitempty"`
    PluginD *PluginD `json:"pluginD,omitempty"`
}
Somewhere in the code, each of the fields is checked, and the actual plugin is added if a configuration was provided:
if config.PluginA != nil {
    AddPlugin(plugina.New(config.PluginA))
}
if config.PluginB != nil {
    AddPlugin(pluginb.New(config.PluginB))
}
// ...
I am trying to rework the software so external plugins are supported as well. The software is required to still function as before, so the format and way of configuration cannot be changed. Additionally, I am required to use the default encoding/json package for unmarshaling the configuration.
If I knew all plugins at compile time, I could use go generate to produce the configuration struct before compiling, along with the corresponding if config.SomePlugin != nil { } statements. While this might even perform well because no dynamic lookup is involved, I would still be limited to knowing all plugins in advance. If that were the case, would you agree this approach is a valid way to go?
What could I do if I could only get a list of plugins at runtime? How could I process the configuration file then, so not only the plugin names are dynamic, but I would also not know of the specific configuration before?
You have two options:
Unmarshal to a generic type such as map[string]interface{}
Unmarshal to json.RawMessage
In either case, you can then pass that data to the plugin, once it's loaded, to do full unmarshaling/conversion.
I'm working on a Gradle plugin that has a task that generates compilable Java source code. As input for the code generator it takes a "static" directory (a property value) which should contain files in a special language (with a specific file extension). In addition, if a particular configuration property is set to true, it will also search for files with the same extension in the entire classpath (or better, in a specific configuration).
I want to make sure that the task runs if any of its input dependencies are new.
It's easy enough to add @InputDirectory to the property definition for the "static" location, but I'm unsure how to handle the "dynamic" input dependency.
I have a property that defines the name of the configuration that would be used to search for additional files with that extension. We'll call that "searchConfiguration". This property is optional; if it's not set, "compile" is used. I also have the property that specifies whether we will search for additional files in the first place. We'll call that "inspectDependencies".
I think I could write an @Input-annotated method that returns essentially the "configurations.searchConfiguration.files" list. We'll call that "getDependencies". I think that is the basic idea. However, I don't understand what to do about "inspectDependencies". I could easily make "getDependencies" return an empty list if "inspectDependencies" is false, but is that truly the correct thing to do? It seems likely that if someone changed "inspectDependencies" from true to false after a build, the next build should run the task again.
Well, this is tentative, but I asked about this on the Gradle Forum and Mark Viera convinced me that it really should be this simple, although it requires @InputFiles instead of @Input. My particular method looks like this:
@InputFiles
def getOptionalYangClasspath() {
    return inspectDependencies ? project.configurations[yangFilesConfiguration] : Collections.emptyList()
}
I have a puppet module which deploys a JAR file and writes some properties files (by using ERB templates).
Recently we added a "mode" feature to the application, meaning the application can run in different modes depending on the values entered in the manifest.
My hierarchy is as follows:
setup
    config
        files
    install
Meaning setup calls the config class and the install class.
The install class deploys the relevant RPM file according to the mode(s)
The config class checks the modes and, for each mode, calls the files class with the specific mode and directory parameters; the reason for this structure is that the value of the properties depends on the actual mode.
The technical problem is that if I have multiple modes in the manifest (which is my goal) I need to call the files class twice:
if grep($modesArray, $online_str) == [$online_str] {
    class { 'topology::files' :
        dir  => $code_dir,
        mode => $online_str
    }
}

$offline_str = "offline"
$offline_suffix = "_${offline_str}"

if grep($modesArray, $offline_str) == [$offline_str] {
    $dir = "${code_dir}${offline_suffix}"
    class { 'topology::files' :
        dir  => $dir,
        mode => $offline_str
    }
}
However, in Puppet you cannot declare the same parameterized class twice.
I am trying to figure out how I can call a class twice, or perhaps find some other construct whose parameters I can access from my ERB files, but I can't work it out.
The documentation says it's possible but doesn't say how (I checked here https://docs.puppetlabs.com/puppet/latest/reference/lang_classes.html#declaring-classes).
So to summarize is there a way to either:
Call the same class more than once with different parameters
(Some other way to) Create multiple files based on the same ERB file (with different parameters each time)
You can simply turn your class into a define:
define topology::files($dir, $mode) {
    file { "${dir}/filename":
        content => template("topology/${mode}.erb"),
    }
}
That will apply a different template for each mode.
And then, instantiate it as many times as you want:
if grep($modesArray, $online_str) == [$online_str] {
    topology::files { "topology_files_${online_str}":
        dir  => $code_dir,
        mode => $online_str
    }
}

$offline_str = "offline"
$offline_suffix = "_${offline_str}"

if grep($modesArray, $offline_str) == [$offline_str] {
    $dir = "${code_dir}${offline_suffix}"
    topology::files { "topology_files_${offline_str}":
        dir  => $dir,
        mode => $offline_str
    }
}
Your interpretation of the documentation is off the mark.
Classes in Puppet should be considered singletons. There is exactly one instance of each class. It is part of a node's manifest or it is not. The manifest can declare the class as often as it wants using the include keyword.
Beware of declaration using the resource-like syntax:
class { 'classname': }
This can appear at most once in a manifest. Parameter values are now permanently bound to your class. Your node has chosen what specific shape the class should take for it.
Without seeing the code for your class, your question makes me believe that you are trying to use Puppet as a scripting engine. It is not. Puppet only allows you to model a target state. There are some powerful mechanics to implement complex workflows to achieve that state, but you cannot use it to run arbitrary transformations in an arbitrary order.
If you add the class code, we can try and give some advice on how to restructure it to make Puppet do what you need. I'm afraid that may not be possible, though. If it is indeed necessary to sync one or more resources to different states at different times (scripting engine? ;-) during the transaction, you should instead implement that whole workflow as an actual script and have Puppet run that through an exec resource whenever appropriate.
I am currently assessing Gradle as an alternative to Maven for a homegrown, convention-based Ant+Ivy build. The Ant+Ivy build is designed to provide a standard environment for a wide range of J2SE apps, and it supports the following conventional layout for app config:
conf/
    fooPROD.properties
    fooUAT.properties
    bar.properties
    UK/
        bazPROD.properties
        bazUAT.properties
If I choose to do a build for UAT then I get
conf/
    foo.properties
    bar.properties
    UK/
        baz.properties
i.e. it copies the files that are suffixed with the target environment (UAT in this case) as well as anything that has no such pattern. There are a variety of other things that happen alongside this to make it rather more complicated but this is the core of my current problem.
I've been playing around with various gradle features while transcribing this as opposed to just getting it working. My current approach is to allow the targetenv to be provided on the fly like so
tasks.addRule("Pattern: make<ID>") { String taskName ->
    task(taskName).dependsOn tasks['make']
}
The make task deals with the various copying/filtering/transforming of conf files from src into the build area. To do this, it has to work out what the targetEnv is, which I am currently doing after the DAG has been created:
gradle.taskGraph.whenReady { taskGraph ->
    def makeTasks = taskGraph.getAllTasks().findAll {
        it.name.startsWith('make') && it.name != 'make'
    }
    if (makeTasks.size() == 1) {
        project.targetEnv = makeTasks[0].name - 'make'
    } else {
        // TODO work out how to support building n configs at once
    }
}
(it feels like there must be a quicker/more idiomatic way to do this but I digress)
I can then run it like gradle makeUAT
My problem is that setting targetEnv in this way means the targetEnv is not set at configuration time. Therefore if I have a copy task like
task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
it doesn't do what I want because $project.targetEnv hasn't been set yet. Naively, I changed this to
task prepareEnvSpecificDist(type: Copy) << {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
once I understood what was going on. This then fails like
Skipping task ':prepareEnvSpecificDist' as it has no source files.
because I haven't configured the copy task to tell it what the inputs and outputs are.
The Q is how does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
NB: I realise I could pass a system property in and do something like gradle -Dtarget.env=UAT make but that's relatively verbose and I want to work out what is going on anyway.
Cheers
Matt
Building for a particular target environment is a cross-cutting concern and does not really fit the nature of a task. Using a system property (-D) or project property (-P) is a natural way of dealing with this.
If you absolutely want to save a few characters, you can query and manipulate gradle.startParameter.taskNames to implement an environment switch that looks like a task name. However, this is a non-standard solution.
How does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
This is a special case of the more general problem that a configuration value gets written after it has been read. Typical solutions are:
Avoid it if you can.
Some task/model properties accept a closure that will then get evaluated lazily. This needs to be looked up in the corresponding task/plugin documentation.
Perform the configuration in a global hook like gradle.projectsEvaluated or gradle.taskGraph.whenReady (depending on the exact needs).
Perform the configuration in a task action (at execution time). As you have already experienced, this does not work in all cases, and is generally discouraged (but sometimes tolerable).
Plugins use convention mapping to lazily bind model values to task properties. This is an advanced technique that should not be used in build scripts, but is necessary for writing plugins that extend the build language.
As a side note, keep in mind that Gradle allows you to introduce your own abstractions. For example, you could add a method that lets you write:
environment("uat") {
    // special configuration for UAT environment
}