I'm working on a Gradle plugin that has a task which generates compilable Java source code. The input for the code generator is a "static" directory (a property value) that should contain files written in a special language (using a specific file extension). In addition, if a particular configuration property is set to true, it will also search for files with the same extension in the entire classpath (or better, in a specific configuration).
I want to make sure that the task reruns whenever any of its inputs change.
It's easy enough to add @InputDirectory to the property definition for the "static" location, but I'm unsure how to handle the "dynamic" input dependency.
I have a property that defines the name of the configuration that will be searched for additional files with that extension; we'll call it "searchConfiguration". This property is optional: if it's not set, the "compile" configuration is used. I also have a property that specifies whether we should search for additional files in the first place; we'll call it "inspectDependencies".
I think I could write an @Input-annotated method that returns essentially the "configurations.searchConfiguration.files" list. We'll call that "getDependencies". I think that is the basic idea. However, I don't understand what to do about "inspectDependencies". I could easily make "getDependencies" return an empty list if "inspectDependencies" is false, but is that really the correct thing to do? It seems likely that if someone changed "inspectDependencies" from true to false after a build, the next build should run the task again.
Well, this is tentative, but I asked about this on the Gradle Forum and Mark Vieira convinced me that it really should be this simple, although it requires @InputFiles instead of @Input. My particular method looks like this:
@InputFiles
def getOptionalYangClasspath() {
    return inspectDependencies ? project.configurations[yangFilesConfiguration] : Collections.emptyList()
}
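For context, here is a rough sketch of how those pieces might sit together in a task class; the class and property names (GenerateYangSources, staticYangDir, outputDir) are illustrative rather than the actual plugin's:
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.*

class GenerateYangSources extends DefaultTask {

    // the "static" directory of source files in the special language
    @InputDirectory
    File staticYangDir

    // flipping this flag changes what getOptionalYangClasspath() returns, so the
    // up-to-date check usually notices it anyway; marking it @Input as well makes
    // the flag an input in its own right
    @Input
    boolean inspectDependencies = false

    // name of the configuration to search; defaults to 'compile'
    String yangFilesConfiguration = 'compile'

    @InputFiles
    def getOptionalYangClasspath() {
        inspectDependencies ? project.configurations[yangFilesConfiguration] : Collections.emptyList()
    }

    @OutputDirectory
    File outputDir

    @TaskAction
    void generate() {
        // code generation would go here
    }
}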
Related
In the gradle doc: https://docs.gradle.org/current/userguide/custom_plugins.html#sec:custom_plugins_standalone_project
The code block is:
gradlePlugin {
    plugins {
        create("simplePlugin") {
            id = "org.example.greeting"
            implementationClass = "org.example.GreetingPlugin"
        }
    }
}
I noticed it calls the create method, and I looked at the source code. Its documentation says:
Creates a new item with the given name, adding it to this container,
then configuring it with the given action.
What does that name mean? Is it actually used anywhere, or can it be any name because it doesn't really matter?
gradlePlugin.plugins is a NamedDomainObjectContainer, a type Gradle uses to hold multiple objects, each with a name.
The documentation on plugin development goes into more detail on the usage of NamedDomainObjectContainers.
Sometimes you might want to expose a way for users to define multiple, named data objects of the same type.
[...]
It’s very common for a plugin to post-process the captured values within the plugin implementation e.g. to configure tasks.
Since the elements of a NamedDomainObjectContainer can be of any type, there's no single specific usage. Generally they are used to configure the Gradle project, for example to create specific tasks or to configure source code locations.
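As a rough illustration (Kotlin DSL, mirroring the snippet above), a build script or plugin can create its own container and then post-process the named elements; ServerConfig and "servers" are invented names:
// the element type needs a public constructor taking the element's name
open class ServerConfig(val name: String) {
    var url: String = ""
}

// container() builds a NamedDomainObjectContainer<ServerConfig>; exposing it as
// an extension lets users declare named elements in the DSL, like gradlePlugin.plugins
val servers = project.container(ServerConfig::class.java)
extensions.add("servers", servers)

servers.create("staging") {
    url = "https://staging.example.com"
}

// post-process the captured values, e.g. register one task per element
servers.all {
    tasks.register("ping-$name") {
        doLast { println("would ping $url") }
    }
}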
Context
I am developing a custom JMeter plugin which generates test data dynamically from a tree-like structure.
The editor for the tree generates GUI input fields as needed, so I have no fixed set of configuration properties to set on the respective TestElement. Instead, I serialize the tree as a whole in the GUI class, set the result as one property, and deserialize it in the config element, where it is processed further during test execution.
Problem
This works just fine, except that JMeter variable/function expressions like ${foo} or ${__bar(..)} in the dynamic input fields are not evaluated. As far as I understand the JMeter source code, the evaluation is somehow triggered when the respective property setters in org.apache.jmeter.testelement.TestElement are used, which is not possible for my plugin.
Unfortunately, I was not able to find a proper implementation which can be used in my config element to evaluate such expressions explicitly after deserialization.
Question
I need a pointer to JMeter source code or documentation for evaluating variable/function expressions explicitly.
After I managed to set up the JMeter project properly in my IDE, I found org.apache.jmeter.engine.util.CompoundVariable, which can be used like this:
CompoundVariable compoundVariable = new CompoundVariable();
compoundVariable.setParameters("${foo}");
// execute() returns the value of the expression in the current context
String value = compoundVariable.execute();
I have an interface which has certain properties defined.
For example:
interface Student {
    name: string;
    dob: string;
    age: number;
    city: string;
}
I read a JSON file which contains a record in this format and assign it to a variable.
let s1:Student = require('./student.json');
Now, I want to verify if s1 contains all the properties mentioned in interface Student. At runtime this is not validated. Is there any way I can do this?
There is the option of type guards, but that won't serve the purpose here: I do not know which fields will come from the JSON file. I also cannot add a discriminator (no data manipulation).
Without explicitly writing code to do so, this isn't possible in TypeScript. Why? Because once it's gone through the compiler, your code looks like this:
let s1 = require('./student.json');
Everything relating to types gets erased once compilation completes, leaving you with just the pure JavaScript. TypeScript will never emit code to verify that the type checks will actually hold true at runtime - this is explicitly outside of the language's design goals.
So, unfortunately, if you want this functionality you're going to have to write if (s1.name), if (s1.dob), etc.
(That said, it is worth noting that there are third-party projects which aim to add runtime type checking to TypeScript, but they're still experimental and it's doubtful they'll ever become part of the actual TypeScript language.)
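For instance, if the expected shape is known (as with the Student interface here), those manual checks can be collected into a hand-written type guard; this is only an illustrative sketch:
// the Student interface from the question
interface Student {
    name: string;
    dob: string;
    age: number;
    city: string;
}

// hand-written runtime check; TypeScript narrows the type when it returns true
function isStudent(value: any): value is Student {
    return value !== null
        && typeof value === 'object'
        && typeof value.name === 'string'
        && typeof value.dob === 'string'
        && typeof value.age === 'number'
        && typeof value.city === 'string';
}

const data = require('./student.json');
if (!isStudent(data)) {
    throw new Error('student.json does not match the Student interface');
}
const s1: Student = data; // validated at runtime before use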
Are there any strong reasons to choose one over the other when declaring the mappings for URL resources?
@RequestMapping(Mappings.USER)
vs
@RequestMapping("${mappings.user}")
I understand that property files can be modified after deployment, and that might be a reason to keep the mappings in properties if you want them to be easy to change, right? But I also think being able to change them that easily could be undesirable. So, for those with experience, which do you prefer, and why? I think a constants file might be easier to refactor: if I wanted to rename a resource I would only have to refactor inside the constants class, whereas with properties I would have to change the properties file and everywhere that uses the mapping (I'm using Eclipse and as far as I know it doesn't support property-name refactoring like that). Or is there a third option of neither, declaring them all as literals inside the controllers?
It all depends on your use case. If you need to change URIs without recompiling, property files are the way to go. Otherwise, constants provide type safety and ease of unit testing that SpEL doesn't. If you're not going to change or reuse them (for example, the same URI for GET and POST is very common), I don't see any need for constants at all.
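To make the comparison concrete, here is a small sketch; the class, method, and property names (Mappings, UserController, mappings.user) are invented for illustration:
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// constants variant: a plain compile-time constant that IDE refactoring can track
final class Mappings {
    static final String USER = "/user";

    private Mappings() {
    }
}

@RestController
public class UserController {

    // resolved at compile time; renaming the resource means editing one constant
    @RequestMapping(Mappings.USER)
    public String userFromConstant() {
        return "mapped via constant";
    }

    // resolved from a property source at startup, e.g. mappings.user=/user-v2 in
    // application.properties; changeable without recompiling (it must resolve to a
    // different path than the constant above to avoid an ambiguous mapping)
    @RequestMapping("${mappings.user}")
    public String userFromProperty() {
        return "mapped via property";
    }
}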
I am currently assessing Gradle as an alternative to Maven for a homegrown, convention-based Ant+Ivy build. The Ant+Ivy build is designed to provide a standard environment for a wide range of J2SE apps, and it supports the following conventional layout for app config:
conf/
    fooPROD.properties
    fooUAT.properties
    bar.properties
    UK/
        bazPROD.properties
        bazUAT.properties
If I choose to do a build for UAT then I get
conf/
    foo.properties
    bar.properties
    UK/
        baz.properties
i.e. it copies the files that are suffixed with the target environment (UAT in this case), as well as anything that has no such pattern. There are a variety of other things that happen alongside this to make it rather more complicated, but this is the core of my current problem.
I've been playing around with various Gradle features while transcribing this, as opposed to just getting it working. My current approach is to allow the target environment to be provided on the fly, like so:
tasks.addRule("Pattern: make<ID>") { String taskName ->
    task(taskName).dependsOn tasks['make']
}
The make task deals with the various copying/filtering/transforming of conf files from src into the build area. To do this, it has to work out what the target environment is, which I am currently doing after the DAG has been created:
gradle.taskGraph.whenReady { taskGraph ->
    def makeTasks = taskGraph.getAllTasks().findAll {
        it.name.startsWith('make') && it.name != 'make'
    }
    if (makeTasks.size() == 1) {
        project.targetEnv = makeTasks[0].name - 'make'
    } else {
        // TODO work out how to support building n configs at once
    }
}
(it feels like there must be a quicker/more idiomatic way to do this but I digress)
I can then run it like gradle makeUAT
My problem is that setting targetEnv in this way means the targetEnv is not set at configuration time. Therefore if I have a copy task like
task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
it doesn't do what I want because $project.targetEnv hasn't been set yet. Naively, I changed this to
task prepareEnvSpecificDist(type: Copy) << {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
once I understood what was going on. This then fails like
Skipping task ':prepareEnvSpecificDist' as it has no source files.
because I haven't configured the copy task to tell it what the inputs and outputs are.
The question is: how does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
NB: I realise I could pass a system property in and do something like gradle -Dtarget.env=UAT make but that's relatively verbose and I want to work out what is going on anyway.
Cheers
Matt
Building for a particular target environment is a cross-cutting concern and does not really fit the nature of a task. Using a system property (-D) or project property (-P) is a natural way of dealing with this.
If you absolutely want to save a few characters, you can query and manipulate gradle.startParameter.taskNames to implement an environment switch that looks like a task name. However, this is a non-standard solution.
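For illustration only, such a switch could look roughly like this near the top of the build script; the environment names are made up, and whether mutating startParameter mid-build is honoured depends on the Gradle version:
// treat a pseudo task name such as "UAT" as an environment switch:
// strip it from the requested task names and remember it as a project property
def knownEnvironments = ['UAT', 'PROD']
def requested = gradle.startParameter.taskNames
def selected = requested.findAll { it in knownEnvironments }
if (selected) {
    gradle.startParameter.taskNames = requested - selected
    project.ext.targetEnv = selected.first()
}
// invoked as, for example: gradle UAT make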
How does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
This is a special case of the more general problem that a configuration value gets written after it has been read. Typical solutions are:
Avoid it if you can.
Some task/model properties accept a closure that will then get evaluated lazily. This needs to be looked up in the corresponding task/plugin documentation (see the sketch after this list).
Perform the configuration in a global hook like gradle.projectsEvaluated or gradle.taskGraph.whenReady (depending on the exact needs).
Perform the configuration in a task action (at execution time). As you have already experienced, this does not work in all cases, and is generally discouraged (but sometimes tolerable).
Plugins use convention mapping to lazily bind model values to task properties. This is an advanced technique that should not be used in build scripts, but is necessary for writing plugins that extend the build language.
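As a sketch of the lazy-closure option applied to the copy task from the question: include and rename both accept closures, which Gradle evaluates per file only when the copy actually runs, by which point targetEnv has been set by the whenReady hook:
task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    includeEmptyDirs = false
    // keep directories for traversal, and only files carrying the target suffix
    include { element ->
        element.directory || element.name.contains(project.targetEnv)
    }
    // fooUAT.properties -> foo.properties
    rename { fileName ->
        fileName.replace(project.targetEnv, '')
    }
}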
As a side note, keep in mind that Gradle allows you to introduce your own abstractions. For example, you could add a method that lets you write:
environment("uat") {
// special configuration for UAT environment
}
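One possible (purely illustrative) way to wire up such an abstraction in the build script, assuming the environment is then selected with a project property such as gradle -PtargetEnv=uat make; the helper and map names are invented:
// collect per-environment configuration closures
def environmentConfigs = [:]

// a closure stored as an extra property can be called like a method from the script
ext.environment = { String name, Closure config ->
    environmentConfigs[name] = config
}

environment("uat") {
    // special configuration for UAT environment
}

// apply only the configuration of the selected environment, if any
if (project.hasProperty('targetEnv')) {
    environmentConfigs[project.targetEnv]?.call()
}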