Best data structure for ANT like Utility - data-structures

I am trying to make an Ant-like utility wherein I load a configuration.xml (similar to an Ant build.xml). This configuration.xml has different 'target' tags that need to be executed based on the target attributes and properties. Each target has 'dependent' targets, which must be executed prior to executing the calling target.
Which is the best data structure for such processing?
Currently I am using a HashMap together with a Stack.
I am reading the configuration.xml with a SAX parser and loading each target as an object (with all its properties and dependencies) into a HashMap.
The HashMap is then iterated, and dependencies are pushed onto the stack. Once the stack is built, it is popped and each target executed.
Is this the optimal solution, or is there a better data structure?

One approach is to use an XSLT transformation to generate an Ant file that is dynamically executed. The following example illustrates the principle:
iterate int xml file using ant
But perhaps a better approach is to use a dynamic scripting language like Groovy and create a custom DSL for your application.
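Structurally, the targets and their dependencies form a directed acyclic graph, and the usual way to derive an execution order is a topological sort (which also detects dependency cycles for free). A minimal sketch in Java; the class and target names are made up for illustration:

```java
import java.util.*;

public class TargetResolver {
    // target name -> names of targets it depends on (illustrative data)
    private final Map<String, List<String>> deps = new HashMap<>();

    public void addTarget(String name, String... dependsOn) {
        deps.put(name, Arrays.asList(dependsOn));
    }

    // Depth-first topological sort: returns targets so that every
    // dependency appears before the target that needs it.
    public List<String> executionOrder(String goal) {
        List<String> order = new ArrayList<>();
        visit(goal, new HashSet<>(), new HashSet<>(), order);
        return order;
    }

    private void visit(String name, Set<String> done, Set<String> inProgress,
                       List<String> order) {
        if (done.contains(name)) return;
        if (!inProgress.add(name))
            throw new IllegalStateException("Dependency cycle at " + name);
        for (String dep : deps.getOrDefault(name, Collections.emptyList())) {
            visit(dep, done, inProgress, order);
        }
        inProgress.remove(name);
        done.add(name);
        order.add(name); // all dependencies are already in the list
    }

    public static void main(String[] args) {
        TargetResolver r = new TargetResolver();
        r.addTarget("compile");
        r.addTarget("test", "compile");
        r.addTarget("dist", "test", "compile");
        System.out.println(r.executionOrder("dist")); // [compile, test, dist]
    }
}
```

The HashMap-of-targets part of your approach is fine; the depth-first visit simply replaces the explicit stack and gives you cycle detection as a side effect.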

JMeter Custom Plugin Variable Substitution

Context
I am developing a custom JMeter plugin which generates test data dynamically from a tree-like structure.
The editor for the tree generates GUI input fields as needed, and therefore I have no set of defined configuration properties which are set in the respective TestElement. Instead, I serialize the tree as a whole in the GUI class, set the result as one property and deserialize it in the config element where it is processed further during test execution.
Problem
This works just fine, except that JMeter variable/function expressions like ${foo} or ${__bar(..)} in the dynamic input fields are not evaluated. As far as I understand the JMeter source code, the evaluation is triggered somehow if the respective property setters in org.apache.jmeter.testelement.TestElement are used, which is not possible for my plugin.
Unfortunately, I was not able to find a proper implementation which can be used in my config element to evaluate such expressions explicitly after deserialization.
Question
I need a pointer to JMeter source code or documentation for evaluating variable/function expressions explicitly.
After I managed to set up the JMeter project properly in my IDE, I found org.apache.jmeter.engine.util.CompoundVariable, which can be used like this:
CompoundVariable compoundVariable = new CompoundVariable();
compoundVariable.setParameters("${foo}");
// returns the value of the expression in the current context
compoundVariable.execute();

How to set dynamic input dependency on gradle task

I'm working on a Gradle plugin that has a task that generates compilable Java source code. It takes as input for the code generator a "static" directory (property value) which should contain files in a special language (using a specific extension). In addition, if a particular configuration property is set to true, it will also search for files with the same extension in the entire classpath (or better, in a specific configuration).
I want to make sure that the task runs if any of its input dependencies are new.
It's easy enough to add @InputDirectory to the property definition for the "static" location, but I'm unsure how to handle the "dynamic" input dependency.
I have a property that defines the name of the configuration that would be used to search for additional files with that extension. We'll call that "searchConfiguration". This property is optional. If it's not set, it will use "compile". I also have the property that specifies whether we will search for additional files in the first place. We'll call that "inspectDependencies".
I think I could write an @Input-annotated method that returns essentially the "configurations.searchConfiguration.files" list. We'll call that "getDependencies". I think that is the basic idea. However, I don't understand what to do about "inspectDependencies". I could easily make "getDependencies" return an empty list if "inspectDependencies" is false, but is that truly the correct thing to do? It seems likely that if someone changed "inspectDependencies" from "true" to "false" after a build, the next build should run the task again.
Well, this is tentative, but I asked about this on the Gradle Forum and Mark Viera convinced me that it really should be this simple, although it requires @InputFiles instead of @Input. My particular method looks like this:
@InputFiles
def getOptionalYangClasspath() {
    return inspectDependencies ? project.configurations[yangFilesConfiguration] : Collections.emptyList()
}

Yaml properties as a Map in Spring Boot

We have a spring-boot project and are using application.yml files. This works exactly as described in the spring-boot documentation. spring-boot automatically looks in several locations for the files, and obeys any environment overrides we use for the location of those files.
Now we want to also expose those YAML properties as a Map. According to the documentation this can be done with YamlMapFactoryBean. However, YamlMapFactoryBean wants me to specify which YAML files to use via the resources property. I want it to use the same YAML files and processing hierarchy that it used when creating properties, so that I can still take advantage of "magical" features such as placeholder resolution in property values.
I didn't see any documentation on if this was possible.
I was thinking of writing a MapFactoryBean that looked at the environment and simply reversed the "flattening" performed by the YamlProcessor when creating the properties representation of the file.
Any other ideas?
The ConfigFileApplicationContextListener contains the logic for searching for files in various locations. And PropertySourcesLoader loads a file (Resource) into property sources. Neither is really designed for standalone use, but you could easily duplicate them if you want more control. The PropertySourcesLoader delegates to a collection of PropertySourceLoaders so you could add one of the latter that delegates to your YamlMapFactoryBean.
A slightly awkward but workable solution would be to use the existing machinery to collect the YAML on startup. Add a new PropertySourceLoader to your META-INF/spring.factories and let it create new property sources, then post process the Environment to extract the source map(s).
Beware, though: creating a single Map from multiple YAML files, or even a single one with multiple documents (let alone multiple files with multiple documents), isn't as easy as you might think. You have a map-merge problem, and someone is going to have to define the algorithm. The flattening done in YamlPropertiesFactoryBean and the merge in YamlMapFactoryBean are just two choices out of (probably) a larger set of possibilities.
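For the "reverse the flattening" idea from the question: it is mostly a matter of splitting each property key on dots and rebuilding nested maps. A rough sketch (the class name is mine; it ignores list-index keys like servers[0], which the real YAML processing also produces, and assumes no key is both a leaf and a branch):

```java
import java.util.*;

public class PropertyUnflattener {
    // Turns {"a.b": "1", "a.c": "2"} into {a={b=1, c=2}}.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> unflatten(Map<String, String> flat) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : flat.entrySet()) {
            String[] path = e.getKey().split("\\.");
            Map<String, Object> node = root;
            // walk/create intermediate maps for all but the last segment
            for (int i = 0; i < path.length - 1; i++) {
                node = (Map<String, Object>) node.computeIfAbsent(
                        path[i], k -> new LinkedHashMap<String, Object>());
            }
            node.put(path[path.length - 1], e.getValue());
        }
        return root;
    }
}
```

Note that a real implementation still has to define the merge policy the answer above warns about when the same key appears in several sources.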

Copying files based on a pattern that is determined after the configuration phase has completed

I am currently assessing Gradle as an alternative to Maven for a homegrown, convention-based Ant+Ivy build. The Ant+Ivy build is designed to provide a standard environment for a wide range of J2SE apps, and it supports the following conventional layout for application config:
conf/
  fooPROD.properties
  fooUAT.properties
  bar.properties
  UK/
    bazPROD.properties
    bazUAT.properties
If I choose to do a build for UAT then I get
conf/
  foo.properties
  bar.properties
  UK/
    baz.properties
i.e. it copies the files that are suffixed with the target environment (UAT in this case) as well as anything that has no such pattern. There are a variety of other things that happen alongside this to make it rather more complicated but this is the core of my current problem.
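That suffix-stripping rename boils down to a regular-expression substitution; stripped of the Ant/Gradle machinery it can be illustrated in plain Java (class and method names are made up):

```java
import java.util.regex.Pattern;

public class EnvRename {
    // Strips the target-environment suffix from a file name:
    // fooUAT.properties -> foo.properties (for env "UAT").
    // Names without the suffix are returned unchanged.
    public static String stripEnv(String fileName, String env) {
        return fileName.replaceFirst(Pattern.quote(env) + "\\.", ".");
    }
}
```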
I've been playing around with various gradle features while transcribing this as opposed to just getting it working. My current approach is to allow the targetenv to be provided on the fly like so
tasks.addRule("Pattern: make<ID>") { String taskName ->
    task(taskName).dependsOn tasks['make']
}
The make task deals with the various copying/filtering/transforming of conf files from src into the build area. To do this, it has to work out what the targetenv is which I am currently doing after the DAG has been created
gradle.taskGraph.whenReady { taskGraph ->
    def makeTasks = taskGraph.getAllTasks().findAll {
        it.name.startsWith('make') && it.name != 'make'
    }
    if (makeTasks.size() == 1) {
        project.targetEnv = makeTasks[0].name - 'make'
    } else {
        // TODO work out how to support building n configs at once
    }
}
(it feels like there must be a quicker/more idiomatic way to do this but I digress)
I can then run it like gradle makeUAT
My problem is that setting targetEnv in this way means the targetEnv is not set at configuration time. Therefore if I have a copy task like
task prepareEnvSpecificDist(type: Copy) {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
it doesn't do what I want because $project.targetEnv hasn't been set yet. Naively, I changed this to
task prepareEnvSpecificDist(type: Copy) << {
    from 'src/main/conf'
    into "$buildDir/conf"
    include "**/*$project.targetEnv.*"
    rename "(.*)$project.targetEnv.(.*)", '$1.$2'
}
once I understood what was going on. This then fails with:
Skipping task ':prepareEnvSpecificDist' as it has no source files.
because I haven't configured the copy task to tell it what the inputs and outputs are.
The question is: how does one deal with the problem of task configuration based on properties that only become concrete after the configuration phase has completed?
NB: I realise I could pass a system property in and do something like gradle -Dtarget.env=UAT make but that's relatively verbose and I want to work out what is going on anyway.
Cheers
Matt
Building for a particular target environment is a cross-cutting concern and does not really fit the nature of a task. Using a system property (-D) or project property (-P) is a natural way of dealing with this.
If you absolutely want to save a few characters, you can query and manipulate gradle.startParameter.taskNames to implement an environment switch that looks like a task name. However, this is a non-standard solution.
How does one deal with the problem of task configuration based on properties that become concrete after configuration has completed?
This is a special case of the more general problem that a configuration value gets written after it has been read. Typical solutions are:
Avoid it if you can.
Some task/model properties accept a closure that will then get evaluated lazily. This needs to be looked up in the corresponding task/plugin documentation.
Perform the configuration in a global hook like gradle.projectsEvaluated or gradle.taskGraph.whenReady (depending on the exact needs).
Perform the configuration in a task action (at execution time). As you have already experienced, this does not work in all cases, and is generally discouraged (but sometimes tolerable).
Plugins use convention mapping to lazily bind model values to task properties. This is an advanced technique that should not be used in build scripts, but is necessary for writing plugins that extend the build language.
As a side note, keep in mind that Gradle allows you to introduce your own abstractions. For example, you could add a method that lets you write:
environment("uat") {
    // special configuration for UAT environment
}

What is the opposite of JAXB? i.e. generating XML FROM classes?

I am currently designing a solution to a problem I have. I need to dynamically generate an XML file on the fly from Java objects, in the same way JAXB generates Java classes from XML files, but in the opposite direction. Is there something out there already like this?
Alternatively, I'd like a way in which one could 'save' the state of Java classes.
The goal I am working towards is a dynamically changing GUI, where a user can redesign their GUI in the same way you can with iGoogle.
You already have the answer. It's JAXB! You can annotate your classes and then have JAXB marshal them to XML (and back) without the need to create an XML schema first.
Look at https://jaxb.dev.java.net/tutorial/section_6_1-JAXB-Annotations.html#JAXB%20Annotations to get started.
I don't know if this is exactly what you're looking for, but there's java.beans.XMLEncoder:
XMLEncoder enc = new XMLEncoder(new FileOutputStream(file));
enc.writeObject(obj);
enc.close();
The result can then be loaded by XMLDecoder:
XMLDecoder dec = new XMLDecoder(new FileInputStream(file));
Object obj = dec.readObject();
dec.close();
"generate xml from java objects:"
try xtream.
Here's what is said on the tin:
No mappings required. Most objects can be serialized without need for specifying mappings.
Requires no modifications to objects.
Full object graph support
For saving Java object state:
Serialization is the way to do this in Java.
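A minimal round-trip with built-in serialization; the Settings class is a made-up example of state to save:

```java
import java.io.*;

public class SerializationDemo {
    // A made-up state class; any Serializable object works the same way.
    static class Settings implements Serializable {
        private static final long serialVersionUID = 1L;
        String theme = "dark";
        int fontSize = 12;
    }

    // Serializes an object graph to a byte array.
    public static byte[] save(Object state) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Reconstructs the object graph from the bytes written by save().
    public static Object load(byte[] data) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Settings original = new Settings();
        Settings copy = (Settings) load(save(original));
        System.out.println(copy.theme + " " + copy.fontSize); // dark 12
    }
}
```

Note that, unlike XMLEncoder or XStream, the output is a binary format tied to the class definitions, so it is better suited to transient state than to long-term storage.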