How do I access environment variables in XQuery? - Maven

My XQuery program has a few variables whose values depend on the environment where the code runs. For example, dev points to "devserver", test to "testserver", prod to "server", and so on.
How do I set this up in my application.xml file, and how do I reference these values in my .xqy functions?
"workaround" solution 1
use "switch" to determine host:
switch (xdmp:host-name(xdmp:host()))
  case <dev environment as string> return "devserver"
  case <test environment as string> return "testserver"
  ...
  default return fn:error(xs:QName("ERROR"), "Unknown host: " || xdmp:host-name(xdmp:host()))
"workaround" solution 2
create an xml file in your project for each host, update your application.xml to place the xml file in a directory depending on the environment name, then refer to the document now built on installation.
application.xml:
<db dbName="!mydata-database">
  <dir name="/config/" permissionsMode="set">
    <uriPrivilege roles="*_uberuser"/>
    <permissions roles="*_uberuser" access="riu"/>
    <load env="DEV" root="/config/DEV/" merge="true" include="*.xml"/>
    <load env="TEST" root="/config/TEST/" merge="true" include="*.xml"/>
  </dir>
</db>
Documents are located in the project directory under /config/ (one environment.xml per environment directory):
<environment>
  <services>
    <damApi>https://stage.mydam.org</damApi>
    <dimeApi>https://stage.mydime.org/api/Services</dimeApi>
  </services>
</environment>
Usage when I need to get the value:
fn:doc("/config/environment.xml")/environment/services/damApi/fn:string()
Neither solution seems ideal to me.

If you use ml-gradle to deploy your project, it can do substitutions in your code. That means you can set up an XQuery library with code like this:
declare variable $ENV := "%%environmentName%%";
You can then import that library wherever you need.
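For example, wrapped in a library module it might look like this (a minimal sketch; the module namespace and file path are made-up placeholders, and the variable is placed in the module's namespace so it can be imported):

(: /lib/environment.xqy, filtered by ml-gradle at deploy time :)
xquery version "1.0-ml";
module namespace env = "http://example.com/environment";
declare variable $env:NAME := "%%environmentName%%";

(: in any other module :)
import module namespace env = "http://example.com/environment" at "/lib/environment.xqy";
if ($env:NAME eq "prod") then "server" else "devserver"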

I don't know whether MarkLogic supports them, but XQuery 3.1 has the functions fn:available-environment-variables() and fn:environment-variable().
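If your processor does support them, usage is simply (a sketch; SERVER_NAME is just an example variable name):

fn:available-environment-variables()    (: names of all defined environment variables :)
fn:environment-variable("SERVER_NAME")  (: value of one variable, or the empty sequence :)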

You can consider using the tiny “properties” library at https://github.com/marklogic-community/commons/tree/master/properties
We wrote it long, long ago for MarkMail.org, believing that configuration shouldn't go into a database document because configuration should be separate from data: data get backed up and restored somewhere else, and the new location may not be the same environment as the old.
So instead we did a little hack and put the config into the static namespace context (which each group and app server has). The configured prefix is the property name; the configured value is the property value (including type information). In a MarkMail deployment, for example, the configured properties record that it's a production server, whether to email on errors, what static file version to serve, and what domain to output as its base.
This approach lets you configure properties administratively (via the Red GUI or REST) and they’re kept separate from the data. They’re statically available to the execution context without extra cost. You can configure them at the Group level or App Server level or both. The library is a convenience wrapper to pull the typed values.
Maybe by now there’s a better way like the XQuery 3.1 functions, but this one has been working well for 10+ years.

Since we're not yet using Gradle in our project, I worked out how to use Maven profiles to find and replace the values I needed based on the environment being deployed to. In each profile I just add the plugin, the files to update, and the replacements to make.
pom.xml:
<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.2</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>**/modules/runTasks.xqy</include>
      <include>**/imports/resetKey.xqy</include>
    </includes>
    <replacements>
      <replacement>
        <token>https://stage.mydime.org/api/Services</token>
        <value>https://www.mydime.org/api/Services</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>
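To make the replacement environment-specific, this plugin block can be placed inside the relevant Maven profile, so each profile carries its own token/value pairs (a sketch; the profile id is an assumption):

<profiles>
  <profile>
    <id>prod</id>
    <build>
      <plugins>
        <!-- the replacer plugin exactly as above, with the
             production values in <replacements> -->
      </plugins>
    </build>
  </profile>
</profiles>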

Related

ASCIIDOC: "Unresolved directive in...": "<stdin>" or "index.adoc"

I am new to AsciiDoc and just want to know where the following problem comes from.
Setup:
IntelliJ with the newest AsciiDoc plugin
newest asciidoctor-maven-plugin with preserveDirectories = true
I organized my asciidocs like this:
footer.adoc
header.adoc
index.adoc
subfolder/
  index.adoc
generated-docs looks like this:
footer.html
header.html
index.html
subfolder/
  index.html
Now, if I want subfolder/index.html to include the header and footer too, I thought I needed to write include::../header.adoc[] into the .adoc file, which is no problem for the IntelliJ plugin. But in the generated HTML you will find the following error:
<p>Unresolved directive in index.adoc - include::../header.adoc[]</p>
So when I write include::header.adoc[] into the .adoc file instead, the generated HTML is happy, but the IntelliJ AsciiDoc plugin shows an error:
Unresolved directive in <stdin> - include::header.adoc[]
I am just wondering whether this is a bug for the IntelliJ plugin team or for the Maven plugin team. Or maybe someone has a workaround for this problem?
And a little bonus question: is it possible to configure the Maven plugin not to generate header.html/footer.html, since they are already included in the actual HTML files?
I have no experience with the maven plugin, but I do have lots of experience with AsciiDoc, the IntelliJ Plugin and the Gradle plugin.
The IntelliJ plugin behaviour is correct. When you convert /subfolder/index.adoc, the includes are resolved relative to this file, so include::../header.adoc[] is correct.
You describe that you don't specify which file to render for the Maven plugin (header.adoc gets converted as well). This might be the problem with the Maven plugin: you just specify the source path, all documents are rendered relative to that source path, and hence /subfolder/index.adoc ends up with the wrong source path.
With the Gradle plugin, you can specify all the documents to be converted, which would also avoid header.adoc being converted. From the Maven plugin docs I see that you can only specify a single file.
With this in mind, I would suggest changing your file structure so that all the files to be converted sit in one folder. You can then point the plugin at this folder and the other files will not be converted. This should also resolve your problem with the relative path names (see the plugin sketch after the layout below):
/src/docs/
|
+- common/
|  |
|  +- header.adoc
|  +- footer.adoc
|
+- chapters/
|
+- main/
   |
   +- index1.adoc
   +- index2.adoc
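Something like the following plugin configuration could then convert only the documents under main/ (a sketch relying on the plugin's sourceDirectory option; I have no experience with the Maven plugin, so check this against its documentation):

<plugin>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctor-maven-plugin</artifactId>
  <configuration>
    <!-- only files in this folder are rendered; header.adoc and footer.adoc
         in common/ are still reachable via include:: but are not converted -->
    <sourceDirectory>src/docs/main</sourceDirectory>
  </configuration>
</plugin>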
I know this is late, but the answer is found in the Spring REST Docs documentation from spring.io, in the "Using the Snippets" section of https://spring.io/guides/gs/testing-restdocs/. There is also some mention of this in the sample Gradle project.
The Maven plugin configuration should be something like this:
<plugin>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctor-maven-plugin</artifactId>
  <version>1.5.8</version>
  <executions>
    <execution>
      <id>output-html</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>process-asciidoc</goal>
      </goals>
      <configuration>
        <sourceHighlighter>coderay</sourceHighlighter>
        <backend>html</backend>
        <attributes>
          <toc/>
          <linkcss>false</linkcss>
          <snippets>${project.build.directory}/generated-snippets</snippets>
        </attributes>
      </configuration>
    </execution>
  </executions>
</plugin>

Asciidoctor does not highlight source code with highlight.js

I'm trying to generate documentation via Spring REST Docs using Asciidoctor. The user manual says that to highlight source code in the documentation I should use the :source-highlighter: highlightjs attribute in the header of the .adoc file.
Here's an example of my index.adoc:
:source-highlighter: highlightjs
= Source code listing
Code listings look cool with Asciidoctor and highlight.js with {highlightjs-theme} theme.
[source,groovy]
----
// File: User.groovy
class User {
    String username
}
----

[source,sql]
----
CREATE TABLE USER (
    ID INT NOT NULL,
    USERNAME VARCHAR(40) NOT NULL
);
----
After that I build and run the application, and the generated doc comes out without any source code highlighting.
My Maven plugin configuration:
<plugin>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctor-maven-plugin</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <id>generate-docs</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>process-asciidoc</goal>
      </goals>
      <configuration>
        <backend>html</backend>
        <doctype>book</doctype>
      </configuration>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>org.springframework.restdocs</groupId>
      <artifactId>spring-restdocs-asciidoctor</artifactId>
      <version>2.0.2.RELEASE</version>
    </dependency>
  </dependencies>
</plugin>
What am I doing wrong?
P.S. I have also tried to install highlight.js locally, renaming highlight/highlight.pack.js to highlight/highlight.min.js, highlight/styles/github.css to highlight/styles/github.min.css, and so on, as the user manual says, but that had no effect either.
Unfortunately, as you probably figured out, Groovy is not included in the standard highlight.js language pack; it only includes the languages in the "common" section. SQL would work, though: with my setup the SQL part is highlighted out of the box, but Groovy is not.
To fix the Groovy code, you can either use Java as the language (fine for a lot of Groovy code examples) or download the custom HighlightJS pack with Groovy checked. I'm guessing that's where you got to.
If you are using the custom HighlightJS pack, I ran into a similar issue at first. When I went into the developer tools in the browser, it showed that the highlight.js files were not found. Another hint that this was the problem is that all the Spring REST Docs examples lost their highlighting too. Although the Asciidoctor manual says to put it into the same folder and it should copy over automatically, with Gradle I still had to tell it to include the highlight files using the resources config option. I'm not a Maven user, but maybe the Maven plugin has a similar setup?
After I fixed the config, it worked for both Groovy and SQL, so I hope that will work for you too.
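For reference, a hedged Maven counterpart of that Gradle resources tweak could be a copy-resources execution that places the local highlight.js files next to the generated HTML (the directory names are assumptions, not taken from the question):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-highlightjs</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- put the local highlight.js files beside the generated HTML -->
        <outputDirectory>${project.build.directory}/generated-docs/highlight</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/asciidoc/highlight</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>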

Spring Boot: Thymeleaf not resolving fragments after packaging

I'm using fragments like this:
@RequestMapping(value="/fragment/nodeListWithStatus", method=RequestMethod.GET)
public String nodeListWithStatus(Model model) {
    // status of the nodes
    model.addAttribute("nodeList", nodeService.getNodeListWithOnlineStatus());
    return "/fragments :: nodeList";
}
The templates are in /src/main/resources/templates. This works fine when starting the application from IntelliJ.
As soon as I create a .jar and start it, the above code no longer works. Error:
[2014-10-21 20:37:09.191] log4j - 7941 ERROR [http-nio-666-exec-2] --- TemplateEngine: [THYMELEAF][http-nio-666-exec-2] Exception processing template "/fragments": Error resolving template "/fragments", template might not exist or might not be accessible by any of the configured Template Resolvers
When I open the .jar with WinRAR, I see /templates/fragments.html, so it seems to be there.
My pom.xml has this part for building the jar (mvn clean install):
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <mainClass>de.filth.Application</mainClass>
        <layout>JAR</layout>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Can anyone tell me what I'm doing wrong here?
Thanks!
You don't need the leading / on the view name, i.e. you should return fragments :: nodeList rather than /fragments :: nodeList. Having made this change Thymeleaf should be able to find the template when run from your IDE or from a jar file.
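Applied to the controller from the question, only the return value changes:

@RequestMapping(value="/fragment/nodeListWithStatus", method=RequestMethod.GET)
public String nodeListWithStatus(Model model) {
    model.addAttribute("nodeList", nodeService.getNodeListWithOnlineStatus());
    // no leading slash: resolves cleanly to the classpath resource /templates/fragments.html
    return "fragments :: nodeList";
}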
If you're interested, here's what's happening under the hood:
The view name is used to search for a resource on the classpath. fragments :: nodeList means that the resource name is /templates/fragments.html and /fragments :: nodeList means that the resource name is /templates//fragments.html (note the double slash). When you're running in your IDE the resource is available straight off the filesystem and the double slash doesn't cause a problem. When you're running from a jar file the resource is nested within that jar and the double slash prevents it from being found. I don't fully understand why there's this difference in behaviour and it is rather unfortunate. I've opened an issue so that we (the Spring Boot team) can see if there's anything we can do to make the behaviour consistent.
It's an old topic, but I stumbled upon it while having a problem with similar symptoms and a different root cause. I wanted to share the solution that helped me, in case it helps somebody else.
Apparently the name of the messages.properties file is case sensitive, but not everywhere. I had mine called "Messages.properties" (with a capital M) and it worked just fine from inside the IDE (IntelliJ), but once I tried to run the app from the jar, all messages were replaced with ??parameter.name??. Renaming the file with a lowercase m resolved the problem.

How to enforce the use of exactly one out of two Maven profiles?

I have a Maven project that defines two separate profiles, developer and release (surely you get the drift here). I want exactly one of these two profiles to be active at any time, never both. If both are somehow activated, the build makes no sense and should fail. If neither is activated, the build also makes no sense and should fail.
I'm sure I can write some custom plugin code to achieve this, and I might very well end up going that way, but I'd be interested in achieving this using POM configuration (could be using existing plugins from Maven Central).
It should be possible to activate the profiles using -P (--activate-profiles), so <activation> through properties would not be a valid solution. Solutions using activeByDefault would not be valid either, since activeByDefault is generally known as a pitfall and is unreliable (and we may in fact activate other profiles, rendering activeByDefault unusable).
Your suggestions much appreciated.
I had a similar need (i.e. mutual exclusivity of two profiles) and solved it by treating the two target profiles as internal profiles that shouldn't be specified on the command line: instead, a controlling system property can either be specified or not. E.g. let's assume that by default you want the "dev" profile to be active. We can then activate/deactivate the relevant internal profiles based on whether the -Drelease option is specified, as follows:
<!-- Internal profile: FOR INTERNAL USE ONLY - active if -Drelease is *not* specified. -->
<profile>
  <id>internal-dev</id>
  <activation>
    <!-- Activation via *absence* of a system property to ensure mutual exclusivity
         of this profile with internal-release -->
    <property>
      <name>!release</name>
    </property>
  </activation>
  ...
</profile>

<!-- Internal profile: FOR INTERNAL USE ONLY - active if -Drelease *is* specified. -->
<profile>
  <id>internal-release</id>
  <activation>
    <!-- Activation via *presence* of a system property to ensure mutual exclusivity
         of this profile with internal-dev -->
    <property>
      <name>release</name>
    </property>
  </activation>
  ...
</profile>
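With that in place, the environment is selected purely via the system property:

mvn clean install             (activates internal-dev, since -Drelease is absent)
mvn clean install -Drelease   (activates internal-release instead)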
The simplest solution for this kind of problem is the maven-enforcer-plugin, which has exactly such a rule for forcing at least one of two or more profiles to be active.
Unfortunately, the requireActiveProfile rule currently has a bug, but a new release that solves this is being prepared.
Update: the bug mentioned above has been fixed in release 1.4 (released in 2015).
This can still be done with the Maven Enforcer plugin.
Although mutual exclusion via <requireActiveProfile>...<all>false</all>... is buggy, as reported by @khmarbaise, there's still the <evaluateBeanshell/> built-in rule, which lets you do whatever you want.
I wrote one especially for this case: XOR of two profiles. I hope it helps.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <executions>
    <execution>
      <id>enforce-PROFILE_ONE-XOR-PROFILE_TWO-is-active</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireActiveProfile>
            <profiles>PROFILE_ONE,PROFILE_TWO</profiles>
            <all>false</all>
          </requireActiveProfile>
          <evaluateBeanshell>
            <condition><![CDATA[
              // ensure PROFILE_ONE XOR PROFILE_TWO
              print("Checking if only one of PROFILE_ONE and PROFILE_TWO profiles is active ...");
              boolean profile1 = false, profile2 = false;
              for(s: "${project.activeProfiles}".replaceAll("\\[?\\s?Profile \\{id: (?<profile>\\w+), source: \\w+\\}\\]?", "${profile}").split(",")) {
                if("PROFILE_ONE".equalsIgnoreCase(s)){ profile1 = true;}
                if("PROFILE_TWO".equalsIgnoreCase(s)){ profile2 = true;}
              }
              print("PROFILE_ONE XOR PROFILE_TWO: "+(profile1 != profile2));
              return profile1 != profile2;
            ]]></condition>
          </evaluateBeanshell>
        </rules>
        <failFast>true</failFast>
      </configuration>
    </execution>
  </executions>
</plugin>
The tricky part is looping over the active profiles, which I've already done here. You can extend it to more than two profiles if you need to, but you'll have to write out the long XOR expression, since Beanshell doesn't implement Java's ^ XOR operator.
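For example, an "exactly one of three" condition written out in plain Beanshell boolean logic might look like this (profile3 being a hypothetical third flag collected in the same loop):

// true only when exactly one of the three flags is set
return (profile1 && !profile2 && !profile3)
    || (!profile1 && profile2 && !profile3)
    || (!profile1 && !profile2 && profile3);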
I always issue a build command like so:
mvn package -P-dev,prod
It explicitly disables the dev profile and enables the production one. To my knowledge, you cannot conditionally enable one build profile if another is active (which is a bit unfortunate), and because of that you can't ensure that the profiles are mutually exclusive.
I needed a slightly more advanced version of this rule. I ended up writing it myself. I've submitted a patch to them that includes the following 2 rules:
The ability to specify a set of mutually-exclusive profiles (p1,p2:p1,p3 would mean p1 can't be active with either p2 or p3).
The ability to ban profiles (the contrary of requireActiveProfile). p1, p2 would mean neither p1 nor p2 can be active for this build.
Both of these rules support wildcards and consider inherited profiles as well. These are built on v1.4 of the rules.
http://jira.codehaus.org/browse/MENFORCER-225

Custom values in Maven pom.properties file

I'm trying to add custom values to the pom.properties file that Maven generates at the META-INF/maven/${groupId}/${artifactId} location:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <archive>
      <manifestEntries>
        <build>${BUILD_TAG}</build>
      </manifestEntries>
      <addMavenDescriptor>true</addMavenDescriptor>
      <pomPropertiesFile>${project.build.directory}\interface.properties</pomPropertiesFile>
    </archive>
  </configuration>
</plugin>
The content of the interface.properties file is:
# Build Properties
buildId=746
Following the documentation, I have pointed the pomPropertiesFile element at an external properties file, but the generated pom.properties file still has the default content after running mvn install.
What's the correct usage of the pomPropertiesFile element?
EDIT
I believe that the problem lies in org.apache.maven.archiver.PomPropertiesUtil. If you look at the sameContents method in the source, it returns true if the properties in the external file are the same as the defaults and false if they differ. If the result of sameContents is false, the contents of the external file are ignored.
Sure enough, this has already been logged as a bug.
I think you need to place the file under src/main/resources/META-INF/${groupId}/${artifactId}/interface.properties and let Maven do the filtering job (with filtering configured). The file will automatically be copied to the target/META-INF/maven/${groupId}/${artifactId}/ location.
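Enabling the filtering is just the standard Maven resources configuration, along these lines:

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- replace ${...} placeholders while copying the files -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>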
See https://issues.apache.org/jira/browse/MNG-4998
Maven 3 resolves property placeholders eagerly when reading pom.xml, for all property values that are available at that time. Modifying these properties later will not affect the values that have already been resolved in pom.xml.
However, if a property value is not available (there's no default), then the placeholder will not be replaced by a value and it can still be processed later as a placeholder, for example if a plugin generates the property during the build, or if the placeholder is read and processed by a plugin during some build step.
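As a small illustration (a sketch; BUILD_TAG stands in for a value that only exists at build time, e.g. one supplied by CI):

<properties>
  <buildId>746</buildId>
</properties>

<!-- later, e.g. inside maven-jar-plugin's <manifestEntries> -->
<manifestEntries>
  <!-- buildId has a value when the POM is read, so it is resolved eagerly -->
  <Build-Id>${buildId}</Build-Id>
  <!-- BUILD_TAG has no value at read time, so the literal placeholder survives
       and can still be substituted later in the build -->
  <Build-Tag>${BUILD_TAG}</Build-Tag>
</manifestEntries>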
