How do I access maven project version in javadoc overview page? - maven

I am using PDFDoclet with maven-javadoc-plugin and I've come quite a long way with it now. I have the maven and javadoc config almost at a point that is good enough, but my immediate problem is that I can't work out how to push the project version number onto the PDF title page.
Before you leap to answer my question by telling me to use maven's <resource> filtering, let me outline why that isn't working.
Filtering works by taking the original file from somewhere in the src folder, doing variable substitution and putting the output in the target folder.
Javadoc works by reading files in src/main/java and src/main/javadoc and, AFAIK, outputting the results into target. This means filtering is useless for Javadoc, since it won't read anything from target.
My results show that any maven variables in javadoc comments don't get substituted.
What trick can I use to get those variables substituted into the javadoc?
The solution can't involve filtering the javadoc output after the site:site task, unless resource filtering works on PDFs.
This is the configuration, FWIW:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.10.3</version>
<configuration>
<show>package</show>
<docfilessubdirs>true</docfilessubdirs>
<tags>
<tag>
<name>pdfInclude</name>
<placement>X</placement>
<head></head>
</tag>
</tags>
</configuration>
<reportSets>
<reportSet>
<id>PDF</id>
<reports>
<report>javadoc</report>
</reports>
<configuration>
<name>PDF</name>
<description>PDF doc</description>
<destDir>pdf</destDir>
<doclet>com.tarsec.javadoc.pdfdoclet.PDFDoclet</doclet>
<docletPath>${basedir}/pdfdoclet/pdfdoclet-1.0.3-all.jar</docletPath>
<useStandardDocletOptions>false</useStandardDocletOptions>
<additionalparam>-pdf my_tech_doc-${project.version}.pdf
-config ${basedir}/pdfdoclet/pdfdoclet.properties</additionalparam>
</configuration>
</reportSet>
</reportSets>
</plugin>
and the pdfdoclet.properties:
# http://pdfdoclet.sourceforge.net/configuration.html
#
#Lets the doclet print additional output in the console and to a logfile.
debug=true
#Print "Author" tags
author=false
#Print "Version" tags
version=true
#Print "since" tags
tag.since=true
#Create summary tables
summary.table=false
#Create hyperlinks
create.links=true
#Encrypt the PDF file
encrypted=false
#Allow printing of the PDF file
allow.printing=true
#Create a bookmark frame
create.frame=true
#Print a title page
api.title.page=true
api.copyright=None
api.author=Hansruedi
#Enables the tag-based filter
filter=true
filter.tags=pdfInclude
font.text.name=resources/arial.ttf
page.orientation=portrait
The PDFDoclet-specific api.* properties should result in a title page as the first page of the PDF, but it doesn't work. If there is a trick that I've missed here and I could get that title page produced, then that might also allow a solution for this somehow.

I realised I can do a quick and dirty hack with the maven <resources> functionality:
<resource>
<directory>${basedir}/src/main/resources</directory>
<targetPath>${basedir}/src/main/javadoc</targetPath>
<filtering>true</filtering>
<includes>
<include>**/overview.html</include>
</includes>
</resource>
This copies my overview.html and filters it, outputting it into the javadoc source directory.
The dirtiness is that this filtered version could then accidentally end up under version control, although using svn I can add it to the ignore list.
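A somewhat cleaner variant of the same idea (just a sketch, I haven't fully wired it into my build yet) would be to have the maven-resources-plugin copy and filter overview.html into a folder under target instead, and then point the javadoc plugin's <overview> parameter at that copy, so nothing generated ever lands in the source tree. The filtered-javadoc folder name below is my own invention:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>filter-javadoc-overview</id>
      <!-- run before the site lifecycle generates the javadoc reports -->
      <phase>pre-site</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}/filtered-javadoc</outputDirectory>
        <resources>
          <resource>
            <directory>${basedir}/src/main/javadoc</directory>
            <!-- substitute ${project.version} and friends while copying -->
            <filtering>true</filtering>
            <includes>
              <include>overview.html</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
and in the maven-javadoc-plugin configuration:
<overview>${project.build.directory}/filtered-javadoc/overview.html</overview>
Whether PDFDoclet honours the -overview option the same way the standard doclet does is something I'd still have to verify.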

Related

Maven Filtering Include/Exclude Understanding

Hello, I'm trying to understand include/exclude filtering in Maven, as I am a Maven noob. I was looking up how to read the version from the pom file in Java and found a solution, but I have a few questions about the filtering. I understand filtering with include and/or exclude in the same resource block, but I don't understand what happens when you include/exclude the same file/folder in different resource blocks.
My folder structure with simplified RandomFolder and its contents:
src
└── main
    └── resources
        ├── RandomFolder
        │   ├── aFile.dll
        │   └── someFile.txt
        └── pom.properties
pom.properties file contains:
version=${revision}
#1
This is how I currently have it (simplified) as a working solution. I understand it's filtering the pom.properties file and replacing '${revision}' with '1.0.0' aka 'version=1.0.0'. Would the pom.properties file be filtered (aka 'version=1.0.0') when mvn package is run and a jar file is generated? I would assume so, but wanted to make sure.
I also understand that typically you wouldn't specify the filename in the filtering and instead it would be **/*.properties, but since this was the only properties file I think it's easier and cleaner to have the filename and it shouldn't cause any issues (let me know if I am wrong on this assumption).
...
<properties>
<revision>1.0.0</revision>
</properties>
...
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>**/pom.properties</include>
</includes>
</resource>
</resources>
...
</build>
#2
I believe this is the way tutorial websites would normally represent include and exclude in a resource block. Really unnecessary in this example as it filters everything besides the pom.properties (found and tested that the excludes block overwrites the includes block no matter the order).
No question here. Just wanted to state that I know about this approach.
...
<properties>
<revision>1.0.0</revision>
</properties>
...
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>**/pom.properties</include>
</includes>
<excludes>
<exclude>**/pom.properties</exclude>
</excludes>
</resource>
</resources>
...
</build>
#3
Why does this work? I would assume it's filtering the file first, so that the file reads '1.0.0', and then not filtering it again, but I want to make sure. This stems from a similar solution I found to get the pom version (see #5 further down). I tested this and it filters the pom.properties file.
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>**/pom.properties</include>
</includes>
</resource>
<resource>
<directory>src/main/resources</directory>
<filtering>false</filtering>
<excludes>
<exclude>**/pom.properties</exclude>
</excludes>
</resource>
</resources>
...
</build>
#4
The flip-flop of #3, with the exclude block before the include block, which seems kinda unnecessary. I would assume it's not filtering the file first and then filtering it so that the file reads '1.0.0', but I want to make sure. I tested this and it filters the pom.properties file.
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>false</filtering>
<excludes>
<exclude>**/pom.properties</exclude>
</excludes>
</resource>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>**/pom.properties</include>
</includes>
</resource>
</resources>
...
</build>
#5
This I don't really know. This was a top solution found for: Maven resource filtering exclude
And the link provided in that solution doesn't, I feel, explain the reasoning behind its solution: https://maven.apache.org/plugins/maven-resources-plugin/examples/filter.html
So is it filtering everything except the pom.properties file, but then copying the pom.properties file over without filtering it? Why? What's the purpose of this? Are they both the same thing? Or a double negative? Does it impact what appears in the build/package?
This 'solution' did not filter the pom.properties file when I tested it.
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<excludes>
<exclude>**/pom.properties</exclude>
</excludes>
</resource>
<resource>
<directory>src/main/resources</directory>
<filtering>false</filtering>
<includes>
<include>**/pom.properties</include>
</includes>
</resource>
</resources>
...
</build>
Please let me know if you need additional information. I would really appreciate it if you can go through each section (#1 to #5) and answer them. Thank you in advance!
Solution
Robert Scholte's response helped me understand that resource filtering also copies files over. This is something I did not understand before, as I thought it was just modifying the file(s) in place in the desired folder(s).
With additional help with understanding exclusions from:
http://www.avajava.com/tutorials/lessons/how-do-i-exclude-particular-resources-from-being-processed.html
I was able to answer my questions.
#1. Still not sure, but I believe the pom.properties file would be filtered in the jar.
#2. N/A. No question to answer.
#3. For #3, #4 and #5 I created a test.properties file in the same folder as pom.properties, with the same contents.
Testing required me to run the command mvn clean install -DskipTests to visually see the differences between #3, #4 and #5.
This would be the result after creating the test.properties file and running the above command (without the <resources> block in the pom file), for a visual reference:
src
└── main
    └── resources
        ├── RandomFolder
        │   ├── aFile.dll
        │   └── someFile.txt
        └── pom.properties
target
└── classes
    ├── MultipleGeneratedFolders
    ├── RandomFolder
    │   ├── aFile.dll
    │   └── someFile.txt
    ├── pom.properties
    └── test.properties
So for #3, it filtered only the pom.properties and copied it to the '/target/classes/' path (same level as 'src'). But because of the second resource block, it also copies all the contents of the RandomFolder and the test.properties file to the '/target/classes/' path without filtering them!
#4. Exact same result as #3. Everything is copied over to '/target/classes/', but only the pom.properties file is filtered.
#5. Here is the interesting one. The first resource block, with filtering true and excluding pom.properties, is filtering everything (since there is no <includes>, it defaults to everything) and then copies it to the '/target/classes/' path. So this means the test.properties file gets filtered! The second resource block, with filtering false and including pom.properties, is not filtering the pom.properties file (so it is left as raw text) and then copies it over to the '/target/classes/' path. Basically, everything besides pom.properties gets filtered and copied over to the '/target/classes/' path.
Summary of Conclusions
In short, filtering refers to replacing variables in files, such as ${revision}, with values from the <properties> block in the pom file. Filtering does not require <includes> or <excludes> (if neither is specified it defaults to including all files) and copies the selected files to the target folder.
If you filter (set to TRUE) and only <include> a specific pattern or file, it will only filter the selected files and copy them to the target folder.
If you filter (set to TRUE) and only <exclude>, it will filter everything except the specified pattern or file and copy only the filtered files to the target folder.
If you don't filter (set to FALSE) and only <include> a specific pattern or file, it will only copy the selected files to the target folder without filtering them.
If you don't filter (set to FALSE) and only <exclude> a specific pattern or file, it will copy all files except the selected files to the target folder without filtering them.
(I could be wrong in my conclusions, so they may not be completely accurate, but 'good enough for government work' as they say.)
You can run mvn process-resources -X to see more details about which files are copied.
But this is how it works:
Per resource block, the fileset to copy is selected by taking all includes (default: all) minus all excludes (default: none).
If filtering is set to true, the placeholders in these files will be replaced; otherwise they will be copied as is.

Maven Enforcer: How to access maven properties from beanshell rule

I successfully created an evaluateBeanshell rule with the maven-enforcer-plugin that scans files in the workspace for common mistakes.
With a hardcoded path the rule works fine. When I want to use the ${project.basedir} variable from the surrounding pom, the script breaks on my Windows machine.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>${maven-enforcer-plugin.version}</version>
<executions>
<execution>
<id>enforce-banned-dependencies</id>
<inherited>true</inherited>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<evaluateBeanshell>
<condition>
import scanner.MyScanner;
scanner = new MyScanner();
//hack to read root dir
//project.basedir crashes beanshell with its backslashes
rootPath = new File("");
root = new File(rootPath.getAbsolutePath());
print("Scanning in: " + root);
print("${project.artifactId}"); // works fine
print("${project.basedir}"); // breaks the code
scanner.loopThroughProjects(root);
return everythingIsFine;
</condition>
</evaluateBeanshell>
</rules>
<fail>true</fail>
</configuration>
</execution>
</executions>
</plugin>
In the debug output the line:
print("${project.basedir}");
was replaced by:
print("D:\code\my-maven-project");
Is there another maven property with sanitized slashes or is there another way to access ${project.basedir}?
The hack outlined in the code example kind of works, but I don't like hacks that force me to leave comments.
You could try ${project.baseUri}.
See https://maven.apache.org/ref/3.8.5/maven-model-builder/#Model_Interpolation
On my Windows 10 machine with Java 8 and Maven 3 the following test properties in pom.xml:
<test>${project.baseUri}</test>
<test2>${project.basedir}</test2>
become the following in the 'effective-pom' (viewed via the IntelliJ IDEA Maven plugin):
<test>file:/D:/test/path/</test>
<test2>D:\test\path</test2>
This is just a proof of concept to show that the path separators change and the value becomes valid as a Java String.
You could then transform the URI to a file for your needs in the beanshell script as follows:
uri = java.net.URI.create("${project.baseUri}");
root = new java.io.File(uri);
Via https://docs.oracle.com/javase/8/docs/api/java/io/File.html#File-java.net.URI-
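Folded back into the rule from the question, the condition could look roughly like this (a sketch only, not tested; MyScanner and everythingIsFine come from the original script):
<evaluateBeanshell>
  <condition>
    import scanner.MyScanner;
    // ${project.baseUri} interpolates to something like file:/D:/code/my-maven-project/,
    // so there are no backslashes to break the BeanShell string literal
    root = new java.io.File(java.net.URI.create("${project.baseUri}"));
    print("Scanning in: " + root);
    scanner = new MyScanner();
    scanner.loopThroughProjects(root);
    return everythingIsFine;
  </condition>
</evaluateBeanshell>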

How do I access environment variables in xquery?

My program in xquery has a few variables that are based on the environment where the function is being run. For example, dev points to "devserver", test to "testserver", prod to "server", etc.
How do I set this up in my application.xml file and how do I reference these in my .xqy functions?
"workaround" solution 1
use "switch" to determine host:
switch (xdmp:host-name(xdmp:host()))
case <dev environment as string> return "devserver"
case <test environment as string> return "testserver"
.
.
.
default return fn:error(xs:QName("ERROR"), "Unknown host: " || xdmp:host-name(xdmp:host()))
"workaround" solution 2
create an xml file in your project for each host, update your application.xml to place the xml file in a directory depending on the environment name, then refer to the document now built on installation.
application.xml:
<db dbName="!mydata-database">
<dir name="/config/" permissionsMode="set">
<uriPrivilege roles="*_uberuser"/>
<permissions roles="*_uberuser" access="riu"/>
<load env="DEV" root="/config/DEV/" merge="true" include="*.xml"/>
<load env="TEST" root="/config/TEST/" merge="true" include="*.xml"/>
Documents are located in the project directory /config/<env>/environment.xml:
<environment>
<services>
<damApi>https://stage.mydam.org</damApi>
<dimeApi>https://stage.mydime.org/api/Services</dimeApi>
</services>
</environment>
Usage when I need to get the value:
fn:doc("/config/environment.xml")/environment/services/damApi/fn:string()
Neither solution seems ideal to me.
If you use ml-gradle to deploy your project, it can do substitutions in your code. That means you can set up an XQuery library with code like this:
declare variable $ENV = "%%environmentName%%";
You can then import that library wherever you need.
Don't know if MarkLogic supports it, but XQuery 3.1 has functions available-environment-variables() and environment-variable().
You can consider using the tiny “properties” library at https://github.com/marklogic-community/commons/tree/master/properties
We wrote it long, long ago for MarkMail.org with the belief we didn’t want to put the configuration into a database document because configuration should be separate from data. Data get backed up, restored somewhere else, and the new location may not be the same environment as the old.
So instead we did a little hack and put the config into the static namespace context (which each group and app server has). The configured prefix is the property name. The configured value is the property value (including type information). Here's a screenshot from a MarkMail deployment showing that it's a production server, that it should email on errors, what static file version to serve, and what domain to output as its base.
This approach lets you configure properties administratively (via the Red GUI or REST) and they’re kept separate from the data. They’re statically available to the execution context without extra cost. You can configure them at the Group level or App Server level or both. The library is a convenience wrapper to pull the typed values.
Maybe by now there’s a better way like the XQuery 3.1 functions, but this one has been working well for 10+ years.
Not yet using gradle in our project, I managed to work out how to use maven profiles to find/replace the values I needed based on the environment being deployed to. I just have to add the plugin to the proper profile, along with the files to update and what to replace.
pom.xml:
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.2</version>
<executions>
<execution>
<phase>prepare-package</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<includes>
<include>**/modules/runTasks.xqy</include>
<include>**/imports/resetKey.xqy</include>
</includes>
<replacements>
<replacement>
<token>https://stage.mydime.org/api/Services</token>
<value>https://www.mydime.org/api/Services</value>
</replacement>
</replacements>
</configuration>
</plugin>
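The plugin block itself then just sits inside the matching profile, so each environment carries its own token/value pairs. A trimmed sketch (the profile id prod is my own naming, the replacer configuration is the one from above), activated with e.g. mvn package -Pprod:
<profiles>
  <profile>
    <id>prod</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.code.maven-replacer-plugin</groupId>
          <artifactId>replacer</artifactId>
          <version>1.5.2</version>
          <executions>
            <execution>
              <phase>prepare-package</phase>
              <goals>
                <goal>replace</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <includes>
              <include>**/imports/resetKey.xqy</include>
            </includes>
            <replacements>
              <!-- value for this environment -->
              <replacement>
                <token>https://stage.mydime.org/api/Services</token>
                <value>https://www.mydime.org/api/Services</value>
              </replacement>
            </replacements>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>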

Asciidoctor does not highlight source code with highlight.js

I am trying to generate documentation via Spring REST Docs using Asciidoctor. The user manual says that to highlight source code in the documentation I should use the :source-highlighter: highlightjs attribute in the header of the .adoc file.
Here's an example of my index.adoc:
:source-highlighter: highlightjs
= Source code listing
Code listings look cool with Asciidoctor and highlight.js with {highlightjs-theme} theme.
[source,groovy]
----
// File: User.groovy
class User {
String username
}
----
[source,sql]
----
CREATE TABLE USER (
ID INT NOT NULL,
USERNAME VARCHAR(40) NOT NULL
);
----
After that I build and run the application; the generated doc comes out without any highlighting of the source code.
My maven plugin configuration:
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<id>generate-docs</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html</backend>
<doctype>book</doctype>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.springframework.restdocs</groupId>
<artifactId>spring-restdocs-asciidoctor</artifactId>
<version>2.0.2.RELEASE</version>
</dependency>
</dependencies>
</plugin>
What am I doing wrong?
P.S. I have also tried to install highlight.js locally, renaming highlight/highlight.pack.js to highlight/highlight.min.js, highlight/styles/github.css to highlight/styles/github.min.css and so on, as the user manual says, but that gave no result either.
Unfortunately, as you probably figured out, Groovy is not included in the standard highlight.js language pack. It only includes the ones in the "common" section. SQL would work though. As you can see in this picture, the SQL part works with my setup out of the box, but not Groovy.
To fix the Groovy code, you can either use Java as the language (fine for a lot of Groovy code examples) or download the custom HighlightJS pack with Groovy checked. I'm guessing that's where you got to.
If you are using the custom HighlightJS pack, I ran into a similar issue at first. When I went into the developer tools in the browser, it showed that the highlight.js files were not found. Another hint that this was the problem was that all the Spring REST Docs examples lost their highlighting too. Although the Asciidoctor manual says to put it into the same folder and it should copy over automatically, with Gradle I still had to tell it to include the highlight files using the resources config option. I'm not a Maven user, but maybe the Maven plugin has a similar setup?
After I fixed the config, it worked for both Groovy and SQL, so I hope that will work for you too.
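For the Maven side, this is the kind of setup I would expect to need, though it is only a sketch I haven't tested: copy the downloaded highlight folder next to the generated HTML with an extra maven-resources-plugin execution (declared after the asciidoctor plugin so it runs later within the same phase), and keep :highlightjsdir: in the .adoc header pointing at that folder. The paths src/main/asciidoc/highlight and target/generated-docs below are assumptions based on the plugin defaults:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-highlightjs</id>
      <!-- same phase as process-asciidoc above, so the files end up next to the HTML -->
      <phase>prepare-package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- default output directory of asciidoctor-maven-plugin; adjust if you changed it -->
        <outputDirectory>${project.build.directory}/generated-docs/highlight</outputDirectory>
        <resources>
          <resource>
            <!-- wherever you unpacked the custom highlight.js bundle -->
            <directory>${basedir}/src/main/asciidoc/highlight</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>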

Custom values in Maven pom.properties file

I'm trying to add custom values to the pom.properties file that Maven generates in the META-INF/maven/${groupId}/${artifactId} location.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.4</version>
<configuration>
<archive>
<manifestEntries>
<build>${BUILD_TAG}</build>
</manifestEntries>
<addMavenDescriptor>true</addMavenDescriptor>
<pomPropertiesFile>${project.build.directory}\interface.properties</pomPropertiesFile>
</archive>
</configuration>
</plugin>
The content of the interface.properties file is:
# Build Properties
buildId=746
Following the documentation, I have pointed the pomPropertiesFile element to an external properties file, but the generated pom.properties still has the default content after running mvn install.
What's the correct usage of the pomPropertiesFile element?
EDIT
I believe that the problem lies in org.apache.maven.archiver.PomPropertiesUtil. If you look at the method sameContents in the source it returns true if the properties in the external file are the same as the defaults and false if different. If the result of sameContents is false, then the contents of the external file are ignored.
Sure enough, this has already been logged as a bug.
I think you need to place a file under src/main/resources/META-INF/${groupId}/${artifactId}/interface.properties and let maven do the filtering job (configure the filtering). The file will automatically be copied to target/META-INF/maven/${groupId}/${artifactId}/ location.
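The 'let maven do the filtering job' part would be configured roughly like this (a sketch; adjust the include pattern to wherever you place the file under src/main/resources):
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- replace ${...} placeholders in this file while copying it to target/classes -->
      <filtering>true</filtering>
      <includes>
        <include>META-INF/**/interface.properties</include>
      </includes>
    </resource>
    <resource>
      <!-- copy everything else as-is, without filtering -->
      <directory>src/main/resources</directory>
      <filtering>false</filtering>
      <excludes>
        <exclude>META-INF/**/interface.properties</exclude>
      </excludes>
    </resource>
  </resources>
</build>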
See https://issues.apache.org/jira/browse/MNG-4998
Maven 3 resolves property placeholders eagerly when reading pom.xml, for all property values that are available at that time. Modifying these properties later will not affect the values that have already been resolved in pom.xml.
However, if a property value is not available (and there's no default), the placeholder will not be replaced and can still be processed later as a placeholder, for example if a plugin generates the property during the build, or if the placeholder is read and processed by a plugin during some build step.
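For example (the property names here are just for illustration, and it is assumed, as with the manifest entry in the question, that BUILD_TAG is only supplied on the CI server, e.g. via mvn package -DBUILD_TAG=jenkins-myjob-746):
<properties>
  <!-- available when the POM is read, so it is interpolated eagerly -->
  <docVersion>${project.version}</docVersion>
  <!-- undefined on a local build: the literal ${BUILD_TAG} survives and can
       still be substituted later by a plugin that does its own filtering -->
  <buildId>${BUILD_TAG}</buildId>
</properties>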
