maven file.encoding and Charset.defaultCharset()

My Maven parent POM contains:
<file.encoding>UTF-8</file.encoding>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
and I have a JUnit test which contains the following code:
byte[] bytes;
System.out.println("------------------" + System.getProperty("file.encoding"));
try {
    bytes = "ü".getBytes(); // German umlaut u: two bytes in UTF-8, one byte in Latin-1
    System.out.println("Byte count: " + bytes.length);
    for (int i = 0; i < bytes.length; i++) {
        System.out.println(String.format("%02x", bytes[i]));
    }
} catch (Exception e) {
    e.printStackTrace();
}
System.out.println("------------------" + Charset.defaultCharset());
When I run mvn clean test (on my Windows machine, where the default charset is Cp1252), the output is:
------------------Cp1252
Byte count: 1
fc
------------------windows-1252
When I run mvn -Dfile.encoding=UTF-8 clean test the output is:
------------------UTF-8
Byte count: 1
fc
------------------windows-1252
Now I have two questions:
1) What is the <file.encoding> property in my POM good for?
2) When I specified -Dfile.encoding=UTF-8, why wasn't the default charset changed to UTF-8 (so getBytes() still used Cp1252 and returned 1 byte), and how do I change this?
Thanks in advance,
Ronald

Your editor must be set to the same encoding, too. Evidently you saved the file as Cp1252. Use JEdit or Notepad++ to check that.
getBytes("UTF-8"); // 2 bytes
getBytes("Cp1252"); // 1 byte
getBytes(); // depends on the platform default, i.e. System.getProperty("file.encoding")
What Maven does with those properties, I am not entirely sure in the case of file.encoding.
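Not part of the original answer, but for illustration: if you pass the charset explicitly, the byte count no longer depends on the platform default at all. A minimal sketch (the class name is made up):

import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        // Explicit charset: the result is the same on every platform.
        byte[] utf8 = "ü".getBytes(StandardCharsets.UTF_8); // 2 bytes: c3 bc
        byte[] cp1252 = "ü".getBytes("Cp1252");             // 1 byte: fc
        // Platform default: depends on the default charset of the running JVM.
        byte[] platform = "ü".getBytes();
        System.out.println(utf8.length + " " + cp1252.length + " " + platform.length);
    }
}

The source file itself still has to be compiled with the encoding it was saved in, which is what project.build.sourceEncoding controls for the Maven compiler plugin.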

If you want Charset.defaultCharset() to return UTF-8, you need to set it on the Surefire plugin's argLine as well, because it is too late if you only specify it in the POM properties: the forked test JVM has already initialized its default charset by then.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <skipTests>${skip.unit.tests}</skipTests>
        <enableAssertions>true</enableAssertions>
        <argLine>${surefireArgLine} -Dfile.encoding=UTF-8</argLine>
    </configuration>
</plugin>
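To check that the forked test JVM really picked up the flag, a small sanity test could look like this (a sketch, not part of the original answer; it assumes JUnit 4, which the question appears to use):

import static org.junit.Assert.assertEquals;

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

import org.junit.Test;

public class DefaultCharsetTest {
    @Test
    public void defaultCharsetIsUtf8() {
        // Passes only when the forked JVM was started with -Dfile.encoding=UTF-8,
        // e.g. via the Surefire <argLine> shown above.
        assertEquals(StandardCharsets.UTF_8, Charset.defaultCharset());
    }
}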

Related

JUnit 5 test report output: are there any better alternatives (Gradle or Maven)?

I am learning to use JUnit 5.
I followed the tutorial to write some dynamic tests and then ran them using Gradle. However, the test report that Gradle produces by default is not good enough for me: it does not contain the nested test container structure. Is there any alternative that can output a better-formed test report when running dynamic tests, similar to the IntelliJ IDEA test report?
the code:
@TestFactory
Stream<DynamicNode> dynamicTestsWithContainers() {
    return Stream.of("A", "B", "C")
        .map({ input ->
            dynamicContainer("Container " + input, Stream.of(
                dynamicTest("not null", { -> assertNotNull(input) }) as DynamicNode,
                dynamicContainer("properties", Stream.of(
                    dynamicTest("length > 0", { -> assertTrue(input.length() > 0) }),
                    dynamicTest("not empty", { -> assertFalse(input.isEmpty()) })
                ))
            ))
        })
}
Run with Gradle: (screenshots of the HTML and XML reports)
Run with IntelliJ IDEA: (screenshot of the IDEA test report)
As I found out, maven-surefire-plugin supports better reporting for JUnit 5 starting with the 3.0.0 milestone versions, using JUnit5Xml30StatelessReporter. A usage example is given in the documentation:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.0.0-M4</version>
    <configuration>
        <testFailureIgnore>true</testFailureIgnore>
        <statelessTestsetReporter
                implementation="org.apache.maven.plugin.surefire.extensions.junit5.JUnit5Xml30StatelessReporter">
            <disable>false</disable>
            <version>3.0</version>
            <usePhrasedFileName>true</usePhrasedFileName>
            <usePhrasedTestSuiteClassName>true</usePhrasedTestSuiteClassName>
            <usePhrasedTestCaseClassName>true</usePhrasedTestCaseClassName>
            <usePhrasedTestCaseMethodName>true</usePhrasedTestCaseMethodName>
        </statelessTestsetReporter>
    </configuration>
</plugin>
This reporter respects dynamic test descriptions when the usePhrasedXXX parameters are set to true. However, it still does not reflect the nested structure: all the test cases end up in a flat list.
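One workaround, not from the original answer and only a sketch: drive the tests yourself through the JUnit Platform Launcher API (artifact org.junit.platform:junit-platform-launcher) and print the container/test tree from a TestExecutionListener. MyDynamicTests is a placeholder for your test class.

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;

public class TreeReport {
    public static void main(String[] args) {
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(MyDynamicTests.class)) // placeholder test class
                .build();

        Launcher launcher = LauncherFactory.create();
        launcher.execute(request, new TestExecutionListener() {
            private int depth;

            @Override
            public void executionStarted(TestIdentifier id) {
                // Containers (including dynamic containers) and tests arrive in tree order.
                System.out.println(indent(depth) + id.getDisplayName());
                depth++;
            }

            @Override
            public void executionFinished(TestIdentifier id, TestExecutionResult result) {
                depth--;
            }
        });
    }

    private static String indent(int depth) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < depth; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }
}

Dynamic containers are reported to the listener as containers, so the nesting that the Surefire XML flattens is visible in the printed tree.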

Consuming stack traces noticeably slower in Java 11 than Java 8

I was comparing the performance of JDK 8 and 11 using jmh 1.21 when I ran across some surprising numbers:
Java version: 1.8.0_192, vendor: Oracle Corporation
Benchmark Mode Cnt Score Error Units
MyBenchmark.throwAndConsumeStacktrace avgt 25 21525.584 ± 58.957 ns/op
Java version: 9.0.4, vendor: Oracle Corporation
Benchmark Mode Cnt Score Error Units
MyBenchmark.throwAndConsumeStacktrace avgt 25 28243.899 ± 498.173 ns/op
Java version: 10.0.2, vendor: Oracle Corporation
Benchmark Mode Cnt Score Error Units
MyBenchmark.throwAndConsumeStacktrace avgt 25 28499.736 ± 215.837 ns/op
Java version: 11.0.1, vendor: Oracle Corporation
Benchmark Mode Cnt Score Error Units
MyBenchmark.throwAndConsumeStacktrace avgt 25 48535.766 ± 2175.753 ns/op
OpenJDK 11 and 12 perform similarly to Oracle JDK 11. I have omitted their numbers for the sake of brevity.
I understand that microbenchmarks do not indicate the performance behavior of real-life applications. Still, I'm curious where this difference is coming from. Any ideas?
Here is the benchmark in its entirety:
pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>jmh</groupId>
<artifactId>consume-stacktrace</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>JMH benchmark sample: Java</name>
<dependencies>
<dependency>
<groupId>org.openjdk.jmh</groupId>
<artifactId>jmh-core</artifactId>
<version>${jmh.version}</version>
</dependency>
<dependency>
<groupId>org.openjdk.jmh</groupId>
<artifactId>jmh-generator-annprocess</artifactId>
<version>${jmh.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<jmh.version>1.21</jmh.version>
<javac.target>1.8</javac.target>
<uberjar.name>benchmarks</uberjar.name>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>1.4.1</version>
<executions>
<execution>
<id>enforce-versions</id>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<requireMavenVersion>
<version>3.0</version>
</requireMavenVersion>
</rules>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<compilerVersion>${javac.target}</compilerVersion>
<source>${javac.target}</source>
<target>${javac.target}</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<finalName>${uberjar.name}</finalName>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>org.openjdk.jmh.Main</mainClass>
</transformer>
</transformers>
<filters>
<filter>
<!--
Shading signed JARs will fail without this.
http://stackoverflow.com/questions/999489/invalid-signature-file-when-attempting-to-run-a-jar
-->
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<artifactId>maven-clean-plugin</artifactId>
<version>2.6.1</version>
</plugin>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.8.2</version>
</plugin>
<plugin>
<artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version>
</plugin>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<version>3.1.0</version>
</plugin>
<plugin>
<artifactId>maven-javadoc-plugin</artifactId>
<version>3.0.0</version>
</plugin>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<version>3.1.0</version>
</plugin>
<plugin>
<artifactId>maven-site-plugin</artifactId>
<version>3.7.1</version>
</plugin>
<plugin>
<artifactId>maven-source-plugin</artifactId>
<version>3.0.1</version>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.0</version>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
src/main/java/jmh/MyBenchmark.java:
package jmh;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.infra.Blackhole;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.TimeUnit;
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MyBenchmark
{
    @Benchmark
    public void throwAndConsumeStacktrace(Blackhole bh)
    {
        try
        {
            throw new IllegalArgumentException("I love benchmarks");
        }
        catch (IllegalArgumentException e)
        {
            StringWriter sw = new StringWriter();
            e.printStackTrace(new PrintWriter(sw));
            bh.consume(sw.toString());
        }
    }
}
Here is the Windows-specific script I use. It should be trivial to translate it to other platforms:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_192
call mvn -V -Djavac.target=1.8 clean install
"%JAVA_HOME%\bin\java" -jar target\benchmarks.jar
set JAVA_HOME=C:\Program Files\Java\jdk-9.0.4
call mvn -V -Djavac.target=9 clean install
"%JAVA_HOME%\bin\java" -jar target\benchmarks.jar
set JAVA_HOME=C:\Program Files\Java\jdk-10.0.2
call mvn -V -Djavac.target=10 clean install
"%JAVA_HOME%\bin\java" -jar target\benchmarks.jar
set JAVA_HOME=C:\Program Files\Java\oracle-11.0.1
call mvn -V -Djavac.target=11 clean install
"%JAVA_HOME%\bin\java" -jar target\benchmarks.jar
My runtime environment is:
Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T14:41:47-04:00)
Maven home: C:\Program Files\apache-maven-3.6.0\bin\..
Default locale: en_CA, platform encoding: Cp1252
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
More specifically, I am running Microsoft Windows [Version 10.0.17763.195].
I investigated the issue with async-profiler, which can draw cool flame graphs demonstrating where the CPU time is spent.
As @AlekseyShipilev pointed out, the slowdown between JDK 8 and JDK 9 is mainly the result of the StackWalker changes. Also, G1 has become the default GC since JDK 9. If we explicitly set -XX:+UseParallelGC (the default in JDK 8), the scores will be slightly better.
But the most interesting part is the slowdown in JDK 11.
Here is what async-profiler shows (clickable SVG).
The main difference between two profiles is in the size of java_lang_Throwable::get_stack_trace_elements block, which is dominated by StringTable::intern. Apparently StringTable::intern takes much longer on JDK 11.
Let's zoom in:
Note that StringTable::intern in JDK 11 calls do_intern, which in turn allocates a new java.lang.String object. Looks suspicious. Nothing of this kind is seen in the JDK 10 profile. Time to look at the source code.
stringTable.cpp (JDK 11)
oop StringTable::intern(Handle string_or_null_h, jchar* name, int len, TRAPS) {
  // shared table always uses java_lang_String::hash_code
  unsigned int hash = java_lang_String::hash_code(name, len);
  oop found_string = StringTable::the_table()->lookup_shared(name, len, hash);
  if (found_string != NULL) {
    return found_string;
  }
  if (StringTable::_alt_hash) {
    hash = hash_string(name, len, true);
  }
  return StringTable::the_table()->do_intern(string_or_null_h, name, len,
                                             hash, CHECK_NULL);
}

oop StringTable::do_intern(Handle string_or_null_h, const jchar* name,
                           int len, uintx hash, TRAPS) {
  HandleMark hm(THREAD); // cleanup strings created
  Handle string_h;
  if (!string_or_null_h.is_null()) {
    string_h = string_or_null_h;
  } else {
    string_h = java_lang_String::create_from_unicode(name, len, CHECK_NULL);
  }
The function in JDK 11 first looks for the string in the shared StringTable, does not find it, then goes to do_intern and immediately creates a new String object.
In the JDK 10 sources, after the call to lookup_shared there was an additional lookup in the main table, which returned the existing string without creating a new object:
found_string = the_table()->lookup_in_main_table(index, name, len, hashValue);
This refactoring was a result of JDK-8195097 "Make it possible to process StringTable outside safepoint".
TL;DR: while interning method names, JDK 11 HotSpot creates redundant String objects. This started happening after JDK-8195097.
I suspect this is due to several changes.
The 8->9 regression happened when stack trace generation switched to StackWalker (JDK-8150778). Unfortunately, this made the VM native code intern a lot of strings, and StringTable became the bottleneck. If you profile the OP's benchmark, you will see a profile like the one in JDK-8151751. It should be enough to perf record -g the entire JVM that runs the benchmark and then look into perf report. (Hint, hint, you can do it yourself next time!)
The 10->11 regression must have happened later. I suspect it is due to the StringTable preparations for switching to a fully concurrent hash table (JDK-8195100, which, as Claes points out, is not entirely in 11) or something else (class data sharing changes?).
Either way, interning on the fast path is a bad idea, and the patch for JDK-8151751 should have dealt with both regressions.
Watch this:
8u191: 15108 ± 99 ns/op [so far so good]
- 54.55% 0.37% java libjvm.so [.] JVM_GetStackTraceElement
- 54.18% JVM_GetStackTraceElement
- 52.22% java_lang_Throwable::get_stack_trace_element
- 48.23% java_lang_StackTraceElement::create
- 17.82% StringTable::intern
- 13.92% StringTable::intern
- 4.83% Klass::external_name
+ 3.41% Method::line_number_from_bci
"head": 22382 ± 134 ns/op [regression]
- 69.79% 0.05% org.sample.MyBe libjvm.so [.] JVM_InitStackTraceElement
- 69.73% JVM_InitStackTraceElementArray
- 69.14% java_lang_Throwable::get_stack_trace_elements
- 66.86% java_lang_StackTraceElement::fill_in
- 38.48% StringTable::intern
- 21.81% StringTable::intern
- 2.21% Klass::external_name
1.82% Method::line_number_from_bci
0.97% AccessInternal::PostRuntimeDispatch<G1BarrierSet::AccessBarrier<573
"head" + JDK-8151751 patch: 7511 ± 26 ns/op [woot, even better than 8u]
- 22.53% 0.12% org.sample.MyBe libjvm.so [.] JVM_InitStackTraceElement
- 22.40% JVM_InitStackTraceElementArray
- 20.25% java_lang_Throwable::get_stack_trace_elements
- 12.69% java_lang_StackTraceElement::fill_in
+ 6.86% Method::line_number_from_bci
2.08% AccessInternal::PostRuntimeDispatch<G1BarrierSet::AccessBarrier
2.24% InstanceKlass::method_with_orig_idnum
1.03% Handle::Handle
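Not part of either answer, but worth noting if you hit this in application code you control: exceptions that are only used for their message can opt out of stack trace capture entirely via the four-argument Throwable constructor (available since Java 7), which sidesteps both the capture and the later interning of frame strings. A sketch:

// Sketch of an exception type that skips stack trace capture entirely;
// only useful when the trace is never read, e.g. for control-flow exceptions.
public class NoTraceException extends RuntimeException {
    public NoTraceException(String message) {
        // writableStackTrace = false: fillInStackTrace() is never invoked,
        // so no StackTraceElements are created or interned later.
        super(message, null, /* enableSuppression */ false, /* writableStackTrace */ false);
    }
}

This obviously only helps when the stack trace is never needed; it does not change how fast the JVM fills in traces for ordinary exceptions.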

Bug in gmaven-plugin execute goal (using Groovy)

I am trying to set a project property using the gmaven-plugin at build time, but the resulting property differs between my Linux and Windows build environments.
In the Linux environment the value contains double quotes, but on Windows it does not.
Why is the result different? Could you please explain?
Build result:
Linux: ### commitId : "8def4294ccb346795bd9682b5bcb9174bc64d78f"
Windows: ### commitId : 8def4294ccb346795bd9682b5bcb9174bc64d78f
pom:
<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <version>1.5</version>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>execute</goal>
            </goals>
            <configuration>
                <properties>
                    <script>git log -n1 --pretty=format:"%H" web/</script>
                </properties>
                <source>
                    def command = project.properties.script
                    def process = command.execute()
                    process.waitFor()
                    project.properties.setProperty('commitId', process.in.text.trim())
                    println '### commitId : ' + project.properties.commitId
                </source>
            </configuration>
        </execution>
    </executions>
</plugin>
The Groovy script is the problem: calling execute() on a String does not go through a shell, so the quotes in --pretty=format:"%H" are handled differently on Linux and Windows. You need to split the command into a list of arguments, like this:
def a = ['git', 'log', '-n1', '--pretty=format:%H', 'web/'].execute().text.trim()
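The same principle applies outside Groovy: pass the command as a list of arguments so that no quoting rules are involved. For illustration only (not from the original answer), a plain-Java equivalent using ProcessBuilder; the class name and paths are just examples:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class GitCommitId {
    public static void main(String[] args) throws Exception {
        // Each argument is passed to git as-is; no shell is involved,
        // so the behaviour is the same on Linux and Windows.
        Process process = new ProcessBuilder("git", "log", "-n1", "--pretty=format:%H", "web/")
                .redirectErrorStream(true)
                .start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String commitId = reader.readLine();
            process.waitFor();
            System.out.println("### commitId : " + commitId);
        }
    }
}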

Gradle / Jacoco / SonarQube issues

I'm going to put a fat bounty on this as soon as the system permits.
What I'm specifically having trouble with is getting coverage and getting integration tests working. For these I see the uninformative error:
Resource not found: com.something.somethingelse.SomeITCase
Uninformative because there's nothing to relate it to, nor does it mean much to someone who has none of the context.
Here's other weirdness I'm seeing. In cases where there's no integration tests for a subproject, I see this:
JaCoCoItSensor: JaCoCo IT report not found: /dev/build/dilithium/target/jacoco-it.exec
Why would I see target? This isn't a Maven project. A global search shows there's no target directory mentioned anywhere in the code base.
Then, there's this section of the documentation:
sonarqube {
properties {
properties["sonar.sources"] += sourceSets.custom.allSource.srcDirs
properties["sonar.tests"] += sourceSets.integTest.allSource.srcDirs
}
}
As near as I can tell, sourceSets.integTest.allSource.srcDirs returns Files, not Strings. Also, it should be:
sonarqube {
    properties {
        property "sonar.tests", "comma,separated,file,paths"
    }
}
Note that you get an error if you list a directory that doesn't exist. Of course, there's apparently no standard for which directory to put integration tests in, and for some sub-projects they may not even exist. The Gradle convention would be to simply ignore non-existent directories. Your code ends up looking like:
sonarqube {
    StringBuilder builder = new StringBuilder()
    sourceSets.integrationTest.allSource.srcDirs.each { File dir ->
        if (dir.exists()) {
            builder.append(dir.getAbsolutePath())
            builder.append(",")
        }
    }
    if (builder.size() > 1) builder.deleteCharAt(builder.size() - 1)
    if (builder.size() > 1)
        properties["sonar.tests"] += builder.toString()
    properties["sonar.jacoco.reportPath"] +=
        "$project.buildDir/jacoco/test.exec,$project.buildDir/jacoco/integrationTest.exec"
}
Sonar is reporting no coverage at all. If I search for the *.exec files, I see what I would expect. That being:
./build/jacoco/test.exec
./build/jacoco/integrationTest.exec
...but weirdly, I also see this:
./build/sonar/com.proj_name_component_management-component_proj-recordstate/jacoco-overall.exec
What is that? Why is it in such a non-standard location?
OK, I've added this code:
properties {
    println "Before: " + properties.get("sonar.tests")
    println "Before: " + properties.get("sonar.jacoco.reportPath")
    property "sonar.tests", builder.toString()
    property "sonar.jacoco.reportPath", "$project.buildDir/jacoco/test.exec,$project.buildDir/jacoco/integrationTest.exec"
    println "After: " + properties.get("sonar.tests")
    println "After: " + properties.get("sonar.jacoco.reportPath")
}
...which results in:
[still running]
I don't want any bounty or any points, just a suggestion.
Can you get ANY JaCoCo reports at all?
Personally, I would separate the two concerns: JaCoCo report generation and Sonar.
I would first try to simply generate the JaCoCo reports, THEN I would look at why Sonar cannot get hold of them.
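To act on that suggestion, you can check whether the .exec files contain any data at all before involving Sonar. A sketch using JaCoCo's own API (the org.jacoco.core artifact); the file path is just an example:

import java.io.File;

import org.jacoco.core.data.ExecutionData;
import org.jacoco.core.tools.ExecFileLoader;

public class DumpExec {
    public static void main(String[] args) throws Exception {
        // Point this at build/jacoco/test.exec or build/jacoco/integrationTest.exec.
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(new File(args[0]));
        for (ExecutionData data : loader.getExecutionDataStore().getContents()) {
            System.out.println(data.getName()); // one line per instrumented class
        }
    }
}

If this prints nothing for your integration test .exec file, the problem is on the JaCoCo/Gradle side rather than in the Sonar configuration.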

Maven/Gradle way to calculate the total size of a dependency with all its transitive dependencies included

I would like to be able to perform an analysis on each of my project POMs to determine how many bytes each direct dependency introduces to the resulting package based on the sum of all of its transitive dependencies.
For example, if dependency A brings in B, C, and D, I would like to be able to see a summary showing A -> total size = (A + B + C + D).
Is there an existing Maven or Gradle way to determine this information?
Here's a task for your build.gradle:
task depsize {
    doLast {
        final formatStr = "%,10.2f"
        final conf = configurations.default
        final size = conf.collect { it.length() / (1024 * 1024) }.sum()
        final out = new StringBuffer()
        out << 'Total dependencies size:'.padRight(45)
        out << "${String.format(formatStr, size)} Mb\n\n"
        conf.sort { -it.length() }
            .each {
                out << "${it.name}".padRight(45)
                out << "${String.format(formatStr, (it.length() / 1024))} kb\n"
            }
        println(out)
    }
}
The task prints the total size of all dependencies and then lists each one with its size in kB, sorted by size in descending order.
Update: the latest version of the task can be found in a GitHub gist.
I keep a small pom.xml template on my workstation to identify heavyweight dependencies.
Assuming you want to see the weight of org.eclipse.jetty:jetty-client with all of its transitive dependencies, create this in a new folder:
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>not-used</groupId>
    <artifactId>fat</artifactId>
    <version>standalone</version>
    <dependencies>
        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-client</artifactId>
            <version>LATEST</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-shade-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
Then cd into the folder, run mvn package, and check the size of the generated fat JAR. On Unix-like systems you can use du -h target/fat-standalone.jar for that.
To test another Maven artifact, just change the groupId:artifactId in the template above.
I do not know of a way to show the totals, but you can get a per-dependency size report for your project. Check the maven-project-info-reports-plugin: http://maven.apache.org/plugins/maven-project-info-reports-plugin/dependencies-mojo.html
If you have a configuration that includes all the dependencies you want to calculate the size for, you can simply put the following snippet in your build.gradle file:
def size = 0
configurations.myConfiguration.files.each { file ->
    size += file.size()
}
println "Dependencies size: $size bytes"
This prints when you run any Gradle task, because the snippet executes while the build file is being evaluated.
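For a rough Maven-side equivalent without building a fat JAR (a sketch under stated assumptions, not an established plugin feature): copy the resolved dependencies with mvn dependency:copy-dependencies -DoutputDirectory=target/deps and sum the file sizes, for example with a few lines of Java:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DependencySize {
    public static void main(String[] args) throws IOException {
        // Directory produced by: mvn dependency:copy-dependencies -DoutputDirectory=target/deps
        Path deps = Paths.get(args.length > 0 ? args[0] : "target/deps");
        try (Stream<Path> files = Files.list(deps)) {
            long totalBytes = files
                    .filter(p -> p.toString().endsWith(".jar"))
                    .mapToLong(p -> p.toFile().length())
                    .sum();
            System.out.printf("Total dependency size: %,.2f MB%n", totalBytes / (1024.0 * 1024.0));
        }
    }
}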
