What is causing spring.io/sagan build to fail?

I am getting this error when trying to build the spring.io/sagan project from GitHub, following the build steps in the project wiki.
java.lang.AssertionError:
Expected: "Spring Tool Suiteâ„¢ Downloads"
but: was "Spring Tool Suite™ Downloads"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
at sagan.tools.support.ToolsPagesTests.showsAllStsGaDownloads(ToolsPagesTests.java:59)
I have tried building through msysGit Bash and through the Windows CLI, without any difference.
I have also tried running the site. The build gets to 93% and I can browse the site, but it renders as plain HTML without CSS or images.
I have been told it's an encoding issue, but how do I fix it? It looks like the build expects UTF-8 but the encoding is ASCII.
Before building I also tried to change the Windows CLI encoding following the advice in:
Unicode characters in Windows command line - how?
But it has not made a difference. I am running Windows 8.1.

Not tested, but the problem probably comes from the fact that no encoding has been specified to the compiler for the Java files. The sagan developers probably use a macOS or Unix machine, whose default file encoding is UTF-8, whereas the default encoding on Windows is different (typically windows-1252).
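To see why the assertion reads the way it does, here is a small standalone illustration (not part of the fix): the test's expected string literal is stored as UTF-8 bytes in the source file, but a compiler using the Windows default code page misreads those bytes.

    import java.nio.charset.Charset;

    public class EncodingMismatchDemo {
        public static void main(String[] args) throws Exception {
            String original = "Spring Tool Suite™ Downloads";
            // The .java source file stores this literal as UTF-8 bytes...
            byte[] utf8Bytes = original.getBytes("UTF-8");
            // ...but a compiler using the Windows default code page decodes them as windows-1252:
            String misread = new String(utf8Bytes, "windows-1252");
            System.out.println(misread);                  // Spring Tool Suiteâ„¢ Downloads
            System.out.println(Charset.defaultCharset()); // the default the compiler falls back to
        }
    }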
I would try adding the following lines to the root build.gradle file, inside the configure(javaProjects) section, after the java plugin has been applied:
configure(javaProjects) {
    apply plugin: 'java'
    // configure the compiler encoding to UTF-8:
    tasks.withType(Compile) {
        options.encoding = 'UTF-8'
    }
}
If that solves the problem, you should file a bug report, or even a pull request to the project.
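Equally untested: to confirm the encoding diagnosis without touching the build script, you could try forcing the default file encoding for every JVM the build spawns, straight from the Windows command line (JAVA_TOOL_OPTIONS is picked up by the Gradle process and by any compiler JVMs it forks; this assumes the project ships the usual gradlew.bat wrapper):

    set JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
    gradlew build

If the tests pass with this set, the build.gradle change above is the proper, portable fix.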

Related

java/spring-boot/gradle Wrong Entrypoint in image built with pack and paketobuildpacks/builder:base

I have a really simple java spring-boot gradle application.
When I build an image from source with:
pack build testapp:0.0.1 --builder paketobuildpacks/builder:base
and try to run it with docker I get the following error:
ERROR: failed to launch: determine start command: when there is no default process a command is required.
The generated Entrypoint in this image is "/cnb/lifecycle/launcher".
When I inspect the image with pack inspect-image there are no processes.
I tried this with different Java Spring Boot Gradle applications. When I use the "bootBuildImage" Gradle task, it does nearly the same thing but uses the pre-built .jar file, and the resulting image works. The generated Entrypoint in this image is "/cnb/process/web" and pack inspect-image shows three processes.
Any ideas?
I can't see your build output, but it sounds like you're hitting a known issue. If this is not your problem, please include the full output of running pack build.
Onto the issue. By default, Spring Boot Gradle projects will build both an executable and non-executable JAR. Because this produces two JAR files, it presently confuses the buildpacks.
There are a couple of solutions:
Tell Gradle to not build the non-executable JAR. The buildpack requires the executable JAR. You can do this by adding the following to your build.gradle file:
jar {
    enabled = false
}
This is the solution we have used in the Paketo buildpack samples.
If you don't want to make the change suggested in #1, you can add the following argument to pack build: -e BP_GRADLE_BUILT_ARTIFACT=build/libs/<your-jar>.jar. For example: -e BP_GRADLE_BUILT_ARTIFACT=build/libs/demo-0.0.1-SNAPSHOT.jar. You can use glob-style pattern matching here, but make sure that what you enter does not match *-plain.jar, which is the non-executable JAR that gets built by default.
This option simply tells the Gradle buildpack more specifically which JAR file to pass along to subsequent buildpacks.
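For example, combining the command from your question with option 2 (the jar name is taken from the example above; adjust it to whatever actually lands in build/libs):

    pack build testapp:0.0.1 \
        --builder paketobuildpacks/builder:base \
        -e BP_GRADLE_BUILT_ARTIFACT=build/libs/demo-0.0.1-SNAPSHOT.jar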
We also have an open issue that should help to mitigate this problem. When the executable-jar buildpack gains support for multiple JARs, it'll be less likely that you'll need to set this. Essentially, this will add support for the executable-jar buildpack to inspect and detect an executable JAR, which would allow it to throw out the -plain.jar file since it's not executable.

How can I make “gradle --console=rich” the default?

Along the lines of this answer (which works for me, BTW) and the javadocs, I tried
gradle.startParameter.consoleOutput = org.gradle.api.logging.configuration.ConsoleOutput.Rich
in my ~/.gradle/init.gradle. However, I still need --console=rich to get color output. Why?
Tested with Gradle 2.14.1 and 3.2.1.
Terminal is cygwin urxvt with TERM variable set to rxvt-unicode-256color.
Since Gradle 4.3 you can use org.gradle.console property in gradle.properties:
org.gradle.console=rich
A new console verbose mode will print outcomes of all tasks (like UP-TO-DATE) like Gradle 3.5 and earlier did. You can set this via --console=verbose or by a new Gradle property org.gradle.console=(plain|rich|verbose).
I am not sure if you can force the rich console from a gradle script, as the detection happens likely before the script is interpreted.
NativeServices class provides the integration with the console. If you look at the source code, there are two messages possibly printed in log:
Native-platform terminal integration is not available. Continuing with fallback.
Unable to load from native-platform backed ConsoleDetector. Continuing with fallback.
The latter might give you more information as to why. Try running the Gradle script with --debug. You will likely find out that you are missing a native library that is either not available in Cygwin, or is available but not on the library path.
I believe it works when you specify the rich console from the command line because Gradle then forces the colours even though the console doesn't indicate it supports them.
Does it work if you don't use the Cygwin console, e.g. in the native Windows command line or in Git Bash?
There is a workaround to make this work: you can create an alias in Cygwin that always adds --console=rich.
If you are using the Gradle wrapper, you can edit the gradlew script and add the command line parameter there. To automate this, you can change the wrapper task to alter the script in its doLast part, as sketched below.
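A rough, untested sketch of that wrapper tweak in build.gradle, assuming the generated gradlew script references org.gradle.wrapper.GradleWrapperMain exactly once (adapt the string replacement to whatever your Gradle version generates):

    wrapper {
        doLast {
            def unixScript = file('gradlew')
            // Splice --console=rich in as the first argument the script forwards to Gradle.
            unixScript.text = unixScript.text.replace(
                    'org.gradle.wrapper.GradleWrapperMain',
                    'org.gradle.wrapper.GradleWrapperMain --console=rich')
        }
    }

After running gradle wrapper, the regenerated gradlew will pass --console=rich on every invocation.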
Create a file called gradle.properties inside your ~/.gradle/ folder.
Inside gradle.properties, add the line org.gradle.console=rich.
Every build will then run with --console=rich automatically, because this gradle.properties is merged with your project's gradle.properties.
Note that if your project's gradle.properties defines the same property, the file in ~/.gradle/ actually takes precedence: per the Gradle build-environment docs, properties in GRADLE_USER_HOME override those in the project root.
If you are on Linux/Mac set
alias gradle='gradle --console rich'
in your ~/.bashrc.
If you are using the Gradle wrapper, add the following line:
org.gradle.console=rich
to ./gradle.properties in the root folder, where the gradlew script is located.

core nlp truecaseannotator not found

I just got started with CoreNLP version 3.6.0. I've downloaded this version from this website. Using the command-line pipeline, I've been able to run the standard pipeline annotators, but ran into a problem with the truecase annotator.
Here's a copy of the terminal output:
loadClassifier=edu/stanford/nlp/models/truecase/truecasing.fast.caseless.qn.ser.gz
mixedCaseMapFile=edu/stanford/nlp/models/truecase/MixDisambiguation.list
classBias=INIT_UPPER:-0.7,UPPER:-0.7,O:0
Exception in thread "main" edu.stanford.nlp.io.RuntimeIOException: java.io.IOException: Unable to open "edu/stanford/nlp/models/truecase/truecasing.fast.caseless.qn.ser.gz" as class path, filename or URL
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1499)
at edu.stanford.nlp.pipeline.TrueCaseAnnotator.<init>(TrueCaseAnnotator.java:58)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.trueCase(AnnotatorImplementations.java:199)
at edu.stanford.nlp.pipeline.AnnotatorFactories$10.create(AnnotatorFactories.java:435)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:85)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:375)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:139)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:135)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.main(StanfordCoreNLP.java:1222)
Any ideas?
We tried to make the default models jar a bit smaller and decided not to include this model by default. But it is still contained in the English models jar, which you can download from the release history page.
After you have downloaded the jar, make sure to put it on your classpath before you run CoreNLP. The English models jar should also contain everything in stanford-corenlp-3.6.0-models.jar, so you won't need both of them on your classpath.
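For example, something along these lines from the unzipped CoreNLP 3.6.0 directory, with the downloaded English models jar placed next to the other CoreNLP jars (the annotator list is only an example; truecase needs the earlier annotators to have run):

    java -cp "*" -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP \
         -annotators tokenize,ssplit,pos,lemma,truecase \
         -file input.txt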

wsdl2java generated code causes character encoding problems

I have generated a bunch of java-files from a WSDL source. I used Apache CXF 2.6.1 for generating the files.
When I put the code onto our production box, which runs Jetty and Maven, and I send a request to the server via the generated Java files, it somehow changes the system's/JVM's character encoding: the Swedish characters å, ä and ö turn into Ã¥, ä, ö.
I can't reproduce this on my own box.
Someone have any idea?
Since version 2.5.4 there is a new command line option, -encoding, which is not yet documented in the official documentation. But when you call the tool with the help option (-h|-help) you will see the encoding option:
wsdl2java ... -encoding UTF-8 ....

How to use Maven in a heterogeneous environment with different encodings?

I've created an SVN repository on a Linux server (Debian) and used a client on a Windows machine to check my Java sources in.
Now I've set up a Hudson server on a different Linux server (Ubuntu) to periodically run tests on my code. But the tests fail with a compiler error:
Error: unmappable character for encoding ASCII
On my Windows machine I've used the default encoding Cp1252.
On my SVN server I can do a local checkout of my sources and they look good.
On my Hudson server the checkout contains illegal characters.
What are the parameters I have to adjust so that all three systems use a working encoding?
EDIT 2009-10-15:
I changed the default encoding of my Ubuntu system to latin1. Now I can open the checked-out files with an editor and they look good (thanks to #John-T at superuser.com).
But Hudson still complained about "unmappable character for encoding ASCII", and I found that this is caused by Maven. I found an explanation, but the suggested solution didn't work. Now Maven tells me that it uses latin1 when copying some resources, but the compiler (not using this setting?) still complains with the same error message.
No, the Maven compiler plugin doesn't use the project.build.sourceEncoding property, so you need to configure the file encoding for it explicitly (I'd use UTF-8):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <encoding>UTF-8</encoding>
    </configuration>
</plugin>
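For what it's worth, more recent versions of the maven-compiler-plugin default their encoding to the project-wide property, so declaring it once in the POM's properties section achieves the same thing:

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>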
The first thing to identify is which character is causing the problem. It may be that the broken char can be replaced by some pure-ASCII entity. SVN itself is encoding agnostic: it'll just store byte-for-byte what's passed in.
If Hudson requires 7-bit ASCII, then this is all you can do. Otherwise, find out what Hudson supports and save your files in that format instead. UTF-8 would probably be the way to go.
I don't think there is a way to change the encoding of a file with SVN. You can set the encoding for a commit message with the --encoding flag, but not the contents of files themselves. Text files are stored in the same format they appear on your local disk. The only conversion is a translation of line endings according to the svn:eol-style property.
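For reference, the two knobs mentioned above look like this on the command line (the file name is just a placeholder):

    # --encoding only affects the log message, not file contents
    svn commit --encoding UTF-8 -m "fix encoding of source files"
    # line-ending translation is controlled per file via the svn:eol-style property
    svn propset svn:eol-style native src/Main.java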
