How do I configure static versioning (digest) for the asset-pipeline (bertramlabs) in my Spring Boot 2 application? - spring-boot

Spring Boot version: 2.0.4.RELEASE
asset-pipeline version: 3.0.3
Hi,
we're using this plugin because we know it from our Grails applications.
We liked it because it has a simple configuration (for our requirements).
Now we're developing a Spring Boot application, we're using this plugin there too, and we're (almost) happy with it.
But when we run the application in development mode, the assets don't get a digest like /assets/my-styles-b5d2d7380a49af2d7ca7943a9aa74f62.css.
How do I configure the plugin to create a digest for all our resources?
Currently we're using this configuration:
assets {
    minifyJs = true
    minifyCss = true
    enableSourceMaps = false
    includes = ["application.js", "application.scss"]
}
And we're using Thymeleaf for our templates:
<link th:href="@{/assets/application.css}" rel="stylesheet">

I found a solution...
When you use the asset-pipeline, you get a Gradle task called assetCompile.
When creating a .war file, you can hook in this task and replace all the assets with the versioned files.
If you want to use the versioned files in production mode, you have to use this configuration (build.gradle):
assets {
    minifyJs = true
    minifyCss = true
    skipNonDigests = true
    packagePlugin = true
    includes = ["application.js", "application.scss"]
}
...
war {
    dependsOn 'assetCompile'
    from("${buildDir}/assets", {
        into "/WEB-INF/classes/META-INF/assets"
    })
    baseName = '<your project>'
    enabled = true
}
that's all.
When running the assetCompile task, a manifest.properties file is created. This file contains the mapping from each original filename to its versioned counterpart, e.g. application.css=application-79a3c8a2f085ecefadgfca3cda6fe3d12.css.
The application uses this file to find the correct resource.
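For illustration, here is a minimal sketch of how that lookup could be done by hand from the classpath location used in the war block above (the AssetManifest helper is hypothetical, not part of the plugin):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AssetManifest {

    // Hypothetical helper: resolves an undigested asset name to its digested
    // counterpart via the manifest.properties produced by assetCompile.
    public static String resolve(String assetName) throws IOException {
        Properties manifest = new Properties();
        try (InputStream in = AssetManifest.class
                .getResourceAsStream("/META-INF/assets/manifest.properties")) {
            if (in != null) {
                manifest.load(in);
            }
        }
        // e.g. "application.css" -> "application-<digest>.css"; falls back to the plain name
        return manifest.getProperty(assetName, assetName);
    }
}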

I created a plugin which enables the URL replacement for assets with digests in production mode:
Dependency
compile 'ch.itds.taglib:asset-pipeline-thymeleaf-taglib:1.0.0'
Configuration
@Configuration
public class ThymeleafConfig {
    @Bean
    public AssetDialect assetDialect() {
        return new AssetDialect();
    }
}
Usage
<html xmlns:asset="https://www.itds.ch/taglib/asset">
    <script asset:src="@{/assets/main.js}"></script>
</html>
The asset:src="@{/assets/main.js}" will be replaced with src="/assets/main-DIGEST.js".
The replacement happens only if the developmentRuntime of the asset pipeline is disabled.
A few more details are available in my blog post: https://kobelnet.ch/Blog/2019/03/12/assetpipelinethymeleaftaglib

Related

Changing data in OpenApi info block

I'm trying to change my OpenApi info block properties. More specifically, I'm trying to change the value of the version tag in my OpenApi programmatically; for example, I want a new version number for every new build.
I have tried using placeholders and giving them values in build.gradle, but haven't got it working.
openapi:
openapi: 3.1.0
info:
  title: Dummy Bookshop
  summary: A fictitious API demonstrating the OpenAPI Specification's features
  version: ${apiVersion}
  description: A fictitious description.
  termsOfService: https://www.dummy-book.shop/tos
  contact:
    name: Bookshop Support team
    url: https://www.dummy-book.shop/support
    email: support@dummy-book.shop
  license:
    name: Apache 2.0
    identifier: Apache-2.0
paths: {}
build.gradle:
ext {
    apiVersion = '1.0.1'
}
Does anyone have any ideas on how to get this working or is there a plugin that does this?
Create a script in your build.gradle to replace the version with the Gradle version. This is easily done using the Groovy YamlSlurper.
import groovy.yaml.YamlBuilder
import groovy.yaml.YamlSlurper

task updateYaml() {
    def fileDir = "$projectDir/src/main/resources/static/"
    def fileName = "myOpenApi.yml"
    doLast {
        // parse the spec, overwrite the version, then write the result to a new file
        def slurper = new YamlSlurper()
        def builder = new YamlBuilder()
        builder slurper.parse(new File(fileDir, fileName))
        builder.content.info.version = project.version
        File newOASFile = new File(fileDir, "finalSpec.yml")
        newOASFile.write(builder.toString())
    }
}
Once you have the task working, set up your task dependencies properly. This assumes that your generation task is called generateMyOpenApi:
generateMyOpenApi.dependsOn updateYaml
processResources.dependsOn generateMyOpenApi
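If you would rather keep the ${apiVersion} placeholder from the question, Gradle's built-in resource filtering can substitute it while resources are processed; a minimal sketch, assuming the spec sits under src/main/resources (note that expand() will also try to evaluate any other $ tokens in the file):

processResources {
    // replace ${apiVersion} in the spec with the project version during the build
    filesMatching('**/myOpenApi.yml') {
        expand(apiVersion: project.version)
    }
}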

Serializer for custom type 'janusgraph.RelationIdentifier' not found

Janus Server config
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
Java/Spring Boot config
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}
Getting the following error:
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for custom type 'janusgraph.RelationIdentifier' not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.readValue(ResponseMessageSerializer.java:59) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.deserializeResponse(GraphBinaryMessageSerializerV1.java:180) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35) ~[gremlin-driver-3.6.1.jar:3.6.1]
Note: I didn't define the schema. I'm migrating the code from AWS Neptune (working code) to JanusGraph.
Any help on why I am getting the above error?
Get queries are working, and a few mutation queries are also working...
It looks like you have only defined the serializer for JanusGraph types on the server, but not on the client side. You also need to add the JanusGraphIoRegistry on the client side.
This can be done like this:
TypeSerializerRegistry typeSerializerRegistry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.instance())
        .create();

Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .maxConnectionPoolSize(5)
        .maxInProcessPerConnection(1)
        .maxSimultaneousUsagePerConnection(10)
        .create();
Alternatively, you can use a config file, which simplifies the code down to:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");
(I have already created the GraphTraversalSource here because the client is directly created internally by withRemote().)
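Once the traversal source is created, the JanusGraph-specific types deserialize as expected. As a quick smoke test (assuming the graph contains at least one edge), fetching an edge id, which is exactly the janusgraph.RelationIdentifier type from the error above, now works:

// edge ids in JanusGraph are RelationIdentifier instances; this deserialization
// failed before the JanusGraphIoRegistry was registered on the client
Object edgeId = g.E().limit(1).id().next();
System.out.println(edgeId);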
Connecting via such a config file is also described in the JanusGraph documentation under Connecting from Java. Note that I've linked to a version of the docs for the upcoming 1.0.0 release, because the documentation for the latest released version still uses Gryo instead of GraphBinary. You can however already use this with JanusGraph 0.6, and it also makes sense to use GraphBinary instead of Gryo, because support for Gryo will be dropped in version 1.0.0.
The config file conf/remote-graph.properties looks then like this (also taken from the JanusGraph documentation):
hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }
}
You can also specify the various options that you are currently specifying via the builder. This configuration is documented in the TinkerPop reference docs.
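For example, the pool settings from the builder above have config-file counterparts; a sketch based on the connectionPool keys from the TinkerPop driver settings (check the reference docs for your driver version):

connectionPool: {
  maxSize: 5,
  maxInProcessPerConnection: 1,
  maxSimultaneousUsagePerConnection: 10
}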

How do I disable spring security in a Grails 4 integration test?

I had a Grails 3 app including Spring Security, which I recently upgraded to Grails 4.
My application.yml includes the following:
environments:
  test:
    grails:
      plugin:
        springsecurity:
          active: false
    security:
      ignored: '/**'
      basic:
        enabled: false
    spring:
      security:
        enabled: false
Why doesn't this work in Grails 4? What's a good alternative solution?
Grails 4 seems to be ignoring this configuration. When I run integration tests, I am getting a 403 error with a message:
Could not verify the provided CSRF token because your session was not found.
It seems like Spring Security is enabled, and it's using SecurityFilterAutoConfiguration, which is normally excluded for my app.
Update
I am using the following dependencies:
compile('org.grails.plugins:spring-security-core:3.2.3') {
    exclude group: 'org.springframework.security'
}
compile('org.springframework.security:spring-security-core:4.2.13.RELEASE') {
    force = true
}
compile 'org.springframework.security:spring-security-web:4.2.13.RELEASE'
compile 'org.springframework.security:spring-security-config:4.2.13.RELEASE'
Update 2:
In my debugger, I found that the Spring Security core plugin actually is being disabled. The following code from the plugin class is executed:
SpringSecurityUtils.resetSecurityConfig()
def conf = SpringSecurityUtils.securityConfig
boolean printStatusMessages = (conf.printStatusMessages instanceof Boolean) ? conf.printStatusMessages : true
if (!conf || !conf.active) {
    if (printStatusMessages) {
        // <-- the code in this block is executed; the active flag is false
        String message = '\n\nSpring Security is disabled, not loading\n\n'
        log.info message
        println message
    }
    return
}
...however, I am still getting the CSRF filter error, so Spring Security must be configuring itself somehow regardless.
Update 3:
The CSRF filter is being set up by ManagementWebSecurityConfigurerAdapter, using the default configuration.
I tried adding the following to resources.groovy:
if (grailsApplication.config.disableSecurity == true && !Environment.isWarDeployed()) {
    webSecurityConfigurerAdapter(new WebSecurityConfigurerAdapter(true) {})
}
This did not fix the issue. Although my anonymous WebSecurityConfigurerAdapter bean is being constructed, Spring still uses the default ManagementWebSecurityConfigurerAdapter bean.
Try this in
grails-app/conf/application.groovy
environments {
    development {
    }
    test {
        grails.plugin.springsecurity.active = false
    }
    production {
    }
}

SonarQube - specify location of sonar.properties

I'm trying to deploy SonarQube on Kubernetes using ConfigMaps.
The latest 7.1 image I use has its configuration in sonar.properties, embedded in $SONARQUBE_HOME/conf/. The directory is not empty; it also contains a wrapper.conf file.
I would like to mount the ConfigMap inside my container at a location other than /opt/sonar/conf/ and tell SonarQube the new path from which to read the properties.
Is there a way to do that (environment variable? JVM argument? ...)?
It is not recommended to modify this standard configuration in any way, but we can have a look at the SonarQube source code. In this file you can find the code that reads the configuration file:
private static Properties loadPropertiesFile(File homeDir) {
    Properties p = new Properties();
    File propsFile = new File(homeDir, "conf/sonar.properties");
    if (propsFile.exists()) {
        ...
    } else {
        LoggerFactory.getLogger(AppSettingsLoaderImpl.class).warn("Configuration file not found: {}", propsFile);
    }
    return p;
}
So the conf path and file name are hard-coded, and you get a warning if the file does not exist. The home directory is detected this way:
private static File detectHomeDir() {
    try {
        File appJar = new File(Class.forName("org.sonar.application.App").getProtectionDomain().getCodeSource().getLocation().toURI());
        return appJar.getParentFile().getParentFile();
    } catch (...) {
        ...
    }
}
So this can also not be changed. The code above is used here:
@Override
public AppSettings load() {
    Properties p = loadPropertiesFile(homeDir);
    p.putAll(CommandLineParser.parseArguments(cliArguments));
    p.setProperty(PATH_HOME.getKey(), homeDir.getAbsolutePath());
    p = ConfigurationUtils.interpolateVariables(p, System.getenv());
    ...
}
This suggests that you can use command-line parameters or environment variables to change your settings.
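The interpolateVariables call above also means that values inside sonar.properties can reference environment variables, so the file can stay in its standard location while being fed from outside; a small sketch (assuming the ${env:...} syntax supported by SonarQube's property interpolation):

# sonar.properties: values resolved from the container environment at startup
sonar.jdbc.url=${env:SONARQUBE_JDBC_URL}
sonar.jdbc.username=${env:SONARQUBE_JDBC_USERNAME}
sonar.jdbc.password=${env:SONARQUBE_JDBC_PASSWORD}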
For my problem, I defined environment variables to configure the database settings in my Kubernetes deployment:
env:
  - name: SONARQUBE_JDBC_URL
    value: jdbc:sqlserver://mydb:1433;databaseName=sonarqube
  - name: SONARQUBE_JDBC_USERNAME
    value: sonarqube
  - name: SONARQUBE_JDBC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sonarsecret
        key: dbpassword
I also needed the LDAP plugin, but in that case it was not possible to use environment variables for the configuration. As /opt/sonarqube/conf/ is not empty, I can't use a ConfigMap to decouple the configuration from the image content. So I built my own SonarQube image, adding the LDAP plugin jar and the LDAP settings in sonar.properties:
# General Configuration
sonar.security.realm=LDAP
ldap.url=ldap://myldap:389
ldap.bindDn=CN=mysa=_ServicesAccounts,OU=Users,OU=SVC,DC=net
ldap.bindPassword=****
# User Configuration
ldap.user.baseDn=OU=Users,OU=SVC,DC=net
ldap.user.request=(&(sAMAccountName={0})(objectclass=user))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=OU=Users,OU=SVC,DC=net
ldap.group.request=(&(objectClass=group)(member={dn}))
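For reference, a custom image like that can be built from a short Dockerfile (the plugin jar name and version here are hypothetical; use the LDAP plugin release matching your SonarQube version):

FROM sonarqube:7.1
# hypothetical plugin version; drop the jar into the standard plugins directory
COPY sonar-ldap-plugin-2.2.0.jar /opt/sonarqube/extensions/plugins/
# bake the LDAP settings shown above into the image
COPY sonar.properties /opt/sonarqube/conf/sonar.properties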

What is the difference between hapi.js plugins and nodejs modules

Just started familiarizing myself with Hapi. Hapi uses plugins to add components to your application. I'm having a hard time understanding why I would use plugins when I could just do something like:
var lib = require('whatever lib from npm');
What are the differences between the two?
Hapi plugins are also Node modules, but they are Node modules built according to the Hapi plugin API (they expose a register method that registers the plugin with your Hapi pack/server).
Plugins can automatically add routes to your server, change the request, payload and response, and in general change how Hapi behaves.
So in short, plugins are Node modules written specifically to augment Hapi.
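To make that concrete, here is a minimal sketch of what such a plugin module looks like under the pack-era API used in this answer (the file name and route are made up for illustration):

// my-plugin.js
exports.register = function (plugin, options, next) {
    // the plugin interface lets the module register routes, extensions, etc.
    plugin.route({
        method: 'GET',
        path: '/hello',
        handler: function (request, reply) {
            reply('hello from the plugin');
        }
    });
    next();
};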
Let's look at two packages: lout and Lo-Dash.
The Lo-Dash module, as you might know, is a high-performance JS toolset.
lout is a Hapi plugin that will add a /doc route to your app.
You can find both on npm. Let's start with lout:
var Hapi = require('hapi'),
    lout = require('lout'),
    server = new Hapi.Server(80);

server.pack.register({
    plugin: lout
}, function () {
    server.start();
});
As you can see, all we need to do is register lout with our server pack, and all its magic is available to us (some plugins will require more options).
Now let's use Lo-Dash in our code:
var Hapi = require('hapi'),
    lout = require('lout'),
    _ = require('lodash'),
    preset = { app: { name: "myApp" } },
    server;

if (process.env.DEBUG) {
    _.extend(preset, { debug: { request: ['error'] } });
}

server = new Hapi.Server(80, preset);
_.extend(preset, { endpoint: '/lout' });
server.pack.register({
    plugin: lout
}, function () {
    server.start();
});
Here we use Lo-Dash to extend our server settings so that the server logs errors to the console when the DEBUG environment variable is set.
Note that Lo-Dash has no idea about our Hapi server and how it works; it is just used as a helper, and the programmer needs to know how to stitch the two together.
Registering Lo-Dash with server.pack.register has no meaning and would result in an error, so this won't work:
So this wont work -
server.pack.register({
plugin: require('lodash')
}, function() {
server.start();
}
);
