How to get Java Date object from Camel XPathBuilder XPath expression?

I've got a program that uses Camel's XPathBuilder to create and evaluate xpath expressions in a processor bean.
I've created the object
XPathBuilder xpath = XPathBuilder.xpath( "path/to/dateElement", java.util.Date.class );
and execute it
Object obj = xpath.evaluate( exchange, Object.class );
however, when I log the value of obj it is null. If I request it as a String, it returns the XML date string as I would expect.
Does XPathBuilder not support conversion to java.util.Date? (I can't see a list of supported output classes in the documentation anywhere.)
I tried casting the xpath expression explicitly to xs:dateTime, but that gave me an exception saying it couldn't convert the expression to a nodeList.
(It works fine when I want a java.lang.Long or java.lang.Double instead of java.util.Date)
How do I get the XPath expression to return a Date object?
Thanks! Screwtape.

With XPathBuilder, only conversion to Number, String, Boolean, Node and NodeList is supported out of the box. If you want to support other types, you need to implement a custom TypeConverter.
import org.apache.camel.Converter;
import org.apache.camel.TypeConverters;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class XmlDateTypeConverters implements TypeConverters {

    @Converter
    public Date convertNodeToDate(Node node) throws ParseException {
        return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssX")
                .parse(node.getTextContent());
    }

    @Converter(allowNull = true)
    public Date convertNodeListToDate(NodeList nodeList) throws ParseException {
        if (nodeList.getLength() == 0) {
            return null;
        }
        return convertNodeToDate(nodeList.item(0));
    }
}
Registering XmlDateTypeConverters with the CamelContext depends on your preferences; with the Java DSL it looks like this:
getContext().getTypeConverterRegistry().addTypeConverters(new XmlDateTypeConverters());
In Spring, the TypeConverters class is discovered automatically if it is registered as a bean.
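The converter above leans on SimpleDateFormat's X pattern letter, which understands the Z designator and the ISO-8601 offsets that an xs:dateTime value carries. As a quick standalone sanity check of that pattern (no Camel required; the class name and sample timestamps are illustrative only):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class XsdDateTimeParseCheck {

    // Same pattern as in XmlDateTypeConverters above
    static Date parseXsdDateTime(String text) {
        try {
            return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssX").parse(text);
        } catch (ParseException e) {
            throw new IllegalArgumentException("not an xs:dateTime: " + text, e);
        }
    }

    public static void main(String[] args) {
        // Both strings denote the same instant (12:00 UTC)
        Date utc = parseXsdDateTime("2021-09-01T12:00:00Z");
        Date offset = parseXsdDateTime("2021-09-01T13:00:00+01:00");
        System.out.println(utc.equals(offset)); // prints "true"
    }
}
```

Note that xs:dateTime also allows fractional seconds and zone-less values, which this single pattern does not cover; a production converter may need additional patterns or java.time parsing.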

Related

spring-data-neo4j v6: No converter found capable of converting from type [MyDTO] to type [org.neo4j.driver.Value]

Situation
I'm migrating a Kotlin Spring Data Neo4j application from spring-data-neo4j version 5.2.0.RELEASE to version 6.0.11.
The original application has several Repository interfaces with custom queries which take some DTO as a parameter and use the various DTO fields to construct the query. All those queries currently fail with
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [MyDTO] to type [org.neo4j.driver.Value]
The reference documentation for spring-data-neo4j v6 only provides examples where the parameters passed to custom query methods of a @Repository interface are of the same type as the @Node class associated with that repository. The documentation does not explicitly state that only parameters of the node class are allowed.
Question
Is there any way to pass an arbitrary DTO (not being a @Node class) to a custom query method in a @Repository interface in spring-data-neo4j v6, like it was possible in v5?
Code samples
Example node entity
@Node
data class MyEntity(
    @Id
    val attr1: String,
    val attr2: String,
    val attr3: String
)
Example DTO
data class MyDTO(
    val field1: String,
    val field2: String
)
Example Repository interface
@Repository
interface MyRepository : PagingAndSortingRepository<MyEntity, String> {
    // ConverterNotFoundException is thrown when this method is called
    @Query("MATCH (e:MyEntity {attr1: {0}.field1}) " +
           "CREATE (e)-[l:LINK]->(n:OtherEntity {attr2: {0}.field2})")
    fun doSomethingWithDto(dto: MyDTO)
}
Solutions tried so far
Annotate DTO as if it were a Node entity
Based on the following found in the reference docs https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#custom-queries.parameters
Mapped entities (everything with a @Node) passed as parameter to a function that is annotated with a custom query will be turned into a nested map.
@Node
data class MyDTO(
    @Id
    val field1: String,
    val field2: String
)
Replace {0} with $0 in custom query
Based on the following found in the reference docs https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#custom-queries.parameters
You do this exactly the same way as in a standard Cypher query issued
in the Neo4j Browser or the Cypher-Shell, with the $ syntax (from
Neo4j 4.0 on upwards, the old {foo} syntax for Cypher parameters has
been removed from the database).
...
[In the given listing] we are referring to the parameter by its name.
You can also use $0 etc. instead.
@Repository
interface MyRepository : PagingAndSortingRepository<MyEntity, String> {
    // ConverterNotFoundException is thrown when this method is called
    @Query("MATCH (e:MyEntity {attr1: $0.field1}) " +
           "CREATE (e)-[l:LINK]->(n:OtherEntity {attr2: $0.field2})")
    fun doSomethingWithDto(dto: MyDTO)
}
Details
spring-boot-starter: v2.4.10
spring-data-neo4j: v6.0.12
neo4j-java-driver: v4.1.4
Neo4j server version: v3.5.29
RTFM Custom conversions ...
Found the solution myself. Hopefully someone else may benefit from this as well.
Solution
Create a custom converter
import mypackage.model.*
import com.fasterxml.jackson.core.type.TypeReference
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import org.neo4j.driver.Value
import org.neo4j.driver.Values
import org.springframework.core.convert.TypeDescriptor
import org.springframework.core.convert.converter.GenericConverter
import org.springframework.core.convert.converter.GenericConverter.ConvertiblePair
import java.util.HashSet
class DtoToNeo4jValueConverter : GenericConverter {

    override fun getConvertibleTypes(): Set<ConvertiblePair>? {
        val convertiblePairs: MutableSet<ConvertiblePair> = HashSet()
        convertiblePairs.add(ConvertiblePair(MyDTO::class.java, Value::class.java))
        return convertiblePairs
    }

    override fun convert(source: Any?, sourceType: TypeDescriptor, targetType: TypeDescriptor?): Any? {
        return if (MyDTO::class.java.isAssignableFrom(sourceType.type)) {
            // generic way of converting an object into a map
            val dataclassAsMap = jacksonObjectMapper().convertValue(source as MyDTO,
                object : TypeReference<Map<String, Any>>() {})
            Values.value(dataclassAsMap)
        } else null
    }
}
Register custom converter in config
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.neo4j.core.convert.Neo4jConversions
import org.springframework.core.convert.converter.GenericConverter
import java.util.*
@Configuration
class MyNeo4jConfig {

    @Bean
    fun neo4jConversions(): Neo4jConversions {
        val additionalConverters: Set<GenericConverter?> = Collections.singleton(DtoToNeo4jValueConverter())
        return Neo4jConversions(additionalConverters)
    }
}
It's ridiculous that the framework would force you to write a custom converter for this. I made a @Transient object in my overridden User class for a limited set of updatable user-profile fields, and I'm encountering the same error. I guess I will just have to break the object up into its component String fields in the method params to get around this problem. What a mess.
@Query("MATCH (u:User) WHERE u.username = :#{#username} " +
       "SET u.firstName = :#{#up.firstName}, u.lastName = :#{#up.lastName}, u.intro = :#{#up.intro} RETURN u")
Mono<User> update(@Param("username") String username, @Param("up") UserProfile up);
No converter found capable of converting from type [...UserProfile] to type [org.neo4j.driver.Value]

Prevent Primitive To String Conversion in SpringBoot / Jackson

We have written a Spring Boot REST service; it internally uses Jackson for serialisation/deserialisation of the JSON input/output of the REST APIs.
We do not want type conversion of primitives to / from String for API input / output.
We have disabled String to Primitive conversion using
spring.jackson.mapper.allow-coercion-of-scalars=false
But Primitive to String conversion is still being allowed.
e.g.
"name": 123
from the API is still deserialised to "123" (the Java data type of name is String here).
We have gone through the Customize the Jackson ObjectMapper section of the Spring docs, and it does not look like there is anything in those enums that can be used.
Is there a way to achieve this without writing a custom ObjectMapper / Deserializer?
We did not find any config property that achieves this, and finally went with the solution posted by Michał Ziober.
package xyz;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.deser.std.StringDeserializer;

import java.io.IOException;

public class StrictStringDeserializer extends StringDeserializer {

    @Override
    public String deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        JsonToken token = p.currentToken();
        if (token != JsonToken.VALUE_STRING) {
            // covers booleans, numbers and every other non-string token
            ctxt.reportInputMismatch(String.class, "%s is not a `String` value!", token.toString());
            return null;
        }
        return super.deserialize(p, ctxt);
    }
}
POJO Class
public class XyzAbc {
    // ...
    @JsonDeserialize(using = StrictStringDeserializer.class)
    private String name;
    // ...
}

How to customize Jackson ObjectMapper to allow NaN in Spring Boot 2.0?

Moving this here from GitHub as the Spring team only uses GitHub issues for bugs and feature requests.
Per the Spring Boot documentation, it should be possible to customize the Jackson ObjectMapper using environment properties (e.g. in application.properties) such as spring.jackson.parser.<feature_name> as long as you're not defining your own ObjectMapper bean.
I need to activate the ALLOW_NON_NUMERIC_NUMBERS parser feature, as I'm getting (strictly speaking invalid) JSON with NaN values for floating-point fields that I want Jackson to map to java.lang.Double.NaN in Java.
So in my application.properties I've added spring.jackson.parser.ALLOW_NON_NUMERIC_NUMBERS=true and I can see this is being picked up:
Spring Boot's JacksonAutoConfiguration is creating a Jackson2ObjectMapperBuilder
Jackson2ObjectMapperBuilder's StandardJackson2ObjectMapperBuilderCustomizer is picking up my spring.jackson.parser.ALLOW_NON_NUMERIC_NUMBERS=true property and adding it to its features map
Jackson2ObjectMapperBuilder's build() method is eventually calling configureFeature which results in the mask value of the ALLOW_NON_NUMERIC_NUMBERS feature (512) being added to the _parserFeatures value in the JsonFactory of the ObjectMapper
the ObjectMapper being injected in my bean using @Autowired also has the ALLOW_NON_NUMERIC_NUMBERS feature enabled
What's unclear is why I'm still getting the following Jackson error when parsing JSON that has the NaN value for a floating point field:
JSON decoding error: Character N is neither a decimal digit number, decimal point, nor "e" notation exponential mark.
I'm debugging now so I'll probably end up answering my own question. The above detail is to possibly help people coming from the GitHub issue to find a thread to pull on in case their feature flags aren't being applied.
The "problem" is that I'm trying to map float values to BigDecimal in Java, but BigDecimal has no representation for NaN (or (-)Inf, for that matter). The problem originates in com.fasterxml.jackson.databind.util.TokenBuffer.Parser, whose public BigDecimal getDecimalValue() does:
return BigDecimal.valueOf(n.doubleValue());
which ends up (in java.math.BigDecimal) converting the double value to the String "NaN", which is then passed into the BigDecimal constructor; that rejects it with a NumberFormatException and the error message I mentioned in the question:
throw new NumberFormatException("Character " + c
+ " is neither a decimal digit number, decimal point, nor"
+ " \"e\" notation exponential mark.");
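The failure can be reproduced with the plain JDK, independent of Jackson: BigDecimal.valueOf(double) goes through Double.toString, which yields "NaN", and the BigDecimal(String) constructor rejects that string (the class name here is just for the demo):

```java
import java.math.BigDecimal;

public class BigDecimalNaNDemo {

    public static void main(String[] args) {
        try {
            // Effectively what TokenBuffer.Parser.getDecimalValue() ends up doing
            BigDecimal value = BigDecimal.valueOf(Double.NaN);
            System.out.println("unexpected: " + value);
        } catch (NumberFormatException e) {
            // BigDecimal has no NaN representation, so the "NaN" string is rejected
            System.out.println("rejected NaN"); // this branch is taken
        }
    }
}
```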
In my case, I would be happy with NaN being mapped to null, but I understand that's not the correct behavior for everyone using Jackson, so I've written a custom deserializer to do just that:
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.deser.std.NumberDeserializers;

import java.io.IOException;
import java.math.BigDecimal;

public class NaNSafeBigDecimalDeserializer extends JsonDeserializer<BigDecimal> {

    private final BigDecimal nanValue = null;

    @Override
    public BigDecimal deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        if (p.isNaN()) {
            return nanValue;
        } else {
            return NumberDeserializers.BigDecimalDeserializer.instance.deserialize(p, ctxt);
        }
    }
}
Now I can just annotate my BigDecimal fields with @JsonDeserialize(using = NaNSafeBigDecimalDeserializer.class).

Spring Validation of JSON - Why do I need to add `@field:`

I've finally made some progress on Spring validation (on a JSON object coming in from RabbitMQ).
However, there are a couple of things I don't understand:
The documentation states I can just use the annotation @NotBlank and then annotate my method with @Valid. However, I found this wasn't doing anything. So instead I did @field:NotBlank and it worked together with the following - why did @field: do the trick?
@JsonIgnoreProperties(ignoreUnknown = true)
data class MyModel(
    @field:NotBlank(message = "ID cannot be blank")
    val id: String = "",
    @field:NotBlank(message = "s3FilePath cannot be blank")
    val s3FilePath: String = ""
)
Then the function using this model:
@Service
class Listener {

    @RabbitListener(queues = ["\${newsong.queue}"])
    fun received(data: MyModel) {
        val factory = Validation.buildDefaultValidatorFactory()
        val validator = factory.validator
        val validate = validator.validate(data)
        // `validate` holds the set of constraint violations found on `data`
        println(validate)
    }
}
Correct me if I'm wrong, but I assumed that just using @Valid at this point - fun received(@Valid data: MyModel) - would throw some exception for me to catch. Any idea, based on my code, why this didn't happen?
Any advice/help would be appreciated.
Thanks.
Here are the imports:
import com.fasterxml.jackson.annotation.JsonIgnoreProperties
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.stereotype.Service
import javax.validation.*
import org.springframework.amqp.rabbit.core.RabbitTemplate
import org.springframework.amqp.rabbit.annotation.RabbitListener
import javax.validation.constraints.NotBlank
Quoting Kotlin's documentation for annotations:
When you're annotating a property or a primary constructor parameter, there are multiple Java elements which are generated from the corresponding Kotlin element, and therefore multiple possible locations for the annotation in the generated Java bytecode. To specify how exactly the annotation should be generated, use the following syntax:
class Example(@field:Ann val foo,    // annotate the Java field
              @get:Ann val bar,      // annotate the Java getter
              @param:Ann val quux)   // annotate the Java constructor parameter
So, until you explicitly specify what you are annotating (field, getter, or something else) in the Kotlin class constructor, the compiler won't know where to put the annotation. For constructor properties the default use-site target is the constructor parameter, and parameter annotations are invisible to the bean-validation Validator, which inspects fields and getters - hence @field:NotBlank works where plain @NotBlank does not.

How to convert a form field to Vavr Option in Spring controller

I have a class Outcome, one of whose fields is an instance of Schema. But because the latter might be null, I have defined Outcome's getSchema() to return the value wrapped in Vavr's Option. In my Spring Boot 2 application, the controller method that handles updating Outcomes has a parameter @Valid Outcome outcome. However, when attempting to populate this parameter from the form data, Spring's conversion service flags up the following error:
Cannot convert value of type 'java.lang.String' to required type 'io.vavr.control.Option' for property 'schema': no matching editors or conversion strategy found
That is, it has been unable to map a string identifier such as '1234' to the existing Schema instance with that Long identifier and then set the Outcome instance's schema field accordingly. This is despite the fact that in my WebMvcConfigurer class I have added Spring Data's QueryExecutionConverters to Spring's conversion service, which are claimed to handle converting Vavr's Option:
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import org.springframework.core.convert.ConversionService;
import org.springframework.core.convert.support.ConfigurableConversionService;
import org.springframework.core.convert.support.DefaultConversionService;
import org.springframework.data.repository.util.QueryExecutionConverters;
...

@Configuration
public class WebMvcConfig implements WebMvcConfigurer {

    @Bean(name = "conversionService")
    public ConversionService getConversionService() {
        ConfigurableConversionService conversionService = new DefaultConversionService();
        QueryExecutionConverters.registerConvertersIn(conversionService);
        System.out.println("Registered QueryExecutionConverters");
        return conversionService;
    }
    ...
If I change Outcome's getSchema() to return Java 8's Optional<Schema> instead then the schema field is successfully set for the Outcome instance passed to my controller method.
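No accepted fix is recorded in this thread, but one plausible direction is to register a dedicated converter from the submitted String id to Option<Schema>, so the conversion service can resolve the reference itself. This is only a sketch: Schema and SchemaRepository below are hypothetical stand-ins for the application's real types, and SchemaRepository.findById is an assumed lookup method.

```java
import io.vavr.control.Option;
import org.springframework.core.convert.converter.Converter;

import java.util.Optional;

public class StringToSchemaOptionConverter
        implements Converter<String, Option<StringToSchemaOptionConverter.Schema>> {

    // Hypothetical stand-ins for the application's real domain types
    public static class Schema {
        public final Long id;
        public Schema(Long id) { this.id = id; }
    }

    @FunctionalInterface
    public interface SchemaRepository {
        Optional<Schema> findById(Long id);
    }

    private final SchemaRepository repository;

    public StringToSchemaOptionConverter(SchemaRepository repository) {
        this.repository = repository;
    }

    @Override
    public Option<Schema> convert(String source) {
        if (source == null || source.isEmpty()) {
            return Option.none(); // blank form field -> no Schema selected
        }
        return Option.ofOptional(repository.findById(Long.valueOf(source)));
    }
}
```

It could then be registered in getConversionService() alongside the QueryExecutionConverters, e.g. conversionService.addConverter(new StringToSchemaOptionConverter(schemaRepository)); whether QueryExecutionConverters alone is supposed to cover this binding direction is not clear from the documentation.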
