How to access a Couchbase RAW document with the spring-boot-couchbase CrudRepository

I am migrating an old application from the Couchbase Java client (2.2.5) to spring-boot-couchbase. I would like to use Spring Data's CrudRepository.
The problem I am running into is that the document in Couchbase is in RAW format (see more info here: Couchbase non-JSON docs). I have set up a CrudRepository interface, and when connecting to other buckets with JSON-formatted data it works perfectly. When attempting to read the byte array data I get this error:
org.springframework.dao.DataRetrievalFailureException: Flags (0x802) indicate non-JSON document for id 9900fb3d-1edf-4428-b9e6-0ef6c3251c08, could not decode.; nested exception is com.couchbase.client.java.error.TranscodingException: Flags (0x802) indicate non-JSON document for id 9900fb3d-1edf-4428-b9e6-0ef6c3251c08, could not decode.
I have tried the following object types in the repository:
public interface MyRepository extends CouchbasePagingAndSortingRepository<Object, String> { }
- Object
- byte[]
- An object with an id and byte[] fields (and setters & getters) - see the sketch after this list
- Objects from the java-client: CouchbaseDocument, RawJsonDocument, AbstractDocument
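For illustration, the entity variant was along these lines (a sketch; the class and field names here are placeholders, not the original code):

import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;

// Hypothetical sketch of the "id plus byte[]" entity attempt. This still
// fails, because the repository's transcoder checks the stored document's
// flags and refuses to decode non-JSON content regardless of the target type.
@Document
public class RawDoc {

    @Id
    private String id;

    private byte[] content;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public byte[] getContent() { return content; }
    public void setContent(byte[] content) { this.content = content; }
}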
I've also attempted to write a custom Jackson mapper, but the error message stays consistent: it's trying to deserialize a non-JSON document.
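For comparison, reading the document directly through the 2.x SDK (the approach being migrated away from) does work. A sketch, assuming the Bucket is obtained via couchbaseTemplate.getCouchbaseBucket(); the 0x802 flags suggest a legacy-format value, which LegacyDocument can decode:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.LegacyDocument;

// Sketch: read the raw document through the SDK, bypassing the JSON
// transcoder that the Spring Data repository applies. For documents
// written with 2.x binary flags, BinaryDocument would be the choice
// instead (remember to release its ByteBuf content).
public byte[] readRawDocument(Bucket bucket, String id) {
    LegacyDocument doc = bucket.get(id, LegacyDocument.class);
    if (doc == null) {
        return null;
    }
    Object content = doc.content(); // the legacy transcoder may yield byte[] or String
    return content instanceof byte[] ? (byte[]) content : null;
}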

Related

Is ToXMLContentHandler thread-safe?

I am currently using Apache Tika to parse my documents. While parsing I use Tika's AutoDetectParser, ToXMLContentHandler, and Metadata classes in this way:
InputStream stream = new FileInputStream(file);
parser.parse(stream, handler, metadata);
String filecontentInXMLFormat = handler.toString();
I have created the beans for AutoDetectParser, ToXMLContentHandler, and Metadata using Spring, so the same instances will be used when multiple documents are parsed through the same code flow.
So, for example, if a second document comes in through the code flow, will the "handler" object still contain the data of the first document?
In other words, are the ToXMLContentHandler class (org.apache.tika.sax.ToXMLContentHandler) and the Metadata class (org.apache.tika.metadata.Metadata) thread-safe?
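For reference, Tika parsers are generally written to be shared across threads, while ToXMLContentHandler and Metadata accumulate per-document state, so singleton beans for those two would indeed carry data across parses. A sketch of creating them per call instead (the method name is illustrative):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.ToXMLContentHandler;

// Sketch: the parser bean can be shared, but the handler and metadata are
// stateful, so they are created fresh for every document parsed.
public String parseToXml(AutoDetectParser parser, Path file) throws Exception {
    ToXMLContentHandler handler = new ToXMLContentHandler();
    Metadata metadata = new Metadata();
    try (InputStream stream = Files.newInputStream(file)) {
        parser.parse(stream, handler, metadata);
    }
    return handler.toString();
}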

Conversion exception while reading binary field data to byte[] field in ES 7.5.x

I have my entity class defined as follows:
@Document(indexName = "payload-pojo")
public class PayloadPojo {
    @Id
    private String id;
    @Field(index = false, type = FieldType.Binary)
    private byte[] payload;
}
and the Repository defined as follows:
public interface PayloadRepository extends ElasticsearchRepository<PayloadPojo, String> {
}
With ES 6.8.1 (Spring Data Elasticsearch 3.2.0) I managed to store and read the binary data without any problem.
Now I'd like to move to ES 7.5.2, so I migrated the project to use Spring Data Elasticsearch 4.0.0. Since then, when I call something like payloadRepo.findAll(), I get a conversion exception:
Failed to convert from type [java.lang.String] to type [byte].
The data is stored as a base64-encoded string.
Do you have any idea what has changed and how to change my code in order to read this value correctly?
Thanks
FieldType.Binary was added for the 4.0 release. What was still missing until now was an internal converter to convert between byte[] and base64-encoded strings (the binary field type in Elasticsearch is always base64-encoded).
I just added this; it should be in the next snapshot release.
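Until then, a possible interim workaround (sketched here, with illustrative class names) is to register the base64 conversion yourself via ElasticsearchCustomConversions:

import java.util.Arrays;
import java.util.Base64;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.elasticsearch.core.convert.ElasticsearchCustomConversions;

// Sketch: custom converters so byte[] fields round-trip through the base64
// strings Elasticsearch stores for binary fields.
@Configuration
public class BinaryFieldConversions {

    @ReadingConverter
    enum StringToByteArray implements Converter<String, byte[]> {
        INSTANCE;
        @Override
        public byte[] convert(String source) {
            return Base64.getDecoder().decode(source);
        }
    }

    @WritingConverter
    enum ByteArrayToString implements Converter<byte[], String> {
        INSTANCE;
        @Override
        public String convert(byte[] source) {
            return Base64.getEncoder().encodeToString(source);
        }
    }

    @Bean
    public ElasticsearchCustomConversions elasticsearchCustomConversions() {
        return new ElasticsearchCustomConversions(
                Arrays.asList(StringToByteArray.INSTANCE, ByteArrayToString.INSTANCE));
    }
}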
As for the date types: we have added support for the different Elasticsearch date formats, and we support custom date patterns as well - just as Elasticsearch does. This support is available for classes implementing the java.time.TemporalAccessor interface - basically the java.time.* classes.
We dropped the internal support for the old java.util.Date classes in favour of the java.time classes - java.sql.Timestamp is derived from java.util.Date.
We have breaking changes for the 4.0 release, and this is one of them (which reminds me to update the docs).
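For illustration, a date mapping under the new support might look like this (a sketch; the entity and pattern are examples, not from the answer):

import java.time.LocalDateTime;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

// Sketch: a java.time property mapped with a custom date pattern, as
// supported since Spring Data Elasticsearch 4.0.
@Document(indexName = "events")
public class Event {

    @Id
    private String id;

    @Field(type = FieldType.Date, format = DateFormat.custom, pattern = "uuuu-MM-dd HH:mm:ss")
    private LocalDateTime timestamp;
}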

mapstruct with GSON

We are using MapStruct in our project. While it works great for mapping a DTO to a domain object (say EmployeeDTO to EmployeeData with similar properties), we now need to process an incoming JSON string. We are trying to write a very generic mapper that maps from the incoming JSON string to a Java object.
So let's say we have an EmployeeDTO like this:
{ "id": 1, "name": "xxx" }
but it's coming in as a string, and I have a MapStruct mapper like this:
@Mapper
public interface EmployerMapper {
    EmployeeData toEmployeeFromJsonString(String empString); // empString is a JSON string
}
It's not working properly: I am not getting an object created with the right properties from the JSON string (I also tried with a JsonObject, but that doesn't work either).
The reason we can't have specific DTOs is that we want loose coupling between the Employee microservice and the rest of the microservices (there are a handful).
MapStruct is not creating the appropriate getters and setters, and there could be more properties in the DTO that we don't care about in this microservice.
1. Is there support for JSON objects in MapStruct directly?
2. If I enhance it with Gson support, how can I integrate it with MapStruct so that I have only one way of mapping in my product?
No. MapStruct is a mapping framework, not a parsing framework.
For deserialising JSON there are dedicated frameworks: check out How to parse JSON in Java.
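In practice that means parsing first and mapping second. A sketch combining Gson with MapStruct (the mapper name is illustrative; EmployeeDTO and EmployeeData are the classes from the question):

import com.google.gson.Gson;

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

// Sketch: Gson does the parsing, MapStruct does the mapping. Gson silently
// ignores JSON properties the DTO does not declare, which preserves the
// loose coupling between services that the question asks about.
@Mapper
public interface EmployeeJsonMapper {

    EmployeeJsonMapper INSTANCE = Mappers.getMapper(EmployeeJsonMapper.class);

    // implemented by MapStruct from the matching properties (id, name)
    EmployeeData toEmployeeData(EmployeeDTO dto);

    // hand-written default method: parse first, then map
    default EmployeeData fromJson(String json) {
        EmployeeDTO dto = new Gson().fromJson(json, EmployeeDTO.class);
        return toEmployeeData(dto);
    }
}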

Missing Converter when using Spring LdapTemplate with Grails Validateable annotation

I'm using the Spring LDAP (docs) library in a Grails application. I have a class annotated with the @Entry annotation, so it is mapped to an LDAP server. This all works quite beautifully.
However, when I add the Grails @Validateable annotation (to enable validating the LDAP class similarly to Grails domain classes) and attempt to retrieve data from LDAP (i.e. a findAll operation on the LdapUserRepo, or similar), I get the following exception:
Message: Missing converter from class java.lang.String to interface org.springframework.validation.Errors, this is needed for field errors on Entry class com.ldap.portal.LdapUser
Basically, it seems the AST transformation performed by the @Validateable annotation is producing extra fields (namely the errors field) on the LdapUser object. It appears that Spring LDAP, in processing the @Entry logic, assumes a default mapping for that field (probably interpreting it as a string attribute on the LDAP entry). When it gets nothing from the LDAP server, it attempts to set the field of type ValidationErrors to a value of type String -- an empty string.
I did some looking in GitHub and found code that seems relevant and may support my theory.
My question is: is this behavior expected for annotations, and how can one prevent fields added by one annotation from being inappropriately processed by another annotation?
At present, the best workaround I've come up with for my specific issue is to add an errors field to my LdapUser object and mark it as transient (so that LDAP ignores it):
@Transient
ValidationErrors errors

Use JSON deserializer for Batch job execution context

I'm trying to get a list of job executions which have been stored in the Spring Batch tables in the database using:
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstance);
The above method call invokes the ExecutionContextRowMapper.mapRow method in the JdbcExecutionContextDao class.
The ExecutionContextRowMapper uses the com.thoughtworks.xstream.XStream.fromXML method to deserialize the JSON string of the JobExecutionContext stored in the DB.
It looks like an incorrect (default XML) deserializer is used for unmarshalling the JSON-ified JobExecutionContext.
Is there any configuration to use a JSON deserializer in this scenario?
The serializer/deserializer for the ExecutionContext is configurable in 2.2.x. We use the ExecutionContextSerializer interface (providing two implementations, one using java serialization and one using the XStream impl you mention). To configure your own serializer, you'll need to implement the org.springframework.batch.core.repository.ExecutionContextSerializer and inject it into the JobRepositoryFactoryBean (so that the contexts are serialized/deserialized correctly) and the JobExplorerFactoryBean (to reserialize the previously saved contexts).
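A sketch of that wiring (the config class name is illustrative; Jackson2ExecutionContextStringSerializer ships with later Spring Batch releases, while on 2.2.x you would plug in your own implementation of the interface):

import javax.sql.DataSource;

import org.springframework.batch.core.explore.support.JobExplorerFactoryBean;
import org.springframework.batch.core.repository.ExecutionContextSerializer;
import org.springframework.batch.core.repository.dao.Jackson2ExecutionContextStringSerializer;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

// Sketch: inject the same JSON serializer into both factory beans so the
// contexts written by the repository can be read back by the explorer.
@Configuration
public class BatchSerializerConfig {

    @Bean
    public ExecutionContextSerializer executionContextSerializer() {
        return new Jackson2ExecutionContextStringSerializer();
    }

    @Bean
    public JobRepositoryFactoryBean jobRepository(DataSource dataSource,
            PlatformTransactionManager transactionManager,
            ExecutionContextSerializer serializer) {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setSerializer(serializer);
        return factory;
    }

    @Bean
    public JobExplorerFactoryBean jobExplorer(DataSource dataSource,
            ExecutionContextSerializer serializer) {
        JobExplorerFactoryBean factory = new JobExplorerFactoryBean();
        factory.setDataSource(dataSource);
        factory.setSerializer(serializer);
        return factory;
    }
}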
It is important to note that changing the serialization method will prevent Spring Batch from deserializing previously saved ExecutionContexts.
