spring-data-redis, empty list attribute value becomes null - spring-boot

I'm in the process of porting some microservices from Spring Boot 1.5 to 2.1.
We are using spring-data-redis, and it seems the default driver has moved from Jedis to Lettuce.
The thing is, we now observe some weird behaviour: when we save an object and then retrieve it, there is a tiny difference:
empty list attributes come back as null.
Here is an example:
// repository
public interface TestRepository extends CrudRepository<Test, String> {}
...
// object
@RedisHash(timeToLive = 60)
public static class Test {
    @Id private String id;
    int age;
    List<String> friends;
    // all-args constructor, getters and toString omitted
}
...
// saving then retrieving
Test test = new Test("1", 15, Collections.emptyList());
System.out.println(test);
testRepository.save(test);
Test testGet = testRepository.findById("1").get();
System.out.println(testGet);
and here is what happens:
//before
{
    "id": "1",
    "age": 15,
    "friends": []
}
//after
{
    "id": "1",
    "age": 15
}
The friends empty list has disappeared. This new behaviour affects our code in many places, leading to NullPointerExceptions etc.
Apparently there are multiple serializers available, but switching them doesn't seem to have any effect. Any idea?
https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis:serializer
for reference:
springBootVersion = '2.1.5.RELEASE'
springCloudVersion = 'Greenwich.SR1'

I met this problem too. I solved it like this:
@RedisHash(timeToLive = 60)
public class MyData implements Serializable {
    @Id
    private String id;
    private List<Object> objects = new ArrayList<>();
}
If I save MyData with an empty objects list, objects will not be null when I pull it back from Redis, but an empty list. If I save MyData with a non-empty objects list, its elements will not be lost after deserialization.
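A minimal round trip illustrating the fix (the repository and accessor names below are assumptions, not from the original answer):
// assumes a MyDataRepository extends CrudRepository<MyData, String> plus the usual accessors
MyData data = new MyData();
data.setId("1");
myDataRepository.save(data); // objects is stored as an empty list
MyData loaded = myDataRepository.findById("1").get();
System.out.println(loaded.getObjects()); // prints [] instead of null
The effect comes from the field initializer: when the hash read back from Redis contains no entries for objects, the converter leaves the default in place, so the empty ArrayList survives.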

Related

How can I generalize the outer class in JSON response mapping using RestTemplate?

I'm developing a series of clients for webservices, and I'm observing that they follow a simple JSON structure, like this:
{
    "typex": { "property1" : "value" },
    "page": 1,
    "count": 200,
    "next_page": 2
}
I have nearly 15 webservices, each returning the same structure, where the main information is within the object "typex" and its properties.
Today, each of these responses needs at least 2 classes: one for the outer data and one for the inner information.
public class TypeXWrapper {
    @JsonProperty
    TypeX typex;
    @JsonProperty
    Integer page;
    @JsonProperty
    Integer count;
    @JsonProperty
    Integer next_page;
}

public class TypeX {
    @JsonProperty
    String property1;
}
In this way, it would be necessary to create 30 classes.
Is there any way to implement some kind of generalization for situations like this?
I was thinking something like:
public class GenericWrapper<InnerClass> {
    @JsonProperty
    InnerClass data;
    @JsonProperty
    Integer page;
    @JsonProperty
    Integer count;
    @JsonProperty
    Integer next_page;
}

public class TypeX {
    @JsonProperty
    String property1;
}
But the problem is that the root property holding the inner class data changes for each webservice endpoint. That means that in the next response, for example for TypeY, the property will be named "typey", like:
{
    "typey": { "property2" : "value" },
    "page": 1,
    "count": 200,
    "next_page": 2
}
Is there anything that I could use to achieve this generalization? The environment and frameworks are Spring Boot 2.1.18 (can change if needed), using RestTemplate with the return object encapsulated in a ParameterizedTypeReference.
Thanks!
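One possible direction, sketched under the assumption that Jackson's ObjectMapper is used directly, the metadata property names are fixed, and the wrapper's fields are accessible (or replaced with setters): fetch the body as a String, read page/count/next_page from the tree, and bind whichever root property remains to the inner type. The parse helper below is hypothetical, not an existing API:
import java.io.IOException;
import java.util.Iterator;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical helper: binds the one non-metadata root property
// ("typex", "typey", ...) to the requested inner type.
static <T> GenericWrapper<T> parse(String json, Class<T> innerType, ObjectMapper mapper)
        throws IOException {
    JsonNode root = mapper.readTree(json);
    GenericWrapper<T> wrapper = new GenericWrapper<>();
    wrapper.page = root.get("page").asInt();
    wrapper.count = root.get("count").asInt();
    wrapper.next_page = root.get("next_page").asInt();
    Iterator<String> names = root.fieldNames();
    while (names.hasNext()) {
        String name = names.next();
        if (!name.equals("page") && !name.equals("count") && !name.equals("next_page")) {
            wrapper.data = mapper.treeToValue(root.get(name), innerType);
        }
    }
    return wrapper;
}
A call could then look like parse(restTemplate.getForObject(url, String.class), TypeX.class, new ObjectMapper()); one helper call per webservice instead of one wrapper class per webservice.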

Custom Mapping from JSON to Java/POJO in Spring RestTemplate response

I have a Spring Boot application where I am using RestTemplate to call a REST API, and I receive the following JSON-formatted response:
{
    "data": [
        {
            "id": "1",
            "type": "type1",
            "config": {
                "property1" : "value1",
                "property2" : "value2"
            }
        },
        {
            "id": "2",
            "type": "type2",
            "config": {
                "property3" : "value3",
                "property4" : "value4",
                "propArray": [ "element1", "element2" ]
            }
        }
    ]
}
The individual elements within the array 'data' have a few different structures (2 examples above), and I would like to map a different class type to each element depending on the value of the element 'type'.
For example, the value 'type1' should map to an object of class 'Type1', and so on.
I have the classes created as below:
MyResponse:
public class MyResponse {
    List<Data> data;
    // getters and setters
}
Data:
public interface Data {}
Type1:
public class Type1 implements Data {
    private String property1;
    private String property2;
    // getters and setters
}
Type2:
public class Type2 implements Data {
    private String property3;
    private String property4;
    private List<String> propArray;
    // getters and setters
}
How can I map the above conditional structure?
The only way I could think of is to get the returned value as a String, convert it to a JSONObject, and process it to create the instances of your classes. For example:
String response = restTemplate.<your-uri>
JSONObject jsonObject = new JSONObject(response);
if (jsonObject.get("type").equals("type1")) {
    Type1 type1 = new Type1();
    // set values
} else if (jsonObject.get("type").equals("type2")) {
    Type2 type2 = new Type2();
    // set values
}
However, this is not scalable, and if you are going to add more and more types it will become very difficult to maintain clean code.
Another way you can do this is to create a general class and receive the response as a List of that class. That way Spring Boot/Jackson can do the mapping. Again, you have to add code to create the other classes from this general class. As Sam pointed out in the comment, this would be the preferred option, since Jackson is faster than JSONObject. Here is what a sample class would look like:
class Response {
    private Integer id;
    private String type;
    private Map<String, Object> config;
}
You still have to check the type and map to the corresponding class.
Instead of writing such messy code, I would consider re-architecting your design/response format, if you have control over it.
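A third option, not part of the original answer: Jackson's polymorphic type handling can dispatch on the "type" value itself, so no manual checks are needed. A minimal sketch, with the subtype fields assumed from the sample payload:
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Jackson reads the existing "type" property of each "data" element and
// instantiates the matching subtype automatically.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY,
        property = "type", visible = true)
@JsonSubTypes({
        @JsonSubTypes.Type(value = Type1.class, name = "type1"),
        @JsonSubTypes.Type(value = Type2.class, name = "type2")
})
public interface Data {}

public class Type1 implements Data {
    public String id;
    public String type;
    public Map<String, Object> config; // or a dedicated Type1Config class
}
With this in place, deserializing into MyResponse fills List<Data> with the right concrete types in one step.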

How to not send the @IdClass object in Spring JSON responses

I'm setting up a server to expose a CRUD API over a PostgreSQL database using JPA. Every time I expose an object from the DB, the id object is duplicated.
When I get an object from the database using Spring Framework and send it back, it duplicates the id object like this:
{
    "siteId": 3,
    "contractId": "1",
    "name": "sitenumber1",
    "siteIdObject": {
        "siteId": 3,
        "contractId": "1"
    }
}
siteId and contractId are repeated...
but I want something like this:
{
    "siteId": 3,
    "contractId": "1",
    "name": "sitenumber1"
}
I want to avoid using DTOs because I think there is a better way, but I can't find it. Since I have been using Spring Framework for just one or two months, I may be forgetting something...
Here is the code:
Site code:
@Entity
@IdClass(SiteId.class)
@Table(name = "site", schema = "public")
public class Site {
    @Id
    @Column(name = "siteid")
    private Integer siteId;
    @Id
    @Column(name = "clientid")
    private Integer contractId;
    private String name;
    @JsonIgnore
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "site")
    public Set<Device> devices;
    // setters, getters, hashCode, equals, toString, empty and full constructors
}
SiteId code:
public class SiteId implements Serializable {
    private Integer siteId;
    private Integer contractId;
    // setters, getters, empty and full constructors, hashCode and equals
}
Thanks for the help :)
Bessaix Daniel
If you are using Spring you might also be using Jackson, so if you annotate your SiteId class with @JsonIgnoreType it shouldn't be serialized at all when the Site object is serialized.
I am, however, unsure whether this will break your application logic now that the id object is not serialized anymore.
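A minimal sketch of that suggestion applied to the SiteId class above:
import com.fasterxml.jackson.annotation.JsonIgnoreType;

// Jackson skips every property whose declared type is annotated with
// @JsonIgnoreType, so the nested siteIdObject disappears from the output
// while the flat siteId and contractId fields on Site are still serialized.
@JsonIgnoreType
public class SiteId implements Serializable {
    private Integer siteId;
    private Integer contractId;
    // setters, getters, constructors, hashCode and equals as before
}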

@Field(type = FieldType.Keyword) being ignored on certain properties

I'm having trouble with the way Spring Data Elasticsearch creates some of my indices on application startup. I've got some String fields that I want to be of type "keyword", but they are always created as type "text". This is with Elasticsearch 5.5.1, Spring 5.0.0 and Spring Data Kay-RELEASE.
As an example I've got something like follows:
// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department", createIndex = false)
public class DepartmentSearchResult implements Serializable {
    @Id private String id;
    private String foo;
    // other fields, getters, setters etc. omitted
}
// DepartmentSearchingRepository.java
public interface DepartmentSearchingRepository extends ElasticsearchRepository<DepartmentSearchResult, String> {}
// ApplicationStartupListener.java
@Component
public class ApplicationStartupListener implements ApplicationListener<ContextRefreshedEvent> {
    @Autowired ElasticsearchTemplate elasticSearchTemplate;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        if (!elasticSearchTemplate.indexExists(DepartmentSearchResult.class)) {
            elasticSearchTemplate.createIndex(DepartmentSearchResult.class);
            elasticSearchTemplate.putMapping(DepartmentSearchResult.class);
        }
    }
}
(I've set createIndex to false and have the ApplicationStartupListener instead due to an issue I asked about at How to correctly address "can't add a _parent field that points to an already existing type, that isn't already a parent" on app startup. Including it here in case this is somehow related.)
Anyway, I want DepartmentSearchResult.id to be keyword, not text. So I've changed from:
@Id private String id;
to:
@Field(type = FieldType.Keyword) @Id private String id;
I then drop the index and restart my app. The index and the mapping are created, but "id" is always created as text, not keyword. Other fields work properly; if I change foo to @Field(type = FieldType.Keyword) private String foo; it shows up as keyword in the mapping. It's just this id field that I can't get working.
My current workaround is to just create the mapping manually like:
curl -s -X PUT http://localhost:9200/hr_index/_mapping/department -d '{
    "properties": {
        "id": {
            "type": "keyword"
        }
    }
}'
and that works, but I'd much prefer for Spring to just create the mapping on the fly so I don't need to maintain a separate mapping installation script. Any places I should be looking that might indicate why this field keeps getting created as "text"?

Spring Data MongoDB: Accessing and updating sub documents

First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As documents are big (200K) and sometimes only sub-parts are needed by the application: Is there a possibility to query for a sub-document (section), modify and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can Spring Data be used for the object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great; for anyone looking for a generic version of it, see the code below:
@Service
public class MongoUtils {
    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }
        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could, for example, be called using:
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
    UUID getId();
}
Note: if the nested document's constructor has parameters of primitive datatypes, it is important for the nested document to also have a default (empty) constructor, which may be protected, so that the class can be instantiated with null arguments.
Solution
That's my solution for this problem:
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {
    @Id
    private String _id;
    private String name;
    private String code;
    @Field("desc")
    private String description;
    private String startDate;
    private String endDate;
    @Field("cost")
    private long estimatedCost;
    private List<String> countryList;
    private List<Task> tasks;
    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> UpdateCritTemplChild(
        String id, String idch, String ownername) {
    Query query = new Query();
    query.addCriteria(Criteria.where("_id")
            .is(id)); // find the parent
    query.addCriteria(Criteria.where("tasks._id")
            .is(idch)); // find the child which will be changed

    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated

    return template
            // findAndModify:
            // Find/modify/get the "new object" from a single operation.
            .findAndModify(
                    query, update,
                    new FindAndModifyOptions().returnNew(true), ProjectChild.class
            );
}
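A hypothetical invocation (the service reference and id values are placeholders; template is assumed to be a ReactiveMongoTemplate, since the method returns a Mono):
// Emits the parent document with the child's ownername already updated.
service.UpdateCritTemplChild("parentId", "taskId", "alice")
        .subscribe(child -> System.out.println(child.getName()));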
