Spring Data MongoDB: Accessing and updating sub-documents

My first experiments with Spring Data and MongoDB went great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As the documents are big (200K) and the application sometimes needs only sub-parts: is there a way to query for a sub-document (a section), modify it, and save it back?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can Spring Data be used for the object mapping? Thanks for any pointers.

I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
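For the remaining operations from the question (appending and deleting sections), a minimal sketch along the same lines might look as follows; it assumes the same LetterInstance/LetterSection model and mongoTemplate as above, and is untested:

public void addLetterSection(String letterId, LetterSection section) {
    Query query = Query.query(Criteria.where("_id").is(letterId));
    DBObject sectionRec = (DBObject) mongoTemplate.getConverter().convertToMongoType(section);
    // $push appends the new section after the last one
    mongoTemplate.updateFirst(query, new Update().push("sections", sectionRec), LetterInstance.class);
}

public void deleteLetterSection(String letterId, String sectionId) {
    Query query = Query.query(Criteria.where("_id").is(letterId));
    // $pull removes the array element whose _id matches
    Update update = new Update().pull("sections", new BasicDBObject("_id", new ObjectId(sectionId)));
    mongoTemplate.updateFirst(query, update, LetterInstance.class);
}

(insertLetterSection would additionally need the $position modifier on $push, which requires a newer MongoDB server.)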
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!

I think Matthias Wuttke's answer is great; for anyone looking for a generic version of it, see the code below:
@Service
public class MongoUtils {
    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId,
            UUID innerId, Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }
        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could, for example, be called using:
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
    UUID getId();
}
Note: if the nested document's constructor takes parameters of a primitive type, the nested document must also have a default (empty) constructor (which may be protected), so that the class can be instantiated with null arguments.
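To illustrate that note, a hypothetical nested document implementing Domain might look like this (the Item class and its fields are made up for the example):

public class Item implements Domain {
    private UUID id;
    private int quantity; // primitive parameter in the public constructor

    public Item(UUID id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }

    // Required so the mapping layer can instantiate the class with null arguments
    protected Item() {
    }

    @Override
    public UUID getId() {
        return id;
    }
}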

Solution
That's my solution for this problem.
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {
    @Id
    private String _id;
    private String name;
    private String code;
    @Field("desc")
    private String description;
    private String startDate;
    private String endDate;
    @Field("cost")
    private long estimatedCost;
    private List<String> countryList;
    private List<Task> tasks;
    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> UpdateCritTemplChild(
        String id, String idch, String ownername) {
    Query query = new Query();
    query.addCriteria(Criteria.where("_id")
            .is(id)); // find the parent
    query.addCriteria(Criteria.where("tasks._id")
            .is(idch)); // find the child which will be changed
    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated
    return template
            // findAndModify:
            // Find/modify/get the "new object" from a single operation.
            .findAndModify(
                    query, update,
                    new FindAndModifyOptions().returnNew(true), ProjectChild.class
            );
}
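As a usage sketch (assuming template is a ReactiveMongoTemplate and the caller has the parent and task ids at hand; names are illustrative):

// Returns the updated ProjectChild once the child task's owner has been changed
updateService.UpdateCritTemplChild(projectId, taskId, "newOwner")
        .subscribe(updated -> log.info("New version: {}", updated.getVersion()));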

Related

QueryDSL Predicate for use with JPARepository where field is a JSON String converted using an AttributeConverter to a List<Object>

I have a JPA Entity (Terminal) which uses an AttributeConverter to convert a database String into a list of objects (ProgramRegistration). The converter just uses a JSON ObjectMapper to turn the JSON String into POJO objects.
Entity Object
@Entity
@Data
public class Terminal {
    @Id
    private String terminalId;
    @NotEmpty
    @Convert(converter = ProgramRegistrationConverter.class)
    private List<ProgramRegistration> programRegistrations;

    @Data
    public static class ProgramRegistration {
        private String program;
        private boolean online;
    }
}
The Terminal uses the following JPA AttributeConverter to serialize the Objects from and to JSON
JPA AttributeConverter
public class ProgramRegistrationConverter implements AttributeConverter<List<Terminal.ProgramRegistration>, String> {
    private final ObjectMapper objectMapper;
    private final CollectionType programRegistrationCollectionType;

    public ProgramRegistrationConverter() {
        this.objectMapper = new ObjectMapper().setSerializationInclusion(JsonInclude.Include.NON_EMPTY);
        this.programRegistrationCollectionType =
                objectMapper.getTypeFactory().constructCollectionType(List.class, Terminal.ProgramRegistration.class);
    }

    @Override
    public String convertToDatabaseColumn(List<Terminal.ProgramRegistration> attribute) {
        if (attribute == null) {
            return null;
        }
        String json = null;
        try {
            json = objectMapper.writeValueAsString(attribute);
        } catch (final JsonProcessingException e) {
            LOG.error("JSON writing error", e);
        }
        return json;
    }

    @Override
    public List<Terminal.ProgramRegistration> convertToEntityAttribute(String dbData) {
        if (dbData == null) {
            return Collections.emptyList();
        }
        List<Terminal.ProgramRegistration> list = null;
        try {
            list = objectMapper.readValue(dbData, programRegistrationCollectionType);
        } catch (final IOException e) {
            LOG.error("JSON reading error", e);
        }
        return list;
    }
}
I am using Spring Boot and a JPARepository to fetch a Page of Terminal results from the Database.
To filter the results I am using a BooleanExpression as the Predicate. For all the filter values on the Entity it works well, but the List of objects converted from the JSON string does not allow me to easily write an Expression that will filter the Objects in the list.
REST API that is trying to filter the Entity Objects using QueryDSL
@GetMapping(path = "/filtered/page", produces = MediaType.APPLICATION_JSON_VALUE)
public Page<Terminal> findFilteredWithPage(
        @RequestParam(required = false) String terminalId,
        @RequestParam(required = false) String programName,
        @PageableDefault(size = 20) @SortDefault.SortDefaults({ @SortDefault(sort = "terminalId") }) Pageable p) {
    BooleanBuilder builder = new BooleanBuilder();
    if (StringUtils.isNotBlank(terminalId))
        builder.and(QTerminal.terminal.terminalId.upper()
                .contains(StringUtils.upperCase(terminalId)));
    // TODO: Figure out how to use QueryDsl to get the converted List as a predicate
    // The code below to find the programRegistrations does not allow a call to any(),
    // expects a CollectionExpression or a SubqueryExpression for calls to eqAny() or in()
    if (StringUtils.isNotBlank(programName))
        builder.and(QTerminal.terminal.programRegistrations.any().name()
                .contains(StringUtils.upperCase(programName)));
    return terminalRepository.findAll(builder.getValue(), p);
}
I want to get any Terminals that have a ProgramRegistration object whose program name equals the parameter passed into the REST service.
I have been trying to get CollectionExpression or SubqueryExpression working, without success, since they all seem to want to perform a join between two Entity objects. I do not know how to create the path and query so that it can iterate over the programRegistrations, checking the "program" field for a match. I do not have a QProgramRegistration object to join with, since it is just a list of POJOs.
How can I get the predicate to match only the Terminals that have programs with the name I am searching for?
This is the line that is not working:
builder.and(QTerminal.terminal.programRegistrations.any().name()
.contains(StringUtils.upperCase(programName)));
AttributeConverters have issues in Querydsl because they have issues in JPQL, the query language of JPA, itself. It is unclear what the underlying query type of the attribute actually is, and whether a parameter should be a basic type of that query type or should be converted using the conversion. Such a conversion, whilst it appears logical, is not defined in the JPA specification. Thus a basic type of the query type needs to be used instead, which leads to new difficulties, because Querydsl can't know which type that needs to be; it only knows the Java type of the attribute.
A workaround can be to force the field to resolve to a StringPath by annotating it with @QueryType(PropertyType.STRING). Whilst this fixes the issue for some queries, you will run into different issues in other scenarios. For more information, see this thread.
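A sketch of that workaround on the question's entity (the annotation is com.querydsl.core.annotations.QueryType; the Q-classes must be regenerated afterwards):

@Entity
@Data
public class Terminal {
    @Id
    private String terminalId;

    // Forces Querydsl to generate a StringPath for this attribute,
    // so string predicates such as contains()/like() become available.
    @QueryType(PropertyType.STRING)
    @Convert(converter = ProgramRegistrationConverter.class)
    private List<ProgramRegistration> programRegistrations;
}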
Although the following desired Querydsl expression looks like it should work:
QTerminal.terminal.programRegistrations.any().name().contains(programName);
In reality, JPA would never be able to convert it into something that makes sense in terms of SQL. The only SQL that JPA could convert it into would be as follows:
SELECT t.terminal_id FROM terminal t where t.terminal_id LIKE '%00%' and t.program_registrations like '%"program":"MY_PROGRAM_NAME"%';
This would work in this use case, but it would be semantically wrong, and therefore it is correct that it does not work: trying to select unstructured data using a structured query language makes no sense.
The only workaround is to treat the data as characters for the DB search criteria, treat it as a list of Objects after the query completes, and then filter the rows in Java, although this makes the paging feature rather useless.
One possible solution is to have a secondary, read-only String version of the column, used only for the DB search criteria and not converted to JSON by the AttributeConverter:
@JsonIgnore
@Column(name = "programRegistrations", insertable = false, updatable = false)
private String programRegistrationsStr;
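With that shadow column in place, a predicate can treat the JSON as plain text (a sketch; it matches against the raw JSON, so it inherits the semantic caveats above):

if (StringUtils.isNotBlank(programName))
    builder.and(QTerminal.terminal.programRegistrationsStr
            .contains("\"program\":\"" + programName + "\""));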
The real solution is: do not use unstructured data when you want structured queries on that data. Therefore, either move the data to a database that supports JSON natively in queries, or model the data correctly in DDL.
To give a short answer: the parameter used in the predicate on an attribute annotated with @QueryType must also be used in another predicate on an attribute of type String.
It's a known issue, described in this thread: https://github.com/querydsl/querydsl/issues/2652
I simply want to share my experience about this bug.
Model
I have an entity like
@Entity
public class JobLog {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private String id;

    @QueryType(PropertyType.STRING)
    private LocalizedString message;
}
Issue
I want to apply some predicates to message. Unfortunately, with this configuration, I can't do this:
predicates.and(jobLog.message.likeIgnoreCase(escapedTextFilter));
because I run into the same issues as everyone else!
Solution
But I found a way to work around it :)
predicates.and(
(jobLog.id.likeIgnoreCase(escapedTextFilter).and(jobLog.id.isNull()))
.or(jobLog.message.likeIgnoreCase(escapedTextFilter)));
Why does it work around the bug?
It's important that escapedTextFilter is the same in both predicates!
Indeed, in this case the constant is converted to SQL in the first predicate (which is of String type), and the second predicate then uses the converted value.
The downside?
It adds a performance overhead, because we have an OR in the predicate.
Hope this can help someone :)
I've found one way to solve this problem; my main idea is to use the MySQL function cast(xx as char) to trick Hibernate. Below is my base info. My real code is for work, so I've made up an example.
// StudentRepo.java
public interface StudentRepo extends JpaRepository<Student, Long>,
        QuerydslPredicateExecutor<Student>, JpaSpecificationExecutor<Student> {
}

// Student.java
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(of = "id")
@Entity
@Builder
@Table(name = "student")
public class Student {
    @Convert(converter = ClassIdsConvert.class)
    private List<String> classIds;
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}

// ClassIdsConvert.java
public class ClassIdsConvert implements AttributeConverter<List<String>, String> {
    @Override
    public String convertToDatabaseColumn(List<String> ids) {
        // classid23,classid24,classid25
        return String.join(",", ids);
    }

    @Override
    public List<String> convertToEntityAttribute(String dbData) {
        if (StringUtils.isEmpty(dbData)) {
            return null;
        } else {
            return Stream.of(dbData.split(",")).collect(Collectors.toList());
        }
    }
}
My db is below:

id | classIds    | name | address
---|-------------|------|----------
1  | 2,3,4,11    | join | Beijing
2  | 2,31,14,11  | hell | Fujian
3  | 2,12,22,33  | work | Fujian
4  | 1,4,5,6     | ouy  | Guangdong
5  | 11,31,34,22 | yup  | Shanghai
-- ----------------------------
-- Table structure for student
-- ----------------------------
DROP TABLE IF EXISTS `student`;
CREATE TABLE `student` (
`id` int(11) NOT NULL,
`classIds` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
`name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
`address` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;
Using JpaSpecificationExecutor to solve the problem:
Specification<Student> specification = (root, query, criteriaBuilder) -> {
    String classId = "classid24";
    String classIdStr = StringUtils.wrap(classId, "%");
    var predicate = criteriaBuilder.like(root.get("classIds").as(String.class), classIdStr);
    return criteriaBuilder.or(predicate);
};
var students = studentRepo.findAll(specification);
log.info(new Gson().toJson(students));
Pay attention to the code root.get("classIds").as(String.class).
In my opinion, if I don't add .as(String.class), Hibernate will assume the type of student.classIds is a List and throw the exception below:
org.springframework.dao.InvalidDataAccessApiUsageException: Parameter value [%classid24%] did not match expected type [java.util.List (n/a)]; nested exception is java.lang.IllegalArgumentException: Parameter value [%classid24%] did not match expected type [java.util.List (n/a)]
The SQL it needs (below) would run correctly in MySQL, but Hibernate can't produce it:
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
student0_.class_ids LIKE '%classid24%' ESCAPE '!'
If you add .as(String.class), Hibernate will treat student.classIds as a String and won't check the type at all.
The generated SQL is below; it runs correctly in MySQL and through JPA as well.
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
cast( student0_.class_ids AS CHAR ) LIKE '%classid24%' ESCAPE '!'
Since the problem could be solved with JpaSpecificationExecutor, I figured it should also be solvable in Querydsl. In the end I found the template feature of Querydsl.
String classId = "classid24";
StringTemplate st = Expressions.stringTemplate("cast({0} as string)", qStudent.classIds);
var students = Lists.newArrayList(studentRepo.findAll(st.like(StringUtils.wrap(classId, "%"))));
log.info(new Gson().toJson(students));
Its SQL is like below.
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
cast( student0_.class_ids AS CHAR ) LIKE '%classid24%' ESCAPE '!'

Hibernate Criteria FetchMode.JOIN is doing lazy loading

I have a paginated endpoint which internally uses Hibernate Criteria to fetch certain objects and relations. The FetchMode is set as FetchMode.JOIN.
When I try to hit the endpoint, the request seems to work fine for a few pages but then errors out with:
could not initialize proxy - no Session
The method is as below:
@Override
public Page<Person> findAllNotDeleted(final Pageable pageable)
{
    final var criteria = createCriteria();
    criteria.add(Restrictions.or(Restrictions.isNull(DELETED), Restrictions.eq(DELETED, false)));
    criteria.setFetchMode(PERSON_RELATION, FetchMode.JOIN);
    criteria.setFetchMode(DEPARTMENT_RELATION, FetchMode.JOIN);
    criteria.setFirstResult((int) pageable.getOffset());
    criteria.setMaxResults(pageable.getPageSize());
    criteria.addOrder(asc("id"));
    final var totalResult = getTotalResult();
    return new PageImpl<>(criteria.list(), pageable, totalResult);
}

private int getTotalResult()
{
    final Criteria countCriteria = createCriteria();
    countCriteria.add(Restrictions.or(Restrictions.isNull(DELETED), Restrictions.eq(DELETED, false)));
    return ((Number) countCriteria.setProjection(Projections.rowCount()).uniqueResult()).intValue();
}
Also, the call to findAllNotDeleted is made from a method annotated with @Transactional.
Not sure what is going wrong.
Any help would be highly appreciated.
EDIT
I read that FetchMode.JOIN does not work with Restrictions, so I tried implementing it using CriteriaBuilder, but I am again stuck with the issue.
@Override
public Page<Driver> findAllNotDeleted(final Pageable pageable)
{
    final var session = getCurrentSession();
    final var builder = session.getCriteriaBuilder();
    final var query = builder.createQuery(Person.class);
    final var root = query.from(Driver.class);
    root.join(PERSON_RELATION, JoinType.INNER)
        .join(DEPARTMENT_RELATION, JoinType.INNER);
    // flow does not reach here.....
    var restrictions_1 = builder.isNull(root.get(DELETED));
    var restrictions_2 = builder.equal(root.get(DELETED), false);
    query.select(root).where(builder.or(restrictions_1, restrictions_2));
    final var result = session.createQuery(query).getResultList();
    return new PageImpl<>(result, pageable, result.size());
}
The flow does not seem to get past root.join.
EDIT-2
The relations are as follows:
String PERSON_RELATION = "person.address";
String DEPARTMENT_RELATION = "person.department";
and person, address, and department are themselves classes which extend Entity.
I guess the associations you are trying to fetch, i.e. PERSON_RELATION and DEPARTMENT_RELATION, are collections? In that case it is not possible to do pagination directly at the entity level with Hibernate. You would have to fetch the ids first and then do a second query to fetch just the entities with the matching ids, as sketched below.
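A sketch of that two-query approach with the JPA Criteria API, reusing the question's session/builder setup and the nested attribute names from EDIT-2 (untested; the total count is taken from the question's existing getTotalResult()):

public Page<Driver> findAllNotDeletedPaged(final Pageable pageable) {
    final var session = getCurrentSession();
    final var builder = session.getCriteriaBuilder();

    // Query 1: page over the ids only, so limit/offset apply to root entities
    final var idQuery = builder.createQuery(Long.class);
    final var idRoot = idQuery.from(Driver.class);
    idQuery.select(idRoot.<Long>get("id"))
           .where(builder.or(builder.isNull(idRoot.get(DELETED)),
                             builder.equal(idRoot.get(DELETED), false)))
           .orderBy(builder.asc(idRoot.get("id")));
    final List<Long> ids = session.createQuery(idQuery)
            .setFirstResult((int) pageable.getOffset())
            .setMaxResults(pageable.getPageSize())
            .getResultList();

    // Query 2: fetch-join the associations for just those ids, without limit/offset
    final var query = builder.createQuery(Driver.class);
    final var root = query.from(Driver.class);
    final var person = root.fetch("person", JoinType.INNER);
    person.fetch("address", JoinType.INNER);
    person.fetch("department", JoinType.INNER);
    query.select(root).distinct(true).where(root.get("id").in(ids));
    return new PageImpl<>(session.createQuery(query).getResultList(), pageable, getTotalResult());
}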
You could use Blaze-Persistence on top of Hibernate though which has a special pagination API that does these tricks for you behind the scenes. Here is the documentation about the pagination: https://persistence.blazebit.com/documentation/core/manual/en_US/index.html#pagination
There is also a Spring Data integration, so you could use the Spring Data pagination convention along with Blaze-Persistence Entity-Views, which are like Spring Data Projections on steroids. You'd use Page<DriverView> findByDeletedFalseOrDeletedNull(Pageable p) with
@EntityView(Driver.class)
interface DriverView {
    Long getId();
    String getName();
    PersonView getPersonRelation();
    DepartmentView getDepartmentRelation();
}

@EntityView(Person.class)
interface PersonView {
    Long getId();
    String getName();
}

@EntityView(Department.class)
interface DepartmentView {
    Long getId();
    String getName();
}
Using entity views will only fetch what you declare, nothing else. You could also use entity graphs though:
@EntityGraph(attributePaths = {"personRelation", "departmentRelation"})
Page<Driver> findByDeletedFalseOrDeletedNull(Pageable p);

How can I save a SparseArray in a Room database?

I'm trying to persist a SparseArray in a Room database and cannot get it to compile. I keep getting the "Not sure how to convert a Cursor to this method's return type" error message, along with "The query returns some columns [plannerLineData] which are not used by android.util.SparseArray."
I have tried using a single field in the PlannerLine entity, along with a separate PlannerLineData class.
I have data converters to convert SparseArray to String and to convert String back to SparseArray.
I have checked several questions on stackoverflow and have successfully used the Date to Long and the Long to Date converters in other projects, but I seem to be missing something somewhere.
Data Files:
@Entity
public class PlannerLine implements Serializable {
    private static final long serialVersionUID = 1L;
    @TypeConverters(Converters.class)
    @PrimaryKey
    @SerializedName("planner_line")
    @NonNull
    public SparseArray plannerLineData;

    public SparseArray getPlannerLineData() {
        return plannerLineData;
    }

    public void setPlannerLineData(SparseArray plannerLineData) {
        this.plannerLineData = plannerLineData;
    }
}

public class PlannerLineData implements Serializable {
    @SerializedName("lineId")
    public int lineId;
    @SerializedName("plan_text")
    public String planText;

    public int getLineId() {
        return lineId;
    }

    public void setLineId(int lineId) {
        this.lineId = lineId;
    }

    public String getPlanText() {
        return planText;
    }

    public void setPlanText(String planText) {
        this.planText = planText;
    }
}
DAO problem area:
@Dao
public interface PlannerDao {
    @Query("SELECT * from PlannerLine")
    public SparseArray getPlannerLine(); // <--- Doesn't like this line
I have also tried returning SparseArray<PlannerLine> and SparseArray<PlannerLineData>, but no joy.
Converters class:
public class Converters {
    @TypeConverter
    public static String sparseArrayToString(SparseArray sparseArray) {
        if (sparseArray == null) {
            return null;
        }
        int size = sparseArray.size();
        if (size <= 0) {
            return "{}";
        }
        StringBuilder buffer = new StringBuilder(size * 28);
        buffer.append('{');
        for (int i = 0; i < size; i++) {
            if (i > 0) {
                buffer.append("-,- ");
            }
            int key = sparseArray.keyAt(i);
            buffer.append(key);
            buffer.append("-=-");
            Object value = sparseArray.valueAt(i);
            buffer.append(value);
        }
        buffer.append('}');
        return buffer.toString();
    }

    @TypeConverter
    public static SparseArray stringToSparseArray(String string) {
        if (string == null) {
            return null;
        }
        String entrySeparator = "-=-";
        String elementSeparator = "-,-";
        SparseArray sparseArray = new SparseArray();
        // Strip the surrounding '{' and '}' written by sparseArrayToString
        String body = string.substring(1, string.length() - 1);
        if (body.isEmpty()) {
            return sparseArray;
        }
        String[] entries = StringUtils.splitByWholeSeparator(body, elementSeparator);
        for (int i = 0; i < entries.length; i++) {
            String[] parts = StringUtils.splitByWholeSeparator(entries[i].trim(), entrySeparator);
            int key = Integer.parseInt(parts[0]);
            String text = parts[1];
            sparseArray.append(key, text);
        }
        return sparseArray;
    }
}
Suggestions would be appreciated. Thanks
Edit:
My original vision for this app was to store all the plan lines in a single SparseArray, along with two additional SparseIntArrays (which I did not mention before because the solution would be similar to the SparseArray) to hold info on how the plan lines interact with each other.
After reading through @dglozano's helpful responses, I have decided to re-design the app to store regular DB records in Room, load the data into the SparseArray (and the two SparseIntArrays) at startup, use only the in-memory SparseArray and SparseIntArrays while the app is active, then write the changes in the sparse arrays back to the DB during onStop(). I am also considering updating the DB in the background as I work through the app.
Because the answers and suggestions provided by @dglozano led me to the re-design decision, I am accepting his answer as the solution.
Thanks for the help.
It seems that you are doing the Conversion properly. However, the problem is in your DAO Query:
@Query("SELECT * from PlannerLine") // This returns a List of PlannerLine, not a SparseArray
public SparseArray getPlannerLine(); // The return type is SparseArray, not a List of PlannerLine
Therefore, you can try two different things:
1 - Change the Query to @Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId"), so that the query returns the SparseArray inside the PlannerLine with id lineId. You should change the method signature so it accepts the parameter lineId:
@Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId")
public SparseArray getPlannerLine(int lineId);
2 - If you want to return the full PlannerLine object and then access its SparseArray field, you should change the return type. You should also add the lineId parameter, to return just one record rather than a list of all the PlannerLine rows stored in the database table.
@Query("SELECT * FROM PlannerLine WHERE lineId == :lineId")
public PlannerLine getPlannerLine(int lineId);
UPDATE
If you want to get a List<PlannerLine> with all the PlannerLine stored in the database, use the following query in your Dao.
@Query("SELECT * FROM PlannerLine")
public List<PlannerLine> getAllPlannerLines();
Then you can access the SparseArray of each PlannerLine in the list as usual.
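For example (a sketch, assuming a plannerDao instance obtained from your RoomDatabase):

List<PlannerLine> lines = plannerDao.getAllPlannerLines();
for (PlannerLine line : lines) {
    SparseArray data = line.getPlannerLineData(); // deserialized by the TypeConverter
    // work with data here
}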

how does Chronicle-wire support schema evolution?

I am new to Chronicle-wire. The documentation claims support for "setting of fields to the default, if not available" in the schema evolution section.
Do we have an example of how this works?
I have an example of adding an array field to a simple Marshallable object. When reading a journal that contains the old version of the object, how can we set a default value (e.g. new String[0]) for the field instead of null?
There are a few ways to achieve that; one example is below:
public class TestMarshallable implements Marshallable {
    private long a;
    private int b;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        a = wire.read("a").int64();
        b = wire.read("b").int32();
        if (wire.bytes().readRemaining() > 0)
            newField = wire.read("newField").text();
    }
}
In this example it is assumed that your new field is written last, hence you can simply check whether there's more to read, and if so, read it. The default value is the one you assign to the field.
A more complicated, but far more flexible way:
public class TestMarshallable implements Marshallable {
    private long a = 0;
    private int b = 1;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        @NotNull StringBuilder name = new StringBuilder();
        while (!wire.isEmpty()) {
            @NotNull ValueIn in = wire.read(name);
            if (StringUtils.isEqual(name, "a"))
                a = in.int64();
            else if (StringUtils.isEqual(name, "b"))
                b = in.int32();
            else if (StringUtils.isEqual(name, "newField"))
                newField = in.text();
            else
                unexpectedField(name, in);
            wire.consumePadding();
        }
    }
}
In the last example, readMarshallable simply overwrites the fields it can find in the stream, leaving the others with their default values. (NB: this can also be used to save a certain amount of writes; if you often write default values, you can skip them altogether in writeMarshallable.)
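A sketch of that write-side optimization, assuming the same fields as above (whether skipping default values is acceptable depends on your readers):

@Override
public void writeMarshallable(@NotNull WireOut wire) {
    wire.write("a").int64(a);
    wire.write("b").int32(b);
    // Skip the write when the field still holds its default value;
    // readers then fall back to the default on the read side.
    if (!"defaultValue".equals(newField))
        wire.write("newField").text(newField);
}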

Add data to database from Controller, different methods but same row

I have an entity model; for simplification purposes, let's say it looks like this:
public class Results {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private Long firstUser;
    private Long secondUser;
    private Double average;
    private Double median;
    private Double score;
}
This is my ResultsService implementation:
public class ResultsServiceImpl implements ResultsService {
    @Autowired
    private CalculateDataRepository calculateDataRepository;
    @Autowired
    private SurveyDataRepository surveyDataRepository; // used by Score below
    @Autowired
    private ResultsService resultsService;

    Results results = new Results();

    public void Average(Long id1, Long id2) {
        UserData firstClient = calculateDataRepository.findOne(id1);
        UserData secondClient = calculateDataRepository.findOne(id2);
        Long clientId = firstClient.getClient().getId();
        Long secondId = secondClient.getClient().getId();
        Double average = (firstClient.getA() + secondClient.getA()) / 2;
        results.setAverage(average);
    }

    public void Score(Long id1, Long id2) {
        SurveyData firstClient = surveyDataRepository.findOne(id1);
        SurveyData secondClient = surveyDataRepository.findOne(id2);
        Long clientId = firstClient.getClient().getId();
        Long secondId = secondClient.getClient().getId();
        Double score = firstClient.getB() + secondClient.getB();
        results.setScore(score);
        results.setFirstUser(clientId);
        results.setSecondUser(secondId);
        resultsService.save(results);
    }
    ....
I tried declaring Results results=new Results(); inside every method, but when I save them they get saved in different rows, instead of the same one.
How do I hold the reference so that when I call a setter on a field in one method, it writes to the same row as the setter called in the other method?
To keep the problem focused, I tried to avoid showing the implementation of calculateDataRepository, which is just the repository of an entity where some results are saved for different users.
The Results entity has no foreign key reference nor a reference from elsewhere; there are just the fields firstUser and secondUser, which I set from one of the methods.
Thank you.
Edit:
Results results = resultsService.findByFirstUserAndSecondUser(clientId, secondId);
if (results == null) {
    results = new Results();
    // Store to db ?
}
results.setAverage(average);
resultsService.save(results);
Actually you need a method in ResultsRepository:
Results findByFirstUserAndSecondUser(Long firstUser, Long secondUser);
In each of the Average and Score methods (BTW, the Java naming convention is for method names to start with a lowercase letter), call findByFirstUserAndSecondUser(clientId, secondId).
If the method returns null (no such result), create a new instance and save it in the DB (INSERT). If some Results is returned, store the info in it and save the changes in the DB (UPDATE). A combined sketch follows below.
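A sketch of that find-or-create flow in one place (the repository method name follows the entity's firstUser/secondUser fields; resultsRepository is an assumed ResultsRepository):

public void score(Long id1, Long id2) {
    SurveyData firstClient = surveyDataRepository.findOne(id1);
    SurveyData secondClient = surveyDataRepository.findOne(id2);
    Long clientId = firstClient.getClient().getId();
    Long secondId = secondClient.getClient().getId();

    // Reuse the existing row for this user pair, or start a new one
    Results results = resultsRepository.findByFirstUserAndSecondUser(clientId, secondId);
    if (results == null) {
        results = new Results();
        results.setFirstUser(clientId);
        results.setSecondUser(secondId);
    }
    results.setScore(firstClient.getB() + secondClient.getB());
    resultsRepository.save(results); // INSERT on the first call, UPDATE afterwards
}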
