I'm trying to persist a SparseArray in a Room database and cannot get it to compile. I keep getting the "Not sure how to convert a Cursor to this method's return type" error message, along with "The query returns some columns [plannerLineData] which are not used by android.util.SparseArray."
I have tried using a single field in the PlannerLine entity along with a separate PlannerLineData class.
I have data converters to convert SparseArray to String and to convert String back to SparseArray.
I have checked several questions on stackoverflow and have successfully used the Date to Long and the Long to Date converters in other projects, but I seem to be missing something somewhere.
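For reference, this is the standard Date converter pattern that has worked for me in other projects (the usual pattern from the Room examples):

public class DateConverters {

    @TypeConverter
    public static Date fromTimestamp(Long value) {
        return value == null ? null : new Date(value);
    }

    @TypeConverter
    public static Long dateToTimestamp(Date date) {
        return date == null ? null : date.getTime();
    }
}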
Data Files:
@Entity
public class PlannerLine implements Serializable {

    private static final long serialVersionUID = 1L;

    @TypeConverters(Converters.class)
    @PrimaryKey
    @SerializedName("planner_line")
    @NonNull
    public SparseArray plannerLineData;

    public SparseArray getPlannerLineData() {
        return plannerLineData;
    }

    public void setPlannerLineData(SparseArray plannerLineData) {
        this.plannerLineData = plannerLineData;
    }
}
public class PlannerLineData implements Serializable {

    @SerializedName("lineId")
    public int lineId;

    @SerializedName("plan_text")
    public String planText;

    public int getLineId() {
        return lineId;
    }

    public void setLineId(int lineId) {
        this.lineId = lineId;
    }

    public String getPlanText() {
        return planText;
    }

    public void setPlanText(String planText) {
        this.planText = planText;
    }
}
DAO problem area:
@Dao
public interface PlannerDao {

    @Query("SELECT * from PlannerLine")
    public SparseArray getPlannerLine(); // <--- Doesn't like this line
}
I have also tried returning SparseArray<PlannerLine> and SparseArray<PlannerLineData>, but no joy.
Converters class:
public class Converters {

    @TypeConverter
    public static String sparseArrayToString(SparseArray sparseArray) {
        if (sparseArray == null) {
            return null;
        }
        int size = sparseArray.size();
        if (size <= 0) {
            return "{}";
        }
        StringBuilder buffer = new StringBuilder(size * 28);
        buffer.append('{');
        for (int i = 0; i < size; i++) {
            if (i > 0) {
                buffer.append("-,-"); // element separator (no trailing space, so it matches the split below)
            }
            int key = sparseArray.keyAt(i);
            buffer.append(key);
            buffer.append("-=-"); // entry separator between key and value
            Object value = sparseArray.valueAt(i);
            buffer.append(value);
        }
        buffer.append('}');
        return buffer.toString();
    }

    @TypeConverter
    public static SparseArray stringToSparseArray(String string) {
        if (string == null) {
            return null;
        }
        String entrySeparator = "-=-";
        String elementSeparator = "-,-";
        SparseArray sparseArray = new SparseArray();
        // Strip the surrounding braces written by sparseArrayToString() before splitting
        String body = StringUtils.strip(string, "{}");
        if (body.isEmpty()) {
            return sparseArray;
        }
        String[] entries = StringUtils.splitByWholeSeparator(body, elementSeparator);
        for (int i = 0; i < entries.length; i++) {
            String[] parts = StringUtils.splitByWholeSeparator(entries[i], entrySeparator);
            int key = Integer.parseInt(parts[0].trim());
            String text = parts[1];
            sparseArray.append(key, text);
        }
        return sparseArray;
    }
}
Suggestions would be appreciated. Thanks
Edit:
My original vision for this app was to store all the plan lines in a single SparseArray, along with two additional SparseIntArrays (which I did not mention before because the solution would be similar to the SparseArray) to hold info on how the plan lines interact with each other.
After reading through @dglozano's helpful responses, I have decided to re-design the app to just store regular DB records in Room and load the data into the SparseArray (and the two SparseIntArrays) at startup, use only the in-memory SparseArray and SparseIntArrays while the app is active, then write the changes in the sparse arrays back to the DB during onStop(). I am also considering updating the DB in the background as I work through the app.
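A rough sketch of the startup load I have in mind (the DAO method and names here are placeholders, not final code):

// Sketch: load all rows into the in-memory SparseArray at startup.
// plannerDao.getAllLines() stands in for a DAO method returning List<PlannerLineData>.
SparseArray<String> planLines = new SparseArray<>();
for (PlannerLineData row : plannerDao.getAllLines()) {
    planLines.put(row.getLineId(), row.getPlanText());
}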
Because the answers and suggestions provided by @dglozano led me to the re-design decision, I am accepting his answer as the solution.
Thanks for the help.
It seems that you are doing the conversion properly. However, the problem is in your DAO query:
#Query("SELECT * from PlannerLine") // This returns a List of PlannerLine, not a SparseArray
public SparseArray getPlannerLine(); // The return type is SparseArray, not a List of PlannerLine
Therefore, you can try two different things:
1 - Change the query to @Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId"), so that the query returns the SparseArray inside the PlannerLine with id lineId. You should change the method signature so that it accepts the parameter lineId:
#Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId")
public SparseArray getPlannerLine(int lineId);
2 - If you want to return the full PlannerLine object and then access its SparseArray field, then you should change the return type. You should also add the lineId parameter to return just one record, and not a list of all the PlannerLine records stored in the database table.
#Query("SELECT * FROM PlannerLine WHERE lineId == :lineId")
public PlannerLine getPlannerLine(int lineId);
UPDATE
If you want to get a List<PlannerLine> with all the PlannerLine records stored in the database, use the following query in your Dao.
#Query("SELECT * FROM PlannerLine")
public List<PlannerLine> getAllPlannerLines();
Then you can access the SparseArray of each PlannerLine in the list as usual.
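One more thing worth double-checking (an assumption on my side, since your Database class is not shown): the converters must also be registered so Room can find them, typically on the database class. A minimal sketch, with illustrative class and method names:

@Database(entities = {PlannerLine.class}, version = 1)
@TypeConverters(Converters.class) // makes the converters available to all DAOs
public abstract class AppDatabase extends RoomDatabase {
    public abstract PlannerDao plannerDao();
}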
I have an endpoint to get all Posts, and I also have multiple @RequestParams used to filter and search for values, etc.
The issue I'm having is that when filtering based on specific @RequestParams, I need multiple checks to see whether each parameter was passed when calling the endpoint, so in my Controller I have something like the snippet below. The parameters are optional; I also have parameters for pagination etc., but I left those out.
I have these criteria:
@RequestParam(required = false) List<String> brand - Used to filter by multiple brands
@RequestParam(required = false) String province - Used to filter by province
@RequestParam(required = false) String city - Used to filter by city
// Using these 2 for getting Posts within a certain price range
@RequestParam(defaultValue = "0", required = false) String minValue - Used to filter by min price
@RequestParam(defaultValue = "5000000", required = false) String maxValue - Used to filter by max price
I also have this in my Controller, checking which of my service methods to call based on the parameters passed:
if (query != null) {
    pageTuts = postService.findAllPosts(query, pagingSort);
} else if (brand != null) {
    pageTuts = postService.findAllByBrandIn(brand, pagingSort);
} else if (minValue != null && maxValue != null) {
    pageTuts = postService.findAllPostsByPriceBetween(minValue, maxValue, pagingSort);
} else if (brand != null && minValue != null && maxValue != null) {
    // never reached: the "brand != null" branch above already matches
    pageTuts = postService.findAllPostsByPriceBetween(minValue, maxValue, pagingSort);
} else {
    // if no parameters are passed in the request, just get all the Posts available
    pageTuts = postService.findAllPosts(pagingSort);
}
// I would need more checks to handle all parameters
The issue is that I would seemingly need a condition for each and every possible parameter combination, which means a lot of checks and Repository/Service methods.
For example in my Repository I have abstract methods like these:
Page<Post> findAllByProvince(String province, Pageable pageable);
Page<Post> findAllByCity(String city, Pageable pageable);
Page<Post> findAllByProvinceAndCity(String province, String city, Pageable pageable);
Page<Post> findAllByBrandInAndProvince(List<String> brand, String province, Pageable pageable);
And I'd need many more to handle the other potential combinations, e.g. findAllByPriceBetween(), findAllByCityAndPriceBetween(), findAllByProvinceAndPriceBetween()...
So I'd like some suggestions on how to handle this.
Edit
Managed to get it working by overriding the toPredicate method as shown by @M. Deinum, with some small tweaks according to my use case.
@Override
public Predicate toPredicate(Root root, CriteriaQuery query, CriteriaBuilder builder) {
    List<Predicate> predicates = new ArrayList<>();
    // min/max are always set, as they have default values
    predicates.add(builder.between(root.get("price"), params.getMinValue(), params.getMaxValue()));
    if (params.getProvince() != null) {
        predicates.add(builder.equal(root.get("province"), params.getProvince()));
    }
    if (params.getCity() != null) {
        predicates.add(builder.equal(root.get("city"), params.getCity()));
    }
    if (!CollectionUtils.isEmpty(params.getBrand())) {
        Expression<String> userExpression = root.get("brand");
        Predicate p = userExpression.in(params.getBrand());
        predicates.add(p);
    }
    return builder.and(predicates.toArray(new Predicate[0]));
}
1. Create an object to hold your variables instead of individual elements.
2. Move the logic to your service and pass the object and pageable to the service.
3. Ditch those findAll methods from your repository and add JpaSpecificationExecutor to your extends clause.
4. In the service, create a Specification and use JpaSpecificationExecutor.findAll to return what you want.
public class PostSearchParameters {

    private String province;
    private String city;
    private List<String> brand;
    private int minValue = 0;
    private int maxValue = 5000000;

    // getters/setters, or on Java 17+ use a record instead of a class
}
Predicate
public class PostSearchParametersSpecification implements Specification<Post> {

    private final PostSearchParameters params;

    PostSearchParametersSpecification(PostSearchParameters params) {
        this.params = params;
    }

    @Override
    public Predicate toPredicate(Root<Post> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
        List<Predicate> predicates = new ArrayList<>();
        // min/max are always set, as they have default values
        predicates.add(builder.between(root.<Integer>get("price"), params.getMinValue(), params.getMaxValue()));
        if (params.getProvince() != null) {
            predicates.add(builder.equal(root.get("province"), params.getProvince()));
        }
        if (params.getCity() != null) {
            predicates.add(builder.equal(root.get("city"), params.getCity()));
        }
        if (!CollectionUtils.isEmpty(params.getBrand())) {
            predicates.add(root.get("brand").in(params.getBrand()));
        }
        return builder.and(predicates.toArray(new Predicate[0]));
    }
}
Repository
public interface PostRepository extends JpaRepository<Post, Long>, JpaSpecificationExecutor<Post> {}
Service method
public Page<Post> searchPosts(PostSearchParameters params, Pageable pageSort) {
    PostSearchParametersSpecification specification =
            new PostSearchParametersSpecification(params);
    return repository.findAll(specification, pageSort);
}
Now you can query on all available parameters; adding one means extending/modifying the specification, and you are good to go.
See also the Spring Data JPA Reference guide on Specifications
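For completeness, a controller method could then look something like this (a sketch; the mapping path is illustrative, and Spring binds the request parameters onto PostSearchParameters automatically when the field names match):

@GetMapping("/posts")
public Page<Post> searchPosts(PostSearchParameters params, Pageable pageable) {
    // all filtering decisions now live in the specification, not the controller
    return postService.searchPosts(params, pageable);
}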
I'm developing a Spring Boot application with Spring Data JPA. I'm using a custom JPQL query to group by some field and get the count. Following is my repository method.
#Query("SELECT v.status.name, count(v) as cnt FROM Pet v GROUP BY v.status.name")
List<Object[]> countByStatus();
It's working and the result is obtained as follows:
[
  [
    "pending",
    1
  ],
  [
    "available",
    4
  ]
]
However, I would like my Rest endpoint to respond with an output which is formatted like this
{
  "pending": 1,
  "available": 4
}
How can I achieve this?
Basically you want to produce a JSON where its properties ("pending", "available") are dynamic and come from the SELECT v.status.name part of the query.
Create a DTO to hold the row values:
package com.example.demo;

public class ResultDTO {

    private final String key;
    private final Long value;

    public ResultDTO(String key, Long value) {
        this.key = key;
        this.value = value;
    }

    public String getKey() {
        return key;
    }

    public Long getValue() {
        return value;
    }
}
Change your query to create a new ResultDTO per row:
#Query("SELECT new com.example.demo.ResultDTO(v.status.name, count(v)) as cnt FROM Pet v GROUP BY v.status.name")
List<ResultDTO> countByStatus();
"com.example.demo" is my package, you should change it to yours.
Then from your service class or from your controller you have to convert the List<ResultDTO> to a Map<String, Long> holding all rows' keys and values.
final List<ResultDTO> repositoryResults = yourRepository.countByStatus();
final Map<String, Long> results = repositoryResults.stream().collect(Collectors.toMap(ResultDTO::getKey, ResultDTO::getValue));
Your controller should then be able to transform the final Map<String, Long> results to the desired JSON.
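For example, a sketch (the mapping path is illustrative):

@GetMapping("/pets/count-by-status")
public Map<String, Long> countByStatus() {
    final List<ResultDTO> repositoryResults = yourRepository.countByStatus();
    // collect each row's key/value pair into a single map
    return repositoryResults.stream()
            .collect(Collectors.toMap(ResultDTO::getKey, ResultDTO::getValue));
}

Spring will serialize the returned Map as the desired JSON object.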
I have a REST service with two operations, /balance and /transactions, to get the balance and transactions of a customer.
The return types of these operations are BalanceResponse and TransactionResponse, and both types extend Response.
When documenting the /balance service operation, it also lists the response fields of the second subtype (TransactionsResponse).
How can I display only the fields corresponding to the actual return type?
If it's /balance, then display (status, balance and restrictions); if it's /transactions, only display (status and list of transactions) in the response fields.
Can somebody please let me know how to handle inheritance types in the docs?
Please find below the code snippet and the doc generated by Spring Auto REST Docs.
// Base class
@JsonTypeInfo(use = NAME, include = PROPERTY, property = "type", visible = true)
@JsonSubTypes({
        @JsonSubTypes.Type(value = BalanceResponse.class, name = "BalanceResponse"),
        @JsonSubTypes.Type(value = TransactionsResponse.class, name = "TransactionResponse")})
public class Response {

    public Status status;
    ....
    ...

    public Response(StatusCode status) {
        this.status = new Status(status.getCode(), status.getDescription());
    }
}
// Type 1: BalanceResponse
@JsonPropertyOrder({ "status", "balance", "restrictions" })
public class BalanceResponse extends Response {

    /**
     * The balance of this account.
     */
    public int balance = -1;

    /**
     * List of limitations on this account.
     */
    public List<String> restrictions = Collections.emptyList();
}
// SubType 2: TransactionsResponse
public class TransactionsResponse extends Response {
    public List<Transaction> transactions;
}
[Screenshot: response fields generated by Spring Auto REST Docs]
You need to return a specific subtype in your controller method for Spring Auto REST Docs to output only that type's fields. If you return the parent type, then the response can be anything, and SARD will output all possible fields from all subtypes.
// returns all subtype fields
public Response anything() {
    return new BalanceResponse();
}

// returns only BalanceResponse fields
public BalanceResponse balances() {
    return new BalanceResponse();
}
I am new to Chronicle-Wire. In the documentation, it claims support for "setting of fields to the default, if not available" in the schema evolution section.
Do we have an example of how this works?
I have an example of adding an array field to a simple Marshallable object. When reading a journal that contains the old version of the object, how can we set a default value (e.g. new String[0]) for the field instead of null?
There are a few ways to achieve that; one example is below:
public class TestMarshallable implements Marshallable {

    private long a;
    private int b;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        a = wire.read("a").int64();
        b = wire.read("b").int32();
        // If anything is left to read, the stream was written by the new
        // version and contains the new field; otherwise keep the default.
        if (wire.bytes().readRemaining() > 0)
            newField = wire.read("newField").text();
    }
}
In this example, it is assumed that your new field is written last, hence you can simply check whether there's more to read, and do so if there is. The default value is the one you assign to the field.
A more complicated, but far more flexible way:
public class TestMarshallable implements Marshallable {

    private long a = 0;
    private int b = 1;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        @NotNull StringBuilder name = new StringBuilder();
        while (!wire.isEmpty()) {
            @NotNull ValueIn in = wire.read(name);
            if (StringUtils.isEqual(name, "a"))
                a = in.int64();
            else if (StringUtils.isEqual(name, "b"))
                b = in.int32();
            else if (StringUtils.isEqual(name, "newField"))
                newField = in.text();
            else
                unexpectedField(name, in);
            wire.consumePadding();
        }
    }
}
In the last example, readMarshallable simply overwrites the fields it can find in the stream, leaving the others with their default values. (NB: this can also be used to save a certain amount of writes; if you often write default values, you can skip them altogether in writeMarshallable.)
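To illustrate that last point, a sketch of what the matching writeMarshallable could look like (my own illustration, assuming the same fields as above; skipping the default write is optional):

@Override
public void writeMarshallable(@NotNull WireOut wire) {
    wire.write("a").int64(a);
    wire.write("b").int32(b);
    // Optionally skip the write while the field still holds its default,
    // relying on readMarshallable to restore the default when reading back.
    if (!"defaultValue".equals(newField))
        wire.write("newField").text(newField);
}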
First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As documents are big (200K) and sometimes only sub-parts are needed by the application: Is there a possibility to query for a sub-document (section), modify and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can Spring Data be used for the object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great, for anyone looking for a generic version of his answer see code below:
@Service
public class MongoUtils {

    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }
        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could for example be called using
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
    UUID getId();
}
Notice: if the nested document's constructor contains parameters with primitive datatypes, it is important for the nested document to have a default (empty) constructor, which may be protected, in order for the class to be instantiable with null arguments.
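For illustration, such a nested document might look like this (a hypothetical class, not taken from the code above):

public class Item implements Domain {

    private UUID id;
    private int quantity; // primitive parameter in the public constructor

    // Default constructor required so the mapper can instantiate the class
    // before populating fields; may be protected.
    protected Item() {
    }

    public Item(UUID id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }

    @Override
    public UUID getId() {
        return id;
    }
}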
Solution
That's my solution to this problem:
The object that should be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {

    @Id
    private String _id;

    private String name;
    private String code;

    @Field("desc")
    private String description;

    private String startDate;
    private String endDate;

    @Field("cost")
    private long estimatedCost;

    private List<String> countryList;
    private List<Task> tasks;

    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> updateCritTemplChild(
        String id, String idch, String ownername) {
    Query query = new Query();
    query.addCriteria(Criteria.where("_id")
            .is(id)); // find the parent
    query.addCriteria(Criteria.where("tasks._id")
            .is(idch)); // find the child which will be changed

    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated

    return template
            // findAndModify: find/modify/get the "new object" in a single operation
            .findAndModify(
                    query, update,
                    new FindAndModifyOptions().returnNew(true), ProjectChild.class
            );
}
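A hypothetical REST endpoint delegating to the method above (the path and names are illustrative, not part of the original solution):

@PatchMapping("/projects/{id}/tasks/{idch}/owner")
public Mono<ProjectChild> changeTaskOwner(@PathVariable String id,
                                          @PathVariable String idch,
                                          @RequestParam String ownername) {
    // delegates to the service method shown above
    return service.updateCritTemplChild(id, idch, ownername);
}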