Spring Redis: Range query "greater than" on a field

I am using Redis to store some data and later query it and update it with latest information.
Consider an example:
I receive File data, which carries info on the file and the physical storage location of that file.
One shelf has multiple racks, and each rack can have multiple files.
Each file has a version field, which gets incremented when an operation on the file is performed.
How do I plan to store the data?
I need to query by "shelfId + rackId" to get all files.
I need to query by "shelfId + rackId + version > XX" to get all files with a version greater than the one specified.
Getting all files belonging to a shelf and rack is achievable in Spring Data Redis: I create a key from the combination of the two IDs and later query based on that key.
private <T> void save(String id, T entity) {
    redisTemplate.opsForValue().set(id, entity);
}
But how do I query on the version field?
I had marked the "version" field as @Indexed, but the Spring repository query does not work.
@RedisHash("shelves")
public class ShelfEntity {
    @Indexed
    @Id
    private String id;
    @Indexed
    private String shelfId;
    @Indexed
    private String rackId;
    @Indexed
    private String fileId;
    @Indexed
    private Integer version;
    private String fileName;
    // and other updatable fields
}
Repository method:
List<ShelfEntity> findAllByShelfIdAndRackIdAndVersionGreaterThan(String shelfId,
        String rackId, int version);
The above gives an error:
java.lang.IllegalArgumentException: GREATER_THAN (1): [IsGreaterThan,
GreaterThan]is not supported for redis query derivation
Q. How do I query based on Version Greater than?
Q. Is it even possible with Spring Data Redis?
Q. If possible, how should I model the data (into what data structure), in order to make such queries?
Q. If we don't use Spring, how would we do this in Redis using redis-cli, and with what data structure?
Maybe something like:
<key, key, value>
<shelfId+rackId, version, fileData>
I am not sure how to model this in Redis.
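One way to realize the <shelfId+rackId, version, fileData> idea without Spring is a sorted set per shelf+rack combination, with the version as the score. A sketch in redis-cli (the key layout and payloads below are illustrative, not from the question):

```
ZADD shelf:1:rack:1 0 "{file 1 data}"
ZADD shelf:1:rack:1 3 "{file 2 data}"
ZRANGEBYSCORE shelf:1:rack:1 (2 +inf
```

ZRANGEBYSCORE with the exclusive minimum "(2" returns members whose score (version) is strictly greater than 2. Because sorted-set members are identified by their value, bumping a file's version means removing the old member and re-adding it with the new score (ZREM, then ZADD).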
Update 2:
One shelf can have N racks.
One rack can have N files.
Each file object will have a version.
This version gets updated (0 -> 1 -> 2 ...).
I want to store only the latest version of a file.
So, if we have 1 file object
shelfId - 1
rackId - 1
fileId - 1
version - 0
.... on update of version ... we should still have 1 file object.
version - 1
I tried keeping the key as an MD5 hash of shelfId + rackId, in a hash data structure, but I cannot query on version.
I also tried using a ZSet.
Saving it like this:
private void saveSet(List<ShelfEntity> shelfInfo) {
    for (ShelfEntity item : shelfInfo) {
        redisTemplate.opsForZSet()
                .add(item.getId(), item, item.getVersion());
    }
}
So, version becomes the score.
But the problem is that members of a sorted set cannot be updated in place. So for one fileId there are multiple versions, and when I query I get duplicates.
Get code:
Set<ShelfEntity> objects = (Set<ShelfEntity>) (Object) redisTemplate.opsForZSet()
        .rangeByScore(generateMd5Hash("-", shelfId, rackId), startVersion,
                Double.MAX_VALUE);
Now, this is an attempt to mimic version > XX (note that rangeByScore's minimum is inclusive, so it actually returns version >= startVersion).

Create a ZSET for each shelfId and rackId combination.
Use two methods to save and update records in Redis.
// this method stores all shelf info in Redis
public void save(List<ShelfEntity> shelfInfo) {
    for (ShelfEntity item : shelfInfo) {
        redisTemplate.opsForZSet()
                .add(item.getId(), item, item.getVersion());
    }
}
Use update to remove the old entry and insert a new one: Redis does not support updating a sorted-set member in place, so you need to remove the existing entry and add a new record.
public void update(List<ShelfEntity> oldRecords, List<ShelfEntity> newRecords) {
    if (oldRecords.size() != newRecords.size()) {
        throw new IllegalArgumentException("old and new records must have same number of entries");
    }
    for (int i = 0; i < oldRecords.size(); i++) {
        ShelfEntity oldItem = oldRecords.get(i);
        ShelfEntity newItem = newRecords.get(i);
        redisTemplate.opsForZSet().remove(oldItem.getId(), oldItem);
        redisTemplate.opsForZSet()
                .add(newItem.getId(), newItem, newItem.getVersion());
    }
}
Read items from ZSET with score.
List<ShelfEntity> findAllByShelfIdAndRackIdAndVersionGreaterThan(String shelfId,
        String rackId, int version) {
    Set<TypedTuple<ShelfEntity>> objects = redisTemplate.opsForZSet()
            .rangeByScoreWithScores(generateMd5Hash("-", shelfId, rackId), version,
                    Double.MAX_VALUE);
    List<ShelfEntity> shelfEntities = new ArrayList<>();
    for (TypedTuple<ShelfEntity> entry : objects) {
        ShelfEntity entity = entry.getValue();
        entity.setVersion(entry.getScore().intValue());
        shelfEntities.add(entity);
    }
    return shelfEntities;
}
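The snippets above call a generateMd5Hash helper that is never shown; its name and delimiter-first signature are inferred from the call sites, so treat this as a sketch:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KeyUtil {
    // Joins the parts with the delimiter and returns the hex MD5 digest,
    // so every shelfId + rackId combination maps to a fixed-length key.
    public static String generateMd5Hash(String delimiter, String... parts) {
        try {
            String joined = String.join(delimiter, parts);
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(joined.getBytes(StandardCharsets.UTF_8));
            // Left-pad to 32 hex chars in case the digest has leading zero bytes.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```

The same inputs always produce the same 32-character key, which is what makes the ZSET lookups above deterministic.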

Related

Spring Data Elasticsearch without entity fields

I'm using Spring Data Elasticsearch. My document does not have any static fields, and data accumulates per quarter at ~6 GB/qtr (we call these quarterly loads versions). Say we get 5 GB of data in Jan 2021 with 140 columns; in the next version I may get 130 or 120 columns, which we do not know in advance. The end-user requirement is to get the information from the database and show it in a tabular format, with filtering. In MongoDB we have BasicDBObject; do we have anything similar in Spring Boot Elasticsearch?
I can provide, say, 4-5 columns which are common to every version record; apart from those, I need to retrieve the data without mentioning the column names in the POJO, and I need to use filters on them, just as I can do in MongoDB:
List<BaseClass> getMultiSearch(@RequestBody Map<String, Object>[] attributes) {
    Query orQuery = new Query();
    Criteria orCriteria = new Criteria();
    List<Criteria> orExpression = new ArrayList<>();
    for (Map<String, Object> accounts : attributes) {
        Criteria expression = new Criteria();
        accounts.forEach((key, value) -> expression.and(key).is(value));
        orExpression.add(expression);
    }
    orQuery.addCriteria(orCriteria.orOperator(orExpression.toArray(new Criteria[orExpression.size()])));
    return mongoOperations.find(orQuery, BaseClass.class);
}
You can define an entity class for example like this:
public class GenericEntity extends LinkedHashMap<String, Object> {
}
To have that returned in your calling site:
public SearchHits<GenericEntity> allGeneric() {
    var criteria = Criteria.where("fieldname").is("value");
    Query query = new CriteriaQuery(criteria);
    return operations.search(query, GenericEntity.class, IndexCoordinates.of("indexname"));
}
But notice: when writing data into Elasticsearch, the mapping for new fields/properties in that index will be dynamically updated, and there is a limit on how many entries a mapping can have (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-settings-limit.html). So take care not to run into that limit.

How to filter Range criteria using ElasticSearch Repository

I need to fetch Employees who joined between 2021-12-01 to 2021-12-31. I am using ElasticsearchRepository to fetch data from ElasticSearch index.
How can we apply a range criterion using the repository?
public interface EmployeeRepository extends ElasticsearchRepository<Employee, String>,EmployeeRepositoryCustom {
List<Employee> findByJoinedDate(String joinedDate);
}
I have tried the Between option like below, but it returns no results:
List<Employee> findByJoinedDateBetween(String fromJoinedDate, String toJoinedDate);
My Index configuration
@Document(indexName = "employee", createIndex = true, type = "_doc", shards = 4)
public class Employee {
    @Field(type = FieldType.Text)
    private String joinedDate;
Note: You seem to be using an outdated version of Spring Data Elasticsearch. The type parameter of the #Document
annotation was deprecated in 4.0 and removed in 4.1, as Elasticsearch itself does not support typed indices since
version 7.
To your question:
In order to be able to have a range query for dates in Elasticsearch the field in question must be of type date (the
Elasticsearch type). For your entity this would mean (I refer to the attributes from the current version 4.3):
@Nullable
@Field(type = FieldType.Date, pattern = "uuuu-MM-dd", format = {})
private LocalDate joinedDate;
This defines the joinedDate to have a date type and sets the string representation to the given pattern. The empty format argument makes sure that the additional default formats (DateFormat.date_optional_time and DateFormat.epoch_millis) are not set here. This results in the following mapping in the index:
{
  "properties": {
    "joinedDate": {
      "type": "date",
      "format": "uuuu-MM-dd"
    }
  }
}
If you check the mapping in your index (GET localhost:9200/employee/_mapping) you will see that in your case the
joinedDate is of type text. You will either need to delete the index and have it recreated by your application or
create it with a new name and then, after the application has written the mapping, reindex the data from the old
index into the new one (https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docs-reindex.html).
Once you have the index with the correct mapping in place, you can define the method in your repository like this:
List<Employee> findByJoinedDateBetween(LocalDate fromJoinedDate, LocalDate toJoinedDate);
and call it:
repository.findByJoinedDateBetween(LocalDate.of(2021, 1, 1), LocalDate.of(2021, 12, 31));

Neo4j 6 Spring boot 2.4 migrate driver session query

I am trying to migrate to Neo4j 6. What's the equivalent of this method in Neo4j 6?
The Result here contains {ref=Employee... etc, i.e. the actual Java objects.
// org.neo4j.ogm.session.Session is autowired
@GetMapping("/companies/{companyId}/refs")
public Result getCompanyRefs(@PathVariable final String companyId) {
    String query = "MATCH (company:Company)-[ref]-(refObj) where company.id=\"" + companyId + "\" RETURN company,ref,refObj";
    return this.session.query(query, Collections.emptyMap());
}
I tried with the new neo4j driver like so:
// org.neo4j.driver.Driver is autowired
@GetMapping("/persons/{personId}/refs")
public Result getPersonRefs(@PathVariable final String personId) {
    String query = "MATCH (person:Person)-[ref]-(refObj) where person.id=\"" + personId + "\" RETURN person,ref,refObj";
    return this.driver.session().run(query, Collections.emptyMap());
}
but this gives a Result which is not convertible to my @Node (entity) classes. The previous version gave a Result which contained the actual Java objects (mapped to classes).
The result here is: Record<{person: node<7>, ref: relationship<8>, refObj: node<0>}>
Basically, the main thing is: I need the nodes mapped to Java objects, but via a Cypher query, because I need to do some things on the (Result) nodes before deleting the relationships between them.
So it turns out it does give back the things I need:
Result result = this.getPersonRefs(id);
result.list().forEach((entry) -> {
    // inspect each record here
});
The problem was that, for example, neither entry.get("refObj").asObject() nor asNode() actually gave back what I thought it was supposed to.
The solution:
entry.get("refObj").asMap()
This gives back the actual properties of the object. Then you just need to convert it to MyClass.class with an ObjectMapper.
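As a sketch of that last step, here is one hand-rolled way to turn the property map from asMap() into an entity (the Person class and its fields are hypothetical; Jackson's ObjectMapper.convertValue does the same thing generically):

```java
import java.util.Map;

public class PersonMapper {
    // Hypothetical entity matching the node's properties.
    public static class Person {
        public String id;
        public String name;
    }

    // Builds a Person from the property map returned by entry.get("refObj").asMap().
    public static Person fromMap(Map<String, Object> props) {
        Person p = new Person();
        p.id = (String) props.get("id");
        p.name = (String) props.get("name");
        return p;
    }
}
```

The manual version avoids a Jackson dependency but must be kept in sync with the node's property names by hand.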

Spring Data JPA custom count query not showing all records

The query is not returning all the records: of the records that share the same count, only one is returned.
Meanwhile, the same query in MySQL Workbench works like a charm.
JPA Custom Query
public interface BookingRepository extends JpaRepository<Booking, Long> {
    @Query("select count(v.source), concat(v.source, '-', v.destination) as bus_route from Booking v group by v.source, v.destination")
    public List<Object[]> groupByBus();
}
Query in MYSQL
SELECT count(source), concat(source," - ", destination) as bus_route
FROM booking
GROUP BY source, destination;
As you can see, there are two records with a count of one, but only one is returned by Spring Data JPA.
Your query returns a List<Object[]>, and an object array could hold almost anything. Here, each Object[] contains two values: the count and the bus_route.
You can iterate over the values in this way (I've tested it, and I needed BigInteger to cast the count):
Map<BigInteger, String> map = new HashMap<>();
for (Object[] object : objectList) {
    map.put((BigInteger) object[0], (String) object[1]);
}
And you will get the map you want.
Note that if counts can repeat, a map keyed by the count keeps only one entry per count, so build a list instead of a Map.
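Since two routes can share the same count, a count-keyed Map silently drops rows; collecting (count, route) pairs keeps every row. A minimal sketch (the class and method names are illustrative):

```java
import java.math.BigInteger;
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RouteCounts {
    // Each Object[] row from the JPA query is {count, bus_route}; keeping
    // pairs instead of a count-keyed Map preserves rows with equal counts.
    public static List<Map.Entry<BigInteger, String>> toPairs(List<Object[]> rows) {
        List<Map.Entry<BigInteger, String>> result = new ArrayList<>();
        for (Object[] row : rows) {
            result.add(new SimpleEntry<>((BigInteger) row[0], (String) row[1]));
        }
        return result;
    }
}
```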

Spring Mongodb findandModify fails to update entire document

I am new to mongodb and struggling to understand how document update works.
I have a document called 'menu':
{
    "someId": "id123",
    "someProperty": "property123",
    "list": [{
        "innerProperty": "property423"
    }]
}
which maps to my entity:
@Document(collection = "menu")
public class Menu {
    @Id
    private String id;
    private String someId;
    private String someProperty;
    private List<SomeClass> list;
    // accessors
}
When I try to find and update this document like this, it does not update the document. It does find the menu, as it returns the original entity with its id:
@Override
public Menu update(Menu menu) {
    Query query = new Query(
            Criteria.where("someId").is(menu.getSomeId()));
    Update update = Update.update("menu", menu);
    return mongoOperations.findAndModify(query, update,
            FindAndModifyOptions.options().returnNew(true), Menu.class);
}
But if I change it to this, it works:
@Override
public Menu update(Menu menu) {
    Query query = new Query(
            Criteria.where("someId").is(menu.getSomeId()));
    Update update = new Update().set("someProperty", menu.getSomeProperty())
            .set("list", menu.getList());
    return mongoOperations.findAndModify(query, update,
            FindAndModifyOptions.options().returnNew(true), Menu.class);
}
I don't really like this second method, where each element of the document is individually set; as you might imagine, I have a rather large document, and this is prone to errors.
Why does the first method not work? And what would be a better approach to updating the document?
Check out the docs for findAndModify - it returns the version of the document before the fields were modified. If you do a new find() straight after, you will see that your changes were actually saved to MongoDB.
