How to get distance - MongoDB Template Near function - spring

I'm trying to find nearby places. The code below works fine, but I'm not able to get the actual distance of each place from the given lat/lng.
Criteria criteria = new Criteria("coordinates")
.near(new Point(searchRequest.getLat(),searchRequest.getLng()));
Query query = new Query();
query.addCriteria(criteria);
query.addCriteria(criteriaName);
query.limit(5);
List<Place> ls = (List<Place>) mongoTemplate.find(query, Place.class);

You can do it with the geoNear aggregation. In spring-data-mongodb, GeoNearOperation represents this aggregation stage.
Extend the Place class, or create a new one, with a field to hold the distance information (example with inheritance):
public class PlaceWithDistance extends Place {

    private double distance;

    public double getDistance() {
        return distance;
    }

    public void setDistance(final double distance) {
        this.distance = distance;
    }
}
Instead of a Criteria with a Query, use an aggregation. The second argument of geoNear is the name of the field into which the distance is written:
final NearQuery nearQuery = NearQuery
        .near(new Point(searchRequest.getLat(), searchRequest.getLng()));
nearQuery.num(5);
nearQuery.spherical(true); // if using a 2dsphere index, otherwise delete or set to false

// the "distance" argument is the name of the field the distance is written to
final Aggregation a = newAggregation(geoNear(nearQuery, "distance"));

final AggregationResults<PlaceWithDistance> results =
        mongoTemplate.aggregate(a, Place.class, PlaceWithDistance.class);
// results.forEach(System.out::println);
List<PlaceWithDistance> ls = results.getMappedResults();
Just to make it easier, here are the associated imports:
import static org.springframework.data.mongodb.core.aggregation.Aggregation.geoNear;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.newAggregation;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.aggregation.GeoNearOperation;
import org.springframework.data.mongodb.core.query.NearQuery;
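As a quick sanity check, here is a minimal usage sketch of the mapped results (assuming the geoNear stage above populated the distance field):
for (PlaceWithDistance p : ls) {
    // the unit of the distance depends on the index type and any Metrics set on the NearQuery
    System.out.println("distance: " + p.getDistance());
}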

Walery Strauch's example was useful for me...
However, I wanted to:
run an aggregate query to get all the points in a 2dsphere index within a given distance in kilometers or meters (you can use Metrics.KILOMETERS and Metrics.MILES)
keep the collection name out of the POJO and pass it explicitly instead
I have a 2dsphere index with the old (legacy) coordinate representation in MongoDB, and I am using Mongo as a sharded database for geospatial queries. My nearSphere query (without aggregation) was failing only when a shard key was added to the same collection that holds the 2dsphere index.
After using the implementation below, I am able to fetch the required data even with a shard key on that collection.
Here is the sample:
import org.springframework.data.geo.Metrics;

final NearQuery query = NearQuery.near(new Point(longitude, latitude), Metrics.KILOMETERS)
        .num(limit)
        .minDistance(distanceInKiloMeters)
        .maxDistance(maxNearByUEDistanceInKiloMeters)
        .spherical(true);

final Aggregation a = newAggregation(geoNear(query, "distance"));
final AggregationResults<PlaceWithDistance> results =
        offlineMongoTemplate.aggregate(a, "myCollectionName", PlaceWithDistance.class);
final List<PlaceWithDistance> measurements = new ArrayList<PlaceWithDistance>(results.getMappedResults());

Related

Cannot get percentiles or value from Aggregation

parentSalaries is a list of Buckets of size 1 and contains Aggregations of size 2, namely "precentials_salary" and "avg_salary".
I am trying to get the percentile values (5.0, 25.0, etc.) and the value under the "avg_salary" aggregation. However, there is no function like "getValue" or "getPercentiles" on Aggregation.
I can see the data but cannot extract it.
The code that I have is as below:
private void doSomething(Aggregations aggregations) {
    // aggregations is the Aggregations from the SearchResponse
    Terms parentSalaryRatio = aggregations.get("parent_salary_ratio");
    if (parentSalaryRatio != null) {
        List<? extends Terms.Bucket> parentSalaries = parentSalaryRatio.getBuckets();
        getTotalAvgSalaries(parentSalaries);
    }
}

private void getTotalAvgSalaries(List<? extends Terms.Bucket> parentSalaries) {
    Aggregations aggregations = parentSalaries.get(0).getAggregations();
    Aggregation precentials = aggregations.get("precentials_salary");
    Aggregation avgSalary = aggregations.get("avg_salary");
}
Any help would be greatly appreciated.
Found the issue:
I used ParsedSingleValueNumericMetricsAggregation to extract the "value" data; it has a value() function. ParsedAvg can be used as well, since it extends ParsedSingleValueNumericMetricsAggregation.
For the percentiles, I used ParsedTDigestPercentiles, as P.J.Meisch suggested.
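For reference, a minimal sketch of that extraction, assuming the Elasticsearch high-level REST client classes (the package locations of the Parsed* classes vary between client versions):
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.metrics.ParsedAvg;
import org.elasticsearch.search.aggregations.metrics.ParsedTDigestPercentiles;
import org.elasticsearch.search.aggregations.metrics.Percentile;

private void getTotalAvgSalaries(List<? extends Terms.Bucket> parentSalaries) {
    Aggregations aggregations = parentSalaries.get(0).getAggregations();

    // ParsedTDigestPercentiles implements Percentiles, which is Iterable<Percentile>,
    // so the percentile/value pairs can simply be iterated
    ParsedTDigestPercentiles percentiles = aggregations.get("precentials_salary");
    for (Percentile p : percentiles) {
        System.out.println(p.getPercent() + " -> " + p.getValue());
    }

    // ParsedAvg extends ParsedSingleValueNumericMetricsAggregation, so value() is available
    ParsedAvg avg = aggregations.get("avg_salary");
    System.out.println("avg_salary: " + avg.value());
}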

Spring Data Elasticsearch without entity fields

I'm using Spring Data Elasticsearch. My documents do not have any static fields, and the data accumulates per quarter; I will be getting ~6GB per quarter (we call them versions). Let's say we get 5GB of data in Jan 2021 with 140 columns; in the next version I may get 130 or 120 columns, which we do not know in advance. The end-user requirement is to get the information from the database and show it in a tabular format, where the user can filter the data. In MongoDB we have BasicDBObject; do we have anything similar in Spring Boot Elasticsearch?
I can provide, let's say, 4-5 columns which are common in every version record; apart from that, I need to retrieve the data without mentioning the column names in the POJO, and I need to use filters on them just like I can do in MongoDB:
List<BaseClass> getMultiSearch(@RequestBody Map<String, Object>[] attributes) {
    Query orQuery = new Query();
    Criteria orCriteria = new Criteria();
    List<Criteria> orExpression = new ArrayList<>();
    for (Map<String, Object> accounts : attributes) {
        Criteria expression = new Criteria();
        accounts.forEach((key, value) -> expression.and(key).is(value));
        orExpression.add(expression);
    }
    orQuery.addCriteria(orCriteria.orOperator(orExpression.toArray(new Criteria[orExpression.size()])));
    return mongoOperations.find(orQuery, BaseClass.class);
}
You can, for example, define an entity class like this:
public class GenericEntity extends LinkedHashMap<String, Object> {
}
To have that returned in your calling site:
public SearchHits<GenericEntity> allGeneric() {
    var criteria = Criteria.where("fieldname").is("value");
    Query query = new CriteriaQuery(criteria);
    return operations.search(query, GenericEntity.class, IndexCoordinates.of("indexname"));
}
But notice: when writing data into Elasticsearch, the mapping for new fields/properties in that index will be updated dynamically. And there is a limit on how many entries a mapping can have (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-settings-limit.html). So take care not to run into that limit.
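Since GenericEntity is just a LinkedHashMap, the dynamically mapped fields come back as plain map entries. A small sketch of reading them (the field name "someDynamicColumn" is only a placeholder):
SearchHits<GenericEntity> hits = allGeneric();
for (SearchHit<GenericEntity> hit : hits) {
    GenericEntity entity = hit.getContent();
    Object value = entity.get("someDynamicColumn"); // any column returned for this document
    // ... render the row / apply further handling
}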

Spring Data Solr - Multiple FilterQueries separated by OR

I'm trying to implement a filter search using Spring Data Solr. I have the following filter types, each with its own set of filters.
A
  aa in (1,2,3)
  ab between (2016-08-02 TO 2016-08-10)
B
  ba in (2,3,4)
  bb between (550 TO 1000)
The Solr query I want to achieve using Spring Data Solr is:
q=*:*&fq=(type:A AND aa:(1,2,3) AND ab:[2016-08-02 TO 2016-08-10]) OR (type:B AND ba:(2,3,4) AND bb:[550 TO 1000])
I'm not sure how to group the clauses for one filter type and then combine the groups with an OR operator.
Thanks in advance.
The trick is to flag the second Criteria with an OR-ish nature via setPartIsOr(true). This method returns void, so it cannot be chained.
First aCriteria and bCriteria are defined as described. Then bCriteria is flagged as OR-ish. Then both are added to a SimpleFilterQuery, which in turn can be added to the actual Query; that part is left out of the sample.
The DefaultQueryParser is used at the end only to generate a String that the assertion can use to check that the query is built as desired.
import org.junit.jupiter.api.Test;
import org.springframework.data.solr.core.DefaultQueryParser;
import org.springframework.data.solr.core.query.Criteria;
import org.springframework.data.solr.core.query.FilterQuery;
import org.springframework.data.solr.core.query.SimpleFilterQuery;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class CriteriaTest {

    @Test
    public void generateQuery() {
        Criteria aCriteria =
                new Criteria("type").is("A")
                        .connect().and("aa").in(1, 2, 3)
                        .connect().and("ab").between("2016-08-02", "2016-08-10");

        Criteria bCriteria =
                new Criteria("type").is("B")
                        .connect().and("ba").in(2, 3, 4)
                        .connect().and("bb").between("550", "1000");

        bCriteria.setPartIsOr(true); // that is the magic

        FilterQuery filterQuery = new SimpleFilterQuery();
        filterQuery.addCriteria(aCriteria);
        filterQuery.addCriteria(bCriteria);

        // verify the generated query string
        DefaultQueryParser dqp = new DefaultQueryParser(null);
        String actualQuery = dqp.getQueryString(filterQuery, null);
        String expectedQuery =
                "(type:A AND aa:(1 2 3) AND ab:[2016\\-08\\-02 TO 2016\\-08\\-10]) OR "
                        + "((type:B AND ba:(2 3 4) AND bb:[550 TO 1000]))";

        System.out.println(actualQuery);
        assertEquals(expectedQuery, actualQuery);
    }
}
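To actually run the filter (the part left out above), the FilterQuery can be attached to a regular query and executed through a SolrTemplate. A rough sketch, where the collection name "collection1" and the SomeDocument type are placeholders:
// attach the combined filter to the main query
SimpleQuery query = new SimpleQuery("*:*");
query.addFilterQuery(filterQuery);

// execute against the collection and map the results
Page<SomeDocument> page = solrTemplate.queryForPage("collection1", query, SomeDocument.class);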

Filter Search query in Spring Mongo DB

In the feed collection, "likeCount" and "commentCount" are two fields. I want to get all documents where "likeCount" + "commentCount" is greater than 100. How can I write that search filter query in Spring MongoDB?
Below is my sample feed collection data.
{
    "_id" : ObjectId("55deb33dcb9be727e8356289"),
    "channelName" : "Facebook",
    "likeCount" : 2,
    "commentCount" : 10
}
For comparing a single field we can write a search query like:
BasicDBObject searchFilter = new BasicDBObject();
searchFilter.append("likeCount", new BasicDBObject("$gte",100));
DBCursor feedCursor = mongoTemplate.getCollection("feed").find(searchFilter);
Try this
db.collection.aggregate([
    { $project: { total: { '$add': ["$likeCount", "$commentCount"] } } },
    { $match: { total: { $gt: 100 } } }
])
You would need to use the MongoDB Aggregation Framework with Spring Data MongoDB. In Spring Data, the following returns all feeds with a combined like and comment count greater than 100, using the aggregation framework:
Entities
class FeedsCount {
    @Id String id;
    String channelName;
    long likeCount;
    long commentCount;
    long totalLikesComments;
    // ...
}
Aggregation
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
Aggregation agg = newAggregation(Feed.class,
        project("id", "channelName", "likeCount", "commentCount")
                .andExpression("likeCount + commentCount").as("totalLikesComments"),
        match(where("totalLikesComments").gt(100))
);

// Convert the aggregation result into a List
AggregationResults<FeedsCount> groupResults
        = mongoTemplate.aggregate(agg, FeedsCount.class);
List<FeedsCount> results = groupResults.getMappedResults();
In the code above, we first create a new aggregation via the newAggregation static factory method, to which we pass a list of aggregation operations. These operations define the aggregation pipeline of the Aggregation.
As a first step, we select the "id", "channelName", "likeCount" and "commentCount" fields from the input collection with the project operation and add a new field, "totalLikesComments", a computed property that stores the sum of the "likeCount" and "commentCount" fields.
In the second step, we filter the intermediate result using a match operation, which accepts a Criteria query as an argument.
Note that the name of the input collection is derived from the Feed class passed as the first parameter to the newAggregation method.

Spring Data Neo4j - Combining Fulltext and Simple Indexes in the same Cypher query

I wonder how I can build a Cypher query that combines fulltext and simple indexes using Spring Data Neo4j. Consider the following node entity:
@NodeEntity
public class SomeObject {

    public SomeObject() {
    }

    public SomeObject(String name, int height) {
        this.name = name;
        this.height = height;
    }

    @Indexed(indexType = IndexType.FULLTEXT, indexName = "search_name")
    String name;

    @Indexed(numeric = false)
    int height;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
OK, so my question is how I can run a query (using a SomeObject GraphRepository) that starts from SomeObject nodes and references their simple and fulltext indexes in the same query. For example, I would like to write something like this:
START n=node:SomeObject('name: Roy AND height: [170 TO 190]') RETURN n
I know that I cannot write it exactly like that, because Spring Data Neo4j forces me to give a separate index name for fields that need to be FULLTEXT indexed. But what if I need an index lookup for my SomeObject entity which combines both fields (name & height)?
What are the best practices in such a case? Is there a way to combine them both in the same query? Or should I query each of them separately and then perform some kind of intersection between the two results, so I get exactly the nodes that meet my original lookup condition (name: Roy AND height: [170 TO 190])?
Thanks!
Roy.
I'd never launch two separate queries. Maybe just use one index as a starting point in your query?
START n=node:search_name('name: Roy')
WHERE n.height >= 170 AND n.height <= 190
RETURN n
How's the performance of this query? It bypasses the SomeObject index, but I don't see any other option, as you indeed cannot combine both indexes.
I was also thinking about the following query, but you'd still end up with duplicates:
START n=node:search_name('name: Roy'), m=node:SomeObject('height: [170 TO 190]')
RETURN DISTINCT n,m
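If you want this behind a repository, as the question asks, a rough sketch using the legacy Spring Data Neo4j @Query annotation could look like the following; the repository name and method signature are made up for illustration:
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.GraphRepository;

public interface SomeObjectRepository extends GraphRepository<SomeObject> {

    // fulltext index lookup as the starting point, simple property range filter afterwards
    @Query("START n=node:search_name({0}) WHERE n.height >= {1} AND n.height <= {2} RETURN n")
    Iterable<SomeObject> findByNameAndHeightRange(String luceneQuery, int minHeight, int maxHeight);
}

// usage: repository.findByNameAndHeightRange("name:Roy", 170, 190);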
