Is paging broken with Spring Data Solr when using group fields?

I currently use the Spring Data Solr library and implement its repository interfaces. I'm trying to add functionality to one of my custom queries, which uses a SolrTemplate with a SimpleQuery. It currently uses paging, which appears to be working well. However, I want to use a group field so that sibling products are only counted once, at their first occurrence. I have set the group field on the query and it works well, but it still seems to use the un-grouped number of documents when constructing the page attributes.
Is there a known workaround for this?
The query syntax provides a parameter for this purpose, but it would seem that Spring Data Solr isn't taking advantage of it: &group.ngroups=true should return the number of groups in the result and thus give correct page numbering.
Any other info would be appreciated.
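For illustration, the raw Solr request this is about would carry grouping parameters along these lines (sibling_group_id is just a placeholder field name):

q=*:*&start=0&rows=10&group=true&group.field=sibling_group_id&group.ngroups=true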

There are actually two ways to add this parameter.
Queries are converted to the Solr format using QueryParsers, so it would be possible to register a modified one.
QueryParser modifiedParser = new DefaultQueryParser() {
    @Override
    protected void appendGroupByFields(SolrQuery solrQuery, List<Field> fields) {
        super.appendGroupByFields(solrQuery, fields);
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
    }
};
solrTemplate.registerQueryParser(Query.class, modifiedParser);
Using a SolrCallback would be a less intrusive option:
final Query query = //...whatever query you have.
List<DomainType> result = solrTemplate.execute(new SolrCallback<List<DomainType>>() {
    @Override
    public List<DomainType> doInSolr(SolrServer solrServer) throws SolrServerException, IOException {
        SolrQuery solrQuery = new QueryParsers().getForClass(query.getClass()).constructSolrQuery(query);
        // add the missing grouping parameter
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
        return solrTemplate.convertQueryResponseToBeans(solrServer.query(solrQuery), DomainType.class);
    }
});
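Whichever option you choose, the page itself still has to be assembled by hand from the grouped response. A rough sketch of reading the group count via SolrJ's GroupResponse and building a Page with it (the pageable variable and the use of SolrTemplate's convertSolrDocumentListToBeans helper are assumptions, not something the snippet above already does):

QueryResponse response = solrServer.query(solrQuery);
GroupCommand groupCommand = response.getGroupResponse().getValues().get(0);
long totalGroups = groupCommand.getNGroups(); // only populated when group.ngroups=true is set

List<DomainType> beans = new ArrayList<DomainType>();
for (Group group : groupCommand.getValues()) {
    // each group contributes the documents returned for it (group.limit controls how many)
    beans.addAll(solrTemplate.convertSolrDocumentListToBeans(group.getResult(), DomainType.class));
}
Page<DomainType> page = new PageImpl<DomainType>(beans, pageable, totalGroups);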
Please feel free to open an issue.

Related

Project New Array Field with Spring Data

As part of an aggregation operation, I need to unwind an array. I am wondering how I can put the objects back into an array as part of the $project stage. Here is the MongoDB aggregate operation that works:
db.users.aggregate([ { "$match": { ... } }, { "$unwind": "$profiles" }, { "$project": { "profiles": ["$profiles"] } }, ... ])
And more specifically, how can I implement this using Spring Data MongoDB's ProjectionOperation:
{$project: {'profiles': ['$profiles']}}
This projection syntax has been supported in MongoDB since version 3.2.
Edit 1:
I looked through some of the posts, including an answer by Christoph Strobl, and based on that answer I came up with something that works, which is as follows:
AggregationOperation project = aggregationOperationContext -> {
    Document projection = new Document();
    projection.put("profiles", Arrays.<Object> asList("$profiles"));
    projection.put("_id", "$id");
    return new Document("$project", projection);
};
I am wondering if there is a better way of doing it though.
Any help/suggestion is very much appreciated. Thanks.
Unfortunately there is not.
You can replace $project by project() with an AggregationExpression to shorten it a bit.
// ...
unwind("profiles"),
project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles")
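For context, a minimal sketch of how these stages might be assembled and executed, assuming the usual static imports from Aggregation, Criteria and Arrays; the mongoTemplate bean, the match criteria and the users collection name are placeholders:

Aggregation aggregation = newAggregation(
    match(where("someField").is("someValue")),   // placeholder $match criteria
    unwind("profiles"),
    project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles")
);
AggregationResults<Document> results = mongoTemplate.aggregate(aggregation, "users", Document.class);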
I created DATAMONGO-2312 to provide support for new array field projections in one of the next versions.

Is there any way to implement pagination in Spring WebFlux and Spring Data Reactive?

I'm trying to understand the reactive part of Spring 5. I have created a simple REST endpoint for finding all entities using Spring WebFlux and Spring Data Reactive (Mongo), but I don't see any way to implement pagination.
Here is my simple example in Kotlin:
@GetMapping("/posts/")
fun getAllPosts() = postRepository.findAll()
Does it mean that a reactive endpoint does not require pagination? Is there a way to implement server-side pagination using this stack?
The reactive support in Spring Data does not provide a Page return type. Still, a Pageable parameter is supported in method signatures, passing the limit and offset on to the driver, and therefore the store itself, and returning a Flux<T> that emits the requested range.
Flux<Person> findByFirstname(String firstname, Pageable pageable);
For more information please have a look at the current Reference Documentation for 2.0.RC2 and the Spring Data Examples.
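As a small usage sketch (the repository, field name and PageRequest values are just placeholders), the page is requested through the Pageable and consumed as a plain Flux:

// fetch the second page (zero-based index 1) of 20 people named "luke"
Flux<Person> secondPage = personRepository.findByFirstname("luke", PageRequest.of(1, 20));
secondPage.subscribe(person -> System.out.println(person));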
Flux provides skip and take methods for pagination support, and you can also use filter and sort to filter and sort the result. The filter and sort below are not a great example, but using skip here and passing a Pageable as the second parameter of a query method are no different in effect.
The following codes work for me.
@GetMapping("")
public Flux<Post> all(
        //@RequestParam(value = "q", required = false) String q,
        @RequestParam(value = "page", defaultValue = "0") long page,
        @RequestParam(value = "size", defaultValue = "10") long size) {
    return this.postRepository.findAll()
            //.filter(p -> Optional.ofNullable(q).map(key -> p.getTitle().contains(key) || p.getContent().contains(key)).orElse(true)) // (replace this with query parameters)
            .sort(comparing(Post::getCreatedDate).reversed())
            .skip(page * size).take(size);
}
Update: The underlying drivers should be responsible for handling the result in the Reactive Streams way.
As you can see in the answer from Christoph, when using a findByXXX method Spring Data MongoDB Reactive provides a variant that accepts a Pageable argument, but the reactive findAll does not include such a variant; you have to apply skip in later operators if you really need the pagination feature. When switching to Flux instead of List, imagine the data in a Flux as water flowing in a river, oil in a pipe, or the stream of tweets on twitter.com.
I have tried to compare the queries issued with and without a Pageable in the following case.
this.postRepository.findByTitleContains("title")
.skip(0)
.limitRequest(10)
.sort((o1, o2) -> o1.getTitle().compareTo(o2.getTitle()))
this.postRepository.findByTitleContains("title", PageRequest.of(0, 10, Sort.by(Sort.Direction.ASC, "title")))
After enabling logging with logging.level.org.springframework.data.mongodb.core.ReactiveMongoTemplate=DEBUG, I found they print the same query log:
find using query: { "title" : { "$regularExpression" : { "pattern" : ".*title.*", "options" : ""}}} fields: Document{{title=1}} for class: class com.example.demo.Post in collection: post
//other logging...
find using query: { "title" : { "$regularExpression" : { "pattern" : ".*title.*", "options" : ""}}} fields: Document{{title=1}} for class: class com.example.demo.Post in collection: post
Keep in mind that all these operations should be DELEGATED to the underlying reactive drivers, which implement the Reactive Streams spec, and performed on the database side, NOT in the memory of your application.
Check the example code.
The earlier sample code I provided above is maybe not a good example of filter and sort operations (MongoDB itself provides great regular-expression support for this), but pagination in the reactive variant is not a good match with the concepts in the Reactive Streams spec. When embracing the Spring reactive stack, most of the time we just move our work to a new collection of APIs. In my opinion, real-time updates and elastic-response scenarios are a better match for Reactive, e.g. using it with SSE, WebSocket, RSocket, the application/stream+json protocol (missing in the new Spring docs), etc.
This is not efficient, but it works for me while I look for another solution.
Service
public Page<Level> getPage(int page, int size, Sort.Direction direction, String properties) {
    var pageRequest = PageRequest.of(page, size, direction, properties);
    var count = levelRepository.count().block();
    var levels = levelRepository.findAllLevelsPaged(pageRequest).collectList().block();
    return new PageImpl<>(Objects.requireNonNull(levels), pageRequest, Objects.requireNonNull(count));
}
Repo
@Repository
public interface LevelRepository extends ReactiveMongoRepository<Level, String> {

    @Query("{ id: { $exists: true }}")
    Flux<Level> findAllLevelsPaged(final Pageable page);
}
Ref example

Save Spark Dataframe into Elasticsearch - Can’t handle type exception

I have designed a simple job to read data from MySQL and save it in Elasticsearch with Spark.
Here is the code:
JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("MySQLtoEs")
                .set("es.index.auto.create", "true")
                .set("es.nodes", "127.0.0.1:9200")
                .set("es.mapping.id", "id")
                .set("spark.serializer", KryoSerializer.class.getName()));
SQLContext sqlContext = new SQLContext(sc);
// Data source options
Map<String, String> options = new HashMap<>();
options.put("driver", MYSQL_DRIVER);
options.put("url", MYSQL_CONNECTION_URL);
options.put("dbtable", "OFFERS");
options.put("partitionColumn", "id");
options.put("lowerBound", "10001");
options.put("upperBound", "499999");
options.put("numPartitions", "10");
// Load MySQL query result as DataFrame
LOGGER.info("Loading DataFrame");
DataFrame jdbcDF = sqlContext.load("jdbc", options);
DataFrame df = jdbcDF.select("id", "title", "description",
"merchantId", "price", "keywords", "brandId", "categoryId");
df.show();
LOGGER.info("df.count : " + df.count());
EsSparkSQL.saveToEs(df, "offers/product");
You can see the code is very straightforward. It reads the data into a DataFrame, selects some columns and then performs a count as a basic action on the DataFrame. Everything works fine up to this point.
Then it tries to save the data into Elasticsearch, but it fails because it cannot handle some type. You can see the error log here.
I'm not sure about why it can't handle that type. Does anyone know why this is occurring?
I'm using Apache Spark 1.5.0, Elasticsearch 1.4.4 and elasticsearch-hadoop 2.1.1.
EDIT:
I have updated the gist link with a sample dataset along with the source code.
I have also tried to use the elasticsearch-hadoop dev builds as mentioned by @costin on the mailing list.
The answer for this one was tricky, but thanks to samklr, I managed to figure out what the problem was.
Nevertheless, the solution isn't straightforward and might involve some "unnecessary" transformations.
First let's talk about Serialization.
There are two aspects of serialization to consider in Spark: serialization of data and serialization of functions. In this case, it's about data serialization and thus deserialization.
From Spark’s perspective, the only thing required is setting up serialization - Spark relies by default on Java serialization which is convenient but fairly inefficient. This is the reason why Hadoop itself introduced its own serialization mechanism and its own types - namely Writables. As such, InputFormat and OutputFormats are required to return Writables which, out of the box, Spark does not understand.
With the elasticsearch-spark connector one must enable a different serialization (Kryo) which handles the conversion automatically and also does this quite efficiently.
conf.set("spark.serializer","org.apache.spark.serializer.KryoSerializer")
Furthermore, Kryo does not require that a class implement a particular interface to be serialized, which means POJOs can be used in RDDs without any further work beyond enabling Kryo serialization.
That said, @samklr pointed out to me that Kryo needs to register classes before using them.
This is because Kryo writes a reference to the class of the object being serialized (one reference is written for every object written), which is just an integer identifier if the class has been registered but is the full classname otherwise. Spark registers Scala classes and many other framework classes (like Avro Generic or Thrift classes) on your behalf.
Registering classes with Kryo is straightforward: create a class that implements KryoRegistrator and override the registerClasses() method:
public class MyKryoRegistrator implements KryoRegistrator, Serializable {

    @Override
    public void registerClasses(Kryo kryo) {
        // Product POJO associated to a product Row from the DataFrame
        kryo.register(Product.class);
    }
}
Finally, in your driver program, set the spark.kryo.registrator property to the fully qualified classname of your KryoRegistrator implementation:
conf.set("spark.kryo.registrator", "MyKryoRegistrator")
Secondly, even though the Kryo serializer is set and the class registered, with the changes made in Spark 1.5, for some reason Elasticsearch couldn't deserialize the DataFrame, because it can't infer the SchemaType of the DataFrame in the connector.
So I had to convert the DataFrame to a JavaRDD:
JavaRDD<Product> products = df.javaRDD().map(new Function<Row, Product>() {
    public Product call(Row row) throws Exception {
        long id = row.getLong(0);
        String title = row.getString(1);
        String description = row.getString(2);
        int merchantId = row.getInt(3);
        double price = row.getDecimal(4).doubleValue();
        String keywords = row.getString(5);
        long brandId = row.getLong(6);
        int categoryId = row.getInt(7);
        return new Product(id, title, description, merchantId, price, keywords, brandId, categoryId);
    }
});
Now the data is ready to be written into Elasticsearch:
JavaEsSpark.saveToEs(products, "test/test");
References:
Elasticsearch's Apache Spark support documentation.
Hadoop Definitive Guide, Chapter 19. Spark, ed. 4 – Tom White.
User samklr.

Separation of Concerns: Returning Projected Data between layers From a Linq Query

I'm using LINQ and having trouble doing something that I believe should be trivial. I want to return data from one layer so it can be used independently of LINQ in another layer.
Suppose I have a Data Access Layer. It knows about the entity framework and how to interact with it. But, it doesn't care who accesses it. The one interesting requirement I have is that the queries in the entity framework return projected data that is not part of the Entity Model itself. Please don't ask me to change this part of the requirement and make POCOs for each return type, as it is not the best design given the problem I am trying to solve. Below is an example.
public class ChartData
{
    public <<returnType??>> GetData()
    {
        MyEntities context = new MyEntities();
        var results = from v in context.vManyColumnsOfData
                      where v.CompanyName == "acme"
                      select new { Year = v.SalesYear, Income = v.Income };
        return ??;
    }
}
Then, I would like to have an ASP.Net UI layer be able to call into the Data Access Layer to get the data in order to bind it to a control. The UI layer should have no notion of where the data came from. It should only know that it has the data it needs to bind. Below is an example.
protected void chart_Load(object sender, EventArgs e)
{
    // set some chart properties
    chart.Skin = "Default";
    ...

    // Set the data source
    ChartData dataMgr = new ChartData();
    <<returnType?>> data = dataMgr.GetData();
    chart.DataSource = data;
    chart.DataBind();
}
What is the best way to send linq projected data back to another layer?
If you don't need to use the projected type statically, just return IEnumerable<object>.
"Please don't ask me to change this part of the requirement and make POCOs for each return type, as it is not the best design given the problem I am trying to solve."
I feel like I should rightly ignore this, as the best thing to do is to return a defined type. Anonymous types are useful when they are wholly contained within the method that creates them. Once you start passing them around, it is time to go ahead and give them the proper class treatment.
However, to live within your imposed limitations, you can return IEnumerable<object> from the method and use that or var at the callsite and rely upon the dynamic binding of the control to get at the data. It's not going to help you if you need to deal with the object programmatically, but it will serve fine for databinding.
You cannot return an anonymous type, so basically for this you will need POCOs even though you don't want them.
"not the best design given the problem I am trying to solve"
Could you explain what you are trying to achieve a little more? It might be possible to return some type of list containing a dictionary of items (i.e. rows and columns). Think something like an untyped DataSet (yuck).
Your GetData method can use IEnumerable (the "old" non-generic interface) as its return type.
Any dynamic resolution (e.g. ASP.NET or XAML bindings) should work as expected, which seems to be what you want to do.
However, if you want to use the results in your code, you will probably have to resort to .NET 4's dynamic keyword.
The following example can be run in LINQPad (in "C# Program" mode) and illustrates this:
void Main()
{
    var v = GetData();
    foreach (dynamic element in v)
    {
        ((string)element.Name).Dump();
    }
}

public IEnumerable GetData()
{
    return from i in Enumerable.Range(1, 10)
           select new
           {
               Name = "Item " + i,
               Value = i
           };
}
Keep in mind that, design-wise, coding like this will make most people frown and can affect performance.

Can someone help me understand Guava CacheLoader?

I'm new to Google's Guava library and am interested in Guava's Caching package. Currently I have version 10.0.1 downloaded. After reviewing the documentation, the JUnit test source code and even after searching Google extensively, I still can't figure out how to use the Caching package. The documentation is very short, as if it was written for someone who has been using Guava's library, not for a newbie like me. I just wish there were more real-world examples of how to use the Caching package properly.
Let's say I want to build a cache of 10 non-expiring items with a Least Recently Used (LRU) eviction method. So, from the example found in the API, I built my code like the following:
Cache<String, String> mycache = CacheBuilder.newBuilder()
        .maximumSize(10)
        .build(
                new CacheLoader<String, String>() {
                    public String load(String key) throws Exception {
                        return something; // ?????
                    }
                });
Since the CacheLoader is required, I have to include it in the build method of CacheBuilder. But I don't know how to return the proper value from mycache.
To add an item to mycache, I use the following code:
mycache.asMap().put("key123", "value123");
To get an item from mycache, I use this method:
mycache.get("key123")
The get method will always return whatever value I returned from CacheLoader's load method instead of getting the value from mycache. Could someone kindly tell me what I missed?
Guava's Cache type is generally intended to be used as a computing cache. You don't usually add values to it manually. Rather, you tell it how to load the expensive to calculate value for a key by giving it a CacheLoader that contains the necessary code.
A typical example is loading a value from a database or doing an expensive calculation.
private final FooDatabase fooDatabase = ...;

private final LoadingCache<Long, Foo> cache = CacheBuilder.newBuilder()
        .maximumSize(10)
        .build(new CacheLoader<Long, Foo>() {
            public Foo load(Long id) {
                return fooDatabase.getFoo(id);
            }
        });

public Foo getFoo(long id) {
    // never need to manually put a Foo in... it will be loaded from the DB if needed
    return cache.getUnchecked(id);
}
Also, I tried the example you gave and mycache.get("key123") returned "value123" as expected.
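For what it's worth, if you mainly want to populate the cache manually, later Guava releases (11 and up, so newer than the 10.0.1 mentioned in the question) let you build a Cache without a CacheLoader; a minimal sketch:

// a manual cache with no CacheLoader (requires Guava 11+)
Cache<String, String> myCache = CacheBuilder.newBuilder()
        .maximumSize(10)
        .build();

myCache.put("key123", "value123");
String value = myCache.getIfPresent("key123"); // "value123", or null if the key is absent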
