Elastic - Search across object without key specification

I have an index with hundreds of millions of docs, and each of them has an object "histogram" with a value for each day:
"_source": {
"proxy": {
"histogram": {
"2017-11-20": 411,
"2017-11-21": 34,
"2017-11-22": 0,
"2017-11-23": 2,
"2017-11-24": 1,
"2017-11-25": 2692,
"2017-11-26": 11673
}
}
}
And I need one of two solutions:
1. Find docs where any value inside the histogram object is greater than XX.
2. Find docs where the average of the values in the histogram object is greater than XX.
For point 1 I can use a range query, but I must specify the exact name of the field (e.g. proxy.histogram.2017-11-20), and the wildcard version (proxy.histogram.*) does not work.
For point 2 I have only found the avg aggregation in ES, but I don't want to run an aggregation over these fields after the query (because of the amount of data); I only want to search for the docs.
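For reference, a minimal sketch of the range query that works for point 1 when the field name is spelled out exactly (the index name my_index and the threshold 100 are placeholders):
POST my_index/_search
{
  "query": {
    "range": {
      "proxy.histogram.2017-11-20": { "gt": 100 }
    }
  }
}
Replacing the field name with proxy.histogram.* matches the behaviour described above: the range query does not expand wildcards in field names.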

Related

restructure elasticsearch index to allow filtering on sum of values

I have an index of products.
Each product has several variants (a few to hundreds; each has a color & size, e.g. Red/S).
Each variant is available (in a certain quantity) at several warehouses (around 100 warehouses).
Warehouses have codes, e.g. AB, XY, CD, etc.
If I had my choice, I'd index it as:
stock: {
  Red: {
    S: { AB: 100, XY: 200, CD: 20 },
    M: { AB: 0, XY: 500, CD: 20 },
    2XL: { AB: 5, XY: 0, CD: 9 }
  },
  Blue: {
    ...
  }
}
Here's a kind of customer query I might receive:
Show me all products that have Red.S in stock (minimum 100) at warehouses AB & XY.
So this would probably be a filter like:
Red.S.AB > 100 AND Red.S.XY > 100
I'm not writing the whole filter query here, but it's straightforward in Elasticsearch.
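Under that hypothetical flat mapping, the filter would just be two range clauses in a bool query (a sketch, assuming the per-warehouse counts are mapped as integers):
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "Red.S.AB": { "gte": 100 } } },
        { "range": { "Red.S.XY": { "gte": 100 } } }
      ]
    }
  }
}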
We might also get SUM queries, e.g. the sum of inventories at AB & XY should be > 500.
That'd be easy through a script filter, say Red.S.AB + Red.S.XY > 500
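That script filter might look roughly like this (again a sketch against the hypothetical flat mapping, reading the values from doc values):
{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "source": "doc['Red.S.AB'].value + doc['Red.S.XY'].value > 500",
            "lang": "painless"
          }
        }
      }
    }
  }
}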
The problem is, given 100 warehouses, 100 sizes, and 25 colors, this easily needs 100 * 100 * 25 = 250k mappings. Elasticsearch simply can't handle that many keys.
The easy answer is to use nested documents, but nested documents pose a particular problem: we cannot sum across a given selection of nested documents, and nested docs are slow, especially when we're going to have 250k of them per product.
I'm open to solutions outside Elasticsearch as well; we're on a Rails/Postgres stack.
You have your product index with variants; that's fine, but I'd use another index for managing anything related to the multi-warehouse stock: one document per product/size/color/warehouse with the related count. For instance:
{
  "product": 123,
  "color": "Red",
  "size": "S",
  "warehouse": "AB",
  "quantity": 100
}
{
  "product": 123,
  "color": "Red",
  "size": "S",
  "warehouse": "XY",
  "quantity": 200
}
{
  "product": 123,
  "color": "Red",
  "size": "S",
  "warehouse": "CD",
  "quantity": 20
}
etc...
That way, you'll be much more flexible with your stock queries: all you need is to filter on the fields (product, color, size, warehouse) and aggregate on the quantity field with sums, averages, or whatever else you might think of.
You will probably need to leverage the bucket_selector pipeline aggregation in order to decide whether sums are above or below a desired threshold.
It's also much easier to maintain the stock by simply indexing the new quantity for any given combination than by having to update the master product document every time an item leaves the stock.
No script, no nested documents required.
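A sketch of such a query, assuming an index named stock with keyword mappings for color, size, and warehouse, and the sum-over-AB-and-XY > 500 requirement from the question:
POST stock/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "term": { "color": "Red" } },
        { "term": { "size": "S" } },
        { "terms": { "warehouse": ["AB", "XY"] } }
      ]
    }
  },
  "aggs": {
    "per_product": {
      "terms": { "field": "product" },
      "aggs": {
        "total_quantity": { "sum": { "field": "quantity" } },
        "enough_stock": {
          "bucket_selector": {
            "buckets_path": { "total": "total_quantity" },
            "script": "params.total > 500"
          }
        }
      }
    }
  }
}
Each per_product bucket that survives the bucket_selector is a product whose combined AB + XY stock exceeds 500.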
The best possible solution is to create a separate index per warehouse, where each warehouse index has one document per product/size/color with the related values, like this:
{
  "product": 123,
  "color": "Red",
  "size": "S",
  "warehouse": "AB",
  "quantity": 100
}
This reduces your mappings to 100 * 25 = 2,500 per index.
For the rest of the operations, @Val has covered it in his answer, which is quite impressive.
Coming to external solutions: you want to store data, search it, and fetch it. Elasticsearch and Apache Solr are the best-known search engines for this kind of task. I have not tried Apache Solr, but I would highly recommend Elasticsearch because of its features, active community support, and fast searching. Search can be tuned further with analyzers and tokenizers, and features like full-text and term-level queries let you customize searching to the problem at hand.

Elasticsearch "size" value not working in terms aggregation with partitions

I am trying to paginate over a specific field using the terms aggregation with partitions.
The problem is that the number of returned terms for each partition is not equal to the size parameter that I set.
These are the steps that I am doing:
Retrieve the number of unique values for the field with a "cardinality" aggregation.
In my data, the result is 21.
From the web page, the user wants to display a table with 10 items per page.
if unique_values % page_size != 0:
    partitions_number = (unique_values // page_size) + 1
else:
    partitions_number = unique_values // page_size
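With the numbers in this question, unique_values = 21 and page_size = 10, so 21 % 10 != 0 and partitions_number = (21 // 10) + 1 = 3, which is the num_partitions used in the query below.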
Then I am making this simple query:
POST my_index/_search?pretty
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "match": {
            "field_to_paginate": "foo"
          }
        }
      ]
    }
  },
  "aggs": {
    "by_pchostname": {
      "terms": {
        "size": 10,
        "field": "field_to_paginate",
        "include": {
          "partition": 0,
          "num_partitions": 3
        }
      }
    }
  }
}
I am expecting to retrieve 10 results, but when I run the query I get only 7.
What am I missing here? Do I need to use a different solution?
As a side note, I can't use the composite aggregation because I need to sort results by doc_count over the whole dataset.
Partitions in the terms aggregation divide the terms into roughly equal chunks, not into pages of size size.
In your case the number of partitions (num_partitions) is 3, so each partition holds 21 / 3 == 7 terms.
Partitions are meant for working through large numbers of terms, on the order of thousands.
You may be able to leverage the shard_size parameter; my suggestion is to read this part of the manual and experiment with the shard_size param.
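For illustration, shard_size sits alongside size in the terms aggregation (the value 100 below is an arbitrary assumption; this is just the aggregation fragment of the request above):
"terms": {
  "field": "field_to_paginate",
  "size": 10,
  "shard_size": 100,
  "include": {
    "partition": 0,
    "num_partitions": 3
  }
}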
The terms aggregation does not allow pagination. Use the composite aggregation instead (requires ES >= 6.1.0). Below is a quote from the reference docs:
If you want to retrieve all terms or all combinations of terms in a
nested terms aggregation you should use the Composite aggregation
which allows to paginate over all possible terms rather than setting a
size greater than the cardinality of the field in the terms
aggregation. The terms aggregation is meant to return the top terms
and does not allow pagination.
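A sketch of the composite version, reusing the field name from the question; each response carries an after_key that you pass back as after to fetch the next page:
POST my_index/_search
{
  "size": 0,
  "aggs": {
    "paged_terms": {
      "composite": {
        "size": 10,
        "sources": [
          { "by_field": { "terms": { "field": "field_to_paginate" } } }
        ]
      }
    }
  }
}
Note the asker's caveat still applies: composite buckets come back in key order, not ordered by doc_count, so this does not help when pages must be sorted by document count.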

Elasticsearch filter on aggregation result (for search and aggregation)

Part of this question is related to: Elasticsearch filter on aggregation
Context
Let's say my Elasticsearch index contains some orders. Each order has one field price and one field amount. This results in an index that looks like this:
[
  {
    "docKey": "order01",
    "user": "1",
    "price": 8,
    "amount": 20
  },
  {
    "docKey": "order02",
    "user": "1",
    "price": 14,
    "amount": 3
  },
  {
    "docKey": "order03",
    "user": "2",
    "price": 5,
    "amount": 1
  },
  {
    "docKey": "order04",
    "user": "2",
    "price": 10,
    "amount": 3
  }
]
What I would like to do
What I want to do is filter on some values aggregated per user. I want to do this kind of filter for search, and also in order to apply further aggregations on the result. For example, I would like to retrieve the documents of all users whose average order price is in the range 9-14.
User 1 has an average order price of (8 + 14) / 2 = 11, so both of his orders are kept.
User 2 has an average order price of (5 + 10) / 2 = 7.5, so both of his orders are dropped.
That was the easy part. After filtering down to user 1, I want to run more aggregations on the result.
So, for example: I want the distribution of the per-user average of the amount field across the buckets [0,10] and [10,20], for all users whose average order price is in the range 9-14.
The answer I expect for this question is 0 in the bucket [0,10] and 1 in the bucket [10,20] (only user 1 is kept because of his average price; his average amount is (20 + 3) / 2 = 11.5, so he lands in the bucket [10,20]).
What I have tried
I have managed to do my filter in order to retrieve the users whose average order price is in the range 9-14. I did this by first doing a terms aggregation on the user field, then a sub-aggregation that is an avg aggregation on the price, then a bucket_selector pipeline aggregation that checks whether the previously computed average price is between 9 and 14.
I have also managed to do the aggregation I wanted, but without the previous filter: I did exactly the same thing as for the filter, once per range, and then counted the number of results in each bucket.
I haven't found any way to apply another aggregation on top of a bucket_selector result, so I could not first do the filter and then apply the range...
Also, these solutions are not elegant. I don't think they will scale up, as a big part of the documents needs to be returned in the response and processed further (even if it's done offline I'd prefer to avoid this, and I might be limited by the maximum result size of an aggregation?).
I managed to find a solution, but it's not elegant and might scale poorly (a sketch of it follows this list):
1. Make a terms aggregation on the user.
2. As a sub-aggregation of the terms aggregation, do an avg aggregation that computes the average of the price.
3. As another sub-aggregation of the terms aggregation, do an avg aggregation that computes the average of the amount.
4. Do a bucket_selector pipeline aggregation that filters to only keep avg_price in the range [9,14].
5. Do a bucket_selector pipeline aggregation that filters to only keep avg_amount in [0,10].
6. Do a "count" bucket_script pipeline aggregation (with a script returning 1).
7. Do a sum_bucket pipeline aggregation that sums the counts.
8. Repeat steps 5-7 for every range wanted ([0,10], [10,20]).
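A sketch of those steps for the [0,10] amount range, assuming an index named orders with a keyword-mapped user field (the [10,20] variant repeats the same pattern with different bounds):
POST orders/_search
{
  "size": 0,
  "aggs": {
    "per_user": {
      "terms": { "field": "user" },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } },
        "avg_amount": { "avg": { "field": "amount" } },
        "keep_price_range": {
          "bucket_selector": {
            "buckets_path": { "p": "avg_price" },
            "script": "params.p >= 9 && params.p <= 14"
          }
        },
        "keep_amount_range": {
          "bucket_selector": {
            "buckets_path": { "a": "avg_amount" },
            "script": "params.a >= 0 && params.a <= 10"
          }
        },
        "count": {
          "bucket_script": {
            "buckets_path": { "c": "_count" },
            "script": "1"
          }
        }
      }
    },
    "users_in_amount_0_10": {
      "sum_bucket": { "buckets_path": "per_user>count" }
    }
  }
}
For the sample data this returns 0 (user 1's average amount is 11.5, outside [0,10]); the [10,20] variant would return 1.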

ElasticSearch: Metric aggregation and doc values / field-data

How does ES internally implement metric aggregations?
Suppose documents in the index have below structure:
{
  "category": "A",
  "measure": 20
}
For the query below, which does a terms aggregation on category and calculates sum(measure), would the 'measure' field values
be extracted from the documents (i.e. _source) and summed, or
be taken from the doc values / field data of the 'measure' field?
Query:
{
  "size": 0,
  "aggs": {
    "cat_aggs": {
      "terms": {
        "field": "category"
      },
      "aggs": {
        "sumAgg": {
          "sum": { "field": "measure" }
        }
      }
    }
  }
}
From the official documentation on metrics aggregations (emphasis added):
The aggregations in this family compute metrics based on values extracted in one way or another from the documents that are being aggregated. The values are typically extracted from the fields of the document (using the field data), but can also be generated using scripts.
If you're using a newer ES 2.x version, then doc_values have become the norm over field data.
All fields which support doc values have them enabled by default. If you are sure that you don’t need to sort or aggregate on a field, or access the field value from a script, you can disable doc values in order to save disk space
So to answer your question clearly: metric aggregations are computed based on either field data or doc values that were stored at indexing time, i.e. not by parsing _source at query time, unless you're doing it from a script that accesses the _source directly.
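As an illustration of the quoted doc_values note, disabling them on a field looks like this (a sketch in current typeless mapping syntax; on ES 2.x the properties block would sit under a mapping type):
PUT my_index
{
  "mappings": {
    "properties": {
      "measure": {
        "type": "integer",
        "doc_values": false
      }
    }
  }
}
With doc_values disabled on a numeric field, the sum aggregation above would either fall back to in-memory field data (older releases) or fail outright (recent releases), so disable them only for fields you will never sort, aggregate, or script on.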

Possible to have a document always return above certain position

I've got a bunch of documents from a query, sorted by a modified date. However, I'd like certain documents (identified by a field value) to always appear in the top ten results, regardless of whether there are ten or more documents with a more recent modified date.
From what I've read about the various ways of sorting in Elasticsearch (score, boost, scripts), I don't think I have any way of determining the actual position of a document in the search results, let alone a way of manipulating the score to push a document into the top ten.
Assuming that you have a field called "important_field" which contains 1 for the documents you want on top and, say, 0 for all other documents, you can use multi-field sorting as below:
{
  "sort": [
    { "important_field": { "order": "desc" } },
    { "modified_date": { "order": "desc" } }
  ]
}
This sorts by important_field first and, where those values tie, by modified_date. So all documents with important_field value 1 come out on top, and the rest are still sorted by modified_date.
