I have a large number of documents (around 34,719,074) in a single type of an index (ES 2.4.4). While searching, my ES cluster comes under heavy load (search latency, CPU usage, JVM memory, and load average all spike) when the "from" parameter is high (greater than 100,000, with the "size" parameter held constant). Is there a specific reason for this? My query looks like:
{
  "explain": false,
  "size": 100,
  "from": <>,
  "_source": {
    "excludes": [],
    "includes": [
      <around 850 fields>
    ]
  },
  "sort": [
    <sorting on a string field>
  ]
}
This is a classic problem of deep pagination. You may want to read the Elasticsearch documentation on pagination. Essentially, getting the next set of documents after skipping 100,000 of them is a memory-intensive task: to produce a result set starting at offset 100,000, each shard has to fetch and process (rank, sort, etc.) 100,000+ documents, and the coordinating node then has to merge and re-sort them. Ranking and sorting a small result set takes far less time than doing so on a large one.
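If you don't need random access to arbitrary offsets (for example, you're exporting or walking through the whole result set), the scroll API available in 2.x sidesteps the deep-pagination cost. A minimal sketch, where my-index and my_string_field are hypothetical placeholders for your index and sort field:

POST /my-index/_search?scroll=1m
{
  "size": 100,
  "sort": [ "my_string_field" ],
  "query": { "match_all": {} }
}

Each response carries a _scroll_id; pass it back to fetch the next batch instead of re-running the query with a larger "from":

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}

If you don't care about ordering while scrolling, sorting on "_doc" (available from 2.1) is the cheapest option.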
There is an aggregation to identify duplicate records:
{
  "size": 0,
  "aggs": {
    "myfield": {
      "terms": {
        "field": "myfield.keyword",
        "size": 250,
        "min_doc_count": 2
      }
    }
  }
}
However, it misses many duplicates because of the low size. The actual cardinality of the field is over 2 million. If size is raised to the actual cardinality or some other much larger number, all of the duplicate documents are found, but the operation takes 5x longer to complete.
If I change the size to a larger number, should I expect slow performance or other adverse effects on other operations while this is running?
Yes, the size param is critical for Elasticsearch aggregation performance. If you set it to a very big number like 10,000 (a limit imposed by Elasticsearch, which you can raise via the search.max_buckets setting), it will have an adverse impact not only on the aggregation you are running but on every other operation in the cluster.
You are using a terms aggregation, which is a bucket aggregation; you can read more about it in the Elasticsearch documentation.
Note: the reason latency increases as you increase size is that Elasticsearch has to do significant work to create that many buckets and compute the entries for each of them.
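If you're on Elasticsearch 6.1 or later, a common alternative for enumerating all duplicates on a high-cardinality field is to page through the buckets with a composite aggregation rather than requesting millions of buckets at once. A minimal sketch; note that composite does not support min_doc_count, so buckets with a doc_count of 1 have to be filtered out on the client side:

{
  "size": 0,
  "aggs": {
    "dupes": {
      "composite": {
        "size": 10000,
        "sources": [
          { "myfield": { "terms": { "field": "myfield.keyword" } } }
        ]
      }
    }
  }
}

Each response includes an after_key; pass it back as "after" in the next request to fetch the following page. This keeps per-request memory bounded no matter how high the field's cardinality is.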
I have an Elasticsearch index with a default index sort on price:
shop_prices_sort_index
"sort" : {
"field" : "enrich.price",
"order" : "desc"
},
If I insert 10 documents with these prices:
100, 98, 10230, 34, 1, 23, 777, 2323, 3, 109
and fetch results using /_search, it returns the documents in descending price order by default:
10230, 2323...
But if I distribute my documents across 3 shards, the same query returns a different sequence of products:
100, 98, 34...
I am really stuck here. I am not sure if I am missing something basic, or whether I need extra settings to make a sorted index behave correctly.
PS: I also tried 'routing' and 'preference', but no luck.
Any help is much appreciated.
When configuring index sorting, you're only making sure that each segment inside each shard is properly sorted. The goal of index sorting is to enable additional optimizations at search time; it does not define the order in which hits are returned.
Due to the distributed nature of ES, when your index has several shards, each shard will be properly sorted internally, but your search query still needs to specify the sort explicitly.
So if your index settings contain the following to apply sorting at indexing time
"sort" : {
  "field" : "enrich.price",
  "order" : "desc"
}
then your search queries also need to contain the same sort specification at query time:
"sort" : {
  "field" : "enrich.price",
  "order" : "desc"
}
By using index sorting you'll hit a little overhead at indexing time, but your search queries will be a bit faster in the end.
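Put together, a minimal search against that index would look like the sketch below. The track_total_hits line is an assumption on my part (it requires a version that supports it, 6.x+; drop it otherwise), but disabling hit counting is what lets each shard terminate early once it has collected enough sorted documents:

GET /shop_prices_sort_index/_search
{
  "track_total_hits": false,
  "query": { "match_all": {} },
  "sort": [
    { "enrich.price": "desc" }
  ]
}

Because the query sort matches the index sort, shards can stop scanning as soon as the top documents are found, which is where the speedup from index sorting actually comes from.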
The query:
{
  "aggregations": {
    "sigTerms": {
      "significant_terms": {
        "field": "translatedTitle"
      },
      "aggs": {
        "assocs": {
          "significant_terms": {
            "field": "translatedTitle"
          }
        }
      }
    }
  },
  "size": 0,
  "from": 0,
  "query": {
    "range": {
      "timestamp": {
        "lt": "now+1d/d",
        "gte": "now/d"
      }
    }
  },
  "track_scores": false
}
Error:
{
  "bytes_limit": 6844055552,
  "bytes_wanted": 6844240272,
  "reason": "[request] Data too large, data for [<reused_arrays>] would be larger than limit of [6844055552/6.3gb]",
  "type": "circuit_breaking_exception"
}
Index size is 5G. How much memory does the cluster need to execute this query?
You can try to increase the request circuit breaker limit to 41% (default is 40%) in your elasticsearch.yml config file and restart your cluster:
indices.breaker.request.limit: 41%
Or if you prefer to not restart your cluster you can change the setting dynamically using:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.breaker.request.limit" : "41%"
  }
}'
Judging by the numbers in the error (i.e. "bytes_limit": 6844055552, "bytes_wanted": 6844240272), you're only missing ~180 KB of heap. Since the 6.3 GB limit is 40% of the heap, your total heap is ~17 GB, so increasing the limit by 1% to 41% gives the request breaker ~170 MB of additional headroom, which should be sufficient.
Just make sure to not increase this value too high, as you run the risk of going OOM since the request circuit breaker also shares the heap with the fielddata circuit breaker and other components.
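Before and after changing the limit, it's worth checking actual breaker usage. The node stats API exposes estimated sizes and tripped counts per breaker:

curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'

If the request breaker's tripped count keeps climbing after the bump, the query itself needs rethinking rather than the limit.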
I am not sure what you are trying to do, but I'm curious to find out. Since you get that exception, I can assume the cardinality of that field is not small. You are basically trying to see, I guess, the relationships between all the terms in that field, based on significance.
The first significant_terms aggregation will consider all the terms from that field and establish how "significant" they are (calculating frequencies of that term in the whole index and then comparing those with the frequencies from the range query set of documents).
After it has done that (for all the terms), you want a second significant_terms aggregation that repeats the first step, but now once for each term found. That's going to be painful. Basically, you are computing number_of_terms * number_of_terms significant_terms calculations.
The big question is what are you trying to do?
If you want to see the relationships between all the terms in that field, that's going to be expensive for the reasons explained above. My suggestion is to run a first significant_terms aggregation, take the top 10 terms or so, and then run a second query with another significant_terms aggregation, limiting the terms by wrapping it in a parent terms aggregation whose include list contains only those 10 terms from the first query (see the sketch below).
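A minimal sketch of that second query; the values in "include" are hypothetical placeholders for the top terms returned by the first query (this assumes a version where terms include accepts an array of exact values, 2.x+):

{
  "size": 0,
  "query": {
    "range": {
      "timestamp": {
        "lt": "now+1d/d",
        "gte": "now/d"
      }
    }
  },
  "aggs": {
    "topTerms": {
      "terms": {
        "field": "translatedTitle",
        "include": ["term1", "term2", "term3"]
      },
      "aggs": {
        "assocs": {
          "significant_terms": {
            "field": "translatedTitle"
          }
        }
      }
    }
  }
}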
You can also take a look at the sampler aggregation and use it as the parent of a single significant_terms aggregation, as sketched below.
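A sketch of the sampler variant, keeping your existing range query and using a hypothetical shard_size; significance is then computed only over the top-matching sample on each shard instead of every document:

{
  "size": 0,
  "aggs": {
    "sample": {
      "sampler": { "shard_size": 500 },
      "aggs": {
        "sigTerms": {
          "significant_terms": { "field": "translatedTitle" }
        }
      }
    }
  }
}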
Also, I don't think increasing the circuit breaker limit is the real solution. Those limits were chosen for a reason. You can increase it and maybe it will work, but it has to make you ask yourself whether this is the right query for your use case (it doesn't sound like it is). Also, the limit value shown in the exception might not be the final one: reused_arrays refers to a resizable array class in Elasticsearch, so if more elements are needed the array grows, and you may hit the circuit breaker again at another value.
Circuit breakers are designed to deal with situations where request processing needs more memory than is available. You can set the limit with the following request:
PUT /_cluster/settings
{
  "persistent" : {
    "indices.breaker.request.limit" : "45%"
  }
}
You can get more information at:
https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/index-modules-fielddata.html
The term filter that is used:
curl -XGET 'http://localhost:9200/my-index/my-doc-type/_search' -d '{
  "filter": {
    "term": {
      "void": false
    }
  },
  "fields": [
    "user_id1",
    "user_name",
    "date",
    "status",
    "q1",
    "q1_unique_code",
    "q2",
    "q3"
  ],
  "size": 50000,
  "sort": [
    "date_value"
  ]
}'
The void field is a boolean field.
The index store size is 504 MB. The Elasticsearch setup consists of a single node, and the index has a single shard with 0 replicas. The Elasticsearch version is 0.90.7.
The fields listed above are only the first 8; the actual term filter we execute names 350 fields.
We noticed memory usage spiking by about 2-3 GB, even though the store size is only 504 MB.
Running the query multiple times seems to continuously increase the memory.
Could someone explain why this memory spike occurs?
A few things stand out:
It's quite an old version of Elasticsearch.
You're returning 50,000 records in one request.
You're sorting those 50k records.
Your documents are pretty big, with 350 fields requested per document.
Could you instead return a smaller number of records and page through them? Scan and scroll could help you here (see the sketch below).
It's also not clear whether you've stored individual fields; if you haven't, reading the full _source from disk to extract 350 fields may be incurring the memory overhead.
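A minimal scan-and-scroll sketch, assuming the 0.90-era API (the first call with search_type=scan returns no hits, only a _scroll_id; each follow-up call returns up to size documents per shard):

curl -XGET 'http://localhost:9200/my-index/my-doc-type/_search?search_type=scan&scroll=1m' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "term": { "void": false } }
    }
  },
  "size": 500
}'

curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m' -d '<_scroll_id from the previous response>'

Note that scan gives up sorting, which is exactly what makes it cheap; if you need the results in date order, sort them client-side after the export.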
We have a two-node cluster (VMs in a private cloud, 64 GB of RAM and 8 CPU cores per node, CentOS), a few small indices (~1 million documents each) and one big index with ~220 million docs (2 shards, 170 GB of disk). 24 GB of memory is allocated to Elasticsearch on each box.
Document structure:
{
  'article_id': {
    'index': 'not_analyzed',
    'store': 'yes',
    'type': 'long'
  },
  'feed_id': {
    'index': 'not_analyzed',
    'store': 'yes',
    'type': 'string'
  },
  'title': {
    'index': 'analyzed',
    'type': 'string'
  },
  'content': {
    'index': 'analyzed',
    'type': 'string'
  },
  'lang': {
    'index': 'not_analyzed',
    'type': 'string'
  }
}
It takes about 1-2 seconds to run the following query:
{
  "query" : {
    "multi_match" : {
      "query" : "some search term",
      "fields" : [ "title", "content" ],
      "type": "phrase_prefix"
    }
  },
  "size": 20,
  "fields" : ["article_id", "feed_id"]
}
Are we hitting hardware limits at this point or are there ways to optimize the query or data structure to increase performance?
Thanks in advance!
It's possible you are hitting the limits of your hardware, but there are a few things you can do to your query first to help optimize it.
Max Expansions
The first thing I would do is limit max_expansions. Prefix queries work by generating a list of indexed terms that start with the last token in your query. In your search query "some search term", the last token "term" would be expanded using "term" as the prefix seed. You may generate a list like this:
term
terms
terminate
terminator
termite
The prefix expansion process runs through your terms dictionary looking for any word which matches the seed prefix. By default, this list is unbounded, which means you can generate a very large list of expansions.
The second phase rewrites your original query into a series of term queries using the expansions. The bigger the expansion list, the more terms are evaluated against your index, with a corresponding decrease in speed.
If you limit the expansion process to something reasonable, you can maintain speed and still usually get good prefix matching:
{
  "query" : {
    "multi_match" : {
      "query" : "some search term",
      "fields" : [ "title", "content" ],
      "type": "phrase_prefix",
      "max_expansions" : 100
    }
  },
  "size": 20,
  "fields" : ["article_id", "feed_id"]
}
You'll have to play with how many expansions you want. It is a tradeoff between speed and recall.
Filtering
In general, the other thing you can add is filtering. If there is some criterion you can filter on, you can potentially improve speed dramatically. Currently, your query executes against the entire index (~220m documents), which is a lot to evaluate. If you can add a filter that cuts that number down, you can see much improved latency.
At the end of the day, the fewer documents a query evaluates, the faster it will run. Filters decrease the number of docs that a query sees, are cached, operate very quickly, etc.
Your situation may not have any applicable filters, but if it does, they can really help!
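For example, if searches are usually language-specific, the not_analyzed lang field in your mapping is a natural candidate. A sketch using the pre-2.x filtered query syntax, where "en" is a hypothetical value:

{
  "query" : {
    "filtered" : {
      "query" : {
        "multi_match" : {
          "query" : "some search term",
          "fields" : [ "title", "content" ],
          "type": "phrase_prefix",
          "max_expansions" : 100
        }
      },
      "filter" : {
        "term" : { "lang" : "en" }
      }
    }
  },
  "size": 20,
  "fields" : ["article_id", "feed_id"]
}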
File System Caching
This advice is entirely dependent on the rest of the system. If you aren't fully utilizing your heap (24gb) because you are doing simple search and filtering (e.g. not faceting / geo / heavy sorts / scripts), you may be able to reallocate some of your heap to the file system cache.
For example, if your max heap usage peaks at 12gb, it may make sense to decrease the heap size to 15gb. The ~9gb you free up goes back to the OS and helps cache segments, which boosts search performance simply by the fact that more operations are diskless.
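On installs of that era the heap is typically set through the ES_HEAP_SIZE environment variable; a sketch, assuming a package install that reads /etc/sysconfig/elasticsearch (the path varies by distribution):

# /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=15g

Restart the node afterwards and verify the new heap size with curl localhost:9200/_nodes/stats/jvm?pretty before relying on the freed memory for the file system cache.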