ElasticSearch retrieves documents slowly - performance

I'm using the Java API to retrieve records from Elasticsearch, and it takes approximately 5 seconds to retrieve 100,000 documents (records/rows) in my Java application.
Is that slow for Elasticsearch, or is it normal?
I tried to get better performance, but without result. Here is what I did:
Set the Elasticsearch heap size to 3 GB (it was the 1 GB default): -Xms3g -Xmx3g
Migrated Elasticsearch to an SSD from a 7200 RPM hard drive
Retrieved only one field instead of 30
Here is my Java implementation:
private void getDocuments() {
    int counter = 1;
    try {
        lgg.info("started");
        TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
                .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
        SearchResponse scrollResp = client.prepareSearch("ebpp_payments_union")
                .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
                .setQuery(QueryBuilders.matchAllQuery())
                .setScroll(new TimeValue(1000))
                .setFetchSource(new String[] { "payment_id" }, null)
                .setSize(10000)
                .get();
        do {
            for (SearchHit hit : scrollResp.getHits().getHits()) {
                if (counter % 100000 == 0) {
                    lgg.info(counter + "--" + hit.getSourceAsString());
                }
                counter++;
            }
            scrollResp = client.prepareSearchScroll(scrollResp.getScrollId())
                    .setScroll(new TimeValue(60000))
                    .execute()
                    .actionGet();
        } while (scrollResp.getHits().getHits().length != 0);
        client.close();
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
I know that TransportClient is deprecated; I also tried the RestHighLevelClient, but it did not change anything.
Do you know how to get better performance?
Should I change something in ElasticSearch or modify my Java code?

Performance troubleshooting/tuning is hard to do without understanding everything involved, but that does not seem very fast. Because this is a single-node cluster, you're going to run into some performance issues. If this were a production cluster, you would have at least one replica for each shard, which can also be used for reading.
A few other things you can do:
Index your documents with routing based on your most frequently searched attribute - this writes all documents with the same attribute value to the same shard, so ES does less work when reading (this won't help you, since you have a single shard)
Add multiple replica shards so you can fan out the reads across nodes in the cluster (once again, you need to actually have a cluster)
Don't put the master role on the same boxes as your data - if you have a moderate or large cluster, you should have boxes that are neither master nor data but are the boxes your app connects to, so they can manage the coordination work for the searches and let the data nodes focus on data.
Use "query_then_fetch" - unless you are using weighted searches, in which case you should probably stick with DFS. A sketch of this change follows below.
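As a minimal sketch (not a definitive fix), here is that change applied to the scroll from the question; the index name, field, and page size are unchanged from the original code, and QUERY_THEN_FETCH is simply the default search type, so the setter could also be dropped entirely:
// Same scroll as in the question, but without DFS_QUERY_THEN_FETCH.
// DFS adds an extra round-trip per shard to gather global term statistics,
// which a match_all scroll over every document does not benefit from.
SearchResponse scrollResp = client.prepareSearch("ebpp_payments_union")
        .setSearchType(SearchType.QUERY_THEN_FETCH) // the default; this line can be omitted
        .setQuery(QueryBuilders.matchAllQuery())
        .setScroll(new TimeValue(60000))
        .setFetchSource(new String[] { "payment_id" }, null)
        .setSize(10000)
        .get();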

I see three possible axes of optimization:
1/ Sort your documents on the _doc key:
Scroll requests have optimizations that make them faster when the sort
order is _doc. If you want to iterate over all documents regardless of
the order, this is the most efficient option:
(documentation source)
2/ Reduce your page size; 10000 seems high. Can you run tests with reduced values like 5000 or 1000?
3/ Remove the source filtering
.setFetchSource(new String[] { "payment_id" }, null)
Source filtering can be heavy, since the Elasticsearch node needs to read the source, transform it into an object, and then filter it. So can you try removing it? The network load will increase, but it's a trade-off :)
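As a rough sketch of points 1 and 2 applied to the code in the question (index, field, and the rest of the scroll loop unchanged; SortOrder comes from org.elasticsearch.search.sort):
// Scroll sorted by _doc with a smaller page size; _doc order lets
// Elasticsearch return hits in index order, the cheapest way to
// iterate over everything.
SearchResponse scrollResp = client.prepareSearch("ebpp_payments_union")
        .setQuery(QueryBuilders.matchAllQuery())
        .addSort("_doc", SortOrder.ASC)
        .setScroll(new TimeValue(60000))
        .setFetchSource(new String[] { "payment_id" }, null)
        .setSize(5000) // try 5000 or 1000 instead of 10000
        .get();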

OpenSearch Query Intermittent slow response

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 1.1
Describe the issue:
Intermittent slowness in the query response.
Configuration:
Cluster:
Data nodes: c5.large.search (3)
Dedicated master nodes: r4.large.search (3)
Storage type: EBS
EBS volume type: General Purpose (SSD) - gp2
EBS size: 10 GiB
Relevant Logs or Screenshots:
The collection size is 1 MB, with the settings below:
{
  "spec_proc_comb_exp": {
    "settings": {
      "index": {
        "refresh_interval": "86400s",
        "number_of_shards": "5",
        "plugins": {
          "index_state_management": {
            "rollover_skip": "true"
          }
        },
        "provided_name": "spec_proc_comb_exp",
        "creation_date": "1671629334835",
        "number_of_replicas": "2",
        "uuid": "aht7O4QQTV6WtozcVCfi1A",
        "version": {
          "created": "135227827"
        }
      }
    }
  }
}
Query run:
GET spec_proc_comb_exp/_search
{
  "query": {
    "bool": {
      "must": [{
        "multi_match": {
          "query": "dent",
          "fields": ["name", "alias_terms"],
          "fuzziness": "4"
        }
      }],
      "filter": {
        "match_phrase": {
          "category": "Specialty"
        }
      }
    }
  }
}
Problem:
We are using OpenSearch as the backend for exact/fuzzy matching behind a UI search bar. The index currently used is pretty small (1 MB). We see an issue with unstable response times: most of the time the query takes around 100 ms, but intermittently (~10% of the time) it takes around 1000 ms or more.
I would ask the experts to please help troubleshoot this issue; I am new to OpenSearch and to search technology in general.
I have tried reviewing the cluster configuration. Detailed guidance on the troubleshooting steps would be very helpful.
Here are some notes to increase performance (a sketch of the adjusted query follows these notes).
Set number_of_shards to 1
It's recommended to keep the shard size between 10 and 50 GB, and a 1 MB index does not need 5 shards. To do that, you can reindex the data into a new index with 1 primary shard.
Use a term query rather than match_phrase
If you don't need the full-text search feature, you can use a term query. A term query is faster than match_phrase.
Decrease the fuzziness, or remove it if you can
The fuzzy query is an expensive query. If you can decrease it, the query will return faster.
You can enable slow logs to detect slow searches.
Note: your hardware looks good.
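As a hedged sketch of the term-query and fuzziness notes, using the OpenSearch Java high-level REST client (client below is assumed to be a RestHighLevelClient; field and index names are taken from the question, and whether category can be matched with a term query depends on its mapping, e.g. it may need a keyword sub-field):
// Sketch only: replaces the match_phrase filter with a term filter and caps
// fuzziness at AUTO instead of "4". The builders live under
// org.opensearch.index.query.QueryBuilders and org.opensearch.common.unit.Fuzziness.
QueryBuilder query = QueryBuilders.boolQuery()
        .must(QueryBuilders.multiMatchQuery("dent", "name", "alias_terms")
                .fuzziness(Fuzziness.AUTO))
        .filter(QueryBuilders.termQuery("category", "Specialty")); // assumes category is a keyword field

SearchRequest request = new SearchRequest("spec_proc_comb_exp")
        .source(new SearchSourceBuilder().query(query));
SearchResponse response = client.search(request, RequestOptions.DEFAULT);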

Share data between Elasticsearch data nodes on recipient request

I have two Elasticsearch data nodes, Slave and Master.
M and S can communicate with each other; however, for security reasons S cannot push data to M as it receives it. M must request data from S, and (assuming no other requirements on what data S exports) when this happens M receives the requested data from S.
S's data is then incorporated in to M's data.
Is this behaviour achievable with Elasticsearch? Unless I am mistaken, neither replication nor snapshotting achieves this behaviour, and while I am aware that I could use S's REST API to receive this data on M before purging the copied data from S, this solution seems clunky and error-prone.
Is there an elegant solution to achieve this architecture?
It is true that Cross-Cluster Replication (CCR) is a potential solution for this, but that solution requires the most expensive version of Elasticsearch, and there is a free alternative.
The elasticsearch input and output plugins for Logstash work for this, albeit with some tweaking to get them to behave exactly as you want.
Below is a crude example which queries one Elasticsearch node for data and exports it to another. This does mean that you need a Logstash instance between the Slave and Master nodes to handle this behaviour.
input {
  elasticsearch {
    docinfo => true              # Necessary to get metadata (index, id) for each document
    hosts => "192.168.0.1"       # Slave (queried) elasticsearch instance
    query => '{ "query": { "query_string": { "query": "*" } } }'  # Query to return documents; this example returns all data, which is bad if you combine it with the schedule below
    schedule => "* * * * *"      # Run periodically; this example runs every minute
  }
}
output {
  elasticsearch {
    hosts => "192.168.0.2:9200"  # Master (destination) elasticsearch instance
    index => "replica.%{[@metadata][_index]}"
    document_id => "%{[@metadata][_id]}"
  }
}

Discover historical trends in Elasticsearch (not visual)

I have some experience with Elasticsearch as log storage, but I'm stuck on basic trend recognition (where I need to compare the found documents to each other) over time periods.
An easy query would answer the following question:
Find all occurrences of document rows (a row is identified by a growing/continuous @timestamp value) where a specific field (e.g. thread_count) is growing for a fixed count of documents, or over a fixed time period.
So, say I have the thread_count of some application, logged every minute over a day, including a timestamp. If I specify that I'm looking for a growing trend over 10 minutes, the result should return documents or document sets where thread_count was greater than the value from the document a minute before, for at least 10 consecutive documents.
It is a task very similar to looking at a line graph and identifying the growing parts by eye.
Maybe I'm just missing the proper function name to search for. I'm not interested in visualization; I would like to find similar situations over the API and take the needed actions.
Any reference to documentation or simple example is welcome!
Well, a script cannot be used across documents, so you will have to use a watcher payload.
In your query, sort the results by date.
https://www.elastic.co/guide/en/elastic-stack-overview/6.3/how-watcher-works.html
A transform script over the payload could tell you whether a field is increasing (something like the following; I don't have access to an ES index right now, so treat it as a sketch):
"transform": {
  "script": {
    "lang": "painless",
    "source": "def hits = ctx.payload.hits.hits; def prev = -1; for (int j = 0; j < hits.size(); j++) { def current = hits[j]._source.thread_count; if (current <= prev) { return ['increasing': false]; } prev = current; } return ['increasing': true];"
  }
}
If you use Logstash to index your documents, take a look at the elapsed filter, it could be nice too: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elapsed.html
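If a watcher isn't a hard requirement, the same check can be done client-side over the search API. A rough sketch in Java, in the TransportClient style used earlier on this page; the index name "app-metrics" is hypothetical, and the thread_count / @timestamp fields come from the question:
// Fetch recent documents in time order and look for a run of at least
// 10 consecutive increases of thread_count.
SearchResponse resp = client.prepareSearch("app-metrics")
        .setQuery(QueryBuilders.matchAllQuery())
        .addSort("@timestamp", SortOrder.ASC)
        .setSize(1440) // one day of per-minute documents
        .get();

int run = 0;
long previous = Long.MIN_VALUE;
for (SearchHit hit : resp.getHits().getHits()) {
    long current = ((Number) hit.getSourceAsMap().get("thread_count")).longValue();
    run = (current > previous) ? run + 1 : 0;
    previous = current;
    if (run >= 10) {
        // growing trend over at least 10 consecutive minutes found
        break;
    }
}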

Elasticsearch 2.1: Result window is too large (index.max_result_window)

We retrieve information from Elasticsearch 2.1 and allow the user to page thru the results. When the user requests a high page number we get the following error message:
Result window is too large, from + size must be less than or equal
to: [10000] but was [10020]. See the scroll api for a more efficient
way to request large data sets. This limit can be set by changing the
[index.max_result_window] index level parameter
The Elasticsearch docs say that this is because of high memory consumption and that one should use the scrolling API:
Values higher than that can consume significant chunks of heap memory per
search and per shard executing the search. It's safest to leave this
value as it is and use the scroll api for any deep scrolling. https://www.elastic.co/guide/en/elasticsearch/reference/2.x/breaking_21_search_changes.html#_from_size_limits
The thing is that I do not want to retrieve large data sets. I only want to retrieve a slice from the data set which is very high up in the result set. Also, the scrolling docs say:
Scrolling is not intended for real time user requests https://www.elastic.co/guide/en/elasticsearch/reference/2.2/search-request-scroll.html
This leaves me with some questions:
1) Would the memory consumption really be lower (and if so, why) if I use the scrolling API to scroll up to result 10020 (and disregard everything below 10000) instead of doing a "normal" search request for results 10000-10020?
2) It does not seem that the scrolling API is an option for me but that I have to increase "index.max_result_window". Does anyone have any experience with this?
3) Are there any other options to solve my problem?
If you need deep pagination, one possible solution is to increase the value of max_result_window. You can use curl to do this from your shell command line:
curl -XPUT "http://localhost:9200/my_index/_settings" -H 'Content-Type: application/json' -d '{ "index" : { "max_result_window" : 500000 } }'
I did not notice increased memory usage for values of ~100k.
The right solution would be to use scrolling.
However, if you want to extend the number of results search returns beyond 10,000, you can do it easily with Kibana:
Go to Dev Tools and just post the following to your index (your_index_name), specifying what the new max result window should be:
PUT your_index_name/_settings
{
"max_result_window" : 500000
}
If all goes well, you should see the following success response:
{
"acknowledged": true
}
The following pages in the elastic documentation talk about deep paging:
https://www.elastic.co/guide/en/elasticsearch/guide/current/pagination.html
https://www.elastic.co/guide/en/elasticsearch/guide/current/_fetch_phase.html
Depending on the size of your documents, the number of shards, and the
hardware you are using, paging 10,000 to 50,000 results (1,000 to
5,000 pages) deep should be perfectly doable. But with big-enough from
values, the sorting process can become very heavy indeed, using vast
amounts of CPU, memory, and bandwidth. For this reason, we strongly
advise against deep paging.
Use the Scroll API to get more than 10000 results.
Scroll example in ElasticSearch NEST API
I have used it like this:
private static Customer[] GetCustomers(IElasticClient elasticClient)
{
    var customers = new List<Customer>();
    var searchResult = elasticClient.Search<Customer>(s => s.Index(IndexAlias.ForCustomers())
        .Size(10000).SearchType(SearchType.Scan).Scroll("1m"));
    do
    {
        var result = searchResult;
        searchResult = elasticClient.Scroll<Customer>("1m", result.ScrollId);
        customers.AddRange(searchResult.Documents);
    } while (searchResult.IsValid && searchResult.Documents.Any());
    return customers.ToArray();
}
If you want more than 10,000 results, then on all the data nodes the memory usage will be very high, because they have to return more results on each query request. Then, if you have more data and more shards, merging those results will be inefficient. ES also caches the filter context, hence again more memory. You have to find out by trial and error how much you can take. If you are getting many requests in a small window, you should issue multiple queries for the more-than-10k results and merge them yourself in the code, which should take less application memory than increasing the window size.
2) It does not seem that the scrolling API is an option for me but that I have to increase "index.max_result_window". Does anyone have any experience with this?
--> You can define this value in index templates. The template will apply to new indexes only, so you either have to delete the old indexes after creating the template or wait for new data to be ingested into Elasticsearch.
{
  "order": 1,
  "template": "index_template*",
  "settings": {
    "index.number_of_replicas": "0",
    "index.number_of_shards": "1",
    "index.max_result_window": 2147483647
  }
}
In my case it looks like reducing the results via the from & size parameters of the query removes the error, as we don't need all the results:
GET widgets_development/_search
{
  "from": 0,
  "size": 5,
  "query": {
    "bool": {}
  },
  "sort": {
    "col_one": "asc"
  }
}

Why not use min_score with Elasticsearch?

New to Elasticsearch. I am interested in returning only the most relevant docs and came across min_score. The docs say "Note, most times, this does not make much sense" but don't provide a reason. So, why does it not make sense to use min_score?
EDIT: What I really want to do is return only documents that have a score higher than x. I have this:
data = {
    'min_score': 0.9,
    'query': {
        'match': {'field': 'michael brown'},
    }
}
Is there a better alternative to the above so that it only returns the most relevant docs?
thx!
EDIT #2:
I'm using minimum_should_match and it returns a 400 error:
"error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed;"
data = {
    'query': {
        'match': {'keywords': 'michael brown'},
        'minimum_should_match': '90%',
    }
}
I've used min_score quite a lot for trying to find documents that are a definitive match to a given set of input data - which is used to generate the query.
The score you get for a document depends on the query, of course. So I'd say try your query in many permutations (different keywords, for example), decide for each one which document is the first you would rather it didn't return, and make a note of each of their scores. If the scores are similar, this gives you a good guess at the value to use for your min score.
However, you need to bear in mind that the score isn't just dependent on the query and the returned document; it considers all the other documents that have data for the fields you are querying. This means that if you test your min_score value with an index of 20 documents, the score will probably change greatly when you try it on a production index with, for example, a few thousand documents or more. This change could go either way, and is not easily predictable.
I've found that for my matching uses of min_score you need to create quite a complicated query and a set of analysers to tune the scores for the various components of the query. But what is and isn't included is vital to my application, so you may well be happy with what it gives you when keeping things simple.
I don't know if it's the best solution, but it works for me (Java):
// "tiny" search to discover the maxScore
// it is fast, because it returns only 1 item
SearchResponse response = client.prepareSearch(INDEX_NAME)
        .setTypes(TYPE_NAME)
        .setQuery(queryBuilder)
        .setSize(1)
        .execute()
        .actionGet();

// get the maxScore and set minScore = 70% of it
float maxScore = response.getHits().getMaxScore();
float minScore = maxScore * 0.7f; // 0.7f keeps the arithmetic in float

// second round with the minimum score
SearchResponse scoredResponse = client.prepareSearch(INDEX_NAME)
        .setTypes(TYPE_NAME)
        .setQuery(queryBuilder)
        .setMinScore(minScore)
        .execute()
        .actionGet();
I search twice, but the first search is fast because it returns only 1 item; from that we can get the max_score.
NOTE: minimum_should_match works differently. If you have 4 query clauses and you say minimum_should_match = 70%, it doesn't mean that item.score should be > 70%. It means that the item should match 70% of the clauses, i.e. at least 3 of the 4.
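For reference, a hedged sketch of how minimum_should_match is normally attached to the match query itself with the Java query builders (the keywords field is from the question); the 400 in EDIT #2 most likely comes from placing minimum_should_match as a sibling of the field instead of inside the field's options object:
// Equivalent JSON: { "query": { "match": { "keywords": {
//   "query": "michael brown", "minimum_should_match": "90%" } } } }
QueryBuilder query = QueryBuilders.matchQuery("keywords", "michael brown")
        .minimumShouldMatch("90%");

SearchResponse response = client.prepareSearch(INDEX_NAME)
        .setQuery(query)
        .setMinScore(minScore) // min_score can still be combined with this if desired
        .execute()
        .actionGet();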
