Elasticsearch query returns results very slowly on a simple query

I am using Elasticsearch to perform some aggregations. Everything used to work fine, but currently I have 2 million docs in an index. I am performing a very simple search query that lists all documents of a given type in a given index.
{
  "size": 100000,
  "query": {
    "match_all": {}
  }
}
This query is very slow and returns about 300k hits. What could be the possible reasons?
NOTE: I have 2 GB RAM and 2 cores.

You are trying to get a response containing 100,000 documents. That is simply too much. Elasticsearch is designed for paging, i.e. fetching results in small chunks, yet you try to fetch 100,000 in one bulk. There is a reason the default size is 10.
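For illustration, a paged request might look like the sketch below (the index name my_index is a placeholder; page forward by increasing from in steps of size, and note that recent Elasticsearch versions cap from + size at index.max_result_window, 10,000 by default):
POST /my_index/_search
{
  "from": 0,
  "size": 100,
  "query": {
    "match_all": {}
  }
}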

I finally found out that this configuration is enough for my needs, which is searching over 2 million documents. I had a wrong configuration, and simply doing a match_all is not the right approach anyway: even with 2 million docs, performing a search based on some actual criteria would be very fast.
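As a sketch of what such a criteria-based search could look like (the field name and value are invented for illustration, not from the original post):
POST /my_index/_search
{
  "size": 100,
  "query": {
    "match": {
      "status": "active"
    }
  }
}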

Related

Check if document is part of Elasticsearch query?

Curious if there is some way to check whether a document ID is part of a large (million+ results) Elasticsearch query/filter.
Essentially I'll have a group of related document IDs and only want to return them if they are part of a larger query. Hoping to do this on the database side. Theoretically it seemed possible, since ES has to cache stuff related to large scrolls.
It's an interesting use case, but you need to understand that Elasticsearch (ES) doesn't return all matching document IDs in the search result; by default it returns only 10 documents in the response, which can be changed with the size parameter.
If you increase the size param while having millions of matching docs, ES query performance will be very bad, and firing such queries frequently might even bring the entire cluster down (in the absence of a circuit breaker), so be cautious about it.
You are right that ES caches things, but if you try to cache a huge amount of data that gets invalidated very frequently, you will not get the expected performance benefit, so benchmark it first.
You are already on the right path with the scroll API to iterate over millions of search results; just see the points below to improve further (a sketch follows the list).
First get the count of matching documents; this is included in the default search response (as an exact or greater-than-or-equal value), which tells you how many results you have and lets you pick a size param for subsequent calls to see whether your ID is present.
Check that you effectively use filter context in your query, which ES caches by default.
Benchmark some of your heavy scroll API calls against your data.
Refer to this thread to fine-tune your cluster and index configuration and optimize ES response times further.
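A rough sketch of the first two points, assuming a placeholder index my_index and a placeholder filter on a status field (neither is from the original question). First get the number of matching documents:
POST /my_index/_count
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "status": "published" } }
      ]
    }
  }
}
Then iterate over the matches with the scroll API, keeping the term clause in filter context so ES can cache it:
POST /my_index/_search?scroll=2m
{
  "size": 1000,
  "query": {
    "bool": {
      "filter": [
        { "term": { "status": "published" } }
      ]
    }
  }
}
Each response carries a scroll_id; send it to POST /_search/scroll with a body of {"scroll": "2m", "scroll_id": "<id from the previous response>"} to fetch the next batch.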

Elasticsearch caching a single field for quick response

I have a cluster of 10 nodes where I index about 100 million records daily, totalling close to 6 billion records. I am constantly loading data. Each record has about 75 fields associated with it. 99% of my queries are based on the same field, essentially select * from table where groupid = 'value'. The majority of the queries bring back about a hundred records.
My queries currently take about 30 seconds the first 2 times they run, and are in the milliseconds afterwards. The problem is that each user query searches for a different groupID, so most queries are going to be slow until they have been run a third time.
Is it possible to "cache" the groupid field so that I can get sub-second queries?
My current query looks like this (pseudo-query; I'm using a non-analyzed field, which I believe is better?):
{
  "query": {
    "filtered": {
      "filter": {
        "term": { "groupID": "valuex" }
      }
    }
  }
}
I"ve researched and not sure how to go about this. I've looked into doc_values = yes and possibly field cache?
I do not care about scoring, aggregates. My only use case is to filter out records and only bringing back the 100 or so out of 5 billion that have the correct groupID.
We have about 64G Memory on each server.
Just looking for help on how to achieve optimal performance/caching? or anything else that would help.
I thought about routing but this would be difficult based on our groupid values.
thanks
Starting from Elasticsearch 2.0 we made some caching changes, like:
Keeps track of 256 most recently used queries
Only caches those that appear 5 times or more
Does not cache segments which have less than 10000 documents or 3% of the documents of the index
Wondering if you are hitting this last one.
Note that we did that because the file system cache is probably better than internal caching.
Could you try a bool query instead of a filtered query, by the way, and see how it performs? filtered has been deprecated (and is removed in 5.0).
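For example, the pseudo-query from the question rewritten as a bool query might look like this (the index name is a placeholder; in filter context scoring is skipped and the clause is eligible for caching):
GET /my_index/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "groupID": "valuex" } }
      ]
    }
  }
}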

Elasticsearch search does not return exact results

I was trying to run Elasticsearch on CouchDB using the river plugin. Unfortunately, the number of hits I got is not the same as the total number of actual documents. Say I have 10,000 documents in my couch database; however, Elasticsearch only returns 9,340 hits. Does anyone know why this problem arises? Would you mind explaining it to me, please?
Regards,
Jemie Effendy
Look into the bool query. This will return results that MUST have what you've defined in your query. Also look into your mapping; Elasticsearch may be tokenising your data in a way you don't expect.
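A minimal sketch of both suggestions (the index and field names are invented for illustration): first inspect how your fields are mapped and tokenised,
GET /my_index/_mapping
then require every clause to match using bool/must:
POST /my_index/_search
{
  "query": {
    "bool": {
      "must": [
        { "term": { "doc_id": "12345" } }
      ]
    }
  }
}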

Loading all documents from ElasticSearch takes too long

In order to load all the documents indexed by Elasticsearch, I am using the following query through Tire.
def all
  max = total
  Tire.search 'my_documents' do
    query { all }
    size max
  end.results.map { |entry| entry.to_hash }
end
Here max (via total) is a count query returning the number of documents present. I have indexed about 10,000 documents. Currently, the request takes too long.
I am aware that I should not query all documents like this. What is the best alternative? Pagination? If so, by which metric should I define the number of documents per page?
I am also planning to grow to 100,000 or even 1,000,000 documents, and I don't yet see how this can scale.
I appreciate every comment.
Rationale: I do this because I am running calculations over these data. Hence, I need to fetch all the data, run the computations, and save the results back into the documents.
Have a look at the scroll API, which is highly optimized for fetching a large number of results. It uses the scan search type and doesn't support sorting, but it lets you provide a query to filter the documents you want to fetch. Have a look at the reference to learn more about it. Remember that the size you define in the request is per shard; that means that if you have 5 primary shards, setting 10 would lead to 50 results coming back per request.
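On the older Elasticsearch versions that Tire targets, a scan-type scroll might look like the sketch below (the index name my_documents is taken from the question; the scroll window and size are arbitrary choices):
POST /my_documents/_search?search_type=scan&scroll=5m
{
  "size": 10,
  "query": {
    "match_all": {}
  }
}
The first response contains no hits, only a scroll_id; repeatedly call POST /_search/scroll?scroll=5m with that scroll_id as the request body to pull the actual documents. With 5 primary shards and size 10, each call returns up to 50 documents, as noted above.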

Get all the results from Solr without the 10-row limit

How do I get all the rows returned from Solr instead of getting only 10 rows?
You can define how many rows you want (see Pagination in SolrNet), but you can't get all documents. Solr is not a database, and it doesn't make much sense to get all documents from it; if you feel you need that, you might be using the wrong tool for the job.
This is also explained in detail in the Solr FAQ.
As per the Solr wiki, about the rows a query returns:
The default value is "10", which is used if the parameter is not specified. If you want to tell Solr to return all possible results from the query without an upper bound, specify rows to be 10000000 or some other ridiculously large value that is higher than the possible number of rows that are expected.
Refer to https://wiki.apache.org/solr/CommonQueryParameters
You can set rows=x in the query URL, where x is the desired number of docs.
You can also fetch groups of 10 docs by looping over the found docs, changing the start value and leaving rows=10, as in the sketch below.
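For example (the core name is a placeholder and q=*:* matches everything), you can page through all results by incrementing start while keeping rows fixed:
http://localhost:8983/solr/mycore/select?q=*:*&rows=10&start=0
http://localhost:8983/solr/mycore/select?q=*:*&rows=10&start=10
http://localhost:8983/solr/mycore/select?q=*:*&rows=10&start=20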
Technically it is possible to get all results from a Solr search. All you need to do is specify the limit as -1.
