Extremely slow performance of some MongoDB queries - performance

I have a collection of about 30K items, all of which have an element called Program. "Program" is the first part of a compound index, so looking up items with a specific Program value is very fast. It is also fast to run range queries, e.g.:
db.MyCollection.find(
    { $and: [ { Program: { "$gte" : "K", "$lt" : "L" } },
              { Program: { "$gte" : "X", "$lt" : "Y" } } ] }).count();
The query above does not return any results because I am querying for the overlap of two non-overlapping ranges, (K-L) and (X-Y). The left range (K-L) contains about 7K items.
However, if I replace the second "$and" clause with a "$where" expression, the query execution takes ages:
db.MyCollection.find(
    { $and: [ { Program: { "$gte" : "K", "$lt" : "L" } },
              { "$where" : "this.Program == \"Z\"" } ] }).count();
As you can see, the query above should also return an empty result set (range K-L is combined with Program == "Z"). I am aware of the slow performance of "$where", but shouldn't Mongo first reduce the potential result set by evaluating the left clause (which would leave about 7K items) and only then apply the "$where" check? And if it does, shouldn't processing a few thousand items take seconds rather than minutes, as it does on my machine, with the Mongo service consuming about 3GB of RAM while performing this operation? That looks too heavy for a relatively small collection.

There are a few things that you can do -
Use explain() to see what is happening on your query. explain() is described here. Use the $explain operator to return a document that describes the process and indexes used to return the query. For example -
db.collection.find(query).explain()
If that doesn't return enough information, you can look at using the Database Profiler. However, please bear in mind that this is not free and adds load itself. Within this page, you can also find some basic notes about optimising the query performance.
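If you do enable the profiler, it is only a couple of shell commands (a minimal sketch; the 100 ms threshold is an arbitrary choice for illustration):
// profile operations slower than 100 ms (level 2 would profile everything)
db.setProfilingLevel(1, 100)
// slow operations are then recorded in the system.profile collection of the current database
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()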
However, in your case, it all boils down to the $where operator:
$where evaluates JavaScript and cannot take advantage of indexes. Therefore, query performance improves when you express your query using the standard MongoDB operators (e.g., $gt, $in).
In general, you should use $where only when you can’t express your query using another operator. If you must use $where, try to include at least one other standard query operator to filter the result set. Using $where alone requires a table scan. $where, like Map Reduce, limits your concurrency.
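In this particular case the $where clause is just an equality check, so it can be expressed with a standard operator and the Program index remains usable (a sketch based on the query from the question):
// same logical count as the $where version, but using only standard operators,
// so both clauses can be evaluated against the Program index
db.MyCollection.find(
    { $and: [ { Program: { "$gte" : "K", "$lt" : "L" } },
              { Program: "Z" } ] }).count();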
As an FYI, a couple of things to note about the output from explain():
ntoreturn - Number of objects the client requested for return from a query. For example, findOne() sets ntoreturn to 1; limit() sets the appropriate limit. Zero indicates no limit.
query - Details of the query spec.
nscanned - Number of objects scanned in executing the operation.
reslen - Query result length in bytes.
nreturned - Number of objects returned from the query.

Related

Elasticsearch - Limit of total fields [1000] in index exceeded

I saw that there are some concerns about raising the total limit on fields above 1000.
I have a situation where I am not sure how to approach it from the design point of view.
I have lots of simple key value pairs:
key1:15, key2:45, key99999:1313123.
where the key is a string and the value is an integer on which I would like to sort my results: if a certain document has a given key, it gets sorted by that key's value.
I ended up creating an object and just putting the key/value pairs inside so I can match them easily.
For example, I have sorting by "object.key".
I was wondering, if I just use a simple object with a bunch of strings inside that are there only for exact matching, should I worry about raising this limit to 10k or 20k?
Because I now have an issue where there can be more than 1k of these records. I've found I could use nested sorting, but it still has a default limit of 10k.
Is there a good design-pattern approach for this, or should I not be worried about raising the field limits?
Simplified version of the query:
GET products/_search
{
    "query": {
        "match_all": {}
    },
    "sort": [
        {
            "sortingObject.someSortingKey1": {
                "order": "desc",
                "missing": 2,
                "unmapped_type": "float"
            }
        }
    ]
}
The point is that I get the sortingKey from the request and use it to sort my results. There are, for example, 100k different ways to sort the results.
There were some recent improvements (in 7.16) that should help there, but 10K or 20K fields is still a lot of overhead.
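(If you do decide to raise it, the cap is a per-index setting and can be changed dynamically; a minimal sketch, assuming the products index from the question and an arbitrary new limit:)
PUT products/_settings
{
    "index.mapping.total_fields.limit": 2000
}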
I'm not sure what kind of queries you need to run on those keyX fields, but maybe the flattened data-type would work for you? https://www.elastic.co/guide/en/elasticsearch/reference/current/flattened.html
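A minimal sketch of such a mapping, assuming the key/value pairs live under the sortingObject field used in the query above:
PUT products
{
    "mappings": {
        "properties": {
            "sortingObject": {
                "type": "flattened"
            }
        }
    }
}
One caveat: flattened fields index every value as a keyword, so sorting on sortingObject.someSortingKey1 becomes lexicographic rather than numeric.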

Nested count queries

I'm looking to add a feature to an existing query. Basically, I run a query that returns, say, 1000 documents. Those documents all have the same structure; only the values of certain fields vary. What I'd like is to not only get the full list as a result, but also count how many results have field X with value Y, how many results have the same field X with value Z, etc.
Basically get all the results + 4 or 5 "counts" that would act like the SQL "group by", in a way.
The point of this is to allow full text search over all the clients in our database (without filtering), while showing how many of those are active clients, past clients, active prospects etc...
Any way to do this without running additional / separate queries ?
EDIT WITH ANSWER:
Aggregations are the way to go. Here's how I did it; it's so straightforward that I expected much harder work!
{
    "query": {
        "term": {
            "_type": "client"
        }
    },
    "aggregations": {
        "agg1": {
            "terms": {
                "field": "listType.typeRef.keyword"
            }
        }
    }
}
Note that it even works on a list of terms and not just a single field; that's just how easy it was!
I believe what you are looking for is the aggregation query.
The documentation should be clear enough, but if you struggle please give us your ES query and we will help you from there.

Why not use min_score with Elasticsearch?

New to Elasticsearch. I am interested in only returning the most relevant docs and came across min_score. They say "Note, most times, this does not make much sense" but don't provide a reason. So, why does it not make sense to use min_score?
EDIT: What I really want to do is only return documents that have a higher than x "score". I have this:
data = {
    'min_score': 0.9,
    'query': {
        'match': {'field': 'michael brown'},
    }
}
Is there a better alternative to the above so that it only returns the most relevant docs?
thx!
EDIT #2:
I'm using minimum_should_match and it returns a 400 error:
"error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed;"
data = {
    'query': {
        'match': {'keywords': 'michael brown'},
        'minimum_should_match': '90%',
    }
}
I've used min_score quite a lot for trying to find documents that are a definitive match to a given set of input data - which is used to generate the query.
The score you get for a document depends on the query, of course. So I'd say try your query in many permutations (different keywords, for example), decide for each which document is the first you would rather it didn't return, and make a note of each of their scores. If the scores are similar, this gives you a good guess at the value to use for your min_score.
However, you need to bear in mind that the score isn't just dependent on the query and the returned document; it considers all the other documents that have data for the fields you are querying. This means that if you test your min_score value with an index of 20 documents, the score will probably change greatly when you try it on a production index with, for example, a few thousand documents or more. This change could go either way and is not easily predictable.
I've found that for my matching uses of min_score you need to create quite a complicated query and set of analysers to tune the scores for the various components of your query. But what is and isn't included is vital to my application, so you may well be happy with what it gives you when keeping things simple.
I don't know if it's the best solution, but it works for me (Java):
// "tiny" search to discover the maxScore;
// it is fast because it returns only 1 item
SearchResponse response = client.prepareSearch(INDEX_NAME)
        .setTypes(TYPE_NAME)
        .setQuery(queryBuilder)
        .setSize(1)
        .execute()
        .actionGet();

// get the maxScore and set minScore = 70% of it
float maxScore = response.getHits().maxScore();
float minScore = maxScore * 0.7f;

// second round with the minimum score applied
response = client.prepareSearch(INDEX_NAME)
        .setTypes(TYPE_NAME)
        .setQuery(queryBuilder)
        .setMinScore(minScore)
        .execute()
        .actionGet();
I search twice, but the first time is fast because it returns only 1 item, so we can get the max_score.
NOTE: minimum_should_match works differently. If you have 4 query clauses and you say minimum_should_match = 70%, it doesn't mean that item.score should be > 70%. It means that the item should match 70% of the clauses, i.e. at least 3 of the 4.
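(As for the 400 error in EDIT #2: minimum_should_match is an option of the match query itself, not a sibling of the field. A sketch of the corrected request body, using the keywords field from the question:)
data = {
    'query': {
        'match': {
            'keywords': {
                'query': 'michael brown',
                'minimum_should_match': '90%',
            }
        }
    }
}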

ElasticSearch - Statistical facet on length of string field

I would like to retrieve data about a string field, like the min, max and average length (counting the number of characters inside the string). My issue is that aggregations can only be used on numeric fields, so I tried it using a simple statistical facet,
"query":{
"match_all": {}
},
"facets":{
"stat1":{
"statistical":{
"field":"title"}
}
}
but I get shard failures and SearchPhaseExecutionException. When trying with a script field the error returned is an OutOfMemoryError:
"query":{
"match_all": {}
},
"script_fields":{
"test1":{"script": "doc[\"title\"].value" }
}
Is it possible to retrieve such data about a simple "title" string field using cURL? Thank you!
I haven't actually tried the following, but I believe it should work.
First some useful doc-references:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-facets-statistical-facet.html.
In order to implement the statistical facet, the relevant field values
are loaded into memory from the index. This means that per shard,
there should be enough memory to contain them. Since by default,
dynamic introduced types are long and double, one option to reduce the
memory footprint is to explicitly set the types for the relevant
fields to either short, integer, or float when possible.
I'm not sure exactly how to set the type of the script field to 'short', which is probably what you want in order to reduce memory. It SHOULD be possible, though.
ALSO: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-script-fields.html
It’s important to understand the difference between
doc['my_field'].value and _source.my_field. The first, using the doc
keyword, will cause the terms for that field to be loaded to memory
(cached), which will result in faster execution, but more memory
consumption. Also, the doc[...] notation only allows for simple valued
fields (can’t return a json object from it) and make sense only on
non-analyzed or single term based fields.
So the ALTERNATIVE would be to use _source instead of doc, which would not be cached.
Gives:
{
    "query": {
        "match_all": {}
    },
    "facets": {
        "stat1": {
            "statistical": {
                "script": "doc['title'].value.length()"
            }
        }
    }
}
The alternative, which isn't cached, would be "script": "_source.title.length()".

Capped Collection in mongodb issues

I did an analysis of capped collections and found that there is no performance improvement with a capped collection.
I created a collection named test1 with 20,000 documents, then used copyTo to copy the same data into test2, which was created with capped: true and a data size specified. I ran the following queries to examine the performance:
db.test1.find( { $query: { "group" : "amazonTigers"}, $explain: 1 } ).pretty()
db.test2.find( { $query: { "group" : "amazonTigers"}, $explain: 1 } ).pretty()
Both result in the same response time, about 124 ms...
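For reference, the setup described above would look roughly like this (collection names are from the question; the size value is just a placeholder, and copyTo has since been deprecated):
// create test2 as a capped collection with a fixed size in bytes (placeholder value)
db.createCollection("test2", { capped: true, size: 1000000 })
// copy the 20K documents from test1 into it
db.test1.copyTo("test2")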
Moreover, I don't understand how capped collections work.
I have read through lots of blogs, but I am not able to find the actual working principle of Mongo capped collections.
I have also read about the disadvantages of capped collections; apparently we are not able to use $set and $push on them. Are there any other disadvantages of capped collections for the specified collection entries?
Regards,
Harry
