I'm looking to fine-tune a string search query that I am using on Mongo. In the SQL Server world, I'd like to believe I have a decent understanding of how indexes work and how to build proper indexes. I tried giving it a shot with Mongo, but I don't believe I'm going about it the right way.
My collection has roughly 4.3 million documents. The document structure looks like this:
{
"_id":{
"$oid":"527027456239d1212c07a621"
},
"ReleaseId":2451,
"Status":"Accepted",
"Title":"Hard Rhythmic Motions",
"Country":"US",
"MasterId":"35976",
"Images":[
{
"Type":"primary",
"URI":"http://api.discogs.com/image/R-2451-1117047026.jpg",
"URI150":"http://api.discogs.com/image/R-150-2451-1117047026.jpg",
"Height":307,
"Width":307
},
{
"Type":"secondary",
"URI":"http://api.discogs.com/image/R-2451-1117047033.jpg",
"URI150":"http://api.discogs.com/image/R-150-2451-1117047033.jpg",
"Height":307,
"Width":307
}
],
"Artists":[
{
"_id":2894,
"Name":"DJ Hyperactive"
}
],
"Formats":[
{
"Name":null,
"Quantity":1
}
],
"Genres":[
"Electronic"
],
"Styles":[
"Hardcore",
"Acid"
]
}
I am executing a case-insensitive search on one of the top-level document properties and on one of the nested document properties:
db.releases.find({$or: [{Title: new RegExp('.*mozart.*',"i")},{'Artists.Name': new RegExp('.*mozart.*',"i")}]})
I tried creating an index; when I execute .getIndexes() I can see the index I created:
{
"v" : 1,
"key" : {
"Title" : 1,
"Artists.Name" : 1
},
"ns" : "discogs.releases",
"name" : "Title_1_Artists.Name_1"
}
At this point I thought that I would be all set. However, the query ends up taking between 28 and 32 seconds to execute. I tried calling .explain() to get a little more insight:
{
"cursor" : "BasicCursor",
"isMultiKey" : false,
"n" : 4098,
"nscannedObjects" : 4292400,
"nscanned" : 4292400,
"nscannedObjectsAllPlans" : 4292400,
"nscannedAllPlans" : 4292400,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 29,
"nChunkSkips" : 0,
"millis" : 29958,
"indexBounds" : {
},
"server" : "lambic:27017"
}
From my limited knowledge of Mongo, this looks like a table scan, which is why the query isn't performing very well. However, I don't know how to make this query better! I would expect the index that I created to cover this query, but that must not be the case.
Now, the last thing I want to point out is that this is certainly not running on the most robust server. The hardware specs (including CPU and RAM) are very limited. However, if my analysis is correct and I'm doing a table scan, there must be some performance improvements I can make on the Mongo side.
A fulltext index is probably what you need. You could also parse the document before inserting it, and put the keywords in an array inside the document and index this array.
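For what it's worth, a minimal sketch of that first suggestion in the mongo shell (assuming MongoDB 2.6+, where text indexes and $text are generally available; the search term is illustrative):

// Compound text index covering both searched fields (only one text index is allowed per collection).
db.releases.ensureIndex({ Title: "text", "Artists.Name": "text" })

// $text search is word-based and case-insensitive by default.
db.releases.find({ $text: { $search: "mozart" } })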
Thanks everyone for the responses. I wanted to follow up on this question since it had a few votes and to make sure anyone who stumbles upon this page in the future knows what I ended up doing.
The fulltext index sounds like a great solution. However, because this is only a small side-project of mine, I'm not willing to throw more hardware at the architecture (the fulltext index requires a good amount of disk space for 4 million records).
What I ended up doing is flattening my data structures to make them easier to query and removing the wildcard search so that my indexes on the new structure can actually be used. By doing this I can achieve an indexOnly query (although the performance still isn't amazing, I find it adequate given my weak hardware stack).
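Roughly what that looks like (the release_names collection and field names below are hypothetical stand-ins for my flattened structure, not the exact schema): store a lowercased copy of each searchable name, query it with a case-sensitive anchored prefix regex so the index bounds apply, and project only indexed fields so the query stays indexOnly.

// Hypothetical flattened collection: one lowercased searchable name per document.
db.release_names.ensureIndex({ NameLower: 1, ReleaseId: 1 })

// An anchored, case-sensitive prefix regex can use the index bounds;
// projecting only indexed fields (and excluding _id) keeps the query covered.
db.release_names.find(
    { NameLower: /^mozart/ },
    { _id: 0, NameLower: 1, ReleaseId: 1 }
)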
Related
I am working on an e-commerce application. Catalog data is served by Elasticsearch.
I have documents for Product that are already indexed in Elasticsearch.
A document looks something like this (a few fields are excluded for readability):
{
"title" : "Product Name",
"volume" : "200gm",
"brand" : {
"brand_code" : XXXX,
"brand_name" : "Brand Name"
},
"#timestamp" : "2021-08-26T08:08:11.319Z",
"store" : [
{
"physical_unit" : 0,
"default_price" : 115.0,
"_id" : "1234_111",
"product_code" : "1234",
"warehouse_code" : 111,
"available_unit" : 100
}
],
"category" : {
"category_code" : 987,
"category_name" : "CategoryName",
"category_url_link" : "CategoryName",
"super_category_name" : "SuperCategoryName",
"parent_category_name" : "ParentCategoryName"
}
}
The store object in the document above is where the ES query looks up the price and decides whether an item is in stock or out of stock.
I would like to add more child objects to store (basically, data from multiple inventories). This can grow to more than 150 child objects per product.
Eventually, a product document will look something like this, with data from multiple inventories mapped to it:
{
"title" : "Product Name",
"volume" : "200gm",
"brand" : {
"brand_code" : XXXX,
"brand_name" : "Brand Name"
},
"#timestamp" : "2021-08-26T08:08:11.319Z",
"store" : [
{
"physical_unit" : 0,
"default_price" : 115.0,
"_id" : "1234_111",
"product_code" : "1234",
"warehouse_code" : 111,
"available_unit" : 100
},
{
"physical_unit" : 0,
"default_price" : 125.0,
"_id" : "1234_112",
"product_code" : "1234",
"warehouse_code" : 112,
"available_unit" : 100
},
{
"physical_unit" : 0,
"default_price" : 105.0,
"_id" : "1234_113",
"product_code" : "1234",
"warehouse_code" : 113,
"available_unit" : 100
}
... up to N stores
],
"category" : {
"category_code" : 987,
"category_name" : "CategoryName",
"category_url_link" : "CategoryName",
"super_category_name" : "SuperCategoryName",
"parent_category_name" : "ParentCategoryName"
}
}
Functional Requirement:
For any product, we should show the lowest price across all warehouses.
For example: if a particular product has 50 stores mapped to it, the Elasticsearch query should look into the nested objects and return the lowest value across all 50 stores where the item is available.
Performance should not be degraded.
Challenges:
If we start storing that many stores for each product, the data volume will grow considerably. Will that be a problem?
What would be an efficient way to extract the lowest price from the nested documents?
How would facets work within nested documents? For example, if I apply a price range filter, ES picks up data that was not shown earlier (it might pick data from another store that matches the range).
We are using templates to query ES, and the Elasticsearch version is 6.0.
Thanks in Advance!!
First, there are improvements to nested document search in version 7.x that are worth the upgrade.
As for version 6.x, there are a lot of factors involved, so I can't give you a concrete answer. It also seems you may be misunderstanding the way nested documents work; they are not relational.
In particular, when you say that each product might have 50 stores mapped to it, that sounds like you are implying a relationship, which will not exist with a nested document. However, the values from those 50 stores will be stored within an index nested under the parent document. Having 50 stores under a product or category does not sound concerning.
Elasticsearch has not really talked in terms of facets since the introduction of the aggregation framework. It's not that they don't exist; that's just not how they are discussed.
So let's try this. Elasticsearch optimizes its search and query through a divide-and-conquer mechanism. The data is spread across several shards (a configurable number), and each shard is responsible for reviewing its own data. Further, those shards can be distributed across many machines so that there are many CPUs and lots of memory available for the search. So growing the data doesn't matter if you are willing to grow the cluster, as it is possible to maintain a situation where each machine is doing the same amount of work as it was doing before.
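For illustration only (the index name and counts here are arbitrary), the shard count is fixed when an index is created:

PUT /products
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}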
Unlike in a relational database, filters and search terms allow Elastic to drastically reduce the data it is looking at, and a larger number of filters will improve performance, whereas in a relational database performance declines.
Now back to nested documents. They are stored as a separate index, but instead of mapping the results to the nested doc, the results map to the parent doc id. So your nested docs aren't exactly in the same index as the rest of the document, though they are not truly separate either. That does mean that the nested documents should have minimal impact on the performance of queries against the parent documents. But if your data size grows beyond the capacity of your current system, you will still need to increase its size.
As to how you would query, you would use Elastic aggregations. These will allow you to calculate your "facet" counts and identify the best prices. The Elastic aggregations are very powerful and very fast. There are caveats that are well documented, but in general they will work as you expect.
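For example, here is a rough sketch of a nested aggregation that returns the lowest in-stock price across the products matched by a query. It assumes store is mapped with type nested and that the index is called products; both of those are assumptions, while the field names come from the question.

GET products/_search
{
  "size": 0,
  "aggs": {
    "stores": {
      "nested": { "path": "store" },
      "aggs": {
        "in_stock": {
          "filter": { "range": { "store.available_unit": { "gt": 0 } } },
          "aggs": {
            "lowest_price": { "min": { "field": "store.default_price" } }
          }
        }
      }
    }
  }
}

The same min sub-aggregation can also sit under a terms aggregation if you need one lowest price per product rather than one per query.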
In version 6.x query string queries cannot access the search criteria in a nested document, and a complex query must be used.
To recap
Functional Requirement:
For any product, we should show the lowest price across all warehouses.
For example: if a particular product has 50 stores mapped to it, the Elasticsearch query should look into the nested objects and return the lowest value across all 50 stores where the item is available.
Yes, a nested aggregation will do this.
Performance should not be degraded.
Performance will continue to depend on the ratio of the size of the data to the overall cluster size.
Challenges:
If we start storing that many stores for each product, the data volume will grow considerably. Will that be a problem?
No, this should not be a problem.
What would be an efficient way to extract the lowest price from the nested documents?
Elastic Aggregations
How would facets work within nested documents? For example, if I apply a price range filter, ES picks up data that was not shown earlier (it might pick data from another store that matches the range).
Yes, filtering works very well with aggregations. The aggregation will be based on the filtered data. In fact, you could have an aggregation based on just the minimum price and, in the same query, an aggregation using your price ranges, which will give you the count of documents that have a store within each price range, and you could have a sub-aggregation showing the stores under each price range.
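A sketch of that combined query, under the same assumptions as above (a nested store mapping and an illustrative products index; the price boundaries are made up). The reverse_nested sub-aggregation turns nested-document counts back into product counts, and the terms sub-aggregation lists the warehouses that fall into each range:

GET products/_search
{
  "size": 0,
  "aggs": {
    "stores": {
      "nested": { "path": "store" },
      "aggs": {
        "lowest_price": { "min": { "field": "store.default_price" } },
        "price_ranges": {
          "range": {
            "field": "store.default_price",
            "ranges": [
              { "to": 100 },
              { "from": 100, "to": 200 },
              { "from": 200 }
            ]
          },
          "aggs": {
            "products": { "reverse_nested": {} },
            "warehouses": { "terms": { "field": "store.warehouse_code" } }
          }
        }
      }
    }
  }
}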
We are using templates to query ES, and the Elasticsearch version is 6.0. Thanks in advance!!
I know nothing about the templates. The Elasticsearch API is dead simple, so I do not know why anyone uses additional tools on top of it; they just add weight, increase complexity, and make key features unavailable because the wrapper author did not pass the feature through.
I have data stored in Elasticsearch. One of the fields is the logging level. The levels are defined in a Java enum.
The enum values are:
0 => undefined
1 => info
2 => low
3 => high
4 => fatal
EDIT:
This is what I am trying, but I keep getting a "Variable [level] is not defined" error.
curl -H 'Content-Type: application/json' "http://localhost:33206/_search" -d'
{
"sort" : {
"_script" : {
"type" : "number",
"script" : {
"lang": "painless",
"source": "params.mapping[doc['level'].value]",
"params" : {
"UNDEFINED": 0,
"INFO": 1,
"LOW": 2,
"HIGH": 3,
"FATAL": 4
}
},
"order" : "asc"
}
}
}
'
In Elasticsearch we are storing the strings rather than the numbers.
If I want to query Elasticsearch and have the results ordered by the corresponding numbers, how do I do that? Of course, sorting by the string will produce wrong results.
It is not possible without scripting, and scripting is not recommended, as it is not good from a performance perspective.
You should instead have a separate field that stores the integer value, and sort on that.
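A minimal sketch of that approach (the logs index and the level_numeric field are hypothetical names; your application writes the integer alongside the string at index time and then sorts on it):

POST logs/_doc
{
  "level": "HIGH",
  "level_numeric": 3
}

GET logs/_search
{
  "sort": [
    { "level_numeric": { "order": "asc" } }
  ]
}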
Reasons not to have scripts:
If possible, avoid using scripts or scripted fields in searches. Because scripts can’t make use of index structures, using scripts in search queries can result in slower search speeds.
If you often use scripts to transform indexed data, you can speed up search by making these changes during ingest instead. However, that often means slower index speeds.
And one more thing is security: there are loopholes which make it vulnerable.
Reference
My script was correct, except for the single quotes around "level". Changing them to double quotes makes it work.
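For completeness, a sketch of the working request. The escaped double quotes around level are the actual fix described above; nesting the map under a mapping parameter (so that params.mapping resolves) and assuming level is indexed as a sortable keyword are my own additions:

curl -H 'Content-Type: application/json' "http://localhost:33206/_search" -d'
{
  "sort" : {
    "_script" : {
      "type" : "number",
      "script" : {
        "lang": "painless",
        "source": "params.mapping[doc[\"level\"].value]",
        "params" : {
          "mapping": {
            "UNDEFINED": 0,
            "INFO": 1,
            "LOW": 2,
            "HIGH": 3,
            "FATAL": 4
          }
        }
      },
      "order" : "asc"
    }
  }
}
'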
I have two indices: one that stores data about an event and one that stores the availability of that event. I am trying to create a single query that finds events matching some criteria but only returns the ones that are available, and I am having difficulty doing so.
The events index stores documents like this:
{
"id" : "152ce52d-e975-4ebd-849a-0a12f535e644",
"createdAt" : 1.5519999143126902E12,
"description" : "A very not so concise description",
"geoHash" : "dnh00x6x5",
"name" : "a name",
...etc...
}
The availability index stores availability like so:
{
"eventId" : "152ce52d-e975-4ebd-849a-0a12f535e644",
"maxGuests" : 8,
"availability" : {
"lte" : "2019-10-18T22:15:00.000Z",
"gte" : "2019-10-18T02:30:00.000Z"
}
}
I am trying to create a query like below, but what I can't figure out is how to filter by listings that meet the criteria in the events index AND are available in the availability index.
GET events,availability/_search
{
"size": 5,
"from": 0,
"_source": [
"id"
],
"query": {
"bool": {
"must": [
{
"geo_distance": {
"distance": "25mi",
"geoHash": {
"lat": 34.0389,
"lon": -84.3826
}
}
}
],
"should": [],
"filter":[
{
"range" : {
"availability" : {
"gte" : "2019-10-31",
"lte" : "2020-11-01",
"relation" : "within"
}
}
}
]
}
}
}
--
The reason I want to do only one query is that the client expects a specified number of events. If I filter out the unavailable events after I get the event data, then I am likely to be left with fewer events than the client expected and would need to do yet another search to fill the gap.
Also, of course, I could merge the two indices so that the event also stores the availability info, but I originally set them up this way because the availability info may have hundreds or thousands of entries per event.
What you want to accomplish is the equivalent of a SQL foreign key (a join). There is no way to have exactly what you want, meaning to filter documents from index A by querying index B. Your options are:
As you've mentioned, solve it at the application level (although this causes other problems for you, so it's not really a solution).
Merge the data into one index and have duplicated event information. Although it seems expensive, duplication of data in a NoSQL database is to be expected. If you need a relational model then maybe you should use a SQL solution.
Use parent/child (the join datatype). The problem here is that you will need to have the data in the same index anyway. Moreover, parent and child will be stored in the same shard as well.
One approach (a bit more complex, though) that I believe would work for you is to use the nested datatype, which is actually a more compact version of option 2 (combine your data in one index, but save the root information only once). Make events the root and availability the nested part. When you want to add a single availability entry you can use the update API, and when you query, you can search by both the root fields and the nested ones. If you need to retrieve specific availability entries for an event you can use inner hits; see the sketch below.
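A rough sketch of that nested approach (7.x-style mapping; the combined events index, the window field, and the mapping details are illustrative assumptions, while the other field names come from your documents):

PUT events
{
  "mappings": {
    "properties": {
      "geoHash": { "type": "geo_point" },
      "name": { "type": "text" },
      "availability": {
        "type": "nested",
        "properties": {
          "maxGuests": { "type": "integer" },
          "window": { "type": "date_range" }
        }
      }
    }
  }
}

GET events/_search
{
  "size": 5,
  "_source": ["id"],
  "query": {
    "bool": {
      "must": [
        {
          "geo_distance": {
            "distance": "25mi",
            "geoHash": { "lat": 34.0389, "lon": -84.3826 }
          }
        },
        {
          "nested": {
            "path": "availability",
            "query": {
              "range": {
                "availability.window": {
                  "gte": "2019-10-31",
                  "lte": "2020-11-01",
                  "relation": "within"
                }
              }
            },
            "inner_hits": {}
          }
        }
      ]
    }
  }
}

The inner_hits block makes the response include the specific availability entries that matched, alongside the parent event.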
What you are trying to do (a multi-index search) will not join your data automatically; it will not work. Elasticsearch doesn't work that way, and the relational model is not what this product is suited for.
One last thing: it's a good thing to plan ahead, but it's a bad thing to try to optimize too early.
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
An interesting read that summarizes the above
I have a Mongo server running on a VPS with 16GB of memory (although probably with slow I/O, using magnetic disks).
I have a collection of around 35 million records which doesn't fit into main memory (db.stats() reports a size of 35GB and a storageSize of 14GB), however the 1.7GB reported for totalIndexSize should comfortably fit there.
There is a particular field bg that I'm querying over, which can be present with the value true or absent entirely (please, no discussions about whether this is the best data representation; I still think Mongo is behaving weirdly). This field is indexed with a non-sparse index with a reported size of 146MB.
I'm using the WiredTiger storage engine with a default cache size (so it should be around 8GB).
I'm trying to count the number of records missing the bg field.
Counting true values is tolerably fast (a few seconds):
> db.entities.find({bg: true}).count()
8300677
However the query for missing values is extremely slow (around 5 minutes):
> db.entities.find({bg: null}).count()
27497706
To my eyes, explain() looks ok:
> db.entities.find({bg: null}).explain()
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "testdb.entities",
"indexFilterSet" : false,
"parsedQuery" : {
"bg" : {
"$eq" : null
}
},
"winningPlan" : {
"stage" : "FETCH",
"filter" : {
"bg" : {
"$eq" : null
}
},
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"bg" : 1
},
"indexName" : "bg_1",
"isMultiKey" : false,
"direction" : "forward",
"indexBounds" : {
"bg" : [
"[null, null]"
]
}
}
},
"rejectedPlans" : [ ]
},
"serverInfo" : {
"host" : "mongo01",
"port" : 27017,
"version" : "3.0.3",
"gitVersion" : "b40106b36eecd1b4407eb1ad1af6bc60593c6105"
},
"ok" : 1
}
However the query remains stubbornly slow, even after repeated calls. Other count queries for different values are fast:
> db.entities.find({bg: "foo"}).count()
0
> db.entities.find({}).count()
35798383
I find this kind of strange, since my understanding is that missing fields in non-sparse indexes are simply stored as null, so the count query with null should be similar in cost to counting an actual value (or maybe up to about three times slower, since there are roughly three times as many matching values, if it has to count more index entries or something). Indeed, this answer reports vast speed improvements over similar queries involving null values and .count(). The only point of differentiation I can think of is WiredTiger.
Can anyone explain why is my query to count null values so slow or what I can do to fix it (apart from doing the obvious subtraction of the true counts from the total, which would work fine but wouldn't satisfy my curiosity)?
This is expected behavior, see: https://jira.mongodb.org/browse/SERVER-18653. It seems like a strange call to me too, but there you go; I'm sure the programmers responsible know more about MongoDB than I do.
You will need to use a different value to mean null. I guess this will depend on what you use the field for. In my case it is a foreign reference, so I'm just going to start using false to mean null. If you are using it to store a boolean value then you may need to use "null", -1, 0, etc.
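If you can rewrite the data, two illustrative workarounds (the multi-update syntax below matches a 3.0-era shell); the subtraction version is the one already mentioned in the question:

// Backfill an explicit value so counts no longer have to scan null/missing index entries.
db.entities.update(
    { bg: { $exists: false } },
    { $set: { bg: false } },
    { multi: true }
)
db.entities.find({ bg: false }).count()

// Or simply derive the count from the two queries that are already fast.
db.entities.find({}).count() - db.entities.find({ bg: true }).count()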
So I have a MongoDB instance where I am trying to update data in one collection with data from another collection. The two collections are participants with about 180k documents and questions with about 95k documents.
Documents in participants typically look something like this:
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b4"),
"answers" : [
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b9"),
"question_id" : 2081,
"sub_id" : null,
"values" : [
"Yes"
]
},
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b8"),
"question_id" : 2082,
"sub_id" : 123,
"values" : [
"Would prefer to go alone"
]
},
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b7"),
"question_id" : 2082,
"sub_id" : 456,
"values" : [
"Yes"
]
}
],
"created" : ISODate("2012-03-01T17:40:21Z"),
"email" : "anonymous",
"id" : 65,
"survey" : ObjectId("52f41d579af1ff4221399a7b"),
"survey_id" : 374
}
I am using the query below to perform the update:
db.participants.ensureIndex({"answers.question_id": 1, "answers.sub_id": 1});
print("created index for answer arrays!")
db.questions.find().forEach(function(doc){
db.participants.update(
{
"answers.question_id": doc.id,
"answers.sub_id": doc.sub_id
},
{
$set:
{
"answers.$.question": doc._id
}
},
false,
true
);
});
db.participants.dropIndex({"answers.question_id": 1, "answers.sub_id": 1});
But this takes about 20 minutes to run. I was hoping that adding the index would help with the performance, but it is still pretty slow. Is this index set up correctly, considering that I am indexing fields in an array of objects? Can anyone see anything that I am doing that would cause the slowness? Any suggestions on where to start looking to improve the performance of this query?
I think you need to consider what you are actually doing here in order to understand why the index is not helping and indeed why this operation takes so long.
The first part of the answer is explained by what you are doing here:
db.questions.find()
Now that part alone basically says that you are asking to retrieve every document in your questions collection. And we can see that what you are trying to do is exactly that, since you want to update that content into your participants collection, specifically the document _id for the "question". But here, by definition of getting all documents, no index will be used.
So what you are doing is looping over every document in questions, then asking your update operation to match the participants records with data from the "question". And that means you are pulling all of your 95K documents "over the wire" and sending your update operation back "over the wire" 95K times. This is not happening on the server; there is network traffic between your application and your MongoDB.
The index itself is not going to do much other than improve the search of each participants record, which is better than scanning, and you should be getting the match. But that's not the part that's taking the time; it's the fetching of the questions that will be the largest issue. Also note that if you were updating
So if it's possible to run your update process on a machine that is as close as possible, in networking terms, to the MongoDB server, then that is going to be your best performance improvement. You could also wind back your write concern if you want to be a little daring and/or can live with checking the integrity in another operation; that will reduce your network traffic and the waiting for a response to each update (which is what is actually happening) if you put it in "fire and forget" mode.
Also see the guide if you are not sure of the concepts:
http://docs.mongodb.org/manual/core/write-concern/
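As a rough sketch of the write-concern idea (this uses the options-document form of update, which needs a 2.6+ shell; w: 0 means you give up per-update error reporting):

// Inside the same forEach loop, but "fire and forget": the shell does not wait for an acknowledgement.
db.participants.update(
    { "answers.question_id": doc.id, "answers.sub_id": doc.sub_id },
    { $set: { "answers.$.question": doc._id } },
    { multi: true, writeConcern: { w: 0 } }
)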
In case anyone is interested, I was able to take the run time of this update query from 20 minutes down to about a minute and a half by using projection when selecting the questions documents. Since I am only using the _id, id, and sub_id fields, I was able to do the following:
db.questions.find({},{_id: 1, id: 1, sub_id: 1}).forEach(function(doc){
....
Which drastically improved performance. Hope this helps someone!