We're struggling with modeling our data in Elasticsearch, and decided to change it.
What we have today: single index to store product data, which holds data of 2 types -
[1] Some product data that changes rarely -
* `name, category, URL, product attributes(e.g. color,price) etc...`
[2] Product data that might change frequently for past documents,
and is indexed on a daily level - [KPIs]
* `product-family, daily sales, daily price, daily views...`
Our requirements are -
Store product-related data (for millions of products)
Index KPIs on a daily level, and store those KPIs for a period of 2 years.
Update "product-family" on a daily level, for thousands of products. (no need to index it daily)
Query and aggregate the data with low latency, to display it in our UI. Aggregation examples -
Sum all product sales in the last 3 months, from category 'A' and sort by total sales.
Same as the above, but in-addition aggregate based on product-family field.
Keep efficient indexing rate.
Currently, we're storing everything in the same index, daily, meaning we store repetitive data such as name, category and URL over and over again. This approach is very problematic for multiple reasons -
We're holding duplicates of data of type [1], which hardly ever changes, and this makes the index very large.
When data of type [2] changes, specifically the product-family field (this happens daily), we need to update tens of millions of documents (some more than a year old), which makes the system very slow and causes queries to time out.
Splitting this data into 2 different indices won't work for us, since we have to filter data of type [2] by data of type [1] (e.g. all sales from category 'A'). Moreover, we'd have to join that data somehow, and our backend server won't handle this load.
We're not sure how to model this data properly, our thoughts are -
Using parent-child relations - parent is product data of type [1] and children are KPIs of type [2]
Using nested fields to store KPIs (data of type [2]).
Both of these methods allow us to reduce the current index size by eliminating the duplicated data of type [1], and efficiently updating data of type [2] for very old documents.
Specifically, both methods allow us to store product-family for each product once, in the parent/non-nested fields, which means we only need to update a single document per product (these updates are daily).
We think the parent-child relation is more suitable, because we're adding KPIs on a daily level,
and per our understanding, using nested fields would force re-indexing of the whole document whenever new KPIs arrive.
On the other hand, we're afraid that parent-child relations will increase query latency dramatically and make our UI very slow.
We're not sure what the proper way to model this data is, or whether our ideas are on the right path.
We would appreciate any help, since we've been struggling with this for a long time.
First off, I would recommend against indexing data that changes frequently in Elasticsearch. It is not designed for this and you will get poor performance as well as encounter difficulties when cleaning up old data.
Elasticsearch is best used for immutable data (once you insert it, you don't modify it). For time-based data, I would recommend inserting measurements once with their timestamp into e.g. daily indices (see: index templates) and leaving them alone. Each measurement document would look something like
{"product_family": "widget", # keyword
"timestamp": "2022-08-23", # date
"sales": 798137,
"price": "and so on"}
This document would be inserted into the index yourindex_20220823.
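For the daily indices, an index template keeps mappings consistent without creating each day's index by hand. A minimal sketch, assuming a composable template (ES 7.8+; older versions use the legacy `PUT _template` API) and the field names from the example above - the template and index names are illustrative:

PUT _index_template/daily_kpis
{
  "index_patterns": ["yourindex_*"],
  "template": {
    "settings": { "number_of_shards": 1 },
    "mappings": {
      "properties": {
        "product_family": { "type": "keyword" },
        "timestamp":      { "type": "date" },
        "sales":          { "type": "long" },
        "price":          { "type": "double" }
      }
    }
  }
}

Every index whose name matches yourindex_* then picks up these mappings automatically when it is created.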
You can have Elasticsearch run roll-up jobs to aggregate historical data, and set up index lifecycle management (ILM) so that indices older than your retention period get deleted. Dropping a whole daily index is very fast, way faster than running delete-by-query requests to remove every document whose timestamp is older than two years.
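A minimal ILM sketch for the two-year retention, assuming the daily indices above - the policy name is made up, and min_age counts from index creation here since there is no rollover:

PUT _ilm/policy/kpi_retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "730d",
        "actions": { "delete": {} }
      }
    }
  }
}

Attach it to new indices by adding the setting index.lifecycle.name: "kpi_retention" to the index template, and ILM will drop each daily index once it passes the two-year mark.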
Now, we have the issue of storing the product category metadata. As you might have found out, ES is better at denormalized data, but it does lead to repetition and you might find your index size blowing up.
For minimizing disk usage, the trick is to tweak individual field mappings (and no, you can't rely on dynamic mapping). You can avoid storing a lot of stuff in the inverted index. See https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-disk-usage.html. I'd need to see your current mapping to check if there are any obvious gains to be made here.
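As an illustration of the kind of tweaks that page describes - a sketch only, since the right choices depend on your actual queries: fields you only aggregate or sort on can skip the inverted index (doc values are enough), and fields you only ever display can skip both.

PUT yourindex_20220823
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "product_family": { "type": "keyword" },
      "sales": { "type": "long",    "index": false },
      "url":   { "type": "keyword", "index": false, "doc_values": false }
    }
  }
}

With "index": false the sales field is still aggregatable and sortable through doc values but no longer takes space in the inverted index, while url is only retrievable from _source.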
Lastly, a feature that I've never tried out is to move older data (again, having daily indices helps here) to slower storage modes. See cold/frozen storage tiers.
Related
I have an ElasticSearch cluster and my system handles events coming from an API.
Each event is a document stored in an index, and I create a new index per source (the company calling the API). Sources come and go, so I have new sources every week, and most sources become inactive after a few weeks. Each source sends between 100k and 10M new events every day.
Right now my indices are named api-events-sourcename
The documents contain a datetime field, and most of my queries look like "fetch the data for that source between those dates".
I frequently use Kibana and I have configured a filter that matches all my indices (api-events-*) at once, and I then add terms to filter a specific source and specific days.
My requests can be slow at times and they tend to slow down the ingestion of new data.
Given that workflow, should I see any performance benefit from creating an index per source and per day, instead of the single index per source that I use today?
Are there other easy tricks to avoid putting too much strain on the cluster?
Thanks!
I have a web app that is used to search and view documents in Elastic Search.
The goal now is to maintain two values.
1. How many times the document was fetched in total (life time views)
2. How many times the document was fetched in last 30 days.
Achieving the first is somewhat possible, but the second one seems to be a very hard problem.
The two values need to be part of the document as they will be used for sorting the results.
What is the best way to achieve this?
To maintain expiring data like that, you will need to store each view with its timestamp. I suppose you could store them in an array in the ES document, but you're asking for trouble doing it like that: the update operation you'd have to call on every view deletes and recreates the document (that's how ES does updates), and if two views happen at the same time it is difficult to make sure both get stored.
There are two ways to store the views, and make use of them in the query:
Put them in a separate store (could be a different index in ES if you like), and run a cron job or similar every day to update every item in the main index with the number of views from the last thirty days in the view store. Even with a lot of data it should be possible to make this quite efficient, depending on your choice of store for views.
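For the first option, a sketch of what the daily job could run - the index and field names (document_views, doc_id, views_30d) are made up for illustration. One query aggregates the last 30 days of views per document, and the job then writes the counts back onto the main documents in bulk:

POST document_views/_search
{
  "size": 0,
  "query": { "range": { "timestamp": { "gte": "now-30d/d" } } },
  "aggs": {
    "views_per_document": {
      "terms": { "field": "doc_id", "size": 10000 }
    }
  }
}

# then, one update action per bucket returned by the aggregation:
POST _bulk
{ "update": { "_index": "documents", "_id": "doc-42" } }
{ "doc": { "views_30d": 137 } }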
Use the ElasticSearch parent/child datatype to store views in the same index as the main documents, as children. I'm not sure that I'd particularly recommend this approach, but I think it should be possible with aggregations to write a query that sorts primary documents by the number of children (filtered by date). It might be quite slow though.
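If you do go the parent/child route, one way to get that sort (a sketch, assuming a join field where "view" children carry a timestamp) is a has_child query with score_mode "sum": each recent child contributes a score of 1, so a parent's score equals its view count for the window, and results come back ordered by _score. Note that parents with no recent views won't be returned at all:

GET documents/_search
{
  "query": {
    "has_child": {
      "type": "view",
      "score_mode": "sum",
      "query": {
        "constant_score": {
          "filter": { "range": { "timestamp": { "gte": "now-30d/d" } } }
        }
      }
    }
  }
}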
I doubt there is any other way to do this with current versions of ES, because it doesn't support joining across indices. Either the data must be aggregated in advance onto the document, or it has to be available in the same index.
I am looking at sending my app logs to Elastic (6.x) via Filebeat and Logstash. As mentioned in Configure the Logstash output and recommended elsewhere, it seems that I need to add the date to the index name. The reason for doing so is that when the time comes to delete old data, it is easier to delete an entire index by date than individual documents. Is this true?
If I should be following this recommendation of adding the date to the index name, I'm curious what additional things I need to do to ensure seamless querying - by which I mean querying, especially in Kibana, e.g. over the past day, which would need to look at today's index as well as yesterday's.
Speaking of querying in Kibana, is there a way of simply working with the base index name without the date stamp i.e. setting it up so that I do not see or have to deal with the date named indexes?
Edit: Kamal raised a good point that I have not provided any information about my cluster and my needs. The following is what I'm working with:
What is your daily data creation/expected count
I'm not sure. I don't expect anything more than a GB of data a day, and no more than a couple of hundred thousand documents a day. Since these are logs, I don't expect any updates to the documents once they are created.
Growth rate of the data in the future (1 year - 5 years)
At the moment, I don't see the growth rate to cross a GB a day.
How many teams are using the same cluster apart from yours if there is
any
The cluster would be used (actually queried) by just my team. We are about 5 right now, but I don't see more than 10 users (and that's not concurrent, just over a day or month)
Usage patterns, type of queries used etc.
I'm not sure, but there certainly would not be updates to the data other than deletions
Hardware details
I've not worked this out with management. For most part I expect 3 nodes. Also this is not critical i.e. if we lose all of our logs for some reason, I would not lose sleep over it.
First of all, you need to take a step back and decide whether you really need multiple indices or a single one (where you filter documents at query time using a date field).
Some questions you should answer before making such a decision:
What is your daily data creation/expected count
Growth rate of the data in the future (1 year - 5 years)
How many teams are using the same cluster apart from yours if there is any
Usage patterns, type of queries used etc.
Hardware details
Advantages
In a way, having multiple indices (with the date as part of the index name) is more beneficial:
You can delete the old indexes without affecting new ones.
If you have to change the mapping, you can do so for the new index without affecting the old ones. With a single index, by comparison, you have to reindex all the documents, which takes a lot more time if the index is large; and if this keeps happening, you need to schedule such operations for times of minimal usage, which hurts productivity. (See the reindex sketch after this list.)
Searching across multiple indices is still convenient.
Not really sure, but it is probably easier to scale with multiple indices.
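For the mapping-change point above, the single-index alternative means copying everything into a new index with the new mapping, which looks roughly like this (index names are hypothetical):

POST _reindex
{
  "source": { "index": "logs_v1" },
  "dest":   { "index": "logs_v2" }
}

With date-based indices you can instead let new indices pick up the new mapping from a template and leave the old ones alone.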
Disadvantages are:
Additional shards are created for each and every index that can waste some storage space.
Overhead to maintain multiple indexes by monitoring/operations team.
At times can lead to over-creation of indexes.
If there are no mapping changes and only a low document-insertion rate (hundreds or a few hundred a day), it'd be better to use a single index.
The only way, and the only correct way, to figure out what's best is to have a cluster that closely resembles the production one, with data that also resembles production, try various configurations, and see which solution fits best.
Speaking of querying in Kibana, is there a way of simply working with
the base index name without the date stamp i.e. setting it up so that
I do not see or have to deal with the date named indexes?
Yes there is. If you have indices with names like logs-0001, logs-0002, you can use logs-* as the index name when you query.
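For example, a minimal sketch of querying across all the date-stamped indices at once (assuming the @timestamp field that Logstash/Filebeat normally add):

GET logs-*/_search
{
  "query": {
    "range": { "@timestamp": { "gte": "now-1d/d" } }
  }
}

Kibana works the same way: define the index pattern as logs-* and every matching index, today's and yesterday's included, is searched transparently.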
Including a date in an index name is a very common pattern implemented by many Elasticsearch users. It helps with archiving/purging old indices, as you mentioned. You don't need to do anything extra to be able to query: set up your index base name as an index pattern, e.g. logstash-*, and you can query that index pattern in Kibana.
We have a rather difficult set of requirements for our search engine replacement and they go as follows.
Every instance will have a unique schema; we have multiple client installations that we don't control, with varying data structures.
Frequent updates: it's not uncommon for every record to have a field updated in a single action. Some fields are updated frequently, others never change.
Some of our fields can be very large (50 MB+), though these are never changed and are rare in a data set.
We'd like to have near real-time search if possible
We're looking at making the fields that are updated semi-frequently/frequently into child documents. The issue with this is that we have a set of tags on the record that change quite frequently and that we want to search against in near real time. There is a strong expectation in our application that when this data is modified, searches immediately reflect the change. We've tried child documents, but they don't seem to update as quickly as we'd like over a large data set.
So the questions are as follows:
Are there strategies I'm not aware of for updating child documents quickly? Maybe a plugin? Right now we're only using the RESTful interface.
Would it be better to store the data that isn't frequently changed in ES but keep the tags in a database? Possibly creating a plugin in ES that maps the two together? Would such a plugin be difficult? Ideally, we'd be able to mix our searches (tags + regular ES queries) in a boolean fashion, including the tags stored in a table.
Hopefully this will be helpful to other people in this situation, here is the solution I came up with.
Use Child/Parent documents
There was a single parent that contained the static information for the record, which rarely/never changes (the bulk of the data indexed)
Child documents were created for the other data I wanted to index, so they could be indexed independently of the primary document
Since I had split the record data into static and non-static documents, and then broken the non-static data into further child documents, I was able to build a high-throughput indexer. The total set of records to be indexed was split into sub-chunks, which were then further split by child document type. I handed these chunks out to various indexer instances, so the number of documents indexed per second was limited only by the throughput of the data source or of the ES cluster.
This was all done through the bulk API. Keeping the static data away from the frequently changing data allowed the frequently changed data to be updated very quickly, limited only by the available hardware. It was a little tougher to craft queries using the child document clauses and aggregations, but everything worked.
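For reference, a rough sketch of what this setup looks like with the join field available in ES 6.x+ (older versions used the _parent mapping instead); the index, relation, and field names are made up:

PUT records
{
  "mappings": {
    "properties": {
      "record_relation": {
        "type": "join",
        "relations": { "record": ["tags", "metrics"] }
      }
    }
  }
}

POST _bulk
{ "index": { "_index": "records", "_id": "1" } }
{ "record_relation": "record", "title": "static parent data, rarely updated" }
{ "index": { "_index": "records", "_id": "tags-1", "routing": "1" } }
{ "record_relation": { "name": "tags", "parent": "1" }, "tags": ["urgent", "reviewed"] }

The routing parameter keeps each child on the same shard as its parent, which is what lets the frequently changing children be re-indexed without touching the large static parent.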
Notes
There is a performance penalty to using parent/child documents. It was a non-issue for us given what ES bought us over our previous solution, but it may cause problems for other implementations.
I am trying to understand and effectively use the index type available in elasticsearch.
However, I am still not clear on how the _type meta field is different from any regular field of an index in terms of storage/implementation. I do understand avoiding type gotchas.
For example, if I have 1 million records (say posts) and each post has a creation_date, how will things play out if one of my index types is creation_date itself (leading to ~1 million types)? I don't think it affects the way Lucene stores documents, does it?
In what way would my Elasticsearch query performance be affected if I used creation_date as the index type instead of a namesake type such as 'post'?
I got the answer on elastic forum.
https://discuss.elastic.co/t/index-type-effective-utilization/58706
Pasting the response as is -
"While elasticsearch is scalable in many dimensions there is one where it is limited. This is the metadata about your indices which includes the various indices, doc types and fields they contain.
These "mappings" exist in memory and are updated and shared around all nodes with every change. For this reason it does not make sense to endlessly grow the list of indices, types (and therefore fields) that exist in this cluster state. A type-per-document-creation-date registers a million on the one-to-ten scale of bad design decisions" - Mark_Harwood