Can I update multiple documents with different field values at once? - elasticsearch

I have an index of 280,000 documents in Elasticsearch. I need to assign a unique field value to each document. Currently I iterate through all the ID values and update each document with _update. This works, but it is very slow, taking around 8 hours for 280,000 documents.
Any ideas on how I could speed this process up? Is it possible to update multiple documents at once, assigning a different field value to each document?

Try using the ES Bulk API, which lets you update multiple documents in a single request.
I would also suggest checking the refresh setting on the index: if you refresh the index every time you insert a record, performance will suffer, and I suspect that is what is happening to you right now. If you use bulk updates it should be fine, but it is something to keep in mind.
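
A minimal sketch of both ideas with the Python client's bulk helper (8.x client assumed); the index name my-index, the field my_field, and the values_by_id mapping are all hypothetical:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# Hypothetical: a mapping of document ID -> the unique value to assign.
values_by_id = {"1": "alpha", "2": "beta"}

def update_actions():
    # One partial-update action per document; helpers.bulk batches these
    # into _bulk requests instead of issuing one HTTP call per document.
    for doc_id, value in values_by_id.items():
        yield {
            "_op_type": "update",
            "_index": "my-index",        # hypothetical index name
            "_id": doc_id,
            "doc": {"my_field": value},  # hypothetical field
        }

# Pause refreshes while the job runs, then restore the default.
es.indices.put_settings(index="my-index",
                        settings={"index": {"refresh_interval": "-1"}})
helpers.bulk(es, update_actions(), chunk_size=1000)
es.indices.put_settings(index="my-index",
                        settings={"index": {"refresh_interval": "1s"}})
```

With around 1,000 actions per batch, 280,000 updates become a few hundred round-trips instead of 280,000.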

Related

Editing and re-indexing large amounts of data in elasticsearch (millions of records)

I recently made a new version of an index for my Elasticsearch data, with some new fields included. I reindexed from the old index, so the new index has all of the old data as well as the new mapping with the new fields.
Now I'd like to update all of my Elasticsearch data in the index to populate these new fields, which I can calculate by making some separate database and API calls to other sources.
What is the best way to do this, given that there are millions of records in the index?
Logistically speaking, I'm not sure how to accomplish this. How can I keep track of the records that I've updated? I've been reading about the scroll API, but I'm not certain it's viable given the maximum scroll time of 24 hours (what if the job takes longer than that?). Another serious consideration: since I need to make other database calls to calculate the new field values, I don't want to hammer that database for too long in a single session.
Would there be some way to run an update for say 10 minutes every night, but keep track of what records have been updated/need updating?
I'm just not sure about a lot of this, would appreciate any insights or other ideas on how to go about it.
You would need to run an update-by-query on your original index, which is expensive.
Alternatively, you might be able to put an alias in front of your indices: when you want to make a change, create a new index with the new mappings and attach it to the alias, so new data coming in gets written correctly, then reindex the "old" data into the new index.
Which approach is better will depend on the details of what you're doing, though.
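
A rough sketch of the alias approach with the Python 8.x client; the index and alias names and the example mapping are made up for illustration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical names: the alias the app uses, plus old and new indices.
ALIAS, OLD, NEW = "events", "events-v1", "events-v2"

# 1. Create the new index with the updated mappings (example field only).
es.indices.create(index=NEW, mappings={
    "properties": {"new_field": {"type": "keyword"}},
})

# 2. Atomically repoint the alias so new writes land in the new index.
es.indices.update_aliases(actions=[
    {"remove": {"index": OLD, "alias": ALIAS}},
    {"add": {"index": NEW, "alias": ALIAS}},
])

# 3. Copy the old data across. wait_for_completion=False returns a task
#    ID you can poll, instead of blocking on millions of documents.
task = es.reindex(source={"index": OLD}, dest={"index": NEW},
                  wait_for_completion=False)
```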

Elastic Search Number of Document Views

I have a web app that is used to search and view documents in Elastic Search.
The goal now is to maintain two values.
1. How many times the document was fetched in total (life time views)
2. How many times the document was fetched in last 30 days.
Achieving the first is somewhat possible, but the second one seems to be a very hard problem.
The two values need to be part of the document as they will be used for sorting the results.
What is the best way to achieve this?
To maintain expiring data like that, you will need to store each view with its timestamp. I suppose you could store them in an array on the ES document, but you're asking for trouble doing it that way: the update operation you'd need to call every time the document is viewed has to delete and recreate the document (that's how ES does updates), and if two views happen at the same time it will be difficult to make sure both get stored.
There are two ways to store the views and make use of them in the query:
1. Put them in a separate store (it could be a different index in ES if you like), and run a cron job or similar every day to update every item in the main index with the number of views from the last thirty days in the view store. Even with a lot of data it should be possible to make this quite efficient, depending on your choice of store for the views. A sketch of this option follows below.
2. Use the Elasticsearch parent/child datatype to store views in the same index as the main documents, as children. I'm not sure that I'd particularly recommend this approach, but I think it should be possible, with aggregations, to write a query that sorts primary documents by the number of children (filtered by date). It might be quite slow, though.
I doubt there is any other way to do this with current versions of ES, because it doesn't support joining across indices. Either the data must be aggregated in advance onto the document, or it has to be available in the same index.
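
A minimal sketch of the first option with the Python 8.x client, assuming a hypothetical views index (one doc per view, with doc_id and timestamp fields) and a documents index carrying a views_30d field:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# Count views per document over the last 30 days (terms agg capped at
# 10,000 buckets here; a composite aggregation would scale further).
resp = es.search(
    index="views",
    size=0,
    query={"range": {"timestamp": {"gte": "now-30d/d"}}},
    aggs={"per_doc": {"terms": {"field": "doc_id", "size": 10000}}},
)

def actions():
    # Write each count back onto the main document as views_30d.
    for bucket in resp["aggregations"]["per_doc"]["buckets"]:
        yield {
            "_op_type": "update",
            "_index": "documents",
            "_id": bucket["key"],
            "doc": {"views_30d": bucket["doc_count"]},
        }

helpers.bulk(es, actions())
```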

Elasticsearch slow performance for huge data retrieval with source field

I'm using Elasticsearch to search across more than 10 million records; most records contain 1 to 25 words. The method I'm using now is drastically slow for large retrievals, as I'm pulling the data from the _source field. I want a method that makes this process faster. I'm free to use another database or anything else alongside Elasticsearch. Can anyone suggest some good ideas and examples for this?
I've tried searching for a solution on Google, and one solution I found was pagination. I've already applied it wherever possible, but pagination is not an option when I want to retrieve many (5,000+) hits in one query.
Thanks in advance.
Try using the scroll API. From the Elasticsearch documentation:
While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.
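
A minimal sketch with the Python client's scan helper, which drives the scroll API for you; the index name and the process function are hypothetical:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# helpers.scan wraps the scroll API: it keeps pulling pages until the
# result set is exhausted, so retrieving 5,000+ hits is no problem.
for hit in helpers.scan(
    es,
    index="my-index",                    # hypothetical index name
    query={"query": {"match_all": {}}},
    size=1000,                           # documents per scroll page
):
    process(hit["_source"])              # hypothetical processing step
```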

Elasticsearch index alias

I am trying to use Elasticsearch to filter millions of records. All the data is in one index, and I want to access it in a 'direct' way.
What do I mean by a direct way?
Direct means, for example, accessing the 700000th element of the index (not by ID). Is this possible somehow?
What I tried already:
from + size works, but does not seem fast once the number of elements exceeds 10,000
I didn't try scrolling, but it doesn't seem like the right thing for my use case.
So any other ideas?
Scrolling will not work: it fetches all the data.
I think Elasticsearch is not the right tool for what you want to do here.
It would be better to keep a separate list of the IDs; that lets you look up an ID by position and then query Elasticsearch for the data.
If your data is such that it never gets modified or deleted, you can instead add an extra field to the mapping that acts like an auto-increment field in a database, and fetch the data using that field, as sketched below.
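
A minimal sketch of the auto-increment idea, assuming a hypothetical seq field assigned at index time (0, 1, 2, ...):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Fetching the 700000th element becomes a cheap term query on the
# hypothetical seq field, instead of a deep from + size offset.
resp = es.search(index="my-index", query={"term": {"seq": 700000}})
doc = resp["hits"]["hits"][0]["_source"]
```

The catch is the one stated above: the seq values only stay correct if documents are never modified or deleted.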

Bulk read of all documents in an elasticsearch alias

I have the following elasticsearch setup:
4 to 6 small-ish indices (<5 million docs, <5 GB each)
they are unioned through an alias
they all contain the same doc type
they change very infrequently (i.e. >99% of the indexing happens when the index is created)
One of the use cases for my app requires reading all documents for the alias, ordered by a field, doing some magic, and serving the result.
I understand that deep pagination would most likely bring down my cluster, or at the very least perform dismally, so I'm wondering whether the scroll API could be the solution. I know the documentation says it is not intended for real-time user queries, but what are the actual reasons for that?
Generally, how are people dealing with having to read through all the documents in an index? Should I look for another way to chunk the data?
When you use the scroll API, Elasticsearch creates a sort of cursor over the current state of the index, so the reason it is not recommended for real-time search is that you will not see any new documents that were inserted after you created the scroll token.
Since your use case indicates that you rarely update or insert new documents into your indices, that may not be an issue for you.
When generating the scroll token you can specify a query with a sort, so if your documents have some sort of timestamp, you could create one scroll context for all documents with timestamp: { lte: "now" } and another scroll (or even a simple query) for the rest of the documents that were not included in the first search context, by specifying an appropriate date-range filter.
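
A minimal sketch of that split, using the Python client's scan helper with a sorted scroll; the alias name, timestamp field, cutoff value, and handle function are all hypothetical:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# Pin the cutoff so a follow-up query can fetch anything indexed later.
cutoff = "2024-01-01T00:00:00Z"

for hit in helpers.scan(
    es,
    index="my-alias",                  # hypothetical alias name
    query={
        "query": {"range": {"timestamp": {"lte": cutoff}}},
        "sort": ["timestamp"],
    },
    preserve_order=True,  # keep the sort order across scroll pages
):
    handle(hit)           # hypothetical per-document processing
```

Note that preserve_order=True makes the scroll respect the requested sort, at the cost of being slower than the default _doc order.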
