I've created some tests for my Elasticsearch functionality and I've noticed some strange behavior. If I have a test that:
Inserts a document and confirms there are no errors
Retrieves that same document, confirms there are no errors and confirms it has the expected values
Deletes the document, confirms there are no errors and confirms 1 document was deleted
Then the third test will fail because 0 documents were deleted. If I take either of the following steps:
Debug the test and put a breakpoint after insert but before delete
Add time.Sleep(time.Second) immediately before the delete step
then 1 document is deleted and the third test will pass. In the cases where the third test has failed, I've gone into my ES instance and confirmed that the document exists.
This leads me to believe that after inserting a document there is some span of time where something has to happen before I can delete the document.
My question is: what needs to happen after an insert so that I can delete a document, and is there a better way for me to handle this in my tests than sleeping for 1 second?
I am coding in Go and I am using the olivere ES client.
Elasticsearch operations can be inconsistent.
You can look at the refresh or wait_for_active_shards options if they fit your tests.
NB: it's always difficult to add tests to an inconsistent system.
I would not use the term inconsistency. Storing and retrieving a document are real-time operations; search happens in near-real-time.
While you can always search for documents, they will only make it into your result set once the data structures for search exist (typically the inverted indices). Creating and maintaining these data structures for every single document that gets indexed would be costly and inefficient; that's why they get created at the latest when the refresh interval has expired (the default refresh interval is 1 second).
Also, when deleting a document, the document does not get removed from disk immediately. It first gets marked as deleted, ensuring that it will no longer show up in any results. Only after some Elasticsearch-internal housekeeping (segment merges) do the documents marked as deleted eventually get wiped.
That should give you an idea of why we talk about near-real-time behaviour for search, or what you describe as a "gap".
Especially for unit/integration tests you want to make sure that a document can be found after having been indexed. You can easily achieve this by turning your index/write request into a blocking one by adding the parameter refresh=wait_for. With this, the indexing request only returns after the data structures needed for search have been created, making sure that in your next request the document is available for whatever action you want to execute.
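Since the question mentions Go and the olivere client, here is a minimal sketch of that approach (assuming olivere/elastic v7; the index and field names are hypothetical): the write request is made blocking with Refresh("wait_for"), after which a delete that relies on search sees the document.

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// Block until the document is visible to search: the request only
	// returns once the refresh has happened.
	_, err = client.Index().
		Index("my-test-index").
		Id("1").
		BodyJson(map[string]interface{}{"title": "hello"}).
		Refresh("wait_for").
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// The document is now searchable, so a delete that goes through
	// search reports 1 deleted document instead of 0.
	res, err := client.DeleteByQuery("my-test-index").
		Query(elastic.NewTermQuery("title", "hello")).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("deleted %d document(s)", res.Deleted)
}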
Related
In the write tuning section, Elastic recommends increasing the refresh interval.
We're doing document ingestion, and during ingestion we may do reads, essentially like
GET /my-index/_doc/mydocumentid
that is, a read of the document by its _id, as opposed to a search. Some descriptions suggest that the document id is just added to the Lucene index like other attributes. Does this mean that a read by id would still trigger a refresh ahead of schedule, instead of allowing the index to wait for the full refresh_interval?
This is actually a tricky one:
You are correct that a GET on an _id works right away (unlike a multi-document operation like a search, which needs to wait for an explicit ?refresh from you or for the refresh_interval to elapse). But the underlying implementation changed twice:
Initially, the GET on an _id read the data right from the translog, so it didn't need a refresh / the creation of a segment.
That code was complex, so in 5.0 we changed it to read from a segment instead, with a GET on an _id automatically triggering a _refresh. It looked the same from the outside and the code was simpler.
But for use-cases that did a lot of GETs on an _id this was expensive, since it creates lots of tiny segments. So in 7.6 we changed it back to read from the translog again.
So if you are using a current version, it doesn't trigger a _refresh.
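To make that concrete in Go (a hedged sketch, assuming olivere/elastic v7 and a current Elasticsearch; index, id, and field names are hypothetical): a GET by _id finds a freshly indexed document immediately, while a search may only see it after the next refresh.

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// Index without requesting any refresh.
	_, err = client.Index().
		Index("my-index").
		Id("mydocumentid").
		BodyJson(map[string]interface{}{"field": "value"}).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Real-time GET by _id: served from the translog, found immediately,
	// and (on 7.6+) no refresh is triggered.
	doc, err := client.Get().Index("my-index").Id("mydocumentid").Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("GET found: %v", doc.Found) // true right away

	// A search for the same document may report 0 hits until the next
	// refresh (default refresh_interval: 1s).
	res, err := client.Search("my-index").
		Query(elastic.NewTermQuery("field", "value")).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("search hits: %d", res.TotalHits())
}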
a get on the _id is not a search, so no
I am facing a strange issue with the number of docs deleted in an Elasticsearch index. The data is never deleted, only inserted and/or updated. While I can see that the total number of docs is increasing, I have also been seeing some non-zero values in the docs.deleted column. I am unable to understand where this number comes from.
I tried to find out whether updating a doc first deletes it and then re-indexes it, so that the deleted count goes up, but I could not find any information on this.
The command I type to check the index is:
curl -XGET localhost:9200/_cat/indices
The output I get is (the columns are: health, status, index, uuid, pri, rep, docs.count, docs.deleted, store.size, pri.store.size):
yellow open e0399e012222b9fe70ec7949d1cc354f17369f20 zcq1wToKRpOICKE9-cDnvg 5 1 21219975 4302430 64.3gb 64.3gb
Note: it is a single-node Elasticsearch.
I would like to know the reason behind the deletion of docs.
You are correct: updates are the reason you see a non-zero count of deleted documents.
At the Lucene level there is no such thing as an update; documents in Lucene are immutable.
So how does Elasticsearch provide an update feature?
It does so by making use of the _source field, which is why _source must be enabled for the update feature to work. When you use the update API, Elasticsearch reads the _source to get all the fields and their existing values, replaces the values of only the fields sent in the update request, marks the existing document as deleted, and indexes a new document with the updated _source.
What is the advantage of this if it's not an actual update?
It removes the overhead of the application having to assemble the complete document even when only a small subset of fields needs updating. Rather than sending the full document, you can send only the fields that need an update via the update API; the rest is taken care of by Elasticsearch.
It avoids extra network round-trips, reduces payload size, and also reduces the chance of version conflicts.
You can read more about how updates work here.
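For illustration, a minimal sketch of such a partial update in Go (assuming olivere/elastic v7; index, id, and field names are hypothetical); only the changed field is sent, and Elasticsearch merges it into the stored _source:

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// Send only the field that changes. Elasticsearch reads the existing
	// _source, merges this field in, marks the old document as deleted,
	// and indexes the merged document (which is why docs.deleted grows).
	res, err := client.Update().
		Index("my-index").
		Id("1").
		Doc(map[string]interface{}{"views": 42}).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("result: %s, new version: %d", res.Result, res.Version)
}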
We use an ELK stack for our logging. I've been asked to design a process for how we would remove sensitive information that had been logged accidentally.
Now, based on my reading on how Elasticsearch (Lucene) handles deletes and updates, the data is still in the index, just not available. It will ultimately get cleaned up as segments get merged, etc.
Is there a process to run an update (to redact something) or delete (to remove something) and guarantee its removal?
When updating or deleting some value, ES will mark the current document as deleted and index the new document. The deleted value will still be present in the index, but will never come back in a search. Granted, if someone gets access to the underlying index files, they might be able to use some tool (Luke or similar) to view what's inside the index files and potentially see the deleted sensitive data.
The only way to guarantee that the documents marked as deleted are really deleted from the index segments, is to force a merge of the existing segments.
POST /myindex/_forcemerge?only_expunge_deletes=true
Be aware, though, that there is a setting called index.merge.policy.expunge_deletes_allowed that defines a threshold below which the force merge doesn't happen. By default this threshold is set at 10%, so if you have fewer than 10% deleted documents, the force merge call won't do anything. You might need to lower the threshold in order for the deletion to happen... or, perhaps easier, make sure not to index sensitive information in the first place.
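As a hedged sketch of both steps in Go (assuming olivere/elastic v7; the index name is hypothetical), lowering the threshold and then forcing the merge:

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// Lower the threshold so the expunge happens even when fewer than
	// 10% of the documents are marked as deleted.
	_, err = client.IndexPutSettings("myindex").
		BodyJson(map[string]interface{}{
			"index.merge.policy.expunge_deletes_allowed": 0,
		}).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent to: POST /myindex/_forcemerge?only_expunge_deletes=true
	_, err = client.Forcemerge("myindex").
		OnlyExpungeDeletes(true).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("force merge with only_expunge_deletes requested")
}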
I am updating existing documents by deleting and reindexing them. I did it this way because the documents have nested components and it was easier to massage the document myself rather than construct an update operation.
Mostly this works fine, but occasionally the system updates the same document twice in quick succession. I think what is happening is that the search for the second update gets the original document (before it was updated the first time) because the previous update has not yet been reflected in the index. By the time I try to delete the document (by id), the index has been updated and it comes up as not found.
I am not doing bulk updates.
Is this a known issue, and if so, how does one work around it?
I can't find any reference to problems like this anywhere so I am puzzled.
I am writing some code where we are inserting 200,000 items into an Elasticsearch index.
Whilst this works fine, when we get a count of items in the index to check that everything went in, we do not get the same number. However, if we wait a second or two, the count is correct.
Therefore, is there a programmatic way we can get a real count from ElasticSearch without having to sleep or similar?
Newly indexed records become visible in search results only after a refresh operation. Refresh happens automatically at the frequency specified by the index.refresh_interval setting, which is 1s by default. When writing Elasticsearch tests, it's customary to call refresh after indexing to make sure that all indexed records are available to searches. However, excessive refresh calls (after each record, for example) in production code might hamper Elasticsearch indexing performance.
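Programmatically, that means calling the refresh API after the bulk insert and before counting, rather than sleeping. A minimal sketch in Go (assuming olivere/elastic v7; the index name is hypothetical):

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// ... bulk-index the 200,000 items here ...

	// Force a refresh so everything indexed so far becomes searchable,
	// instead of waiting out the refresh_interval.
	if _, err := client.Refresh("my-index").Do(ctx); err != nil {
		log.Fatal(err)
	}

	// The count now reflects all indexed documents.
	count, err := client.Count("my-index").Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("document count: %d", count)
}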