I am trying to fix an issue related to Elasticsearch in our production environment that is not always reproducible. I am using Elasticsearch 1.5 (yet to upgrade).
The issue is that while trying to create an index I get an IndexAlreadyExistsException, because the call to
client.admin().indices().prepareExists(index).get().isExists();
returns false while Elasticsearch is in recovery mode, and when I then try to create the index I get that exception.
Below are a few links to issues which say that Elasticsearch returns false while recovering indices:
8945
8105
As I am not able to reproduce the issue consistently, I cannot test my fix, which is to check the cluster health before calling isExists().
My question is: when does Elasticsearch start recovery?
You can use the prepareHealth() admin method in order to wait for the cluster to reach a given status before doing your index maintenance operations:
ClusterHealthResponse health = client.admin().cluster().prepareHealth(index)
.setWaitForGreenStatus()
.get();
Or you can wait for the whole cluster to become green:
ClusterHealthResponse health = client.admin().cluster().prepareHealth()
.setWaitForGreenStatus()
.get();
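Putting the two together, a minimal sketch (untested, using the same 1.x transport client and index variable as the snippets above, with an arbitrary 30-second timeout):
ClusterHealthResponse health = client.admin().cluster().prepareHealth(index)
        .setWaitForGreenStatus()
        .setTimeout(TimeValue.timeValueSeconds(30)) // don't block forever if recovery is slow
        .get();

if (health.isTimedOut()) {
    // still recovering: retry later instead of creating the index now
} else if (!client.admin().indices().prepareExists(index).get().isExists()) {
    try {
        client.admin().indices().prepareCreate(index).get();
    } catch (IndexAlreadyExistsException e) {
        // another writer created the index in the meantime; safe to ignore
    }
}
If the health call times out, the cluster state has likely not been fully recovered yet, which is exactly the situation described in the issues above where isExists() can return false.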
Related
My OpenSearch cluster sometimes hits this error when I add a new index:
Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;
So I have to increase cluster.max_shards_per_node to a larger value.
I wonder if there is any way to check how many shards we are currently using, so we can avoid this error?
The best way to see indexing and search activity is by using a monitoring system. And the best monitoring system for Elasticsearch is Opster. You can try it for free at the following link.
https://opster.com/
For a manual check, you can try the following APIs.
You can sort your indices by their creation date string (cds). This will help you see which indices are the oldest and give you an idea of how your indices (and their shards) have accumulated.
GET _cat/indices?v&h=index,cds&s=cds
Also, you can check the index stats to see whether there is any search or indexing activity.
To check all indices you can use GET _all/_stats
To check only one index you can use GET index_name/_stats
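To see how close the cluster is to the shard limit from the error above, the cluster health response also reports the total number of active shards (the filter_path parameter below just trims the response to those fields); the limit itself is cluster.max_shards_per_node multiplied by the number of data nodes:
GET _cluster/health?filter_path=status,active_shards,active_primary_shards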
I am using Elasticsearch 7.9.0
I was updating the document very frequently, so I was getting the exception below:
Elasticsearch exception [type=version_conflict_engine_exception, reason=[111]: version conflict, required seqNo [4348], primary term [2]. current document has seqNo [4427] and primary term [2]]
Then I added a delay of 1 second between each update (I can't allow more than that).
But the problem still exists. How can we solve this?
Please help me.
Thanks.
This issue happens because of document versioning in Elasticsearch. This feature exists to prevent concurrent changes to the same document by tasks that run simultaneously.
When you try to update a document that is already being updated by another task, you might run into this issue.
If you want to track the update tasks running against your documents, you may want to use the task management API: https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html
Also, you might want to check the documentation on the Index API, as it explains this further: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html
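As a possible mitigation (a sketch; my-index and the doc body are placeholders, and 111 is the document id from the error message): if your writes are partial updates, the update API can retry a conflicting update for you via the retry_on_conflict parameter instead of failing:
POST my-index/_update/111?retry_on_conflict=5
{
  "doc": { "some_field": "new value" }
}
On a conflict the partial doc is re-applied on top of the newest version of the document, so this is only appropriate when that last-write-wins behaviour is acceptable.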
I received nearly the same error in an OpenSearch setup, but it wasn't due to overly frequent updates as in the OP's case.
In my case, I was unknowingly trying to update an existing Role in the domain. My requests were trying to create a 'new' Role when it already existed. When I tried to do this, I received the error.
My resolution was to create a Role with an entirely new name and then update that.
I have ES running on my local development machine for my Rails app (Using Searchkick). I am getting these error messages:
299 Elasticsearch-6.8.8-2f4c224 "In a future major version, this
request will fail because this action would add [1] total shards, but
this cluster currently has [1972]/[1000] maximum shards open. Before
upgrading, reduce the number of shards in your cluster or adjust the
cluster setting [cluster.max_shards_per_node]."
My config file already has cluster.max_shards_per_node: 2000. Am I missing something here?
299 Elasticsearch-6.8.8-2f4c224 "[types removal] The parameter
include_type_name should be explicitly specified in create index
requests to prepare for 7.0. In 7.0 include_type_name will default to
'false', and requests are expected to omit the type name in mapping
definitions."
I have zero clue where to start looking on this one.
These flood my terminal when I run my re-indexing, and I'm looking to resolve them.
I think this is a dynamic cluster setting, and you should use the _cluster/settings API.
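For example (a sketch; the value 2000 is taken from your config file), the setting can be applied at runtime like this:
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}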
It is clearly problematic to have this many shards on one node; please read the following article:
https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
You can also use the shrink index API, which allows you to shrink an existing index into a new index with fewer primary shards.
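A rough sketch of the shrink flow (the index names my-index and my-index-shrunk and the node name node-1 are placeholders): the source index first has to be made read-only and relocated onto a single node, and only then can it be shrunk:
PUT my-index/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "node-1",
    "index.blocks.write": true
  }
}

POST my-index/_shrink/my-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1
  }
}
The target shard count must be a factor of the source index's shard count.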
We are using elasticsearch 6.0.
We have an index in our cluster and would like to know if there are any clients who are querying it.
Is there a way to know whether an index is receiving reads (get, search, aggregation, etc.)?
If you don't have monitoring enabled on your cluster, please have a look at the index stats API. You'll find a lot of metrics worth monitoring.
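For example (index_name is a placeholder), the stats can be limited to the search and get sections; if totals such as query_total do not grow between two calls taken some time apart, nothing is reading the index:
GET index_name/_stats/search,get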
You can also see how many write tasks each node's write thread pool has completed with this command:
GET /_cat/thread_pool/write?v&h=id,node_name,active,rejected,completed
or, for the get thread pool:
GET /_cat/thread_pool/get?v&h=id,node_name,active,rejected,completed
I'm using both Elasticsearch 1.4.4 and 2.1.0 with a cluster of 5 hosts on AWS. In the config file, I've set the number of shards per index to 10.
So here comes the strange behavior: I create an index every day, and when there are about 400 shards or more, the whole cluster returns a read timeout when using the Bulk index API.
If I delete some indices, the timeout error disappears.
Has anyone met a similar problem? This is really a big obstacle to storing more data.