Unassigned shards after the elasticsearch-node repurpose command is executed

I have an Elasticsearch cluster of 3 nodes (1 master and 2 data nodes). After I enabled X-Pack I was no longer able to start the master node, so I ran the elasticsearch-node repurpose command and the cluster restarted.
But now I have shards that are unassigned:
analytics-2019-11-19 0 p UNASSIGNED
analytics-2019-11-19 0 r UNASSIGNED
and the cluster status is red. I am new to the ELK stack. How can I fix this and make the cluster green?
Thanks

In order to resolve the UNASSIGNED shards issue, follow these steps:
First, find out which shards are unassigned and why. Run:
curl -XGET "localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED
Via Kibana (Dev Tools can't pipe to grep, so look for UNASSIGNED in the output):
GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
Next, use the cluster allocation explain API to gather more information about shard allocation issues:
curl -XGET "localhost:9200/_cluster/allocation/explain?pretty"
Via Kibana
GET _cluster/allocation/explain?pretty
The resulting output will provide helpful details about why certain shards in your cluster remain unassigned.
For example:
You might see this explanation: "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists"
If the affected index is one you don't need anymore, you can delete it to restore your cluster status to green.
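For example, a minimal sketch deleting the index from the question above (only do this if you are certain you no longer need that data, since deletion is irreversible):
DELETE /analytics-2019-11-19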
If that is not the issue, then it could be one of the following reasons:
- Shard allocation is purposefully delayed
- Too many shards, not enough nodes
- You need to re-enable shard allocation (see the sketch after this list)
- Shard data no longer exists in the cluster
- Low disk watermark (also covered in the sketch after this list)
- Multiple Elasticsearch versions
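For instance, two of those items can be checked directly from Dev Tools. A minimal sketch (re-enabling allocation is only appropriate if it was previously disabled, e.g. for a rolling restart; the second request just lists disk usage per node so you can spot a breached watermark):
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
GET _cat/allocation?v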
Follow this guide to resolve the unassigned shards issue.
Hope this helps

Related

Elasticsearch index in RED health

When I run curl -X GET "elastic01:9200/_cat/indices?v"
I see that one of my indices has the value red for health.
I checked my cluster health and even that is red.
What can be done to bring the Elasticsearch index health status from red to green?
Good start, you already know which index's health value is RED, which means that index is missing one or more primary shards. Please identify them using this great blog post from Elastic, and check whether any node in your cluster that holds the primary shards of the RED index is disconnected.
If you can't get back the nodes holding the primary shards of the index, then, as mentioned in the same blog, you have to accept losing the data and create empty primary shards using the reroute API.
In the odd event that all nodes holding copies of this particular shard are all permanently dead, the only recourse is to use the reroute commands to allocate an empty/stale primary shard and accept the fact that data has been lost.
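A minimal sketch of that reroute call; the index name my-red-index, shard number 0, and node name node-1 are placeholders for your own values, and accept_data_loss must be set explicitly because the shard will come back empty:
# placeholders: my-red-index / 0 / node-1 -- replace with your own values
POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-red-index",
        "shard": 0,
        "node": "node-1",
        "accept_data_loss": true
      }
    }
  ]
}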

Settings for an Elasticsearch read-only single-node cluster

I have been asked to restore data for a 3 node ES cluster to a new read-only cluster.
The new cluster is only for viewing old log data and will receive very few requests.
I have set up one server that will be my "cluster".
When I run my restore command I get 5 assigned shards and 5 unassigned shards, and I think this is redundant, as one copy must be enough.
How can I restore my data so I use as little disk space as possible?
Your cluster must be yellow since there are unassigned shards. Simply run the following command to remove the unassigned replica shards and the cluster will turn green again:
PUT index-name/_settings
{
  "number_of_replicas": 0
}
Just note, though, that removing the unassigned replicas will not save you any disk space, since unassigned replica shards do not take up any space anyway.
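If the goal is to use as little disk space as possible from the start, the restore API also accepts index_settings overrides, so the replicas can be dropped at restore time instead of afterwards. A sketch, where the repository name my_repo and the snapshot name my_snapshot are placeholders for your own values:
# placeholders: my_repo / my_snapshot
POST /_snapshot/my_repo/my_snapshot/_restore
{
  "indices": "*",
  "index_settings": {
    "index.number_of_replicas": 0
  }
}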

Yellow status of 'small' ElasticSearch cluster against green status of 'big' cluster in the process of data uploading

I have a script for uploading data to Elasticsearch, and it works fine with ES clusters containing 3 ES instances. But running the script against a 2-instance cluster throws that cluster into yellow status. Deleting the index restores it to green.
Found this: "A yellow cluster status means that the primary shards for all indices are allocated to nodes in a cluster, but the replica shards for at least one index are not."
How could I fix that? Should I improve my script somehow with a cluster size switch?
You most likely have 2 replicas configured in your index settings. And since you can't have a replica and a primary shard on the same node, your cluster can't allocate all your shards in a 2-node cluster.
Could you try decreasing your number of replicas to 1?
see here for the doc:
PUT /<your_index>/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
Keep us posted!
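If the same script has to run against clusters of different sizes, one alternative worth a look is the index.auto_expand_replicas setting, which lets Elasticsearch size the replica count to the nodes available; with "0-1", a 2-node cluster keeps one replica and a single node keeps none. A sketch:
PUT /<your_index>/_settings
{
  "index": {
    "auto_expand_replicas": "0-1"
  }
}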

Elasticsearch primary shard lost - how to recover?

I'm running a 3-node cluster on AWS EC2. One of my nodes crashed, and after a reboot I see 2900 unassigned shards and the cluster state is RED.
I configured indices to have 5 shards with 1 replica, and I don't understand why, after rebooting, the shards are not recovered from the replicas.
I tried to manually move shards with the Elasticsearch reroute API (https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html) but got errors:
can't cancel 2, failed to find it on node {infra-elasticsearch-1}
can't move 2, failed to find it on node {infra-elasticsearch-1}
[allocate_replica] trying to allocate a replica shard [filebeat-demo00-2018.07.21][2], while corresponding primary shard is still unassigned (illegal_argument_exception)
It looks like some primary shards were lost (they no longer exist on disk) and I don't know how to get the state back to GREEN.
thanks
Make sure shard allocation is enabled on the active nodes by using the API request below (setting cluster.routing.allocation.enable to null resets it to its default, which allows allocation of all shards):
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
You can also check whether a replica exists for the indices whose primary shard has been lost by looking at the Indices section of the Monitoring app in Kibana.
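A quick way to list just the problem shards from Dev Tools; the filebeat-* pattern is an assumption based on the index named in the error, so adjust it to your own indices:
GET _cat/shards/filebeat-*?v&h=index,shard,prirep,state,unassigned.reason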
To check the ongoing recovery process, use the API below:
GET /_recovery
I don't know if this can help, but I just restarted the Elasticsearch and Kibana services. I waited a few minutes, and the cluster health changed from red to yellow and then green.
on elastic cluster nodes:
#systemctl restart elasticsearch.service
on kibana node:
#systemctl restart kibana.service

Elasticsearch: some indices unassigned after a split-brain happened

Using ES version 1.3.1.
We hit a split-brain, then restarted the entire cluster. Now only the latest index got correctly allocated, leaving all other indices unassigned...
I've checked several nodes; there is index data saved on disk, and I've tried restarting those nodes, but I still can't get the shards allocated...
Please see this screen shot:
http://i.stack.imgur.com/d6jT7.png
I've tried the "Cluster reroute": http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html. However, got a exception like "cannot allocate Primary Shard"...
Please help and any comment is welcome. Thanks a lot.
Don't allocate primary shards with the _cluster/reroute API; this will create an empty shard with no data.
Try setting your replica count to 0.
If that doesn't work, set index.gateway logging to TRACE and restart a node that contains saved index data for one of the unassigned shards. What do you see in the logs for that node or in the logs for the master node?
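A sketch of those two suggestions for the 1.x line; the replica change is a standard index setting, while changing the index.gateway log level through the cluster settings API is an assumption about that era's dynamic logger support (the same change can always be made in logging.yml followed by a node restart):
PUT /<your_index>/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
PUT /_cluster/settings
{
  "transient": {
    "logger.index.gateway": "TRACE"
  }
}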
