How to move shards around in a cluster - Elasticsearch

I have a 5-node cluster with 5 indices and 5 shards for each index. Currently the shards of each index are evenly distributed across the nodes. I need to move the shards belonging to 2 different indices from a specific node to a different node in the same cluster.

You can use the cluster reroute API. A sample command looks like this:
curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands" : [
    {
      "move" : {
        "index" : "test", "shard" : 0,
        "from_node" : "node1", "to_node" : "node2"
      }
    }
  ]
}'
This moves shard 0 of index test from node1 to node2.
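Since the question involves shards from two different indices, note that a single reroute request can carry multiple commands, so both moves can be batched into one call. A sketch, assuming hypothetical index names index1 and index2 and the same node names as above:

curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands" : [
    { "move" : { "index" : "index1", "shard" : 0, "from_node" : "node1", "to_node" : "node2" } },
    { "move" : { "index" : "index2", "shard" : 0, "from_node" : "node1", "to_node" : "node2" } }
  ]
}'

Keep in mind that after processing reroute commands, Elasticsearch may rebalance shards again to even out the distribution, unless rebalancing is disabled.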

Related

ElasticSearch (cURL REST API request) not giving expected results

I use this command, which should match all documents:
curl -XGET 'localhost:9200/users/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} }
}
'
I get this response:
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
But I'm 99.9% sure I have documents on that index. If I am right, why isn't it showing the matches? If I am wrong, how can I confirm this?
You should be able to determine what's happening if you know (a) where all your documents are being stored and (b) what the server thinks the 'users' index actually is.
For the first question, you can hit the _cat/indices endpoint to see how many documents you have in each index (the "docs.count" column):
$ curl -XGET 'http://localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open query 1 0 0 0 159b 159b
green open some_index 1 0 54 0 24.7kb 24.7kb
green open autocomplete 1 0 0 0 159b 159b
green open test_index 2 0 10065 4824 7.9mb 7.9mb
For the second question, check the aliases defined on your server. It's possible that "users" has been defined as an alias to an index that doesn't have any documents, or that a filtered alias has been defined on that index with a filter that excludes all of your documents (many aliases have date-related filters that exclude everything outside a very specific date range). To check for the presence of aliases you can use
$ curl -XGET 'http://localhost:9200/_aliases?pretty=true'
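For example, a filtered alias that silently hides documents might look like this in the _aliases output (a hypothetical sketch; the index name and date field are made up):

{
  "users_v2" : {
    "aliases" : {
      "users" : {
        "filter" : {
          "range" : { "created_at" : { "gte" : "now-1d" } }
        }
      }
    }
  }
}

If "users" turns out to be an alias like this, query the underlying index directly (users_v2 here) to confirm your documents exist.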

Unable to move Elasticsearch shards

I have an Elasticsearch cluster with two nodes and eight shards. I am in the situation where all primaries are on one node and all replicas are on the other.
Running the command:
http://xx.xx.xx.1:9200/_cat/shards
returns this result:
myindex 2 r STARTED 16584778 1.4gb xx.xx.xx.2 node2
myindex 2 p STARTED 16584778 1.4gb xx.xx.xx.1 node1
myindex 1 r STARTED 16592755 1.4gb xx.xx.xx.2 node2
myindex 1 p STARTED 16592755 1.4gb xx.xx.xx.1 node1
myindex 3 r STARTED 16592009 1.4gb xx.xx.xx.2 node2
myindex 3 p STARTED 16592033 1.4gb xx.xx.xx.1 node1
myindex 0 r STARTED 16610776 1.3gb xx.xx.xx.2 node2
myindex 0 p STARTED 16610776 1.3gb xx.xx.xx.1 node1
I am trying to swap around certain shards by posting this command:
http://xx.xx.xx.1:9200/_cluster/reroute?explain
with this body:
{
  "commands" : [
    {
      "move" : {
        "index" : "myindex",
        "shard" : 1,
        "from_node" : "node1",
        "to_node" : "node2"
      }
    },
    {
      "allocate_replica" : {
        "index" : "myindex",
        "shard" : 1,
        "node" : "node1"
      }
    }
  ]
}
It doesn't work, and the only "NO" I get in the list of decisions in the explanations is:
{
  "decider": "same_shard",
  "decision": "NO",
  "explanation": "the shard cannot be allocated on the same node id [xxxxxxxxxxxxxxxxxxxxxx] on which it already exists"
},
It's not fully clear to me if this is the actual error, but there is no other negative feedback. How can I resolve this and move my shard?
This is expected. The same_shard decider prevents two copies of the same shard (the primary and its replica) from being allocated to the same node. Your move command tries to put the primary of shard 1 onto node2, which already holds that shard's replica, so the move is rejected.
Beyond that, why would you do this? Primaries and replicas do the same job, so swapping them usually solves nothing. What problem are you trying to solve?
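Independently of whether the swap makes sense, you can test any reroute command without touching the cluster by combining the dry_run and explain flags; this returns the full list of decider verdicts. A sketch using the move command from the question:

curl -XPOST 'http://xx.xx.xx.1:9200/_cluster/reroute?dry_run&explain' -H 'Content-Type: application/json' -d '{
  "commands" : [
    { "move" : { "index" : "myindex", "shard" : 1, "from_node" : "node1", "to_node" : "node2" } }
  ]
}'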

Elasticsearch doesn't allow allocating unassigned shards

I have an ES cluster of 2 nodes. After I restarted the nodes, the cluster status is yellow because some of the shards are unassigned. I've googled it, and the common solution is to reroute the unassigned shards. Unfortunately, it doesn't work for me.
curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "infra",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 34,
  "active_shards" : 68,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 68.68686868686868
}
curl localhost:9200/_cluster/settings?pretty
{
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}
curl localhost:9200/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open logstash-log-2016.05.13 5 2 88314 0 300.5mb 150.2mb
yellow open logstash-log-2016.05.12 5 2 254450 0 833.9mb 416.9mb
yellow open .kibana 1 2 3 0 47.8kb 25.2kb
green open .marvel-es-data-1 1 1 3 0 8.7kb 4.3kb
yellow open logstash-log-2016.05.11 5 2 313095 0 709.1mb 354.6mb
yellow open logstash-log-2016.05.10 5 2 613744 0 1gb 520.2mb
green open .marvel-es-1-2016.05.18 1 1 88720 495 89.9mb 45mb
green open .marvel-es-1-2016.05.17 1 1 69430 492 59.4mb 29.7mb
yellow open logstash-log-2016.05.17 5 2 188924 0 518.2mb 259mb
yellow open logstash-log-2016.05.18 5 2 226775 0 683.7mb 366.1mb
Rerouting
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-log-2016.05.13",
        "shard": 3,
        "node": "elasticsearch-mon-1",
        "allow_primary": true
      }
    }
  ]
}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_argument_exception",
      "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
  },
  "status" : 400
}
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-log-2016.05.13",
        "shard": 3,
        "node": "elasticsearch-mon-2",
        "allow_primary": true
      }
    }
  ]
}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_argument_exception",
      "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
  },
  "status" : 400
}
So it fails and doesn't make any change. The shards are still unassigned.
Thank you.
Added
curl localhost:9200/_cat/shards
logstash-log-2016.05.13 2 p STARTED 17706 31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 2 r STARTED 17706 31.5mb 172.45.0.56 elasticsearch-mon-1
logstash-log-2016.05.13 2 r UNASSIGNED
logstash-log-2016.05.13 4 p STARTED 17698 31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 4 r STARTED 17698 31.4mb 172.45.0.56 elasticsearch-mon-1
logstash-log-2016.05.13 4 r UNASSIGNED
For all the indices that are yellow, you have configured 2 replicas:
health status index pri rep
yellow open logstash-log-2016.05.13 5 2
yellow open logstash-log-2016.05.12 5 2
yellow open .kibana 1 2
yellow open logstash-log-2016.05.11 5 2
yellow open logstash-log-2016.05.10 5 2
yellow open logstash-log-2016.05.17 5 2
yellow open logstash-log-2016.05.18 5 2
Two replicas on a two-node cluster is impossible: each copy of a shard must live on a different node, so one primary plus two replicas needs at least three nodes. You need a third node for all the replicas to be assigned.
Or, decrease the number of replicas:
PUT /logstash-log-*,.kibana/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
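If you are working with curl rather than Sense, the same settings change can be applied like this (a sketch, assuming the node answers on localhost:9200 as in the question):

curl -XPUT 'localhost:9200/logstash-log-*,.kibana/_settings' -d '{
  "index": {
    "number_of_replicas": 1
  }
}'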
I had the same problem with version 5.1.2.
I tried the option below and it worked.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'
After this, the shards were allocated automatically.
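You can confirm the result by re-running the health check from the question; unassigned_shards should drop to 0:

curl localhost:9200/_cluster/health?pretty=true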

How to delete unassigned shards in Elasticsearch?

I have only one node on one computer, and the index has 5 shards without replicas. Here is some output describing my Elasticsearch node (healthy indexes are omitted from the following list):
GET /_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
red open datas 5 0 344999414 0 43.9gb 43.9gb
GET _cat/shards
datas 4 p STARTED 114991132 14.6gb 127.0.0.1 Eric the Red
datas 3 p STARTED 114995287 14.6gb 127.0.0.1 Eric the Red
datas 2 p STARTED 115012995 14.6gb 127.0.0.1 Eric the Red
datas 1 p UNASSIGNED
datas 0 p UNASSIGNED
GET /_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
14 65.9gb 710gb 202.8gb 912.8gb 77 127.0.0.1 127.0.0.1 Eric the Red
3 UNASSIGNED
Although deleting individual shards doesn't seem to be supported, as mentioned in the comments above, reducing the number of replicas to zero for the indexes with UNASSIGNED shards might do the job, at least for single-node clusters.
PUT /{my_index}/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
You can try deleting unassigned shards the following way (I'm not sure it works for data indices, but it works for Marvel indices):
1) Install the elasticsearch-head plugin. Refer to the Elasticsearch Head Plugin installation instructions.
2) Open the head plugin URL in your browser. From there you can easily see which shards are unassigned, along with related info for each shard, e.g.:
{
  "state": "UNASSIGNED",
  "primary": true,
  "node": null,
  "relocating_node": null,
  "shard": 0,
  "index": ".marvel-es-2016.05.18",
  "version": 0,
  "unassigned_info": {
    "reason": "DANGLING_INDEX_IMPORTED",
    "at": "2016-05-25T05:59:50.678Z"
  }
}
From here you can copy the index name, i.e. .marvel-es-2016.05.18.
3) Now you can run this query in Sense:
DELETE .marvel-es-2016.05.18
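To verify that the unassigned shards are gone afterwards, you can list the shards again and filter for the UNASSIGNED state (a quick sketch):

curl -XGET 'localhost:9200/_cat/shards' | grep UNASSIGNED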
Hope this helps!

How to remove nodes from an ES cluster

Wowee... how does one remove nodes from ES?
I had 4 nodes and wanted to remove three.
On the node I wanted to keep I ran the below:
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.204"}}';echo
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.205"}}';echo
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.206"}}';echo
Now my cluster health looks like this:
curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 2,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 8
}
How do I fix this? What is the proper method for removing nodes?
Thank god this was in dev...
Thanks
Your decommission with cluster.routing.allocation.exclude._ip was the right approach!
The problem is that you didn't give the shards time to move to the remaining nodes after each exclusion.
You can replay the same commands, one by one, but pay attention to the migrating shards with a monitoring plugin such as ElasticHQ or Kopf.
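Note also that each transient settings update replaces the previous value of cluster.routing.allocation.exclude._ip, so running the three commands above in sequence leaves only the last IP excluded. An alternative is to exclude all three nodes in one call with a comma-separated list (a sketch using the same IPs), then wait for relocation to finish:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "172.31.6.204,172.31.6.205,172.31.6.206"
  }
}'

Watch relocating_shards in _cluster/health drop back to 0 before shutting the excluded nodes down.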
