I'm very new to Elasticsearch and am trying to use it to analyze data from a Suricata IPS. The Head plugin shows me this: yellow (131 of 262) unassigned shards
also getting this:
$ curl -XGET http://127.0.0.1:9200/_cluster/health?pretty
{
"cluster_name" : "elasticsearch_brew",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 131,
"active_shards" : 131,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 131,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
How do I get rid of those unassigned shards? Kibana also shows me this error from time to time:
Error: Bad Gateway
at respond (https://www.server.kibana/index.js?_b=:85279:15)
at checkRespForFailure (https://www.server.kibana/index.js?_b=:85247:7)
at https://www.server.kibana/index.js?_b=:83885:7
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at https://www.server.kibana/index.js?_b=:21035:76
at Scope.$eval (https://www.server.kibana/index.js?_b=:22022:28)
at Scope.$digest (https://www.server.kibana/index.js?_b=:21834:31)
at Scope.$apply (https://www.server.kibana/index.js?_b=:22126:24)
I don't know if these problems are connected to each other... Could anyone please help me get this working? Thank you very much!
A cluster with only one node and indices that have one replica will always be yellow.
Yellow is not a bad thing; the cluster works perfectly fine. The downside is that it doesn't have the copies of the shards active.
You can have a green cluster if you set the number of replicas to 0, or if you add a second node to the cluster.
But, as I said, there is no problem with having a yellow cluster.
Setting number of replicas to 0, cluster wide (all indices):
curl -XPUT "http://localhost:9200/_settings" -d '
{
  "number_of_replicas" : 0
}'
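You can then re-check the health to confirm the change took effect, e.g.:
curl -XGET "http://localhost:9200/_cluster/health?pretty"
With zero replicas there are no more copies to assign, so unassigned_shards should drop to 0 and status should read "green".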
Related
The health column is showing yellow for the logstash indices; even after deleting the old ones, they are recreated with yellow health. I have a cluster set up for this and have checked the shards using the request below.
GET _cluster/health:
{
"cluster_name" : "elasticsearch",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 12,
"active_shards" : 22,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 3,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 88.46153846153845
}
Any idea how this can be turned green?
Also, the indices are not getting created daily because of this issue.
The yellow health indicates that your primary shards are all allocated but some replicas are not. This may be because your cluster does not have enough nodes to hold all the requested replica copies: Elasticsearch never allocates the primary and a replica of the same shard on the same node, as that would serve no purpose. When you have multiple nodes and multiple shards, Elasticsearch by default spreads the primaries and the replicas across different nodes.
As seen from the data you provided, you have 22 active shards on only 2 nodes. The 3 unassigned shards are what is keeping the cluster health yellow.
In order to solve this, you can do one of two things.
If you are using Elasticsearch for testing, you can create your indices with one shard and no replicas (a sketch follows below). In that case a single node is enough.
If you are in production and want multiple copies of each shard (primary + replicas), then the number of nodes should be at least the number of copies per shard. For instance, if you have 1 primary and 2 replicas, run the cluster with at least 3 nodes.
Please remember to decide this when you are setting up your Elasticsearch cluster and creating your indices.
The harm in yellow health is that if a primary shard goes bad, you will lose both the service and the data, since there is no replica to take over.
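For the testing option, a minimal sketch of creating such an index (the name test-index here is just a placeholder, not from the question):
curl -XPUT "http://localhost:9200/test-index" -d '
{
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas" : 0
  }
}'
A one-node cluster that only holds indices like this will report green, because there are no replica shards left to assign.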
Problem: I've started five Elasticsearch nodes, but only 66.84% of the data is available in Kibana. When I check the cluster health with localhost:9200/_cluster/health?pretty=true I get the following information:
{
"cluster_name" : "A2A",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 5,
"number_of_data_nodes" : 4,
"active_primary_shards" : 612,
"active_shards" : 613,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 304,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 66.8484187568157
}
Also, all my indices are red, except for the kibana index.
A small excerpt:
red open logstash-2015.11.08 5 0 47256 668 50.5mb 50.5mb
red open logstash-2015.11.09 5 0 46540 1205 50.4mb 50.4mb
red open logstash-2015.11.06 5 0 65645 579 69.2mb 69.2mb
red open logstash-2015.11.07 5 0 62733 674 66.4mb 66.4mb
green open .kibana 1 1 2 0 19.7kb 9.8kb
red open logstash-2015.11.11 5 0 49254 1272 53mb 53mb
red open logstash-2015.11.12 5 0 50885 466 53.6mb 53.6mb
red open logstash-2015.11.10 5 0 49174 1288 52.6mb 52.6mb
red open logstash-2016.04.12 5 0 92508 585 104.8mb 104.8mb
red open logstash-2016.04.13 5 0 95120 279 107.2mb 107.2mb
I've tried to fix the problem with curl -XPUT 'localhost:9200/_settings' -d '{"index.routing.allocation.disable_allocation": false}' but it doesn't work!
So, does anyone have ideas on how to assign my shards?
If you need any other info, please ask and I will try to provide the data.
Have you seen this answer? https://stackoverflow.com/a/23816954/1834331
You could also try restarting elasticsearch first: service elasticsearch restart.
Otherwise, just try reallocating the shards manually (as your indices have 5 shards, run the command once per shard, with the shard flag set to 0, 1, 2, 3, and 4):
curl -XPOST 'http://localhost:9200/_cluster/reroute?pretty' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "logstash-2015.11.08",
      "shard" : 0,
      "node" : "SOME_NODE_HERE",
      "allow_primary" : true
    }
  } ]
}'
You can check the nodes with unassigned shards using: curl -s localhost:9200/_cat/shards | grep UNASS
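If many shards are unassigned, a rough bash sketch that loops over all of them (SOME_NODE_HERE is a placeholder you must replace; the first two columns of _cat/shards are the index name and the shard number):
#!/bin/bash
# Reroute every unassigned shard to the given node.
NODE="SOME_NODE_HERE"
curl -s localhost:9200/_cat/shards | grep UNASSIGNED | while read index shard rest; do
  curl -XPOST "localhost:9200/_cluster/reroute?pretty" -d "{
    \"commands\" : [ {
      \"allocate\" : {
        \"index\" : \"$index\",
        \"shard\" : $shard,
        \"node\" : \"$NODE\",
        \"allow_primary\" : true
      }
    } ]
  }"
done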
If shards are stuck unassigned, they can be manually allocated. For example:
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
"commands": [{
"allocate": {
"index": "logstash-2015.11.07",
"shard": 5,
"node": "Frederick Slade",
"allow_primary": 1
}
}]
}'
See the Cluster Reroute documentation, including the warnings on the use of allow_primary: forcing an unassigned primary to allocate creates it empty, so any data the original shard held can be lost.
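If you are unsure which node names to put in the "node" field, you can list them via the _cat API, e.g.:
curl -s 'localhost:9200/_cat/nodes?v'
The name column is what the reroute command expects (human-readable names like "Frederick Slade" come from older Elasticsearch versions, which named nodes after Marvel characters by default).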
I uploaded the data to my single-node cluster and named the index 'gequest'.
When I GET from http://localhost:9200/_cluster/stats?human&pretty, I get:
"cluster_name" : "elasticsearch",
"status" : "yellow",
"indices" : {
"count" : 1,
"shards" : {
"total" : 5,
"primaries" : 5,
"replication" : 0.0,
"index" : {
"shards" : {
"min" : 5,
"max" : 5,
"avg" : 5.0
},
"primaries" : {
"min" : 5,
"max" : 5,
"avg" : 5.0
},
"replication" : {
"min" : 0.0,
"max" : 0.0,
"avg" : 0.0
}
}
}
When I do GET on http://localhost:9200/_stats?pretty=true
"_shards" : {
"total" : 10,
"successful" : 5,
"failed" : 0
}
How come the total number of shards is not consistent between the two reports? Why does the _stats API report 10 total shards, and how can I track down the other 5?
From the results it is likely that you have a single Elasticsearch node running and created the index with default values (which gives 5 primary shards and one replica of each). Since there is only one node running, Elasticsearch is unable to assign the replica shards anywhere (Elasticsearch will never assign the primary and the replica of the same shard to a single node).
The _cluster/stats API gives information about the cluster, including its current state. From your result the cluster state is "yellow", indicating that all the primary shards are allocated but not all replicas have been allocated/initialized, so only the 5 allocated shards are counted.
The _stats API gives information about the indices in the cluster, including how many shards and replicas each index is configured with. Since your index needs a total of 10 shards (5 primaries and 5 replicas, as specified when you created the index), the stats report total 10 and successful 5; the remaining 5 replicas simply cannot be allocated to any node.
Use http://localhost:9200/_cat/shards to see the overall shard status
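For example, to look at just the index from the question, or to make a single-node cluster green by dropping its replicas (both commands use the index name 'gequest' from the post):
# List only this index's shards, with a header row
curl -s 'localhost:9200/_cat/shards/gequest?v'
# Optional: remove the replicas so one node can hold everything
curl -XPUT 'localhost:9200/gequest/_settings' -d '{ "number_of_replicas" : 0 }'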
Wowee... how does one remove nodes from ES?
I had 4 nodes, wanted to remove three.
On the node I wanted to keep I ran the below:
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.204"}}';echo
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.205"}}';echo
curl -XPUT localhost:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.exclude._ip" : "172.31.6.206"}}';echo
Now my cluster health looks like this:
curl localhost:9200/_cluster/health?pretty=true
{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 2,
"active_shards" : 2,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 8
How do I fix this? What is the proper method for removing nodes?
Thank god this was in dev...
Thanks
Your decommission with cluster.routing.allocation.exclude._ip was good!
The problem is that you didn't give the shards time to move to the remaining nodes after each exclusion.
You can replay the same commands, one by one, but this time pay attention to the migrating shards with a monitoring plugin such as ElasticHQ or Kopf.
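As an alternative sketch (the exclude._ip setting accepts a comma-separated list, and the 1.x health API supports wait_for_relocating_shards), you can exclude all three IPs in a single call and then block until relocation settles before shutting the nodes down:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "172.31.6.204,172.31.6.205,172.31.6.206"
  }
}'
# Wait until no shards are relocating any more
curl 'localhost:9200/_cluster/health?wait_for_relocating_shards=0&timeout=120s&pretty'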
I have an ELK stack with two Elasticsearch nodes running, and the cluster state turned red due to some unassigned shards which I can't get rid of. Looking up the unassigned shard, or rather the incomplete index, with:
# curl -s elastic01.local:9200/_cat/shards | grep "logstash-2014.09.29"
Shows:
logstash-2014.09.29 4 p STARTED 745489 481.3mb 10.165.98.107 Crimson and the Raven
logstash-2014.09.29 4 r STARTED 745489 481.3mb 10.165.98.106 Glenn Talbot
logstash-2014.09.29 0 p STARTED 781110 502.3mb 10.165.98.107 Crimson and the Raven
logstash-2014.09.29 0 r STARTED 781110 502.3mb 10.165.98.106 Glenn Talbot
logstash-2014.09.29 3 p INITIALIZING 10.165.98.107 Crimson and the Raven
logstash-2014.09.29 3 r UNASSIGNED
logstash-2014.09.29 1 p STARTED 762991 490.1mb 10.165.98.107 Crimson and the Raven
logstash-2014.09.29 1 r STARTED 762991 490.1mb 10.165.98.106 Glenn Talbot
logstash-2014.09.29 2 p STARTED 761811 491.3mb 10.165.98.107 Crimson and the Raven
logstash-2014.09.29 2 r STARTED 761811 491.3mb 10.165.98.106 Glenn Talbot
My attempt to assign the shard to the other node fails:
curl -XPOST -s 'http://elastic01.local:9200/_cluster/reroute?pretty=true' -d '{
"commands" : [ {
"allocate" : {
"index" : "logstash-2014.09.29",
"shard" : 3 ,
"node" : "Glenn Talbot",
"allow_primary" : 1
}
}
]
}'
It fails with:
NO(primary shard is not yet active)]
I can't really seem to find an API to push the shard states any further. How could I proceed here?
Just for a complete picture, that what the system health looks like:
{
"cluster_name" : "logstash_es",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 114,
"active_shards" : 228,
"relocating_shards" : 0,
"initializing_shards" : 1,
"unassigned_shards" : 1
}
Thank you for your time and help
I actually ran into this situation with Elasticsearch 1.5 just the other day. After initially getting the same error, I simply repeated the /_cluster/reroute request the next day for lack of other ideas, and it worked: it put the cluster back into a green state immediately.