Elasticsearch: Inconsistent number of shards in stats & cluster APIs

I uploaded the data to my single node cluster and named the index as 'gequest'.
When I GET from http://localhost:9200/_cluster/stats?human&pretty, I get:
"cluster_name" : "elasticsearch",
"status" : "yellow",
"indices" : {
"count" : 1,
"shards" : {
"total" : 5,
"primaries" : 5,
"replication" : 0.0,
"index" : {
"shards" : {
"min" : 5,
"max" : 5,
"avg" : 5.0
},
"primaries" : {
"min" : 5,
"max" : 5,
"avg" : 5.0
},
"replication" : {
"min" : 0.0,
"max" : 0.0,
"avg" : 0.0
}
}
}
When I do GET on http://localhost:9200/_stats?pretty=true
"_shards" : {
"total" : 10,
"successful" : 5,
"failed" : 0
}
Why is the total number of shards inconsistent between the two reports? Why does the stats API report 10 total shards, and how can I track the other 5?

From the results it is likely that you have a single Elasticsearch node running and created an index with the default settings (which give 5 primary shards and one replica of each). Since there is only one node, Elasticsearch is unable to assign the replica shards anywhere: it will never place a primary and its replica on the same node.
The _cluster/stats API gives information about the cluster, including its current state. Your result shows the cluster status as "yellow", indicating that all primary shards are allocated but not all replicas have been. So it counts only the allocated shards: 5.
The _stats API gives information about the indices in the cluster, including how many shards each index is configured to have. Since your index wants a total of 10 shards (5 primaries plus 5 replicas), the stats report "total" : 10 and "successful" : 5. The 5 unassigned replicas are simply not counted as successful; note they are not counted as failed either, which is why "failed" is 0.
Use http://localhost:9200/_cat/shards to see the overall shard status.
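The arithmetic behind the two numbers can be sketched like this (a toy model; the function name and signature are mine, not an Elasticsearch API):

```python
def expected_shard_counts(primaries: int, replicas_per_primary: int, nodes: int) -> dict:
    """Reproduce the shard totals the two APIs report for a single index."""
    total = primaries * (1 + replicas_per_primary)   # what _stats reports as "total"
    # A replica can never share a node with its primary, so each primary can
    # have at most (nodes - 1) of its replicas assigned.
    assignable_replicas = primaries * min(replicas_per_primary, nodes - 1)
    allocated = primaries + assignable_replicas      # what _cluster/stats counts
    return {"total": total, "allocated": allocated}

# Default index settings (5 primaries, 1 replica) on a 1-node cluster:
print(expected_shard_counts(5, 1, 1))  # {'total': 10, 'allocated': 5}
```

Adding a second node would let all 10 shards allocate, turning the cluster green.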

Related

Interpreting the output of elasticsearch GET query

The output of the curl -X GET "localhost:9200/_cat/shards?v" is as follows:
index      shard prirep state      docs store  ip        node
test_index 1     p      STARTED    0    283b   127.0.0.1 Deepaks-MacBook-Pro-2.local
test_index 1     r      UNASSIGNED 0
test_index 1     r      UNASSIGNED 0
test_index 0     p      STARTED    1    12.5kb 127.0.0.1 Deepaks-MacBook-Pro-2.local
test_index 0     r      UNASSIGNED 0
test_index 0     r      UNASSIGNED 0
And the output of the query curl -X GET "localhost:9200/test_index/_search?size=1000" | json_pp is as follows:
{
  "_shards" : {
    "failed" : 0,
    "skipped" : 0,
    "successful" : 2,
    "total" : 2
  },
  "hits" : {
    "hits" : [
      {
        "_id" : "101",
        "_index" : "test_index",
        "_score" : 1,
        "_source" : {
          "in_stock" : -4,
          "name" : "pizza maker",
          "prize" : 10
        },
        "_type" : "_doc"
      }
    ],
    "max_score" : 1,
    "total" : {
      "relation" : "eq",
      "value" : 1
    }
  },
  "timed_out" : false,
  "took" : 2
}
MY QUESTION: As you can see, only primary shard 0 of test_index holds the data (from the output of the first query), so why does the successful key inside _shards have the value 2?
Also, there is only 1 document, so why is the value of the total key inside _shards 2?
test_index has two primary shards and four unassigned replica shards (probably because you have a single node). Since a primary shard is a partition of your index, a document can only be stored in a single primary shard, in your case primary shard 0.
total: 2 means that the search was run over the two primary shards of your index (i.e. 100% of the data) and successful: 2 means that all primary shards responded. So you know you can trust the response to have searched over all your test_index data.
There's nothing wrong here.
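Which primary shard a document lands on is decided by a routing formula; a simplified sketch (Elasticsearch actually uses a murmur3 hash of the `_routing` value, which defaults to `_id`; the helper below is only a stand-in):

```python
def hash_routing(value: str) -> int:
    # Stand-in for Elasticsearch's murmur3 hash of the _routing value.
    return sum(value.encode())

def route_to_shard(doc_id: str, num_primaries: int) -> int:
    """Pick the one primary shard a document is stored in (simplified)."""
    return hash_routing(doc_id) % num_primaries

# With 2 primaries, every document maps to exactly one of shard 0 or 1,
# which is why the single document lives in only one primary shard,
# even though a search still fans out to both.
print(route_to_shard("101", 2) in (0, 1))  # True
```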

Elasticsearch showing yellow health for logstash index

The health column shows yellow for the logstash indices; even after deleting old ones, they are recreated with yellow health. I have a cluster for this setup and have checked the shards using the request below.
GET _cluster/health :
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 12,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 88.46153846153845
}
Any idea how this can be turned green?
Also, the daily indices are not being created due to this issue.
The yellow health indicates that your primary shards are allocated but some replicas are not. This may be because your cluster has too few nodes for the configured number of replicas: Elasticsearch never allocates a primary and its replica on the same node, as a copy on the same node would serve no purpose. When you have multiple nodes, it assigns the primary and the replicas to different nodes by default.
As seen from the data you provided, you have 22 active shards on only 2 nodes. The 3 unassigned shards are what makes the cluster health yellow.
In order to solve this, you can do one of two things.
If you are using Elasticsearch for testing, create your indices with no replicas; a single node is then enough for green health.
If you are in production and want replicas, the number of nodes must be at least the number of replicas plus one. For instance, with 1 primary and 2 replicas per shard, start at least 3 nodes.
It is easiest to set this up when you first initialize the cluster and create the indices.
The risk of yellow health is that if a primary shard is lost while its replica is unassigned, you lose both the service for that shard and its data.
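The node-count rule above can be written out as a tiny helper (a toy calculation; the names are mine):

```python
def min_nodes_for_green(replicas_per_primary: int) -> int:
    """Fewest nodes so a primary and all its replicas land on distinct nodes."""
    return replicas_per_primary + 1

# 1 primary + 2 replicas per shard -> at least 3 nodes for green health
print(min_nodes_for_green(2))  # 3
# No replicas -> a single node is enough
print(min_nodes_for_green(0))  # 1
```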

Elasticsearch timout doesn't work when do searching

Elasticsearch version (bin/elasticsearch --version):5.2.2
JVM version (java -version): 1.8.0_121
OS version (uname -a if on a Unix-like system): opensuse
Do search with " curl -XGET 'localhost:9200/_search?pretty&timeout=1ms' "
The part of response is :
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 208,
    "successful" : 208,
    "failed" : 0
  },
  "hits" : {
    "total" : 104429,
    "max_score" : 1.0,
    "hits" :
...
The took time is 5 ms and the timeout setting is 1 ms. Why is "timed_out" false rather than true?
Thanks
The timeout is applied per searched shard (208 of them in your case), while the took is for the entire query. On a per-shard level you are within the limit. The documentation has additional information on when you will hit timed_out, and more caveats.
Try with a more expensive query (leading wildcard, fuzziness,...) — I guess then you should hit the (shard) limit.
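The per-shard vs. whole-query distinction can be simulated (a toy model of the behavior described above; the timing values are made up to mirror the response):

```python
def search_result(per_shard_ms, timeout_ms, coordination_ms=0.0):
    """timed_out is judged per shard; took covers the whole request."""
    timed_out = any(t >= timeout_ms for t in per_shard_ms)
    # Shards are searched in parallel; the coordinating node adds overhead on top,
    # so took can exceed the timeout even when no shard hit it.
    took = max(per_shard_ms) + coordination_ms
    return {"took": took, "timed_out": timed_out}

# Every shard stays under the 1 ms timeout, yet the overall took is about 5 ms:
print(search_result([0.5, 0.8, 0.9], timeout_ms=1.0, coordination_ms=4.1))
```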

Elasticsearch doesn't allow to allocate unassigned shard

I have an ES cluster of 2 nodes. After I restarted the nodes, the cluster status is yellow because some of the shards are unassigned. I've tried googling, and the common solution is to reroute the unassigned shards. Unfortunately, it doesn't work for me.
curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "infra",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 34,
  "active_shards" : 68,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 68.68686868686868
}
curl localhost:9200/_cluster/settings?pretty
{
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}
curl localhost:9200/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open logstash-log-2016.05.13 5 2 88314 0 300.5mb 150.2mb
yellow open logstash-log-2016.05.12 5 2 254450 0 833.9mb 416.9mb
yellow open .kibana 1 2 3 0 47.8kb 25.2kb
green open .marvel-es-data-1 1 1 3 0 8.7kb 4.3kb
yellow open logstash-log-2016.05.11 5 2 313095 0 709.1mb 354.6mb
yellow open logstash-log-2016.05.10 5 2 613744 0 1gb 520.2mb
green open .marvel-es-1-2016.05.18 1 1 88720 495 89.9mb 45mb
green open .marvel-es-1-2016.05.17 1 1 69430 492 59.4mb 29.7mb
yellow open logstash-log-2016.05.17 5 2 188924 0 518.2mb 259mb
yellow open logstash-log-2016.05.18 5 2 226775 0 683.7mb 366.1mb
Rerouting
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-log-2016.05.13",
        "shard": 3,
        "node": "elasticsearch-mon-1",
        "allow_primary": true
      }
    }
  ]
}'
{
"error" : {
"root_cause" : [ {
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
} ],
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
},
"status" : 400
}
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-log-2016.05.13",
        "shard": 3,
        "node": "elasticsearch-mon-2",
        "allow_primary": true
      }
    }
  ]
}'
{
"error" : {
"root_cause" : [ {
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
} ],
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
},
"status" : 400
}
So it fails and doesn't make any change. The shards are still in an unassigned state.
Thank you.
Added
curl localhost:9200/_cat/shards
logstash-log-2016.05.13 2 p STARTED 17706 31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 2 r STARTED 17706 31.5mb 172.45.0.56 elasticsearch-mon-1
logstash-log-2016.05.13 2 r UNASSIGNED
logstash-log-2016.05.13 4 p STARTED 17698 31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 4 r STARTED 17698 31.4mb 172.45.0.56 elasticsearch-mon-1
logstash-log-2016.05.13 4 r UNASSIGNED
For all the indices that are yellow you have configured 2 replicas:
health status index pri rep
yellow open logstash-log-2016.05.13 5 2
yellow open logstash-log-2016.05.12 5 2
yellow open .kibana 1 2
yellow open logstash-log-2016.05.11 5 2
yellow open logstash-log-2016.05.10 5 2
yellow open logstash-log-2016.05.17 5 2
yellow open logstash-log-2016.05.18 5 2
Two replicas on a two-node cluster is impossible: each shard would need its primary and both replicas on three distinct nodes. You need a third node for all the replicas to be assigned.
Or, decrease the number of replicas:
PUT /logstash-log-*,.kibana/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
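With two data nodes, each shard can have at most one replica assigned, so the unassigned_shards value in the health output can be reconstructed from the _cat/indices listing above; a quick sanity-check sketch (the function name is mine):

```python
def unassigned_for(pri: int, rep: int, nodes: int) -> int:
    """Replica shards that cannot be placed on a node distinct from the others."""
    return pri * max(0, rep - (nodes - 1))

# (pri, rep) pairs for the yellow indices in the _cat/indices output:
# six logstash-log-* indices (5 primaries, 2 replicas) and .kibana (1, 2)
yellow = [(5, 2)] * 6 + [(1, 2)]
total_unassigned = sum(unassigned_for(p, r, nodes=2) for p, r in yellow)
print(total_unassigned)  # 31, matching "unassigned_shards" : 31 in the health output
```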
I had the same problem with version 5.1.2.
I tried the option below and it worked.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable" : "all"
  }
}'
After this, it automatically allocated the shards.

Elasticsearch cluster health: yellow (131 of 262) unassigned shards

I'm very new to Elasticsearch and am trying to use it to analyze data from the Suricata IPS. The Head plugin shows me this: yellow (131 of 262) unassigned shards
I am also getting this:
$ curl -XGET http://127.0.0.1:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch_brew",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 131,
  "active_shards" : 131,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 131,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
How do I get rid of those unassigned shards? Also, Kibana shows me this from time to time:
Error: Bad Gateway
at respond (https://www.server.kibana/index.js?_b=:85279:15)
at checkRespForFailure (https://www.server.kibana/index.js?_b=:85247:7)
at https://www.server.kibana/index.js?_b=:83885:7
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at wrappedErrback (https://www.server.kibana/index.js?_b=:20902:78)
at https://www.server.kibana/index.js?_b=:21035:76
at Scope.$eval (https://www.server.kibana/index.js?_b=:22022:28)
at Scope.$digest (https://www.server.kibana/index.js?_b=:21834:31)
at Scope.$apply (https://www.server.kibana/index.js?_b=:22126:24)
I don't know if these problems are connected to each other... Could anyone please help me get this working? Thank you very much!
A cluster with only one node and indices that have one replica will always be yellow.
Yellow is not a bad thing; the cluster works perfectly fine. The downside is that no copies of your shards are active.
You can get a green cluster if you set the number of replicas to 0, or if you add a second node to the cluster.
But, as I said, there is no problem with running a yellow cluster.
Setting the number of replicas to 0, cluster-wide (all indices):
curl -XPUT "http://localhost:9200/_settings" -d'
{
  "number_of_replicas" : 0
}'
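The "yellow (131 of 262)" figure from the head plugin follows directly from this; a quick sketch of the arithmetic (a toy check, not an API call):

```python
def head_plugin_summary(active_primaries: int, replicas_per_primary: int):
    """Return (unassigned, total) for a one-node cluster, as the head plugin shows."""
    total = active_primaries * (1 + replicas_per_primary)
    unassigned = total - active_primaries  # replicas can't be placed on the lone node
    return unassigned, total

# 131 active primaries with the default of 1 replica each:
print(head_plugin_summary(131, 1))  # (131, 262) — the "131 of 262" in the question
```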
