I have created an ES cluster with ES running on three different machines. To make them form a cluster, I added the unicast config below to the elasticsearch.yml config file on all 3 machines.
discovery.zen.ping.unicast.hosts: [IP1, IP2, IP3]
When I run
curl -XGET localhost:9200/_cluster/health?pretty
I get number_of_nodes as 3. Now I want to remove one node from the cluster, so without changing any config file I ran the command below:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "IP_address_of_Node3"
  }
}'
After this I ran the health command again to get the cluster details. The expected output is number_of_nodes = 2, but the result still shows number_of_nodes = 3 even after excluding the node. It would be of great help if someone could tell me what is wrong with the steps I followed for removing the node.
Thanks
The cluster.routing.allocation.exclude._ip setting that you sent to your cluster will not actually remove the node from your cluster, but rather prepare it for removal: it instructs Elasticsearch to move all shards held on this node away from it and store them on other nodes instead.
This allows you to then remove the node once it is empty, without causing under-replication of the shards stored on this node.
To actually remove the node from your cluster you would need to remove it from your list of unicast hosts. Of course, you can also just shut it down and leave it in the list until you next need to restart your cluster anyway; as far as I am aware, that won't hurt anything.
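As a sanity check before shutting the node down, you can confirm that it no longer holds any shards. A minimal sketch, assuming the excluded node's IP is the same placeholder used in the question:

curl -s 'localhost:9200/_cat/shards' | grep 'IP_address_of_Node3'
# no output means the node holds no shards anymore and is safe to shut down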
I can't find where to get the number of currently open shards.
I want to set up monitoring to avoid cases like this:
this cluster currently has [999]/[1000] maximum shards open
I can get the maximum limit, max_shards_per_node:
$ curl -X GET "${ELK_HOST}/_cluster/settings?include_defaults=true&flat_settings=true&pretty" 2>/dev/null | grep cluster.max_shards_per_node
"cluster.max_shards_per_node" : "1000",
$
But I can't find out how to get the number of currently open shards (999).
A very simple way to get this information is to call the _cat/shards API and count the number of lines using the wc shell command:
curl -s -XGET ${ELK_HOST}/_cat/shards | wc -l
That will yield a single number that represents the number of shards in your cluster.
Another option is to retrieve the shard list in JSON format, pipe the result into jq and then grab whatever you want, e.g. below I'm counting all STARTED shards:
curl -s -XGET ${ELK_HOST}/_cat/shards?format=json | jq ".[].state" | grep "STARTED" | wc -l
Yet another option is to query the _cluster/stats API:
curl -s -XGET ${ELK_HOST}/_cluster/stats?filter_path=indices.shards.total
That will return a JSON response with the shard count:
{
  "indices" : {
    "shards" : {
      "total" : 302
    }
  }
}
To my knowledge there is no API in ES that exposes this single number directly. To be sure of that, let's look at the source code.
The error is thrown from IndicesService.java
To see how currentOpenShards is computed, we can then go to Metadata.java.
As you can see, the code iterates over the index metadata retrieved from the cluster state, pretty much like running the following command and counting the number of shards, but only for indices with "state" : "open":
GET _cluster/state?filter_path=metadata.indices.*.settings.index.number_of*,metadata.indices.*.state
From that evidence, we can pretty much be sure that the single number you're looking for is nowhere to be found, but needs to be computed by one of the methods I showed above. You're free to open a feature request if needed.
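For completeness, the same number the source code computes can be derived from the cluster state output above. A rough sketch, assuming jq is available (it sums primaries times replicas + 1 over all open indices):

curl -s "${ELK_HOST}/_cluster/state?filter_path=metadata.indices.*.settings.index.number_of*,metadata.indices.*.state" \
  | jq '[.metadata.indices[]
         | select(.state == "open")
         | (.settings.index.number_of_shards | tonumber) * ((.settings.index.number_of_replicas | tonumber) + 1)]
        | add'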
The problem: it seems that your Elasticsearch cluster is hitting the limit on the number of shards per node.
Solution:
Verify the number of shards per node in your configuration and increase it using the Elasticsearch API.
For getting the number of shards, use the _cluster/stats API:
curl -s -XGET 'localhost:9200/_cluster/stats?filter_path=indices.shards.total'
From the Elastic docs:
The Cluster Stats API allows to retrieve statistics from a cluster wide perspective. The API returns basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
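If you also want to see how those shards are distributed across the nodes (the limit is defined per node), a quick way is the _cat/allocation API; the command below assumes the same local cluster as above:

curl -s -XGET 'localhost:9200/_cat/allocation?v'
# the shards column shows how many shards each node currently holds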
For updating the number of shards allowed (increasing/decreasing), use the _cluster/settings API:
For example:
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d '{ "persistent" : {"cluster.max_shards_per_node" : 5000}}'
From the Elastic docs:
With specifications in the request body, this API call can update cluster settings. Updates to settings can be persistent, meaning they apply across restarts, or transient, where they don't survive a full cluster restart.
You can reset persistent or transient settings by assigning a null value. If a transient setting is reset, the first one of these values that is defined is applied: the persistent setting, then the setting in the configuration file, then the default value.
The order of precedence for cluster settings is: transient cluster settings, then persistent cluster settings, then settings in the elasticsearch.yml configuration file.
It's best to set all cluster-wide settings with the settings API and use the elasticsearch.yml file only for local configurations. This way you can be sure that the setting is the same on all nodes. If, on the other hand, you define different settings on different nodes by accident using the configuration file, it is very difficult to notice these discrepancies.
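For example, if you later want to go back to the default limit, the quoted docs say you can reset the setting by assigning a null value:

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d '{ "persistent" : {"cluster.max_shards_per_node" : null}}'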
Another quick one-liner is to parse _cat/indices with awk, multiplying primaries by (replicas + 1) for each open index:
curl -s '127.1:9200/_cat/indices' | awk '{ if ($2 == "open") C += $5 * ($6 + 1) } END { print C }'
This works:
GET /_stats?level=shards&filter_path=_shards.total
Reference:
https://stackoverflow.com/a/38108448/4271117
We have an Elasticsearch cluster that dynamically re-sizes with respect to the percolate message count in a RabbitMQ queue.
We have a single shard and ~18K queries in our index, and we use auto_expand_replicas: "0-all" in the index settings to copy the single shard to every node when a node becomes available.
But during heavy load and cluster re-sizing, some requests produce unexpected empty matches.
We send ~1M percolate requests daily and we were losing ~1K pieces of content. We added a cluster status check to our code: if the cluster status is not green before and after a percolate request, we wait for green status and re-send the percolate request. This way we were able to reduce the lost content count from ~1K to ~100. We do not see this problem in a cluster with a fixed node size.
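For reference, the status check we added is essentially the following (a simplified sketch; the timeout value is arbitrary):

curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=60s'
# blocks until the cluster reaches green status or the timeout expires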
Unfortunately any loss is unacceptable in our scenario, and we don't want to give up auto scaling, so we need to find a workaround for this problem.
To reproduce the problem, you can use the following bash script:
https://gist.github.com/ekesken/de41598a1e7e54c6f33c
This script will download and install Elasticsearch 1.5.2 in your current directory, create a 10-node cluster on your local machine, create the index and percolation queries, and start testing.
Normally we expect the following output for a single percolate request:
curl -XGET 'localhost:9200/my-index/my-type/_percolate' -d '{
"doc" : {
"message" : "A new bonsai tree in the office"
}
}'
{"took":95,"_shards":{"total":1,"successful":1,"failed":0},"total":1,"matches":[{"_index":"my-index","_id":"tree"}]}
After running the script, if you see that all shards on all nodes are started in the http://localhost:9200/_cat/shards response and the test script is still running, that means you couldn't reproduce the problem; try increasing the node count, which is 10 by default:
./repeat_percolation_loss.sh 15 test-only
When you reproduce the problem, the script will exit with the following output:
{"took":209,"_shards":{"total":1,"successful":1,"failed":0},"total":0,"matches":[]}
Problem repeated! Congratulations.
You can shut down all servers and clean up all directories and files created by the script with this command:
./repeat_percolation_loss.sh 15 clean
Change the node count above to the latest node count you tried.
Linking back to this question:
Why not enable virtual node in an Hadoop node?
I'm running a mixed 3-node cluster with 2 Cassandra nodes and 1 analytics node, and I disabled virtual nodes by generating 3 tokens with the utility provided by DataStax Enterprise.
But when I run 'nodetool status', I still see 256 tokens on each node, and when a MapReduce job is created, it creates 257 mappers and takes a very long time to execute a query over a small amount of data.
So my specific questions are:
Is the virtual node setting still not disabled? How can I verify whether it's disabled?
If it is disabled, why are 257 mappers still created for each job? Is there a different configuration for that?
Thanks much for any help!!
1) It's not disabled. You can tell because it still says 256 tokens in nodetool status.
To disable vnodes you need to make sure that you change the num_tokens variable in cassandra.yaml:
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
# num_tokens: 256 << Make sure this line is commented out
# initial_token allows you to specify tokens manually. While you can use it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes to legacy clusters
# that do not have vnodes enabled.
initial_token: << Your generated token goes here
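Once a node is actually running with that configuration (this assumes the node was bootstrapped with it; an already-bootstrapped node keeps its stored tokens), you can verify it with nodetool:

nodetool status
# the Tokens column should now show 1 for each node instead of 256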
Is there a way to tell a node to remove all of its data (spreading it back out among the other nodes) so that I can shut it down and not deal with a rebalance/re-replication once it's down?
If I have 2 copies of each shard and I drop one node, some of the shards now have only 1 live copy and have to be re-replicated. I'd prefer not to drop down to 1 live copy for any period of time if I can avoid it.
After posting to the ES mailing list, I was informed that the proper answer lies in the _cluster/settings API, specifically the cluster.routing.allocation.exclude._ip option.
From the docs: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
  }
}'
The IP address can be a comma separated list. To 'un-exclude', just remove the IP from the list (or set the list to "" to remove all excluded IPs).
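For example, clearing the exclusion entirely once the node has been removed looks like this (same API as above):

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : ""
  }
}'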
Hopefully this helps others looking for the answer to this same question.
I have a Cassandra cluster of 12 nodes on EC2.
Because of some failure we lost one of the nodes completely; I mean that machine does not exist anymore.
So I created a new EC2 instance with a different IP and the same token as the dead node, and since I also had a backup of the data on that node, it works fine.
But the problem is that the dead node's IP still appears as an unreachable node in describe cluster.
As that node (EC2 instance) does not exist anymore, I cannot use nodetool decommission or nodetool disablegossip.
How can I get rid of this unreachable node?
I had the same problem and I resolved it with removenode, which does not require you to find and change the node token.
First, get the node UUID:
nodetool status
DN 192.168.56.201 ? 256 13.1% 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac r1
DN 192.168.56.202 ? 256 12.6% e11d219a-0b65-461e-babc-6485343568f8 r1
UN 192.168.2.91 156.04 KB 256 12.4% e1a33ed4-d613-47a6-8b3b-325650a2bbd4 RAC1
UN 192.168.2.92 156.22 KB 256 13.6% 3a4a086c-36a6-4d69-8b61-864ff37d03c9 RAC1
UN 192.168.2.93 149.6 KB 256 11.3% 20decc72-8d0a-4c3b-8804-cc8bc98fa9e8 RAC1
As you can see, the .201 and .202 nodes are dead and on a different network. These had been changed to .91 and .92 without proper decommissioning and recommissioning. I was working on setting up the network and made a few mistakes...
Second, remove the .201 with the following command:
nodetool removenode 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac
(in older versions this was nodetool removetoken ...)
But just like nodetool removetoken ..., it blocks... (see the comment by samarth on psanford's answer). However, it has a side effect: it puts that UUID in a list of nodes to be removed. So next we can force the removal with:
nodetool removenode force
(in older versions this was nodetool removetoken force)
Now the node accepts the command and tells me that it is removing the invalid entry:
RemovalStatus: Removing token (-9136982325337481102). Waiting for replication confirmation from [/192.168.2.91,/192.168.2.92].
We also see that it communicates with the two other nodes that are up, and thus it takes a little time, but it is still quite fast.
After that, nodetool status no longer shows the .201 node. I repeated the procedure for .202 and now the status is clean.
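Note that you can check on a pending removal at any time with the status subcommand, which prints the same RemovalStatus line shown above:

nodetool removenode status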
After that you may also want to run a cleanup, as mentioned in psanford's answer:
nodetool cleanup
The cleanup should be run on all nodes, one by one, to make sure the change is fully taken into account.
Normally when replacing a node you want to set the new node's token to (failed node's token) - 1 and let it bootstrap. As of 1.0 there is a flag you can specify on startup to replace a dead node: "cassandra.replace_token=<token>".
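For reference, that flag is passed as a JVM system property when starting the replacement node. A minimal sketch (the token value is a placeholder):

cassandra -Dcassandra.replace_token=<failed_node_token>
# or add -Dcassandra.replace_token=<failed_node_token> to JVM_OPTS in cassandra-env.sh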
Since you have already added the new node with the same token there's an extra step:
Move the new node's token to (failed node's token) - 1 using nodetool move
Run nodetool removetoken <failed node's token> from one of the up nodes
Run nodetool cleanup on each node
These are basically the pre-1.0 instructions for replacing a dead node, with the additional token move.