We have an ES cluster with 2 nodes. When we delete an index, not all of its folders on the filesystem are deleted, which causes problems when restarting one server.
The deleted indices then get redistributed in some strange state, and the cluster health is no longer green.
Example: we delete an index named someIndex, and after deletion the filesystem looks like this:
Node1
ElasticSearch\data\clustername\nodes\0\indices\
ElasticSearch\data\clustername\nodes\1\indices\
Node2
ElasticSearch\data\clustername\nodes\0\indices\
ElasticSearch\data\clustername\nodes\1\indices\someIndex (<-- still present)
Anyone know what's causing this?
ES-version: 0.90.5
There are two node directories on each of your machines' filesystems (these are nodes\0 and nodes\1).
When you start Elasticsearch, you start up a node (in ES lingo). Your machine can host multiple nodes, which happens if you start Elasticsearch multiple times. The default setting for the HTTP port is the range 9200-9300, meaning ES looks for a free port in that range and binds its node to it (the same is true for the transport module with 9300-9400).
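As a quick check (the ports below assume the defaults), you can see whether more than one node is running on a machine by querying the first couple of ports in that range; each response includes the node's name, so two different names mean two processes:
curl localhost:9200   # first node
curl localhost:9201   # a second node, if one is running, usually binds the next free port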
So if you start an ES process while another is still running, i.e. still bound to a port, you start a second node and ES will create a new directory for it. Maybe this happened when you issued a restart and ES couldn't shut down in time before the new node started up.
But now you have a third node in your cluster and ES will assign shards to it. Then you do a cluster restart or something similar and start one node on each of your machines. ES cannot find the shards that were assigned to the third node, because it isn't running, and it will show you a red or yellow state, depending on which shards live on that third node. If you delete your index, you won't delete the data from this missing node.
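You can confirm this from the cluster health output, which reports the node count and any unassigned shards (host and port below are placeholders):
curl -X GET "localhost:9200/_cluster/health?pretty"
If number_of_nodes is lower than you expect, or unassigned_shards stays non-zero after a restart, some shard copies are sitting on a node that is no longer part of the cluster.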
If you don't care about the data, you can just shut down ES and delete these directories, or start two ES nodes on each of your machines and then delete the index again.
Then you could change the port settings to one specific port; that would prevent a second process from starting up, since it won't be able to bind to a free port.
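For example, something like this in elasticsearch.yml pins a node to fixed ports (a sketch using the setting names of the 0.90.x line; adjust to your setup):
# elasticsearch.yml: bind to fixed ports so a second process cannot fall back to another free port
http.port: 9200
transport.tcp.port: 9300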
Related
I am running a three-node Elasticsearch (ELK) cluster. All nodes have the same roles, e.g. data, master, etc. The disk on node 3 that holds the data folder became corrupt and that data is probably unrecoverable. The other nodes are running normally and one of them has assumed the master role instead.
Will the cluster work normally if I replace the disk and make the empty directory available to Elasticsearch again, or am I risking crashing the whole cluster?
EDIT: As this is not explicitly mentioned in the answer: yes, if you add your node back with an empty data folder, the cluster will continue normally as if you had added a new node, but you have to deal with the missing data. In my case I lost the data, as I do not have replicas.
Let me try to explain that in a simple way.
Your data got corrupted on node-3, so if you add that node again it will not have the older data, i.e. the shards stored on node-3 will remain unavailable to the cluster.
Did you have replica shards configured for the indexes?
What is the current status (yellow/red) of the cluster with node-3 removed?
If a primary shard isn't available, the master node promotes one of the active replicas to become the new primary. If there are currently no active replicas, the status of the cluster will remain red.
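To see which situation you are in, the cluster health and the shard listing are usually enough (host and port are placeholders):
curl -X GET "localhost:9200/_cluster/health?pretty"
curl -X GET "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node"
Shards shown as UNASSIGNED with no remaining copy on any live node are the ones that will be lost if node-3 comes back with an empty data folder.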
I have a 3-node Elasticsearch cluster. If more than one node goes down I can easily check them manually, but if the number of nodes in the cluster grows, checking them manually becomes difficult. So, how can I get all the nodes (specifically the names of the nodes) of the cluster, even if they are down?
To get the live/healthy nodes I hit the API endpoint:
curl -X GET "hostname/ip:port/_cat/nodes?v&pretty"
Is there any endpoint I can use to get the total number of nodes and the unhealthy/down nodes in an Elasticsearch cluster?
I was trying to list all the nodes using discovery.seed_hosts from the elasticsearch.yml config file, but I don't know how to do it, or whether that is the right approach at all.
I don't think there is any API to know about offline nodes. If your entire cluster is down, or a single node is down, Elasticsearch doesn't provide a way to check that node's health. You need to rely on an external script, program, or monitoring tool that pings all your nodes and prints their status.
You can write a custom script which calls the API below; it returns all the nodes which are currently part of the cluster. Once you have the response, you can extract the IP or hostname of each node, and any node missing from the response can be considered down (a sketch is shown after the API call).
GET _cat/nodes?format=json&filter_path=ip,name
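As a rough sketch (the expected node names and the endpoint are assumptions for illustration), such a script could look like this:
#!/bin/bash
# Compare the node names the cluster currently reports against a known list of expected nodes.
EXPECTED_NODES="node-1 node-2 node-3"
LIVE_NODES=$(curl -s "localhost:9200/_cat/nodes?h=name")

for node in $EXPECTED_NODES; do
  if ! echo "$LIVE_NODES" | grep -qw "$node"; then
    echo "$node looks down"
  fi
done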
Another option is to enable cluster monitoring, which will give you the status of the entire cluster, but again it will only show information about running nodes.
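On recent versions of the default distribution, collection can be switched on via the cluster settings API (a sketch; the exact mechanism depends on your version and whether you collect with X-Pack monitoring or Metricbeat):
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}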
Please check this answer for how Kibana shows offline nodes in Cluster Monitoring.
I have a master/data Elasticsearch node. It has now reached 90% capacity and I need to provision additional space to continue adding more data.
I have created a new server with 700 GB of disk space, installed ES & Kibana, and now wish for this second server to provide additional space to / work with the master node.
My problem:
As it says on the ES website:
When you add more nodes to a cluster, it automatically allocates replica shards.
My issue is that I do not wish to replicate the data from the master node, but instead just provide additional space using this second server which can then be queried by the master node.
My question:
What is the best way to achieve this? Is adding a node the incorrect thing to do here?
Using index-level shard allocation filtering, you can constrain a given index (or set of indexes) to stay on a given node (or set of nodes).
Simply run this:
PUT orders,orders_1,orders_2,orders_3,orders_4,orders_5/_settings
{
  "index.routing.allocation.require._name": "your-first-node-name"
}
Note that you can also use ._ip or ._host instead of ._name if you prefer.
Then you can add a new node and let it join the cluster; nothing will rebalance, and all your current shards will stay on your current node.
And if you need to create a new index on the second node and want to make sure that it stays on that node, you can specify the same setting at index creation time:
PUT new_orders
{
  "settings": {
    "index.routing.allocation.require._name": "your-second-node-name"
  }
}
The index called new_orders will be created on the second node and stay there.
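You can verify where each shard ended up with the _cat API (the index names match the examples above):
GET _cat/shards/orders*,new_orders?v&h=index,shard,prirep,state,node
Every row should list the node you required in the allocation filter.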
Our Elasticsearch cluster has two data directories. We recently restarted all the nodes in the cluster. After the successful restart, we observed increased disk usage on a few nodes. When we examined the folders inside the data directories, we found orphaned shards.
For example, an orphaned shard "15" exists at data_dir0/cluster_name/nodes/0/indices/index_name/15, while one of the replicas of the same shard "15" exists on the same node inside the other data directory, at data_dir1/cluster_name/nodes/0/indices/index_name/15. The shard "15" from data_dir1 is also included in the cluster metadata, so we assume that the shard "15" from data_dir0 is an orphaned shard and should have been deleted by Elasticsearch. But Elasticsearch hasn't deleted it yet, even 6 days after the last restart.
We found this topic https://discuss.elastic.co/t/old-shards-on-re-joining-nodes-useful/182661 relating to our issue, but it did not help, as Elasticsearch did not take care of the orphaned shard. We also raised the question on the Elastic forum, but we are not getting quick replies, so I am asking here since Stack Overflow has a larger community.
This also happened to our cluster; we run Elasticsearch 6.1.3. One specific node had 88% of its disk used, and it seems there were some shard leftovers from a previous relocation on our production index.
To fix this I stopped Elasticsearch on the node (make sure you have plenty of disk space on your other data nodes) and let Elasticsearch relocate the shards. Once it was done and rebalanced, I deleted the leftover index folder and started Elasticsearch again; this went quite painlessly.
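To check how much headroom the remaining data nodes have before you stop one, the allocation cat API gives a quick per-node overview (host and port are placeholders):
curl -X GET "localhost:9200/_cat/allocation?v"
It lists the shard count, disk.used and disk.avail per node, so you can see whether the other nodes can absorb the relocated shards.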
What version of Elasticsearch are you running?
Is your cluster green? If so, those shard files should be deleted by Elasticsearch during initialization. But if that shard had unallocated replicas at the time the node rejoined the cluster, Elasticsearch won't remove pre-existing shard files from disk.
You can manually delete the directory if you don't need the shard. Or you can try restarting Elasticsearch on the node and let it delete the files for you.
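If you do decide to delete manually, it is worth confirming first which copies the cluster actually tracks, for example (the index name is a placeholder):
GET _cat/shards/index_name?v&h=index,shard,prirep,state,node
A shard directory on a node that this output does not list as hosting that copy is the likely leftover.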
We also got help on the Elastic forum here: https://discuss.elastic.co/t/old-shards-not-deleted-upon-relocation/71161/6
Restarting the node did not help, and we do not want to manually delete the folders, so we are going to replace the affected nodes one by one.
@chani It would be great if you could provide an official link for the manual-delete suggestion.
I work with a single Elasticsearch node in development.
When I make about 10,000 PUT requests (about 64 MB of data) in a short time period, another node is created automatically.
Why does this happen?
After this happens, the cluster stays in a yellow state until I shut down this extra node manually.
I'm using Logstash to put the data into Elasticsearch.