After much trial and error, here is the pattern:
1 node: everything works perfectly.
2 nodes: the root and query endpoints return 200 and cluster health is green, but I cannot index a new document (no response), and search queries get no response either. As soon as I shut down one node, everything works again.
I have made sure that port 9300 is open in the firewall between the nodes. Am I missing some other important configuration? The cluster API reports 2 nodes, so communication should be fine. Is there some other factor that prevents new documents from being indexed when both nodes are running?
On a multi-node Elasticsearch cluster (3 nodes), to which node should we send a curl call to fetch results (by running a query)?
If we can use any node's IP, what is the best practice? For example, if I am using node 1's URL out of "node 1, node 2, and node 3" and node 1 goes down, I have to manually update the query URL to node 2 or node 3. Is there a way to have one centralized URL that handles this by itself?
Do I have to do that manually with Nginx or a load balancer, or is there something in Elasticsearch itself?
In ES, if you send a request to any node that is part of a valid ES cluster, it will route the request internally and return the result.
However, you shouldn't talk to Elasticsearch via a node IP directly, for obvious reasons, one of which you already mentioned. You can put a load balancer, nginx, or DNS in front of your Elasticsearch cluster.
If you are accessing it programmatically, you don't even need that: when creating the Elasticsearch client you can specify all the node IPs, so that even when some nodes are down your requests will not fail.
RestClientBuilder restClientBuilder = RestClient.builder(
        new HttpHost(esConfig.getHost(), esConfig.getPort()),
        new HttpHost(esConfig.getHost2(), esConfig.getPort2()));
As you can see, I created my Elasticsearch client (this works with Elasticsearch 8.5) with two Elasticsearch hosts.
I have a 3-node Elasticsearch cluster. If more than one node goes down, I can still check them manually. But if the number of nodes in the cluster grows, it will be difficult to check them manually. So how can I get all the nodes (specifically the node names) of the cluster, even the ones that are down?
To get the live/healthy nodes I hit this API endpoint:
curl -X GET "hostname/ip:port/_cat/nodes?v&pretty"
Is there any endpoint I can use to get the total number of nodes and the unhealthy/down nodes in an Elasticsearch cluster?
I was trying to list all the nodes using discovery.seed_hosts from the elasticsearch.yml config file, but I don't know how to do that, or whether it is even the right approach.
I don't think there is any API for finding out about offline nodes. Whether your entire cluster is down or only a single node is down, Elasticsearch doesn't provide a way to check that node's health; you need to rely on an external script, program, or monitoring tool that pings all your nodes and reports their status.
You can write a custom script that calls the API below, which returns all the nodes currently present in the cluster. Once you have the response, you can extract each node's IP or hostname; any of your known nodes that does not appear in the response can be considered down.
GET _cat/nodes?format=json&filter_path=ip,name
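For illustration, here is a minimal sketch of such a script (the expected node names and the host/port are assumptions; adjust them to your cluster):

#!/bin/sh
# Compare the nodes the cluster currently reports against the nodes we expect to exist.
EXPECTED="node-1 node-2 node-3"                            # assumed node names
LIVE=$(curl -s 'http://localhost:9200/_cat/nodes?h=name')  # names of the nodes currently in the cluster
for node in $EXPECTED; do
  echo "$LIVE" | grep -qw "$node" || echo "$node appears to be down"
done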
Another option is to enable cluster monitoring, which gives you the status of the entire cluster, but again it only shows information about running nodes.
Please check this answer for how Kibana shows offline nodes in cluster monitoring.
I started two Elasticsearch clusters with different names, but the second one doesn't show up in Marvel or when I query for health manually.
curl 'http://127.0.0.1:9200/_cat/health?v'
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1501062768 15:22:48 Cove_dev_cluster yellow 1 1 8 8 0 0 8 0 - 50.0%
But it's running on my screen.
I am assuming you are running both clusters (single-node clusters, I believe, in this case) on the same machine. In that case the nodes have a default port range setting of 9200-9300 and are configured to bind to the first available port in that range. More details are available in the Network Settings documentation.
So in your case the other cluster is most likely running on port 9201. If you check Marvel or query the health manually on port 9201, you should find the other cluster.
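For example, assuming the second instance did bind to port 9201 on the same machine:

curl 'http://127.0.0.1:9201/_cat/health?v'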
However, if you want the two nodes to participate in the same cluster, make sure the cluster name matches in the configuration of both Elasticsearch instances you have running.
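For example, in each node's elasticsearch.yml (the cluster name below is just a placeholder):

cluster.name: my_cluster
node.name: node-1    # give the second instance a different node.name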
Hope this helps.
I have a 2-node Elasticsearch cluster with 5 shards and 1 replica. Cluster health shows green, but suddenly it turns yellow, and I fix it by re-routing the shards.
I want to understand the root cause of the unassigned shards, because when the cluster goes into the yellow state I tried telnet between both nodes on ports 9300 and 9200 and connected successfully.
I also faced that problem earlier, and at that time I went through the Elasticsearch logs and fixed the issue based on what I found there.
So my suggestion is to go through the logs and check what the root cause of the issue is.
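Besides the logs, you can also ask the cluster itself why shards are unassigned. A sketch, assuming the nodes listen on localhost:9200 (note that the allocation explain API only exists from Elasticsearch 5.0 onwards, so check your version):

curl -XGET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'
curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty'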
Regards,
Kartheek Gummaluri
Elasticsearch 1.7.2 on CentOS
3-node cluster
This question is about how to manage the ES config via modifications to elasticsearch.yml plus a restart of the elasticsearch service (not via the API).
Out of the box, the config is:
index.number_of_replicas: 1
So on a 3-node cluster, any 2 nodes have the whole package.
If I want any 1 node to be complete, I would set:
index.number_of_replicas: 2
a) Correct?
b) Can I just walk up to an existing setup and make this change?
c) And can I just walk up and adjust it up to 2 and back down to 1 whenever I want? (Up to make each node a possible standalone, down to save disk space.)
The number of replicas can be changed at any point in time; you can increase or decrease it dynamically. There is a good example shown here.
Also, please note that you can't change the number of shards after index creation, but the number of replicas can be changed via the index settings API.
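For example, the dynamic change via the index settings API looks like this (the index name and host are assumptions; adjust them to your setup):

curl -XPUT 'http://localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{
  "index": { "number_of_replicas": 2 }
}'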
FWIW, another way to do this (which I have now proven out) is to update the YAML file (elasticsearch.yml). Change the element:
index.number_of_replicas: 2
Up or down, as desired, then restart the elasticsearch service:
service elasticsearch restart
The cluster will go yellow while the replicas are being created or moved, and then return to green.
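If you want to watch that transition from the command line (host and port assumed to be localhost:9200), you can poll the health endpoint until it reports green:

curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'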