Elasticsearch: setting up a cluster with two nodes

I've set two master-eligible nodes in my cluster, but only "elastic-master" shows up; the second node, "elastic-slave", is missing. How can I resolve this?
I've already followed the Elasticsearch documentation, but nothing changed!
My host file setting:
sudo nano /etc/hosts
192.168.143.30 elastic-master
192.168.143.23 elastic-slave
My config file:
# ---------------------------------- Cluster ------------------------------
cluster.name: elastic-a
# ------------------------------------ Node -------------------------------
node.name: elastic-master
node.master: true
node.data: true
# ---------------------------------- Network ------------------------------
network.host: 192.168.143.30
http.port: 9200
# --------------------------------- Discovery -----------------------------
discovery.seed_hosts: ["192.168.143.30","192.168.143.23"]
cluster.initial_master_nodes: ["elastic-slave", "elastic-master"]
My config file on second one:
# ---------------------------------- Cluster ------------------------------
cluster.name: elastic-a
# ------------------------------------ Node -------------------------------
node.name: elastic-slave
node.master: true
node.data: true
# ---------------------------------- Network ------------------------------
network.host: 192.168.143.23
http.port: 9200
# --------------------------------- Discovery -----------------------------
discovery.seed_hosts: ["192.168.143.30","192.168.143.23"]
cluster.initial_master_nodes: ["elastic-slave", "elastic-master"]

"First stopped the service (sudo service elasticsearch stop). Delete the data (sudo rm -rf /var/lib/elasticsearch/*) and restart the service (sudo service elasticsearch restart). Now work all !!"
Found the answer here:
Add nodes to make local cluster Elasticsearch [7.8]
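One gotcha worth spelling out for configs like the ones above: discovery.seed_hosts targets the transport port (9300 by default), not the HTTP port. A minimal sketch of the relevant settings, reusing the IPs from the question (the explicit :9300 suffixes are an assumption; they are the default anyway):

```yaml
# elasticsearch.yml on elastic-master
# (elastic-slave mirrors this with its own node.name and network.host)
cluster.name: elastic-a
node.name: elastic-master
network.host: 192.168.143.30
http.port: 9200            # REST API
transport.port: 9300       # node-to-node traffic; this is what seed_hosts must reach
discovery.seed_hosts: ["192.168.143.30:9300", "192.168.143.23:9300"]
cluster.initial_master_nodes: ["elastic-master", "elastic-slave"]  # consulted on first start only
```

Since cluster.initial_master_nodes is only read on the very first start, a node that has already bootstrapped its own one-node cluster keeps that state; that is why deleting the data directory and restarting (as in the accepted answer) fixes it.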

Related

Want to set up a two-node Elasticsearch cluster on the same machine

I am on Elasticsearch 7.10.1. For learning purposes I want to set up a cluster of two nodes on the same host, but I am unable to find a configuration that works for me.
I have made the same Elasticsearch installation in two different folders, node1 and node2.
node1 elasticsearch.yml file:
...
cluster.name: tktest_esclutser
node.name: tkesnode-1
http.port: 19200
transport.port: 19201
discovery.seed_hosts: ["localhost:29200"]
...
node2 elasticsearch.yml file:
...
cluster.name: tktest_esclutser
node.name: tkesnode-2
http.port: 29200
transport.port: 29201
discovery.seed_hosts: ["localhost:19200"]
...
For some reason the cluster is not being formed; the log file shows the master node could not be discovered.
The configuration below works perfectly fine for me; initially you may have to delete the folders inside the data folder of both nodes to start with a clean state.
Node 1 elasticsearch.yml
cluster.name: es_710
node.name: opster
http.port: 9900
cluster.initial_master_nodes: ["opster"]
Node 2 elasticsearch.yml
http.port: 9910
xpack.ml.enabled: false
cluster.name: es_710
cluster.initial_master_nodes: ["opster"]

Setting up ElasticSearch cluster on different VPS

I want to make a basic ElasticSearch cluster with two nodes.
I am using two VPS servers:
VPS1 has public IP address: 5.xxx.96.233
VPS2 has public IP address: 5.xxx.96.234
This is how the elasticsearch.yml file looks like (besides the default settings):
VPS1:
cluster.name: mx-cluster
node.name: mx-node-1
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["5.xxx.96.233", "5.xxx.96.234"]
VPS2:
cluster.name: mx-cluster
node.name: mx-node-2
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["5.xxx.96.233", "5.xxx.96.234"]
The ufw rules are set to allow to port 9300 from the other server.
VPS1:
9300 ALLOW 5.xxx.96.234
VPS2:
9300 ALLOW 5.xxx.96.233
Now an Elasticsearch instance is running on both of them, but they are unable to discover each other to form a cluster.
Both servers are new and I have only installed Elasticsearch on them.
I am not sure whether this is possible or whether this is the right approach; I wasn't able to find an answer online, so I'm posting this.
The two configs below solved the issue. I made only one master-eligible node, mx-node-1, which also acts as a data node, while the other node, mx-node-2, acts as a data node only.
Master and data node config (mx-node-1)
cluster.name: mx-cluster
node.name: mx-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["5.255.96.233"]
logger.org.elasticsearch.discovery: TRACE   # used this to debug the issue
Data node(mx-node-2) config
cluster.name: mx-cluster
node.name: mx-node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
node.master: false   # marks this as a data-only node
discovery.seed_hosts: ["5.255.96.233"]
logger.org.elasticsearch.discovery: TRACE
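One caveat with the master node's config above: on a completely fresh 7.x node, discovery.seed_hosts alone is not enough to bootstrap a brand-new cluster; the single master-eligible node also needs cluster.initial_master_nodes (the setting is ignored after the first successful start). A hedged sketch, assuming a clean data directory:

```yaml
# mx-node-1 (master-eligible + data), first start of a brand-new cluster
cluster.name: mx-cluster
node.name: mx-node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["5.255.96.233"]
cluster.initial_master_nodes: ["mx-node-1"]   # only consulted on first boot
```

If the answerer's node had already bootstrapped in an earlier run, the retained cluster state would explain why it worked without this line.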

Elasticsearch 7.4 Multi Clustering configuration

I am trying to make an Elasticsearch cluster with two nodes. Most of the documentation found via Google covers lower-version clustering, since it talks about discovery.zen.ping.unicast.hosts, which 7.4 no longer has.
Two nodes are AWS EC2 instances.
Each Elasticsearch service is running fine, but I don't think they are clustered (checked via the _cluster/health and _nodes APIs).
I made changes in /etc/hosts.
elasticsearch.yml for node-1 :
# ---------------------------------- Cluster -----------------------------------
cluster.name: dsm-001
# ------------------------------------ Node ------------------------------------
node.name: node-1
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
#network.host: 192.168.0.1
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["node-1","node-2"]
cluster.initial_master_nodes: ["node-1","node-2"]
elasticsearch.yaml for node-2 :
# ---------------------------------- Cluster -----------------------------------
cluster.name: dsm-001
# ------------------------------------ Node ------------------------------------
node.name: node-2
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
#network.host: 192.168.0.1
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["node-1","node-2"]
cluster.initial_master_nodes: ["node-1","node-2"]
While the cluster.initial_master_nodes property requires names that match actual Elasticsearch node names ("node-1" and "node-2"), the discovery.seed_hosts property expects host names or addresses (e.g. "server1", "192.168.1.12"), not Elasticsearch node names. This is what you need to fix first.
But as you run your nodes in AWS, you are expected to install the EC2 Discovery Plugin to help locate the seed addresses of your Elasticsearch nodes (see the Elasticsearch Reference Documentation: EC2 Discovery Plugin).
With Elasticsearch 7 the cluster coordination layer has been rewritten, making your cluster much more robust, but also making the first start-up of a node extremely important. The cluster.initial_master_nodes property is only used when starting up a node for the very first time. If you did so, and your node did not join the expected cluster (and most likely created its own cluster), you need to stop your node, delete the data directory (to clear the cluster state) and restart it.
Finally, I figured it out.
If Daniel's answer doesn't work for you, add the following.
# ---------------------------------- Network -----------------------------------
network.host: <your public DNS (IPv4)>   # e.g. ec2-1-1-1-1.ap-northeast-1.compute.amazonaws.com
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["ip", "ip"]       # like Daniel said
discovery.seed_providers: ec2
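Note that discovery.seed_providers: ec2 only works once the discovery-ec2 plugin is installed on every node; the fragment below is a sketch of the combined fix from both answers (the install path assumes the standard .deb/.rpm layout):

```yaml
# Prerequisite on every node (then restart the service):
#   sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install discovery-ec2
discovery.seed_providers: ec2
cluster.initial_master_nodes: ["node-1", "node-2"]  # fresh clusters only
```

Remember Daniel's point: if the nodes already started and formed separate one-node clusters, stop them and clear the data directory before retrying, or the corrected discovery settings will have no effect.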

Failed to send join request to master ElasticSearch on AWS EC2 owned cluster

I am trying to build a cluster of 3 EC2 instances (I do not want to use the ElasticSearch service of amazon) and after installing the software and configuring it in all three instances I encounter the problem that they do not communicate with each other.
I'm working with ES 5.5.1 on instances running Ubuntu 16.04.
All nodes are up and running.
All nodes have an AWS Security Group with permissions for all traffic between nodes (all ports).
The internal firewall on every machine whitelists every node.
Master
cluster.name: excelle
node.name: ${HOSTNAME}
node.master: true
path.data: /srv/data
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 172.31.MAS.TER
discovery.zen.ping.unicast.hosts: ["172.31.MAS.TER", "172.31.NODE.TWO", "172.31.NODE.THREE"]
Node two
cluster.name: excelle
node.name: ${HOSTNAME}
node.master: false
path.data: /srv/data
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 172.31.NODE.TWO
discovery.zen.ping.unicast.hosts: ["172.31.MAS.TER", "172.31.NODE.TWO", "172.31.NODE.THREE"]
Node 3
cluster.name: excelle
node.name: ${HOSTNAME}
node.master: false
path.data: /srv/data
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 172.31.NODE.THREE
discovery.zen.ping.unicast.hosts: ["172.31.MAS.TER", "172.31.NODE.TWO", "172.31.NODE.THREE"]
But in the logs, on node 3 for example:
[2017-08-15T11:01:41,241][INFO ][o.e.d.z.ZenDiscovery ] [es03] failed to send join request to master [{esmaster}{scquEEaETDKMKLHzZvEHZQ}{NdLtMUXtT7WXnv1a4uHWqQ}{172.31.44.107}{172.31.44.107:9300}], reason [RemoteTransportException[[esmaster][172.31.44.107:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[es03][172.31.18.76:9300] connect_timeout[30s]]; nested: IOException[connection timed out: 172.31.18.76/172.31.18.76:9300]; ]
I tested the connection from node 3 to the master; no problem (so it's not a basic network issue):
telnet 172.31.MAS.TER 9300
Trying 172.31.MAS.TER...
Connected to 172.31.MAS.TER.
Escape character is '^]'.
What's wrong? Any ideas?
I found an answer to this posted on the Elasticsearch forums.
The gem was from manst:
"The solution for this error: delete the contents of the data folder (/var/lib/elasticsearch/nodes/0) and restart both servers."
I deleted the nodes folder from each of my SpotInst instances and rebooted. My 3 distributed master-only ES nodes all came online, and my 8 data-only nodes connected automatically without any issue.

Elastic data node with shield

It stopped working after I set up Shield.
I added a user to Elasticsearch with the command:
shield/esusers useradd es_admin -r admin
This is my master node config
cluster.name: vision
node.name: "node_master"
node.master: true
node.data: false
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.5"]
path.logs: /var/elastic/log
path.data: /var/elastic/data
This is my data node config
cluster.name: vision
node.name: "node_data"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.5"]
path.logs: /var/elastic/log
path.data: /var/elastic/data
How can I connect data node to master node?
There is no extra work needed to join data and master nodes into a cluster; discovery treats both node types the same.
Your hosts setting mentions only one host:
discovery.zen.ping.unicast.hosts: ["host1:port", "host2:port"]
Each node will keep pinging the hosts listed above until both are initialized. Adding the local host to the array does no harm, since the ping won't fail, and it helps with automated deployment of Elasticsearch in a multi-node ecosystem.
Since you are using Shield, if you have enabled SSL for node communication, make sure you also specify the path to the SSL keystore files.
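For concreteness, a hedged sketch of the Shield-era (1.x/2.x) transport-TLS settings; the setting names and paths below are assumptions drawn from memory of the Shield documentation of that era and should be verified against your exact version:

```yaml
# elasticsearch.yml on every node (hypothetical keystore path and password)
shield.transport.ssl: true
shield.ssl.keystore.path: /etc/elasticsearch/node.jks
shield.ssl.keystore.password: changeme
```

If TLS is enabled on one node but not the other, the transport handshake fails silently from the cluster's point of view, which matches the "data node can't join" symptom.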
