How do I connect two nodes in Elasticsearch?

I have two nodes (192.168.72.129 and 192.168.72.130).
These are the settings in config/elasticsearch.yml:
======node-1======
cluster.name: cluster-es
node.name: node-1
network.host: 0.0.0.0
node.master: true
node.data: true
http.port: 9200
http.cors.allow-origin: "*"
http.cors.enabled: true
transport.port: 9300
http.max_content_length: 200mb
cluster.initial_master_nodes: ["node-1"]
discovery.seed_hosts: ["192.168.72.129","192.168.72.130"]
gateway.recover_after_nodes: 2
network.tcp.keep_alive: true
network.tcp.no_delay: true
transport.tcp.compress: true
cluster.routing.allocation.cluster_concurrent_rebalance: 16
cluster.routing.allocation.node_concurrent_recoveries: 16
cluster.routing.allocation.node_initial_primaries_recoveries: 16
======node-2======
cluster.name: cluster-es
node.name: node-2
network.host: 0.0.0.0
node.master: true
node.data: true
http.port: 9200
http.cors.allow-origin: "*"
http.cors.enabled: true
transport.port: 9300
http.max_content_length: 200mb
cluster.initial_master_nodes: ["node-1"]
discovery.seed_hosts: ["192.168.72.129","192.168.72.130"]
gateway.recover_after_nodes: 2
network.tcp.keep_alive: true
network.tcp.no_delay: true
transport.tcp.compress: true
cluster.routing.allocation.cluster_concurrent_rebalance: 16
cluster.routing.allocation.node_concurrent_recoveries: 16
cluster.routing.allocation.node_initial_primaries_recoveries: 16
But when I curl http://192.168.72.129:9200/_cat/nodes,
only one node shows up. How can I solve it?
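Two things worth checking with a config like this (not stated in the original question, so treat them as suggestions): that TCP 9300 is reachable in both directions between the hosts, and that each node has not already bootstrapped its own one-node cluster (compare the cluster_uuid returned by GET / on each host; if they differ, the data directory of the node that should re-join must be wiped before it can join the other cluster). The discovery block itself should be identical on both nodes, for example:

```yaml
# identical on both nodes; only node.name differs elsewhere
cluster.name: cluster-es
discovery.seed_hosts: ["192.168.72.129", "192.168.72.130"]
# listing every master-eligible node here is the usual recommendation
cluster.initial_master_nodes: ["node-1", "node-2"]
```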

Related

How to create an Elasticsearch cluster with Docker Compose across different servers?

I have 2 servers and created an Elasticsearch node on each of them. The docker-compose.yml files look like this:
es0:
  image: elasticsearch:7.6.0
  container_name: es0
  environment:
    - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  ports:
    - 9200:9200
    - 9300:9300
  volumes:
    - "/mnt/docker/es0/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    - "/mnt/docker/es0/data:/usr/share/elasticsearch/data"
    - "/mnt/docker/es0/plugins:/usr/share/elasticsearch/plugins"
    - "/mnt/docker/es0/config/cert:/usr/share/elasticsearch/config/cert"
es1:
  image: elasticsearch:7.6.0
  container_name: es1
  environment:
    - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  ports:
    - 9200:9200
    - 9300:9300
  volumes:
    - "/mnt/docker/es1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    - "/mnt/docker/es1/data:/usr/share/elasticsearch/data"
    - "/mnt/docker/es1/plugins:/usr/share/elasticsearch/plugins"
    - "/mnt/docker/es1/config/cert:/usr/share/elasticsearch/config/cert"
and I configured the two elasticsearch.yml files like this:
cluster.name: hs-cluster
node.name: es-00
node.master: true
node.data: true
http.host: 0.0.0.0
http.port: 9200
transport.host: 0.0.0.0
transport.tcp.port: 9300
#network.host: 0.0.0.0
network.bind_host: ["192.168.0.2", "101.xx.xx.136"]
network.publish_host: 192.168.0.2
gateway.recover_after_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.initial_master_nodes: ["es-00", "es-01"]
discovery.seed_hosts: [ "192.168.0.2:9300", "192.168.0.3:9300" ]
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
and for the second node:
cluster.name: hs-cluster
node.name: es-01
node.master: true
node.data: true
http.host: 0.0.0.0
http.port: 9200
transport.host: 0.0.0.0
transport.tcp.port: 9300
#network.host: 0.0.0.0
network.bind_host: ["192.168.0.3", "101.xx.xx.137"]
network.publish_host: 192.168.0.3
gateway.recover_after_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.initial_master_nodes: ["es-00", "es-01"]
discovery.seed_hosts: [ "192.168.0.2:9300", "192.168.0.3:9300" ]
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
When I run the instances, they all start successfully. But when I call _cluster/state?pretty, both give this error message:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
That means they can't find each other.
I also tried setting network.host: 0.0.0.0,
but the result was the same.
Who knows the reason for this master_not_discovered_exception? How can I resolve it?
By the way, I can run the cluster on a single server with Docker Compose, but across different servers it fails. I also ran telnet xxx 9300 on each server, and they all connected.
What is your default Docker Engine network configuration?
Sometimes multiple servers use the same default bridge subnet, so Docker doesn't route traffic from one server to another.
To resolve this, modify the daemon.json file on each host as follows:
node1
{
  "bip": "10.40.18.1/28"
}
node2
{
  "bip": "10.40.18.65/28"
}

Search Guard: connect to a remote Elasticsearch cluster using SSL

I used this guide for SSL certificate creation.
I'm trying to connect to a remote Elasticsearch cluster. Both clusters use SSL certificates signed by the same CA; is that possible?
Local cluster:
cluster.name: client1
searchguard.enterprise_modules_enabled: false
node.name: ekl.test.com
node.master: true
node.data: true
node.ingest: true
network.host: 0.0.0.0
#http.host: 0.0.0.0
network.publish_host: ["ekl1.test1.com","ekl.test.com"]
http.port: 9200
discovery.zen.ping.unicast.hosts: ["ekl.test.com", "ekl2.test2.com"]
discovery.zen.minimum_master_nodes: 1
xpack.security.enabled: false
searchguard.ssl.transport.pemcert_filepath: '/etc/elasticsearch/ssl/node1.pem'
searchguard.ssl.transport.pemkey_filepath: 'ssl/node1.key'
searchguard.ssl.transport.pemtrustedcas_filepath: '/etc/elasticsearch/ssl/root-ca.pem'
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: '/etc/elasticsearch/ssl/node1_http.pem'
searchguard.ssl.http.pemkey_filepath: '/etc/elasticsearch/ssl/node1_http.key'
searchguard.ssl.http.pemtrustedcas_filepath: '/etc/elasticsearch/ssl/root-ca.pem'
searchguard.nodes_dn:
- CN=ekl.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=ekl1.test1.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=admin.test.com,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
Remote cluster:
cluster.name: client2
searchguard.enterprise_modules_enabled: false
node.name: ekl1.test.com
node.master: false
node.data: true
node.ingest: false
network.host: 0.0.0.0
#http.host: 0.0.0.0
network.publish_host: ["ekl.test.com","ekl1.test1.com"]
http.port: 9200
discovery.zen.ping.unicast.hosts: ["ekl6.test1.com", "ekl1.test1.com"]
discovery.zen.minimum_master_nodes: 1
xpack.security.enabled: false
searchguard.ssl.transport.pemcert_filepath: '/etc/elasticsearch/ssl/node2.pem'
searchguard.ssl.transport.pemkey_filepath: 'ssl/node2.key'
searchguard.ssl.transport.pemtrustedcas_filepath: '/etc/elasticsearch/ssl/root-ca.pem'
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: '/etc/elasticsearch/ssl/node2_http.pem'
searchguard.ssl.http.pemkey_filepath: '/etc/elasticsearch/ssl/node2_http.key'
searchguard.ssl.http.pemtrustedcas_filepath: '/etc/elasticsearch/ssl/root-ca.pem'
searchguard.nodes_dn:
- CN=ekl.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=ekl1.test1.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=admin.test.com,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
The certificates are self-signed.
I can curl the remote cluster from the local one:
curl -vX GET "https://admin:Pass@ekl1.test1.com:9200"
I added the remote domain in the Kibana GUI as ekl1.test1.com:9200
and am getting this error in the ES log:
[RemoteClusterConnection] [4P1fXFO] fetching nodes from external cluster [client2] failed
org.elasticsearch.transport.ConnectTransportException: [][172.31.37.123:9200] handshake_timeout[30s]
Solved by specifying port 9300 instead of 9200 in the Kibana interface, and adding:
http.cors.enabled: true
http.cors.allow-origin: "*"
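For reference (the exact Elasticsearch version isn't stated, so the setting name below is an assumption): remote-cluster seeds can also be configured in elasticsearch.yml, and they must point at the transport port, which is the same mistake the Kibana fix above corrects. On ES 6.5+ this would look like:

```yaml
cluster:
  remote:
    client2:                    # alias for the remote cluster
      seeds:
        - ekl1.test1.com:9300   # transport port, not the HTTP port 9200
```

On older 6.x releases the prefix was search.remote instead of cluster.remote.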

Not able to connect to Elasticsearch cluster

I am trying to set up a 2-node cluster for Elasticsearch.
cluster.name: test-cluster
node.name: es-node1
node.master: true
node.data: true
path.data: /es/data
path.logs: /es/log
network.host: privateIP
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: [PublicIP]
discovery.zen.minimum_master_nodes: 1
On node 1 you need to have this so that node 2 can see node 1
network.host: PublicIP-Node1
discovery.zen.ping.unicast.hosts: [PublicIP-Node2]
Similarly, on node 2 you need to have this:
network.host: PublicIP-Node2
discovery.zen.ping.unicast.hosts: [PublicIP-Node1]
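One caveat worth adding (not part of the original answer): with two master-eligible zen-discovery nodes, minimum_master_nodes: 1 permits split-brain. The standard formula is (master_eligible_nodes / 2) + 1, so for two masters:

```yaml
# quorum for two master-eligible nodes: (2 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```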

How to set up three machines with different IP addresses?

I have three machines, with the same ELK (6.2.2) version installed on all of them.
One is a master and the other two are client nodes;
each machine has a different IP address.
I have tried the configuration below, but it's not working.
Server:
cluster.name: sever
node.name: main-server
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
path.data: E:/ELK-6.2.2/elasticsearch/data
path.logs: E:/ELK-6.2.2/elasticsearch/logs
network.host: 11.xx.xx.xx
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["11.XX.XX.XX", "12.xx.xx.xx:9200", "13.xx.xx.xx:9200"]
discovery.zen.minimum_master_nodes: 1
Client:1
cluster.name: client-one
node.name: client-node-one
node.data: true
node.ingest: true
node.master: false
node.max_local_storage_nodes: 1
path.data: E:/ELK-6.2.2/elasticsearch/data
path.logs: E:/ELK-6.2.2/elasticsearch/logs
network.host: 12.xx.xx.xx
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["11.XX.XX.XX", "12.xx.xx.xx:9200", "13.xx.xx.xx:9200"]
discovery.zen.minimum_master_nodes: 1
Client: 2
cluster.name: client-two
node.name: client-node-two
node.data: true
node.ingest: true
node.master: false
node.max_local_storage_nodes: 1
path.data: E:/ELK-6.2.2/elasticsearch/data
path.logs: E:/ELK-6.2.2/elasticsearch/logs
network.host: 13.xx.xx.xx
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["11.XX.XX.XX", "12.xx.xx.xx:9200", "13.xx.xx.xx:9200"]
discovery.zen.minimum_master_nodes: 1
Please guide me on how to set up these machines.
cluster.name must be the same on all your hosts.
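Concretely, the three files above should share one cluster.name; it is also worth noting (not in the original answer) that the unicast host list should target the transport port 9300 rather than 9200. A sketch, keeping the names from the configs above:

```yaml
# identical on all three machines (only node.name and network.host differ)
cluster.name: sever
discovery.zen.ping.unicast.hosts: ["11.xx.xx.xx:9300", "12.xx.xx.xx:9300", "13.xx.xx.xx:9300"]
```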

Can't start Elasticsearch as a slave node

I can't start Elasticsearch with node.master:false
elasticsearch.yml
cluster.name: graylog2
node.name: "second"
node.master:false
node.data: true
index.number_of_shards: 2
bootstrap.mlockall: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: 192.168.93.76
script.disable_dynamic: true
service elasticsearch restart
netstat -an | grep 9200
(no output)
YAML has very strict syntax; you need to add a space between node.master: and false:
node.master: false
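To illustrate why the space matters (example not from the original answer), YAML only recognizes a key/value pair when the colon is followed by whitespace:

```yaml
node.master:false    # parsed as the single plain scalar "node.master:false"
node.master: false   # parsed as key "node.master" with boolean value false
```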
