ElasticSearch-Camel component connect to remote cluster - elasticsearch

I'm trying to connect to an Elasticsearch cluster on a different subnet. I've tried connecting using the cluster name, the node names, and the IP:
elasticsearch://aaclust?ip=10.10.1.11&port=9300&operation=INDEX&indexName=statistics&indexType=customer_statistic
The cluster has two nodes, aaclustes1 (10.10.1.11) and aaclustes2 (10.10.1.12), and the cluster name is aaclust. I'm using Camel 2.15.1 and the same version of the camel-elasticsearch component.
Any tips would be appreciated; I'm new to Elasticsearch.
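For reference, the endpoint URI from the question can be assembled programmatically, which makes typos in the options easier to spot. This is a minimal sketch; the class and method names are illustrative, only the URI values come from the question:

```java
// Sketch: building the camel-elasticsearch endpoint URI used above.
// Values are taken from the question; class/method names are illustrative.
public class ElasticEndpoint {

    static String buildEndpoint(String cluster, String ip, int port,
                                String index, String type) {
        return String.format(
            "elasticsearch://%s?ip=%s&port=%d&operation=INDEX"
                + "&indexName=%s&indexType=%s",
            cluster, ip, port, index, type);
    }

    public static void main(String[] args) {
        // Note: the transport port 9300 (not the HTTP port 9200) must be
        // reachable from the Camel host across the subnet boundary, and the
        // cluster name in the URI must match cluster.name on the nodes.
        System.out.println(buildEndpoint("aaclust", "10.10.1.11", 9300,
            "statistics", "customer_statistic"));
    }
}
```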

Related

How to create self-signed certificates for two StatefulSets' pods that communicate with each other through a service

I am trying to secure communication between Elasticsearch, Logstash, Filebeat, and Kibana. I generated certificates as described in this blog using the X-Pack certutil tool, but when my Logstash service tries to communicate with the Elasticsearch data nodes' service, I get the following error:
Host name 'elasticsearch' does not match the certificate subject provided by the peer (CN=elasticsearch-data-2)
I know this is a pretty common error and I have tried multiple approaches, but I can't find a solution. I am confused about which CN and SAN values I should provide so that all my data nodes, master nodes, Logstash, and Kibana instances can communicate with each other.
PS: I have one StatefulSet each (elasticsearch-data, elasticsearch-master) with one ClusterIP service each (elasticsearch, elasticsearch-master) for the ES data nodes and master nodes.
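One common approach is to put every DNS name a client might use into the certificate's SAN list, so the hostname check passes regardless of which pod answers. A sketch of an instances.yml for `bin/elasticsearch-certutil cert --in instances.yml` could look like this (the `default` namespace is an assumption; the service names are taken from the question):

```yaml
# Hypothetical instances.yml — adjust names and namespace to your cluster.
instances:
  - name: elasticsearch-data
    dns:
      - elasticsearch                                  # data ClusterIP service
      - elasticsearch.default.svc.cluster.local
      - "*.elasticsearch.default.svc.cluster.local"    # per-pod StatefulSet DNS
  - name: elasticsearch-master
    dns:
      - elasticsearch-master                           # master ClusterIP service
      - elasticsearch-master.default.svc.cluster.local
      - "*.elasticsearch-master.default.svc.cluster.local"
```

With SANs like these, Logstash can keep connecting to the service name `elasticsearch` while the certificate also covers the individual pod hostnames.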

FQDN on Azure Service Fabric on Premise

I don't see a way to configure a cluster FQDN for an on-premise installation.
I created a six-node cluster (each node running on a physical server) and I'm only able to contact each node on its own IP instead of contacting the cluster on a general FQDN. With this model, I have to be aware of which nodes are up and which are down.
Does anybody know how to achieve this, based on the sample configuration files provided with the Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. This will be used to route traffic to healthy nodes.

ElasticSearch Couchbase Replication Issue

I have a problem with my Elasticsearch cluster in a Couchbase XDCR configuration.
When creating the cluster reference I entered the private IP 10.28.0.21 (my Elasticsearch and Couchbase run on the same server), but the system then replaced it with my server's public IP (92.222..). This seems very strange, and I don't know why it happens.
The Couchbase logs show:
Updated remote cluster 'ElasticSearch' hostname to "92.222..:9091" because old one
("10.28.0.21:9091") is not part of the cluster anymore
Thanks for any suggestions.
Couchbase uses the IP that Elasticsearch publishes as its host address. If you want Elasticsearch to publish the private IP instead of the public one, you can override it with the network.publish_host setting in elasticsearch.yml. If the private IP isn't static, you might have to set it to the IP of a particular network interface, such as _eth0_. Take a look here for more details: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
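As a sketch, the relevant elasticsearch.yml fragment might look like this (the IP is the one from the question):

```yaml
# elasticsearch.yml — publish the private address so XDCR keeps using it.
network.publish_host: 10.28.0.21
# If the private IP is not static, publish a specific interface instead:
# network.publish_host: _eth0_
```

Restart the node after changing this, then recreate the cluster reference in Couchbase.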

Spark in DSE4.5 with EC2MultiRegionSnitch

I have a DSE 4.5 Cassandra cluster with multi-region data centers on EC2, so I'm using EC2MultiRegionSnitch, which returns the public IP. When I try to create a Spark node, the logs say "Failed to bind PUBLIC IP:7077". I am sure this is due to the EC2MultiRegionSnitch setting. I spoke to the Amazon guys, but they were not able to help me bind the port to the public IP.
Now I'm not sure which snitch (apart from EC2MultiRegionSnitch) I can use for an EC2 multi-region data center cluster, so that I can bind the cluster across data centers and run Spark.
Can you please suggest something?
The EC2MultiRegionSnitch should be fine for your setup. I suspect the issue is that port 7077 is not open in your security group in each data center.
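If the security group is the culprit, opening the Spark port would look roughly like this AWS CLI call, run once per region (the security-group id and CIDR below are placeholders, not values from the question):

```shell
# Placeholder group id and CIDR — substitute your own values.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 7077 \
    --cidr 203.0.113.0/24   # the peer data center's address range
```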

Elastic Search Clustering in the Cloud

I have two Linux VMs (both in the same datacenter of a cloud provider): Elastic1 and Elastic2, where Elastic2 is a clone of Elastic1. Both have the same CentOS version, the same cluster name, and the same ES version; again, Elastic2 is a clone.
I use the service wrapper to start them both automatically at boot, and added each other's IP to their respective iptables files, so I can now successfully ping between the nodes.
I thought this would be enough to allow ES to form a cluster, but to no avail.
Elastic1 and Elastic2 each have one index, named e1 and e2 respectively. Each index has one shard with no replicas.
I can use the head and paramedic plugins on each server successfully, and use curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true' to validate that the cluster name is the same and that each server lists only one node.
Is there anything glaring that would explain why these nodes aren't talking? I've restarted the ES service and rebooted both servers to no avail. Could cloning be the problem?
In your elasticsearch.yml:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ['host1:9300', 'host2:9300']
So, just list your node IPs with the transport port (default is 9300) under unicast hosts. Multicast is enabled by default, but it generally doesn't work in cloud environments without external plugins.
Also, make sure to check your IP rules / security groups! That's easy to forget.
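Once unicast discovery is configured on both nodes and they are restarted, you can verify that the cluster formed (a sketch; assumes a node is running locally):

```shell
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
# The response should report "number_of_nodes" : 2 once both nodes have joined.
```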
