How to create self-signed certificates for two sets of StatefulSet pods that communicate with each other through a service - elasticsearch

I am trying to secure communication between Elasticsearch, Logstash, Filebeat, and Kibana. I have generated certificates as per this blog using the X-Pack certutil tool, but when my Logstash service tries to communicate with the Elasticsearch data nodes' service, I get the following error:
Host name 'elasticsearch' does not match the certificate subject provided by the peer (CN=elasticsearch-data-2)
I know this is a pretty common error and I have tried multiple approaches, but I have been unable to find a solution. I am confused about which CN and SANs I should provide so that all my data nodes, master nodes, Logstash, and Kibana instances can communicate with each other.
PS: I have one StatefulSet each for the data and master nodes (elasticsearch-data, elasticsearch-master), with one ClusterIP service for each (elasticsearch, elasticsearch-master).
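The hostname-mismatch error means the node certificates carry only the pod name as CN and no SANs for the DNS names clients actually dial. A sketch of one fix, assuming Elasticsearch 6.3+ (where the tool lives at bin/elasticsearch-certutil), services in the default namespace, and a CA generated as in the blog: list every DNS name a client may use in an instances file and pass it to certutil.

# instances.yml - every DNS name a client may use must appear as a SAN
instances:
  - name: elasticsearch-data
    dns:
      - elasticsearch                                 # ClusterIP service name Logstash dials
      - elasticsearch.default.svc.cluster.local       # fully qualified service name
      - "*.elasticsearch.default.svc.cluster.local"   # per-pod names, if a headless service is used (assumption)
  - name: elasticsearch-master
    dns:
      - elasticsearch-master
      - elasticsearch-master.default.svc.cluster.local

# one certificate per instance, signed by the CA generated earlier
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --in instances.yml --out certs.zip

With the service names present as SANs, any pod behind the elasticsearch service presents a certificate that matches the hostname Logstash connects to, so the per-pod CN no longer causes a mismatch.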

Related

Cannot connect LogStash to AWS ElasticSearch "Attempted to resurrect connection to dead ES instance, but got an error"

I am building a setup which consists of AWS ElasticSearch (includes both ElasticSearch and Kibana), LogStash and FileBeat. I have been following this tutorial which explains how to Setup a Logstash Server for Amazon Elasticsearch Service and Auth with IAM.
I am using an Ubuntu 18.04 EC2 m4.large instance to host both LogStash and FileBeat. I have provisioned all of my assets inside a VPC. So far, I have provisioned an AWS ES domain, an Ubuntu 18.04 EC2 and then installed LogStash inside that. Right now, I am ignoring FileBeat and I just want to connect my LogStash service to the AWS ES domain.
As per the tutorial, I have:
1. Created an IAM Access Policy
2. Created the Role logstash-system-es with "ec2.amazonaws.com" as the trusted entity
3. Authorized the Role in my AWS ES domain dashboard
4. Installed LogStash and configured it as specified (here I entered the Access Key I am using and its ID into the output section; however, I am not sure how the Role and an Access Key relate to each other)
5. Started LogStash and tailed the logstash-plain.log file to see the output
When I check the output, it appears LogStash cannot connect to the ES domain. The following line repeats indefinitely. (I have replaced the AWS ES domain name with AWSESDOMAIN.)
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/'"}
FYI, I configured my AWS ES domain with Fine Grained Access Control when setting it up.
What seems to be the issue here? Is it related to Fine Grained Access Control, Security Groups, or IAM?
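For context, a sketch of the output section the tutorial describes, assuming the logstash-output-amazon_es plugin; the credentials are placeholders:

output {
  amazon_es {
    hosts => ["vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    aws_access_key_id => "ACCESS_KEY_ID"          # placeholder for the key from the tutorial
    aws_secret_access_key => "SECRET_ACCESS_KEY"  # placeholder
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

Worth noting: with Fine Grained Access Control enabled, a 403 on a correctly signed request usually means the signing IAM identity has not been mapped to a role inside the domain's security plugin, so that mapping has to exist in addition to the IAM access policy.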

FQDN on Azure Service Fabric on Premise

I don't see a way to configure the cluster FQDN for an on-premises installation.
I created a 6-node cluster (each node running on a physical server), and I'm only able to contact each node on its own IP instead of contacting the cluster on a general FQDN. With this model, I have to be aware of which node is up and which node is down.
Does somebody know how to achieve it, based on the sample configurations files provided with Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. It will give clients a single FQDN to target and route traffic only to healthy nodes.
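As an illustration only (nginx is an arbitrary choice of load balancer, and the node addresses plus port 19000, Service Fabric's default client connection endpoint, are assumptions), a TCP load-balancing block could look like:

stream {
    upstream sf_cluster {
        server 10.0.0.1:19000;   # hypothetical node addresses
        server 10.0.0.2:19000;
        server 10.0.0.3:19000;   # ...and the remaining nodes
    }
    server {
        listen 19000;            # clients resolve the cluster FQDN to this host
        proxy_pass sf_cluster;
    }
}

Point the cluster's DNS record (the "general FQDN") at the load balancer, so failed nodes are taken out of rotation by the balancer rather than tracked by each client.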

ElasticSearch-Camel component connect to remote cluster

I'm trying to connect to an Elasticsearch cluster on a different subnet. I've tried connecting using the cluster name, the node names, and the IP:
elasticsearch://aaclust?ip=10.10.1.11&port=9300&operation=INDEX&indexName=statistics&indexType=customer_statistic
The cluster has 2 nodes, aaclustes1 (10.10.1.11) and aaclustes2 (10.10.1.12), and the cluster name is aaclust. I'm using Camel 2.15.1 and the same version of the Elasticsearch component.
Any tips would be appreciated; I'm new to Elasticsearch.
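For reference, the endpoint above used from a route (Java DSL; this assumes camel-elasticsearch is on the classpath, and direct:index is a hypothetical entry point):

import org.apache.camel.builder.RouteBuilder;

public class StatisticsRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:index")  // hypothetical trigger; the message body holds the document to index
            .to("elasticsearch://aaclust?ip=10.10.1.11&port=9300"
                + "&operation=INDEX&indexName=statistics&indexType=customer_statistic");
    }
}

Two things worth checking with this setup: 9300 is the native transport port (not HTTP), so it must be reachable across the subnets, and the transport client inside camel-elasticsearch has to match the cluster's Elasticsearch version and cluster name exactly, or the connection is rejected.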

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is by design.
So you have a few options:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes).
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want access logs, you need either option 2 or option 3, but all of the solutions above will let you secure your ES environment.
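A minimal sketch of option 3, assuming nginx on the same host and Elasticsearch listening on localhost:9200 (the port and log path are arbitrary choices):

server {
    listen 8080;
    access_log /var/log/nginx/es_access.log;  # records each client IP and request line
    location / {
        proxy_pass http://localhost:9200;
    }
}

Clients then query port 8080, and the access log answers the original question of which IPs are hitting the cluster; pair it with a firewall rule on 9200 so the proxy can't be bypassed.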

Elastic Search Clustering in the Cloud

I have 2 Linux VMs (both in the same datacenter of a cloud provider): Elastic1 and Elastic2, where Elastic2 is a clone of Elastic1. Both have the same CentOS version, the same cluster name, and the same ES version; again, Elastic2 is a clone.
I use the service wrapper to automatically start them both at boot, and added each other's IP to their respective iptables files, so now I can successfully ping between the nodes.
I thought this would be enough to allow ES to form a cluster, but to no avail.
Both Elastic1 and Elastic2 have one index each, named e1 and e2 respectively. Each index has 1 shard with no replicas.
I can use the head and paramedic plugins on each server successfully, and curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true' confirms that the cluster name is the same and that each server lists only one node.
Is there anything glaring as to why these nodes aren't talking? I've restarted the ES service and rebooted both servers, to no avail. Could cloning be the problem?
In your elasticsearch.yml:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ['host1:9300', 'host2:9300']
So, just list your node IPs with the transport port (default 9300) under unicast hosts. Multicast is enabled by default, but generally doesn't work in cloud environments without external plugins.
Also, make sure to check your IP rules / security groups! That's easy to forget.
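Once the unicast hosts are in place and both nodes restarted, the cluster health endpoint is a quick way to confirm they have actually joined:

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
# look for "number_of_nodes" : 2 - a value of 1 means the nodes still can't see each other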
