Receiving logs from Filebeat to ECK on GKE - elasticsearch

I built a cluster on GKE with the ECK operator and am trying to send logs from an on-premises Filebeat installation to the cloud.
Elasticsearch is exposed through a LoadBalancer IP. I specified the certificate, the password, and the other necessary settings, but I couldn't make it work. Is there a tutorial?
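A minimal filebeat.yml output sketch for this kind of setup, assuming the ECK cluster's HTTP CA has been copied to the Filebeat host and the elastic user's password is supplied via an environment variable (the IP, file paths and variable name are placeholders):

```yaml
# Hypothetical output section of filebeat.yml for an on-premises Filebeat
# shipping to an ECK-managed Elasticsearch exposed through a LoadBalancer.
output.elasticsearch:
  hosts: ["https://34.0.0.10:9200"]          # LoadBalancer IP of the *-es-http Service (placeholder)
  username: "elastic"
  password: "${ES_PASSWORD}"                 # value of the <cluster>-es-elastic-user secret
  ssl:
    certificate_authorities: ["/etc/filebeat/certs/ca.crt"]  # CA from the <cluster>-es-http-certs-public secret
    # verification_mode: certificate         # relax host-name checks if the cert has no SAN for the LB IP
```

Note that ECK's self-signed HTTP certificate does not include the LoadBalancer IP as a SAN by default; either add it under spec.http.tls.selfSignedCertificate.subjectAltNames in the Elasticsearch manifest or relax verification as in the commented line above.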

Related

Cannot connect LogStash to AWS ElasticSearch "Attempted to resurrect connection to dead ES instance, but got an error"

I am building a setup which consists of AWS ElasticSearch (which includes both Elasticsearch and Kibana), Logstash and Filebeat. I have been following this tutorial, which explains how to set up a Logstash server for Amazon Elasticsearch Service with IAM authentication.
I am using an Ubuntu 18.04 EC2 m4.large instance to host both Logstash and Filebeat. I have provisioned all of my assets inside a VPC. So far, I have provisioned an AWS ES domain and an Ubuntu 18.04 EC2 instance, and installed Logstash on it. Right now, I am ignoring Filebeat; I just want to connect my Logstash service to the AWS ES domain.
As per the tutorial, I have:
1. Created an IAM access policy.
2. Created the role logstash-system-es with "ec2.amazonaws.com" as the trusted entity.
3. Authorized the role in my AWS ES domain dashboard.
4. Installed Logstash and configured it as specified. (Here I entered the access key I am using and its ID into the output section; however, I am not sure how the role and the access key relate to each other.)
5. Started Logstash and tailed the logstash-plain.log file to see the output.
When I check the output, it appears Logstash cannot connect to the ES domain. The following line repeats indefinitely (I have replaced the AWS ES domain name with AWSESDOMAIN):
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/'"}
FYI, I configured my AWS ES domain with fine-grained access control when I set it up.
What seems to be the issue here? Is it related to fine-grained access control, security groups, or IAM?
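With fine-grained access control enabled, a 403 from the domain usually means the IAM principal that signs Logstash's requests is not mapped to any role inside the domain's security plugin; authorizing the role in the domain dashboard only covers the resource-based access policy, not the fine-grained layer. It also matters that Logstash signs with exactly one principal (either the EC2 instance role or the access key, not a mix). On an AWS-managed domain this mapping is normally created through the Kibana Security UI; the snippet below is only a conceptual sketch of the same mapping in the security plugin's roles_mapping.yml format, where the role name logstash_writer and the account ID are placeholders.

```yaml
# Conceptual sketch: map the IAM role that Logstash uses onto an internal
# security role that holds the required index permissions. On AWS-managed
# domains this is done via Kibana -> Security -> Role Mappings, not a file.
_meta:
  type: "rolesmapping"
  config_version: 2

logstash_writer:                 # placeholder internal role with write access to the target indices
  reserved: false
  backend_roles:
    - "arn:aws:iam::123456789012:role/logstash-system-es"   # placeholder account ID
```

Once the mapping exists, the 403 should turn into successful bulk requests, provided the mapped role grants permissions on the indices Logstash writes to.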

Elastic.co APM with GKE network policies

I have GKE clusters, and I have Elasticsearch deployments on elastic.co. On my GKE cluster I have network policies for each pod with egress and ingress rules. My issue is that in order to use Elastic APM I need to allow egress to my Elastic deployment.
Does anyone have an idea how to do that? I am thinking of either a list of IPs for elastic.co that I could whitelist in my egress rules on the GCP instances, or some kind of proxy between my GKE cluster and Elastic APM.
I know a solution could be to run a local Elastic cluster on GCP, but I would prefer not to go this way.
Regarding the possibility of using some kind of proxy between your GKE cluster and Elastic APM: you can check the following link [1] to see if it fits your needs.
[1] https://cloud.google.com/vpc/docs/special-configurations#proxyvm
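Since Elastic Cloud does not publish a stable list of egress IPs, an alternative to IP whitelisting is to allow egress from the instrumented pods by port only (Elastic Cloud endpoints commonly listen on 443 or 9243) and keep everything else locked down, or to route the traffic through a proxy VM with a static IP as described in [1]. A minimal NetworkPolicy sketch along those lines, with placeholder namespace, labels and ports:

```yaml
# Hypothetical NetworkPolicy: let pods running the APM agent reach
# Elastic Cloud over HTTPS while keeping other egress blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-elastic-apm
  namespace: my-app                # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-instrumented-app     # placeholder label for pods running the APM agent
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0        # no stable Elastic Cloud IP range to pin down
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 9243
    - to:                          # keep DNS resolution working
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
```

This trades IP-level restriction for port-level restriction; if that is too permissive, the proxy VM from [1] gives you a single static egress IP to whitelist instead.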

How to create self-signed certificates for two sets of StatefulSet pods that communicate with each other through a Service

I am trying to secure communication between Elasticsearch, Logstash, Filebeat, and Kibana. I have generated certificates as per this blog using the X-Pack certutil, but when my Logstash service tries to communicate with the Elasticsearch data nodes' Service, I get the following error:
Host name 'elasticsearch' does not match the certificate subject provided by the peer (CN=elasticsearch-data-2)
I know this is a pretty common error and I have tried multiple approaches, but I have been unable to find a solution. I am confused about what CN and SAN I should provide so that all my data nodes, master nodes, Logstash and Kibana instances can communicate with each other.
PS: I have one StatefulSet (elasticsearch-data, elasticsearch-master) with one ClusterIP Service (elasticsearch, elasticsearch-master) for the ES data nodes and the master nodes respectively.
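The error is plain TLS hostname verification: clients connect to the Service name elasticsearch, but the certificate they are shown only names the individual pod (CN=elasticsearch-data-2). One way out is to issue certificates whose SANs cover every DNS name a client may use, which elasticsearch-certutil supports via an instances file. A sketch of such a file, assuming the default namespace and the Service/StatefulSet names from the question (adjust names and namespace to your cluster):

```yaml
# Hypothetical instances.yml for: elasticsearch-certutil cert --in instances.yml
# The dns entries must cover every name a client uses to reach a node:
# the ClusterIP Service names and the per-pod names under the headless Service.
instances:
  - name: elasticsearch-data
    dns:
      - elasticsearch                                        # data ClusterIP Service
      - elasticsearch.default.svc.cluster.local
      - "*.elasticsearch-data.default.svc.cluster.local"     # matches e.g. elasticsearch-data-2.elasticsearch-data...
  - name: elasticsearch-master
    dns:
      - elasticsearch-master                                 # master ClusterIP Service
      - elasticsearch-master.default.svc.cluster.local
      - "*.elasticsearch-master.default.svc.cluster.local"
```

Alternatively, clients such as Logstash and Kibana can be told to check only that the certificate is signed by your CA and to skip hostname matching (the Elastic clients expose a verification-mode setting for this, e.g. a value of "certificate"), at the cost of weaker verification.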

How can I import data from MySQL (AWS RDS) using Logstash into Elasticsearch on Elastic Cloud via an AWS VPC?

I'm trying to import some data from AWS RDS into Elasticsearch on hosted Elastic Cloud (not the AWS Elasticsearch Service).
What I want to do is the following:
What: Data import
From: AWS RDS MySQL
To: Elasticsearch in Elastic Cloud
How: Using Logstash of Elastic Cloud
However, my AWS RDS MySQL instance is inside an AWS VPC, and Elastic Cloud doesn't provide a static IP address (see the Elasticsearch FAQ).
So Logstash can't access AWS RDS MySQL while preserving the security rules of the AWS VPC.
In previous data transfers, I used to add the transferring host's IP address to the VPC whitelist. In this case, that can't be done.
I don't know whether this approach is even feasible.
How can I handle this case?
After some research, I concluded that there is no way to do this directly for now. However, there is a compromise: by running Logstash on an EC2 instance inside the Amazon VPC, Logstash can access AWS RDS MySQL, and with the Elastic Cloud credentials it can also send data out to Elasticsearch in Elastic Cloud.

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine, with one master and at the moment zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (from the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but it is not working. The way I am setting up the monitoring is:
1. Open grafana-server at port 3000 and install the Kubernetes app.
2. Install stable/prometheus from Helm, using a custom YAML file I found in another guide.
3. Add a Prometheus data source to Grafana with the IP from the Kubernetes Prometheus service (or pods; I tried both and both work), using TLS Client Auth.
4. Start the proxy with kubectl proxy.
5. Fill in all the information needed in the Kubernetes Grafana app and then deploy it. No errors.
All Kubernetes metrics show up, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does anyone have a clue what I am doing wrong?
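One thing worth checking is the data-source definition itself: the Prometheus shipped by the stable/prometheus chart serves plain HTTP, so a data source configured with TLS Client Auth, or one that only behaves differently depending on whether kubectl proxy is running, suggests Grafana is not talking to Prometheus directly. A hedged sketch of a Grafana data-source provisioning file that points straight at the chart's service over HTTP, assuming Grafana can resolve in-cluster DNS (service name, namespace and port are placeholders; check them with kubectl get svc, and use the service or pod IP instead if Grafana runs outside the cluster):

```yaml
# Hypothetical Grafana data-source provisioning file, e.g. dropped under
# /etc/grafana/provisioning/datasources/. It targets the Prometheus service
# created by the stable/prometheus chart directly, over plain HTTP, so no
# kubectl proxy and no TLS client auth are involved.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                  # the Grafana backend issues the queries
    url: http://prometheus-server.default.svc.cluster.local:80
    isDefault: true
```

If the direct URL works, the Kubernetes app and the Prometheus dashboards can share that same data source instead of depending on whether the proxy happens to be running.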
