Elasticsearch/Kibana log export in raw log format - elasticsearch

We have Elasticsearch, Filebeat and Kibana in a stateful deployment inside a Kubernetes cluster. We have an NFS server outside the Kubernetes cluster, running as a VM, from which we statically provision NFS volumes mounted inside the Elasticsearch pods to preserve the logs.
Is there any way to export logs from Elasticsearch/Kibana in raw format?
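If the goal is just to pull the stored log lines back out of Elasticsearch as plain text or NDJSON, one option is to page through the indices with the scroll/scan API and write each document (or just its message field) to a file. Below is a minimal Python sketch assuming the official elasticsearch client, a filebeat-* index pattern and a message field; the host, auth and field names are assumptions to adjust for your deployment.

    import json
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import scan

    # Adjust the endpoint/credentials to match your in-cluster service.
    es = Elasticsearch(["http://elasticsearch-master:9200"])

    with open("raw-logs.ndjson", "w") as out:
        # scan() walks every hit via the scroll API, so large indices are fine.
        for hit in scan(es, index="filebeat-*", query={"query": {"match_all": {}}}):
            doc = hit["_source"]
            # Write the original log line if Filebeat stored it in "message",
            # otherwise fall back to the full document as JSON.
            out.write(doc.get("message", json.dumps(doc)) + "\n")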

Related

Using Logstash to pass airflow logs to Elasticsearch

When using Logstash to retrieve Airflow logs from a folder you have access to, would I still need to make any changes in the airflow.cfg file?
For instance, I have Airflow and ELK deployed on the same EC2 instance. The Logstash .conf file has access to the Airflow logs path since they are on the same instance. Do I need to turn on remote logging in the Airflow config?
In fact, you have two options to push Airflow logs to Elasticsearch:
Using a log collector (Logstash, Fluentd, ...) to collect the Airflow logs and send them to the Elasticsearch server. In this case you don't need to change any Airflow config; you can just read the logs from the files or stdout and send them to ES (a minimal sketch of this option follows below).
Using the Airflow remote logging feature. In this case Airflow will log directly to your remote logging server (ES in your case) and will keep a local copy of the logs to show when the remote server is unavailable.
So the answer to your question is no: if you have Logstash, you don't need the Airflow remote logging config.
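For the first option, the collector itself can be anything that reads the log files and bulk-indexes them; the following is a minimal Python sketch of that idea (the log directory and index name are assumptions, not Airflow or Logstash defaults):

    from pathlib import Path
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    es = Elasticsearch(["http://localhost:9200"])
    LOG_DIR = Path("/opt/airflow/logs")  # hypothetical location of the Airflow log files

    def actions():
        # One document per log line, tagged with the file it came from.
        for log_file in LOG_DIR.rglob("*.log"):
            for line in log_file.read_text(errors="replace").splitlines():
                yield {"_index": "airflow-logs",
                       "_source": {"path": str(log_file), "message": line}}

    bulk(es, actions())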

Use NiFi to copy/move logs from different NiFi servers into AWS S3

We have a NiFi cluster of 4 servers and we want to ingest the logs of all the servers into S3. Is there a way in NiFi by which we can ingest the logs of each NiFi server to S3? The logs on each node are stored on its local disk (a separate disk mounted for NiFi logs: /data/logs/nifi).
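A NiFi-native flow would typically tail the files on each node and hand them to an S3 put processor; the sketch below shows the same per-node idea in Python with boto3, purely to illustrate what such a flow does. The bucket name and key prefix are placeholders.

    import socket
    from pathlib import Path
    import boto3

    s3 = boto3.client("s3")
    LOG_DIR = Path("/data/logs/nifi")
    BUCKET = "my-nifi-logs"                   # placeholder bucket name
    PREFIX = f"nifi/{socket.gethostname()}"   # keep each node's logs separate

    for log_file in LOG_DIR.glob("*.log*"):
        # Upload every (rotated) log file under a per-host prefix.
        s3.upload_file(str(log_file), BUCKET, f"{PREFIX}/{log_file.name}")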

Cannot connect LogStash to AWS ElasticSearch "Attempted to resurrect connection to dead ES instance, but got an error"

I am building a setup which consists of AWS ElasticSearch (which includes both ElasticSearch and Kibana), LogStash and FileBeat. I have been following this tutorial which explains how to set up a Logstash server for Amazon Elasticsearch Service and auth with IAM.
I am using an Ubuntu 18.04 EC2 m4.large instance to host both LogStash and FileBeat. I have provisioned all of my assets inside a VPC. So far, I have provisioned an AWS ES domain, an Ubuntu 18.04 EC2 and then installed LogStash inside that. Right now, I am ignoring FileBeat and I just want to connect my LogStash service to the AWS ES domain.
As per the tutorial, I have:
Created an IAM Access Policy
Created the role logstash-system-es with "ec2.amazonaws.com" as the trusted entity
Authorized the role in my AWS ES domain dashboard
Installed LogStash and configured it as specified
(Here I entered the Access Key I am using and its ID into the output section. However, I am not sure how the role and an access key relate to each other.)
Started LogStash and tailed the logstash-plain.log file to see the output
When I check the output, it appears LogStash cannot connect to the ES domain. The following line repeats indefinitely (I have replaced the AWS ES domain name with AWSESDOMAIN):
    Attempted to resurrect connection to dead ES instance, but got an error.
    {:url=>"https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/",
     :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError,
     :error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/'"}
FYI, I configured my AWS ES domain with Fine Grained Access Control when setting it up.
What seems to be the issue here? Is it related to Fine Grained Access Control, security groups, or IAM?
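One way to narrow it down, independent of LogStash, is to sign a plain request to the domain with the same credentials (SigV4, service name "es") and see whether the 403 persists; if it does, the problem is on the domain side (Fine Grained Access Control role mapping or the access policy) rather than in the LogStash config. A rough sketch, assuming the requests and requests_aws4auth packages and substituting your own keys:

    import requests
    from requests_aws4auth import AWS4Auth

    region = "us-east-1"
    # Use whatever credentials LogStash is using (your access key, or keys
    # obtained by assuming the logstash-system-es role).
    awsauth = AWS4Auth("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", region, "es")

    resp = requests.get(
        "https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/",
        auth=awsauth,
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])
    # A 403 here as well points at the domain's access policy or the
    # fine-grained access control role mapping, not at LogStash itself.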

Receiving logs from Filebeat in ECK on GKE

I built a cluster on GKE with the ECK operator and am trying to send logs from an on-premises Filebeat installation to the cloud.
Elasticsearch has a LoadBalancer IP. I specified the certificate, password and the other necessary things, but I couldn't make it work. Is there a tutorial?
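Before wiring up filebeat.yml, it can help to confirm that the LoadBalancer endpoint, the CA certificate and the elastic password work together at all. A small Python check, assuming the elasticsearch 8.x client and the CA/password extracted from the ECK-generated secrets (the IP, file name and password below are placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(
        ["https://203.0.113.10:9200"],          # LoadBalancer external IP (placeholder)
        ca_certs="ca.crt",                      # CA pulled from the ECK HTTP certs secret
        basic_auth=("elastic", "CHANGE_ME"),    # password from the elastic user secret
    )
    print(es.info())                            # should print cluster name and version

If this fails with a certificate or authentication error, Filebeat will fail the same way, so the service, secret or certificate SANs on the Kubernetes side need fixing first.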

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine, with one master and at the moment zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (installed with the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but that is not happening. The way I am setting up the monitoring is:
Open grafana-server on port 3000 and install the Kubernetes app.
Install stable/prometheus from Helm, using a custom YAML file I found in another guide.
Add the Prometheus data source to Grafana with the IP of the Kubernetes Prometheus service (or the pods; I tried both and both work), and use TLS Client Auth.
Start the proxy with kubectl proxy.
Fill in all the information needed in the Kubernetes Grafana app and then deploy it. No errors.
All Kubernetes metrics show, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does someone have any clue what I am doing wrong?
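To separate the Grafana/proxy issue from Prometheus itself, one option is to query the Prometheus HTTP API directly over both paths: the service or pod IP, and the kubectl proxy URL. A small sketch, assuming a stable/prometheus install whose service is called prometheus-server in the default namespace (the IP, namespace and service name are assumptions to adjust):

    import requests

    # Direct IP (pod IP on 9090, or service IP on 80, depending on what Grafana
    # points at) and the kubectl-proxy path to the same service.
    direct = "http://10.96.0.15:9090/api/v1/query"
    proxied = ("http://127.0.0.1:8001/api/v1/namespaces/default/services/"
               "prometheus-server:80/proxy/api/v1/query")

    for name, url in [("direct", direct), ("kubectl proxy", proxied)]:
        try:
            r = requests.get(url, params={"query": "up"}, timeout=5)
            print(name, r.status_code, r.json().get("status"))
        except requests.RequestException as exc:
            print(name, "failed:", exc)

Whichever path fails here is the one to debug; if both answer, the problem is more likely in how the Grafana data source (TLS Client Auth, URL) is configured.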
