Elasticsearch Fleet Server Outputs > Specify where agents will send data - elasticsearch

Is it possible to send data to an external Elasticsearch deployment with Fleet Server?
I have tried the Kibana Fleet UI settings, but there are no username/password fields for the connection. If I specify those in the Advanced YAML configuration, I get this error: cannot set both api_key and username/password accessing 'elasticsearch'
Fleet > Settings > Outputs | Specify where agents will send data
I can see the Kibana Fleet setting xpack.fleet.outputs > config, described as "Extra config for that output", which should let me set this manually, but there is no example of how to set this config variable.
Kibana version: kibana:8.5.3
Elasticsearch version: elasticsearch:8.5.3
Install method: Elastic ECK 2.6
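For context, the setting being referred to is a kibana.yml entry roughly like the sketch below; the field names (id, name, type, hosts, is_default) and values are assumptions for illustration, not a verified working configuration:

# kibana.yml - hedged sketch of a preconfigured Fleet output
xpack.fleet.outputs:
  - id: external-es                          # arbitrary identifier
    name: external-elasticsearch
    type: elasticsearch
    hosts: ["https://external-es.example.com:9200"]
    is_default: true
    config:                                  # the "Extra config for that output" field mentioned above
      ssl.verification_mode: certificate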

Agents don't support sending logs to a remote cluster, so you won't be able to send data to an external Elasticsearch per se.
However, you can either opt for Beats and provide the list of ES hosts where you want to send the logs, or use Logstash to receive input from the Agent and configure its output with the list of ES hosts (a sketch of the latter is shown below).
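As a rough illustration of the Logstash route, a minimal pipeline sketch follows; the elastic_agent input, the port, and the example hosts/credentials are assumptions for illustration rather than tested settings:

# Logstash pipeline sketch: receive events from Elastic Agent and forward them to external ES hosts
input {
  elastic_agent {
    port => 5044                     # the port the Agent's Logstash output is pointed at
  }
}
output {
  elasticsearch {
    hosts => ["https://external-es-1.example.com:9200", "https://external-es-2.example.com:9200"]
    user => "logstash_writer"        # hypothetical credentials on the external cluster
    password => "${ES_PWD}"
    data_stream => true              # keep the Agent's data streams intact
  }
}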

Related

Using Logstash to pass airflow logs to Elasticsearch

When using Logstash to retrieve Airflow logs from a folder you have access to, would I still need to make any changes in the airflow.cfg file?
For instance, I have Airflow and ELK deployed on the same EC2 instance. The Logstash .conf file has access to the Airflow logs path since they are on the same instance. Do I need to turn on remote logging in the Airflow config?
In fact you have two options to push Airflow logs to Elasticsearch:
Using a log collector (Logstash, Fluentd, ...) to collect the Airflow logs and send them to the Elasticsearch server; in this case you don't need to change any Airflow config, you can just read the logs from the files or stdout and send them to ES (see the sketch after this answer).
Using the Airflow remote logging feature; in this case Airflow will log directly to your remote logging server (ES in your case) and will store a local copy of the logs to show when the remote server is unavailable.
So the answer to your question is no: if you have Logstash, you don't need the Airflow remote logging config.
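For the first option, a minimal Logstash .conf sketch might look like the following; the log path, index name and ES address are assumptions for your environment:

# Logstash sketch: read Airflow task logs from disk and ship them to Elasticsearch
input {
  file {
    path => "/opt/airflow/logs/**/*.log"     # adjust to your Airflow base_log_folder
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "airflow-logs-%{+YYYY.MM.dd}"
  }
}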

Cannot connect LogStash to AWS ElasticSearch "Attempted to resurrect connection to dead ES instance, but got an error"

I am building a setup which consists of AWS Elasticsearch (which includes both Elasticsearch and Kibana), Logstash and Filebeat. I have been following this tutorial, which explains how to set up a Logstash server for Amazon Elasticsearch Service and authenticate with IAM.
I am using an Ubuntu 18.04 EC2 m4.large instance to host both Logstash and Filebeat. I have provisioned all of my assets inside a VPC. So far, I have provisioned an AWS ES domain and an Ubuntu 18.04 EC2 instance, and installed Logstash on it. Right now, I am ignoring Filebeat and I just want to connect my Logstash service to the AWS ES domain.
As per the tutorial, I have
Created an IAM Access Policy
Created Role logstash-system-es with "ec2.amazonaws.com" as trusted entity
Authorized the Role in my AWS ES domain dashboard
Installed LogStash and configured as specified
(Here I entered the Access Key I am using and its ID into the output section. However, I am not sure how the Role and an Access Key relates to each other)
Started LogStash and tailed the logstash-plain.log file to see the output
When I check the output, it appears Logstash cannot connect to the ES domain. The following line repeats indefinitely. (I have replaced the AWS ES domain name with AWSESDOMAIN.)
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/'"}
FYI I have configured my AWS ES domain with Fine Grained Access Control when setting it up.
What seems to be the issue here? Is it related to Fine-Grained Access Control? Security groups? An IAM issue?
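For reference, the output section that kind of tutorial describes typically uses the amazon_es output plugin, roughly as sketched below; the option names and values here are assumptions and may differ from the tutorial's exact configuration:

# Logstash output sketch for the amazon_es plugin (option names assumed; verify against the plugin docs)
output {
  amazon_es {
    hosts => ["vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    aws_access_key_id => "YOUR_ACCESS_KEY_ID"       # or rely on the instance role instead of static keys
    aws_secret_access_key => "YOUR_SECRET_KEY"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}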

Proxy_url for kibana in metricbeat.yml

My server requires a proxy to connect to Kibana. How can I specify that in the metricbeat.yml file? For output.elasticsearch there is an attribute called proxy_url, but I can't see one for Kibana.
From Metricbeat you can send beats data to Elasticsearch as well as set up dashboards in Kibana.
Do you need the proxy for the dashboard setup? If so, the proxy environment variables will work for you.
On Linux/Unix, simply export the proxy variables as below:
export http_proxy=http://host:port/
export https_proxy=http://host:port/
and then run the setup as below:
./metricbeat setup --dashboards
For sending only beats data, you already have proxy_url under the elasticsearch output module; a sketch putting both pieces together follows.
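A hedged metricbeat.yml sketch; proxy_url under output.elasticsearch is the attribute mentioned in the question, while the host and proxy values are assumptions:

# metricbeat.yml sketch: proxy for the Elasticsearch output; dashboard setup relies on the exported proxy variables
output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  proxy_url: "http://proxy.example.com:3128"
setup.kibana:
  host: "https://kibana.example.com:5601"    # reached via http_proxy/https_proxy during metricbeat setup --dashboards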

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an ElasticSearch Cluster. Elastic doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions out there:
Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes).
Look into the Shield plugin for Elasticsearch in order to secure your environment.
Put an nginx server in front of your cluster to act as a reverse proxy (see the sketch after this answer).
Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want access logs, you need either option 2 or option 3, but all of the solutions above will allow you to secure your ES environment.
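As an illustration of option 3, a minimal nginx reverse-proxy sketch that also produces an access log; the port, log path and htpasswd file are assumptions:

# nginx sketch: reverse proxy in front of Elasticsearch with basic auth and an access log
server {
    listen 8080;
    access_log /var/log/nginx/es_access.log;        # records each client IP and request URI
    location / {
        auth_basic "Elasticsearch";
        auth_basic_user_file /etc/nginx/es.htpasswd;
        proxy_pass http://127.0.0.1:9200;
    }
}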

Getting Logstash to talk to Elastic Search with HTTPS + Basic auth

I have Elasticsearch as part of the ELMA appliance. This appliance presents ES via HTTPS protected by basic auth. I have Logstash running on a separate machine, and this Logstash needs to send log data to ES. What is the right output configuration to use?
Thanks for any pointers.
-Raj
there is an option in the new version:
http://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-password
ssl
Value type is boolean
Default value is false
SSL configurations (HTTP only). Enable SSL.
Looks like Logstash's elasticsearch_http module does not support SSL, or does not handle self-signed certs. My solution was to disable SSL on the ElasticSearch httpd conf entry in the ELMA appliance.
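For Logstash versions whose elasticsearch output does support HTTPS and basic auth, the configuration looks roughly like the sketch below; the CA path and credentials are assumptions, and option names vary between versions (e.g. cacert vs. ssl_certificate_authorities):

# Logstash output sketch: HTTPS plus basic auth against Elasticsearch (option names vary by version)
output {
  elasticsearch {
    hosts => ["https://elma-appliance.example.com:9200"]
    user => "logstash"
    password => "${ES_PWD}"
    ssl => true
    cacert => "/etc/logstash/certs/elma-ca.pem"      # needed for self-signed certificates
  }
}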
