Can't get Centralized pipeline management working in Elastic Cloud X-Pack - elasticsearch

I'm trying to set up centralized pipeline management, but it is still not working.
I'm using an Elastic Cloud trial and Logstash running on a local VM.
My logstash.yml looks like:
xpack.management.elasticsearch.url: "https://xxx.eu-central-1.aws.cloud.es.io:xxx/"
xpack.management.enabled: true
xpack.management.elasticsearch.username: elastic
xpack.management.elasticsearch.password: password
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["apache", "cloudwatch_logs"]
cloud.auth: "elastic:xxxx"
cloud.id: "yyyy=="
path.data: /var/lib/logstash
path.logs: /var/log/logstash
I followed the instructions from https://www.elastic.co/guide/en/logstash/6.x/logstash-centralized-pipeline-management.html#logstash-centralized-pipeline-management and https://www.elastic.co/guide/en/logstash/6.x/configuring-centralized-pipelines.html
But Logstash isn't shipping anything to Elastic when I manually set up a conf file and pipeline on the Logstash VM (these were working fine against the hosted trial), and if I create a new pipeline from the Kibana UI, nothing happens beyond the pipeline being saved under Logstash pipeline management.
Any tip? Did I miss some steps?
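For what it's worth, one quick sanity check (host, port and password here are the placeholders from the config above) is to confirm that the management URL and credentials are even reachable from the Logstash VM:
curl -u elastic:password "https://xxx.eu-central-1.aws.cloud.es.io:xxx/"
If that doesn't return the cluster banner, the xpack.management settings never get a chance to work.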

Related

Shipping logs to my local Windows ELK/Logstash from a remote CentOS 7 machine using Filebeat

I have all three ELK components configured and running on my local Windows machine.
I have a log file on a remote CentOS 7 machine that I want to ship to my local Windows machine with the help of Filebeat. How can I achieve that?
I have installed Filebeat on CentOS from the rpm.
In the configuration file I have made the following changes:
commented out output.elasticsearch
and uncommented output.logstash (which IP of my Windows machine should I put here? How do I find that IP?)
AND
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - <path to my log file>
The flow of the data is:
Agent => Logstash => Elasticsearch
The agent can be any member of the Beats family, and you are using Filebeat, which is fine.
You have to configure all three pieces:
on Filebeat you configure filebeat.yml
on Logstash you configure logstash.conf
on Elasticsearch you configure elasticsearch.yml
I would personally start with logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  # I assume you just want the log to go straight into elasticsearch
}
output {
  elasticsearch {
    hosts => ["YOUR_ELASTICSEARCH_IP:9200"]
    index => "insertindexname"
  }
  stdout {
    codec => rubydebug
  }
}
This is a very minimal working configuration: Logstash will listen for input from Filebeat on port 5044. The filter block is needed when you want to parse the data. The output block is where you want to store the data; we are using the elasticsearch output plugin since you want to store the logs in Elasticsearch. stdout is super helpful for debugging your configuration file; add it and you will regret nothing, since it prints every event that is sent to Elasticsearch.
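One way to sanity-check the file before wiring Filebeat up (the path below assumes a Linux package install of Logstash and a hypothetical conf file name; on Windows use bin\logstash.bat instead):
# Validate the pipeline syntax without starting Logstash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf --config.test_and_exit
# Then run it for real
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf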
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /directory/of/your/file/file.log
output.logstash:
  hosts: ["YOUR_LOGSTASH_IP:5044"]
This is a very minimal working filebeat.yml. paths is where Filebeat will harvest the file from.
When you're done configuring the files, start Elasticsearch, then Logstash, then Filebeat, as sketched below.
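As a concrete sketch of that last step (assuming the rpm-installed Filebeat runs under systemd on the CentOS box; the Windows address you put into filebeat.yml is simply the IPv4 address that ipconfig reports there):
# On the CentOS machine, once Elasticsearch and Logstash are up on the Windows side:
sudo systemctl enable --now filebeat
sudo filebeat test output    # confirms Filebeat can reach Logstash on port 5044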
Let me know if you run into any difficulties.

ElasticSearch Connection Timed Out in EC2 Instance

I am setting up an ELK stack (Elasticsearch, Logstash and Kibana) on a single AWS EC2 instance, following the documentation from the elastic.co site.
TL;DR: I cannot access my Elasticsearch endpoint hosted on EC2 from a web URL. How do I fix that?
Type : m4.large
vCPU : 2
Memory : 8 GB
Storage: 25 GB (EBS)
Note : I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. Elasticsearch and Logstash are running as services, while Kibana is running via the command ./bin/kibana inside the kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get the expected JSON output (which means the service is running and accessible on port 9200).
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (Hosted in gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost' to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I have attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to Gofile.
The problem is the following line in your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host and not from the outside. According to the official documentation, you need to either specify 0.0.0.0 or a specific publicly accessible IP address, otherwise that won't work.
Note that you also need to configure the following two lines in order for the cluster to properly form:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
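Assuming the security group's inbound rules already allow TCP 9200 (the IP below is a placeholder), two quick checks after restarting the service are to confirm the bind address on the instance and then hit the endpoint from outside:
# On the instance: Elasticsearch should now listen on 0.0.0.0:9200, not 127.0.0.1:9200
sudo ss -tlnp | grep 9200
# From your own machine: should return the same JSON banner as the local curl
curl http://YOUR_ELASTIC_IP:9200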

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy an ELK stack on the OpenShift platform (OKD - v3.11) and am using Filebeat to automatically collect the logs.
The Kibana dashboard is up and the Elasticsearch & Logstash APIs are working fine, but Filebeat is not sending data to Logstash; I do not see any data arriving on the Logstash listener on port 5044.
I found on the Elastic forums that the following iptables command should resolve the issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives on the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I have followed to debug this issue are:
Check if Kibana is coming up,
Check if Elastic API's are working,
Check if Logstash is accessible from Filebeat.
Everything was working fine in my case. I raised the log level in filebeat.yml and found a "Permission Denied" error when Filebeat tried to access the Docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to Permissive with the following command:
sudo setenforce Permissive
After this ELK started to sync the logs.
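Note that setenforce only changes the mode for the running system. A small sketch of verifying the change and persisting it across reboots (standard SELinux tooling; adjust to your distribution):
getenforce    # should now report Permissive
# Persist the setting so it survives a reboot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config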

Kibana: Unable to revive connection: http://elastic-url:9200/

I installed on CentOS 8:
elasticsearch version 7.3.1
kibana version 7.3.1
curl -I localhost:9200/status is ok
curl -I localhost:5601/status --> kibana is not ready yet
On the machine with CentOS 7 (.226) everything is OK.
This is the Kibana log:
Can somebody help me please?
Elasticsearch 7.x.x requires cluster bootstrapping at first launch, and Kibana won't start unless Elasticsearch is ready and every node is running Elasticsearch 7.x.x.
I will write the steps as you would normally do them on a real machine, so that anybody else can follow along. In Docker it looks similar, except that you are working inside the containers.
Before we kick off, stop kibana and elasticsearch:
service kibana stop
service elasticsearch stop
killall kibana
killall elasticsearch
Make sure it's dead:
service kibana status
service elasticsearch status
Then head into /etc/elasticsearch/ and edit the elasticsearch.yml file. Add at the end of the file:
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
Where master-* equals the node.name of each node. Save and exit. Start Elasticsearch and then Kibana. On machines with lower memory (~4 GB, and probably in Docker too, as it normally gives containers 4 GB of memory) you may have to start Kibana first, let it "compile", stop it, start Elasticsearch, and then start Kibana again.
On machines managed by Puppet, make sure Puppet or cron is not running, just so Kibana/Elasticsearch aren't started too early.
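Before starting Kibana again, it can help to confirm the cluster actually bootstrapped; a quick check using the standard cluster health API:
service elasticsearch start
curl "localhost:9200/_cluster/health?pretty"    # wait for "status": "green" or "yellow"
service kibana start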
Here's source: https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html

Send logs to external Elasticsearch from OpenShift projects

I'm trying to send specific OpenShift project logs to an unsecured external Elasticsearch.
I have tried the solution in https://github.com/richm/docs/releases/tag/20190205142308, but found that it only works when Elasticsearch is secured.
Later I also tried the elasticsearch plugin by adding it to output-applications.conf.
output-applications.conf:
<match *.*>
  @type elasticsearch
  host xxxxx
  port 9200
  logstash_format true
</match>
All other files are the same as described in https://github.com/richm/docs/releases/tag/20190205142308 under "Application logs from specific namespaces/pods/containers".
I included output-applications.conf in the fluent.conf file.
In the fluentd logs I see nothing other than the message "[info]: reading config file path="/etc/fluent/fluent.conf"", and no data reaches Elasticsearch.
Can anyone tell how to proceed?
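If it helps, two things worth trying (inside the fluentd pod, since the config path from your log message is /etc/fluent/fluent.conf) are validating the config and raising fluentd's own log level:
fluentd --dry-run -c /etc/fluent/fluent.conf    # parses the config without starting
fluentd -c /etc/fluent/fluent.conf -v           # -v (or -vv) raises the log level
With verbose logging you should at least see whether the elasticsearch output plugin loads and whether it can reach host xxxxx on port 9200.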
