Install Filebeat locally or in the VM? - elasticsearch

I recently started learning ELK, and I succeeded in parsing my XML files locally. Now I would like to get access to my server so I can collect all of my XML files (updated every 30 seconds).
I have the IP address of my server, and my question is: should I install Filebeat locally and configure my filebeat.yml to access the server, or should I install Filebeat on the server and point it at my local address?

Filebeat is a shipper: it collects, aggregates, and forwards logs to your desired output (Logstash, Elasticsearch, etc.).
It works as an agent, so you need to install it on every node from which you want to collect logs. For instance, if you want to collect logs from your local machine, install Filebeat there; if you want to collect logs from the Logstash server itself, install Filebeat there. If you want to collect logs from both, Filebeat needs to be installed on both machines, with Logstash configured as the output.
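As a concrete sketch, a minimal filebeat.yml on a node shipping to Logstash might look like this (the input path and Logstash host are illustrative, and the filebeat.inputs syntax assumes Filebeat 6.3+):

filebeat.inputs:
- type: log
  paths:
    - /var/log/myapp/*.xml    # illustrative path to the XML files on this node

output.logstash:
  hosts: ["logstash-host:5044"]    # illustrative Logstash host, default Beats port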

But when I tried to install Filebeat on my server using
curl -L -O https://www.elastic.co/downloads/beats/filebeat/filebeat-6.3.1-amd64.deb
I get this message:
Could not resolve host: www.elastic.co; Name or service not known
The OS version of the server is: Linux version 3.10.0-693.17.1.el7.x86_64
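The "Could not resolve host" error means the server cannot resolve DNS names, which usually indicates it has no direct internet access. One hedged workaround sketch, assuming the standard artifacts.elastic.co URL for that version and an illustrative server name, is to download the package on a machine that does have access and copy it over:

# on a machine with internet access
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.1-amd64.deb
scp filebeat-6.3.1-amd64.deb user@server:/tmp/

# on the server
sudo dpkg -i /tmp/filebeat-6.3.1-amd64.deb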

Related

Path settings configuration for Logstash as a Service

I want to process logs from my DB into Kibana via Logstash. Presently I am able to update the logs manually by calling the command:
sudo /usr/share/logstash/bin/logstash -f /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf --pipeline.workers 1 --path.settings "/etc/logstash"
Now I want to automate the process by running Logstash as a service. I understand that placing the path.settings parameter in the config file (or another corresponding file) should solve the issue, but I have not been able to get any further.
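One hedged sketch: when Logstash runs as a service it reads its settings from /etc/logstash by default, so the pipeline can be declared in /etc/logstash/pipelines.yml (available in Logstash 6.0+; the pipeline id is illustrative):

- pipeline.id: testreport    # illustrative id
  path.config: "/data/Logstash_Config/Logstash_Config_File_for_TestReport.conf"
  pipeline.workers: 1

With this in place, starting the service (e.g. sudo systemctl start logstash) should pick up the pipeline without the manual command.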

Filebeat: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path

Looking at the logs in one of the Filebeat pods, I can see this:
2021-01-04T10:10:52.754Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:129 add_cloud_metadata: fetchMetadata ran for 2.351101ms
2021-01-04T10:10:52.754Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as openstack, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}
2021-01-04T10:10:52.755Z DEBUG [processors] processors/processor.go:120 Generated new processors: add_cloud_metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-04T10:10:52.755Z INFO instance/beat.go:392 filebeat stopped.
2021-01-04T10:10:52.755Z ERROR instance/beat.go:956 Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
As you can see, Filebeat stopped with the error:
data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
After searching for the problem on GitHub and the forums, I found this:
https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/4
which looks like my problem.
I'm using the default filebeat-kubernetes.yaml, and there is no information in the ELK / Filebeat docs on how to add unique data paths in filebeat-kubernetes.yaml.
Where do I add them, and how do I make them unique?
Thanks
I had the same problem. It means that your data path (/var/lib/filebeat) is locked by another Filebeat instance. Execute sudo systemctl stop filebeat (in my case) to ensure that you don't have a running Filebeat,
and then run Filebeat with sudo filebeat -e, which prints logs to the console.
I also tried the link that you shared, but it didn't help me. Here is another solution that may help you: https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/2
In addition to @Anton's answer: in one of my scenarios, I had a lock file in the data path. This could be /var/lib/filebeat/filebeat.lock, depending on the configuration. Delete the file and run sudo filebeat -e again.
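For reference, a minimal sequence for this case might be (the lock-file path depends on your path.data setting):

sudo systemctl stop filebeat
sudo rm /var/lib/filebeat/filebeat.lock    # i.e. path.data/filebeat.lock; adjust to your configuration
sudo filebeat -e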
If you want to run the Elastic stack as a service, the solution is just to restart the whole stack in this order:
Elasticsearch
Kibana
Logstash
Filebeat(s)
which is already suggested in this link.
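Assuming the stock systemd units with their default names, that restart order would be:

sudo systemctl restart elasticsearch
sudo systemctl restart kibana
sudo systemctl restart logstash
sudo systemctl restart filebeat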

Kibana setup on Ubuntu 17.10 for consuming log files from JBoss Fuse

Every day I get a new log file from JBoss Fuse. Examples:
fuse.log.2018-02-28
fuse.log.2018-03-01
fuse.log.2018-03-03
etc..
I want to load a log file into Kibana every day.
Now this is what I have done so far:
Installed Elasticsearch
Installed ingest-geoip
Installed Kibana on http://localhost:9200
Installed Filebeat
Installed Logstash
What do I do from here? When I go to Kibana, I only see the default "Add Data to Kibana" screen.
Thanks for any help.
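To sketch the missing step: Filebeat (or Logstash) has to be pointed at the daily files before anything shows up in Kibana. A minimal filebeat.yml input for the rolling Fuse logs might look like this (the log directory is an assumption, and the filebeat.inputs syntax assumes Filebeat 6.3+):

filebeat.inputs:
- type: log
  paths:
    - /opt/fuse/data/log/fuse.log.*    # assumed location of the daily log files

output.elasticsearch:
  hosts: ["localhost:9200"]

Once Filebeat starts shipping, an index pattern (e.g. filebeat-*) can be created in Kibana under Management to make the documents searchable.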

unable to start kibana process

I am trying to install Kibana using the RPM kibana-4.5.0-1.x86_64.rpm.
However, when I try to start the Kibana process, I get the prompt below:
Starting kibana....... unable to start process kibana.
To check the reason, I enabled the log file by setting the parameter below in kibana.yml:
logging.dest: /opt/kibana/kibana.log
However, no log file is getting created, and I am unable to identify why the Kibana process is not starting.
Any suggestion would be appreciated.
Please check:
RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5.
My suggestion is to install Kibana from the .tar.gz archive instead.
You can follow this link:
https://www.elastic.co/guide/en/kibana/current/targz.html
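For illustration, the .tar.gz install is roughly the following (assuming the kibana-4.5.0-linux-x64.tar.gz archive matching your version has already been downloaded from elastic.co):

tar -xzf kibana-4.5.0-linux-x64.tar.gz
cd kibana-4.5.0-linux-x64
./bin/kibana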

ELK - Shield auth problems

I'm trying to set up Shield for Elasticsearch, but I have run into some trouble.
When I try to start Elasticsearch like this:
/usr/share/elasticsearch/bin/elasticsearch
everything works as expected, but when I try to start/restart Elasticsearch like this:
/etc/init.d/elasticsearch start
I get the error described below:
[2015-02-17 21:44:09,662][ERROR][shield.audit.logfile ] [Tusk] [rest] [authentication_failed] origin_address=[/192.168.88.17:58291], principal=[es_admin], uri=[/_aliases?pretty=true]
OS: Ubuntu 12.04
Elasticsearch: 1.4.3
Shield: 1.0.1
Elasticsearch and Shield were running with default settings.
If your Elasticsearch configs are not in /usr/share/elasticsearch but, let's say, in /etc/elasticsearch,
then just move /usr/share/elasticsearch/config/shield to /etc/elasticsearch.
If you start Elasticsearch as the elasticsearch user, take care that the new /etc/elasticsearch/shield folder belongs to the elasticsearch user.
If that doesn't fix it, also see this:
http://www.elasticsearch.org/guide/en/shield/current/getting-started.html#_configuring_your_environment
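As a sketch, the move and ownership fix described above would be (paths as in the answer; the owner assumes the default elasticsearch service user):

sudo mv /usr/share/elasticsearch/config/shield /etc/elasticsearch/
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/shield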
The same thing happened to me when I tried to add Shield to our Elasticsearch cluster for auth-based access to Elasticsearch data.
I was on an Ubuntu 14.04 machine, and Elasticsearch was installed using a .deb package from the Elastic download link.
Elasticsearch was using the service startup script
/etc/init.d/elasticsearch
in which the configuration directory was set as:
# Elasticsearch configuration directory
CONF_DIR=/etc/$NAME
But when I installed the Shield plugin on Elasticsearch from this link
and tried to add a user to Shield by following the ES docs, using this command:
sudo bin/shield/esusers useradd es_admin -r admin
the Shield configuration was being updated in
/usr/share/elasticsearch/config/shield/
but the Elasticsearch server expected the configuration files to be in
/etc/elasticsearch/shield/
This mismatch between the Shield configuration files the server reads and the newly updated files containing the added users is what causes the authentication failure.
This can be solved either by moving
/usr/share/elasticsearch/config/shield/
to
/etc/elasticsearch/shield/
or by changing the configuration directory in
/etc/init.d/elasticsearch
to:
# Elasticsearch configuration directory
CONF_DIR=/usr/share/elasticsearch/config/
