Using filebeat with elasticsearch

I do not understand how to run Filebeat so that it sends its output to Elasticsearch.
This is from the filebeat.yml file:
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/access.log

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: 'filebeat_nginx'
Elasticsearch is up and running.
Now, how do I run Filebeat so it sends the log info to Elasticsearch?
If I go to the bin directory of Filebeat and run this command,
luvpreet@DHARI-Inspiron-3542:/usr/share/filebeat/bin$ sudo ./filebeat -configtest -e
then it shows:
filebeat2017/04/19 06:54:22.450440 beat.go:339: CRIT Exiting: error loading config file: stat filebeat.yml: no such file or directory
Exiting: error loading config file: stat filebeat.yml: no such file or directory
The filebeat.yml file is in the /etc/filebeat folder. How do I run it?
Please clarify the process for running this with Elasticsearch.

A typical filebeat command looks like this:
/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml \
-path.home /usr/share/filebeat -path.config /etc/filebeat \
-path.data /var/lib/filebeat -path.logs /var/log/filebeat
-c points at your config file, as noted in the comments above. path.home is where the Filebeat scripts live, path.config is where the config files are, path.data is where state (such as the registry) is maintained, and path.logs is where the filebeat process itself will log.
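For the layout in the question, where filebeat.yml lives in /etc/filebeat, a config test can then be run from any directory (a sketch using the question's paths and the Filebeat 5.x flags):

# validate the config without starting the shipper
sudo /usr/share/filebeat/bin/filebeat -configtest -e -c /etc/filebeat/filebeat.yml
# if the test passes, start it with the same -c flag
sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml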

If you have made the necessary changes in the /etc/filebeat/filebeat.yml file, you can start the service with "service filebeat start". After the service is started you can check it with "service filebeat status"; if something went wrong, you will see the errors there.
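As a sketch of that workflow (assuming a systemd-based distribution, where the service's startup errors also land in the journal):

sudo service filebeat start
sudo service filebeat status
# recent service output, including any startup errors
sudo journalctl -u filebeat --no-pager | tail -n 20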

1. If you have installed the RPM package, you will have the /etc/filebeat/filebeat.yml file. Edit the file to send the output to Elasticsearch and start it with "/etc/init.d/filebeat start".
2. If you have downloaded the binary and unpacked it, you can run it directly: "Downloads/filebeat-5.4.0-darwin-x86_64/filebeat -e -c location_to_your_filebeat.yml"

Related

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD - v3.11) and I am using Filebeat to automatically detect the logs.
The Kibana dashboard is up and the Elastic and Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I do not see any data arriving at the Logstash listener on port 5044.
I found a suggestion on the Elastic forums that the following iptables command would resolve my issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives at the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I followed to debug this issue were:
Check if Kibana is coming up,
Check if the Elastic APIs are working,
Check if Logstash is accessible from Filebeat.
Everything was working fine in my case. I raised the log level in filebeat.yml and found a "Permission Denied" error while Filebeat was accessing the docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to "Permissive" with the following command,
sudo setenforce Permissive
After this ELK started to sync the logs.
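For reference, a sketch of checking the current SELinux mode and making the permissive setting survive a reboot (assuming a Red Hat-style host with the usual /etc/selinux/config layout):

# show the current enforcement mode
getenforce
# switch to permissive until the next reboot
sudo setenforce Permissive
# persist the change across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config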

How to collect logs from different servers to a central server (Elasticsearch and Kibana)

I have been assigned the task of creating a central logging server. In my case there are many web app servers spread across different locations. My task is to get logs from these different servers and manage them on a central server running Elasticsearch and Kibana.
Question
Is it possible to get logs from servers that have different public IPs? If so, how?
How much resource (CPU, memory, storage) is required on the central server?
Things seen
I have only seen example setups where all the logs and applications are on the same machine.
I am looking for a way to send logs over public IP to Elasticsearch.
I would like to differ from Ishara's answer. You can ship logs directly from Filebeat to Elasticsearch without using Logstash if your logs are generic types (system logs, nginx logs, apache logs). With this approach you do not incur the extra cost and maintenance of Logstash, since Filebeat provides built-in parsing processors.
If you have a Debian-based OS on your server, I have prepared a shell script to install and configure Filebeat. You need to change the Elasticsearch server URL, and modify the second-to-last line based on the modules that you want to configure.
Regarding your first question: yes, you can run a Filebeat agent on each server and send data to a centralized Elasticsearch.
For your second question: it depends on the amount of logs the Elasticsearch server is going to process and store. It also depends on where Kibana is hosted.
# add the Elastic APT repository and install filebeat
sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y filebeat
sudo systemctl enable filebeat
# write the filebeat config; \${path.config} is escaped so that filebeat,
# not the shell, expands it
sudo bash -c "cat >/etc/filebeat/filebeat.yml" <<FBEOL
filebeat.inputs:

filebeat.config.modules:
  path: \${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.name: "filebeat-system"
setup.template.pattern: "filebeat-system-*"
setup.template.settings:
  index.number_of_shards: 1
setup.ilm.enabled: false

setup.kibana:

output.elasticsearch:
  hosts: ["10.32.66.55:9200", "10.32.67.152:9200", "10.32.66.243:9200"]
  indices:
    - index: "filebeat-system-%{+yyyy.MM.dd}"
      when.equals:
        event.module: system

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: warning
FBEOL
sudo filebeat modules enable system
sudo systemctl restart filebeat
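After the restart, a quick way to confirm that events are flowing (a sketch, reusing the first Elasticsearch host from the script) is to list the daily indices the config creates:

# the filebeat-system-* indices should appear and grow
curl -s 'http://10.32.66.55:9200/_cat/indices/filebeat-system-*?v'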
Yes, it is possible to get logs from servers that have different public IPs. You need to set up an agent like Filebeat (provided by Elastic) on each server that produces logs.
Each Filebeat instance will listen to the log files on its machine and forward them to the Logstash instance you specify in the filebeat.yml configuration file, like below:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path_to_your_log_1/ELK/your_log1.log
    - /path_to_your_log_2/ELK/your_log2.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["private_ip_of_logstash_server:5044"]
The Logstash server listens on port 5044 and streams all logs through the Logstash configuration files:
input {
  beats { port => 5044 }
}
filter {
  # your log filtering logic is here
}
output {
  elasticsearch {
    hosts => [ "elasticsearch_server_private_ip:9200" ]
    index => "your_index_name"
  }
}
In Logstash you can filter and split your logs into fields and send them to Elasticsearch.
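For instance, a minimal filter sketch (assuming nginx/apache-style access logs; COMBINEDAPACHELOG is one of Logstash's bundled grok patterns):

filter {
  grok {
    # split a combined-format access line into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}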
Resources depend on how much data you produce, your data retention plan, TPS, and your custom requirements. If you can provide some more details, I can give you a rough idea of the resource requirements.

Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Elasticsearch output is not configured/enabled

Struggling to get filebeat working.
According to the docs (https://www.elastic.co/guide/en/beats/filebeat/6.1/load-kibana-dashboards.html) dashboard loading is required.
So, in /etc/filebeat/filebeat.yml
setup.dashboards.enabled: true
but now I get:
beat.go:625: CRIT Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Elasticsearch output is not configured/enabled
Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Elasticsearch output is not configured/enabled
I was under the impression that Logstash output is all that's required and that Logstash takes care of output to Elasticsearch.
However, if I enable Elasticsearch output in filebeat (/etc/filebeat/filebeat.yml):
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
I get:
beat.go:625: CRIT Exiting: error unpacking config data: more then one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
Exiting: error unpacking config data: more then one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
I can't see your full configuration, but it seems you have set two different outputs for Filebeat. The service accepts only one output, so comment one of them out (e.g. output.elasticsearch), and then check the config file again:
filebeat -c /etc/filebeat/filebeat.yml
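If you want to keep the Logstash output and still load the Kibana dashboards, one documented approach (a sketch; it assumes Filebeat 6.x and Kibana reachable at localhost:5601) is to run the one-time setup with temporary overrides, so the dashboard loader talks to Elasticsearch directly:

# load dashboards once; the -E overrides apply only to this invocation
filebeat setup --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]' \
  -E setup.kibana.host=localhost:5601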

Unable to connect Filebeat to logstash for logging using ELK

Hi, I've been working on automated logging using the Elastic Stack. I have Filebeat reading logs from a path, with output set to Logstash over port 5044. The Logstash config has an input listening on 5044 and an output pushing to localhost:9200. The issue is I can't get it to work, and I have no idea what's happening. Below are the files:
My filebeat.yml path: /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /mnt/vol1/autosuggest/logs/*.log
    #- c:\programdata\elasticsearch\logs\*
<other commented stuff>
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["10.10.XX.XX:5044"]
# Optional SSL. By default is off.
<other commented stuff>
My logstash.yml path: etc/logstash/logstash.yml
<other commented stuff>
path.data: /var/lib/logstash
<other commented stuff>
path.config: /etc/logstash/conf.d
<other commented stuff>
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
http.host: "10.10.XX.XX"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
<other commented stuff>
path.logs: /var/log/logstash
<other commented stuff>
My logpipeline30aug.config file path: /usr/share/logstash
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:var0}%{SPACE}%{NOTSPACE}%{SPACE}(?<searchinfo>[^#]*)#(?<username>[^#]*)#(?<searchQuery>[^#]*)#(?<latitude>[^#]*)#(?<longitude>[^#]*)#(?<client_ip>[^#]*)#(?<responseTime>[^#]*)" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash30aug2017"
    document_type => "log"
  }
}
Please note: Elasticsearch, Logstash and Filebeat are all installed on the same machine with IP 10.10.XX.XX, and I've checked the firewall; it's not the issue for sure.
I checked that the Logstash and Filebeat services are both running. Filebeat is able to push data to Elasticsearch when configured to do so, and Logstash is able to push data to Elasticsearch when configured to do so.
Maybe the way I am executing the process is the issue.
I do a bin/logstash -f logpipeline30aug.config in /usr/share/logstash to start it, and then a /etc/init.d/filebeat start from the root directory.
Can someone please help? I've been trying everything for 3 days now, and I've gone through the documentation as well.
Your filebeat.yml looks invalid.
The output section lacks indentation:
output.logstash:
  hosts: ["10.10.XX.XX:5044"]
In general, check the correctness of the config files to ensure they're ok.
For instance, for filebeat, you can run:
filebeat -c /etc/filebeat/filebeat.yml -configtest
If there are any errors, it explains what the error is so you can fix it.
You can use a similar approach for the other ELK services as well.
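For instance, a similar check for the Logstash pipeline used in this question could be (a sketch; the --config.test_and_exit flag exists in Logstash 5.x and later):

# parse and validate the pipeline configuration, then exit
bin/logstash -f logpipeline30aug.config --config.test_and_exit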

Problems with the registry file in filebeat

I am using Filebeat to send data to Elasticsearch:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/kibana_access.log
  document_type: nginx
- input_type: log
  paths:
    - /var/log/redis/redis-server.log
  document_type: redis

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: '%{[type]}-log'

versions.2x.enabled: false
The configuration is correct, and it writes to Elastic perfectly. The issue is that it also sends the old lines to Elastic, which it should not do.
No new logs are being written, but in Kibana I can see the log count is the same as before whenever Filebeat sends the data again.
I tried checking the registry file, /var/lib/filebeat/registry, and it had information about the files which I had used earlier but am not using now.
{"source":"/var/log/filebeat/filebeat","offset":2514,"FileStateOS":{"inode":4591858,"device":2058},"timestamp":"2017-04-21T17:33:11.913352399+05:30","ttl":-2},{"source":"/var/log/postgresql/postgresql-2017-04-21_120121.log","offset":4485506,"FileStateOS":{"inode":3932558,"device":2058},"timestamp":"2017-04-21T18:11:56.65579033+05:30","ttl":-2}
This is the registry file.
I have set up a cron job which restarts Filebeat every minute and sends data to Elastic. I am using Ubuntu 16.04 and installed Filebeat as a deb package.
This is the registry file path in filebeat.full.yml --> ${path.data}/registry.
Please explain this behaviour, and also how to solve it.
I just deleted this folder:
rm -rf /var/lib/filebeat/
and the problem was solved.
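A narrower variant of the same fix (a sketch; it assumes the default data path of the deb package) is to remove only the registry file while Filebeat is stopped, so it forgets its read offsets without deleting the whole data directory:

sudo service filebeat stop
# discard the remembered file offsets
sudo rm /var/lib/filebeat/registry
sudo service filebeat start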
