Filebeat doesn't forward Docker Compose logs, why? - elasticsearch

I am following this tutorial to set up an ELK stack (VPS B) that will receive Docker/docker-compose container logs from VPS A, using Filebeat as the forwarder; my diagram is as shown below.
So far, I have managed to get all the interfaces with green ticks working. However, there are still some remaining issues that I am not able to understand, so I would appreciate it if someone could help me out a bit.
My main issue is that I don't get any Docker/docker-compose logs from VPSA into the Filebeat server of VPSB; nevertheless, I do get other logs from VPSA, such as rsyslog and authentication logs, on the Filebeat server of VPSB. I have configured my docker-compose file to forward the logs using rsyslog as the logging driver, and Filebeat then forwards that syslog to VPSB. At this point I do see logs from the Docker daemon itself, such as virtual interfaces going up/down, starting processes and so on, but not the "debug" logs of the containers themselves.
The configuration of the Filebeat client on VPSA looks like this:
root@VPSA:/etc/filebeat# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        # - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ipVPSB:5044"]
    bulk_max_size: 2048
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
  level: debug
The logging driver of one of the docker-compose services looks like this:
redis:
  logging:
    driver: syslog
    options:
      syslog-facility: user
Finally, I would like to ask whether it is possible to forward the logs natively from docker-compose to the Filebeat client on VPSA (the red arrow in the diagram), so that it can forward them to my VPSB.
Thank you very much,
Regards!

The issue seemed to be in the Filebeat client on VPSA: since it has to collect data from syslog, it has to start after rsyslog is up!
Updating its rc.d start order made it work:
sudo update-rc.d filebeat defaults 95 10
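For reference, defaults 95 10 registers the init script with start sequence 95 (late in the boot order, after rsyslog) and stop sequence 10. On a systemd-based system, a rough equivalent would be a unit override like the following sketch (created e.g. with systemctl edit filebeat; unit names assumed):
# /etc/systemd/system/filebeat.service.d/override.conf
[Unit]
# start Filebeat only once rsyslog is up
After=rsyslog.service
Wants=rsyslog.service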
My filebeat.yml, in case someone needs it:
root@VPSA:/etc/filebeat# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        - /var/log/syslog
        # - /var/log/*.log
      input_type: log
      ignore_older: 24h
      scan_frequency: 10s
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ipVPSB:5044"]
    bulk_max_size: 2048
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
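To verify, restart Filebeat and tail its own log file (path and name as configured above):
sudo service filebeat restart
tail -f /var/log/mybeat/mybeat.log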
Regards

Related

How to send custom logs in a specified path to filebeat running inside docker

I am new to Filebeat and ELK. I am trying to send custom logs using Filebeat directly to Elasticsearch. Both the ELK stack and Filebeat are running inside Docker containers. The custom logs are in home/username/docker/hello.log. Here is my filebeat.yml file:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /home/raju/elk/docker/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["my_ip:9200"]
And here is my custom log file:
This is a custom log file
Sending logs to elastic search
And this is the command I am using to run Filebeat:
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false
When I use the above command to run Filebeat, I can see the logs of the Docker containers on my Kibana dashboard. But I am struggling to make Filebeat read my custom logs from the location specified above and show the lines inside the log file on the Kibana dashboard.
Any help would be appreciated.
Filebeat inputs can generally accept multiple log file paths for harvesting. In your case, you just need to add the log file location to your log input's paths attribute, similar to:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /home/raju/elk/docker/*.log
      - /home/username/docker/hello.log
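One more thing to watch out for: since Filebeat itself runs inside a container, it can only harvest paths that exist in the container's filesystem, so the host directory holding hello.log has to be bind-mounted into the container as well, e.g. with an extra --volume flag (a sketch; the container-side target mirroring the host path is an assumption):
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/home/username/docker:/home/username/docker:ro" \
  docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false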

"No logs found" in grafana

I installed Loki, Grafana, and Promtail, and all three are running. On http://localhost:9080/targets, Ready is True, but the logs are not displayed in Grafana; the Explore section shows "No logs found".
promtail-local-config.yaml:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: ward_workstation
          agent: promtail
          __path__: D:/LOGs/*log
loki-local-config.yaml:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093
How can I solve this problem?
Perhaps you are running Loki on Windows?
In your Promtail varlogs job, the path "D:/LOGs/*log" is wrong: you cannot access Windows files from your Docker container directly.
You should mount your Windows directory into your container, like this:
promtail:
  image: grafana/promtail:2.5.0
  volumes:
    - D:/LOGs:/var/log
  command: -config.file=/etc/promtail/config.yml
  networks:
    - loki
Then everything will be OK.
Note that inside the Promtail container, the __path__ in your scrape config has to refer to the container-side mount point (/var/log above), not the Windows path; you can adjust both sides to make them match, as sketched below.
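For instance, with the D:/LOGs to /var/log mount above, the varlogs job would be adjusted like this (a sketch of the matching scrape config):
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: ward_workstation
          agent: promtail
          # container-side path of the mounted D:/LOGs directory
          __path__: /var/log/*log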
Here's some general advice on how to debug Loki, following the question's title:
(1) Check the Promtail logs
If you discover errors such as error sending batch, you need to fix your Promtail configuration:
level=warn ts=2022-10-12T16:26:20.667560426Z caller=client.go:369 component=client host=monitor:3100 msg="error sending batch, will retry" status=-1 error="Post \"http://loki:3100/loki/api/v1/push\": dial tcp: lookup *Loki* on 10.96.0.10:53: no such host"
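In this example the hostname loki could not be resolved, so the url in the clients section has to point at a name or address that is reachable from where Promtail runs, e.g. (a sketch; the service name is an assumption):
clients:
  # "loki" must resolve from the Promtail container, e.g. as a
  # docker-compose service name or a Kubernetes service DNS name
  - url: http://loki:3100/loki/api/v1/push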
(2) Open the Promtail config page and check whether Promtail has read your given configuration: http://localhost:3101/config
(3) Open the Promtail targets page http://localhost:3101/targets and check
whether your service is listed as Ready
whether the log file contains the wanted contents and is readable by Promtail. If you're using Docker or Kubernetes, I would log into the Promtail container and try to read the log file manually, as below.
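For example, with Docker, something along these lines (the container name promtail is an assumption):
# open a shell inside the Promtail container
docker exec -it promtail sh
# then check that the files are visible and readable
ls -l /var/log
head /var/log/syslog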
On the questioner's specific problem:
The questioner said that the services are shown as Ready on the targets page. So I recommend checking (1) the Promtail configuration and (3b) access to the log files (as in Frank's answer).

Filebeat unable to send logs to Kafka

Filebeat is unable to send logs from a particular folder, which is the application logs folder.
Things that have been tried:
Created a new topic in Kafka to retest the settings.
Checked file permissions for the folder and the files to send.
Updated Filebeat from 5.5 to 6.7.
Changed from filebeat.prospectors to filebeat.inputs.
Working configuration:
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log
    fields_under_root: true

output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
With this, I am able to see all the logs in "testtopic".
Not working configuration:
filebeat.inputs:
  - type: log
    paths:
      - /app/log/server/*.log
    fields_under_root: true

output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
Expected result: logs from the path /app/log/server/*.log should be sent to Kafka.

Filebeat is processing all the logs instead of the specified application logs

I have an app server where I have configured Filebeat (through Chef) to extract the logs and publish them to Logstash (on a separate ELK server), and subsequently to Elasticsearch and Kibana.
I have configured Filebeat to process logs only from /opt/app_logs/*.log, but it seems it is reading logs from other locations too: in the /etc/filebeat configuration directory I have filebeat.full.yml and other yml files generated automatically, and they seem to contain all those other file locations. Due to the resulting huge amount of logs, the Logstash service runs out of memory within minutes (as seen in logstash.log). How can I stop the other yml files from being autogenerated?
I tried to remove that file and also tried to comment out all the /var/log paths from the prospectors, but then Filebeat itself would not start.
filebeat.yml file:
filebeat:
  prospectors: []
  registry_file: "/var/lib/filebeat/registry"
  config_dir: "/etc/filebeat"
output:
  logstash:
    hosts:
      - elk_host:5044
    index: logstash-filebeat
shipper:
  name: serverA
  tags:
    - A
logging:
  to_files: 'true'
  files:
    path: "/var/log/filebeat"
    name: filebeat_log
    rotateeverybytes: '10485760'
  level: info
prospectors:
  - paths:
      - "/opt/app_logs/*.log"
    encoding: plain
    input_type: log
    ignore_older: 24h
The main problem with your configuration is that for Filebeat 1.2.3 you have the prospectors list defined twice, and the second one is not in the correct location.
The second problem is that you have defined config_dir as /etc/filebeat. config_dir is used to specify an additional directory in which to look for config files. It should never be set to /etc/filebeat because this is where the main config file should be located. See https://stackoverflow.com/a/39987501/503798 for usage information.
A third problem is that you have used string types in to_files and rotateeverybytes. They should be boolean and integer types respectively.
Here's how the config should look for Filebeat 1.x.
filebeat:
  registry_file: "/var/lib/filebeat/registry"
  config_dir: "/etc/filebeat/conf.d"
  prospectors:
    - paths:
        - "/opt/app_logs/*.log"
      encoding: plain
      input_type: log
      ignore_older: 24h
output:
  logstash:
    hosts:
      - elk_host:5044
    index: logstash-filebeat
shipper:
  name: serverA
  tags:
    - A
logging:
  to_files: true
  files:
    path: "/var/log/filebeat"
    name: filebeat_log
    rotateeverybytes: 10485760
  level: info
I highly recommend that you upgrade to Filebeat 5.x because it has better configuration validation using filebeat -configtest.
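For illustration, a drop-in file inside that config_dir could look like the following sketch; in Filebeat 1.x only the prospectors part of such files is processed (the file name app.yml is an assumption):
# /etc/filebeat/conf.d/app.yml
filebeat:
  prospectors:
    - paths:
        - "/opt/app_logs/*.log"
      input_type: log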

Filebeat doesn't forward data to logstash

I have a setup using Elasticsearch, Kibana, and Logstash on one VM and Filebeat on the slave machine. I managed to send syslog messages and logs from the auth.log file following the tutorial from here. In the Filebeat log I saw that the messages are published, but when I try to send a json file I don't see any publish event (I see just Flushing spooler because of timeout. Events flushed: 0).
My filebeat.yml file is:
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        # - /var/log/syslog
        # - /var/log/*.log
        - /home/slave/data_2/*
      input_type: log
      document_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.132.207:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
PLEASE NOTE that tabs are not allowed in your filebeat.yml! I used Notepad++ with View > Show Symbol > Show White Space and TAB. Sure enough, there was a TAB character on a blank line and Filebeat wouldn't start. Use filebeat -c filebeat.yml -configtest and it will give you more information.
Go into your Logstash input for Filebeat and comment out the tls section!
Don't forget to check your log file permissions. If everything is restricted to root, Filebeat won't have read access to it.
Set your file group to adm:
sudo chgrp adm /var/log/*.log
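Afterwards you can quickly confirm that the group and the group read bit are in place:
ls -l /var/log/*.log
# the group column should show adm, with the group read bit (r) set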
