Filebeat is not creating an index in Elasticsearch

I'm setting up Filebeat to send logs to Elasticsearch. This is my filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - '/var/log/project/*.log'
  json.message_key: message

output.elasticsearch:
  hosts: ["localhost:9200"]
I have this file /var/log/project/test.log with this content:
{ "message": "This is a test" }
and I was expecting this log to be sent to Elasticsearch. Elasticsearch is running in a Docker container on localhost at port 9200.
When I run Filebeat (also in Docker), no index is created in Elasticsearch, so I don't see any data in Kibana.
Why is that? Isn't Filebeat supposed to create the index automatically?

Solved! I wasn't sharing the logs directory between the host and the Filebeat container, so there were no logs to send.
I added a volume when running Filebeat:
docker run -it -v $(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/log/project/:/var/log/project/ docker.elastic.co/beats/filebeat:6.4.0
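Once the directory is mounted and Filebeat has picked up the file, a quick way to confirm that the index was actually created (assuming Elasticsearch on localhost:9200, as above):

curl 'localhost:9200/_cat/indices?v'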

You can also set a custom index name, as below:

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "test-%{+yyyy.MM.dd}"

Related

How to send custom logs in a specified path to filebeat running inside docker

I am new to Filebeat and ELK. I am trying to send custom logs to Elasticsearch directly using Filebeat. Both the ELK stack and Filebeat are running inside Docker containers. The custom logs are in the file /home/username/docker/hello.log. Here is my filebeat.yml file:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["my_ip:9200"]
And here is my custom log file:
This is a custom log file
Sending logs to elastic search
And this is the command I am using to run Filebeat:
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false
When I use the above command to run Filebeat, I can see the logs of the Docker containers on my Kibana dashboard. But I am struggling with how to make Filebeat read my custom logs from the location specified above and show me the lines inside the log file on the Kibana dashboard.
Any help would be appreciated.
Filebeat inputs can generally accept multiple log file paths for harvesting. In your case, you just need to add the log file location to your log input's paths attribute, similar to:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log
    - /home/username/docker/hello.log
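Also note that, since Filebeat itself runs in a container here, the host directory containing the custom logs has to be mounted into that container, or Filebeat won't see the files at all (the same pitfall as in the first question above). A sketch of the run command, assuming the /home/username/docker path from the question:

docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/home/username/docker:/home/username/docker:ro" \
  docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false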

Filebeat Kubernetes cannot output to ElasticSearch

Filebeat on Kubernetes cannot output to Elasticsearch; Elasticsearch itself is OK.
Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
Kubernetes
I'm using an nginx app to test (image=nginx:latest). The Deployment annotation has been added:
co.elastic.logs/enabled: "true"
(pod.yaml, deployed on node1, omitted)
But nothing is output to Elasticsearch; logs and indexes for the related input are not seen.
(Filebeat pod logs from node1 omitted)
I expect Filebeat to collect the logs of the specified container (Pod) into Elasticsearch.
@baymax, first off, you don't need to explicitly define the property anywhere:
co.elastic.logs/enabled: "true"
since Filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config, which ensures Filebeat will only read the log files of pods that are annotated as above; however, you haven't provided any template config to be used for reading such log files.
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your Filebeat logs, do you see any harvesters being started, handles created, and events published? Posting a snapshot of logs doesn't give a clear picture. Maybe try starting Filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
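If the goal is simply to get the annotated pod's logs flowing, one option (a sketch only, assuming the diagnosis above is right) is to enable the default config, so every pod gets the container input below unless it opts out with the co.elastic.logs/enabled: "false" annotation:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: true   # collect all pods by default; opt out per pod via annotation
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log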

How to decode JSON in ElasticSearch load pipeline

I set up Elasticsearch on AWS and I am trying to load an application log into it. The twist is that each application log entry is in JSON format, like
{"EventType":"MVC:GET:example:6741/Common/GetIdleTimeOut","StartDate":"2021-03-01T20:46:06.1207053Z","EndDate":"2021-03-01","Duration":5,"Action":{"TraceId":"80001266-0000-ac00-b63f-84710c7967bb","HttpMethod":"GET","FormVariables":null,"UserName":"ZZZTHMXXN"} ...}
So, I am trying to unwrap it. The Filebeat docs suggest that there is a decode_json_fields processor; however, I am getting the message fields in Kibana as a single JSON string; nothing is unwrapped.
I am new to Elasticsearch, but I am not going to use that as an excuse not to do analysis first. It's only an explanation that I am not sure which information is helpful for answering the question.
Here is filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/opt/logs/**/*.json

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]

output.logstash:
  hosts: ["localhost:5044"]
And here is Logstash configuration file:
input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => ["https://search-blah-blah.us-west-2.es.amazonaws.com:443"]
    ssl => true
    user => "user"
    password => "password"
    index => "my-logs"
    ilm_enabled => false
  }
}
I am still trying to understand the filtering and grok parts of Logstash, but it seems that it should work the way it is. Also, I am not sure where the actual tags on the messages come from (probably from Logstash or Filebeat), but that seems irrelevant as well.
UPDATE: The AWS documentation doesn't give an example of loading through Filebeat alone, without Logstash.
If I don't use Logstash (just Filebeat) and have the following section in filebeat.yml:
output.elasticsearch:
  hosts: ["https://search-bla-bla.us-west-2.es.amazonaws.com:443"]
  protocol: "https"
  #index: "mylogs"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "username"
  password: "password"
I am getting the following errors.
If I use index: "mylogs", I get:
setup.template.name and setup.template.pattern have to be set if index name is modified
And if I don't use index (where would the data go in ES then?), I get:
Failed to connect to backoff(elasticsearch(https://search-bla-bla.us-west-2.es.amazonaws.com:443)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Filebeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Filebeat so it can verify the license.: unauthorized access, could not connect to the xpack endpoint, verify your credentials
If transmitting via Logstash works in general, add a filter block as Val proposed in the comments and use the json plugin/filter: elastic.co/guide/en/logstash/current/plugins-filters-json.html - it automatically parses the JSON into Elasticsearch fields.
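A minimal sketch of that filter block, assuming the JSON document arrives in the default message field:

filter {
  json {
    source => "message"
  }
}

Alternatively, if you want to stay on the Filebeat side, decode_json_fields takes a target option; an empty target writes the decoded keys into the event root (a sketch, not tested against this exact setup):

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true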

Repeated content collection problem while using Filebeat

Recently we started using Filebeat to collect our system logs into Elasticsearch via:
${local_log_file} -> filebeat -> kafka -> logstash -> elasticsearch -> kibana
While testing our system, we found a scenario where Filebeat collects logs repeatedly, meaning that it collects logs from the start of the file whenever there is a change.
here is my configuration for filebeat:
filebeat.prospectors:
- input_type: log
  paths:
    - /home/XXX/exp/*.log
  scan_frequency: 1s
  #tail_files: true

#================================ Outputs =====================================

#----------------------------- Logstash output --------------------------------
#output.logstash:
#  hosts: ["localhost:5044"]

#----------------------------- Kafka output -----------------------------------
output.kafka:
  enabled: true
  hosts: ["10.10.1.103:9092"]
  topic: egou

#----------------------------- console output ---------------------------------
output.console:
  enabled: true
  pretty: true
Notice:
We construct the log files manually, and we are sure that there is a blank line at the end of the file.
To see what is being shipped, we enabled output.console.
Once content is appended to the end of the log file, Filebeat collects from the beginning of the file. But we want it to fetch only the changes to the file.
The Filebeat version is 5.6.X.
Hope someone can offer a useful hint.
I think this is because the editor you are using creates a new file on save, with new meta-data. Filebeat identifies the state of a file using its meta-data (such as the inode), not its content.
try,
echo "something" >> /path/to/file.log
ref: https://discuss.elastic.co/t/filebeat-repeatedly-sending-old-entries-in-log-file/55796
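A quick way to see the difference (a sketch; exact editor behavior varies):

ls -i /path/to/file.log                      # note the inode number
echo "appended line" >> /path/to/file.log
ls -i /path/to/file.log                      # same inode: Filebeat resumes from its saved offset

Some editors save by writing a new file and renaming it over the old one; the inode changes, so Filebeat treats it as a brand-new file and reads it from the beginning.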

Filebeat doesn't forward data to logstash

I have a setup with Elasticsearch, Kibana, and Logstash on one VM and Filebeat on the slave machine. I managed to send syslog messages and logs from the auth.log file following the tutorial from here. In the Filebeat log I saw that the messages were published, but when I try to send a JSON file I don't see any publish event (I see just "Flushing spooler because of timeout. Events flushed: 0").
My filebeat.yml file is
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        # - /var/log/syslog
        # - /var/log/*.log
        - /home/slave/data_2/*
      input_type: log
      document_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.132.207:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
PLEASE NOTE that tabs are not allowed in your filebeat.yml! I used Notepad++ and View > Show Symbol > Show White Space and TAB. Sure enough, there was a TAB character in a blank line and Filebeat wouldn't start. Use filebeat -c filebeat.yml -configtest and it will give you more information.
Go into your Logstash input for Filebeat and comment out the tls section!
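That is, something like this in the Logstash pipeline config (a sketch; the key path is a guess, only the certificate path appears in the question's filebeat.yml):

input {
  beats {
    port => 5044
    # ssl => true
    # ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    # ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

If you disable TLS on the Logstash side, remember to remove the tls section from filebeat.yml as well, or the two ends won't agree on the protocol.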
Don't forget to check your log file permissions. If everything is owned by root, Filebeat won't have read access to it.
Set your file group to adm:
sudo chgrp adm /var/log/*.log
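A quick check before and after (assuming standard /var/log layout):

ls -l /var/log/*.log            # inspect owner, group, and mode of each file
sudo chmod g+r /var/log/*.log   # make sure the adm group can actually read them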
