Error in Filebeat logs - not able to view data in kibana - elasticsearch

I recently upgraded Filebeat to 7.17.7. Elasticsearch, Kibana and Filebeat are all on 7.17.7. However, I am not able to see the logs in Kibana, because Filebeat is not sending them to Elasticsearch. The Filebeat log shows this error:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Connection marked as failed because the onConnect callback failed: resource 'filebeat-7.17.7' exists, but it is not an alias
Can someone help me figure out the cause of this error and how to fix it?
Restarting Filebeat didn't help.
Filebeat config:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/www/vhosts/rshop/current/var/log/*.log
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.ilm.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "r-logs-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.regexp:
        log.file.path: '^.+\/var\/log\/recalculation\.log$'
  pipelines:
    - pipeline: "filebeat-6.8.7-monolog-pipeline"
      when.or:
        - regexp:
            log.file.path: '^.+\/var\/log\/recalculation\.log$'
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0755

@NKumar most likely it's an upgrade issue from legacy to the new index templates, which will happen if you don't mark the template for overwriting.
Can you share which version of the stack you upgraded to 7.17 from?
The quick fix would be to add an alias to your Filebeat index:
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "filebeat-7.17.7",
        "alias": "filebeat-7.17.7_1",
        "is_write_index": true
      }
    }
  ]
}
A more persistent solution would be to let Filebeat overwrite the template by adding the following top-level settings to filebeat.yml:
setup.template.enabled: true
setup.template.overwrite: true
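For context, here is a minimal sketch of how those settings sit in the 7.17.7 filebeat.yml from the question; the note about rerunning setup reflects the usual workflow rather than anything stated in the thread:

# filebeat.yml (sketch) -- template handling after the 7.17.7 upgrade
setup.ilm.enabled: false          # already disabled in the config above
setup.template.enabled: true      # load the index template on setup
setup.template.overwrite: true    # replace the existing (legacy) template

output.elasticsearch:
  hosts: ["localhost:9200"]
# after changing these, running "filebeat setup --index-management" pushes the updated template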

Related

failed to publish events: temporary bulk send failure

When I try to create multiple indices in filebeat.yml and output to Elasticsearch, I get a temporary bulk send failure error. This happens only when I disable ILM. Can anyone help?
Below is the Filebeat config:
filebeat.inputs:
- type: filestream
  id: denali
  enabled: true
  paths:
    - /var/log/denali/denali.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^(\d{4}-\d{2}-\d{2})'
        negate: true
        match: after
  fields:
    app_id: denali
- type: filestream
  id: freeswitch
  enabled: true
  paths:
    - /var/log/freeswitch/freeswitch.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^((\d|[a-z]|-)+ \d{4}-\d{2}-\d{2}|\d{4}-\d{2}-\d{2})'
        negate: true
        match: after
  fields:
    app_id: freeswitch
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.enabled: true
setup.ilm.enabled: false
setup.template.overwrite: true
setup.template.name: "index-%{[agent.version]}"
setup.template.pattern: "index-%{[agent.version]}-*"
output.elasticsearch:
  hosts: ["ip:port"]
  index: "index-%{[agent.version]}-%{[fields.app_id]:other}-%{+yyyy.MM.dd}"
  protocol: "http"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.name", "agent.type", "agent.version", "cloud.account.id", "cloud.provider", "cloud.service.name", "container.id", "container.image.name", "container.labels.COMMIT", "container.labels.PIPELINE_URL", "container.labels.PROJECT_NAME", "container.labels.PROJECT_URL", "container.labels.SOURCE_BRANCH", "container.labels.TimeStamp", "container.labels.RELEASEARTIFACT_VERSION", "container.labels.com_docker_compose_config-hash", "container.labels.com_docker_compose_container-number", "container.labels.com_docker_compose_oneoff", "container.labels.com_docker_compose_project", "container.labels.com_docker_compose_project_config_files", "container.labels.com_docker_compose_project_working_dir", "container.labels.com_docker_compose_service", "container.labels.com_docker_compose_version", "ecs.version", "host.architecture", "host.containerized", "host.id", "host.mac", "host.os.codename", "host.os.family", "host.os.kernel", "host.os.name", "host.os.platform", "host.os.type", "host.os.version", "log.offset"]
@Ramanichandran can you please provide the error logs from Filebeat? Also, do you see any errors in the Elasticsearch logs when Filebeat tries to send logs for ingestion?
I don't believe it's due to the creation of multiple indices, since you are essentially creating only three. In my own use case Filebeat creates about 15 indices with a config similar to yours and ILM disabled, and it works just fine.
It's worth trying to set the following attributes under output.elasticsearch:
bulk_max_size: 25
bulk_max_bytes: 104857600
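For placement, a minimal sketch of the output section from the config above with those limits applied; bulk_max_size is the documented batch-size option, and both values are starting points rather than recommendations:

output.elasticsearch:
  hosts: ["ip:port"]
  protocol: "http"
  index: "index-%{[agent.version]}-%{[fields.app_id]:other}-%{+yyyy.MM.dd}"
  bulk_max_size: 25           # max events per bulk request
  bulk_max_bytes: 104857600   # byte cap suggested above; verify it is supported by your release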

FileBeat Setup Error : Payload content length greater than maximum allowed: 1048576

We are working on shipping logs from a server to Elasticsearch with Filebeat.
We are getting the issue below while running the Filebeat setup.
Response: {"statusCode":413,"error":"Request Entity Too Large","message":"Payload content length greater than maximum allowed: 1048576"}
We have added http.max_content_length: 1200mb in elasticsearch.yml and server.maxPayloadBytes: 26214400 in kibana.yml, but the issue is still there.
How can we resolve this issue?
Here are the YML files we are using for Filebeat, Elasticsearch and Kibana.
FileBeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - c:\logs\XXX*
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "0.0.0.0:0000"
  username: "XXXX"
  password: "XXXX"
output.elasticsearch:
  hosts: ["0.0.0.0:0000"]
  username: "XXXX"
  password: "XXXX"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
ElasticSearch.yml
- module: elasticsearch
  server:
    enabled: true
  gc:
    enabled: true
  audit:
    enabled: true
  slowlog:
    enabled: true
  deprecation:
    enabled: true
  access:
    var.paths: ["/var/logs/XXX/XXX.log"]
http.max_content_length: 1200mb
Kibana.yml
- module: kibana
  log:
    enabled: true
  audit:
    enabled: true
server.maxPayloadBytes: 26214400
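For reference, http.max_content_length is an Elasticsearch node setting and server.maxPayloadBytes is a Kibana server setting, so they live in the services' own configuration files rather than in the Filebeat module files shown above. A minimal sketch, assuming default package installs:

# /etc/elasticsearch/elasticsearch.yml  (Elasticsearch service config)
http.max_content_length: 1200mb

# /etc/kibana/kibana.yml  (Kibana service config)
server.maxPayloadBytes: 26214400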

Filebeat 7.9.3 change index is not working and it always creates default filebeat-7.9.3-2020.11.04-000001

I tried this
https://www.elastic.co/guide/en/beats/filebeat/master//change-index-name.html#change-index-name and
https://discuss.elastic.co/t/index-management-change-index-name-in-filebeat/202876
with filebeat-7.9.3 and kibana-7.9.2 in a Windows environment. However, Kibana doesn't create the index name that I specified in filebeat.yml.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  paths:
    - C:\FREESOFT\myfilebeatlogs\*
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "customindexname-%{+yyyy.MM.dd}"
I also tried with:
index: "myindexname-%{+yyyy.MM.dd}"
setup.template.enabled: false
setup.template.name: "myindexname"
setup.template.pattern: "myindexname-*"
Kindly help me with this. I want to create a custom index and send data to it from Filebeat. Past questions on Stack Overflow didn't solve my problem.
The config below works for me and creates the index name I want:
filebeat.inputs:
- type: log
  paths:
    - "/var/log/*.log"
setup.ilm.enabled: false
setup.template.overwrite: true
output.elasticsearch:
  hosts: ["<my-es-host>"]
  index: "foo-%{+yyyy.MM}"
  username: '${ELASTICSEARCH_USERNAME:elastic}'
  password: '${ELASTICSEARCH_PASSWORD:elastic}'
setup.template:
  name: 'foo'
  pattern: 'foo-*'
  enabled: false
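Applied back to the asker's 7.9.3 config, the key points are disabling ILM (while ILM is enabled, the custom index setting is ignored and the default filebeat-7.9.3-* write alias is used) and giving the template a matching name and pattern. A minimal sketch using the customindexname from the question:

# filebeat.yml (sketch, Filebeat 7.9.3)
setup.ilm.enabled: false                      # otherwise the custom index name is ignored
setup.template.name: "customindexname"
setup.template.pattern: "customindexname-*"
setup.template.overwrite: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "customindexname-%{+yyyy.MM.dd}"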

how to exclude logs/events in journalbeat

We are using Journalbeat to push logs from a Kubernetes cluster to Elasticsearch. It is working fine and pushing the logs. However, it is also pushing events like "200 OK" and "INFO" which we do not want. The journalbeat.yaml is as follows:
journalbeat.yaml
journalbeat.yml: |
  name: "${NODENAME}"
  journalbeat.inputs:
  - paths: []
    seek: cursor
    cursor_seek_fallback: tail
  processors:
    - add_kubernetes_metadata:
        host: "${NODENAME}"
        in_cluster: true
        default_indexers.enabled: false
        default_matchers.enabled: false
        indexers:
          - container:
        matchers:
          - fields:
              lookup_fields: ["container.id"]
    - decode_json_fields:
        fields: ["message"]
        process_array: false
        max_depth: 1
        target: ""
        overwrite_keys: true
    - drop_event.when:
        or:
          - regexp.kubernetes.pod.name: "filebeat-.*"
          - regexp.kubernetes.pod.name: "journalbeat-.*"
          - regexp.kubernetes.pod.name: "nginx-ingress-controller-.*"
          - regexp.kubernetes.pod.name: "prometheus-operator-.*"
  setup.template.enabled: false
  setup.template.name: "journal-${ENVIRONMENT}-%{[agent.version]}"
  setup.template.pattern: "journal-${ENVIRONMENT}-%{[agent.version]}-*"
  setup.template.settings:
    index.number_of_shards: 10
    index.refresh_interval: 10s
  output.elasticsearch:
    hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
    username: '${ELASTICSEARCH_USERNAME}'
    password: '${ELASTICSEARCH_PASSWORD}'
    index: "journal-${ENVIRONMENT}-system-%{[agent.version]}-%{+YYYY.MM.dd}"
    indices:
      - index: "journal-${ENVIRONMENT}-k8s-%{[agent.version]}-%{+YYYY.MM.dd}"
        when.has_fields:
          - 'kubernetes.namespace'
How can I exclude events like "INFO" and "200 OK"?
As far as I'm aware there is no way to exclude logs in Journalbeat. It works the other way around: you tell it which input to look for.
You should read about the input configuration:
By default, Journalbeat reads log events from the default systemd journals. To specify other journal files, set the paths option in the journalbeat.inputs section of the journalbeat.yml file. Each path can be a directory path (to collect events from all journals in a directory), or a file path.
journalbeat.inputs:
- paths:
    - "/dev/log"
    - "/var/log/messages/my-journal-file.journal"
Within the configuration file, you can also specify options that control how Journalbeat reads the journal files and which fields are sent to the configured output. See Configuration options for a list of available options.
Get familiar with the configuration options and use the translated fields to target exactly the input you want.
{beatname_lc}.inputs:
- id: consul.service
  paths: []
  include_matches:
    - _SYSTEMD_UNIT=consul.service
- id: vault.service
  paths: []
  include_matches:
    - _SYSTEMD_UNIT=vault.service
You should use this to target the inputs you want pushed to Elasticsearch.
As an alternative to Journalbeat you could use Filebeat, where the exclude might look like this:
type: log
paths:
{{ range $i, $path := .paths }}
  - {{$path}}
{{ end }}
exclude_files: [".gz$"]
exclude_lines: ['.*INFO.*']
Hope this helps you a bit.
To apply a filter, use:
logging.level: warning
Use the following to drop events from journalbeat.service:
processors:
  - drop_event:
      when:
        equals:
          systemd.unit: "journalbeat.service"
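To drop the "INFO" and "200 OK" events the question asks about, the same drop_event processor can be given a regexp condition instead. This is a minimal sketch and assumes the noise appears in the message field of the journal events:

processors:
  - drop_event:
      when:
        or:
          - regexp:
              message: 'INFO'
          - regexp:
              message: '200 OK'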

Tag a message on the filebeat side to be able to filter on kibana ( HTTP response codes )

I have this configuration:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/neutron/*.log
    - /var/log/nova/*.log
    - /var/log/keystone/keystone.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["sdsds"]
I would like to tag a log line if it contains the following pattern:
message:INFOHTTP*200*
I want to create a query in Kibana to filter on an HTTP response code tag. How can I create this? Can you help me create the condition with tags?
These response codes are in the nova-api and neutron server logs.
I don't want to actually filter out the logs; I want to have everything in Elasticsearch, I just want to add a tag to these kinds of logs.
UPDATE:
I managed to figure something out, but I'm not sure it's the best way, because I have many response codes:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/neutron/*.log
    - /var/log/keystone/keystone.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  include_lines: ["status: 200"]
  fields_under_root: true
  fields:
    httpresponsecode: 200
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
Do I have to repeat these four lines multiple times?
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/keystone/keystone.log
    - /var/log/neutron/*.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 200"]
  fields:
    httpresponsecode: 200
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 202"]
  fields:
    httpresponsecode: 202
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 204"]
  fields:
    httpresponsecode: 204
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 207"]
  fields:
    httpresponsecode: 207
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 403"]
  fields:
    httpresponsecode: 403
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 404"]
  fields:
    httpresponsecode: 404
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 500"]
  fields:
    httpresponsecode: 500
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["HTTP 503"]
  fields:
    httpresponsecode: 503
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: [
What is the best way to do this for multiple files and multiple codes?
UPDATE2:
My solution doesn't work: at the beginning it sends data, and then it stops completely.
I hope you can help me.
I hope I understood your question correctly; if so, I would go the grok route (in Logstash).
If you know that your status field always looks like this, then why not use a pattern like this:
match => {
  "message" => "<prepending patterns> status: %{NUMBER:httpresponsecode} <patterns that follow>"
}
This would create a field called httpresponsecode, filled with the number that follows the string "status: ".
However, based on the ECS format, I'd rather call the field something else, like
http.response.status(.keyword)
As for your specific log line, a valid grok pattern might look like this:
%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}
Find the grok patterns for Logstash in the Logstash repository.
Use the Grok Debugger included in Kibana to see how your pattern would match.
Rename the fields accordingly.
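If the tagging should happen on the Filebeat side, as the title asks, an alternative to repeating the prospector for every status code is a single nova input plus conditional add_fields processors in filebeat.yml, reusing the asker's httpresponsecode field. This is a minimal sketch and assumes a Filebeat release that ships the add_fields processor (7.x); on the older prospector-based 6.x versions this processor may not be available, in which case the Logstash grok route above is the way to go:

processors:
  - add_fields:
      when.regexp.message: 'status: 200'
      target: ""                    # write the field at the event root, like fields_under_root
      fields:
        httpresponsecode: 200
  - add_fields:
      when.regexp.message: 'status: 404'
      target: ""
      fields:
        httpresponsecode: 404
  # ...one add_fields entry per status code of interest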
