In our cluster, some apps emit multiline logs, and the problem is that the log structure differs from app to app.
How can we set up an 'if' condition that applies the following settings only where they are needed?
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after
Our current config:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
          - drop_event:
              when:
                contains:
                  container.image.name: "kibana"
    output.logstash:
      hosts: ["logstash-listener:5044"]
You need to use autodiscover (either the Docker or the Kubernetes provider) with template conditions.
You will probably need at least two templates: one for the containers that emit multiline messages and another for all other containers (a sketch of the second template follows the example below).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition: # <--- your multiline condition goes here
            contains:
              kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              multiline:
                pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
                negate: true
                match: after
              processors:
                - add_kubernetes_metadata:
                    host: ${NODE_NAME}
                    matchers:
                      - logs_path:
                          logs_path: "/var/log/containers/"
                - drop_event:
                    when:
                      contains:
                        container.image.name: "kibana"
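For the second template (containers that don't need multiline handling), here is a minimal sketch that could be appended to the same templates list; the inverted namespace condition is an assumption, so adjust it to however you tell your apps apart:
      templates:
        # ... multiline template from above ...
        # Fallback template for containers that log single-line messages
        - condition:
            not:
              contains:
                kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log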
When I try to create multiple indices in filebeat.yml and output to Elasticsearch, I get a "temporary bulk send failure" error. This happens only when I disable ILM. Can anyone help?
Below is the Filebeat config:
filebeat.inputs:
  - type: filestream
    id: denali
    enabled: true
    paths:
      - /var/log/denali/denali.log
    parsers:
      - multiline:
          type: pattern
          pattern: '^(\d{4}-\d{2}-\d{2})'
          negate: true
          match: after
    fields:
      app_id: denali
  - type: filestream
    id: freeswitch
    enabled: true
    paths:
      - /var/log/freeswitch/freeswitch.log
    parsers:
      - multiline:
          type: pattern
          pattern: '^((\d|[a-z]|-)+ \d{4}-\d{2}-\d{2}|\d{4}-\d{2}-\d{2})'
          negate: true
          match: after
    fields:
      app_id: freeswitch
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.enabled: true
setup.ilm.enabled: false
setup.template.overwrite: true
setup.template.name: "index-%{[agent.version]}"
setup.template.pattern: "index-%{[agent.version]}-*"
output.elasticsearch:
  hosts: ["ip:port"]
  index: "index-%{[agent.version]}-%{[fields.app_id]:other}-%{+yyyy.MM.dd}"
  protocol: "http"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.name", "agent.type", "agent.version", "cloud.account.id", "cloud.provider", "cloud.service.name", "container.id", "container.image.name", "container.labels.COMMIT", "container.labels.PIPELINE_URL", "container.labels.PROJECT_NAME", "container.labels.PROJECT_URL", "container.labels.SOURCE_BRANCH", "container.labels.TimeStamp", "container.labels.RELEASEARTIFACT_VERSION", "container.labels.com_docker_compose_config-hash", "container.labels.com_docker_compose_container-number", "container.labels.com_docker_compose_oneoff", "container.labels.com_docker_compose_project", "container.labels.com_docker_compose_project_config_files", "container.labels.com_docker_compose_project_working_dir", "container.labels.com_docker_compose_service", "container.labels.com_docker_compose_version", "ecs.version", "host.architecture", "host.containerized", "host.id", "host.mac", "host.os.codename", "host.os.family", "host.os.kernel", "host.os.name", "host.os.platform", "host.os.type", "host.os.version", "log.offset"]
@Ramanichandran, can you please provide the error logs from Filebeat? Also, do you see any errors in the ES logs when Filebeat is trying to send logs for ingestion?
I don't believe it's due to the creation of multiple indices, since you are essentially creating only 3 indices. I have configured Filebeat to create about 15 indices in my use case, and it works just fine with a config similar to yours and ILM disabled.
It's worth trying to set the following attributes for output.elasticsearch:
bulk_max_size: 25
bulk_max_bytes: 104857600
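For reference, here is a sketch of where those attributes would sit in the config above (the values are the suggested starting points, not tuned numbers):
output.elasticsearch:
  hosts: ["ip:port"]
  index: "index-%{[agent.version]}-%{[fields.app_id]:other}-%{+yyyy.MM.dd}"
  protocol: "http"
  # Smaller bulk requests can avoid temporary bulk send failures under pressure
  bulk_max_size: 25
  bulk_max_bytes: 104857600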
Using the ES toolchain in version 7.17.0, I'd like to set up ILM + an index template with a customised name.
However, from the documentation:
If index lifecycle management is enabled (which is typically the default), setup.template.name and setup.template.pattern are ignored.
It seems like it's not possible.
Now the questions:
Is it OK to set up a custom template name (with a custom setup) when ILM is/was enabled?
Is it OK to run two setup files in Filebeat? (e.g. filebeat setup --index-management --dashboards -c setup-ilm.yml && filebeat setup --index-management --dashboards -c setup-template.yml)
Am I able to put those setup files somewhere in Filebeat (the Docker image) so they are executed automatically? I've only seen the modules and inputs folders for that kind of setup.
When I executed the setup files above, I saw the following:
Loading ILM policy and write alias without loading template is not recommended. Check your configuration.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
ILM policy and write alias loading not enabled.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
setup-ilm.yml
setup:
  ilm:
    enabled: true
    policy_file: "ilm-policy.json"
  template:
    enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
setup-template.yml
setup:
  ilm:
    enabled: false
  template:
    enabled: true
    name: "${ES_NAMESPACE:+${ES_NAMESPACE}-}filebeat-%{[agent.version]}"
    pattern: "${ES_NAMESPACE:+${ES_NAMESPACE}-}filebeat-%{[agent.version]}-*"
    settings:
      index:
        number_of_shards: 1
        mapping:
          total_fields:
            limit: 5000
  kibana:
    host: "kibana:5601"
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
This is how I got a custom index name plus a lifecycle policy. I'm using the Elastic operator from ECK, so if that's not your case it may differ. Assuming ECK and Elasticsearch are already installed, this is my Beat YAML file:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.6.0
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
    output:
      elasticsearch:
        index: "custom-name-%{[agent.version]}-%{+yyyy.MM.dd}"
        hosts: [ "elastic-host:9200" ]
        username: "filebeat_user"
        password: "password" # pending to load from secret
        ssl:
          verification_mode: "none"
    setup:
      template:
        name: "filebeat"
        pattern: "*-filebeat-*"
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
As I understood it, this will create an index template called filebeat that matches the pattern *-filebeat-*, and a policy for that index called filebeat. This is the important part:
output:
  elasticsearch:
    index: "custom-name-%{[agent.version]}-%{+yyyy.MM.dd}"
    hosts: [ "elastic-host:9200" ]
    username: "filebeat_user"
    password: "password" # pending to load from secret
    ssl:
      verification_mode: "none"
setup:
  template:
    name: "filebeat"
    pattern: "*-filebeat-*"
I hope this helps, because this is poorly documented; they even have an open issue to improve the documentation: https://github.com/elastic/beats/issues/11866
I am trying to deploy Elasticsearch and Kibana to Kubernetes using this chart and I'm getting the error below inside the Kibana container; as a result, the ingress returns a 503 error and the container never becomes ready.
Error:
[2022-11-08T12:30:53.321+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.112.130.148:42748, Remote: 10.96.237.95:9200
The IP address 10.96.237.95 is a valid Elasticsearch service address, and the port is right.
When I curl Elasticsearch from inside the Kibana container, it successfully returns a response.
Am I missing something in my configuration?
Chart version: 7.17.3
Values for the elasticsearch chart:
clusterName: "elasticsearch"
nodeGroup: "master"
createCert: false
roles:
  master: "true"
  data: "true"
  ingest: "true"
  ml: "true"
  transform: "true"
  remote_cluster_client: "true"
protocol: https
replicas: 2
sysctlVmMaxMapCount: 262144
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 90
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
imageTag: "7.17.3"
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: username
clusterHealthCheckParams: "wait_for_status=green&timeout=20s"
antiAffinity: "soft"
resources:
  requests:
    cpu: "100m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
esJavaOpts: "-Xms512m -Xmx512m"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.client_authentication: required
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
Values for kibana chart:
elasticSearchHosts: "https://elasticsearch-master:9200"
extraEnvs:
  - name: ELASTICSEARCH_USERNAME
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: username
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elasticsearch-creds
        key: password
  - name: KIBANA_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: encryption-key
        key: encryption_key
kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs/elastic-certificate.pem
      certificate: /usr/share/kibana/config/certs/elastic-certificate.pem
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
      verificationMode: certificate
protocol: https
secretMounts:
  - name: elastic-certificate-pem
    secretName: elastic-certificate-pem
    path: /usr/share/kibana/config/certs
imageTag: "7.17.3"
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    kubernetes.io/ingress.allow-http: 'false'
  paths:
    - path: /
      pathType: Prefix
      backend:
        serviceName: kibana
        servicePort: 5601
  hosts:
    - host: mydomain.com
      paths:
        - path: /
          pathType: Prefix
          backend:
            serviceName: kibana
            servicePort: 5601
  tls:
    - hosts:
        - mydomain.com
      secretName: mydomain.com
UPD: I tried it with another image version (8.4.1) and nothing changed; I am getting the same error. By the way, Logstash is successfully shipping logs to this Elasticsearch instance, so I think the problem is in Kibana.
Figured it out. It was a complete pain. I hope these tips will help others:
xpack.security.http.ssl.enabled should be set to false. I can't find another way around it, but if you do, I'd be glad to hear any advice. As I see it, you don't need security for the HTTP layer, since Kibana connects to Elasticsearch via the transport layer (correct me if I am wrong). Therefore xpack.security.transport.ssl.enabled should still be set to true, but xpack.security.http.ssl.enabled should be set to false. (Don't forget to change the protocol field for the readinessProbe to http, and also change the protocol for Elasticsearch in the Kibana chart to http.)
The ELASTIC_USERNAME env variable is pointless in the elasticsearch chart; only the password is used, and the user is always elastic.
ELASTICSEARCH_USERNAME in the kibana chart should actually be set to the kibana_system user, with the corresponding password for that user.
You need to provide the self-signed CA for Elasticsearch to Kibana in kibana.yml:
elasticsearch.ssl.certificateAuthorities: "/path/cert.ca"
You can test by setting
elasticsearch.ssl.verificationMode: "none"
but that is not recommended for production.
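Putting those tips together, the relevant kibana.yml lines might look like the sketch below; the cert path and env variable names are illustrative assumptions carried over from the charts above:
elasticsearch.username: "kibana_system"    # not the elastic superuser
elasticsearch.password: "${ELASTICSEARCH_PASSWORD}"
# Self-signed CA for Elasticsearch, needed if TLS stays enabled on the HTTP layer:
elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/certs/elastic-certificate.pem"
elasticsearch.ssl.verificationMode: certificate   # "none" only for testing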
I have this configuration:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/nova/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["sdsds"]
I would like to tag a log if it contains the following pattern:
message:INFOHTTP*200*
I want to create a query in Kibana to filter based on an HTTP-response-code tag. How can I create this? Can you help me create the condition with tags?
These response codes are in the nova-api and neutron server logs.
And I don't want to actually filter out the logs; I want to have everything in Elasticsearch, I just want to add a tag to these kinds of logs.
UPDATE:
I managed to figure out something, but I'm not sure what the best way to list it is, because I have many response codes:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    include_lines: ["status: 200"]
    fields_under_root: true
    fields:
      httpresponsecode: 200
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
Do I have to repeat these 4 lines multiple times?
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/keystone/keystone.log
      - /var/log/neutron/*.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 200"]
    fields:
      httpresponsecode: 200
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 202"]
    fields:
      httpresponsecode: 202
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 204"]
    fields:
      httpresponsecode: 204
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 207"]
    fields:
      httpresponsecode: 207
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 403"]
    fields:
      httpresponsecode: 403
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 404"]
    fields:
      httpresponsecode: 404
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 500"]
    fields:
      httpresponsecode: 500
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["HTTP 503"]
    fields:
      httpresponsecode: 503
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: [
What is the best way to do this for multiple files and multiple codes?
UPDATE 2:
My solution doesn't work; at the beginning it sends data, and then it stops completely.
I hope you can help me.
I hope that I understood your question; in that case, I would go the grok route.
If you know that your status field always looks like this, then why not use a pattern like this:
match => {
  "message" => "<prepending patterns> status: %{NUMBER:httpresponsecode} <patterns that follow>"
}
This would create a field called httpresponsecode, filled with the number that follows the string "status: ".
However, based on the ECS format, I'd rather call the field something else, like
http.response.status(.keyword)
As for your specified log line, a valid grok pattern might look like this:
%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}
You can find the grok patterns for Logstash in the logstash repository.
Use the Grok Debugger included in Kibana to see how your pattern would match.
Rename the fields accordingly.
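For context, this is roughly how that pattern would sit in a Logstash pipeline; the surrounding filter block and the tag name are assumptions, the pattern itself is the one above. Since grok only adds the tag on a successful match, events are tagged without being filtered out:
filter {
  grok {
    match => {
      "message" => '%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}'
    }
    add_tag => ["http_response"]   # applied only when the pattern matches
  }
}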
I have the following config for Metricbeat:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
metricbeat_modules:
  - module: system
    metricsets:
      - cpu
      - load
      - memory
      - network
      - diskio
    enabled: true
    period: 10s
    tags: ['os']
    cpu.metrics: ['percentages']
    core.metrics: ['percentages']
setup.template:
  name: {{ metricbeat_index }}
  pattern: {{ metricbeat_index }}-*
  settings:
    index:
      number_of_shards: 1
      codec: best_compression
tags: [{{ metricbeat_tags | join(', ') }}]
fields:
  env: {{ metricbeat_env }}
output.elasticsearch:
  hosts: {{ metricbeat_output_es_hosts | to_json }}
  index: "{{ metricbeat_index }}-%{+yyyy-MM-dd}"
setup.dashboards.directory: /usr/share/metricbeat/kibana
setup.kibana:
  host: {{ metricbeat_kibana_url }}
processors:
  - drop_fields:
      fields: ["beat.name","beat.hostname"]
processors:
  - add_host_metadata:
      netinfo.enabled: false
processors:
  - add_cloud_metadata: ~
It worked as expected while I had the metricsets process and process_summary enabled. Since I removed them, it still seems to harvest those metrics. I restarted and stopped/started Metricbeat again, but it has no effect.
Thanks for any ideas, as I cannot see any reason why this should happen. :/
I dug a bit more into your issue.
You specify a module config folder with this part of your config:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
If you look into that folder, I'm sure you'll find this:
aerospike.yml.disabled
apache.yml.disabled
ceph.yml.disabled
couchbase.yml.disabled
docker.yml.disabled
dropwizard.yml.disabled
elasticsearch.yml.disabled
envoyproxy.yml.disabled
etcd.yml.disabled
golang.yml.disabled
graphite.yml.disabled
haproxy.yml.disabled
http.yml.disabled
jolokia.yml.disabled
kafka.yml.disabled
kibana.yml.disabled
kubernetes.yml.disabled
kvm.yml.disabled
logstash.yml.disabled
memcached.yml.disabled
mongodb.yml.disabled
munin.yml.disabled
mysql.yml.disabled
nginx.yml.disabled
php_fpm.yml.disabled
postgresql.yml.disabled
prometheus.yml.disabled
rabbitmq.yml.disabled
redis.yml.disabled
system.yml
traefik.yml.disabled
uwsgi.yml.disabled
vsphere.yml.disabled
windows.yml.disabled
zookeeper.yml.disabled
See that system.yml file?
That is the module configuration that gets loaded.
So you can remove the process metricsets from that configuration file (see the sketch below), or stop using metricbeat.config.modules.path.
Hope it helped.
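For illustration, after removing the process metricsets, modules.d/system.yml might look like this sketch (the exact contents shipped with your Metricbeat version may differ):
# modules.d/system.yml, loaded via metricbeat.config.modules
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - diskio
    # process and process_summary removed so those metrics stop being collected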
Shouldn't you have metricbeat.modules instead of metricbeat_modules?
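Good catch. If metricbeat_modules really ends up in the rendered file (rather than being an Ansible variable), Metricbeat will not read it as module config. A minimal sketch of the corrected key, everything else unchanged:
metricbeat.modules:    # was: metricbeat_modules
  - module: system
    metricsets:
      - cpu
      - load
      - memory
      - network
      - diskio
    enabled: true
    period: 10s
    tags: ['os']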