I've got an issue with my log4j2.yml configuration file; the following piece of configuration does not work as expected:
fileName: "${baseName}/logs/${project.build.finalName}.log"
filePattern: "${baseName}/logs/%d{yyyy-MM-dd}_${project.build.finalName}.log.gz"
The variable ${baseName} has been declared in the application.properties file:
baseName="d:\dev\dd"
YAML support has been added via the following dependencies in the build.gradle file:
compile "com.fasterxml.jackson.core:jackson-core"
compile "com.fasterxml.jackson.dataformat:jackson-dataformat-yaml"
When I built my project, a literal ${baseName} directory appeared in the project root directory; for some reason, the value "d:\dev\dd" was never assigned to the ${baseName} variable.
Any ideas on how to handle this?
The variable should be set in log4j2.yml as a properties.property entry, like this:
Configuration:
  properties:
    property:
      - name: baseName
        value: /home/shared/log/
      - name: filename
        value: sample.log
      - name: pattern
        value: "%d{yyyy-MM-dd HH:mm:ss} [%p] [%t] [%c] %m%n"
  status: INFO
  Appenders:
    Console:
      name: console
      target: SYSTEM_OUT
      PatternLayout:
        pattern: "${pattern}"
    RollingFile:
      - name: FileAppender
        fileName: "${baseName}${filename}"
        filePattern: "${baseName}${filename}-%d{yyyy-MM-dd}"
        PatternLayout:
          pattern: "${pattern}"
        Policies:
          TimeBasedTriggeringPolicy: {}
  Loggers:
    Root:
      level: INFO
      AppenderRef:
        - ref: console
        - ref: FileAppender
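If the directory really needs to come from outside log4j2.yml (as with the original application.properties approach), another option is Log4j2's system-property lookup ${sys:...}. This is a sketch under the assumption that the value is passed on the JVM command line; the file name app.log is a placeholder standing in for the Maven-style ${project.build.finalName}, which Log4j2 cannot resolve on its own:

# sketch: start the JVM with -DbaseName=d:/dev/dd (assumed invocation)
Configuration:
  Appenders:
    RollingFile:
      - name: FileAppender
        fileName: "${sys:baseName}/logs/app.log"
        filePattern: "${sys:baseName}/logs/%d{yyyy-MM-dd}_app.log.gz"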
I have this pipeline file:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
      - main
      - issues*
      - tasks*
  paths:
    exclude:
      - documentation/*
      - Readme.md
variables:
  - name: majorVersion
    value: 1
  - name: minorVersion
    value: 0
  - name: revision
    value: $[counter(variables['minorVersion'],0)]
  - name: buildVersion
    value: $(majorVersion).$(minorVersion).$(revision)
name: $(buildVersion)
I expect the pipeline run name to be 1.0.0, but instead it is the literal string $(majorVersion).$(minorVersion).$(revision).
Where did I get the formatting wrong?
In our cluster, some apps send their logs as multiline events, and the problem is that the log structure differs from app to app.
How can we set up an 'if' condition that applies the following settings only to those apps?
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after
Our code:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
          - drop_event:
              when:
                contains:
                  container.image.name: "kibana"
    output.logstash:
      hosts: ["logstash-listener:5044"]
You need to use auto-discovery (either Docker or Kubernetes) with template conditions.
You will probably have at least two templates, one for capturing your containers that emit multiline messages and another for other containers.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:   # <-- your multiline condition goes here
            contains:
              kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              multiline:
                pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
                negate: true
                match: after
              processors:
                - add_kubernetes_metadata:
                    host: ${NODE_NAME}
                    matchers:
                      - logs_path:
                          logs_path: "/var/log/containers/"
                - drop_event:
                    when:
                      contains:
                        container.image.name: "kibana"
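For the containers that do not emit multiline messages, a second template can be appended to the same templates list. The sketch below simply negates the xyz-namespace condition used above; that discriminator is an assumption, so adjust it to however you identify your multiline apps:

# hypothetical second template: containers outside xyz-namespace, no multiline handling
- condition:
    not:
      contains:
        kubernetes.namespace: xyz-namespace   # assumed discriminator, mirrors the template above
  config:
    - type: container
      paths:
        - /var/lib/docker/containers/${data.docker.container.id}/*.log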
I have a folder structure like this:
Ansible/
  roles/
    elastic.beats/
  templates/
    filebeat-inputs.yml.j2
From my playbook, I am calling it as below:
- {
    role: elastic.beats,
    beat: "filebeat",
    beat_conf: "templates/filebeat-inputs.yml.j2"
  }
But this does not seem to work. Note: beat_conf accepts a "map structure of values", so any reference to the .yml.j2 file must resolve to the same form.
Also, how do I call more than one beat for the role? Is it like this, or is there a cleaner way?
- {
    role: elastic.beats,
    beat: "filebeat",
    beat_conf: "templates/filebeat-inputs.yml.j2"
  }
- {
    role: elastic.beats,
    beat: "metricbeat",
    beat_conf: "templates/metribeat-inputs.yml.j2"
  }
Thanks
Just to give more context: I am trying to use the Ansible elastic.beats role (https://github.com/elastic/ansible-beats), which requires the mandatory parameter beat_conf.
This parameter accepts values in the following format (a map):
- hosts: localhost
  roles:
    - role: elastic.beats
      beat: filebeat
      beat_conf:
        filebeat:
          inputs:
            - type: log
              enabled: true
              paths:
                - /var/log/*.log
            - type: log
              paths:
                - /var/log/mysql.log
              scan_frequency: 10s
            - type: log
              paths:
                - /var/log/apache.log
              scan_frequency: 5s
However, the inputs can be put in a separate file ($root/templates/filebeat-inputs.yml.j2) as:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
- type: log
  paths:
    - /var/log/mysql.log
  scan_frequency: 10s
- type: log
  paths:
    - /var/log/apache.log
  scan_frequency: 5s
[Ref: https://www.elastic.co/guide/en/beats/filebeat/7.12/configuration-filebeat-options.html]
How do I call this file ($root/templates/filebeat-inputs.yml.j2) so that the final filebeat.yml file generated on the targets is of the following format:
https://github.com/elastic/beats/blob/master/filebeat/filebeat.yml
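One approach worth trying (a sketch, not verified against the role): render the Jinja2 file on the controller with the template lookup and convert it into a data structure with from_yaml, so beat_conf still receives a map rather than a file path. The file name and the nesting under beat_conf are assumptions based on the structure shown above:

# sketch: build beat_conf from the template at playbook level (assumed layout)
- hosts: localhost
  roles:
    - role: elastic.beats
      beat: filebeat
      beat_conf:
        filebeat:
          # lookup('template', ...) renders templates/filebeat-inputs.yml.j2 on the controller;
          # from_yaml turns the rendered text into the list of inputs the role expects
          inputs: "{{ lookup('template', 'filebeat-inputs.yml.j2') | from_yaml }}"

For several beats, listing the role twice with different beat/beat_conf parameters (as in the question) should work, since Ansible re-runs a role when its parameters differ between invocations.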
I configured the logging subsystem in my Thorntail application as below:
thorntail:
  logging:
    pattern-formatters:
      LOG_FORMATTER:
        pattern: "%p [%c] %s%e%n"
    periodic-rotating-file-handlers:
      FILE:
        file:
          path: ws.log
          relative-to: /var/log/app/
        suffix: .yyyy-MM-dd
        named-formatter: LOG_FORMATTER
        level: INFO
    handlers:
      - CONSOLE
      - FILE
    root-logger:
      handlers:
        - CONSOLE
        - FILE
    loggers:
      org.jboss.security:
        level: TRACE
      org.jboss.resteasy:
        level: TRACE
Now, how can I point relative-to in all of these settings at the same location as -Djboss.server.log.dir in WildFly?
Do I have to repeat the relative-to property in every handler?
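In WildFly, relative-to normally references a named path such as jboss.server.log.dir rather than a literal directory, and -Djboss.server.log.dir then decides where that path points. Assuming Thorntail's logging fraction follows the same convention, a sketch for the file handler looks like this:

# sketch: reference the named path instead of a hard-coded directory
# (assumes the fraction resolves jboss.server.log.dir the same way WildFly does)
thorntail:
  logging:
    periodic-rotating-file-handlers:
      FILE:
        file:
          path: ws.log
          relative-to: jboss.server.log.dir   # controlled at startup via -Djboss.server.log.dir=...
        suffix: .yyyy-MM-dd
        named-formatter: LOG_FORMATTER

relative-to is a per-handler setting, so every file handler that should write into that directory needs its own file block, but only those two lines are repeated, not the directory itself.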
I have this configuration:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/nova/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["sdsds"]
I would like to tag a log entry if it matches the following pattern:
message:INFOHTTP*200*
I want to create a query in Kibana that filters on an HTTP response code tag. How can I create this? Can you help me create the condition with tags?
These response codes appear in the nova-api and neutron server logs.
I don't want to actually filter out the logs; I want to keep everything in Elasticsearch, I just want to add a tag to these kinds of log lines.
UPDATE:
I managed to figure something out, but I'm not sure it is the best way to write this, because I have many response codes:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    include_lines: ["status: 200"]
    fields_under_root: true
    fields:
      httpresponsecode: 200
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
Do I have to create these four lines multiple times?
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/keystone/keystone.log
      - /var/log/neutron/*.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 200"]
    fields:
      httpresponsecode: 200
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 202"]
    fields:
      httpresponsecode: 202
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 204"]
    fields:
      httpresponsecode: 204
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 207"]
    fields:
      httpresponsecode: 207
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 403"]
    fields:
      httpresponsecode: 403
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 404"]
    fields:
      httpresponsecode: 404
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 500"]
    fields:
      httpresponsecode: 500
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["HTTP 503"]
    fields:
      httpresponsecode: 503
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: [
What is the best way to do this for multiple files and multiple response codes?
UPDATE2:
My solution doesn't work: at the beginning it ships data, and then it stops completely.
I hope you can help me.
I hope I understood your question correctly; in that case, I would go the grok route.
If you know that your status field always looks like this, why not use a pattern like the following:
match => {
  "message" => "<prepending patterns> status: %{NUMBER:httpresponsecode} <patterns that follow>"
}
This would create a field called httpresponsecode, filled with the number that follows the string "status: ".
However, based on the ECS format, I'd rather call the field something else, like
http.response.status(.keyword)
As for your specified log line, a valid grok pattern might look like this:
%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}
You can find the grok patterns for Logstash in the Logstash repository.
Use the Grok Debugger included in Kibana to see how your pattern matches.
Rename the fields accordingly.
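To tie this back to the original goal of tagging events rather than dropping them, a minimal Logstash filter along these lines would apply the grok pattern and add a tag only when a status code was extracted. The tag name http_response is an assumption, and the pattern is shortened for readability, so substitute the full pattern shown above:

filter {
  grok {
    # shortened for readability; use the full pattern from above in practice
    match => { "message" => "status: %{NONNEGINT:httpresponsecode}" }
    tag_on_failure => []              # lines without a status code pass through untouched
  }
  if [httpresponsecode] {
    mutate { add_tag => ["http_response"] }   # assumed tag name; query it in Kibana as tags:http_response
  }
}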