call jinja2 template within Ansible role at Playbook level - ansible

I have a folder structure as follows:

Ansible/
  roles/
    elastic.beats/
  templates/
    filebeat-inputs.yml.j2
From my playbook, I am calling it as below:
- {
    role: elastic.beats,
    beat: "filebeat",
    beat_conf: "templates/filebeat-inputs.yml.j2"
  }
But this does not seem to work. Note: beat_conf accepts a "map structure of values", so whatever the .yml.j2 file contains must end up in that form.
Also, how do I call more than one beat for the role? Is it like this, or is there a cleaner way?
- {
    role: elastic.beats,
    beat: "filebeat",
    beat_conf: "templates/filebeat-inputs.yml.j2"
  }
- {
    role: elastic.beats,
    beat: "metricbeat",
    beat_conf: "templates/metricbeat-inputs.yml.j2"
  }
Thanks
Just to give more context: I am trying to use the Ansible elastic.beats role [https://github.com/elastic/ansible-beats], which requires a mandatory parameter, beat_conf.
This parameter accepts values in the following format (map):
hosts: localhost
roles:
  - role: elastic.beats
    beat: filebeat
    beat_conf:
      filebeat:
        inputs:
          - type: log
            enabled: true
            paths:
              - /var/log/*.log
          - type: log
            paths:
              - /var/log/mysql.log
            scan_frequency: 10s
          - type: log
            paths:
              - /var/log/apache.log
            scan_frequency: 5s
However, the inputs can be put in a separate file ($root/templates/filebeat-inputs.yml.j2) as:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
- type: log
  paths:
    - /var/log/mysql.log
  scan_frequency: 10s
- type: log
  paths:
    - /var/log/apache.log
  scan_frequency: 5s
[Ref: https://www.elastic.co/guide/en/beats/filebeat/7.12/configuration-filebeat-options.html]
How do I call this file ($root/templates/filebeat-inputs.yml.j2) so that the final filebeat.yml file generated on the targets is of the following format:
https://github.com/elastic/beats/blob/master/filebeat/filebeat.yml
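For what it's worth, a minimal sketch of one way to feed the template into beat_conf (an assumption on my part, not something confirmed by the role's documentation; it relies on Ansible's template lookup and from_yaml filter and on the templates/ directory sitting next to the playbook):

- hosts: localhost
  roles:
    - role: elastic.beats
      beat: filebeat
      beat_conf:
        filebeat:
          # render templates/filebeat-inputs.yml.j2 and parse it into a list of inputs
          inputs: "{{ lookup('template', 'filebeat-inputs.yml.j2') | from_yaml }}"

The same pattern, with a second role entry, could cover metricbeat.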

Related

Can I import parameters set in azure-pipeline.yml into playbook.yml

I have two yaml files. One is azure-pipeline.yml
name: test-resources
trigger: none
resources:
  repositories:
    - repository: pipeline
      type: git
      name: test-templates
parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No
extends:
  template: pipelines/ansible-playbook-deploy.yml#pipeline
  parameters:
    folderName: test-3scale
With this file, when I run the pipeline, I can choose Yes or No as an option before it runs.
The other one is the playbook.yml for Ansible
- hosts: localhost
  connection: local
  become: true
  vars_files:
    - test_service.yml
    - "vars/test.yml"
  collections:
    - test_collection
  tasks:
    - name: Find out playbooks pwd
      shell: pwd
      register: playbook_path_output
      no_log: false
    - debug: var=playbook_path_output.stdout
    - name: echo something
      shell: echo 'test this out'
      register: playbook_ls_content_output
      no_log: false
    - debug: var=playbook_ls_content_output.stdout
I wish to add a condition to a task in playbook.yml, so that when I choose "Yes" when running the pipeline, the task named "echo something" runs, but if I choose "No", the task is skipped. I am really new to YAML syntax and logic. Could someone help? Many thanks!
This runs successfully on my side (I can evaluate the condition with no problem; the template expression is expanded at compile time):
azure-pipeline.yml
trigger: none
parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No
extends:
  template: pipelines/ansible-playbook-deploy.yml
  parameters:
    whetherYesOrNo: ${{parameters.whetherYesOrNo}}
ansible-playbook-deploy.yml
parameters:
  - name: whetherYesOrNo
    type: string
    default: No
steps:
  - ${{ if eq(parameters.whetherYesOrNo, 'Yes') }}:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          # Write your PowerShell commands here.
          Write-Host "Hello World"
(Screenshots in the original answer showed the repository structure and the pipeline runs for the Yes and No cases; they are not reproduced here.)
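As a side note (not part of the original answer): if the parameter is also forwarded to Ansible, for example as an extra variable on the ansible-playbook command line (the variable name and invocation below are only illustrative), the task itself could be guarded with a when condition:

# assumed invocation from the pipeline step:
#   ansible-playbook playbook.yml -e whetherYesOrNo=${{ parameters.whetherYesOrNo }}
- name: echo something
  shell: echo 'test this out'
  register: playbook_ls_content_output
  when: whetherYesOrNo | default('No') == 'Yes'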

elasticsearch - filebeat - How to define multiline in filebeat.inputs with conditions?

In our cluster some apps are sending multiline logs, and the problem is that the log structure differs from app to app.
How can we set up an 'if' condition that will include

  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

in it?
Our code:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
          - drop_event:
              when:
                contains:
                  container.image.name: "kibana"
    output.logstash:
      hosts: ["logstash-listener:5044"]
You need to use auto-discovery (either Docker or Kubernetes) with template conditions.
You will probably have at least two templates, one for capturing your containers that emit multiline messages and another for other containers.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:            # <-- your multiline condition goes here
            contains:
              kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              multiline:
                pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
                negate: true
                match: after
              processors:
                - add_kubernetes_metadata:
                    host: ${NODE_NAME}
                    matchers:
                      - logs_path:
                          logs_path: "/var/log/containers/"
                - drop_event:
                    when:
                      contains:
                        container.image.name: "kibana"
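A second template for the remaining containers could then simply omit the multiline settings. A sketch of such an entry, added under the same templates: list (not from the original answer; the namespace condition is only illustrative):

        - condition:
            not:
              contains:
                kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              processors:
                - add_kubernetes_metadata:
                    host: ${NODE_NAME}
                    matchers:
                      - logs_path:
                          logs_path: "/var/log/containers/"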

Task with loop in Argo workflow

I want to introduce a for loop in a workflow that consists of two individual tasks. The second is dependent on the first, and each one uses a different template. The second should iterate with {{item}}. For each iteration, is the default to execute only the second task, or will the whole flow be re-executed?
To repeat only the second step, use withItems/withParam (there is no withArtifact, though you can get the same behavior with data). These loops repeat only the specific step they are attached to, for the specified items/parameter.
- name: create-resources
  inputs:
    parameters:
      - name: env
      - name: giturl
      - name: resources
      - name: awssecret
  dag:
    tasks:
      - name: resource
        template: resource-create
        arguments:
          parameters:
            - name: env
              value: "{{inputs.parameters.env}}"
            - name: giturl
              value: "{{inputs.parameters.giturl}}"
            - name: resource
              value: "{{item}}"
            - name: awssecret
              value: "{{inputs.parameters.awssecret}}"
        withParam: "{{inputs.parameters.resources}}"

  ############# For parallel execution use steps ##############
  steps:
    - - name: resource
        template: resource-create
        arguments:
          parameters:
            - name: env
              value: "{{inputs.parameters.env}}"
            - name: giturl
              value: "{{inputs.parameters.giturl}}"
            - name: resource
              value: "{{item}}"
            - name: awssecret
              value: "{{inputs.parameters.awssecret}}"
        withParam: "{{inputs.parameters.resources}}"
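For completeness, a minimal withItems sketch (the item values are made up; the template and parameter names reuse those from the example above). Exactly like withParam, it repeats only the step or task it is attached to:

      - name: resource
        template: resource-create
        arguments:
          parameters:
            - name: resource
              value: "{{item}}"
        withItems:
          - bucket-a
          - bucket-b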

how to exclude logs/events in journalbeat

We are using Journalbeat to push logs from our Kubernetes cluster to Elasticsearch. It is working fine and pushing the logs. However, it is also pushing events like "200 OK" and "INFO" which we do not want. The journalbeat.yaml is as follows:
journalbeat.yaml
journalbeat.yml: |
  name: "${NODENAME}"
  journalbeat.inputs:
    - paths: []
      seek: cursor
      cursor_seek_fallback: tail
  processors:
    - add_kubernetes_metadata:
        host: "${NODENAME}"
        in_cluster: true
        default_indexers.enabled: false
        default_matchers.enabled: false
        indexers:
          - container:
        matchers:
          - fields:
              lookup_fields: ["container.id"]
    - decode_json_fields:
        fields: ["message"]
        process_array: false
        max_depth: 1
        target: ""
        overwrite_keys: true
    - drop_event.when:
        or:
          - regexp.kubernetes.pod.name: "filebeat-.*"
          - regexp.kubernetes.pod.name: "journalbeat-.*"
          - regexp.kubernetes.pod.name: "nginx-ingress-controller-.*"
          - regexp.kubernetes.pod.name: "prometheus-operator-.*"
  setup.template.enabled: false
  setup.template.name: "journal-${ENVIRONMENT}-%{[agent.version]}"
  setup.template.pattern: "journal-${ENVIRONMENT}-%{[agent.version]}-*"
  setup.template.settings:
    index.number_of_shards: 10
    index.refresh_interval: 10s
  output.elasticsearch:
    hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
    username: '${ELASTICSEARCH_USERNAME}'
    password: '${ELASTICSEARCH_PASSWORD}'
    index: "journal-${ENVIRONMENT}-system-%{[agent.version]}-%{+YYYY.MM.dd}"
    indices:
      - index: "journal-${ENVIRONMENT}-k8s-%{[agent.version]}-%{+YYYY.MM.dd}"
        when.has_fields:
          - 'kubernetes.namespace'
How can I exclude logs like "INFO" and "200 OK" events?
As far as I'm aware there is no way to exclude logs in Journalbeat. It works the other way around, meaning you tell it which input to look for.
You should read about Configuration input:
By default, Journalbeat reads log events from the default systemd journals. To specify other journal files, set the paths option in the journalbeat.inputs section of the journalbeat.yml file. Each path can be a directory path (to collect events from all journals in a directory), or a file path.
journalbeat.inputs:
  - paths:
      - "/dev/log"
      - "/var/log/messages/my-journal-file.journal"
Within the configuration file, you can also specify options that control how Journalbeat reads the journal files and which fields are sent to the configured output. See Configuration options for a list of available options.
Get familiar with the configuration options and use the translated fields to target exactly the input you want.
{beatname_lc}.inputs:
  - id: consul.service
    paths: []
    include_matches:
      - _SYSTEMD_UNIT=consul.service
  - id: vault.service
    paths: []
    include_matches:
      - _SYSTEMD_UNIT=vault.service
You should use it to target the inputs you want to have pushed to elastic.
As an alternative to Journalbeat you could use Filebeat and the exclude might look like this:
type: log
paths:
{{ range $i, $path := .paths }}
- {{$path}}
{{ end }}
exclude_files: [".gz$"]
exclude_lines: ['.*INFO.*']
Hope this helps you a bit.
To apply a filter, use:
logging.level: warning
Use this to drop events from journalbeat.service:

processors:
  - drop_event:
      when:
        equals:
          systemd.unit: "journalbeat.service"
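Along the same lines, a drop_event with regexp conditions on the message field might be used for the "INFO" / "200 OK" lines mentioned in the question (a sketch only; the patterns are illustrative and untested against your events):

processors:
  - drop_event:
      when:
        or:
          - regexp:
              message: ".*INFO.*"
          - regexp:
              message: ".*200 OK.*"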

Tag a message on the filebeat side to be able to filter on kibana ( HTTP response codes )

I have this configuration:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/nova/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["sdsds"]
I would like to tag a log if it contains the following pattern:
message:INFOHTTP*200*
I want to create a query in Kibana to filter based on an HTTP response code tag. How can I create this? Can you help me create the condition with tags?
These response codes are in the nova-api and neutron server logs.
I don't want to actually filter out the logs; I want to have everything in Elasticsearch, I just want to add a tag to these kinds of logs.
UPDATE:
I managed to figure out something, but I'm not sure it's the best way to list them, because I have many response codes:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/neutron/*.log
      - /var/log/keystone/keystone.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    include_lines: ["status: 200"]
    fields_under_root: true
    fields:
      httpresponsecode: 200
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
Do I have to repeat these four lines multiple times?
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/secure
      - /var/log/audit/audit.log
      - /var/log/yum.log
      - /root/.bash_history
      - /var/log/keystone/keystone.log
      - /var/log/neutron/*.log
      - /var/log/httpd/error_log
      - /var/log/mariadb/mariadb.log
      - /var/log/glance/*.log
      - /var/log/rabbitmq/*.log
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 200"]
    fields:
      httpresponsecode: 200
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 202"]
    fields:
      httpresponsecode: 202
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 204"]
    fields:
      httpresponsecode: 204
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 207"]
    fields:
      httpresponsecode: 207
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 403"]
    fields:
      httpresponsecode: 403
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 404"]
    fields:
      httpresponsecode: 404
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["status: 500"]
    fields:
      httpresponsecode: 500
  - type: log
    enabled: true
    paths:
      - /var/log/nova/*.log
    fields_under_root: true
    include_lines: ["HTTP 503"]
    fields:
      httpresponsecode: 503
    ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: [
What is the best way to do this for multiple files and multiple codes?
UPDATE2:
My solution doesn't work; at the beginning it sends data, and then it stops completely.
I hope you can help me.
I hope I understood your question correctly; in that case, I would go the grok route.
If you know that your status field always looks like this, then why not do a pattern like this:
match => {
  "message" => "<prepending patterns> status: %{NUMBER:httpresponsecode} <patterns that follow>"
}
This would create a field called httpresponsecode, filled with the number that follows the string "status: ".
However, based on the ECS format, I'd rather call the field something else, like
http.response.status(.keyword)
As for your specified logline, a valid grok pattern might look like this:
%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}
Find the Grok-Patterns for logstash in the logstash repository
Use the Grok-Debugger included in Kibana to see how your pattern would match.
Rename the fields accordingly.
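For illustration, a minimal Logstash filter that wires the simpler pattern together with a tag (the field and tag names are just examples, not part of the original answer):

filter {
  grok {
    # extract the status code that follows "status: " and tag matching events
    match => { "message" => "status: %{NONNEGINT:http.response.status}" }
    add_tag => [ "http_response" ]
  }
}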
