I am trying to merge a dictionary with the contents of a YAML file and pass the result to a Salt state.
metricbeat.yml content:
metricbeat:
  config:
    modules:
      path: /etc/metricbeat/modules.d/*.yml
      reload.enabled: true
      reload.period: 10s

output.logstash:
  hosts:
  worker: 1
  compression_level: 3
  loadbalance: true
  ssl:
    certificate: /usr/share/metricbeat/metricbeat.crt
    key: /usr/share/metricbeat/metricbeat.key
    verification_mode: none

logging:
  level: debug
  to_files: true
  files:
    path: /var/tellme/log/metricbeat
    name: metricbeat.log
    rotateeverybytes: 10485760
    keepfiles: 7
config.yml content:
metricbeat:
  config:
    modules:
      reload.period: 100s
State file:
{% import_yaml "config.yml" as config %}

manage_file:
  file.managed:
    - name: /etc/metricbeat/metricbeat.yml
    - source: salt://metricbeat.yml
    - template: jinja

conf_file:
  file.serialize:
    - name: /etc/metricbeat/metricbeat.yml
    - dataset:
        output.logstash:
          hosts: ['example.domain.com:5158']
        {{ config | yaml }}
    - serializer: yaml
    - merge_if_exists: true
But I am getting the below error:
example-1.domain.com:
Data failed to compile:
----------
Rendering SLS 'base:test' failed: could not find expected ':'
What am I doing wrong?
I fixed the issue as below:
{% import_yaml "config.yml" as config %}

manage_file:
  file.managed:
    - name: /etc/metricbeat/metricbeat.yml
    - source: salt://metricbeat.yml
    - template: jinja

conf_file:
  file.serialize:
    - name: /etc/metricbeat/metricbeat.yml
    - dataset:
        output.logstash:
          hosts: ['example.domain.com:5158']
        {{ config | yaml(false) | indent(8) }}
    - serializer: yaml
    - merge_if_exists: true
"yaml(false)" is for multiline yaml and proper indentation with "indent".
In our cluster some apps send their logs as multiline, and the problem is that the log structure differs from app to app.
How can we set up an 'if' condition that includes the following in it?

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after
Our code:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
          - drop_event:
              when:
                contains:
                  container.image.name: "kibana"
    output.logstash:
      hosts: ["logstash-listener:5044"]
You need to use auto-discovery (either Docker or Kubernetes) with template conditions.
You will probably have at least two templates: one capturing the containers that emit multiline messages, and another for all other containers (a sketch of such a second template follows the example below).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:  # <-- your multiline condition goes here
            contains:
              kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              multiline:
                pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
                negate: true
                match: after
              processors:
                - add_kubernetes_metadata:
                    host: ${NODE_NAME}
                    matchers:
                      - logs_path:
                          logs_path: "/var/log/containers/"
                - drop_event:
                    when:
                      contains:
                        container.image.name: "kibana"
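A second, catch-all template for the remaining containers could look like the sketch below; the not condition and the namespace value are assumptions you would adapt to your cluster:

        - condition:
            not:
              contains:
                kubernetes.namespace: xyz-namespace
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log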
I am running an Ansible playbook with:

--extra-vars "log_path=/var/logs/a*.log,/repo/log-events-a*.json,repo/user1/log-events-b*.json"

as comma-separated values, and I want the output in the filebeat.yml file to be:
paths:
  - /var/logs/a*.log
  - /repo/log-events-a*.json
  - /repo/user1/log-events-b*.json
I am using Jinja2 for filebeat.yml:

paths:
  - {{ log_path }}
And my Ansible playbook testconfigure.yml is:

- hosts: localhost
  gather_facts: no
  vars:
    log_path: "{{ logpath.replace(',', '\n-') }}"
  tasks:
    - name: a jinja test
      template:
        src: /repo/filebeat/filebeat.j2
        dest: /repo/filebeat/filebeat.yml
I am getting the output in filebeat.yml file as:
paths:
  - /var/logs/*.log,/repo/log-events-a*.json,/repo/user1/log-events-b*.json
I also tried logpath: "{{ logpath | regex_replace(',', '\n-') }}" in my playbook, but I am still getting the same output.
How should I do this?
Create a j2 file:
paths:
{% for log in log_path %}
  - {{ log }}
{% endfor %}
playbook:
- hosts: localhost
  vars:
    log_path: "{{ logpath.split(',') }}"
  tasks:
    - name: templating
      template:
        src: filebeat.j2
        dest: filebeat.yml
And the command to run:
ansible-playbook yourplaybook.yml --extra-vars "logpath=/var/logs/a*.log,/repo/log-events-a*.json,repo/user1/log-events-b*.json"
result:
paths:
  - /var/logs/a*.log
  - /repo/log-events-a*.json
  - repo/user1/log-events-b*.json
If you just want to create a var file, there is no need for a template:
- name: create var file
  copy:
    content: "{{ {'paths': log_path} | to_nice_yaml }}"
    dest: filebeat.yml
result:
paths:
- /var/logs/a*.log
- /repo/log-events-a*.json
- repo/user1/log-events-b*.json
I have a yaml:
global:
  resolve_timeout: 5m
receivers:
  - name: alerts-null
  - name: default
    local_configs:
      - api_url: https://abx.com
        channel: '#abx'
        send_resolved: true
        username: abc-123
  - name: devops-alerts
    local_configs:
      - api_url: https://abx.com
        channel: '#abx'
        send_resolved: true
        username: abc-123
The YAML can have multiple "name:" elements in the array, and I want to loop over all "name" elements and change the value of the key "username:" to "xyz-321". The resulting YAML should be as follows:
global:
  resolve_timeout: 5m
receivers:
  - name: alerts-null
  - name: default
    local_configs:
      - api_url: https://abx.com
        channel: '#abx'
        send_resolved: true
        username: xyz-321
  - name: devops-alerts
    local_configs:
      - api_url: https://abx.com
        channel: '#abx'
        send_resolved: true
        username: xyz-321
I tried the following yq command, but it did not change the desired key's value:
yq eval '(.receivers[] | select(.name.local_configs.username)) = "xyz-321"' source.yaml > manipulated.yaml
Any pointers are appreciated.
If you change the structure to look like this:
global:
  resolve_timeout: 5m
receivers:
  - name: alerts-null
    internal_config:
      username: abc-123
  - name: default
    internal_config:
      username: abc-123
Then the following will work:
yq e '(.receivers[].internal_config.username) |= "new_username"' myFile.yaml
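If you would rather keep the original structure, a sketch against the local_configs layout (assuming mikefarah's yq v4) would be:

yq eval '(.receivers[] | select(.local_configs) | .local_configs[].username) |= "xyz-321"' source.yaml > manipulated.yaml

The select(.local_configs) guard skips receivers such as alerts-null that have no local_configs at all.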
I am currently trying to copy a config file (YAML) to a Raspberry Pi using ansible-playbook. The problem is, Ansible doesn't seem to keep the quotes from my config, which is problematic since it contains numbers represented as strings.
Playbook:
- name: Update PIs configs
  hosts: "{{ hosts }}"
  remote_user: pi
  tasks:
    - name: Check if the config directory exists
      stat:
        path: "~/.config/argos"
      register: argos_folder

    - name: Create the directory if it didn't already exist
      file:
        path: "~/.config/argos"
        state: directory
        mode: 0755
      when: not argos_folder.stat.exists

    - name: Create the config file.
      copy:
        content: '{{ config }}'
        dest: /home/pi/.config/argos/argos_config.yaml
I am passing the variables hosts and config via the CLI:
ansible-playbook ~/ansible_playbook.yaml -i ~/hosts --extra-vars "@./vars.yaml"
vars.yaml:
hosts: testprofile
config: |
  default-settings:
    apn: test.com
  tests:
    sms:
      active: true
      endpoint:
        - "123123"
    call:
      active: false
      endpoint:
        - "123"
    tcp:
      active: false
      endpoint: []
The command runs fine, but the copied content:
default-settings:
  apn: test.com
tests:
  sms:
    active: true
    endpoint:
      - 123123
  call:
    active: false
    endpoint:
      - 123
  tcp:
    active: false
    endpoint: []
on the Pi is, as you can see, missing the double quotes around the endpoint values.
Is there a workaround to force Ansible to keep those double quotes when copying the content?
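One possible workaround (a sketch, not from the original thread; the cause appears to be that the templated string is parsed back into data and re-serialized): keep the config in its own file, here hypothetically named argos_config.yaml, and copy it verbatim with src so its content is never templated:

- name: Create the config file (copied verbatim, never templated)
  copy:
    src: argos_config.yaml  # hypothetical local file containing the quoted YAML
    dest: /home/pi/.config/argos/argos_config.yaml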
I have the following config for Metricbeat:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

metricbeat_modules:
  - module: system
    metricsets:
      - cpu
      - load
      - memory
      - network
      - diskio
    enabled: true
    period: 10s
    tags: ['os']
    cpu.metrics: ['percentages']
    core.metrics: ['percentages']

setup.template:
  name: {{ metricbeat_index }}
  pattern: {{ metricbeat_index }}-*
  settings:
    index:
      number_of_shards: 1
      codec: best_compression

tags: [{{ metricbeat_tags | join(', ') }}]

fields:
  env: {{ metricbeat_env }}

output.elasticsearch:
  hosts: {{ metricbeat_output_es_hosts | to_json }}
  index: "{{ metricbeat_index }}-%{+yyyy-MM-dd}"

setup.dashboards.directory: /usr/share/metricbeat/kibana

setup.kibana:
  host: {{ metricbeat_kibana_url }}

processors:
  - drop_fields:
      fields: ["beat.name", "beat.hostname"]

processors:
  - add_host_metadata:
      netinfo.enabled: false

processors:
  - add_cloud_metadata: ~
It worked as expected while I had the metricsets process and process_summary enabled. Since I removed them, it still seems to harvest those metrics. I restarted and stopped/started Metricbeat again, but it has no effect.
Thanks for any ideas, as I cannot see any reason why this should happen :/
I dug a bit more into your issue.
You specify a module config folder with this part of your config:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
If you look into that folder, I'm sure you'll find this:
aerospike.yml.disabled
apache.yml.disabled
ceph.yml.disabled
couchbase.yml.disabled
docker.yml.disabled
dropwizard.yml.disabled
elasticsearch.yml.disabled
envoyproxy.yml.disabled
etcd.yml.disabled
golang.yml.disabled
graphite.yml.disabled
haproxy.yml.disabled
http.yml.disabled
jolokia.yml.disabled
kafka.yml.disabled
kibana.yml.disabled
kubernetes.yml.disabled
kvm.yml.disabled
logstash.yml.disabled
memcached.yml.disabled
mongodb.yml.disabled
munin.yml.disabled
mysql.yml.disabled
nginx.yml.disabled
php_fpm.yml.disabled
postgresql.yml.disabled
prometheus.yml.disabled
rabbitmq.yml.disabled
redis.yml.disabled
system.yml
traefik.yml.disabled
uwsgi.yml.disabled
vsphere.yml.disabled
windows.yml.disabled
zookeeper.yml.disabled
See that system.yml file?
This is the configuration that is loaded.
So you can either remove process from that configuration file (a sketch of a trimmed system.yml follows) or stop using metricbeat.config.modules.path.
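For example, a trimmed modules.d/system.yml that no longer collects process metrics could look like this (a sketch; the stock file ships with more entries):

- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    # process and process_summary removed so they are no longer harvested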
Hope it helped.
Shouldn't you have metricbeat.modules instead of metricbeat_modules?
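If that file is meant to be rendered as metricbeat.yml itself rather than an Ansible vars file, the inline form would look like this (a sketch, assuming the same system metricsets):

metricbeat.modules:
  - module: system
    metricsets:
      - cpu
      - load
      - memory
      - network
      - diskio
    enabled: true
    period: 10s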