I need to create a configuration file in YAML with an Ansible role. This is an example of what I want:
filebeat.inputs:
- type: log
  paths:
    - /var/log/system.log
    - /var/log/wifi.log
In the Ansible directory structure, in the templates dir, I have a file with this content:
filebeat.inputs:
- type: {{filebeat_input.type}}
{{ filebeat_input.paths | to_yaml}}
In the defaults directory I have this main.yml:
filebeat_create_config: true
filebeat_input:
  type: log
  paths:
    - "/var/log/*.log"
    - "/var/somelog/*.log"
When I run ansible-playbook with this role, I get this:
filebeat.inputs:
- type: log
[/var/log/*.log, /var/sdfsadf]
Where am I wrong? What do I need to change, and where, to get exactly what I want (see the example above)?
Thanks for any help!
filebeat_input.paths is a list, so when you pass it to to_yaml you get a list. You just need to structure your YAML template so that you're putting the list in the right place, e.g.:
filebeat.inputs:
- type: {{ filebeat_input.type }}
  paths: {{ filebeat_input.paths | to_yaml }}
Note that:
paths: [/var/log/*.log, /var/sdfsadf]
Is exactly equivalent to:
paths:
  - /var/log/*.log
  - /var/sdfsadf
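Note that to_yaml emits the flow-style form here. If you want the rendered file to use the block-style list from your example instead, a for loop in the template is one option (a sketch, not tested):
filebeat.inputs:
- type: {{ filebeat_input.type }}
  paths:
{% for path in filebeat_input.paths %}
    - {{ path }}
{% endfor %}
Both forms parse to the same data, so the choice between them is purely cosmetic.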
I have a simple problem: my log files have timestamps in their names, e.g.:
/var/log/html/access-2021-11-27.log
/var/log/html/access-2021-11-28.log
/var/log/html/access-2021-11-29.log
Promtail scrapes these, but it does not "see" that access-2021-11-28.log is a continuation of access-2021-11-27.log. So on the 28th it will "detect" a new log file access-2021-11-28.log and no longer show access-2021-11-27.log. I would like to see just "access.log" with data spanning several days.
I would assume this should be a well-known scenario, but I cannot find anything on this on the Internet.
The only way is to change the log configuration of the application that is generating the logs, so that it writes to a single access.log instead of the access-xxxx-xx-xx.log naming scheme. Unfortunately, this is not always possible.
But...
The old files can still be shown; it only depends on the time range used in the query. You can use regular expressions in the query to match all of them, like in this example:
{filename=~".*JIRA_INSTALL/logs/access_log\\..*"}
If you want to statically override the filename field, you can do something as simple as this:
scrape_configs:
  - job_name: system
    static_configs:
      - labels:
          job: remotevarlogs
          __path__: /var/log/html/access-*.log
    pipeline_stages:
      - match:
          selector: '{job="remotevarlogs"}'
          stages:
            - static_labels:
                filename: '/var/log/html/access.log'
For those of you searching for how to dynamically change the file path prefix: for example, I'm using FreeBSD jails to nullfs-mount my logs from other jails into a promtail jail, and I don't want the local mount location (/mnt/logs/<hostname>) to show up as part of the path. A shared folder could similarly be mounted with NFS or Docker.
scrape_configs:
  - job_name: system
    static_configs:
      - labels:
          job: remotevarlogs
          __path__: /mnt/logs/*/**/*.log
    pipeline_stages:
      - match:
          selector: '{job="remotevarlogs"}'
          stages:
            - regex:
                source: filename
                expression: "/mnt/logs/(?P<host>\\S+?)/(?P<relativepath>\\S+)"
            - template:
                source: host
                template: '{{ .Value }}.mylocaldomain.com'
            - template:
                source: relativepath
                template: '/var/log/{{ .Value }}'
            - labels:
                host:
                filename: relativepath
            - labeldrop:
                - job
                - relativepath
The /etc/fstab entry for the loki jail that passes in the /var/log/ directory from the grafana jail:
# Device Mountpoint FStype Options Dump Pass#
...
/jails/grafana/root/var/log/ /jails/loki/root/mnt/logs/grafana nullfs ro,nosuid,noexec 0 0
...
Now when I browse the logs, instead of seeing /mnt/logs/grafana/nginx/access.log, I see /var/log/nginx/access.log from grafana.mylocaldomain.com.
I wanted to log the content of a variable to a file. I did it like this:
- name: write infraID to somefile.log
  copy:
    content: "{{ infraID.stdout }}"
    dest: somefile.log
And it worked. But in the documentation for copy, I found that they recommend using template instead of copy in such cases:
For advanced formatting or if content contains a variable, use the
ansible.builtin.template module.
I went through the examples in the documentation for the template module, but I was unable to figure out something that works in my scenario. Could you please show me how to do this properly, in the recommended way?
Thanks in advance! Cheers!
The template module does not have a content property.
You have to create a file that contains your template, for example:
templates/infra-id-template
{{ infraID.stdout }}
playbook
---
- hosts: localhost
  tasks:
    - name: Get infra ID
      shell: echo "my-infra-id"
      register: infraID

    - name: Template the file
      template:
        src: infra-id-template
        dest: infra-id-file
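Run against localhost, this should leave infra-id-file next to the playbook, containing the registered output (a quick check, assuming a local connection):
$ ansible-playbook playbook.yml
$ cat infra-id-file
my-infra-id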
I'm attempting to use Ansible to better manage my Kubernetes ConfigMaps in a multi-environment project (dev, stage, and prod). I've generalized each of the ConfigMaps as j2 templates, and I override the variables depending on how they change in different environments (so that they aren't duplicated three times for basically the same file).
My playbook currently looks something like this:
---
- hosts: localhost
  vars_files:
    - "vars/{{ env }}.yml"
  tasks:
    - name: Generate YAML from j2 template
      template:
        src: templates/foo.j2
        dest: output/foo.yml
And this has been working great for testing so far. However, I'm at the point where I want to incorporate this into my already existing Jenkins CI/CD, but I'm having trouble understanding how it might work with what I am doing currently.
After generating what is basically a Kubernetes ConfigMap from the j2, I'll somehow do this within Jenkins:
kubectl apply -f <yaml>
However, the playbook is creating a YAML file every time I run it, and I am wondering if there is an alternative that would allow me to pipe the contents of the YAML file or somehow retrieve it from stdout.
Basically, I want to evaluate the template and retrieve it without necessarily creating a file.
If I do this, I could do something like the following:
echo result | kubectl apply -f -
where result is of course the contents of the YAML file produced by the templating, and the lone dash after the -f flag tells kubectl to read the manifest from stdin.
Sorry for so much explaining, I can clarify anything if needed.
I would like to retrieve the result of the template, and pipe it into that command, such as "echo result | kubectl apply -f -"
In which case, you'd use the stdin: parameter of the command: module:
- name: generate kubernetes yaml
  command: echo "run your command that generates yaml here"
  register: k8s_yaml

- name: feed the yaml to kubectl apply
  command:
    cmd: kubectl apply -f -
    stdin: '{{ k8s_yaml.stdout }}'
It isn't super clear what the relationship is in your question between the top part, dealing with template:, and the bottom part about apply -f -, but if you mean "how can I render a template to a variable, instead of a file?" then the answer is the template lookup plugin:
- name: render the yaml
  set_fact:
    k8s_yaml: '{{ lookup("template", "templates/foo.j2") }}'

- name: now feed it to apply
  command:
    cmd: kubectl apply -f -
    stdin: '{{ k8s_yaml }}'
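If you don't need the intermediate variable, both steps collapse into a single task (a sketch under the same assumptions, untested):
- name: render the template and apply it in one step
  command:
    cmd: kubectl apply -f -
    stdin: '{{ lookup("template", "templates/foo.j2") }}'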
You've got a couple of options here. I usually try to stay away from shelling out to command wherever possible. Check out the k8s module in Ansible. Note that as long as state is present, Ansible will patch your object.
- name: Apply your previously generated configmap if you so choose.
  k8s:
    state: present
    definition: "{{ lookup('file', '/output/foo.yml') }}"
Or, even better, you could just create the ConfigMap directly:
- name: Create the configmap for {{ env }}
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: ConfigMap
        namespace: "{{ foo_namespace }}"
        labels:
          app: bar
          environment: "{{ bizzbang }}"
I am getting the below error when I try to run the following code:
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/home/shanthi/ansible-5g/roles/ymlRoles/tasks/main.yml': line 5, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
-name: adding yml files
shell: echo '
^ here
Here is the code:
- name: adding yml files
  shell: echo '
    ops-center:
      product:
        autoDeploy: true
      helm:
        api:
          release: cnee-ops-center
          namespace: cnee
          repository:
            url: http://engci-maven-master.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
            name: amf
    ' > amf.yml
You should avoid using the shell module. Instead, use the copy module to copy a file to remote hosts.
Contents of /path/to/local/amf.yml
ops-center:
  product:
    autoDeploy: true
  helm:
    api:
      release: cnee-ops-center
      namespace: cnee
      repository:
        url: http://engci-maven-master.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
        name: amf
Playbook task
- name: Copy amf.yml to host
  copy:
    src: /path/to/local/amf.yml
    dest: /path/to/remote/amf.yml
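Alternatively, if you'd rather keep the YAML inline in the playbook instead of in a separate file, the copy module's content parameter with a block scalar avoids the quoting problems of shell entirely (a sketch, untested):
- name: Write amf.yml on the remote host
  copy:
    dest: /path/to/remote/amf.yml
    content: |
      ops-center:
        product:
          autoDeploy: true
        helm:
          api:
            release: cnee-ops-center
            namespace: cnee
            repository:
              url: http://engci-maven-master.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
              name: amf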
I have the Filebeat RPM installed on a Unix server, and I am attempting to read three files with multiline logs. I know a bit about multiline matching in Filebeat, but I am wondering if it's possible to have separate matching for three separate logs.
Thanks
You basically need multiple prospectors, one per log. Example (not tested):
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app1/file1.log
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
- input_type: log
  paths:
    - "/var/log/app2/file2.log"
- input_type: log
  paths:
    - "/var/log/app3/file3.log"
negate: true and match: after specify that any line that does not match the given pattern belongs to the previous line.
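Note that filebeat.prospectors and input_type are the older configuration keys; in current Filebeat releases the same setup is written with filebeat.inputs and type, e.g. for the first log (a sketch, untested):
filebeat.inputs:
- type: log
  paths:
    - /var/log/app1/file1.log
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after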
References
https://www.elastic.co/guide/en/beats/filebeat/current/multiple-prospectors.html
https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html