Promtail not scraping logs from within subdirectories - grafana-loki

I'm having issues getting promtail to scrape files from subdirectories. My filesystem where the logs are stored is structured as follows:
/var/log/building1logs/logfolder/<serialnumber>/<logname with different file endings>
I'd like to be able to scrape all files inside every subdirectory in the logfolder.
Here is my current scrape config:
- job_name: myjob
  static_configs:
    - targets:
        - localhost
      labels:
        job: mylogs
        __path__: /var/log/building1logs/logfolder/**/*
In practice, this only scrapes files that are located at /var/log/building1logs/logfolder - it doesn't dive into any subdirectory located inside logfolder.
What am I doing wrong?
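One thing worth checking: older Promtail releases matched `__path__` with Go's `filepath.Glob`, which does not understand `**`; recursive globbing only works on versions that resolve `__path__` with the doublestar library. If upgrading isn't an option, a workaround sketch (assuming the logs sit at a fixed depth of one subdirectory below logfolder) is to enumerate the depth explicitly:

```
# Workaround sketch for Promtail versions without `**` support:
# match a fixed directory depth instead. Add or remove `/*`
# segments to match the actual nesting under logfolder.
- job_name: myjob
  static_configs:
    - targets:
        - localhost
      labels:
        job: mylogs
        __path__: /var/log/building1logs/logfolder/*/*
```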

Related

How can I provide a templated manifest for a Helm chart dependency?

I have an application deployed with a Helm chart that has dependencies. I have templated YAML manifests in the templates directory for the main chart, but I also need to provide templated manifests for the dependency.
The dependency is a zipped tar file in the charts directory - I believe this is what was pulled in when I ran helm dependency build (or update - I forget which I used). I can manually un-tar this file and access all of the dependent chart's components within, including its templates directory. Can I add the appropriate Go template code to a manifest in there? Will that work and is it good practice? Is there a "better" way to do this?
Here are example files:
Chart.yaml:
apiVersion: v2
name: spoe-staging
type: application
version: 1.0.0
dependencies:
  - name: keycloak
    version: 18.3.0
    repository: https://codecentric.github.io/helm-charts
    condition: keycloak.enabled
values.yaml:
...
keycloak:
  enabled: true
  extraEnv: |
    - name: X509_CA_BUNDLE
      value: "/usr/share/pki/ca-trust-source/anchors/*.crt"
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
  extraVolumeMounts: |
    - name: trusted-certs
      mountPath: /usr/share/pki/ca-trust-source/anchors/
  extraVolumes: |
    - name: trusted-certs
      configMap:
        name: trusted-certs
...
As you can see, the keycloak dependency needs a ConfigMap named trusted-certs, containing certificate information.
This is just one example, there may be other things I may need to templatize at a dependency level. I don't think I should locate the ConfigMap in the main chart templates directory, since it has nothing to do with that chart.
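For reference, the ConfigMap that the `extraVolumes` entry above points at would look roughly like this (a sketch only; the data key `ca.crt` and the certificate body are placeholders, not from the original post):

```
# Sketch of the ConfigMap referenced by extraVolumes above.
# The key name (ca.crt) and the certificate contents are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-certs
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```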

AnsibleFileNotFound error during Ansible playbook execution via AWS SSM

I zipped up an Ansible playbook and a configuration file, pushed the .zip file to S3, and I'm triggering the Ansible playbook from AWS SSM.
I'm getting an AnsibleFileNotFound error: AnsibleFileNotFound: Could not find or access '/path/to/my_file.txt' on the Ansible Controller.
Here is my playbook:
- name: Copies a configuration file to a machine.
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Copy the configuration file.
      copy:
        src: /path/to/my_file.txt
        dest: /etc/my_file.txt
        owner: root
        group: root
        mode: '0644'
      become: true
my_file.txt exists in the .zip file that I uploaded to S3, and I've verified that it's being extracted (via the AWS SSM output). Why wouldn't I be able to copy that file over? What do I need to do to get Ansible to save this file to /etc/ on the target machine?
EDIT:
Using remote_src: true makes sense because the .zip file is presumably unpacked by AWS SSM to somewhere on the target machine. The problem is that this is unpacked to a random temp directory, so the file isn't found anyway.
I tried a couple of different absolute paths - I am assuming the path here is relative to the .zip root.
The solution here is a bit horrendous:
The .zip file is extracted to the machine into an ephemeral directory with a random name which is not known in advance of the AWS SSM execution.
remote_src must be true. Yeah, it's your file that you've uploaded to S3, but Ansible isn't really smart enough to know that in this context.
A path relative to the playbook has to be used if you're bundling configuration files with the playbook.
That relative path has to be interpolated.
So using src: "{{ playbook_dir | dirname }}/path/to/my_file.txt" solved the problem in this case.
Note that this approach should not be used if configuration files contain secrets, but I'm not sure what approach AWS SSM offers for that type of scenario when you are using it in conjunction with Ansible.
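Putting those points together, the working task looks roughly like this (a sketch; it assumes my_file.txt is bundled alongside the playbook inside the .zip, as in the original setup):

```
- name: Copy the configuration file.
  copy:
    # The archive is unpacked into a random temp directory, so resolve
    # the source relative to wherever the playbook actually landed.
    src: "{{ playbook_dir | dirname }}/path/to/my_file.txt"
    dest: /etc/my_file.txt
    remote_src: true   # the file is already on the target machine
    owner: root
    group: root
    mode: '0644'
  become: true
```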

ansible link from directory to directory

I have a playbook that creates a directory, puts content in index.html, and creates a link from /web_hosting to /var/www/html.
The directory is called /web_hosting.
The content is /web_hosting/index.html.
I don't want to change the httpd.conf default web directory to /web_hosting; I just want to use a link.
After running the play when I curl the server I'm not seeing the content from the index.html file.
Can someone help me with my play?
- name: setup webserver and link to folder
  hosts: prod
  tasks:
    - name: create dir
      file:
        path: /web_hosting
        state: directory
        setype: httpd_sys_content_t
        mode: 0775
    - name: install
      yum:
        name: httpd
        state: present
    - name: configure service
      service:
        name: httpd
        state: started
        enabled: true
    - name: create content on index.html
      copy:
        dest: /web_hosting/index.html
        content: "hello from {{ ansible_hostname }}"
    - name: create link
      file:
        src: /web_hosting
        dest: /var/www/html
        state: link
This doesn't sound like an Ansible problem if it is creating the files and not erroring out.
If you manually create a file in /var/www/html/ called "index2.html", can you use curl to see it? If not, then it's definitely NOT an Ansible problem.
If that test works, then look for differences in ownership, SELinux permissions, etc. Then use Ansible to set those properly on your "index.html".
I suspect you might need to enable a "follow links" setting in your webserver configuration. But again, that's not an Ansible issue either - though Ansible could update the configuration file once you figure out what setting(s) to apply.
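If FollowSymLinks does turn out to be the missing Apache option, Ansible could apply it once identified. A hedged sketch (it assumes Apache httpd with a RHEL-style config layout, and that the option is in fact disabled - both are guesses, not facts from the question):

```
# Hypothetical: only relevant if FollowSymLinks is actually disabled;
# the config path assumes a RHEL-style httpd layout.
- name: Ensure httpd may follow symlinks under /var/www/html
  lineinfile:
    path: /etc/httpd/conf/httpd.conf
    insertafter: '^<Directory "/var/www/html">'
    line: '    Options Indexes FollowSymLinks'
- name: Restart httpd to pick up the change
  service:
    name: httpd
    state: restarted
```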

How to split an ansible role's `defaults/main.yml` file into multiple files?

In some Ansible roles (e.g. roles/my-role/) I've got quite a big default variables file (defaults/main.yml). I'd like to split main.yml into several smaller files. Is it possible to do that?
I've tried creating the files defaults/1.yml and defaults/2.yml, but they aren't loaded by Ansible.
The feature I'm describing below has been available since Ansible 2.6, but got a bugfix in v2.6.2 and another (minor) one in v2.7.
To see a solution for older versions, see Paul's answer.
defaults/main/
Instead of creating defaults/main.yml, create a directory — defaults/main/ — and place all YAML files in there.
defaults/main.yml → defaults/main/*.yml
Ansible will load any *.yml file inside that directory, so you can name your files like roles/my-role/defaults/main/{1,2}.yml.
Note, the old file — defaults/main.yml — must not exist. See this GitHub comment.
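Concretely, the role layout changes from a single file to a directory (the file names below are just examples):

```
roles/my-role/
└── defaults/
    └── main/        # replaces defaults/main.yml (which must be deleted)
        ├── 1.yml
        └── 2.yml
```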
vars/main/
By the way, the above solution also works for vars/:
vars/main.yml → vars/main/*.yml
further details
The feature was introduced in v2.6 — git commit, Pull Request, main GitHub issue.
There have been two bugfixes:
v2.7 fix: git commit, Pull Request — backported to v2.6.2: commit, Pull Request
v2.7 fix: git commit, Pull Request, bug discussion
If you aren't using 2.6 (which you probably should be, though I understand that isn't always an option), then you might find include_vars useful.
- name: Include vars of stuff.yaml into the 'stuff' variable (2.2).
  include_vars:
    file: stuff.yaml
    name: stuff

- name: Conditionally decide to load in variables into 'plans' when x is 0, otherwise do not. (2.2)
  include_vars:
    file: contingency_plan.yaml
    name: plans
  when: x == 0

- name: Load a variable file based on the OS type, or a default if not found. Using free-form to specify the file.
  include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_distribution }}.yaml"
    - "{{ ansible_os_family }}.yaml"
    - default.yaml

- name: Bare include (free-form)
  include_vars: myvars.yaml

- name: Include all .json and .jsn files in vars/all and all nested directories (2.3)
  include_vars:
    dir: vars/all
    extensions:
      - json
      - jsn

- name: Include all default extension files in vars/all and all nested directories and save the output in test. (2.2)
  include_vars:
    dir: vars/all
    name: test

- name: Include default extension files in vars/services (2.2)
  include_vars:
    dir: vars/services
    depth: 1

- name: Include only files matching bastion.yaml (2.2)
  include_vars:
    dir: vars
    files_matching: bastion.yaml
Note that this is a task directive, though. It isn't as neat as just being able to include it into the defaults file itself.

Ansible: How to delete a folder and file inside a directory in a single task?

I'm using Ansible 2.3.2.0 and am trying to delete a file and folder inside a directory in one task.
Right now I have this
tasks:
  - name: Removing existing war
    file:
      path: /usr/share/tomcat/webapps/app.war
      state: absent
  - name: Removing existing folder
    file:
      path: /usr/share/tomcat/webapps/app
      state: absent
I cannot simply remove the webapps folder because I do not want to delete other files and folders in there. I want to reduce the number of tasks because I am using Duo push auth and this adds to the deploy time. I've tried looping over files and file globs but for some reason it never works.
http://docs.ansible.com/ansible/latest/playbooks_loops.html#looping-over-files
Simply iterate over the two values:
tasks:
  - name: Removing
    file:
      path: "{{ item }}"
      state: absent
    with_items:
      - /usr/share/tomcat/webapps/app.war
      - /usr/share/tomcat/webapps/app
But it will still create 2 task executions: one for each item.
If you simply want to delete a directory and its contents, just use the file module and pass the path to the directory only:
tasks:
  - name: Removing
    file:
      path: /usr/share/tomcat/webapps/app
      state: absent
See this post: Ansible: How to delete files and folders inside a directory?
and from the ansible docs:
If absent, directories will be recursively deleted, and files or symlinks will be unlinked.
see: http://docs.ansible.com/ansible/latest/file_module.html