Ansible AWX EE and lineinfile module

Hey all!
I'm having some difficulty understanding a particular playbook that we are working on.
Simple scenario: we want to edit a file on the AWX host itself, which is running on k3s. The playbook uses the lineinfile module to edit a particular file located in /projects on that host.
The playbook executes fine and reports that the line has been added. However, we have noticed that the task appears to run inside the EE pod that is created for the execution environment. Case in point: when we added the backup flag to the playbook, we could see the backup file being created on the pod's overlay volume during execution. The container is then automatically removed, and the file is gone.
[root@infraawx WIN_Cluster]# locate WIN_Cluster.csv.
/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/541/fs/runner/project/WIN_Cluster/WIN_Cluster.csv.59.2022-05-18#10:54:28~
The playbook is:
- name: Adding initial line to spreadsheet
  remote_user: root
  ansible.builtin.lineinfile:
    path: '/projects/vmware/{{ cluster_name | replace(" ",replacer) + "/" + cluster_name | replace(" ",replacer)}}.csv'
    line: "Testing123"
    insertafter: EOF
    backup: yes
    create: yes
    state: present
  register: testout
  delegate_to: localhost
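For reference, our working theory is that inside an execution environment, delegate_to: localhost resolves to the ephemeral EE container rather than the machine AWX runs on, which would explain the backup file landing on the container's overlay filesystem. If that is right, a sketch like the one below would be needed instead, assuming the k3s/AWX host is reachable over SSH and present in inventory under the hypothetical name awx_host:
- name: Adding initial line to spreadsheet on the actual host
  ansible.builtin.lineinfile:
    path: '/projects/vmware/{{ cluster_name | replace(" ",replacer) + "/" + cluster_name | replace(" ",replacer)}}.csv'
    line: "Testing123"
    insertafter: EOF
    backup: yes
    create: yes
    state: present
  delegate_to: awx_host # hypothetical inventory entry, not the EE's localhost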
Any clues what might be happening?
Thank you!

Related

How do I configure a remote host's journald.conf file with Ansible?

I am trying to configure journald on a Raspberry Pi using Ansible.
I tried some ansible-galaxy roles, which seemed too complicated and did not deliver in the end.
I am just trying to configure the /etc/systemd/journald.conf file.
Can I do it with ansible.builtin.systemd, or does anyone have other suggestions?
You only need a playbook and a template file.
myproject/changejournald.yml # your playbook
myproject/journald.conf.j2 # a jinja2 template, the 'journald.conf as you want it'
In changejournald.yml:
---
- hosts: all # adjust to your inventory
  tasks:
    - name: upload new template
      template:
        src: 'journald.conf.j2'
        dest: '/etc/systemd/journald.conf'
      become: true # <-- unless you are connecting as root

    - name: reload systemd-journald
      systemd:
        name: systemd-journald
        state: restarted
      become: true
Something like that should work?
There are also other modules like lineinfile or blockinfile that might be more useful depending on how you intend to configure it.
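For example, a minimal lineinfile sketch that flips a single key, assuming all you want to change is one setting (Storage=persistent is just an illustration):
- name: ensure persistent journal storage
  ansible.builtin.lineinfile:
    path: /etc/systemd/journald.conf
    regexp: '^#?Storage='
    line: 'Storage=persistent'
  become: true
  # then restart systemd-journald, as in the playbook above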
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html
https://unix.stackexchange.com/questions/253203/how-to-tell-journald-to-re-read-its-configuration

Jinja template in Ansible treats "dest" as a directory instead of as a file

tl;dr
Ansible's template module treats a template destination as a directory instead of as a file. What am I missing?
Situation:
I am using paulfantom's ansible-restic role in another role, named backups-role. I am testing my role with a dummy playbook in a local lxc environment, which installs a database service and then uses ansible-restic to create a cron job to back it up. For testing purposes, the backup destination is a local restic repo: /var/backup/backups-test. I am using only one restic_repo named backups-test. (Wrong! The variable was actually named {{ ansible_hostname }}/{{ ansible_date_time.epoch }}, which evaluated to something like myhost.local/1551799877. See the answer below.)
Problem
This Ansible task from a stable role (ansible-restic):
- name: Deploy cron script
  template:
    src: 'restic.cron.j2'
    dest: '/etc/cron.d/restic-{{ item.name }}'
    mode: '0640'
  no_log: false
  with_items: '{{ restic_repos }}'
  register: _cronfiles
... fails complaining with:
Destination directory /etc/cron.d/restic-backups-test does not exist
Discussion
What ansible-restic should do here is deploy a cron script based on a template, named restic-backups-test, inside the directory /etc/cron.d/. However, it looks like Ansible interprets the directory as /etc/cron.d/restic-backups-test and the file name as just the epoch timestamp, e.g. 1551799877, contrary to what the Ansible docs themselves show:
# Example from Ansible Playbooks
- template:
    src: /mytemplates/foo.j2
    dest: /etc/file.conf
    owner: bin
    group: wheel
    mode: 0644
I'm afraid it has to do with my environment instead, but I don't know what could make ansible change this behaviour, and my playbook doesn't do black magic.
Some more details
I am running Ansible 2.7.8 with Python 3.5.3 from a Debian Stretch machine, against a Linux container with guest OS Ubuntu Bionic, which has Python 2.7.15rc1 and Python 3.6.7. The python symlink points to python2.7. I have also tried with python3.6, with the same result.
Request
Can you help me make sense of this? In the end I just want it to work without having to modify the upstream role. The timestamp in the filename may be a hint.
I am self-answering because I solved it with the hints of @TheLastProject on GitHub, who simply asked:
What's the value of your restic_repos variable?
And it turned out that the info I provided in the question was wrong. I was setting the variable with a slash / in the middle of the name; I still don't know why. Therefore, when ansible-restic was trying to make sure that /etc/cron.d/{{ repo_name }} existed, it was actually checking /etc/cron.d/myhost.local/1551799877, which clearly didn't exist, and it didn't create it because it only creates files, not intermediate parent directories.
So no magic and no bugs in Ansible!
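In other words, a sketch of the mistake (the values are illustrative, reconstructed from the paths above):
# What I actually had: the rendered repo name contains a slash,
# so the template dest becomes a nested path that the module will not create:
restic_repos:
  - name: "{{ ansible_hostname }}/{{ ansible_date_time.epoch }}" # -> /etc/cron.d/restic-myhost.local/1551799877

# What I meant: a flat name, so dest is a plain file directly under /etc/cron.d/:
restic_repos:
  - name: backups-test # -> /etc/cron.d/restic-backups-test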

Change the path of backup files in an ansible playbook when using backup=yes with the lineinfile module

How can we change the path of the backup files created by the lineinfile module in an ansible playbook with backup=yes? The issue is that the backup files are stored in the same directory, which causes the nginx service to fail every time it is restarted by the handler.
- name: Down1
  lineinfile:
    backup: yes
    state: present
    dest: /etc/nginx/conf.d/new.conf
    regexp: "^ server {{ groups['target'][1] }}:8080;"
    line: " server {{ groups['target'][1] }}:8080 down;"
  when: (groups['target'][0] == inventory_hostname) and (status == "down1")
  notify: Restart nginx
It is, IMHO, currently not possible to change the folder for backup files in Ansible. There is an open issue on GitHub for this problem.
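A common workaround, sketched here under the assumption that dated copies in a separate directory are acceptable, is to take the backup yourself with the copy module just before the edit and drop backup=yes:
- name: Back up nginx config outside conf.d so nginx never loads the copy
  copy:
    src: /etc/nginx/conf.d/new.conf
    dest: "/var/backups/nginx/new.conf.{{ ansible_date_time.iso8601_basic_short }}" # needs gathered facts
    remote_src: yes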

Ansible stop playbook if file present

Is it possible to stop a playbook during its execution if a given file is present on my node, and also output a message explaining why the playbook stopped?
This is to prevent an accidental re-execution of my playbook on a node that already has my application installed, because I generate a password during the install and I don't want to reinitialise that password.
You can use the fail module to force a failure with a custom failure message.
Couple this with a check for a file, using the stat module, and this should work easily enough for you.
A quick example of a one-run playbook might look something like this:
- name: check for foo.conf
  stat: path=/etc/foo.conf
  register: foo

- name: fail if already run on host
  fail: msg="This host has already had this playbook run against it"
  when: foo.stat.exists

- name: create foo.conf
  file: path=/etc/foo.conf state=touch
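As a small variant, the fail task can be expressed with the assert module instead (a sketch; the stat task above is still required):
- name: fail if already run on host
  assert:
    that: not foo.stat.exists
    msg: "This host has already had this playbook run against it"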

How do I use ansible to create IAM users as shown in the documentation?

Setup
I want to use Ansible to configure my IAM users, groups and permissions, but I am having trouble even getting off the ground. I installed the development branch of Ansible (2.1.0) and attempted to run the simple play shown in the example in the docs.
site.yml
# Basic user creation example
tasks:
  - name: Create two new IAM users with API keys
    iam:
      iam_type: user
      name: "{{ item }}"
      state: present
      password: "{{ temp_pass }}"
      access_key_state: create
    with_items:
      - jcleese
      - mpython
I ran the play with the following command:
$ ansible-playbook site.yml
And received the following error:
Error
ERROR! playbooks must be a list of plays
The error appears to have been in '~/aws_kingdom/site.yml': line 2, column 1, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
# Basic user creation example
tasks:
^ here
I will plead ignorance here: I lack an understanding of the anatomy of a playbook, especially one that should effectively have no hosts, since it only applies to creating users in the AWS IAM service.
References
http://docs.ansible.com/ansible/iam_module.html
You still need to tell Ansible which hosts it needs to run on; in this case it just needs to run locally.
So instead your site.yml file should look like:
- hosts: 127.0.0.1
  connection: local
  tasks:
    # Basic user creation example
    - name: Create two new IAM users with API keys
      iam:
        iam_type: user
        name: "{{ item }}"
        state: present
        password: "{{ temp_pass }}"
        access_key_state: create
      with_items:
        - jcleese
        - mpython
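As a usage note, the iam module also needs the boto library installed and AWS credentials available (for example via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables) on the machine where the play runs, which here is your local one.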
I encountered the
ERROR! playbooks must be a list of plays
error myself, and after double-checking everything I couldn't find the cause.
By accident I found that removing any trailing whitespace and/or newlines from my playbook.yml fixed the issue.
It's weird, because I had validated my YAML with a linter before stumbling on this fix, so I can't understand why it worked.
Admittedly, I don't have much experience with YAML, so there might be some rule I don't understand that I'm missing.
