How do I configure a remote host's journald.conf file with Ansible?

I am trying to configure journald on a Raspberry Pi using Ansible.
I tried some ansible-galaxy roles, which seemed too complicated and did not deliver in the end.
I am just trying to configure the /etc/systemd/journald.conf file.
Can I do it with ansible.builtin.systemd, or does anyone have other suggestions?

You only need a playbook and a template file.
myproject/changejournald.yml # your playbook
myproject/journald.conf.j2 # a Jinja2 template: the journald.conf as you want it to end up
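For example, a minimal journald.conf.j2 could look like the following. (Storage and SystemMaxUse are just illustrative journald options, and journald_system_max_use is a made-up variable; put whatever settings you actually need here.)

# {{ ansible_managed }}
[Journal]
Storage=persistent
SystemMaxUse={{ journald_system_max_use | default('100M') }}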
In changejournald.yml:
---
- hosts: all # target your Pi's host or group here
  tasks:
    - name: upload new template
      template:
        src: 'journald.conf.j2'
        dest: '/etc/systemd/journald.conf'
      become: true # <-- unless you are connecting as root

    - name: restart systemd-journald
      systemd:
        name: systemd-journald
        state: restarted
      become: true
Something like that should work?
There are also other modules like lineinfile or blockinfile that might be more useful depending on how you intend to configure it.
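For instance, if you only want to change a single setting instead of owning the whole file, a sketch with lineinfile might look like this (SystemMaxUse again being just an example option):

- name: limit journal disk usage
  lineinfile:
    path: /etc/systemd/journald.conf
    regexp: '^#?SystemMaxUse='
    line: 'SystemMaxUse=100M'
  become: true

You would still restart systemd-journald afterwards, as in the playbook above.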
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html
https://unix.stackexchange.com/questions/253203/how-to-tell-journald-to-re-read-its-configuration

Related

Ansible AWX EE and lineinfile module

Hey all!
I'm having some difficulty understanding a particular playbook that we are working on.
Simple scenario: we want to edit a file on the AWX host itself, which is running on k3s. The playbook uses the lineinfile module, and we want to edit a particular file on the host located under /projects.
The playbook executes fine and says that the line has been added. However, what we have noticed is that the playbook appears to run inside the EE pod that is created for the execution environment. Case in point: when we added the backup flag to the playbook, during execution we saw the backup file created on the overlay volume. Then the container is automatically removed and the file is gone.
[root@infraawx WIN_Cluster]# locate WIN_Cluster.csv.
/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/541/fs/runner/project/WIN_Cluster/WIN_Cluster.csv.59.2022-05-18@10:54:28~
The playbook is:
- name: Adding initial line to spreadsheet
  ansible.builtin.lineinfile:
    path: '/projects/vmware/{{ cluster_name | replace(" ",replacer) + "/" + cluster_name | replace(" ",replacer)}}.csv'
    line: "Testing123"
    insertafter: EOF
    backup: yes
    create: yes
    state: present
  remote_user: root
  register: testout
  delegate_to: localhost
Any clues what might be happening?
Thank you!

Ansible pre-check before run playbook

Is it possible to add a condition, before running a playbook, that checks whether the playbook contains a title, a description, the environment, and the version?
For example my test.yml playbook:
---
#Apache servers
#Linux
#Ubuntu
#version 2.1.1
#Testing for secure the webserver
tasks:
  xxxxxx
  xxxxxx
And I would like to check that all of the comments above are present before running the tasks!
I tried this solution:
- name: run Apachetest playbook
  include: test.yml
  when: "{{ lookup('file', 'test.yml').split('\n')[1] == '#Apache servers' }}"
But it is still not working...
BS
Comments are, well, comments. They do not affect execution and are simply ignored, so there is no built-in way, and actually no real reason, to check whether comments are present. You would need to implement that yourself.
To check playbooks, roles, etc. there is ansible-lint, which verifies syntax and some best practices (e.g. flagging a command or shell task where a dedicated module exists), but it does not verify comments (again, checking for comments makes no sense from an execution perspective, as they are ignored).
What I understand is that you want some information to be present in your playbook. If I were you, I would either create a git hook that verifies the information is present before letting you push the code to your repository, or establish a proper review process, where the reviewer only accepts a merge/pull request if the information is present.
Otherwise, here is code that does what you are trying to do:
---
#Apache server
- hosts: all
  tasks:
    - name: set fact
      set_fact:
        pb: "{{ lookup('file', 'test.yml').split('\n')[1] }}"

    - name: check if we found it
      debug:
        msg: 'found'
      when: "'#Apache server' in pb"
You could install Apache with a playbook like this:
---
- hosts: apache
  sudo: yes
  tasks:
    - name: install apache2
      apt: name=apache2 update_cache=yes state=latest
Have a look here: how-to-install-apache-on-ansible

How to consolidate roles and templates to the same playbook

I need to use a provisioning tool called vCommander.
It looks like this tool supports running Ansible locally on the provisioned host only, and you need to provide it with just the playbook to run.
I already have working Ansible roles which include tasks and templates; for example, the ntp role looks roughly like this:
ls -1 roles/ntp/
defaults
LICENSE
meta
README.md
tasks
templates
the main task looks something like this:
cat roles/ntp/tasks/main.yml
---
- name: "Configure ntp.conf"
  template:
    src: "ntp.conf.j2"
    dest: "/etc/ntp.conf"
    mode: "0644"
  become: True

- name: Updating time zone
  shell: tzdata-update

- name: Ensure NTP-related packages are installed.
  package:
    name: ntp
    state: present

- name: Ensure NTP is running and enabled as configured.
  service:
    name: ntpd
    state: started
    enabled: yes
and the template looks like this:
cat roles/ntp/templates/ntp.conf.j2
{{ ansible_managed }}
restrict 127.0.0.1
restrict -6 ::1
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
Is there a way to include the main.yml and the ntp.conf.j2 (template)
and other files included in the roles into one yml file?
Please provide an example.
Are you asking for a playbook that applies your ntp role to localhost?
It would look like this:
cat playbook.yml
- hosts: localhost
  roles:
    - ntp
See docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-roles
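If you really do need everything in one single YAML file, here is a sketch of one way to flatten the role: it reuses the tasks and template shown above, but swaps the template module for copy with an inline content block, so no separate .j2 file is needed. (This is an illustration, not the only way to do it.)

---
- hosts: localhost
  tasks:
    - name: "Configure ntp.conf (template body inlined via copy)"
      copy:
        dest: "/etc/ntp.conf"
        mode: "0644"
        content: |
          {{ ansible_managed }}
          restrict 127.0.0.1
          restrict -6 ::1
          server 127.127.1.0
          fudge 127.127.1.0 stratum 10
          driftfile /var/lib/ntp/drift
          keys /etc/ntp/keys
      become: True

    - name: Updating time zone
      shell: tzdata-update

    - name: Ensure NTP-related packages are installed.
      package:
        name: ntp
        state: present

    - name: Ensure NTP is running and enabled as configured.
      service:
        name: ntpd
        state: started
        enabled: yes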

Is there a better way to conditionally add a user to a group with Ansible?

I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both testing and production.
In testing, the user to add to the rvm group is jenkins; in production it is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block to the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }} shell=/bin/bash groups=rvm append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }} shell=/bin/bash groups=rvm append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
alice.mydomain
[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
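For example, a matching production setup could look like this (the hostname is illustrative; substitute your real production host):

# inventories/production
[web]
bob.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger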
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore variables in a file or folder not named after your environment, with the exception of a file called all, which is intended for common variables shared across environments.
Good luck - Ansible is an amazing tool :)

template task: write to a root-owned directory

I want to copy a template-generated file to the /etc/init.d folder, but the template task doesn't seem to support the sudo parameter.
What is the recommended way to handle this? Should I copy it to a temporary directory and then move the file with sudo?
The playbook task looks like this (Ansible version 1.8.2):
- name: copy init script
  template: src=template/optimus_api_service.sh dest=/etc/init.d/optimus-api mode=0755 force=yes owner=root group=root
I have tested the following playbook and it works.
My setup:
The user vagrant on the machine vm is allowed to execute commands password-free with sudo.
I created a simple template and installed it with the following playbook:
---
- name: Test template
  hosts: vm
  gather_facts: no
  remote_user: vagrant
  vars:
    bla: blub # some variable used in the template
  tasks:
    - name: copy init script
      sudo: yes # << you have to activate sudo
      sudo_user: root # << and provide the user
      template: src=template/test.j2 dest=/opt/test mode=0755 force=yes owner=root group=root
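Note that sudo and sudo_user are the old pre-1.9 keywords; on any current Ansible version the equivalent task would use become and become_user, roughly like this:

- name: copy init script
  become: yes       # replaces sudo: yes
  become_user: root # replaces sudo_user: root
  template: src=template/test.j2 dest=/opt/test mode=0755 force=yes owner=root group=root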
