Ansible: ensure directory ownership / permissions across all servers

Given a bunch of servers, A..Z, what's the best way of ensuring that all /dir/path directories on these servers (where these directories exist) are set to a certain owner, group and mode?
I can see how to do it for a specific role, e.g.
- name: Add apache vhosts configuration.
  template:
    src: "vhosts.conf.j2"
    dest: "{{ apache_conf_path }}/{{ apache_vhosts_filename }}"
    owner: root
    group: root
    mode: 0644
  notify: restart apache
  when: apache_create_vhosts
but how do you do it across a whole range of servers?

http://docs.ansible.com/ansible/latest/file_module.html
- name: Change ownership of the folder
  file:
    state: directory
    recurse: yes
    path: "{{ folder }}"
    mode: "{{ desired_mode }}"
Execute the task on all the systems you want changed.
Obviously, run it as the necessary user; if that's root, make sure you specify owner and group if needed.
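If you only want to touch servers where /dir/path actually exists (as the question implies), a minimal sketch would be to stat the path first and make the file task conditional; the owner, group and mode values below are placeholders:
- name: Check whether the directory exists
  stat:
    path: /dir/path
  register: dir_stat

- name: Enforce owner, group and mode where the directory exists
  file:
    path: /dir/path
    state: directory
    recurse: yes
    owner: someowner   # placeholder
    group: somegroup   # placeholder
    mode: '0755'       # placeholder
  when: dir_stat.stat.isdir is defined and dir_stat.stat.isdir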
Forgive me if this seems a bit basic, but sometimes it's nice to have the obvious reiterated - http://docs.ansible.com/ansible/latest/intro_inventory.html
To run against a list of servers, put them in a group in your inventory file and call the task on that group as hosts.
Hosts file example:
[targetServers]
host1
host2
# etc
then in your main.yml
- name: do the thing
  hosts: targetServers
  # etc
then ansible-playbook -i hosts -v main.yml or some such.

Related

ansible copy file based on the group name

I have a task which copies a default script file to the destination:
- name: copy keepalive state script
  copy:
    src: 'keepalived.state.sh'
    dest: /usr/local/bin/keepalived.state.sh
    mode: '0755'
    owner: root
    group: root
And now I want to have scripts based on the group that the host is in, so I changed the task to this:
- name: copy keepalive state script
  copy:
    src: '{{ keepalived_state_script_file }}'
    dest: /usr/local/bin/keepalived.state.sh
    mode: '0755'
    owner: root
    group: root
  vars:
    keepalived_state_script_file: "{{ lookup('first_found', dict(files=['keepalived/' + item + '.state.sh', 'keepalived.state.sh'])) }}"
  with_items: "{{ group_names }}"
So now, for a host inside the application group, if I put the application.state.sh file inside the keepalived directory, it will copy this file instead of the default keepalived.state.sh. But if my host is a member of more than one group, like this:
[application]
host1
[dc1]
host1
this task first checks the application group and copies application.state.sh, then checks the dc1 group, and since there is no dc1.state.sh it copies the default script file, keepalived.state.sh, over it.
Considering I will use only one script file per host no matter how many groups that host is a member of, how can I fix this so that I get the customized script instead of the default one?
Your solution is way too complicated. You are trying to reinvent a feature that Ansible has already implemented: group vars.
Put the files application.yml and dc1.yml in the group_vars directory.
For the application group you set one value:
files_to_copy:
  - file1
  - file2
And for the dc1 group you set other values:
files_to_copy:
  - file3
  - file4
And in your playbook you just iterate over the files_to_copy variable. Ansible will take care that every host gets the right values based on its group membership.
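A minimal sketch of that layout, reusing the file names above and assuming a destination directory of /usr/local/bin/:
# group_vars/application.yml
files_to_copy:
  - file1
  - file2

# group_vars/dc1.yml
files_to_copy:
  - file3
  - file4

# task in the playbook
- name: copy group-specific files
  copy:
    src: "{{ item }}"
    dest: "/usr/local/bin/{{ item }}"
    mode: '0755'
    owner: root
    group: root
  with_items: "{{ files_to_copy }}"
If a host is in both groups, normal variable precedence decides which list wins, so each host still ends up with exactly one files_to_copy definition.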

Ansible with host specific files, but fallback to default files

I'm currently trying to get used to Ansible but I'm failing to achieve what seems to be a common use-case:
Let's say I have a role nginx in roles/nginx, and one task is to set up a custom default page:
- name: install nginx default page
  copy:
    src: "index.html"
    dest: /var/www/html/
    owner: root
    mode: 0644
Ansible will look for the file in:
roles/nginx/files
roles/nginx
roles/nginx/tasks/files
roles/nginx/tasks
files
./
Now for some reason a single host should receive a completely different file.
I know I could alter the file src path to src: "{{ inventory_hostname }}/index.html" but then it would search in host-specific directories only.
Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall-back to common directories?
I don't want to decide if files might need to be host-specific when writing roles. I'd rather like to overwrite the role default files without altering the base role at all.
Q: "Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall back to common directories?"
A: In general, it is not possible to change the search paths. But with the first_found lookup it is possible to define where a specific file should be searched for. For example,
- copy:
    src: "{{ lookup('first_found', findme) }}"
    dest: /scratch/tmp/
    owner: root
    mode: 0644
  vars:
    findme:
      - "{{ inventory_hostname }}/index.html"
      - "{{ role_path }}/files/index.html"
      - "{{ role_path }}/files/defaults/index.html"

Using Host Group as Variable in Ansible Task

I'm working on putting together a playbook that will deploy local facts scripts to various groups in my Ansible inventory, and I would like to be able to utilize the group name being worked on as a variable in the tasks themselves. Assume for this example that I have the traditional Ansible roles directory structure on my Ansible machine, and I have subdirectories under the "files" directory called "apache", "web", and "db". I'll now illustrate by example, ...
---
- hosts: apache:web:db
  tasks:
    - name: Set facts for facts directories
      set_fact:
        facts_dir_local: "files/{{ group_name }}"
        facts_dir_remote: "/etc/ansible/facts.d"
    - name: Deploy local facts
      copy:
        src: "{{ item }}"
        dest: "{{ facts_dir_remote }}"
        owner: ansible
        group: ansible
        mode: 0750
      with_fileglob:
        - "{{ facts_dir_local }}/*.fact"
The goal is to have {{ group_name }} above take on the value of "apache" for the hosts in the apache group, "web" for the hosts in the web group, and "db" for the hosts in the db group. This way I don't have to copy and paste this task and assign custom variables for each group. Any suggestions for how to accomplish this would be greatly appreciated.
While there is no group_name variable in ansible, there is a group_names (plural). That is because any host may be part of one or more groups.
It is described in the official documentation as
group_names
List of groups the current host is part of
In the most common case each host would only be part of one group and so you could simply say group_names[0] to get what you want.
TLDR;
Use group_names[0].
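Applied to the playbook above, that would look roughly like this (same paths and ownership as in the question, set_fact no longer needed):
---
- hosts: apache:web:db
  tasks:
    - name: Deploy local facts
      copy:
        src: "{{ item }}"
        dest: /etc/ansible/facts.d/
        owner: ansible
        group: ansible
        mode: 0750
      with_fileglob:
        - "files/{{ group_names[0] }}/*.fact"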
You can use group variables to achieve this, either specified in the inventory file or in separate group_vars/<group> files - see https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
Note though that these group variables are squashed into host variables when you run the playbook. Hosts that are listed in multiple groups will end up with variables based on a precedence order.
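For example, hypothetical group_vars files could set the directory per group, and the copy task would then just reference facts_dir_local directly:
# group_vars/apache.yml (hypothetical)
facts_dir_local: files/apache

# group_vars/web.yml (hypothetical)
facts_dir_local: files/web

# group_vars/db.yml (hypothetical)
facts_dir_local: files/db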

Organizing/optimizing logic in ansible playbooks

I have the Ansible playbook below. It does its job, but I would like to know if it can be improved regarding maintenance, redundancy, readability, formatting, etc.
I am a bit concerned that my current approach will result in some huge, messy playbooks, so any advice or recommendations to make this more comprehensible are most welcome.
---
# Below will do:
#
# 1) Install nano
# 2) Create 2 users with password, home dir and add to sudoers
# 3) Set password for root user
# 4) Copy private/public key pair and authorized_keys to users home dir.
- hosts: cont
  any_errors_fatal: true
  user: root
  vars:
    password: $6$BqaK91TChphw6$EJRKoOD87VneNhASOh25b7sPg4xVzmE3noeXwgJGhTfs6ROVlh4ptLcXrBpRSAQ.9TdqOCzJmvNmQAdLVl5OR.
    root_password: $6$BqaK91TChphw6$haQjB0BdF6pAfUe5FicDM8w.rC34WX2a5y0Tvt1xdJLZVPRmGsphh2Pj.1HIiynCPAkJHPBQJe1PV0utVJ1781
    users:
      - username: usera
      - username: userb
  tasks:
    - name: Install the package "nano"
      apt:
        name: nano
    - name: Change password for root user
      user: name=root
            password={{root_password}}
    - name: Add users | create users, shell, home dirs
      user: name={{ item.username }}
            groups="sudo"
            password={{password}}
            shell=/bin/bash
            createhome=yes
            comment='created with ansible'
      with_items: '{{users}}'
    - name: Copy private/public key to home dir for users
      copy: src=../linux-files/user/.ssh
            dest=/home/{{ item.username }}/
            owner={{ item.username }}
            group={{ item.username }}
      with_items: '{{users}}'
    - name: Copy private/public key to home dir for root
      copy:
        src: ../linux-files/root/.ssh
        dest: /home/root/
You can divide your playbook into several files to make it more scalable and organized. Instead of grouping all your tasks inside one file, you can create a directory named tasks and include them in a main playbook. The Ansible best practices documentation has a full example, but in your case you can go as simple as:
vars: directory containing your vars
main.yaml: your actual playbook
roles: directory containing your roles
tasks: directory containing your task files, pulled in with import_tasks
You can even import other playbooks inside your main playbook if need be. It will depend on your objective.
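A minimal sketch of that kind of split; the file names are only illustrative:
# Layout:
#   main.yaml
#   vars/users.yaml
#   tasks/packages.yaml
#   tasks/users.yaml

# main.yaml
- hosts: cont
  any_errors_fatal: true
  user: root
  vars_files:
    - vars/users.yaml
  tasks:
    - import_tasks: tasks/packages.yaml
    - import_tasks: tasks/users.yaml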

How do I save an ansible variable into a temporary file that is automatically removed at the end of playbook execution?

In order to perform some operations locally (not on the remote machine), I need to put the content of an ansible variable inside a temporary file.
Please note that I am looking for a solution that takes care of generating the temporary file to a location where it can be written (no hardcoded names) and also that takes care of the removal of the file as we do not want to leave things behind.
You should be able to use the tempfile module, followed by either the copy or template modules. Like so:
- hosts: localhost
  tasks:
    # Create a file named ansible.{random}.config
    - tempfile:
        state: file
        suffix: config
      register: temp_config

    # Render template content to it
    - template:
        src: templates/configfile.j2
        dest: "{{ temp_config.path }}"
      vars:
        username: admin
Or if you're running it in a role:
- tempfile:
    state: file
    suffix: config
  register: temp_config

- copy:
    content: "{{ lookup('template', 'configfile.j2') }}"
    dest: "{{ temp_config.path }}"
  vars:
    username: admin
Then just pass temp_config.path to whatever module you need to pass the file to.
It's not a great solution, but the alternative is writing a custom module to do it in one step.
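The question also asks for the file to be removed automatically at the end of the run; one way to sketch that is a block with an always section, so the file is deleted even if a task fails (the command task in the middle is just a placeholder for your local work):
- hosts: localhost
  tasks:
    - block:
        - tempfile:
            state: file
            suffix: config
          register: temp_config

        - template:
            src: templates/configfile.j2
            dest: "{{ temp_config.path }}"

        - name: Do the local work that needs the file (placeholder)
          command: cat {{ temp_config.path }}

      always:
        - name: Remove the temporary file
          file:
            path: "{{ temp_config.path }}"
            state: absent
          when: temp_config.path is defined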
Rather than doing it with a file, why not just use the environment? This way you can easily work with the variable, it will stay alive through the Ansible session, and you can easily retrieve it in any step or outside of them.
Although using the shell/application environment is probably the simpler option, if you specifically want to use a file to store variable data you could do something like this:
- hosts: server1
  tasks:
    - shell: cat /etc/file.txt
      register: some_data
    - local_action: copy dest=/tmp/foo.txt content="{{some_data.stdout}}"

- hosts: localhost
  tasks:
    - set_fact: some_data="{{ lookup('file', '/tmp/foo.txt') }}"
    - debug: var=some_data
As for your requirement to give the file a unique name and clean it up at the end of the play, I'll leave that implementation to you.
