Ansible lookup file plugin with variable inside the file - ansible

Hi guys,
After a few days of struggling, I've decided to write my issue here.
I have an Ansible (2.7) task that has a single variable, which points to a host var that uses the file lookup plugin.
The thing is, this works for one host, but I have 6 hosts, and a value inside the looked-up file should be different for each of the hosts.
Can you pass a variable inside the file that is looked up?
I'm new to Ansible and don't master it fully.
Has someone encountered this before?
Task:
- name: Copy the file to its directory
  template:
    src: file.conf
    dest: /path/to/file
  vars:
    file_contents: "{{file_configuration}}"
-----
hostvar file:
file_configuration:
- "{{lookup('file', './path/to/file') | from_yaml}}"
----
file that is looked up:
name: {{ value that should be different per host }}
driver:
  long_name: unchanged value.

You should have 6 host_vars files, one for each host. In each host_vars file, set your desired value.
See the Ansible documentation on host variables.
E.g.
https://imgur.com/a/JCbnNBT
Content of host1.yml
---
my_value: something
Content of host2.yml
---
my_value: else
Ansible automagically sees the host_vars folder. It looks in that folder and searches for files which exactly match a host in the play.
So ensure your host_vars/filename.yml matches the hostname in your play!
If there is a match, then it'll use that .yml file for that specific host.
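To tie this back to the question: make the looked-up file a template instead, and reference the per-host variable inside it. A minimal sketch, assuming the variable is named my_value as in the host_vars files above (the file layout is illustrative):
templates/file.conf:
name: {{ my_value }}
driver:
  long_name: unchanged value.
Task:
- name: Copy the file to its directory
  template:
    src: file.conf
    dest: /path/to/file
The template module renders Jinja2 per host, so each of the 6 hosts gets its own value of my_value without any file lookup at all.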

Related

Find a file and rename it ansible playbook [duplicate]

So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically, I launched a playbook with logrotate to fix the growing-logs problem, and among those logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident. Logrotate renamed it by appending a date, thereby breaking the log. Now I want a playbook that will find the log named btmp in the /var/log directory and rename it back. The problem is that the filename is currently different on each server, for example one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this problem, however that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat
  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However, this results in an error saying the file was not found. So my question is: how does one get the wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW, all servers are CentOS 7.
I will add my own solution as well, even though the solution in the accepted answer is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
And now create a playbook to run this on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777
    - name: Execute the script
      command: sh /tmp/bashscript.sh
    - name: Delete the script
      command: rm /tmp/bashscript.sh
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way to do it (but you would need shell in place of command, as the latter does not support wildcards) - a quick sketch of that variant follows.
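A minimal sketch of the shell variant (assumption: exactly one btmp-* file exists per server, since mv with multiple matched sources and a single-file destination would fail):
- name: Rename btmp back using shell globbing
  shell: mv /var/log/btmp-* /var/log/btmp
That said, here is how I would solve the problem with the find module: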
First create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - this would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next create a task to copy the btmp* file to btmp. Now you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as that simply keeps overwriting the file every time. Instead, let's assume you want only the newest file that matched - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute of the files dictionary, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
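If you want to sanity-check the chain before copying anything, a small optional debug task (using the same registered variable as above) prints the path it selects:
- name: Show which file would be selected (optional debug aid)
  ansible.builtin.debug:
    msg: "{{ find_btmp.files | sort(attribute='mtime', reverse=true) | map(attribute='path') | first }}"
  when: find_btmp.matched > 0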
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy - this can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. from the module names (i.e. ansible.builtin.copy becomes copy.)
Hope this helps you out!

Ansible task has problems running on delegated host when filename matches hostname

I have a playbook in which one of the tasks copies template files to a specific server named "grover", using delegate_to.
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have names that match each server's hostname, and the delegate server grover also must have its own instance of said file. This template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually in addition to /tmp/grover on server grover, after the run there should also exist: /tmp/bigbird and /tmp/oscar also on server grover.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but then it also clobbers/overwrites /tmp/grover which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and then a conditional at the template task to skip grover if the file already exists, it not only skips grover, it also skips every other server whose file would be copied to grover in that task. If I try to set it to run on every server BUT grover, it still fails, because the delegate server is grover and that would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:
    - name: File clobber check.
      stat:
        path: /tmp/{{ansible_hostname}}
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: /tmp/{{ansible_hostname}}
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: ( not clobber_check.stat.exists ) and ( clobber_check is defined )
If I run that, and /tmp/grover exists on grover, then it will simply skip the entire copy play because the conditional failed on grover. Thus the other servers will never have their /tmp/bigbird and /tmp/oscar templates copied to grover due to this problem.
Lastly, I'd like to avoid hacky solutions like saving a backup of grover's original config file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here, I can't have been the only person to run into this scenario. Anyone have any idea on how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items, used as it is here, does allow you to delegate_to a host-pattern group, it also means variables can't be defined/assigned properly to the hosts within that group.
Defining a single host entity for delegate_to fixes this issue and then the tasks execute as expected on all hosts defined.
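For illustration, a minimal sketch of the corrected copy task - this assumes grover is the literal inventory hostname, and that the stat task likewise drops its loop and delegates straight to grover:
- name: Copy templates to grover.
  template:
    backup: yes
    src: /opt/template
    dest: /tmp/{{ansible_hostname}}
    group: root
    mode: "u=rw,g=rw"
    owner: root
  delegate_to: grover
  when: not clobber_check.stat.exists
With a single delegate host, clobber_check registers cleanly per play host, so the when clause skips only the hosts whose files already exist on grover.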
Konstantin Suvorov should get the credit for this, as he was the original answer-er.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.

Ansible: list all hosts to add it to 'hosts' file

I set up a group of test VMs (with pure Debian) to play with; all of them are on a test network, so it is a self-contained group of VMs. These VMs will be destroyed soon, so I don't want to add their names to DNS, but I still need them to communicate over hostnames.
Is there any way I can build the /etc/hosts files on these VMs based on the host IPs from the Ansible hosts file?
I can set hostnames like this:
192.168.0.5 myhostname=host01.local
192.168.0.6 myhostname=host02.local
so I think I can somehow form /etc/hosts like this:
{{ ip }} {{ myhostname }}
but I can't achieve that. I think I can iterate over groups['all'] and use lineinfile to add lines to /etc/hosts, but it hasn't worked for me so far.
You can use the inventory_file variable to get the path and filename of the hosts file:
Also available, inventory_dir is the pathname of the directory holding Ansible’s inventory host file, inventory_file is the pathname and the filename pointing to the Ansible’s inventory host file.
Reference
You can read its variables with something like:
tasks:
  - local_action: shell cat {{ inventory_file }}
    register: result
As recommended here. More on local actions here.
You can then use Jinja2 filters and loops to process the variable.
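For the /etc/hosts use case specifically, you don't even need to read the file: the inventory is already loaded into the groups and hostvars variables. A minimal sketch of the lineinfile approach the asker mentions, assuming (as in the example inventory above) that the inventory hostnames are the IPs and every host defines myhostname:
tasks:
  - name: Add every inventory host to /etc/hosts
    lineinfile:
      path: /etc/hosts
      line: "{{ item }} {{ hostvars[item]['myhostname'] }}"
    with_items: "{{ groups['all'] }}"
Here item is the inventory hostname (an IP in this inventory), and myhostname comes from the variable set on each inventory line.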

Warning while constructing a mapping in Ansible

Whenever I run my playbook the following warning comes up:
[WARNING]: While constructing a mapping from /etc/ansible/roles/foo/tasks/main.yml, line 17, column 3, found
a duplicate dict key (file). Using last defined value only.
The relevant part of my main.yml in the tasks folder is like this:
(line 17 is the task to clean the files which seems a bit off so I guess the problem is with the previous "script" line)
- name: Run script to format output
  script: foo.py {{ taskname }} /tmp/fcpout.log
- name: Clean temp files
  file: path=/tmp/fcpout.log state=absent
And my vars file:
---
my_dict: {SLM: "114", Regular: "255", Production: "1"}
taskid: "{{my_dict[taskname]}}"
To run my playbook I do:
ansible-playbook playbooks/foo.yml --extra-vars "server=bar taskname=SLM"
What I'm trying to do is take the command-line arguments, set hosts: with the "server" parameter, get the taskname, and from that find out which id it refers to. This id is used as the first input to my Python script, which runs remotely.
The playbook works fine, but I don't understand why I get a warning. Could anyone explain what is wrong here?
Are you sure there is not more around line 17? This warning is triggered when there are two identical keys in a task (or, in general, anywhere in a dict).
The warning claims there are two file keys, suggesting the task looks like this:
- name: Clean temp files
  file: ...
  file: ...
A common mistake is that people forget to start a new list item for the next task. The following would be valid, while the above is not:
- name: Clean temp files
  file: ...
- file: ...
I have noticed Ansible sometimes gets the lines or even the files wrong in error messages. I have seen it complain about tasks/main.yml while the problem actually was in handlers/main.yml. If no task with duplicate file keys can be found near that line, search the whole file or even other files for it. If there is nothing like this anywhere to be found, then it would appear you found a bug in Ansible. In that case you should report it on GitHub.
I faced this warning because of duplicate options in a module. For example, I accidentally used the "host" option twice in the module definition, like below:
- name: Create NEW DB User
  mysql_user:
    name: NEW USER NAME
    login_user: root
    login_password: Root Passwd
    password: NEW USER'S PASSWD
    host: localhost
    priv: 'DB NAME.*:ALL,GRANT'
    state: present
    host: localhost
The warning disappeared after omitting one of the host options.

Finding file name in files section of current Ansible role

I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I run the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src=*.txt
    dest=/path/to/fixedname.txt
Ansible doesn't allow wildcards here, so when I had the tasks in the main playbook itself, I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile
- name: copy file
  copy:
    src="files/{{ item }}"
    dest=/path/to/fixedname.txt
  with_items:
    - myfile.stdout_lines
However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is resolved against the playbook's directory rather than the role's. I could add the full path to the role's files dir, but is there a more elegant way?
It looks like you need access to a task that looks up information locally, and then uses that information as input to the copy module.
There are two ways to get local information.
use local_action:. That's shorthand for running the task against 127.0.0.1; more info can be found here. (This is what you've been using.)
use a lookup. This is a plugin system specifically designed for getting information locally. More info here.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"
tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regards to paths
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you were to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
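Putting the role case together: because the pipe lookup runs from the role's files folder, the role-local version of the task can drop the files/ prefix. A sketch, assuming the role actually has a files/ folder:
# roles/role_name/tasks/main.yml
- name: copy file
  copy: src="{{ lookup('pipe', 'ls -1 *.txt') }}" dest=/path/to/fixedname.txt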
caveat:
The plugin is called every time you access the variable. This means you cannot debug the variable in your playbook and then rely on it having the same value when used later in a role!
I do wonder, though, about the use case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer between the generation of the file and its use in Ansible... or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem as you, where I build an Ansible bundle on the fly and copy artifacts (RPMs) into a role's files folder, and my RPMs have versions in their filenames.
When I run the ansible play, I want it to install all rpms, regardless of filenames.
I solved this by using the with_fileglob mechanism in ansible:
- name: Copy RPMs
copy: src="{{ item }}" dest="{{ rpm_cache }}"
with_fileglob: "*.rpm"
register: rpm_files
- name: Install RPMs
yum: name={{ item }} state=present
with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.
