I have a playbook in which one of the tasks copies template files to a specific server, using delegate_to: with a host named "grover".
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have names that match each server's hostname, and the delegate server grover must also have its own instance of said file. This template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually in addition to /tmp/grover on server grover, after the run there should also exist: /tmp/bigbird and /tmp/oscar also on server grover.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but then it also clobbers/overwrites /tmp/grover which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and then a conditional on the template task to skip grover if the file already exists, it not only skips grover, it also skips every other server whose file would be copied to grover in that task. If I try to set it to run on every host BUT grover, it still fails, because the delegate server is grover and that host would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:
    - name: File clobber check.
      stat:
        path: "/tmp/{{ ansible_hostname }}"
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"

    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: "/tmp/{{ ansible_hostname }}"
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: ( clobber_check is defined ) and ( not clobber_check.stat.exists )
If I run that, and /tmp/grover exists on grover, then it will simply skip the entire copy play because the conditional failed on grover. Thus the other servers will never have their /tmp/bigbird and /tmp/oscar templates copied to grover due to this problem.
Lastly, I'd like to avoid hacky solutions like saving a backup of grover's original file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here, I can't have been the only person to run into this scenario. Anyone have any idea on how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items used this way does let you delegate_to a host pattern group, it also prevents variables from being defined/assigned properly to the hosts within that group.
Defining a single host entity for delegate_to fixes this issue and then the tasks execute as expected on all hosts defined.
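A minimal sketch of that fix (assuming grover is the literal inventory name of the delegate, and that facts are gathered so ansible_hostname is set per host):

```yaml
- hosts: bigbird:grover:oscar
  tasks:
    - name: Check whether this host's file already exists on grover.
      stat:
        path: "/tmp/{{ ansible_hostname }}"
      register: clobber_check
      delegate_to: grover

    - name: Copy the template to grover only when the file is absent.
      template:
        src: /opt/template
        dest: "/tmp/{{ ansible_hostname }}"
        owner: root
        group: root
        mode: "u=rw,g=rw"
      delegate_to: grover
      when: not clobber_check.stat.exists
```

Each host in the play evaluates its own clobber_check, so only grover's copy is skipped while /tmp/bigbird and /tmp/oscar are still created on grover.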
Konstantin Suvorov should get the credit for this, as he was the original answerer.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.
Related
Context: provisioning fresh new servers.
I would like to provision them just once, especially the update part.
After launching the first playbook bootstrap.yml I made it leave a file in a folder for me to know that the playbook ran well.
So then in the same playbook I had to add a condition above every task (which I did) to check for the existence of this file.
If the file exists, the playbook is skipped for all the machines that have it.
Problem: how do I add the condition to run the playbook only if the file isn't found? I don't want to add a "when:" statement to each of my own tasks; that seems silly.
Question: Does anyone have a better solution that maybe solves this with a single line or a parameter in the inventory file that I haven't thought of?
edit:
This is how I check for the file:
- name: Bootstrap check
  find:
    path: /home/bot/bootstrapped-ok
  register: bootstrap
and then the when condition would be:
when: bootstrap.matched == 0
so if the file is not found, the entire playbook runs.
I think you may be over-complicating this slightly. Would it be accurate to say "I want to bail early on a playbook without error under a certain condition?"
If so, you want "end_host"
How do I exit Ansible play without error on a condition
Just begin with a check for the file, and end_host if it's found.
- name: Bootstrap check
  stat:
    path: /home/bot/bootstrapped-ok
  register: bootstrap

- name: End the play if previous provisioning was successful
  meta: end_host
  when: bootstrap.stat.exists

- name: Continue if bootstrap missing
  debug:
    msg: "Hello"
I want to execute tasks only if a file exists on the target.
I can do that, of course, with
- ansible.builtin.stat:
    path: <remote_file>
  register: remote_file

- name: ...
  something:
  when: remote_file.stat.exists
but is there something smaller?
You can, for example, check files on the host with
when: '/tmp/file.txt' is file
But that only checks files on the host.
Is there also something like that available for files on remotes?
Added, as according to the comment section I need to be a bit more specific.
I want to run tasks on the target, and on the next run they shouldn't be executed again. So I thought I would put a file somewhere on the target; on the next run, when this file exists, the tasks should not be executed anymore.
- name: do big stuff
  bigstuff:
  when: not <file> exists
They should be executed only if the file does not exist.
I am not aware of a way to let the Control Node know during templating or compile time whether a specific file exists on the Remote Node(s).
I understand your question that you
... want to execute tasks on the target ...
only if a certain condition is met on the target.
This sounds impossible without taking a look at the target first, or doing a lookup somewhere else, for example in a Configuration Management Database (CMDB).
... is there something smaller?
It will depend on your task, what you are trying to achieve, and how you would like to declare a configuration state.
For example, if you would like to declare the absence of a file, there is no need to check whether it exists first. Just make sure it gets deleted.
---
- hosts: test
  become: false
  gather_facts: false

  tasks:

    - name: Remove file
      shell:
        cmd: "rm /home/{{ ansible_user }}/test.txt"
        warn: false
      register: result
      changed_when: result.rc != 1
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result }}"
As you can see, it is necessary to define failure (see Defining failure) and control how the task behaves. Another example, for showing the content:
---
- hosts: test
  become: false
  gather_facts: false

  tasks:

    - name: Gather file content
      shell:
        cmd: "cat /home/{{ ansible_user }}/test.txt"
      register: result
      changed_when: false
      failed_when: result.rc != 0 and result.rc != 1

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
Please note that for the given example tasks there are already specific Ansible modules available which do the job better.
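For the two shell tasks above, sketches of the module-based equivalents (paths taken from the examples; note that slurp returns base64-encoded content, so decode it with the b64decode filter):

```yaml
- name: Remove file declaratively (replaces the shell/rm task)
  ansible.builtin.file:
    path: "/home/{{ ansible_user }}/test.txt"
    state: absent

- name: Read file content without a shell (replaces the cat task)
  ansible.builtin.slurp:
    src: "/home/{{ ansible_user }}/test.txt"
  register: slurped

- name: Show decoded content
  ansible.builtin.debug:
    msg: "{{ slurped.content | b64decode }}"
```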
According to the additional information given in your comments, I understand that you would like to install or configure "something" and leave that fact behind on the remote node(s). You would like the task to run on the remote node(s) next time only if it wasn't performed successfully before.
To do so, you may have a look into Ansible facts and Discovering variables: facts and magic variables, especially into Adding custom facts.
What your installation or configuration tasks could do is leave a custom .fact file with meaningful keys and values after they have run successfully.
During the next playbook execution, and if gather_facts: true, the information will be gathered by the setup module and you can then let tasks run based on conditions in ansible_local.
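A minimal sketch of the idea (the file name bootstrap.fact and the key done are assumptions; any JSON or INI content under /etc/ansible/facts.d works):

```yaml
- name: Record successful bootstrap as a custom local fact
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/bootstrap.fact
    content: '{ "done": true }'
    mode: "0644"

# On the next run (with gather_facts: true) the value is available
# under ansible_local and can guard the expensive tasks:
- name: Do the big stuff only if not bootstrapped yet
  ansible.builtin.debug:
    msg: "bootstrapping..."
  when: not (ansible_local['bootstrap']['done'] | default(false))
```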
Further Q&A
How Ansible gather_facts and sets variables
An introduction to Ansible facts
Loading custom facts
The host facts can thereby be considered a kind of distributed CMDB. By using a facts cache
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 129600
you can make the information available on the Control Node as well. It might even be possible to organize it in host_vars and group_vars.
One of my Ansible roles allows users to execute a set of tests against a target host, leaving minimal impact, removing all traces of the test.
One of the surprisingly difficult things to determine is an absolute path on the host running the playbook. Resolving files for copy seems to be really complicated, since Ansible has a priority list for trying to resolve files.
As part of my debugging information emitted when tests fail, I want to specify the absolute path of the file on the local host and the absolute path of the file on the remote host. The remote host is easy enough to query, but I can't seem to find a way to determine where the source file is coming from on the local host.
Here's my task:
- copy: src={{ goss_file }} dest=/tmp/goss/ mode=0644
goss_file is specified by the user as a role variable. What I'd like to do is determine the full absolute path of goss_file on the local machine running Ansible against remote hosts.
Is there a way to do this? It would significantly help in the debugging I've been doing.
Here is a snippet that I've used to find the template files (in this case) in our goss testing solution:
- name: save goss files list
  set_fact:
    goss_files: "{{ lookup('fileglob', '../roles/' ~ role_name ~ '/templates/*.j2', wantlist=True) | sort }}"

- name: print
  debug:
    msg:
      - "{{ goss_files }}"

- name: Copy goss test from template to remote
  template:
    src: "{{ item }}"
    dest: "/tmp/{{ item | basename | regex_replace('\\.j2', '.yml') }}"
    mode: '0644'
  with_items:
    - "{{ goss_files }}"
I also had to create an empty files folder in the "root" of ansible.
You can see the full solution here Using goss and Ansible templates for System Testing
Is there an easy way to log output from multiple remote hosts to a single file on the server running ansible-playbook?
I have a variable called validate which stores the output of a command executed on each server. I want to take validate.stdout_lines and drop the lines from each host into one file locally.
Here is one of the snippets I wrote, which did not work:
- name: Write results to logfile
  blockinfile:
    create: yes
    path: /var/log/ansible/log
    insertafter: BOF
    block: "{{ validate.stdout }}"
  delegate_to: localhost
When I executed my playbook w/ the above, it was only able to capture the output from one of the remote hosts. I want to capture the lines from all hosts in that single /var/log/ansible/log file.
One thing you should do is add a marker to blockinfile so the result from each host is wrapped in a unique block.
The second problem is that the tasks would run in parallel (even with delegate_to: localhost, because the loop here is realised by the Ansible engine) with effectively one task overwriting the other's /var/log/ansible/log file.
As a quick workaround you can serialise the whole play:
- hosts: ...
  serial: 1
  tasks:
    - name: Write results to logfile
      blockinfile:
        create: yes
        path: /var/log/ansible/log
        insertafter: BOF
        block: "{{ validate.stdout }}"
        marker: "# {{ inventory_hostname }} {mark}"
      delegate_to: localhost
The above produces the intended result, but if serial execution is a problem, you might consider writing your own loop for this single task (for ideas refer to support for "serial" on an individual task #12170).
Speaking of other methods: in two tasks, you can concatenate the results into a single list (no issue with parallel execution then, but pay attention to delegated facts) and then write them to a file using the copy module (see Write variable to a file in Ansible).
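A sketch of that two-task approach (the aggregation via run_once and hostvars is an assumption about your setup; validate must already be registered on every host in the play):

```yaml
- name: Aggregate results from all play hosts into one string
  set_fact:
    all_results: |-
      {% for host in ansible_play_hosts %}
      # {{ host }}
      {{ hostvars[host].validate.stdout }}
      {% endfor %}
  run_once: true

- name: Write the aggregated results to a single local file
  ansible.builtin.copy:
    content: "{{ all_results }}"
    dest: /var/log/ansible/log
  delegate_to: localhost
  run_once: true
```

Because both tasks run only once, there is no parallel write to the log file and no need for serial: 1.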
I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I'm running the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src: "*.txt"
    dest: /path/to/fixedname.txt
Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile

- name: copy file
  copy:
    src: "files/{{ item }}"
    dest: /path/to/fixedname.txt
  with_items: "{{ myfile.stdout_lines }}"
However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
It looks like you need a task that looks up information locally, and then uses that information as input to the copy module.
There are two ways to get local information.
use local_action:. That's shorthand for running the task against 127.0.0.1; more info found here. (This is what you've been using.)
use a lookup. This is a plugin system specifically designed for getting information locally. More info here.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"

tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regards to paths
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you were to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
caveat:
The plugin is called every time you access the variable. This means you cannot trust debugging the variable in your playbook, and then relying on the variable to have the same value when used later in a role!
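If you need the value to stay stable across uses, one option (a sketch; set_fact renders the expression immediately, freezing the result) is:

```yaml
- name: Evaluate the lookup once so later uses see the same value
  set_fact:
    local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"
```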
I do wonder, though, about the use-case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer in between the generation of the file and its use in Ansible, or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem as you: I build an Ansible bundle on the fly and copy artifacts (RPMs) into a role's files folder, and my RPMs have versions in the filename.
When I run the ansible play, I want it to install all rpms, regardless of filenames.
I solved this by using the with_fileglob mechanism in ansible:
- name: Copy RPMs
  copy: src="{{ item }}" dest="{{ rpm_cache }}"
  with_fileglob: "*.rpm"
  register: rpm_files

- name: Install RPMs
  yum: name={{ item }} state=present
  with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.