Context: provisioning fresh servers.
I would like to provision them just once, especially the update part.
After launching the first playbook, bootstrap.yml, I made it leave a file in a folder so I know the playbook ran successfully.
Then, in the same playbook, I had to add a condition above every task (which I did) to check for the existence of this file.
If the file exists, skip running the playbook against all the machines that have it.
Problem: How do I add a condition to run the playbook only if the file isn't found? I don't want to add a "when:" statement to every one of my tasks; I find it silly.
Question: Does anyone have a better solution, maybe one that solves this with a single line or a parameter in the inventory file that I haven't thought of?
edit:
This is how I check for the file:
- name: Bootstrap check
  find:
    path: /home/bot/bootstrapped-ok
  register: bootstrap
and then the when condition would be:
when: bootstrap.matched == 0
so if the file is not found, the entire playbook runs.
I think you may be over-complicating this slightly. Would it be accurate to say "I want to bail out of a playbook early, without error, under a certain condition"?
If so, you want "end_host".
See: How do I exit an Ansible play without error on a condition
Just begin with a check for the file, and end_host if it's found.
- name: Bootstrap check
  stat:
    path: /home/bot/bootstrapped-ok
  register: bootstrap

- name: End the play if previous provisioning was successful
  meta: end_host
  when: bootstrap.stat.exists

- name: Continue if bootstrap missing
  debug:
    msg: "Hello"
I'm trying to spin up an AWS deployment environment in Ansible, and I want it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to get Ansible to throw an error within a role.
For example:
<main.yml>
- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role
2. Then I want it to run some code based on that error here.
<make_rds_role>
- name: "Make it"
  rds:
    params: etc # <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things within the documentation, but what I really want is a way to use something like the "block" and "rescue" keywords. As far as I can tell, though, those only work within the same playbook and on plays, not roles. Does anyone have a good way to do this?
Wrap the tasks inside your roles in a block/rescue construct.
Make sure that the rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ... # something bad may happen here
    - name: task N
  rescue:
    - assert: # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can do some post_tasks in your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent other roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent that, add a check for the ansible_failed_task fact at the beginning of each role, or something along those lines; one possible shape of that guard is sketched below.
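A hedged sketch of such a guard (the provisioning_failed flag is an illustrative name, not part of the original answer): record the failure with set_fact inside the rescue section, then wrap each later role's tasks in a block that is skipped once the flag is set.
# inside make_rds_role/tasks/main.yml (and likewise in the other roles):
- block:
    - name: provision RDS
      debug:
        msg: "real provisioning tasks go here"
  rescue:
    - name: remember that provisioning failed
      set_fact:
        provisioning_failed: true

# at the top of make_s3_role/tasks/main.yml:
- name: role body, skipped after an earlier failure
  when: not (provisioning_failed | default(false))
  block:
    - name: provision S3
      debug:
        msg: "role tasks go here"
Since set_fact survives for the rest of the play on that host, the flag set in one role is visible to every role that runs after it.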
I have an issue when I am trying to use a conditional statement based on a variable set in my inventory file. Below are the details. My inventory file looks like this:
[webserver]
server1
server2
[appserver]
server1
[webserver:vars]
TYPE=w
[appserver:vars]
TYPE=a
Now I am trying to add a condition to my tasks like this:
- name: abc
  shell: run this task
  when: TYPE == "w"

- name: cde
  shell: run this task
  when: TYPE == "a"
Now when I run Play 1 it picks up the first variable and stores it, but when it runs the task the second time (Play 2) it still has the same variable and fails. I have two plays, one for web and the other for app. Please let me know what the issue may be.
You can't define the same variable twice in an inventory file, even if it belongs to another group.
After the parsing of your inventory file, the only "TYPE" known to Ansible will be the last one defined in the file. In your case that is the one within appserver.
If you switch the order of your variables you will face the exact opposite behavior, I guess.
In my opinion you should change the way you determine which tasks to run.
Little example:
- name: some task for webservers
  shell: run this webserver task
  when: run_webserver_task

- name: some task for appservers
  shell: run this appserver task
  when: run_appserver_task
Please see this link as evidence for my bold statement:
https://github.com/ansible/ansible/issues/6538
OR
You could try to design your inventory file the YAML way instead of using an INI file (a sketch follows below). Please see:
http://docs.ansible.com/ansible/latest/intro_inventory.html
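For illustration, a sketch of that inventory in YAML form, using the distinct per-group flags from the example above instead of one clashing TYPE variable:
all:
  children:
    webserver:
      hosts:
        server1:
        server2:
      vars:
        run_webserver_task: true
    appserver:
      hosts:
        server1:
      vars:
        run_appserver_task: true
Note that a host outside a group will not have that group's flag defined at all, so it is safer to write the conditions as run_webserver_task | default(false).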
I have a playbook in which one of the tasks copies template files, with delegate_to, to a specific server named "grover".
The inventory list with the servers of consequence here is: bigbird, grover, oscar.
These template files MUST have names matching each server's hostname, and the delegate server grover also must have its own instance of said file. This template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually, in addition to /tmp/grover on server grover, after the run there should also exist /tmp/bigbird and /tmp/oscar on server grover.
The problem I'm having is that when the playbook runs without any conditionals it does function, but it also clobbers/overwrites /tmp/grover, which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and then a conditional at the template task to skip grover if the file already exists, it not only skips grover, it also skips every other server whose copy would run on grover for that task. If I try to set it to run on every other server BUT grover, it still fails because the delegate server is grover and that would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:
    - name: File clobber check.
      stat:
        path: /tmp/{{ ansible_hostname }}
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"

    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: /tmp/{{ ansible_hostname }}
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: ( not clobber_check.stat.exists ) and ( clobber_check is defined )
If I run that, and /tmp/grover exists on grover, it will simply skip the entire copy task because the conditional failed on grover. Thus the other servers will never have their /tmp/bigbird and /tmp/oscar templates copied to grover.
Lastly, I'd like to avoid ghetto solutions like saving a backup of grover's original config file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here; I can't have been the only person to run into this scenario. Does anyone have any idea how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items, used as it is here, does let you delegate_to a host pattern group, it also prevents variables from being defined/assigned properly to the hosts within that group.
Defining a single host entity for delegate_to fixes the issue, and the tasks then execute as expected on all defined hosts, as sketched below.
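For reference, a hedged sketch of the fixed tasks, assuming grover is the literal inventory hostname rather than a variable:
- name: File clobber check.
  stat:
    path: /tmp/{{ ansible_hostname }}
  register: clobber_check
  delegate_to: grover

- name: Copy templates to grover.
  template:
    backup: yes
    src: /opt/template
    dest: /tmp/{{ ansible_hostname }}
    owner: root
    group: root
    mode: "u=rw,g=rw"
  delegate_to: grover
  when: not clobber_check.stat.exists
Each of the three hosts runs both tasks against grover, so /tmp/bigbird and /tmp/oscar get created there while the pre-existing /tmp/grover is left alone.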
Konstantin Suvorov should get the credit for this, as he was the original answerer.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be a reason they didn't allow it.
With the command module, if the command creates a file, you can check whether that file exists; if it does, this prevents the command from executing again.
- command: touch ~/myfile
  args:
    creates: ~/myfile
However, if the command does not create a file, then on a re-run it executes again.
To avoid a second execution, I create some random file on a change (notify) as follows:
- command: dothisonceonly # this does not create a file
  args:
    creates: ~/somefile
  notify: done
then the handler:
- name: done
  command: touch ~/somefile
This approach works but is a bit ugly. Can anyone shed light on best practice? Maybe setting some fact? Maybe a whole new approach?
It is a fact (in common language) that a command was run successfully on a specific target host, so the most appropriate approach would be to use local facts (in Ansible vernacular).
In your handler, save the state as a JSON file under /etc/ansible/facts.d with the copy module and its content parameter.
It will be retrieved and accessible whenever you run a play against the host, through the regular fact-gathering process.
Then you can control the tasks with a when condition (you need to include the default filter for the situation when no fact exists yet); a sketch follows below.
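A minimal sketch of that approach, assuming an illustrative fact file named runonce.fact with a done key (both names are hypothetical, not from the original answer), and fact gathering enabled so that ansible_local is populated:
# tasks:
- name: Ensure the custom facts directory exists
  file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Run the one-off command unless the local fact says it already ran
  command: dothisonceonly
  when: not ansible_local.get('runonce', {}).get('done', false)
  notify: record runonce fact

# handlers:
- name: record runonce fact
  copy:
    content: '{"done": true}'
    dest: /etc/ansible/facts.d/runonce.fact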
Ideally, with Ansible you would check for the state that was changed by the first command rather than using a file as a proxy. The reason is that checking the actual state of something provides better immutability, since it is tested on every pass.
If there is a reason not to use that approach, then register the result of the command and use that, instead of a notifier, to trigger creation of the file:
- command: dothisonceonly # this does not create a file
  args:
    creates: ~/somefile
  register: result

- file:
    path: ~/somefile
    state: touch
  when: result is succeeded
If you are curious to see what is happening here, add:
- debug: var=result
Be aware that notifiers run at the end of the play. This means that if a notifier is triggered by a task but the play then fails to complete, the notifier will not run. Conversely, the force_handlers option causes notified handlers to run even when the play fails, as sketched below.
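For completeness, a sketch of that option; force_handlers is a real play keyword (also available as the --force-handlers command-line flag), though whether you want it here depends on your workflow:
- hosts: all
  force_handlers: true # notified handlers run even if a later task fails
  tasks:
    - command: dothisonceonly
      args:
        creates: ~/somefile
      notify: done
  handlers:
    - name: done
      command: touch ~/somefile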
This is my actual implementation, based on the answer by #techraf:
- command: echo 'phew'
  register: itworks
  notify: Done itworks
  when: ansible_local.itworks | d(0) == 0
and the handler:
- name: Done itworks
  copy:
    content: '{{ itworks }}'
    dest: /etc/ansible/facts.d/itworks.fact
Docs:
http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d
Thanks #techraf, this works great and persists facts.
EDIT
Applied the default value logic from #techraf's comment.
I'm writing an Ansible playbook which will install and configure an agent of some monitoring system my company uses. One of the steps required for the successful configuration of the agent is to set certain directives in the nagios.cfg file.
The nagios.cfg file can reside at two different paths, depending on how Nagios was installed (package manager / from source).
The two relevant paths are:
/usr/local/nagios/etc/nagios.cfg
/etc/nagios3/nagios.cfg
What I want Ansible to do is find the correct path and then put it into a variable which I'll be able to use in the following configuration steps.
I've started with this:
- stat: path=/usr/local/nagios/etc/nagios.cfg
register: nag_conf_usrlocal
when: nag_conf_usrlocal.stat.exists
I thought about stating the file in the first location, understand if it exists there and if so then insert it to a variable and if it's not there then the next path should be stat'ed and if the file exists there then variable should include the correct path where the file exists.
How can it be done?
What you have done is not wrong, it just needs some fixing:
- stat: path=/usr/local/nagios/etc/nagios.cfg
  register: nag_conf_usrlocal

- stat: path=/etc/nagios3/nagios.cfg
  register: nag_conf_etc
  when: not nag_conf_usrlocal.stat.exists

- set_fact:
    nagios_path: "/usr/local/nagios/etc/nagios.cfg"
  when: nag_conf_usrlocal.stat.exists

- set_fact:
    nagios_path: "/etc/nagios3/nagios.cfg"
  when: not nag_conf_usrlocal.stat.exists and nag_conf_etc.stat.exists
This is probably not the best solution, but it should give you an overview of how this can be achieved.
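As a more compact alternative (a sketch, assuming at least one of the two paths exists, since the filter chain errors on an empty list), you can stat both candidates in a loop and pick the first match:
- stat:
    path: "{{ item }}"
  register: nag_conf
  with_items:
    - /usr/local/nagios/etc/nagios.cfg
    - /etc/nagios3/nagios.cfg

- set_fact:
    nagios_path: "{{ nag_conf.results | selectattr('stat.exists') | map(attribute='item') | first }}"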