Executing a task only on the first run on each host - Ansible

I'm new to Ansible and I would like to run some tasks only once: not once per run, but once ever. In Puppet I used file placeholders for that, similar to Ansible's "creates".
What is the best practice for my use case?
If I should use the "creates" method, which folder is safe to use?
For example, I would like to install Let's Encrypt on a host, where the first step is to update snap. I don't want to refresh snap on every run, so I would like to run that step only once, before the Let's Encrypt install.
- name: Update snap
  shell: snap install core && snap refresh core
  args:
    creates: /usr/local/ansible/placeholders/.update-snap
  become: true

Most Ansible modules are designed to be idempotent, meaning they perform an action only when needed.
shell, script and command are the exceptions: Ansible cannot know whether the action needs to be performed, which is why the creates parameter exists to add idempotency.
So before writing a shell task, check whether an Ansible module already exists for the job. There are almost 3,000 base modules, and with newer versions of Ansible you can get additional community-provided modules through collections on Ansible Galaxy.
In your specific case, the snap module (https://docs.ansible.com/ansible/2.10/collections/community/general/snap_module.html) may be enough:
- name: Update snap
  snap:
    name: core
  become: true
It will install the core snap if it's not already installed, and do nothing otherwise.
If that's not enough, you can implement idempotency yourself: create a task that queries snap for the presence of core and its version (add changed_when: false, as that task makes no changes), then launch the install/update depending on the query result (with a when condition).
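For illustration, a minimal sketch of that pattern; the exact query command is an assumption, adapt it to what you actually need to test:
- name: Check whether the core snap is present
  command: snap list core   # non-zero exit code when the snap is missing
  register: core_snap
  failed_when: false
  changed_when: false

- name: Install core snap
  command: snap install core
  become: true
  when: core_snap.rc != 0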
If there is no way to do that, then you can fall back to creates. If the command already creates a specific file whose presence can tell you it was executed, reference that file. Otherwise you need to create your own marker file in the shell. As this is a last-resort solution, there is no convention for where the marker file should live, so it's up to you (the path you propose looks good, though the file name could be more specific and include the snap name, in case you use the same trick for other snaps).
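A last-resort sketch with a more specific marker name; the path follows the one proposed in the question and is purely illustrative (the placeholders directory must already exist):
- name: Install and refresh core snap (runs only once)
  shell: snap install core && snap refresh core && touch /usr/local/ansible/placeholders/.snap-core-refreshed
  args:
    creates: /usr/local/ansible/placeholders/.snap-core-refreshed
  become: true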


How to test install and uninstall scenario with Molecule for Ansible?

In the Ansible role that I'm creating, I'm covering both an install and an uninstall scenario:
foo-install.yml is called from main.yml when the install flag is set to true.
foo-uninstall.yml is called from main.yml when the install flag is set to false.
While the install covers installing an RPM package, copying config files, and starting a system service, the uninstall steps basically reverse the installation: Stop the system service, uninstall the RPM package, remove the application folder.
As a good citizen, I have created a test for the role using Molecule, which runs the role in a CentOS Vagrant box. This works fine for the install scenario, where I use Python tests (using testinfra) to validate the RPM is installed, the service is started, etc.
How can I use Molecule to now test the uninstall scenario as well? Is there a way to change the steps in Molecule that it does something like this (simplified)?
create
converge (run the install parts of the role)
idempotence (for the install part)
verify (the install steps)
converge (run the uninstall parts of the role)
idempotence (for the uninstall part)
verify (the uninstall steps)
destroy
Maybe I'm missing something, but I have not found an obvious way (or examples) on how to do something like this.
Is there a way to cover scenarios like this? Or am I better off with just testing the install scenario?
Recommended Solution
The recommended way to address this is using multiple Molecule Scenarios. You could use your install scenario as the default, and then add a second uninstall scenario that just runs and tests the uninstall steps.
When setting this up, simply create a second scenario directory in your role's molecule folder (copy the default one), and then make some changes:
(Edit: this step was necessary for Molecule < 3.0; scenario.name was removed in later versions.) In the molecule.yml file, change the scenario.name attribute to uninstall.
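For Molecule < 3.0, that attribute sat at the top level of molecule.yml:
scenario:
  name: uninstall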
In the same file, use the default scenario's playbook.yml file as the playbook for the prepare step:
provisioner:
  name: ansible
  playbooks:
    prepare: ../default/playbook.yml
    converge: playbook.yml
Adjust the tests for the uninstall scenario to verify the uninstall steps.
This will ensure that the same steps are used for installing the software as in the install/default scenario, and you can focus on the uninstall steps.
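For illustration, the uninstall scenario's playbook.yml could then simply run the role with the flag flipped (the role and variable names are assumptions based on the question):
---
- name: Converge
  hosts: all
  roles:
    - role: my_role
      install: false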
To run the scenarios you can either run all of them or a single one:
# Run all scenarios
molecule test --all
# Run only the uninstall scenario
molecule test --scenario-name uninstall
This should get you pretty close to what you want to do, without replicating any code.
If you want to try some other things, here are a couple of other thoughts:
Alternatives
I would keep a scenario for the install only, running all the needed tests (lint, idempotence, check, verify...), and create a specific install_uninstall scenario.
Playing install_uninstall will never be idempotent, so this scenario should disable the idempotence test, which can never pass. You may also want to disable the check test played in your other scenario, lint, and so on. This can be done in molecule.yml by adjusting the parameters for scenario.test_sequence:
scenario:
  name: install_uninstall
  test_sequence:
    - destroy
    - create
    - prepare
    - converge
    - verify
    - destroy
Of course you can adjust this to your real needs (like dropping verify as well if you have no testinfra tests for this case).
Once this is done you only have to add two plays in your scenario playbook:
---
- name: install
  hosts: all
  roles:
    - role: my_role
      install: true

- name: uninstall
  hosts: all
  roles:
    - role: my_role
      install: false
And you should be ready to test with:
molecule test -s install_uninstall
Edit:
Another option would be to keep only your current install scenario but launch the individual Molecule commands rather than a full test, assuming your current working scenario is in default:
# Check and lint my files
molecule lint
# Make sure no box/container is in the way
molecule destroy
# Create my box/container for tests
molecule create
# Play my default playbook
molecule converge
# Idempotency check
molecule idempotence
# Verify we can correctly use check mode
molecule check
# Play testinfra tests
molecule verify
# Now play the uninstall
molecule converge -- -e install=false
## add more tests you can think of ##
# and finally clean up
molecule destroy
Unfortunately, unless this feature was very recently added to Molecule, it is not possible to pass extra variables to the idempotence and check commands.

Tell if ansible playbook is being run directly

I've got a monolithic playbook I'm refactoring into a series of smaller playbooks that get imported. That is all well and good when importing them, but when testing them individually with e.g. Vagrant, the apt cache is often stale and half my packages are missing.
I could of course put a task to update the apt-cache in every single one of them, but that seems wasteful.
Is there any conditional check for whether a playbook is running inside another playbook or directly from the command line? Or can I check that the cache is in fact up-to-date in a globally-scoped var and condition on that?
You can very easily declare you are running in Vagrant through the Vagrantfile (if you are using the Ansible provisioner), through a static fact in /etc/ansible/facts.d, by declaring an extra-var on the command line, by checking for the NIC vendor that corresponds to the VM (VirtualBox in my case), or the virtualization method, and so on.
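For example, a sketch gating a task on the virtualization facts Ansible gathers anyway; the virtualbox value is an assumption matching a VirtualBox-backed box, so check yours via the setup module:
- name: Update apt cache only inside the Vagrant box
  apt:
    update_cache: yes
  when: ansible_virtualization_type == "virtualbox"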
Separately, you are also welcome to declare a "staleness" threshold and run apt-get update if the mtime of /var/lib/apt/lists/partial is older than some number of seconds/minutes, which actually sounds much closer to what you really want, versus having things behave one way in Vagrant and another way for real.
Following up on Matthew L. Daniel's excellent answer which I've accepted, here's the actual tasks:
tasks:
  - name: Get the apt-cache stat
    stat: path=/var/lib/apt/lists/partial
    register: cachage
  - name: Update apt cache
    apt: update_cache=yes
    when: (ansible_date_time.epoch|float - cachage.stat.mtime) > 1800.0

How can I ensure that an Ansible shell command executes only once per host?

I want to be able to guarantee that after a long running shell command is executed successfully on a given host, it is not executed in subsequent playbook runs on that host.
I was happy to learn of the creates option to the Ansible shell task (see http://docs.ansible.com/ansible/shell_module.html); using this I can create a file after a shell command succeeds and not have that command execute in future runs:
- name: Install Jupiter theme
  shell: wp theme install ~/downloads/jupiter-theme.zip --activate && touch ~/ansible-flags/install-jupiter-theme
  args:
    creates: ~/ansible-flags/install-jupiter-theme
As you can see, I have a directory ansible-flags for these control files in the home directory of my active user (deploy).
However, this is quite a kludge. Is there a better way to do this?
Also, the situation gets a lot more complicated when I need to run a step as a different user; in that case, I don't have access to the deploy home directory and its subdirectories. I could grant access, of course, but that's adding even more complexity. How would you recommend I deal with this?
Your playbook should be idempotent!
So every task should either be safe to execute multiple times (e.g. echo ok), or you should check whether it needs to be executed at all.
Your task may look like this:
- name: Install Jupiter theme
  shell: wp theme is-installed jupiter || wp theme install ~/downloads/jupiter-theme.zip --activate
It will check if the jupiter theme is installed and will run wp theme install only if theme is not installed.
But this will mark the result of task as changed in any case.
To make it nice, you can do this:
- name: Install Jupiter theme
  shell: wp theme is-installed jupiter && echo ThemeAlreadyInstalled || wp theme install ~/downloads/jupiter-theme.zip --activate
  register: cmd_result
  changed_when: cmd_result.stdout != "ThemeAlreadyInstalled"
First, it will check if the jupiter theme is already installed.
If this is the case, it will output ThemeAlreadyInstalled and exit.
Otherwise it will call wp theme install.
The task's result is registered into cmd_result.
We use the changed_when parameter to tell Ansible that if the output is ThemeAlreadyInstalled, the task shouldn't be considered changed, so it remains green in your playbook output.
If you do some preparation tasks before wp theme install (e.g. download theme), you may want to run the test as separate task to register the result and use when clauses in subsequent tasks:
- name: Check Jupiter theme is installed
  shell: wp theme is-installed jupiter && echo Present || echo Absent
  register: wp_theme
  changed_when: false

- name: Download theme
  get_url: url=...
  when: wp_theme.stdout == "Absent"

- name: Install Jupiter theme
  shell: wp theme install ~/downloads/jupiter-theme.zip --activate
  when: wp_theme.stdout == "Absent"
Ansible itself is stateless, so whatever technique you use needs to be implemented by yourself. There is no easier way, like telling Ansible to only ever run a task once.
I think what you have already is pretty straightforward. Maybe use another path to check: I'm pretty sure wp theme install will actually extract that zip to some location, which you can then use together with the creates option.
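A sketch of that idea, assuming WordPress lives under /var/www/html and the theme unpacks to wp-content/themes/jupiter (verify the actual path on your install):
- name: Install Jupiter theme
  shell: wp theme install ~/downloads/jupiter-theme.zip --activate
  args:
    # presence of the extracted theme directory means the command already ran
    creates: /var/www/html/wp-content/themes/jupiter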
There are alternatives but nothing that makes it easier:
Create a script in any language you like that checks for the theme and installs it if missing; instead of the shell task, you would then execute the script with the script module.
Install a local fact in /etc/ansible/facts.d which checks whether the theme is already installed; then you can simply apply a condition based on that fact to the shell task (a sketch follows this list).
Set up fact caching, then register the result of your shell task and store it as a fact. With fact caching enabled you can access the stored result on later playbook runs.
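A minimal sketch of the local-fact alternative; the fact file name and its JSON output are illustrative assumptions (wp may also need --path pointing at the WordPress root):
- name: Install a custom fact reporting the theme state
  copy:
    dest: /etc/ansible/facts.d/wp_theme.fact
    mode: "0755"
    content: |
      #!/bin/sh
      if wp theme is-installed jupiter; then
        echo '{"jupiter": "present"}'
      else
        echo '{"jupiter": "absent"}'
      fi
  become: true

- name: Re-read local facts so ansible_local is current
  setup:
    filter: ansible_local

- name: Install Jupiter theme
  shell: wp theme install ~/downloads/jupiter-theme.zip --activate
  when: ansible_local.wp_theme.jupiter == "absent"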
It only gets more complicated. The creates option is perfect for this scenario, and if you use the location the wp command extracted the zip to, it is also pretty clean.
Of course you need to grant access to that file if you need to access it from another user account. Storing stuff in a user's home directory with permissions set to be readable only by the owner is indeed not an ideal solution.

Ansible local changes not getting reflected while running the playbook

I modified an Ansible playbook, and from git status I can clearly see the modified data/files. But when I run the playbook, the changes do not take effect even after the playbook has run completely.
I tried renaming the playbook file, and even removing it from the directory; still the playbook executes. It seems like some sort of caching mechanism.
I often end up in this situation but can't understand what goes wrong. The Ansible version used is 1.7.1.
Got it: I had multiple path entries for the roles_path parameter in ansible.cfg. Coincidentally, the repository existed under each path specified in roles_path, and by design Ansible uses the first match.
Hence, every time I made changes in my repository, the data would still come from one of the other repositories.
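For reference, the relevant setting looks like this (paths are illustrative); with multiple entries, the first directory containing the role wins:
# ansible.cfg
[defaults]
roles_path = /etc/ansible/roles:~/my-repo/roles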

running ansible playbook against unavailable hosts (down/offline)

I may be missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and re-run them when a node pops up/checks in. The playbook works fine, but if it is executed when some of the machines are down/offline, those hosts miss the changes. I'm sure the solution can't be to run the whole playbook again and again.
Maybe it's about googling the correct terms. If someone understands the question, please help with what I should search for, since this must be a common requirement. Is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it (Python and SSH based: no additional client deployment required).
There is an inbuilt way to do this: using the retry-file concept, we can retry on failed hosts.
Step 1: Check that your ansible.cfg contains the retry-file settings:
[defaults]
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run your playbook against all required hosts and some of them fail, Ansible creates a .retry file named after the playbook.
For example, if you execute the command below:
ansible-playbook update_network.yml -e group=rollout1
it will create a retry file in your home directory listing the hosts that failed.
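The .retry file is just a plain list of the failed hosts, one per line (hostnames illustrative):
web01.example.com
web02.example.com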
Step 3: After the first run, re-run ansible-playbook in a loop (in a shell loop or from a crontab) until it exits successfully:
until ansible-playbook update_network.yml -i ~/update_network.retry
do
  sleep 60   # pause between attempts
done
This keeps retrying against the hosts listed in ~/update_network.retry until none of them fail.
Often the solution is indeed to run the playbook again: there are lots of ways to write playbooks that can be run over and over again without harmful effects. For ongoing configuration remediation like this, some people choose to run playbooks from cron.
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx
