Running an Ansible playbook against unavailable hosts (down/offline)

I may be missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and then re-run when a node pops up/checks in. The playbook works fine, but if it is executed when some of the machines are down/offline then those hosts miss those changes… I'm sure the solution can't be to run all the playbooks again and again.
Maybe it's about googling the correct terms… if someone understands the question, please help with what should be searched for, since this must be a common requirement. Is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it: it is Python and SSH based, so no additional client deployment is required.

There is a built-in way to do this. Using the retry-file feature, we can re-run the playbook against only the hosts that failed.
Step 1: Check that your ansible.cfg file contains the retry-file settings:
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run ansible-playbook against all the required hosts, it will create a .retry file named after the playbook.
Suppose you execute the command below:
ansible-playbook update_network.yml -e group=rollout1
It will create a retry file in your home directory listing the hosts that failed.
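The .retry file is just a plain list of the failed inventory hostnames, one per line, which is why it can be fed straight back in as an inventory. For example (the host names here are purely illustrative):
web01.example.com
web02.example.com
db01.example.com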
Step 3: Once the first run has completed, just run ansible-playbook in a loop, either with a while loop as below or from a crontab:
while true
do
    # stop once a run finishes with no failed or unreachable hosts (exit code 0)
    ansible-playbook update_network.yml -i ~/update_network.retry && break
    sleep 60
done
This keeps re-running the playbook until every host in ~/update_network.retry has succeeded.
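If you prefer cron to a foreground loop, a rough equivalent crontab entry might look like this (the 15-minute interval and the assumption that the playbook lives in the home directory are mine):
# retry the failed hosts every 15 minutes, but only while a retry file exists
*/15 * * * * [ -f "$HOME/update_network.retry" ] && ansible-playbook "$HOME/update_network.yml" -i "$HOME/update_network.retry"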

Often the solution is indeed to run the playbook again; there are lots of ways to write playbooks that ensure you can run them over and over again without harmful effects (that is, idempotently). For ongoing configuration remediation like this, some people choose to simply run playbooks from cron.
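As a minimal sketch of what "safe to re-run" means in practice (the package name and group name are only illustrative), most core modules are idempotent, so a play like this changes nothing on hosts that are already in the desired state:
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed (no change if already present)
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is enabled and running (no change if already running)
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true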
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx

Related

How to not display skipped hosts/tasks?

I have many playbooks and I don't have access to the host where Ansible is installed. I write my playbooks locally on my laptop, push them to a repo, and run them via Jenkins. I can't control or change ansible.cfg, for example. Is there a way to change the behaviour of Ansible's default stdout callback plugin per playbook without accessing the Ansible host itself?
Actually there is: you can use an environment variable for this (check the documentation):
ANSIBLE_DISPLAY_SKIPPED_HOSTS=yes ansible-playbook main.yml
But since it's deprecated, it's better to use the ansible.cfg option for this:
[defaults]
display_skipped_hosts = False
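One hedged workaround, since the global config can't be changed: Ansible also reads an ansible.cfg from the current working directory (it takes precedence over ~/.ansible.cfg and /etc/ansible/ansible.cfg), so you could commit a minimal config next to the playbooks, assuming Jenkins runs ansible-playbook from inside the repo checkout:
# ansible.cfg committed at the root of the playbook repo
[defaults]
display_skipped_hosts = False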

Getting a Python warning when running a playbook with the EC2 inventory

I am really new to Ansible and I hate getting warnings when I run a playbook. This environment is being used for my education.
Environment:
AWS EC2
4 Ubuntu 20 hosts
3 Amazon Linux 2 hosts
Inventory:
using the dynamic EC2 inventory script
Playbook:
just runs a simple ping against all hosts; I wanted to test the inventory
Warning:
[WARNING]: Platform linux on host XXXXXX.amazonaws.com is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the
meaning of that path. See https://docs.ansible.com/ansible-core/2.11/reference_appendices/interpreter_discovery.html for more information.
Things I have tried:
updated all symlinks on the hosts to point to the Python 3 version
adding the line "ansible_python_interpreter = /usr/bin/python" to "/etc/ansible/ansible.cfg"
I am relying on that cfg file
I would like to know how to solve this. Since I am not running a static inventory, I didn't think I could specify an interpreter per host or per group of hosts. While the playbook runs, it seems that something is not configured correctly and I would like to get that sorted. This only happens on the Amazon Linux instances; the Ubuntu instances are fine.
Michael
Thank you. I did find another route that works, though I am sure that what you suggested would also work.
I was using the wrong configuration entry. I was using
ansible_python_interpreter = /usr/bin/python
when I should have been using
interpreter_python = /usr/bin/python
On each host I made sure that the /usr/bin/python symlink was pointing at the correct version.
According to the documentation:
for individual hosts and groups, use the ansible_python_interpreter inventory variable
globally, use the interpreter_python key in the [defaults] section of ansible.cfg
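As a short sketch of both options (the group name amazon_linux and the python3 path are only illustrative; with a dynamic inventory you would set the variable through group_vars or keyed groups rather than a static INI file):
# static inventory, per group
[amazon_linux:vars]
ansible_python_interpreter=/usr/bin/python3

# ansible.cfg, global
[defaults]
interpreter_python=/usr/bin/python3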
Regards, Michael.
You can edit your ansible.cfg and set auto_silent mode:
interpreter_python=auto_silent
Check reference here:
https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html
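For completeness, that key goes in the [defaults] section, so the relevant part of ansible.cfg would look roughly like this (it silences the discovery warning without pinning a specific interpreter path):
[defaults]
interpreter_python = auto_silent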

Is it possible to run a playbook in "pull mode"?

I have playbooks which I start on a master host and which run specific actions on remote hosts. This is a "push" mode - the activity is initiated by the master host.
Several of my hosts are down at a given time and obviously fail to run the playbook when in such a state. This leads to hosts which are up-to-date, while others are not.
To fix this I could re-run the playbook from the master host on a regular basis (via cron, for instance), but this is not particularly efficient.
Is there a built-in way in Ansible to reverse the flow, that is to initiate from a remote host a playbook available on the master, to run it on that remote host?
I can imagine that the remote host could ssh to the master (at boot time for instance) and then trigger the playbook with the host as a parameter (or something around that idea) but I would definitely prefer to use an Ansible feature instead of reinventing it.
There is a script called ansible-pull which reverses the default push architecture of Ansible. There is also an example playbook from the Ansible developers available.
Using ansible-pull is really simple and straightforward; this example might help you:
ansible-pull -d /root/playbooks -i 'localhost,' -U git@bitbucket.org:arbabnazar/pull-test.git --accept-host-key
options detail:
1. --accept-host-key: adds the hostkey for the repo url if not already added
2. -U: URL of the playbook repository
3. -d: directory to checkout repository to
4. -i 'localhost,': the inventory to consider; since we're only concerned with one host, we use -i 'localhost,' (note the trailing comma).
For detail, please refer to this tutorial

Running playbooks automatically

I have recently been learning Ansible and I am having a hard time figuring out how to configure it to run playbooks on its own at a certain interval, just like Puppet does.
Ansible works in a different way compared to Puppet.
Puppet PULLS configuration changes from a central place and applies them on the remote host that asked for them.
Ansible by design works differently: you PUSH the changes (from any control machine that has SSH access to the remote hosts, usually your own computer) to the remote hosts.
You can make Ansible work in pull mode too, but that's not how it was designed to be used.
You can see this answer for more information: Can't run Ansible in daemon-mode
If you would like a host to automatically run playbooks on itself (localhost), you would basically use the ansible-pull script plus a crontab entry.
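A minimal sketch of that combination, run on each managed host (the repository URL, 30-minute interval, checkout directory, and log path are all assumptions; by default ansible-pull looks for a playbook named after the host and falls back to local.yml):
# pull the playbook repo and apply it locally every 30 minutes
*/30 * * * * ansible-pull -U https://example.com/ops/ansible-config.git -i 'localhost,' -d "$HOME/.ansible-pull" >> "$HOME/ansible-pull.log" 2>&1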
If you want to run something once after a certain interval, you can use the at module.
Example
# Schedule a command to execute in 20 minutes as root.
- at: command="ls -d / > /dev/null" count=20 units="minutes"
Further information available on ansible official site.
This is what Ansible Tower is for. It'll run playbooks after being pinged on its API, on a schedule, manually, and so on.

Run tasks in ansible before pre provisioning

I have a set of tasks in my playbook that I would like to run before Ansible checks whether the referenced roles exist (one of them installs roles from Galaxy and GitHub). Right now it appears that Ansible checks that all referenced roles exist prior to running ANY tasks, because I get fatal errors saying those roles cannot be found. Can I define a task that runs before this pre-provisioning? I would like to do this via Ansible and not have to put it in a bash script that runs before my playbook.
It would be a great way to automate downloading Galaxy dependencies and ensuring the latest/correct version of all roles is installed. Unfortunately this is not possible.
I tried this with:
pre_tasks
playbook-includes
conditional playbook-includes
But it's all the same. The playbook is first completely parsed and resolved (includes, roles) before the first task is executed.
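If it helps, the usual workaround is to resolve the role dependencies with ansible-galaxy before invoking the playbook, since role resolution happens at parse time. A minimal sketch, assuming the roles are listed in a requirements.yml and the playbook is called site.yml (both names are just conventions here):
# install/update the roles first, then run the playbook
ansible-galaxy install -r requirements.yml && ansible-playbook site.yml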
