I have a set of tasks in my playbook that I would like to run before Ansible checks whether the referenced roles exist. (One of them installs roles from Galaxy and GitHub.) Right now it appears that Ansible verifies that all referenced roles exist before running ANY tasks, because I get fatal errors saying those roles cannot be found. Can I define a task that runs before this pre-provisioning check? I would like to do this via Ansible and not have to put it in a bash script that runs before my playbook.
It would be a great way to automate downloading Galaxy dependencies and ensuring the latest/correct version of all roles is installed. Unfortunately this is not possible.
I tried this with:
pre_tasks
playbook-includes
conditional playbook-includes
But it's all the same. The playbook is first completely parsed and resolved (includes, roles) before the first task is executed.
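To make the parse-order problem concrete, here is a minimal sketch (the role name some_author.some_role is made up) of the kind of play that still fails: the roles: entry is resolved while the playbook is parsed, before the pre_tasks step ever gets a chance to install anything.
- hosts: all
  pre_tasks:
    # Never reached: the play already failed while being parsed.
    - name: Install the role from Galaxy
      command: ansible-galaxy install some_author.some_role
  roles:
    # Resolved at parse time, so Ansible aborts here with "cannot find role".
    - some_author.some_role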
What I'm doing:
I start with a new server with no files on it
I run the Ansible Runner process
Ansible will create all the files in the roles
I've tried this procedure many times, always deleting the server and starting from fresh but getting the same result.
Everything seems to run fine, but if I modify a template and print the files generated from it, they always show the same content.
If I run the playbook with the Ansible installed on my system, it works.
I suspect some sort of caching in the Ansible installed in my virtualenv.
Any ideas?
I'm running Ansible within Terraform.
I downloaded a galaxy role using:
ansible-galaxy install <role>
This downloaded it to my ~/.ansible/roles directory.
But when I referenced the role name in my ansible playbook that I use with terraform, it couldn't find it.
So instead I specified a directory like so:
ansible-galaxy install --roles-path . <role>
While this now works, it's mixed in with all my other scripts that are under source control.
I'm wondering if there's a way to use a Galaxy role without actually downloading it? Or is there a way to specify in my Ansible playbook where to look for additional roles besides my own roles directory?
Since Ansible only knows how to retrieve roles from the local filesystem, you must download roles before you can use them.
You can tell Ansible where to look for roles by setting the ANSIBLE_ROLES_PATH environment variable, or by setting roles_path in your ansible.cfg. See the documentation for details.
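For example (the /opt/ansible-roles path below is just an illustration), either of these keeps downloaded roles out of your source tree while still letting your playbook find them:
# Option 1: environment variable (colon-separated list of directories)
export ANSIBLE_ROLES_PATH=/opt/ansible-roles:~/.ansible/roles

# Option 2: ansible.cfg
[defaults]
roles_path = /opt/ansible-roles:~/.ansible/roles
Roles installed with ansible-galaxy install --roles-path /opt/ansible-roles would then be picked up without living next to your Terraform scripts.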
I have a playbook that calls the active_directory role. Within main.yml I do not call the rm_user task, but I would like to be able to remove users via the CLI when needed. Even tags don't work if the task is not included in the play.
Is there a way to run tasks that are not included in the play (but are included in the active_directory role directory) via the CLI?
To answer your question:
Is there a way to run tasks that are not included in the play (but are included in the active_directory role directory) via the CLI?
Yes, there is the include_role module.
You can run, for example:
ansible <host-pattern> -m include_role -a "name=active_directory tasks_from=file_you_chose.yml"
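The same thing can also be done from a small playbook; this is just a sketch, assuming the role keeps the user-removal tasks in a file such as rm_user.yml (adjust the file name to whatever the role actually uses):
- hosts: all   # or whatever host pattern you need
  tasks:
    - name: Run only the rm_user tasks from the active_directory role
      include_role:
        name: active_directory
        tasks_from: rm_user.yml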
I'm trying to find my way out of a dependency thicket. I'm using Ansible 1.9.2.
In a single playbook, I want to be able to install a Galaxy role (in the event, the Datadog.datadog role) and configure it. But Ansible always barfs; since the Datadog.datadog role doesn't exist until another role that I wrote installs the Galaxy role, it won't execute. This is how I'd really like it to be, cutting out the other roles that my playbook uses:
- hosts: all
  roles:
    - install_datadog
    - (some other roles...)
    - { role: Datadog.datadog, sudo: true }
  vars:
    datadog_api_key: "somekey"
I have tried all of the following, and none of them work for installing the Ansible Galaxy Datadog.datadog role first:
Having an earlier block in the same playbook that runs my install_datadog role.
Using an 'include' statement earlier in the playbook that includes the install_datadog role's main.yml.
Creating a pre_task statement in the above playbook.
Defining a role dependency doesn't make sense, because Datadog.datadog doesn't exist yet, so I can't define any dependencies in it. There is always an error akin to this:
ERROR: cannot find role in /etc/ansible/roles/Datadog.datadog
The only thing I can get to work is executing the install_datadog role in a prior run. This is not a great solution: previously, one playbook with many execution blocks and role invocations configured our whole environment, whereas this would require running two playbooks in a specific order, which is highly inelegant.
So in a single run, how do I resolve a Galaxy role that won't exist until an earlier role has run to install it?
Make sure that your roles_path is correct. The roles_path variable in ansible.cfg specifies where ansible will look for roles, and the --roles-path option to ansible-galaxy will specify where the datadog role gets installed.
For example, my install task looks like this:
ansible-galaxy install Datadog.datadog --roles-path=/usr/home/vagrant
and in my ansible.cfg file I have this line:
roles_path = /vagrant/ansible/roles:/usr/home/vagrant
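If the install is driven from a role such as install_datadog, a minimal sketch of its task (using the same example paths as above) could look like this — the key point is that the --roles-path given to ansible-galaxy and the roles_path in ansible.cfg have to agree:
# roles/install_datadog/tasks/main.yml (sketch)
- name: Install Datadog.datadog into a directory that is on roles_path
  command: ansible-galaxy install Datadog.datadog --roles-path=/usr/home/vagrant
  args:
    creates: /usr/home/vagrant/Datadog.datadog   # skip if already installed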
I may be missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and then re-run them when a node pops up/checks in. The playbook works fine, but if it is executed while some of the machines are down/offline, those hosts miss the changes… I'm sure the solution can't be to run the whole playbook again and again.
Maybe it's about googling the correct terms… If someone understands the question, please help with what should be searched for, since this must be a common requirement… Is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it (Python and SSH based… no additional client deployment required).
There is a built-in way to do this. Using the retry-file mechanism, we can retry against the hosts that failed.
Step 1: Check that your ansible.cfg contains the retry-file settings (under the [defaults] section):
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run ansible-playbook against all the required hosts, it will create a .retry file named after the playbook.
Suppose you execute the command below:
ansible-playbook update_network.yml -e group=rollout1
It will create a retry file in your home directory containing the hosts that failed.
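The retry file is just a plain list of the failed host names, one per line; for example (made-up hosts), ~/update_network.retry might contain:
web01.example.com
web02.example.com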
Step 3: After the first run, re-run ansible-playbook in a loop as below, either with a while loop or from a crontab:
while [ -f ~/update_network.retry ]
do
    ansible-playbook update_network.yml -i ~/update_network.retry
done
This keeps re-running the playbook against the hosts listed in ~/update_network.retry for as long as that file exists; remove the retry file (or break the loop) once every host has succeeded.
Often the solution is indeed to run the playbook again; there are lots of ways to write playbooks so that they can be run over and over again without harmful effects. For ongoing configuration remediation like this, some people choose to just run playbooks using cron.
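As a sketch of the cron approach (the user, paths, and schedule are only examples), an entry like this re-applies the playbook every 30 minutes, so hosts that were offline earlier pick up the changes on a later run:
# /etc/cron.d/ansible-remediation (example)
*/30 * * * * deploy ansible-playbook -i /etc/ansible/hosts /opt/playbooks/site.yml >> /var/log/ansible-remediation.log 2>&1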
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx