Running playbooks automatically - ansible

I have recently been learning Ansible and I'm having a hard time figuring out how to configure it to run playbooks on its own at a certain interval, just like Puppet does.

Ansible works in a different way compared to Puppet.
A Puppet agent PULLS configuration changes from a central place and applies them on the remote host that asked for them.
Ansible by design works the other way around: you PUSH changes to remote hosts from a control machine that has SSH access to them (usually your own computer).
You can make Ansible work in pull mode too, but that's not how Ansible was designed to be used.
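In practice, push mode is just an invocation like the following from the control machine (the inventory path and playbook name here are placeholders, not from the answer):

```
# Push mode: the control machine connects to each host in the inventory
# over SSH and applies the playbook to them.
ansible-playbook -i inventory/hosts site.yml
```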
You can see this answer for more information: Can't run Ansible in daemon-mode
If you would like a host to automatically run playbooks on itself (localhost), you would basically use the ansible-pull script plus a crontab entry.
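A minimal sketch of that setup (the repository URL, schedule, and paths are assumptions): a cron entry that runs ansible-pull periodically, checking out a playbook repository and applying its local.yml against localhost.

```
# /etc/cron.d/ansible-pull -- hypothetical schedule and repository URL.
# Every 30 minutes, fetch the repo and run local.yml; -o (--only-if-changed)
# skips the run when the checkout is already up to date.
*/30 * * * * root ansible-pull -U https://example.com/config.git -d /var/lib/ansible/local -o local.yml >> /var/log/ansible-pull.log 2>&1
```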

If you want to run a command once after a certain delay, you can use Ansible's at module.
Example

# Schedule a command to execute in 20 minutes as root.
- at:
    command: ls -d / > /dev/null
    count: 20
    units: minutes

Further information is available on the official Ansible site.

This is what Ansible Tower is for. It can run playbooks on a schedule, manually, after being pinged on its API, and so on.

Related

How to run custom command on rundeck host in Rundeck?

I'm creating a Rundeck job and I want to run a custom command (e.g. git pull) on the Rundeck host itself within a job that operates on other nodes. I can see the Command and Script node steps, but is there a matching workflow step?
Context
I'm pretty new to Rundeck, so here's some context on what I'm trying to achieve; maybe my whole design is wrong. I'm pretty familiar with Ansible, though, and I'm trying to integrate it with Rundeck, treating Rundeck as an executor of Ansible scripts.
We're developing a software product which is an on-prem solution and is quite complex to install (it requires deep OS configuration). We want to develop it in a Continuous Delivery fashion, as our cloud products are. So in the git repository, along with the product, we keep the Ansible workspace (playbooks, roles, requirements, custom tasks - everything except the inventory), and on every commit the Ansible workspace should be compatible with that particular product version.
My current approach is the following: the build pipeline publishes as artifacts both the product build and the zipped Ansible workspace. Whenever we want to deploy, we run a Rundeck job, which:
downloads Ansible workspace from artifacts (alternative idea: pulls repository in proper commit)
runs Ansible playbook (via Ansible workflow step), which does the stuff on selected nodes
How can I perform this first step? From what I can see, I can run a script or command on nodes, but in a particular job run the nodes are the target machines, not the Rundeck host. There is also an SCM git plugin for Rundeck, but it can load job definitions from a repository, not an Ansible workspace.
A good approach for that:
1. Integrate Rundeck with Ansible following this guide (note that Rundeck and Ansible must coexist on the same server).
2. Create a new job; by default, new Rundeck jobs are configured to run locally.
3. As the first step, add a "script step" (inline script) that changes to the desired directory and runs the git pull command (you can also use this approach to clone the repo if you need that).
4. The next step can then be the execution of your Ansible playbook (the "Ansible Playbook Workflow Node Step") in the same job.
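As a sketch, the inline script for that first step might look like this (the workspace path and repository URL are placeholders, not part of the original answer):

```
#!/bin/sh
# Hypothetical Rundeck inline script step: refresh the Ansible workspace
# on the Rundeck host before the playbook step runs.
WORKSPACE=/var/lib/rundeck/ansible-workspace
REPO=https://example.com/product/ansible.git

if [ -d "$WORKSPACE/.git" ]; then
  git -C "$WORKSPACE" pull --ff-only   # update an existing checkout
else
  git clone "$REPO" "$WORKSPACE"       # first run: clone the repo
fi
```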

Do I need "shell" runner if I want to deploy gitlab + ansible on the same box?

I've seen a GitLab demo about using Infrastructure as Code. If I wanted to do the same on-prem, would this work: set up open-source GitLab on-prem, set up Bash as a shell runner, and use the shell runner to execute Ansible playbooks against the network equipment?
Having written out this answer, I now believe this question is at risk of closure for being either Too Broad or Primarily Opinion Based; but I already spent the effort to type it out, so here we go.
Setup open-source gitlab on-prem, setup a bash as shell runner, and using the shell runner, execute ansible playbooks on the network equipment?
I believe that's possible. Alternatively, you can install ansible, along with any python modules required by your playbooks, into a docker image and then use the docker executor to run the playbooks inside a container. Using Tower or AWX is also possible, since they have the concept of projects run from source control.
The advantage of using the docker runner is that you don't have to pre-install ansible (along with its dependencies) on every runner host; the disadvantage is that I could imagine SSH authentication from inside the container getting weird.
# hypothetical .gitlab-ci.yml
stages:
  - apply

run ansible playbook:
  stage: apply
  image: docker.example.com/my-ansible:2.8
  script:
    - ansible-playbook -i ./some-inventory -v playbook.yml
The advantage of using a dedicated system like AWX (or Tower) is that the inventory against which those playbooks run is also a formally managed entity in the system, so you wouldn't need to teach GitLab how to make it available to your playbook. The same goes for authentication, since AWX has first-class support for managed SSH keypairs that can be conditionally granted to only certain playbook projects.
You can still have GitLab integrate with Tower by using either tower-cli or the rich Tower API to launch a Job Template whose Project is configured to do an SCM update before launch.
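As a sketch of the API route (the host, token, and Job Template ID are placeholders): launching a Job Template is a single authenticated POST against the Tower/AWX REST API.

```
# Hypothetical CI step: launch Job Template 42 on an AWX/Tower host.
# TOWER_HOST and TOWER_TOKEN would come from CI variables; both are assumptions.
curl -s -X POST \
  -H "Authorization: Bearer $TOWER_TOKEN" \
  "https://$TOWER_HOST/api/v2/job_templates/42/launch/"
```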

Generate VMs based on Ansible Inventory prior to playbook run

So I'm looking at creating a generic wrapper around the ansible-playbook command.
What I'd like to do is spin up a number of VMs (Vagrant or docker), based on the inventory supplied.
I'd use these VMs locally for automated testing using molecule, as well as manual function testing.
Crucially, the number of machines in the inventory could change, so these need to be created prior to the run.
Any thoughts?
Cheers,
Stuart
You could use a tool like Terraform to run your docker images, and then export the inventory from Terraform to Ansible using something like terraform-inventory.
I think there's also an Ansible provisioner for Terraform.
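As a sketch of how terraform-inventory is typically wired in (the paths and playbook name are assumptions): it acts as a dynamic inventory script that reads the Terraform state.

```
# Create the VMs/containers, then point ansible-playbook at the
# terraform-inventory dynamic inventory script, which reads the state file.
terraform apply -auto-approve
TF_STATE=./terraform.tfstate ansible-playbook \
  --inventory-file=/usr/local/bin/terraform-inventory \
  playbook.yml
```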

Is it possible to run a playbook in "pull mode"?

I have playbooks which I start on a master host and which run specific actions on remote hosts. This is a "push" mode - the activity is initiated by the master host.
Several of my hosts are down at a given time and obviously fail to run the playbook when in such a state. This leads to hosts which are up-to-date, while others are not.
To fix this I could run the playbook from the master host on a regular basis (via cron, for instance), but this is not particularly efficient.
Is there a built-in way in Ansible to reverse the flow, that is, to initiate from a remote host a playbook available on the master and run it on that remote host?
I can imagine that the remote host could ssh to the master (at boot time for instance) and then trigger the playbook with the host as a parameter (or something around that idea) but I would definitely prefer to use an Ansible feature instead of reinventing it.
There is a script called ansible-pull which reverses the default push architecture of Ansible. There is also an example playbook from the Ansible developers available.
Using ansible-pull is really simple and straightforward; this example might help you:
ansible-pull -d /root/playbooks -i 'localhost,' -U git@bitbucket.org:arbabnazar/pull-test.git --accept-host-key
Option details:
1. --accept-host-key: adds the host key for the repo URL if not already known
2. -U: URL of the playbook repository
3. -d: directory to check the repository out into
4. -i 'localhost,': the inventory to use. Since we're only concerned with one host, we pass -i 'localhost,' (the trailing comma makes Ansible treat it as an inline host list rather than an inventory file).
For details, please refer to this tutorial.

running ansible playbook against unavailable hosts (down/offline)

I may be missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and to re-run them when a node pops up / checks in. The playbook works fine, but if it is executed while some of the machines are down/offline, those hosts miss the changes. I'm sure the solution can't be to run the whole playbook again and again.
Maybe it's just a matter of googling the correct terms; if someone understands the question, please help with what I should search for, since this must be a common requirement. Is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it (Python and SSH based: no additional client deployment required).
There is an inbuilt way to do this: by using the retry mechanism, we can re-run the playbook against only the failed hosts.
Step 1: Check that your ansible.cfg contains the retry-file settings:

[defaults]
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run ansible-playbook against all required hosts, it will create a .retry file named after the playbook if any hosts fail.
Suppose you execute the command below:
ansible-playbook update_network.yml -e group=rollout1
It will create a retry file in your home directory listing the hosts that failed.
Step 3: After the first run, re-run ansible-playbook in a loop (from a shell or a crontab), limiting it to the hosts in the retry file, until a run completes without failures:

while [ -f ~/update_network.retry ]
do
  ansible-playbook update_network.yml --limit @~/update_network.retry && break
  sleep 300
done

The --limit @file form keeps your original inventory (and its group variables) and only restricts the run to the failed hosts listed in the retry file; breaking on a successful run stops the loop once no hosts are left failing.
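If some host never comes back, a loop like the one above would spin forever; a bounded variant is safer. This is a generic sketch (the helper name and retry limits are mine, not from the original answer):

```shell
#!/bin/sh
# Hypothetical helper: re-run a command until it succeeds or a maximum
# number of attempts is reached.
# Usage: retry_until_ok 10 ansible-playbook update_network.yml --limit @~/update_network.retry
retry_until_ok() {
  max=$1; shift        # first argument: maximum number of attempts
  n=1
  while ! "$@"; do     # run the command; keep looping while it fails
    if [ "$n" -ge "$max" ]; then
      return 1         # give up after max attempts
    fi
    n=$((n + 1))
    sleep 1            # brief pause between attempts
  done
  return 0
}
```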
Often the solution is indeed to run the playbook again: there are lots of ways to write playbooks so that they can be run over and over without harmful effects (that is, idempotently). For ongoing configuration remediation like this, some people choose to simply run playbooks from cron.
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx
