I modified an Ansible playbook, and from git status I can clearly see the modified files. But when I run the playbook, the changes do not take effect even after the playbook has run to completion.
I tried renaming the playbook file, and even removing it from the directory, yet the playbook still executes. It seems like some sort of caching mechanism.
I often end up in this situation but cannot understand what goes wrong. The Ansible version used is 1.7.1.
Got it: I had multiple path entries for the roles_path parameter in ansible.cfg. Coincidentally, the role also existed under each of the paths listed in roles_path, and by design Ansible uses the first match.
Hence, every time I made changes in my repository, the data would still be picked up from one of the other repositories.
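To illustrate with a hypothetical ansible.cfg (the paths here are only placeholders), Ansible searches the roles_path entries left to right and stops at the first match, so a same-named role under the first path silently shadows the one in your working repository:

[defaults]
# searched left to right; the first matching role wins
roles_path = /etc/ansible/legacy-roles:/home/user/my-repo/roles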
I'm new to Ansible and I would like to run some tasks only once. Not once per run, but only once and then never again. In Puppet I used file placeholders for that, similar to Ansible's "creates".
What is the best practice for my use case?
If I should use the "creates" method, which folder would be safe to use?
For example, I would like to install letsencrypt on a host, where the first step is to update snap. I don't want to refresh snap on every run, so I would like to run that step only before the letsencrypt install:
- name: Update snap
  # '&&' requires the shell module; the command module does not interpret shell operators
  shell: snap install core && snap refresh core
  args:
    creates: /usr/local/ansible/placeholders/.update-snap
  become: true
Most Ansible modules are made to be idempotent, which means they perform an action only when needed.
shell, script and command are the exceptions, as Ansible cannot know whether the action needs to be performed; that's why the creates parameter exists to add idempotency.
So before writing a shell task, you should check whether there is already an Ansible module that could perform your task. There are almost 3000 base modules, and with newer versions of Ansible you can get additional community-provided modules through collections on Ansible Galaxy.
In your specific case, the snap module (https://docs.ansible.com/ansible/2.10/collections/community/general/snap_module.html) may be enough:
- name: Update snap
  snap:
    name: core
  become: true
It will install the core snap if it's not already installed, and do nothing otherwise.
If that's not enough, you can implement idempotency yourself by creating a task that queries snap to check for the presence of core and its version (add changed_when: false, since this task does not make any changes), and then launching the install/update depending on the query result (with a when condition).
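A minimal sketch of that pattern, assuming that snap list core exits with a non-zero return code when the core snap is not installed:

- name: Check whether the core snap is already installed
  command: snap list core
  register: core_snap
  changed_when: false
  failed_when: false
  become: true

- name: Install the core snap only when it is missing
  command: snap install core
  when: core_snap.rc != 0
  become: true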
If there is no way to do that, you can fall back to using creates. If the executed command already creates a specific file whose presence can be used to know that it was run, it is better to reference that file. Otherwise you need to create your own marker file in the shell. As it's a last-resort solution, there is no convention for where this marker file should live, so it's up to you to pick a location (the one you propose looks good, although the file name could be more specific and include the snap name, in case you use the same trick for other snaps).
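For example, a sketch of that fallback, reusing the placeholder directory from the question but with a marker name specific to the core snap (both paths are only illustrative):

- name: Ensure the placeholder directory exists
  file:
    path: /usr/local/ansible/placeholders
    state: directory
  become: true

- name: Install and refresh the core snap once
  shell: snap install core && snap refresh core && touch /usr/local/ansible/placeholders/.core-snap-refreshed
  args:
    creates: /usr/local/ansible/placeholders/.core-snap-refreshed
  become: true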
I started learning Ansible recently and I am having a hard time figuring out how to configure Ansible to run playbooks on its own after a certain interval, just like Puppet does.
Ansible works in a different way compared to Puppet.
Puppet PULLS configuration changes from a central place and applies them on the remote host that asked for them.
Ansible by design works differently: you PUSH the changes (from any control machine that has SSH access to the remote hosts, usually your own computer) to the remote hosts.
You can make Ansible work in pull mode also but it's not how Ansible was designed to be used.
You can see this answer for more information: Can't run Ansible in daemon-mode
If you would like the host to automatically run playbooks on itself (localhost), you would basically use the ansible-pull script plus crontab.
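A hedged sketch of that setup, using the cron module to run ansible-pull every 30 minutes (the repository URL and playbook name are placeholders for your own):

- name: Schedule ansible-pull to apply the local playbook every 30 minutes
  cron:
    name: ansible-pull
    minute: "*/30"
    job: "ansible-pull -U https://example.com/ansible-config.git local.yml"
  become: true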
If you want to run a playbook once after a certain interval, you can use the at module.
Example:

# Schedule a command to execute in 20 minutes as root.
- at:
    command: ls -d / > /dev/null
    count: 20
    units: minutes

Further information is available on the official Ansible site.
This is what Ansible Tower is for. It can run playbooks when triggered through its API, on a schedule, manually, and so on.
I was trying to fix an issue in the vagrant-rackspace plugin, which required running a command to remove the requiretty line from the sudoers file before the synced folder ran, as the rsync command would otherwise fail with "sudo: sorry, you must have a tty to run sudo".
However, regardless of the order in the Vagrantfile, the synced folder action would always go first. It seems that in core Vagrant the synced folder has the highest priority and will always run first. I thought it would be like the provisioners, which run in the order they appear in the Vagrantfile.
Eventually I remembered a similar looking line of code in a plugin I'd worked on recently:
action_hook(:install_chef, Plugin::ALL_ACTIONS) do |hook|
  require_relative 'action/install_chef'

  hook.after(Vagrant::Action::Builtin::Provision, Action::InstallChef)

  if defined? VagrantPlugins::AWS::Action::TimedProvision
    hook.after(VagrantPlugins::AWS::Action::TimedProvision,
               Action::InstallChef)
  end
end
So, I just changed that to make sure the task I wanted always ran before the synced folder task:
action_hook(:tty_workaround, Plugin::ALL_ACTIONS) do |hook|
  require_relative 'action/tty_workaround'

  # hook.before so the workaround runs ahead of the synced folder action
  hook.before(Vagrant::Action::Builtin::SyncedFolders, Action::TTYWorkaround)
end
Worked like a charm!
I'm still trying to figure out whether there's a way of doing this in Vagrant core, but for now I'm planning to make a Vagrant plugin that lets you point the config at a script to run on the Vagrant node, so the above approach can be used with any Vagrant setup, not just vagrant-rackspace.
I may be missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and then re-run them when a node pops back up/checks in. The playbook works fine, but if it is executed while some of the machines are down/offline, those hosts miss the changes. I'm sure the solution can't be to run the whole playbook again and again.
Maybe it's just about googling the correct terms. If someone understands the question, please help with what should be searched for, since this must be a common requirement. Is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it (Python- and SSH-based, no additional client deployment required).
There is a built-in way to do this. By using the retry concept we can retry on the hosts that failed.
Step 1: Check that your ansible.cfg file contains the retry-file settings (they live under the [defaults] section):

[defaults]
# retry files
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run your ansible-playbook against all required hosts, it will create a .retry file named after the playbook if any hosts fail.
Suppose you execute the command below:

ansible-playbook update_network.yml -e group=rollout1

It will then create a retry file in your home directory listing the hosts that failed.
Step 3: Once the first run has finished, just run ansible-playbook in a loop, for example with a while loop or from crontab:

while true
do
    ansible-playbook update_network.yml -i ~/update_network.retry
done

This keeps re-running automatically until the hosts listed in ~/update_network.retry have been exhausted.
Often the solution is indeed to run the playbook again: there are lots of ways to write playbooks so that they can be run over and over again without harmful effects. For ongoing configuration remediation like this, some people choose to simply run playbooks from cron.
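As a hedged illustration of what such a re-runnable playbook can look like (the host group, package and service names are just examples), each task below describes a desired state, so repeated runs change nothing once that state has been reached:

- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true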
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx
This might not be a valid question. I was going through the source of the copy module on GitHub.
While I could understand what it does and how, there is one thing I am not able to get. I see the following two lines:
if not os.path.exists(src):
and
if os.path.exists(dest):
While I get that these lines check for the presence of the source and destination, how does Python know where to look for them, since they are on different machines (the Ansible control server and the managed host)? How does Python differentiate them and look for them on their respective machines?
Can someone please help?
I think this copy module (library/file/copy) doesn't work.
Usually, when we use a command like this:

ansible webservers -m copy -a "src=/tmp/foo.conf dest=/tmp/bar.conf"

Ansible will preferentially use the runner action plugin (lib/ansible/runner/action_plugins/copy.py).
I tried running the same command while hiding the runner module. Then the (ansible/library/file/copy) module was executed. However, it did not do the work that is expected. There are several problems, and this code is one cause:
if not os.path.exists(src):
if os.path.exists(dest):
Both check for the file on the remote host.