Ansible is not creating .retry files

I'm using Ansible 2.8 and I do not see .retry files getting created when the playbook fails to execute tasks on servers.
I do not see any error message either.
The relevant entries in ansible.cfg are below.
Since retry_files_enabled defaults to true, I have not made any changes to the cfg file. Does anyone know why it isn't creating .retry files?
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry

You can check the actual values of the configuration by running
ansible-config dump
The reason the Ansible behavior differs from the value in a configuration file is that Ansible has several possible places to look for configuration files, and some of them may have higher precedence than yours (e.g. an ansible.cfg in the directory with the playbook).

The default behaviour was changed via this proposal:
https://github.com/ansible/proposals/issues/155
RETRY_FILES_ENABLED now defaults to False
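If you rely on .retry files, you can re-enable them explicitly in ansible.cfg. A minimal sketch, using the same option names shown in the question (the save path is just an example):

```ini
[defaults]
retry_files_enabled = True
retry_files_save_path = ~/.ansible-retry
```

After this, a failed run should again write a playbook-name.retry file to the configured path.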

Related

how to not display skipped hosts/tasks?

I have many playbooks and I don't have access to the machine where Ansible is installed. I write my playbooks locally on my laptop, then push them to a repo and run them via Jenkins. I can't control or change e.g. ansible.cfg. Is there a way to manipulate the Ansible default stdout callback plugin per playbook without accessing the Ansible host itself?
Actually there is: you can use an environment variable for this (check the documentation):
ANSIBLE_DISPLAY_SKIPPED_HOSTS=False ansible-playbook main.yml
But since that variable is deprecated, it's better to use the ansible.cfg option for this:
[defaults]
display_skipped_hosts = False

Getting a python warning when running playbook EC2 inventory

I am really new to Ansible and I hate getting warnings when I run a playbook. This environment is being used for my education.
Environment:
AWS EC2
4 Ubuntu 20
3 Amazon Linux2 hosts
Inventory: using the dynamic inventory script
Playbook: just runs a simple ping against all hosts; I wanted to test the inventory
Warning:
[WARNING]: Platform linux on host XXXXXX.amazonaws.com is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the
meaning of that path. See https://docs.ansible.com/ansible-core/2.11/reference_appendices/interpreter_discovery.html for more information.
Things I have tried:
updated all symlinks on the hosts to point to the Python 3 version
added the line ansible_python_interpreter = /usr/bin/python to /etc/ansible/ansible.cfg (I am relying on that cfg file)
I would like to know how to solve this. Since I am not running a static inventory, I didn't think I could specify an interpreter per host or per group of hosts. While the playbook runs, it seems that something is not configured correctly and I would like to get that sorted. This is only present on the Amazon Linux instances; the Ubuntu instances are fine.
Michael
Thank you. I did find another route that works, though I am sure what you suggested would also work.
I was using the wrong configuration entry. I was using
ansible_python_interpreter = /usr/bin/python
when I should have been using
interpreter_python = /usr/bin/python
On each host I made sure the /usr/bin/python symlink was pointing at the correct version.
According to the documentation:
for individual hosts and groups, use the ansible_python_interpreter inventory variable
globally, use the interpreter_python key in the [defaults] section of ansible.cfg
Regards, Michael.
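The two options quoted from the documentation can be sketched like this (the group name amazon_linux and the interpreter path are illustrative). Per host or group, in the inventory:

```ini
[amazon_linux:vars]
ansible_python_interpreter=/usr/bin/python3
```

Globally, in ansible.cfg:

```ini
[defaults]
interpreter_python = /usr/bin/python3
```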
You can edit your ansible.cfg and set auto_silent mode:
interpreter_python=auto_silent
Check reference here:
https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html

How can I tell if my ansible.cfg is working?

I have an ansible.cfg file. Ansible isn't behaving as expected for me, but I don't know if that's because my configuration isn't working or because my ansible.cfg file isn't even getting picked up at all.
How can I verify whether my ansible.cfg is working?
Q: "I don't know if my configuration isn't working or my ansible.cfg file isn't even getting picked up at all."
A: Run the command
shell> ansible-config dump --only-changed
This will "Only show configurations that have changed from the default" and will also reveal the source of each change, whether that is a configuration file or an environment variable.
For details see Configuration settings.
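For example, if your ansible.cfg overrode the remote temporary directory, the output might look something like this (illustrative only; setting names and paths will vary with your configuration):

```
DEFAULT_REMOTE_TMP(/etc/ansible/ansible.cfg) = /tmp/${USER}/ansible
```

The origin shown in parentheses tells you which configuration file (or environment variable) Ansible actually picked up, which answers the original question directly.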

how to deploy different conf in different server without repeat the task each time

Hi folks,
I have to deploy configurations on several servers, but they are different on each of them. I was wondering whether, with Ansible, it is possible to make a loop, or whether I should pass the server names as a parameter so that when they match it deploys the configuration on that server.
There are multiple ways you can approach this, depending on your environment:
If you have a separate file for each host you could name them something like "hostname.application.conf". Then you can use a simple copy to deploy the configs:
- copy:
    src: "/path/to/configs/{{ ansible_hostname }}.application.conf"
    dest: /path/to/application/folder/application.conf
The variable "ansible_hostname" is automatically generated by Ansible and contains the hostname of the currently targeted host. If you have multiple applications you can loop over them with something like this:
- copy:
    src: "/path/to/configs/{{ ansible_hostname }}.{{ item.name }}.conf"
    dest: "{{ item.path }}{{ item.name }}.conf"
  ...
  loop:
    - { name: 'appl1', path: '/path/to/application/folder/' }
    - ...
If you have one configuration that needs to be modified and copied to the other hosts, you can look into templating: https://docs.ansible.com/ansible/latest/modules/template_module.html
I would strongly recommend using a Jinja2 template as the configuration file, so Ansible can fill in the variables as described in group or host files.
https://docs.ansible.com/ansible/latest/modules/template_module.html
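A minimal sketch of that templating approach (the file names and the listen_port variable are hypothetical). The template, e.g. templates/application.conf.j2:

```
listen_port = {{ listen_port }}
```

And the task that renders it per host:

```yaml
- name: Deploy templated config
  template:
    src: application.conf.j2
    dest: /path/to/application/folder/application.conf
```

With listen_port defined differently in each host's host_vars (or in group_vars), every server gets its own rendered configuration from a single source file.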
Role based Ansible Playbook
It works based on the following behaviors, for each role 'x':
If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play.
If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play.
If roles/x/vars/main.yml exists, variables listed therein will be added to the play.
If roles/x/defaults/main.yml exists, variables listed therein will be added to the play.
If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles
Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
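The directory layout described above looks like this (role name x as in the docs):

```
roles/
  x/
    tasks/main.yml
    handlers/main.yml
    vars/main.yml
    defaults/main.yml
    meta/main.yml
    files/
    templates/
```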

.ansible/tmp/ansible-tmp-* Permission denied

Remote host throws error while running Ansible playbook despite a user being sudo user.
"/usr/bin/python: can't open file '/home/ludd/.ansible/tmp/ansible-tmp-1466162346.37-16304304631529/zypper'
A fix that worked for me was to change the path of Ansible's remote_tmp directory in Ansible's configuration file, e.g.
# /etc/ansible/ansible.cfg
remote_tmp = /tmp/${USER}/ansible
Detailed information can be found here.
Note: With Ansible v4 (or later) this variable might look like this: ansible_remote_tmp; check the docs.
Caution: Ansible configuration settings can be declared and used in a configuration file, which will be searched for in the following order:
ANSIBLE_CONFIG (environment variable if set)
ansible.cfg (in the current directory)
~/.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
I had to set the variable ansible_remote_tmp rather than remote_tmp in order to make it work.
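Since ansible_remote_tmp is a variable, it can also be set in the inventory rather than in ansible.cfg. A sketch as a group variable (the group name and path are just examples):

```ini
[all:vars]
ansible_remote_tmp=/tmp/ansible-tmp
```

This is handy when only some hosts have restrictive home-directory permissions and you do not want to change the global configuration.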
Changing remote_tmp didn't solve the issue for me. What did solve it, however, was removing --connection=local from the playbook invocation.
How does the file in question get to the host? Do you copy or sync it? If you do, you may want to run
chmod 775 fileName
on the file before you send it to the host.
