Ansible idempotency issue with unarchive and then modify extracted file - ansible

In one of our Ansible roles we extract a tar.gz file and then replace one of the extracted files with another one to fix an issue.
The problem is that when we run Ansible again, it extracts the archive again since the directory content has changed, naturally marks the task as changed, and then replaces the file again as expected.
So we have two "changes" every time we run the playbook...
How should I handle this issue to keep the operation idempotent?

Use the exclude option to ignore certain paths; see the documentation. For example:
- unarchive:
    src: https://example.com/example.zip
    dest: /usr/local/bin
    remote_src: True
    exclude: bad.config
The creates option might also suit you: the unarchive step will not be run if the specified path already exists on the remote machine, as in the sketch below.
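For example, a minimal sketch using creates (the checked path is only an assumption about a file the archive is known to extract):
- unarchive:
    src: https://example.com/example.zip
    dest: /usr/local/bin
    remote_src: True
    creates: /usr/local/bin/example   # assumed: a file that only exists after the archive has been extracted
With creates satisfied, the whole unarchive task is skipped on later runs, so your separately replaced file is left alone.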

Related

Executing task only on the first run on each host

I'm new to Ansible and I would like to run some tasks only once: not once per run, but once and then never again. In Puppet I used file placeholders for that, similar to Ansible's "creates".
What is the best practice for my use case?
If I should use the "creates" method, which folder should be safe to use?
For example, I would like to install letsencrypt on a host, where the first step is to update snap. I don't want to refresh snap on every run, so I would like to run it only before the letsencrypt install.
- name: Update snap
  command: snap install core && snap refresh core
  args:
    creates: /usr/local/ansible/placeholders/.update-snap
  become: true
Most Ansible modules are made to be idempotent, which means they perform an action only when needed.
shell, script and command are the exceptions, as Ansible cannot know whether the action needs to be performed; that's why the creates parameter exists, to add idempotency.
So before writing a shell task, you should check whether there is not already an Ansible module that could perform your task. There are almost 3000 base modules, and with newer versions of Ansible you can get additional community-provided modules through collections on Ansible Galaxy.
In your specific case, the snap module (https://docs.ansible.com/ansible/2.10/collections/community/general/snap_module.html) may be enough:
- name: Update snap
  snap:
    name: core
  become: true
It will install the core snap if it's not already installed, and do nothing otherwise.
If that's not enough, you can implement idempotency yourself: create a task that queries snap for the presence of core and its version (add changed_when: false since that task doesn't change anything), then launch the install/update depending on the query result (with a when condition).
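A minimal sketch of that approach, assuming snap list core exits non-zero when the snap is absent (adapt the query and the condition to whatever check you trust):
- name: Check whether the core snap is present
  command: snap list core
  register: core_snap
  changed_when: false   # pure query, never reported as a change
  failed_when: false    # a missing snap is not a failure here

- name: Install the core snap only when it is missing
  command: snap install core
  become: true
  when: core_snap.rc != 0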
If there is no way to do that, then you can fall back to using creates. If the executed command already creates a specific file whose presence tells you it has already run, reference that file. Otherwise you need to create your own marker file in the shell. As this is a last-resort solution, there is no convention for where the marker file should live, so it's up to you (the path you propose looks good, although the file name could be more specific and include the snap name, in case you use the same trick for other snaps).
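As a last-resort sketch with a self-managed marker file (the directory is the one you proposed, the file name is just an example that includes the snap name):
- name: Refresh the core snap once
  shell: snap refresh core && touch /usr/local/ansible/placeholders/.core-snap-refreshed
  args:
    creates: /usr/local/ansible/placeholders/.core-snap-refreshed
  become: true
The placeholder directory has to exist beforehand, e.g. created once with a file task using state: directory.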

Ansible - win_file module - force deletion if file in use

We are using the Ansible win_file module to delete a particular folder on a Windows Server machine with the following code:
- name: Delete <folderName> directory
  win_file: path=C:\<pathToFolder>\{{target_environment}}\<folderName> state=absent
  tags: <folderName>
The problem: When a file from that directory is open in another program at the same time the ansible role runs, it fails saying:
"The process cannot access the file because it is being used by another process"
Now, I understand the error, but I am looking for suggestions to force this deletion even if the file is in use, or for another module I don't know about that can solve this problem.
(currently using ansible 2.4.6)
So, after some searching and digging, I came up with a solution: I found a similar Ansible module that can do the job, win_shell.
I resolved the problem with the following code:
- name: Delete <folderName> directory
  win_shell: Remove-Item -Path <folderName> -Recurse -Force
  args:
    chdir: C:\<pathToFolder>\{{target_environment}}
    removes: C:\<pathToFolder>\{{target_environment}}\<folderName>
  tags: <folderName>
removes: checks whether the folder exists; otherwise the task is skipped.
force: does what I want: it deletes the folder and all of its files even if some of them are in use or open in another program.

Create a sym link in ansible

I'm writing a playbook and I want to create a symlink.
While installing Citrix on the Linux system I need to create a symlink using this command:
ln -s /etc/ssl/serts cacerts
Now in the playbook I use it as:
- name: Link
  command: ln -s /etc/ssl/serts cacerts
The thing is, the format above works fine. But I would like to check whether the link already exists: create it if it doesn't, and skip to the next task if it does.
I could use ignore_errors: yes, but I think there is a better way of doing it.
Thank you very much in advance.
You can use the "file" module:
- name: Link
  file:
    src: /etc/ssl/serts
    dest: cacerts
    state: link
It is generally better to use a proper module, which will deal with failure conditions and check mode. In this case it will not fail if the link already exists and is correct.
Note that src is the target the link points to and dest is the link itself; you may want to give an absolute dest depending on your application.
For more information: https://docs.ansible.com/ansible/latest/modules/file_module.html

How to deploy different configurations on different servers without repeating the task each time

Hi folks,
I have to deploy configurations on several servers, but they are different on each of them. I was wondering whether with Ansible it is possible to make a loop, or to pass the server names as a parameter, so that when they match, the configuration is deployed on that server?
There are multiple ways you can approach this, depending on your environment:
If you have a separate file for each host, you could name them something like "hostname.application.conf". Then you can use a simple copy to deploy the configs:
- copy:
    src: "/path/to/configs/{{ansible_hostname}}.application.conf"
    dest: /path/to/application/folder/application.conf
The variable "ansible_hostname" is automatically generated by Ansible and contains the hostname of the currently targeted host. If you have multiple applications, you can loop over them with something like this:
- copy:
    src: "/path/to/configs/{{ansible_hostname}}.{{item.name}}.conf"
    dest: "{{item.path}}{{item.name}}.conf"
  ...
  loop:
    - { name: 'appl1', path: '/path/to/application/folder/' }
    - ...
If you have one configuration that needs to be modified and copied to the other hosts, you can look into templating: https://docs.ansible.com/ansible/latest/modules/template_module.html
I would strongly recommend using a Jinja2 template as the configuration file, so Ansible can fill in the variables as defined in your group or host vars files.
https://docs.ansible.com/ansible/latest/modules/template_module.html
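A minimal sketch of the templating approach (the template name, destination path and variable names are just examples): the template contains placeholders such as {{ app_listen_port }}, and each host or group defines its own value in host_vars/group_vars.
- name: Render the application config for this host
  template:
    src: application.conf.j2    # example template with {{ ... }} placeholders
    dest: /path/to/application/folder/application.conf
One task then produces a different rendered file on every host, instead of one copy task per server.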
Role-based Ansible playbook
It will work based on the following behaviors, for each role ‘x’:
If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play.
If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play.
If roles/x/vars/main.yml exists, variables listed therein will be added to the play.
If roles/x/defaults/main.yml exists, variables listed therein will be added to the play.
If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles.
Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
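As an illustration of those behaviors, a hypothetical role named appconfig could be laid out like this, with its task referencing the template by name only:
# roles/
#   appconfig/
#     defaults/main.yml
#     templates/application.conf.j2
#     tasks/main.yml

# roles/appconfig/tasks/main.yml
- name: Deploy application config
  template:
    src: application.conf.j2    # resolved from roles/appconfig/templates/ automatically
    dest: /path/to/application/folder/application.conf

# playbook.yml
# - hosts: appservers
#   roles:
#     - appconfig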

Ansible synchronize mode permissions

I'm using an Ansible playbook to copy files between my host and a server. The thing is, I have to run the script repeatedly in order to upload some updates. At the beginning I was using the "copy" module of Ansible, but to improve performance of the synchronizing of files and directories, I've now switched to use the "synchronize" module. That way I can ensure Ansible uses rsync instead of sftp or scp.
With the "copy" module, I was able to specify the file's mode in the destination host by adding the mode option (e.g. mode=644). I want to do that using synchronize, but it only has the perms option that accepts yes or no as values.
Is there a way to specify the file's mode using "synchronize", without having to inherit it?
Thx!
Finally I solved it using rsync_opts:
- name: sync file
  synchronize:
    src: file.py
    dest: /home/myuser/file.py
    rsync_opts:
      - "--chmod=F644"

Resources