I have a task that shuts down any running processes using my application files and it is called twice in the deployment from different roles.
The problem is that in between the two roles someone starts a new process.
Ansible skips the second stop task, reporting "Conditional result was False".
This then causes the deployment to fail.
How can I force Ansible to not check the condition and always run the task?
This is at the task level, not the role level, so I can't use always.
How can I force Ansible to not check and always run the task?
Depending on the tasks (which are not given or described in your question), to do so you may need to set
when: true
check_mode: false
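Assuming the stop logic lives in a single task, a minimal sketch combining both settings might look like this (task name and command are placeholders, not from the question):

```yaml
- name: Stop processes using the application files
  command: /opt/app/bin/stop-processes.sh   # placeholder for your actual stop logic
  when: true          # never skipped due to a conditional
  check_mode: false   # runs even when the play is invoked with --check
```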
Further Documentation
Basic conditionals with when
Enforcing or preventing check mode on tasks
Defining failure
Related
I run a task using the command module that calls a licensing tool to unregister the active application license on my host.
The tool prints a message when there is currently no active license that can be deactivated. This causes the task to fail and my pipeline to abort.
I added a changed_when condition to that task. Looks like this:
register: mstr_lic_info
changed_when: '"Error: This MicroStrategy instance is not active. It cannot be deactivated." not in mstr_lic_info.stderr'
But that task is still failing, because the changed_when condition does not take effect and the task state is not updated to changed.
Stderr Output looks like this:
mstr_lic_info.stderr: 'Error: This MicroStrategy instance is not active. It cannot be deactivated.'
Do I need to search only for a string without any special characters, or does that not matter for changed_when?
Do you know how to solve this?
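For reference, the task described above might look roughly like this (the tool path and arguments are assumptions). Note that changed_when only alters the reported changed state; a non-zero exit code still marks the task as failed unless a failed_when condition overrides it:

```yaml
- name: Deactivate the MicroStrategy license
  command: /opt/mstr/bin/mstrlictool deactivate   # hypothetical tool invocation
  register: mstr_lic_info
  changed_when: '"Error: This MicroStrategy instance is not active. It cannot be deactivated." not in mstr_lic_info.stderr'
  # changed_when does not prevent failure; to tolerate the "not active" case,
  # a failed_when condition on rc/stderr would be needed as well.
```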
I'm attempting to use the free strategy together with run_once: true so I can run several tasks concurrently on multiple hosts. Ansible shows a warning during execution that this is not yet supported. Any suggestions on how to achieve this?
Have you considered using the default strategy with async tasks?
Fire each task off and check for them later.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html#concurrent-tasks-poll-0
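As a rough sketch under the default strategy, each long-running task could be fired off asynchronously with poll: 0 and collected later with async_status (the command, timeouts, and retry counts are illustrative):

```yaml
- name: Fire off a long-running task without waiting
  command: /usr/local/bin/long-job.sh   # placeholder command
  async: 600        # allow the job up to 10 minutes
  poll: 0           # do not wait; move on immediately
  register: long_job

# ... other tasks run here in the meantime ...

- name: Check back on the job later
  async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 10
```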
I'm new to Ansible.
Put simply, I want to show a confirmation prompt after the tasks have started, because I need to use gathered facts (the hosts' addresses) in my alert.
My process is:
1. show the hosts
2. confirm that the user really wants to run the next steps [in this part I need to show the hosts list]
3. do something....
Is there any way to do this?
Thank you.
I think Start and Step will do what you want assuming you don't have a lot of tasks in your playbook.
Run your playbook like:
ansible-playbook playbook.yml --step
From the docs:
This will cause ansible to stop on each task, and ask if it should execute that task.
The tradeoff is that you're going to get prompted for every task, so this is really only suitable if the number of tasks is small. Also, do you really want to run your playbook interactively?
You may want to try the pause module.
- pause: prompt="Make sure that {{my_custom_fact_ip_address}} is correct..."
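Putting it together, a sketch that shows the hosts list in the prompt might look like this (ansible_play_hosts is a built-in magic variable holding the hosts active in the current play):

```yaml
- name: Confirm before continuing
  pause:
    prompt: "About to run on: {{ ansible_play_hosts | join(', ') }}. Press Enter to continue, Ctrl+C then A to abort"
  run_once: true
```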
So after reading Ansible docs, I found out that Handlers are only fired when tasks report changes, so for example:
tasks:
  - ... # some tasks
    notify: nginx_restart

handlers:
  - name: nginx_restart # our handler
    ...

vs

tasks:
  - ... # some tasks
    register: nginx_restart

  - ... # do this after nginx_restart changes
    when: nginx_restart|changed
Is there any difference between these 2 methods? When should I use each of them?
For me, register seems to have more functionality here, unless I am missing something...
There are some differences and which is better depends on the situation.
Handlers are only visible in the output if they have actually been executed; if they are not notified, they will not even show up as skipped in Ansible's output. Tasks always produce output, whether skipped, executed with change, or without (unless they are excluded via tags/skip-tags).
Handlers can be called from any role. This comes in handy if you have more complex roles which depend on each other. Let's say you have a role to manage iptables, but which rules you define actually depends on other roles (e.g. a database role, a redis role, etc.). Each role can add its rules to a config file, and at the end you notify the iptables role to reload iptables if it changed.
Handlers by default get executed at the end of the playbook. Tasks get executed immediately where they are defined. This way you could configure all your applications, and at the end the service restart for all changed apps is triggered per handler. This can be dangerous though: in case your playbook fails after a handler has been notified, the handler will not actually be called. If you run the playbook again, the triggering task may no longer have a changed state, therefore not notifying the handler. This results in Ansible actually not being idempotent. Since Ansible 1.9.1 you can call Ansible with the --force-handlers option or define force_handlers = True in your ansible.cfg to fire all notified handlers even after the playbook failed. (See docs)
If you need your handlers to be fired at a specific point (for example you configured your system to use an internal DNS and now want to resolve a host through this DNS) you can flush all handlers by defining a task like:
- meta: flush_handlers
A handler would be called only once no matter how many times it was notified. Imagine you have a service that depends on multiple config files (for example bind/named: rev, zone, root.db, rndc.key, named.conf) and you want to restart named if any of these files changed. With handlers you simply would notify from every single task that managed those files. Otherwise you need to register 5 useless vars, and then check them all in your restart task.
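For illustration, the named example above might be sketched like this (file names follow the description; the paths and templates are assumptions):

```yaml
tasks:
  - name: Deploy named configuration files
    template:
      src: "{{ item }}.j2"
      dest: "/etc/named/{{ item }}"
    with_items: [named.conf, rndc.key, root.db, zone, rev]
    notify: restart named   # notified up to five times, runs once

handlers:
  - name: restart named
    service:
      name: named
      state: restarted
```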
Personally I prefer handlers. It appears much cleaner than dealing with register. Before Ansible 1.9.1, though, tasks triggered via register were the safer option.
On the Ansible Variables page, you can see how register works.
Another major use of variables is running a command and using the result of that command to save the result into a variable.
Effectively registered variables are just like facts.
This is very different from notify, which triggers handlers. It does not save or store variables or facts.
With ignore_errors: true you can prevent a failed handler from stopping the handlers defined after it from running.
Let's imagine a playbook with following roles: base, monitoring, nginx and another playbook with only base and nginx.
Now I want the monitoring role to run a task only if the playbook includes the nginx role, because for monitoring nginx I have to pass a slightly different configuration to the monitoring service.
How do I execute a task that depends on the presence of another role?
While my workaround in the comments might have worked for you, I feel it's still not the best approach: it's not modular. For example, if you change monitoring systems, you'd need to go into each role, check whether it has a monitoring component, and update that... Not the most optimal way.
Perhaps a better way would be to still include a separate monitoring role, but there execute specific tasks using playbook conditionals. For example, nginx monitoring task would execute only when this server is part of your [webservers] group. Or when a certain variable is set to a specific value or some other appropriate conditional is met.
It is possible to set a fact with set_fact in the nginx role (set_fact: nginx=True) and then check it in the monitoring role, executing the task only when the fact is defined and true (when: (ansible_facts['nginx'] is defined) and (ansible_facts['nginx'] == True)).
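A minimal sketch of that approach (role layout assumed): set the fact in the nginx role and guard the monitoring task with it. Note that set_fact only places the variable under ansible_facts when cacheable is enabled (available since Ansible 2.4):

```yaml
# roles/nginx/tasks/main.yml
- name: Mark that the nginx role ran
  set_fact:
    nginx: true
    cacheable: true   # also stores the fact under ansible_facts

# roles/monitoring/tasks/main.yml
- name: Configure nginx monitoring
  debug:
    msg: "configuring monitoring for nginx"   # placeholder task
  when: ansible_facts['nginx'] is defined and ansible_facts['nginx']
```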