where are set_facts stored for ansible when cacheable true? - ansible

I have an example playbook below. When I set cacheable: true, where are the facts stored? Also, how do I delete these facts afterwards for a fresh play? I have looked on the remote host (our example database) and found nothing, and on the local host (where I ran my playbook) I cannot find anything either, although I can see the facts displayed. I found this in the Ansible docs https://docs.ansible.com/ansible/2.5/modules/set_fact_module.html but it is not much help for understanding this.
---
- name: Setting database facts
  hosts: database_servers:!localhost
  tasks:
    - name: set_facts for database servers
      set_fact:
        database_endpoints: "{{ remote_endpoints_dev }}"
        cacheable: true
      when: ENVIR == "dev"

When I say cacheable: true, where are the facts stored?
cacheable only takes effect if fact caching is enabled, and where the facts are stored depends on which cache plugin you configured (memory, jsonfile, redis, ...). With the default memory cache, nothing is written to disk, which is why you find no files on either host.
Also, how do I delete these facts afterwards for a fresh play?
The facts are overwritten every time the set_fact task runs; I use tags to control when those tasks execute. How long entries stay in the cache is controlled by the fact caching configuration.
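As a sketch of one common setup: with the jsonfile cache plugin enabled in ansible.cfg, cacheable facts are written to one JSON file per host on the controller (the directory path below is an illustrative assumption):

```ini
# ansible.cfg — enable persistent fact caching (illustrative values)
[defaults]
fact_caching = jsonfile
# Directory on the controller; one JSON file per host is written here
fact_caching_connection = /tmp/ansible_fact_cache
# Seconds before a cached entry is considered stale (86400 = 1 day)
fact_caching_timeout = 86400
```

For a fresh play you can delete the per-host files under that directory, or run a task with `meta: clear_facts`, which clears the gathered facts for the targeted hosts, including the fact cache.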

Related

Store Timestamp As A Constant Value

I'm trying to save the timestamp corresponding to when a playbook runs in a variable.
I plan to use this variable across the playbook but I'm facing issues particularly since
the lookup plugin runs and generates a new value each time.
module_defaults:
  group/ns.col.session:
    sid: "{{ lookup('pipe', 'date \"+%Y-%m-%d-%H%M\"') }}"
The value is looked up at the time that it is invoked.
I could use set_fact:, but it only works inside the tasks: block, and I'd like to set the value before any task runs, i.e. right after hosts:.
- hosts:
    - localhost
  module_defaults:
    group/ns.col.session:
      sid: .............
How do I achieve this WITHOUT using set_fact OR without using the lookup() ?
In other words, how to save or copy the value of a lookup to some variable ?
I've already reviewed Constant Date and Time
but the solutions proposed over there do not satisfy my constraints.
The whole reason behind this is to avoid running any task anywhere (or modifying the playbook to run a task on the Ansible controller node alone) just to look up the timestamp; instead, the value should be preserved in a variable at the very beginning.

Force ansible to apply changes on each lines of an inventory group

I have a bare metal server, and I want to install multiple services on this server.
My inventory looks like that
[Mygroup]
Server port_service=9990 service_name="service1"
Server port_service=9991 service_name="service2"
When I launch my Ansible job, only service2 is installed, because the same server appears on each line of my group. Is there a way to force Ansible to take all lines of a group?
I don't want to create a group for each service
Q: "Is there a way to force Ansible to take all lines of a group?"
A: No. There is not. Within a group, hosts must be unique; if multiple entries share the same name, the last one wins.
Put the variables into one line e.g.
[Mygroup]
Server port_services="[9990, 9991]" service_names="['service1', 'service2']"
(and change the code).
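The changed code could then iterate over the two lists in lockstep; a minimal sketch (the task body is illustrative, substitute your real install logic):

```yaml
- hosts: Mygroup
  tasks:
    - name: Install each service on its port
      ansible.builtin.debug:
        msg: "Installing {{ item.0 }} on port {{ item.1 }}"
      # zip pairs service_names with port_services element by element
      loop: "{{ service_names | zip(port_services) | list }}"
```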
See How to build your inventory. There are many other options, e.g.
[Mygroup]
Server
[Mygroup:vars]
port_services="[9990, 9991]"
service_names="['service1', 'service2']"
I hope I got you right, but this should do the trick.
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
Greets
Harry
Another solution is to use an alias.
This solution works fine for me
[Mygroup]
service_1 ansible_host=Server port_service=9990 service_name="service1"
service_2 ansible_host=Server port_service=9991 service_name="service2"
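With the aliases above, Ansible treats service_1 and service_2 as two distinct inventory hosts that happen to connect to the same machine via ansible_host, so a play over Mygroup runs once per line; a sketch:

```yaml
- hosts: Mygroup
  tasks:
    - name: Install one service per inventory alias
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }}: {{ service_name }} on port {{ port_service }}"
```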

Ansible retry get attempt number

I am using a task that connects via SSH to a device. Latency is not always constant, and sometimes, when the prompt is not displayed in time, the task fails.
Assuming it is possible to control the timeout value for this task, is it possible to dynamically increase the timeout in proportion to the attempt number?
Something like this
- name: task_name
  connection: local
  task_module:
    args...
    timeout: 10 * "{{ attempt_number }}"
  retries: 3
  delay: 2
  register: result
  until: result | success
I don't think it's possible to get the current attempt number while the task is running, and it's not clear what you're trying to achieve.
Can you elaborate a little more?
Yes, it's possible; here are the docs.
When you run a task with until and register the result as a variable, the registered variable will include a key called "attempts", which records the number of retries for the task.
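A sketch of what the answer describes (the host/port values are illustrative): after the until loop finishes, the registered variable carries an attempts key you can inspect:

```yaml
- name: Wait for SSH to come up, retrying until reachable
  ansible.builtin.wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    timeout: 10
  register: result
  retries: 3
  delay: 2
  until: result is succeeded
  delegate_to: localhost

- name: Show how many attempts the task needed
  ansible.builtin.debug:
    msg: "Succeeded after {{ result.attempts }} attempt(s)"
```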

How to model a multi db setup on one host in ansible? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Here is the setup
Each of the environments have one or many dbserves.
Each dbserver may, but need not, have databases running on different ports, e.g.:
server1:port1 -> db1
server1:port2 -> db2
server2:port1 -> db3
I want to insert configuration updates to the databases. The question is now how to model this setup in ansible:
I cannot go by host_vars, since the ports are not unique per host
since I do not know how many DB servers I have, I need a generic approach (so passing via -e or as a role var is not possible)
I cannot use nested loops (e.g. over the ports), since the ports differ
My workaround
I came up with a workaround to have groups as like:
dbservers_db1
dbservers_db2
That way I can do the following:
- include_tasks: db_config_update.yml
  with_items:
    - "{{ groups['dbservers_db1'] }}"
  loop_control:
    loop_var: host
- include_tasks: db_config_update.yml
  with_items:
    - "{{ groups['dbservers_db2'] }}"
  loop_control:
    loop_var: host
Note: this can not be done in ONE loop since ansible would detect the same host and process it only once.
But this workaround has its limitations:
I need a new group for every additional DB
it is not intuitive
How could this be modeled in a smarter way ?
Define, for each host, which DBs are running on it.
host_vars/host1.yml:
dbs:
  - db1
  - db2
host_vars/host2.yml:
dbs:
  - db3
  - db4
Then define a task for each host in which you iterate over the dbs of the host:
playbook.yml:
- include_tasks: db_config_update.yml
  with_items: "{{ dbs }}"
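Since the original problem was per-database ports, the same per-host list can carry dicts instead of bare names; a sketch (the key names and port numbers are illustrative):

```yaml
# host_vars/host1.yml (illustrative)
dbs:
  - name: db1
    port: 5001
  - name: db2
    port: 5002

# playbook.yml — one include per DB on the current host
- include_tasks: db_config_update.yml
  with_items: "{{ dbs }}"
  loop_control:
    loop_var: db
# inside db_config_update.yml, refer to db.name and db.port
```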

Using a synchronized counter variable ansible

My playbook creates multiple instances of an application in AWS. I want each instance to be tagged with a counter variable to maintain the count and ID of each instance (I do not want to use the instance ID or any other random ID). Since the provisioning happens in parallel, I am failing to get a consistent counter variable.
I have tried using a global variable in the play and incrementing it, but it always returns the initial value, since set_fact is executed once.
I have also tried putting a variable in a file, then reading and incrementing it for every host. This leads to a race condition and I see the same values for different hosts. Is there any way to do this?
Assuming that your ec2.ini file has
all_instances = True
to get stopped instances, they already ARE tagged, in a sense.
webserver[1] is always going to be the same host, until your inventory changes.
However, you can still tag your instances as you want, but if your inventory changes, it might be difficult to tag new instances with unique numbers.
- name: Loop over webserver instances and tag sequentially
  ec2_tag:
    state: present
    tags:
      myTag: "webserver{{ item }}"
    resource: "{{ hostvars[groups['webserver'][item|int]]['ec2_id'] }}"
  with_sequence: start=0 end="{{ groups['webserver']|length - 1 }}"
  delegate_to: localhost
N.B: item is a string, so we have to use [item|int] when pulling from the groups['webserver'] array.
