The way to use consul_kv changed between two versions - ansible

I need some help using the consul_kv module with ansible versions since 2.8.x. Maybe I missed something, but I took a look at the code of the module and I don't really see changes between 2.7.x and 2.8.x that can explain the problem I got.
With ansible 2.7.x, when I try to get a value from consul, I get the consul host, port and path from my env vars and I execute my code like this:
# group_vars/all
consul_path: "{{ lookup('env', 'ANSIBLE_CONSUL_PATH') }}"
consul_host: "{{ lookup('env', 'ANSIBLE_CONSUL_HOST') }}"
consul_port: "{{ lookup('env', 'ANSIBLE_CONSUL_PORT') }}"
- hosts: localhost
  tasks:
    - name: test ansible 2.8.5 with consul
      debug:
        msg: "{{ lookup('consul_kv', consul_path + 'path/to/value') }}"
It works on 2.7.0 and I got my value, but it doesn't work on 2.8.x; with those newer versions I need to specify host and port on every line that uses the lookup:
msg: "{{ lookup('consul_kv', 'path/to/value', host='myconsulhost.com', port='80') }}"
Is there a way to continue to use env vars in ansible 2.8.x with this module?

The fine manual says that the lookup now uses the $ANSIBLE_CONSUL_URL environment variable to determine the protocol, hostname, and port -- or (as you observed) the inline kwargs to the lookup function. Those group_vars you mentioned no longer seem to be consulted.
You'll also want to be careful, as your group_vars/all (at least in this question; unknown if you are really doing it) has a trailing space in consul_path :, which creates a variable named consul_path<space>
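Putting that together, a minimal sketch of the 2.8.x approach, assuming the controller exports ANSIBLE_CONSUL_URL (e.g. http://myconsulhost.com:80) before running ansible-playbook:
# group_vars/all -- the path can still come from an env var, as before
consul_path: "{{ lookup('env', 'ANSIBLE_CONSUL_PATH') }}"

# playbook.yml -- host/port/scheme now come from $ANSIBLE_CONSUL_URL
- hosts: localhost
  tasks:
    - name: test consul_kv via ANSIBLE_CONSUL_URL
      debug:
        msg: "{{ lookup('consul_kv', consul_path + 'path/to/value') }}"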

Related

how to set different python interpreters for local and remote hosts

Use-Case:
Playbook 1
when we first connect to remote host(s), the remote host will already have some python version installed - the auto-discovery feature will find it
now we install ansible-docker on the remote host
from this time on, the ansible-docker docs suggest using ansible_python_interpreter=/usr/bin/env python-docker
Playbook 2
We connect to the same host(s) again, but now we must use the /usr/bin/env python-docker python interpreter
What is the best way to do this?
Currently we set ansible_python_interpreter on the playbook level of Playbook 2:
---
- name: DaqMon app
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
This works, but it will also change the python interpreter of the local actions, and thus the local actions will fail, because python-docker does not exist locally.
The current workaround is to explicitly specify the ansible_python_interpreter on every local action, which is tedious and error-prone.
Questions:
the ideal solution would be to add '/usr/bin/env python-docker' as a fallback to interpreter_python_fallback - but I think this is not possible
is there a way to set the python interpreter only for the remote hosts - and keep the default for the localhost?
or is it possible to explicitly override the python interpreter for the local host?
You should set the ansible_python_interpreter on the host level.
So yes, it's possible to explicitly set the interpreter for localhost in your inventory.
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python
And I assume that you could also use set_fact on hostvars[<host>].ansible_python_interpreter on your localhost or docker host.
There is a brilliant article about set_fact on hostvars! ;-P
Thanks to the other useful answers I found an easy solution:
on the playbook level we set the python interpreter to /usr/bin/env python-docker
then we use a set_fact task to override the interpreter for localhost only
we must also delegate the facts
we can use the magic ansible_playbook_python variable, which refers to the python interpreter that was used on the (local) Ansible host to start the playbook: see Ansible docs
Here are the important parts at the start of Playbook 2:
---
- name: Playbook 2
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
  ...
  tasks:
    - set_fact:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
      delegate_to: localhost
      delegate_facts: true
Try to use set_fact for ansible_python_interpreter at host level in the first playbook.
Globally, use the interpreter_python key in the [defaults] section of the ansible.cfg file:
[defaults]
interpreter_python = auto_silent

Ansible - vars are not correctly propagated to handlers when role run in loop

I am asking for help with a problem of deploying multiple versions (different variables) of the app with the same role run from one playbook.
We have an app with multiple product families, which are different code versions. Each version has a separate uWSGI vassal config and Nginx virtualhost config (/api/v2, /api/v3, ...).
The desired state would be to run playbook and configure the server with all versions specified.
Sadly, ansible's import_role/import_tasks can't be used with with_items, so include_role/include_tasks must be used (a pity, because they do not honor role tags).
The include_role method would not be the biggest problem, but we use handlers to notify a uWSGI touch to reload - on a code change, link change, virtualenv change, app_config change, ...
But when using a loop (with_items), the variables passed from the loop do not correctly propagate to the handlers.
I tried these scenarios:
playbook.yml - with_items loop inside the playbook
PROBLEM: Handler is run only for the first iteration of the loop.
#!/usr/bin/env ansible-playbook
# Handler is run only once, from the first notifier
- hosts: localhost
  gather_facts: no
  vars:
    app_root: "/tmp/test_ansible"
    app_versions:
      - app_product_family: 1
        app_release: "v1.0.2"
      - app_product_family: 3
        app_release: "v4.0.7"
  tasks:
    - name: Deploy multiple versions of app
      include_role:
        name: app
      with_items: "{{ app_versions }}"
      loop_control:
        loop_var: app_version
      vars:
        app_product_family: "{{ app_version.app_product_family }}"
        app_release: "{{ app_version.app_release }}"
      tags:
        - app
        - app_debug
playbook_v2.yml - with_items loop inside role task
PROBLEM: Handler is run with the default value from "Defaults"
#!/usr/bin/env ansible-playbook
- hosts: localhost
  gather_facts: no
  roles:
    - app_v2
  vars:
    app_v2_root: "/tmp/test_ansible_v2"
    app_v2_versions:
      - app_v2_product_family: 1
        app_v2_release: "v1.0.2"
      - app_v2_product_family: 3
        app_v2_release: "v4.0.7"
Tasks file roles/app_v2/tasks/main.yml
---
# Workaround because import_tasks can't be run with_items
- include_tasks: deploy.yml
  when: app_v2_versions
  with_items: "{{ app_v2_versions }}"
  loop_control:
    loop_var: app_v2
  vars:
    app_v2_product_family: "{{ app_v2.app_v2_product_family }}"
    app_v2_release: "{{ app_v2.app_v2_release }}"
  tags:
    - app_v2
    - app_v2_deploy
...
One idea was to write a separate role for each product family, but they share nginx and uWSGI, so it would mean lots of copy-pasting and shared tasks (so tags would not work properly).
For now, I solved it with a shell script wrapper, but this is not an ideal solution and does not work from Ansible Tower.
Sample repo with tasks to reproduce problem (tested with ansible 2.4, 2.5, 2.6)
Any ideas & recommendations are very welcome.
The order of overrides for variables is broken for includes in Ansible. For example, even a set_fact in the included role will be shadowed by role defaults.
See this bug: https://github.com/ansible/ansible/issues/22025
It's closed but not fixed. My advice: use includes and variables really carefully.
In practice I never use role includes with a loop. If you need a loop, include a tasklist in the loop (and that tasklist, in turn, may import_role), as sketched below.
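A minimal sketch of that pattern, reusing the app_versions list from the first playbook; deploy_one.yml is a hypothetical wrapper tasklist:
# playbook.yml -- loop over a wrapper tasklist instead of the role
- hosts: localhost
  gather_facts: no
  tasks:
    - include_tasks: deploy_one.yml
      with_items: "{{ app_versions }}"
      loop_control:
        loop_var: app_version

# deploy_one.yml (hypothetical wrapper) -- statically imports the role per item
- import_role:
    name: app
  vars:
    app_product_family: "{{ app_version.app_product_family }}"
    app_release: "{{ app_version.app_release }}"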
OK, it is a bug, as George Shuklin posted.
I will use my shell wrapper, which reads the group_vars yaml and then runs the playbook multiple times according to the length of the variable list.
Sadly I hit multiple annoying bugs in ansible in the last few weeks, kinda losing my trust in it ):
And probably everybody is using microservices and kubernetes, so we need to speed up our migration (:

Refer to ansible inventory ip address [duplicate]

I'm setting up an Ansible playbook to set up a couple servers. There are a couple of tasks that I only want to run if the current host is my local dev host, named "local" in my hosts file. How can I do this? I can't find it anywhere in the documentation.
I've tried this when statement, but it fails because ansible_hostname resolves to the host name generated when the machine is created, not the one you define in your hosts file.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: ansible_hostname == "local"
The necessary variable is inventory_hostname.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: inventory_hostname == "local"
It is somewhat hidden in the documentation at the bottom of this section.
You can limit the scope of a playbook by changing the hosts header in its plays, without relying on your special host label 'local' in your inventory. Localhost does not need a special line in inventories.
- name: run on all except local
  hosts: all:!local
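Conversely, a play can target just the control machine (a minimal sketch, reusing the pip task from the question):
- name: run only on the local dev machine
  hosts: localhost
  connection: local  # run directly on the control machine, no SSH
  tasks:
    - pip:
        name: pyramid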
This is an alternative:
- name: Install this only for local dev machine
  pip: name=pyramid
  delegate_to: localhost

How to catch exceptions for ansible credstash lookup plugin with key not found?

I have the following code
app_key: "{{ lookup('credstash', 'aws/project/'+app_name+'/'+app_env+'/app_key') | default('not-set') }}"
I was expecting to be able to set a default value when the lookup fails with a key not found, and then generate and store a key later in my playbook.
However, I found that the plugin raises an exception, which makes the entire playbook run fail. Obviously this is not what I was looking for (pre-storing app keys for non-production branches).
You can find the credstash plugin code here: https://github.com/ansible/ansible/blob/ec701c4b82e570371af7c3999ffb587d870a5b37/lib/ansible/plugins/lookup/credstash.py
What are my options?
What are my options?
For example (I don't know what task you define app_key in, so I use set_fact in the second task below):
- set_fact:
    app_key_candidate: "{{ lookup('credstash', 'aws/project/' + app_name + '/' + app_env + '/app_key') }}"
  ignore_errors: true
- set_fact:
    app_key: "{{ app_key_candidate | default('not-set') }}"

Ansible - Unable to run certain JUNOS modules

I'm trying to run the Ansible modules junos_cli and junos_rollback and I get the following error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/home/quake/network-ansible/roles/junos-rollback/tasks/main.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: I've made a huge mistake
^ here
This is the role in question:
---
- name: I've made a huge mistake
  junos_rollback:
    host={{ inventory_hostname }}
    user=ansible
    comment={{ comment }}
    confirm={{ confirm }}
    rollback={{ rollback }}
    logfile={{ playbook_dir }}/library/logs/rollback.log
    diffs_file={{ playbook_dir }}/configs/{{ inventory_hostname }}
Here is the Juniper page:
http://junos-ansible-modules.readthedocs.io/en/1.3.1/junos_rollback.html
Their example's syntax is a little odd: host uses a colon while the rest use = signs. I've tried mixing both and using only one or the other. I keep getting errors.
I also confirmed that my junos-eznc version is higher than 1.2.2 (I have 2.0.1).
I've been able to use junos_cli before; I don't know if a version mismatch happened. On the official Ansible documentation, there is no mention of junos_cli or junos_rollback. Perhaps they're not supported anymore?
http://docs.ansible.com/ansible/list_of_network_modules.html#junos
Thanks,
junos_cli & junos_rollback are part of Galaxy and not core modules. You can find them at
https://galaxy.ansible.com/Juniper/junos/
Is the content posted here the whole content of your playbook? If yes, you need to define other items in your playbook too, such as roles and connection: local. For an example,
refer to https://github.com/Juniper/ansible-junos-stdlib#example-playbook
```
---
- name: rollback example
  hosts: all
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  tasks:
    - name: I've made a huge mistake
      junos_rollback:
        host = {{inventory_hostname}}
        ----
        ----
```
Where have you saved the content of the juniper.junos modules? Can you post the content of your playbook and the output of the tree command to see your file structure? That could help.
I had a similar problem where Ansible was not finding my modules; what I did was copy the juniper.junos folder into my roles folder and then add a tasks folder within it to execute the main.yaml from there.
Something like this:
/Users/macuared/Ansible_projects/roles/Juniper.junos/tasks
---
- name: "TEST 1 - Gather Facts"
  junos_get_facts:
    host: "{{ inventory_hostname }}"
    user: "uuuuu"
    passwd: "yyyyyy"
    savedir: "/Users/macuared/Ansible_projects/Ouput/Facts"
  ignore_errors: True
  register: junos

- name: Checking Device Version
  debug: msg="{{ junos.facts.serialnumber }}"
Additionally, I would add quotes ("") around the string values in your YAML. Something like this:
---
- name: I've made a huge mistake
  junos_rollback:
    host="{{ inventory_hostname }}"
    user=ansible
    comment="{{ comment }}"
    confirm={{ confirm }}
    rollback={{ rollback }}
    logfile="{{ playbook_dir }}/library/logs/rollback.log"
    diffs_file="{{ playbook_dir }}/configs/{{ inventory_hostname }}"
Regarding "I've tried mixing both and only using one or the other. I keep getting errors.": I've used just colons and mine works fine, even though the documentation suggests = signs. See junos_get_facts above. A colon-style version of your task is sketched below.
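For illustration, the same rollback task rewritten with colon syntax throughout (a sketch; parameters copied from the question):
- name: I've made a huge mistake
  junos_rollback:
    host: "{{ inventory_hostname }}"
    user: ansible
    comment: "{{ comment }}"
    confirm: "{{ confirm }}"
    rollback: "{{ rollback }}"
    logfile: "{{ playbook_dir }}/library/logs/rollback.log"
    diffs_file: "{{ playbook_dir }}/configs/{{ inventory_hostname }}"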
