The Consul lookup plugin has the following code. I would like to override the default agent URL by setting the ANSIBLE_CONSUL_URL environment variable, but I can't seem to get my task to use it. The task runs fine if I set the environment variable manually before running the task.
self.agent_url = 'http://localhost:8500'
if os.getenv('ANSIBLE_CONSUL_URL') is not None:
    self.agent_url = os.environ['ANSIBLE_CONSUL_URL']
My task:
- name: Build list of modules and its associated tag
  environment:
    ANSIBLE_CONSUL_URL: "http://indeploy001.local.domain:8500"
  set_fact: deploy_list="{{ item | replace("deploys/" + environment_id + "/",'') }}-{{ lookup('consul_kv','deploys/' + environment_id + '/' + item) }}"
  with_items: "{{ modules }}"
  register: deploy_list_result
Error
TASK: [Build list of modules and its associated tag] **************************
fatal: [127.0.0.1] => Failed to template deploy_list="{{ item | replace("deploys/" + environment_id + "/",'') }}-{{ lookup('consul_kv','deploys/' + environment_id + '/' + item) }}": Error locating 'deploys/sf/ariaserver' in kv store. Error was HTTPConnectionPool(host='localhost', port=8500): Max retries exceeded with url: /v1/kv/deploys/sf/ariaserver (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x1eac410>: Failed to establish a new connection: [Errno 111] Connection refused',))
FATAL: all hosts have already failed -- aborting
Lookups are a templating feature. The keys of a task are rendered to resolve any Jinja2 expressions before the task is executed, which means your lookup runs before the task itself does. The environment keyword is simply a task property that takes effect when the task is executed, but you never get that far.
Also, I'm not 100% sure what would happen with environment in this special case of a set_fact task. set_fact is an action plugin and runs on the control machine, as do lookups, but the task itself is not delegated to localhost, so the env var might end up set on the control machine or on the remote machine. My guess is that it is not set at all, neither locally nor remotely: many action plugins later call a module of the same name, which is then executed on the target host, and the behavior a user would expect is that the env var is set on the remote host by that module. That is just a guess, though, and it is moot given the first paragraph.
Conclusion:
If a lookup plugin depends on environment variables, you cannot set them in the same task. They need to be set beforehand, which is what you already found to be working.
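For example (a minimal sketch; the playbook and inventory names here are placeholders, not taken from the question), exporting the variable in the shell before invoking ansible-playbook is enough, because the lookup plugin reads it from the controller's environment:

export ANSIBLE_CONSUL_URL="http://indeploy001.local.domain:8500"
ansible-playbook -i inventory deploy.yml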
I am working on a project using Ansible AWX. In this project I have a check to see whether all my microk8s pods are in the Running state. Before I installed AWX I was testing all my plays on my Linux VM. The following play worked fine on my Linux VM but does not seem to work in Ansible AWX.
- name: Wait for pods to become running
  hosts: host_company_01
  tasks:
    - name: wait for pod status
      shell: kubectl get pods -o json
      register: kubectl_get_pods
      until: kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == ["Running"]
      timeout: 600
AWX gives me the following response:
[WARNING]: an unexpected error occurred during Jinja2 environment setup: unable
to locate collection community.general
fatal: [host_company_01]: FAILED! => {"msg": "The conditional check 'kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == [\"Running\"]' failed. The error was: template error while templating string: unable to locate collection community.general. String: {% if kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == [\"Running\"] %} True {% else %} False {% endif %}"}
I have looked at the error message and tried several possible solutions, but without any effect.
The first thing I tried was looking for the community.general collection in the ansible-galaxy collection list output. After I saw that it was there, I installed it once again, but this time with sudo. The collection was installed, but after running my workflow template the error message popped up again.
The second thing was trying different execution environments. I thought this was not going to make any difference, but tried it anyway since someone online had fixed a similar issue by changing the EE.
The last thing I tried was finding a way around this play by building a new play with different commands. Sadly, I was not able to build another play that did what the original one did.
Since I cannot build a play that fits my needs, I came back to the error message to try and fix it.
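One thing that may be worth checking (this is only a suggestion based on how AWX resolves collections, not something confirmed above): collections installed with ansible-galaxy on the underlying host are not visible inside an execution environment, but a project can declare its own collection dependencies in a collections/requirements.yml file at the project root, which AWX installs during project sync. A minimal sketch:

# collections/requirements.yml at the root of the AWX project (path shown for illustration)
collections:
  - name: community.general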
I'm trying to deploy a multi-node OpenStack infrastructure with kolla-ansible on Ubuntu 20.04.4.
ansible 4.10.0
ansible-core 2.11.11
kolla-ansible 13.0.1
When I execute the command:
kolla-ansible -i /etc/kolla/multinode pull
or
kolla-ansible -i /etc/kolla/multinode deploy
I'm getting this error:
fatal: [controller]: FAILED! => {"msg": "template error while templating string: No filter named 'select_services_enabled_and_mapped_to_host'.. String: {{ lookup('vars', (kolla_role_name | default(project_name)) + '_services') | select_services_enabled_and_mapped_to_host }}"}
fatal: [compute01]: FAILED! => {"msg": "template error while templating string: No filter named 'select_services_enabled_and_mapped_to_host'.. String: {{ lookup('vars', (kolla_role_name | default(project_name)) + '_services') | select_services_enabled_and_mapped_to_host }}"}
I have 1 deployment node, 1 controller, and 1 compute node.
This doesn't seem like a pull issue based on the error message; it looks like another configuration problem with your Neutron. The default multinode config requires 3 controllers. I don't know your exact multinode config for kolla-ansible, but when you only want 1 controller and 1 compute node you can do something different:
Use the all-in-one file for your setup. It should be at kolla-ansible/ansible/inventory/all-in-one. Modify this file by replacing localhost in the [compute] section with the name of the host where the compute node should be created, and remove the ansible_connection=local there, as sketched below. This has always worked for me when I created a test deployment with 1 controller and X compute nodes.
Only if you want the controller somewhere other than the host where you execute kolla-ansible, replace all the other occurrences of localhost with the name of your controller node and, of course, remove ansible_connection=local everywhere.
Just to be sure: don't forget to write the name of each host with its IP address into the /etc/hosts file on the node where you execute the kolla-ansible run.
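A rough sketch of that inventory edit (compute01 is just an example hostname taken from the error output; the rest of the all-in-one file stays at its defaults):

# [compute] section of the all-in-one inventory, before the edit
[compute]
localhost ansible_connection=local

# after the edit
[compute]
compute01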
Firstly, what's the result of the kolla-ansible prechecks -i multinode_or_all-in-one command?
Dive into the select_services_enabled_and_mapped_to_host filter of the kolla-ansible project and you will find some clues. Its return value depends on service_enabled and service_mapped_to_host.
So, there are two steps to clear up your problem:
1. Check the service enable settings in /etc/kolla/globals.yml and kolla-ansible/ansible/group_vars/all.yml (see the globals.yml sketch below).
You can use this file (globals.yml) to override any variable throughout Kolla. Additional options can be found in the kolla-ansible/ansible/group_vars/all.yml file.
2. Check whether the host inventory in the multinode or all-in-one file is consistent with the real information.
If everything there is OK, I think you should redeploy it, referring to the kolla-ansible quickstart.
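For reference, the relevant part of /etc/kolla/globals.yml usually looks something like this (the values shown are illustrative only and need to match your own environment):

# /etc/kolla/globals.yml (excerpt, example values)
kolla_internal_vip_address: "192.168.1.250"
network_interface: "eth0"
neutron_external_interface: "eth1"
enable_neutron: "yes"    # enable_* flags like this feed into select_services_enabled_and_mapped_to_host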
In one of my playbooks I start a service and poll for its status before moving on to the next task, like the following:
- name: Poll for service status
  uri:
    url: http://some-service/status
  register: response
  until: response.status == 200
  retries: 12
  delay: 10
Each time it queries the URL, this logs a message which looks like:
FAILED - RETRYING: TASK: Poll for service status
Is there a way to customise this message? Specifically remove the word FAILED.
After grepping around, I found the "FAILED - RETRYING" message in the default output callback (read about callbacks here). That means you can change the callback in ansible.cfg to something that suits your needs, or make your own. You can even search the code for v2_runner_retry to see the various outputs.
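For example, switching the stdout callback is a one-line change in ansible.cfg (oneline is one of the callbacks that ships with Ansible; any other stdout callback can be set the same way):

[defaults]
stdout_callback = oneline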
For instance, here's what stdout_callback=oneline returns. There are no "retrying" messages even at -vv. It still says "FAILED" but that's because it actually failed.
ansible-playbook -vvi localhost, p.yml
ansible-playbook 2.4.1.0
config file = /opt/app/ansible.cfg
configured module search path = ['/opt/app/library']
ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.5.2 (default, Sep 14 2017, 22:51:06) [GCC 5.4.0 20160609]
Using /opt/app/ansible.cfg as config file
1 plays in p.yml
META: ran handlers
localhost | FAILED! => {"attempts": 3,"changed": false,"content": "","failed": true,"msg": "Status code was not [200]: Request failed: <urlopen error [Errno -2] Name or service not known>","redirected": false,"status": -1,"url": "http://some-service/status"}
Aside from setting it in ansible.cfg, the documentation implies it can be done in a role. I have no idea how.
As @techraf said, the Ansible folks are pretty good at reviewing pull requests.
Very frequently, when I'm making changes to an Ansible playbook (or role), I am working with something like this:
tasks:
  ...
  - name: task x
    notify: handler a
  ...
  - name: task y
  ...
handlers:
  ...
  - name: handler a
  ...
Now, during development, task x succeeds and performs a change, but task y fails because there's an error in it, so handler a never gets to run.
Later on I manage to fix task y and it performs its change, but since task x was already changed in the previous run, it now reports no change and handler a still doesn't run.
I guess I should have run it with --force-handlers, but now that the deal is done, what is the correct way to force handler a to run?
Yes, you can force the handler to run by adding the following after task x:
- meta: flush_handlers
Of course, now you are in a predicament: you are relying on a changed status for the handler to run, so this isn't much good to you.
You could add changed_when: true to task x to get the handler to run, and then revert it afterwards. There is no correct way to get it to run, because it relies on the changed status of the notifying task (see the sketch below).
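Putting both suggestions together, a minimal sketch (the task and handler names come from the question; the commands and the temporary changed_when: true line are placeholders you would adapt and later revert):

tasks:
  - name: task x
    command: /usr/bin/do-something          # placeholder for whatever task x really does
    changed_when: true                      # temporary: force a change so that handler a gets notified
    notify: handler a

  - meta: flush_handlers                    # run any pending handlers right here, before task y

  - name: task y
    command: /usr/bin/do-something-else     # placeholder

handlers:
  - name: handler a
    command: /usr/bin/handle-it             # placeholder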
I have an issue with a really basic module which doesn't seem to work the way it is supposed to, I guess?
I actually copied the part out of the documentation.
My playbook looks like this:
---
- hosts: all
  tasks:
    - name: Create Snapshot
      vmware_guest_snapshot:
        hostname: vSphereIP
        username: vSphereUsername
        password: vSpherePassword
        name: vmname
        state: present
        snapshot_name: aSnapshotName
        description: aSnapshotDescription
I run this playbook from Ansible Tower and it throws "ERROR! no action detected in task". It looks like a syntax error to me, but I literally copied it over from the documentation, and other modules are working with the same syntax.
So does anyone know what I'm doing wrong?
The vmware_guest_snapshot module is only available since version 2.3 of Ansible (which has not yet been released).
If you are running any older version, the module name will not be recognised and Ansible will report the error no action detected in task.
Currently you need to be running Ansible from source (the devel branch) to run the vmware_guest_snapshot module.
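For completeness, running from the devel branch roughly looks like this (a sketch following the usual running-from-source steps; adjust paths to your environment):

git clone --recursive https://github.com/ansible/ansible.git
cd ansible
source ./hacking/env-setup    # makes the devel checkout's ansible-playbook available in this shell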