I need to know the hostname or IP of the machine where Ansible was invoked, not the ones from the inventory.
What is the easiest way to access this? Please note that I need to use this variable in tasks that are executed on various hosts.
Note that none of these are valid answers:
- ansible_hostname
- inventory_hostname
Use lookups; they are always executed on localhost (the control machine).
For example: {{ lookup('pipe', 'hostname') }}
If you use this value extensively, it is better to do set_fact first; otherwise the lookup command will be executed every time it is referenced.
When using it in set_fact, quote the expression appropriately. I had an issue with the quotes, and the following worked for me:
controller_host: "{{ lookup('pipe', 'hostname') }}"
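Putting it together, a minimal sketch (the play layout and task names are illustrative only): cache the lookup result once with set_fact, then reference the fact in tasks that run on the managed hosts.

- hosts: all
  tasks:
    # The pipe lookup always runs on the control machine, so every host
    # records the controller's hostname rather than its own.
    - name: Record the controller hostname
      set_fact:
        controller_host: "{{ lookup('pipe', 'hostname') }}"

    - name: Use it in a task executed on a managed host
      debug:
        msg: "This play was launched from {{ controller_host }}"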
I'm calling an AWX template from ManageIQ. I'm passing 9 variables to the playbook (with prompt on launch active). The playbook is successfully called, and all of the vars come through. However, two of the vars are supposed to be arrays. Instead they come through to AWX as strings: e.g., '["chefclient"]' instead of ["chefclient"].
I have confirmed that these vars are indeed of type array in ManageIQ before I pass them to the AWX template.
Any clue why this is happening? Do all vars get converted to strings? How do I fix this?
Thank you!
According to the RedHat developers on Gitter.im, this is a shortcoming in the launch_ansible_method in ManageIQ. I.e., it always converts arrays to strings. We have opened an issue on GitHub to address this.
I have had a variable in Ansible Tower/AWX that takes input as text, with server names as an array/list, for example ["node1","node2","node3"]. Once the job is launched, I can see the variable in the extra variables as '["node1","node2","node3"]'. I'm not sure why it does that, but it doesn't affect your subsequent Ansible operations on that variable. Not all variables get single quotes, only the ones where you use an array/list.
I have tried to replicate this on my end with AWX installed locally. I passed the v_packages variable data as ["apache2","nginx"], and I don't see that issue now.
I am new to Ansible and trying to figure out how I can call one playbook from another playbook in a loop. I also want to consume the output back in the master playbook. I'm not sure whether this is possible in Ansible.
Below is a pseudocode stub borrowed from other programming languages.
masterplaybook.yml (from where I want to invoke the audit playbook):
for devicePair in devicePairList:
    output = auditdevice.yml -e "d1=devicePair.A d2=devicePair.B"
    save/process output
The auditdevice.yml playbook uses d1 and d2 as the hosts on which it performs the auditing, running commands, etc. It performs the audit on a dynamic inventory passed in as an argument.
Is it possible to achieve the above using Ansible? If yes, can someone point to an example?
Q: "How can I call one playbook from another playbook in the loop?"
A: It is not possible. Quoting from import_playbook:
"You cannot use this action inside a play."
See the example.
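For instance, import_playbook is only valid at the top level of a playbook, outside any play, and it does not accept loops. A hedged sketch of what is allowed, using the file and variable names from the question:

# master.yml - imports are resolved statically at parse time, one entry per
# device pair; you cannot loop over import_playbook or call it inside a play.
- import_playbook: auditdevice.yml
  vars:
    d1: device1a
    d2: device1b

- import_playbook: auditdevice.yml
  vars:
    d1: device2a
    d2: device2b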
FWIW, ansible-runner is able to control playbooks within projects, similar to AWX. See the example.
I have an ansible playbook that creates a network object and sets ACL policies. It's working well, but I would like to create the complementary playbook to remove the object and its associated config but I don't know the correct way to approach the task.
I could just use asa_command to issue the 'no' prefix for the appropriate lines; however, that doesn't feel like the "Ansible Way", since it would try to execute the commands even if they were already absent from the config.
I have seen that some modules have a state: absent option. However, the asa_ modules don't list that as an option.
Any suggestions would be much appreciated.
I think having a state: absent option makes a lot of sense, as I don't think there is a simple way of doing this more efficiently with the current asa_ modules. The Ansible team is extremely responsive to issues and PRs, so I would submit one for this feature.
It looks like there isn't a clean way to do this as of Ansible 2.4. I have a working playbook; however, I had to settle for issuing the no commands using asa_config and putting ignore_errors: yes in for each play. It's inelegant to say the least and in some cases can break down. I think there may be a way to use error handling along with check_mode: yes. My initial attempt at this failed because, when registering the result of a play to a variable, I cannot use that variable to work out which of the affected hosts actually required a change; it's just a generic yes/no for the entire play.
What I'm doing currently:
- name: Remove Network Object
  asa_config:
    commands:
      - no object network {{ object_name }}
    provider: "{{ cli }}"
  ignore_errors: yes
  register: dno
(I'm currently running Ansible 2.1)
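One way to make the removal more idempotent is to check for the object first and only issue the no command when it is present. This is only a sketch under the assumption that the show output can be searched for the object name; the exact command syntax may need adjusting for your ASA version, and obj_check is just a made-up register name.

- name: Check whether the network object exists
  asa_command:
    commands:
      - show running-config object
    provider: "{{ cli }}"
  register: obj_check
  changed_when: false

- name: Remove Network Object only if present
  asa_config:
    commands:
      - no object network {{ object_name }}
    provider: "{{ cli }}"
  when: object_name in obj_check.stdout[0]
  register: dno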
I have a playbook that gathers a list of elements, and I have another playbook (that calls different hosts and whatnot) using said elements as the basis for most operations. Therefore, whenever I use with_items over the playbook include, it causes an error.
The loop control section of the docs says that "In 2.0 you are again able to use with_ loops and task includes (but not playbook includes)". Is there a workaround? I really need to be able to call multiple hosts in an included playbook that runs over a set of entries. Any workarounds, ideas, or anything else are greatly appreciated!
P.S. I could technically use command: ansible-playbook, but I don't want to go down that rabbit hole unless necessary.
I think I faced the same issues, and by the way, migrating showed more errors, such as 'item' already in use.
Referring to http://docs.ansible.com/ansible/playbooks_best_practices.html, you should have an inventory (that contains all your hosts) and a master playbook (even if theoretical).
A good way, instead of including playbooks, is to design roles, even if they are empty. Try to find a "common" role for everything that could be applied to most of your hosts. Then include additional roles depending on usage; this will let you trigger on the correct hosts.
You can also have roles that do nothing (meaning, nothing in 'tasks') but that contain a set of variables common to two roles (you then avoid duplicate entries).
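For illustration, a minimal sketch of such a layout (role and group names are just placeholders): a master playbook applies a common role everywhere and adds further roles per host group, instead of including playbooks.

# site.yml
- hosts: all
  roles:
    - common            # everything that applies to most hosts

- hosts: webservers
  roles:
    - shared_vars       # "empty" role that only carries common variables
    - web

- hosts: dbservers
  roles:
    - shared_vars
    - db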
When creating a new Ansible role, the template creates both a vars and a defaults directory with an empty main.yml file. When defining my role, I can place variable definitions in either of these, and they will be available in my tasks.
What's the difference between putting the definitions into defaults and vars? What should go into defaults, and what should go into vars? Does it make sense to use both for the same data?
I know that there's a difference in precedence/priority between the two, but I would like to understand what should go where.
Let's say that my role would create a list of directories on the target system. I would like to provide a list of default directories to be created, but would like to allow the user to override them when using the role.
Here's what this would look like:
---
directories:
  - foo
  - bar
  - baz
I could place this either into defaults/main.yml or into vars/main.yml; from an execution perspective, it wouldn't make any difference. But where should it go?
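For reference, this is how a user of the role would override the list if it lived in defaults/main.yml (a sketch; the role name myrole is just a placeholder):

- hosts: all
  roles:
    - role: myrole
      vars:
        directories:
          - custom_one
          - custom_two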
The Ansible documentation on variable precedence summarizes this nicely:
If multiple variables of the same name are defined in different places, they win in a certain order, which is:
extra vars (-e in the command line) always win
then comes connection variables defined in inventory (ansible_ssh_user, etc)
then comes "most everything else" (command line switches, vars in play, included vars, role vars, etc)
then comes the rest of the variables defined in inventory
then comes facts discovered about a system
then "role defaults", which are the most "defaulty" and lose in priority to everything.
So suppose you have a "tomcat" role that you use to install Tomcat on a bunch of webhosts, but you need different versions of tomcat on a couple hosts, need it to run as different users in other cases, etc. The defaults/main.yml file might look something like this:
tomcat_version: 7.0.56
tomcat_user: tomcat
Since those are just default values, they'll be used if those variables aren't defined anywhere else for the host in question. You could override these via extra vars, via host or group variables in your inventory, etc., to specify different values for these variables.
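For example, a group_vars file could pin a different version for one group of hosts (the file path and values here are illustrative):

# group_vars/legacy_webservers.yml
tomcat_version: 6.0.44
tomcat_user: tomcat6

Or, with the highest precedence, on the command line: ansible-playbook site.yml -e tomcat_version=8.0.23.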
Edit: Note that the above list is for Ansible 1.x. In Ansible 2.x the list has been expanded on. As always, the Ansible Documentation provides a detailed description of variable precedence for 2.x.
Role variables defined in vars have a very high precedence: they can only be overwritten by passing them on the command line, in the specific task, or in a block. Therefore, almost all of your variables should be defined in defaults.
In the article "Variable Precedence - Where To Put Your Role Vars" the author gives one example of what to put in vars: System-specific constants that don't change much. So you can have vars/debian.yml and vars/centos.yml with the same variable names but different values and include them conditionally.
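A sketch of that pattern inside a role's tasks (the file names vars/debian.yml and vars/centos.yml are from the example above; ansible_distribution is a gathered fact, so gather_facts must be enabled):

# roles/myrole/tasks/main.yml
- name: Load OS-specific constants
  # Resolves to debian.yml or centos.yml inside the role's vars/ directory.
  include_vars: "{{ ansible_distribution | lower }}.yml"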
IMHO it is impractical and not sensible that Ansible places such a high priority on configuration in the vars of roles. Configuration in vars/main.yml and defaults/main.yml should both have low, and probably the same, priority.
Are there any real life examples of cases where we want this type of behavior?
There are examples where we don't want this.
The point to make here is that configuration in defaults/main.yml cannot be dynamic. Configuration in vars/main.yml can. So for example you can include configuration for specific OS and version dynamically as shown in geerlingguy.postgresql
But because precedence is so strange and impractical in Ansible, geerlingguy needs to introduce pseudo-variables, as can be seen in variables.yml:
- name: Define postgresql_packages.
  set_fact:
    postgresql_packages: "{{ __postgresql_packages | list }}"
  when: postgresql_packages is not defined
This is a concrete real life example that demonstrates that the precedence is impractical.
Another point to make here is that we want roles to be configurable. Roles can be external, managed by someone else. As a general rule you don't want configuration in roles to have high priority.
Basically, anything that goes into "role defaults" (the defaults folder inside the role) is the most malleable and easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. The idea to follow here is that the more explicit you get in scope, the more precedence it takes, with command-line -e extra vars always winning. Host and/or inventory variables can win over role defaults, but not explicit includes like the vars directory or an include_vars task.
See the Ansible documentation on variable precedence for details.
Variables and defaults walk hand in hand. Here's an example:
- name: install package
  yum: name=xyz{{ package_version }} state=present
In your defaults file you would have something like:
package_version: 123
What Ansible will do is take the value of package_version and put it next to the package name, so the task will effectively read as:
- name: install package
  yum: name=xyz123 state=present
This way it will install xyz123 and not xyz123.4 or whatever is in the great repository of xyz's.
In the end it will run yum install -y xyz123.
So basically the defaults are the values used if you do not set a specific value for the variables, because that space can't stay empty.