I'm trying to separate my vars from the playbook file.
My vars use a YAML anchor/alias to make things easier.
Example of my playbook.yaml:
---
- hosts: localhost
  name: playbook name
  vars:
    login: &login
      username: "{{ login.user }}"
      password: "{{ login.pass }}"
  vars_files:
    - creds.yaml
  tasks:
    - name: using creds
      some_module:
        some_command: true
        <<: *login
How can I move the "vars" section to a file outside the playbook and keep the same template inside "tasks"?
One way to achieve that is to create a proper inventory and use inventory variables (see the Ansible inventory docs).
Create this schema tree:
hosts/            (directory)
├── group_vars/   (directory)
├── host_vars/    (directory)
├── hosts-groups/ (directory)
└── hosts-all     (file)
Then either you define your hosts individually in hosts-all:
localhost ansible_ssh_host=x.x.x.x {{ potentially any other needed argument/variable }}
Or you group hosts and define the group in hosts/hosts-groups: create a file with the desired group name, e.g. hosts/hosts-groups/local, and inside it define your host group:
[local]
localhost
Then, in host_vars or group_vars (depending on your first step), create a file named after the host or group, and inside it write your variables. Ex (by group): create hosts/group_vars/local and inside it write your variables:
login:
  user: user1
  pass: pass1
Ansible will then automatically pick up the right vars for each host according to your inventory structure.
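Put together (a sketch; note that YAML anchors and aliases cannot cross file boundaries, so the <<: *login merge cannot come from another file; reference the variables directly instead):
- hosts: local
  name: playbook name
  tasks:
    - name: using creds
      some_module:
        some_command: true
        username: "{{ login.user }}"
        password: "{{ login.pass }}"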
I want to include variables from a file (say vars.yml) and use them inside group_vars/all.yml.
How can this be achieved?
I tried (in all.yml):
include_vars:
  file: ../inventories/dev/cluster-0/group_vars/vars.yml
  name: x

ansible_user: "{{ x.username }}"
include_vars is an Ansible task, and can only appear in a playbook or task list. A variable file can only contain variables; there's no "include"-style mechanism.
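As a task it would look like this (a sketch, reusing the paths from the question):
- hosts: all
  tasks:
    - name: Load extra vars from a file
      include_vars:
        file: ../inventories/dev/cluster-0/group_vars/vars.yml
        name: x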
Depending on what exactly you're trying to do, you can turn all.yml into a directory (group_vars/all/) and then place multiple files in that directory:
group_vars/all/file1.yaml
group_vars/all/file2.yaml
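Since Jinja2 templating of variables is evaluated lazily, a variable in one file can reference a variable defined in another (a sketch; the file names, the x variable, and the deploy value are just for illustration):
group_vars/all/file1.yaml:
x:
  username: deploy

group_vars/all/file2.yaml:
ansible_user: "{{ x.username }}"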
I have a simple Ansible dynamic inventory for AWS servers that looks like this.
---
plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Name
hostnames:
  # A list in order of precedence for hostname variables.
  - ip-address
compose:
  ansible_host: _Applications_
  ansible_user: "'ubuntu'"
This works fine, except that I also have another instance that's Red Hat. That means that when I try a simple ping command on all the hosts, it fails, because the username ubuntu is only valid on one of the servers.
Is there a way to group my inventory so that I can set the ec2-user username for a specific group, maybe based on its tag or something else? I could do this easily with my static inventory, but I'm not sure how to do this with a dynamic inventory.
I've tried setting my Ansible inventory as an environment variable:
export ANSIBLE_INVENTORY=~/Users/inventory
and placed my aws_ec2.yaml in the inventory directory, along with a group_vars directory containing my different groups, with a default username in each of them:
username: ubuntu
username: ec2-user
and then setting compose in my inventory file like this:
compose:
  ansible_user: "{{ username }}"
But when Ansible tries to connect, it's using an admin username and not what's set in my group vars.
Is there a way to set the different usernames needed to connect to the different type of servers?
Per the example for the constructed plugin, you can use the keyed_groups feature to create groups by ansible_distribution:
keyed_groups:
  # This creates a group per distro (distro_CentOS, distro_Debian) and assigns
  # the hosts that have matching values to it, using the default separator "_".
  - prefix: distro
    key: ansible_distribution
And then set ansible_user inside group_vars/distro_Ubuntu.yaml and group_vars/distro_RedHat.yaml.
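For example (a sketch; the exact file names depend on the group names the keyed_groups setting produces):
group_vars/distro_Ubuntu.yaml:
ansible_user: ubuntu

group_vars/distro_RedHat.yaml:
ansible_user: ec2-user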
Also from the documentation, this requires fact caching to operate (because otherwise Ansible doesn't know the value of ansible_distribution at the time it's processing the keyed_groups setting).
I don't have access to AWS at the moment, but here's how I'm testing everything locally. Given an inventory that looks like:
$ tree inventory
inventory/
├── 00-hosts.yaml
└── 10-constructed.yaml
Where inventory/00-hosts.yaml looks like:
all:
  hosts:
    host0:
      ansible_host: localhost
And inventory/10-constructed.yaml looks like:
plugin: constructed
strict: false
groups:
  ipmi_hosts: ipmi_host|default(false)
keyed_groups:
  - prefix: "distro"
    key: ansible_distribution
And ansible.cfg looks like:
[defaults]
inventory = inventory
enable_plugins = constructed
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./.facts
The first time I run this playbook:
- hosts: all
  gather_facts: true
  tasks:
    - debug:
        var: group_names
The output of the debug task is:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [host0] => {
    "group_names": [
        "ungrouped"
    ]
}
But because of the fact gathering and caching performed by the previous playbook run, the second time I run it the output is:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [host0] => {
    "group_names": [
        "distro_Fedora"
    ]
}
Similarly, before the first playbook run ansible-inventory --graph outputs:
@all:
  |--@ungrouped:
  |  |--host0
But after running the playbook once, I get:
@all:
  |--@distro_Fedora:
  |  |--host0
  |--@ungrouped:
I've bundled this all into an example repository.
I'm working on putting together a playbook that will deploy local facts scripts to various groups in my Ansible inventory, and I would like to be able to use the group name being worked on as a variable in the tasks themselves. Assume for this example that I have the traditional Ansible roles directory structure on my Ansible machine, and I have subdirectories under the "files" directory called "apache", "web", and "db". I'll now illustrate by example, ...
---
- hosts: apache:web:db
  tasks:
    - name: Set facts for facts directories
      set_fact:
        facts_dir_local: "files/{{ group_name }}"
        facts_dir_remote: "/etc/ansible/facts.d"

    - name: Deploy local facts
      copy:
        src: "{{ item }}"
        dest: "{{ facts_dir_remote }}"
        owner: ansible
        group: ansible
        mode: 0750
      with_fileglob:
        - "{{ facts_dir_local }}/*.fact"
The goal is to have {{ group_name }} above take on the value of "apache" for the hosts in the apache group, "web" for the hosts in the web group, and "db" for the hosts in the db group. This way I don't have to copy and paste this task and assign custom variables for each group. Any suggestions for how to accomplish this would be greatly appreciated.
While there is no group_name variable in Ansible, there is group_names (plural). That is because any host may be part of one or more groups.
It is described in the official documentation as
group_names
List of groups the current host is part of
In the most common case each host is only part of one group, so you can simply use group_names[0] to get what you want.
TLDR;
Use group_names[0].
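Applied to the playbook above (a sketch, assuming each host belongs to exactly one of the three groups):
- name: Set facts for facts directories
  set_fact:
    facts_dir_local: "files/{{ group_names[0] }}"
    facts_dir_remote: "/etc/ansible/facts.d"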
You can use group variables to achieve this, as sketched below, either specified in the inventory file or in separate group_vars/<group> files - see https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
Note, though, that these group variables are squashed into host variables when you run the playbook. Hosts that are listed in multiple groups will end up with variables based on a precedence order.
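For example (a sketch using the groups from the question), the set_fact task becomes unnecessary because each host picks up facts_dir_local from its group:
group_vars/apache.yml:
facts_dir_local: files/apache

group_vars/web.yml:
facts_dir_local: files/web

group_vars/db.yml:
facts_dir_local: files/db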
How can I declare a global variable within an Ansible playbook? I have searched Google and found the solution below, but it's not working as expected.
- hosts: all
  vars:
    prod-servers:
      - x.x.x.x
      - x.x.x.x

- hosts: "{{ prod-servers }}"
  tasks:
    - name: ping
      action: ping
When I try the above code, it says the variable prod-servers is undefined.
You cannot define a variable accessible on a playbook level (global variable) from within a play.
Variable Scopes
Ansible has 3 main scopes:
Global: this is set by config, environment variables and the command line
Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars.
Host: variables directly associated to a host, like inventory, include_vars, facts or registered task outputs
Anything you declare inside a play can thus only be either a play variable, or a (host) fact.
To define a variable which you can use in the hosts declaration:
run ansible-playbook with the --extra-vars option and pass the value as an argument;
or, to achieve the same functionality (deciding which hosts to run a play on from within a preceding play):
define an in-memory inventory and run the subsequent play against that inventory, as sketched below.
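A minimal sketch of the in-memory inventory approach, using add_host (the group name prod_servers is illustrative; note that Ansible variable and group names must not contain hyphens):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Build an in-memory group of prod servers
      add_host:
        name: "{{ item }}"
        groups: prod_servers
      loop:
        - x.x.x.x
        - x.x.x.x

- hosts: prod_servers
  tasks:
    - name: ping
      ping: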
What you seem to want is an inventory (http://docs.ansible.com/ansible/latest/intro_inventory.html); it looks like you have a static list of IPs that may be prod servers (or dev, or whatever), so you can create a static inventory.
In your second play you want to use the list of IPs as hosts to run the tasks on; that's not what Ansible expects. After the hosts keyword in a play declaration, Ansible expects a group name from the inventory.
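For example, a minimal static inventory (a sketch; the group name prod is illustrative):
[prod]
x.x.x.x
x.x.x.x
The second play can then simply use hosts: prod.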
If, on the other hand, your prod servers change from time to time, you may need to create a dynamic inventory. You can have a look at examples in https://github.com/ansible/ansible/tree/devel/contrib/inventory (for instance, there are examples of dynamic inventories based on Amazon EC2 or vSphere).
Well, this can be done using set_fact.
I don't know the best practice for this, but it works for me: set_fact in the first play stores the value as a host fact, and facts persist across the plays of one playbook run.
Here's my playbook example:
- hosts: all
  gather_facts: false
  tasks:
    - set_fact: host='hostname'

- hosts: host-name1
  gather_facts: false
  tasks:
    - name: CheckHostName
      shell: "{{ host }}"
      register: output
    - debug: msg="{{ output }}"

- hosts: host-name2
  gather_facts: false
  tasks:
    - name: CheckHostName
      shell: "{{ host }}"
      register: output
    - debug: msg="{{ output }}"
template/file.cfg.j2
This template file will contain basic lines that are shared between three boxes, plus some lines that are unique to each box. This is the value I wish to variable:ize:
set system user nsroot 546426471446579744 -encrypted
The 546... hash should be in a {{ }} variable, as it will differ between the instances: {{ item.hash }}.
I need an approach for how to set this up and structure it; do I need include_vars, etc.?
EDIT: What I have:
vars/vars.yml
servers:
  ns:
    - name: Copy hash
      hash: 187f637f107bf7265069ace04bf87fcd8e63923169a2c529a
playbook.yml
tasks:
  - name: Variable:ize
    template: src=templates/template.j2 dest=/tmp mode=644 owner=root group=wheel
    with_items: servers[ansible_hostname]
In your inventory file you'll want to do something like this:
host1 nsroot_hash=12345
host2 nsroot_hash=54321
host3 nsroot_hash=24680
And then your template/file.cfg.j2 will look like this:
set system user nsroot {{ nsroot_hash }} -encrypted
Edit: You want the hash variable to be defined in your inventory file since you want a different value for each host that you're going to run this task against. So your inventory (host_vars) file should look something like this (I assume ns is the name of one of your servers):
ns hash=187f637f107bf7265069ace04bf87fcd8e63923169a2c529a
Then your playbook.yml would simply look something like this:
- hosts: all
  tasks:
    - name: Variable:ize
      template: src=templates/template.j2 dest=/tmp/template.txt mode=644 owner=root group=wheel
Note that you don't need the with_items statement. In the above case, assuming that ns is the name of a host, this will create the file /tmp/template.txt with the templated text in it. (Note that dest is a path to a file, not just a path to a directory.)
If you want to apply this task to multiple hosts then all you do is edit the inventory file as shown above:
ns hash=187f637f107bf7265069ace04bf87fcd8e63923169a2c529a
aa hash=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
bb hash=bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
When you run the above playbook.yml file then it will apply the template to all three hosts, ns, aa, and bb, and put the proper hash value in the file on each host.
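Equivalently (a sketch), each per-host hash can live in a host_vars file instead of inline in the inventory:
host_vars/ns.yml:
hash: 187f637f107bf7265069ace04bf87fcd8e63923169a2c529a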