How to Set UserNames in Ansible Dynamic Inventory EC2

I have a simple Ansible dynamic inventory for AWS servers that looks like this.
---
plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Name
hostnames:
  # A list in order of precedence for hostname variables.
  - ip-address
compose:
  ansible_host: _Applications_
  ansible_user: "'ubuntu'"
This works fine, except that I also have another instance that's Red Hat.
This means that when I try to run a simple ping command on all the hosts, it fails, as the username ubuntu is only valid on one of the servers.
Is there a way to group my inventory so that I can add the ec2-user username for a specific group, maybe based on its tag or something else?
I could do this easily with my static inventory, but I'm not sure how to do this with a dynamic inventory.
I've tried setting my Ansible inventory as an environment variable
export ANSIBLE_INVENTORY=~/Users/inventory
And placed my aws_ec2.yaml in the inventory directory, along with a group_vars directory containing my different groups, with a default username in each:
username: ubuntu
username: ec2-user
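For reference, the directory layout being described looks something like this (a sketch; the group-vars file names are placeholders, since the real ones depend on the keyed_groups above):
~/Users/inventory/
├── aws_ec2.yaml
└── group_vars/
    ├── ubuntu_group.yaml    # username: ubuntu
    └── redhat_group.yaml    # username: ec2-user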
and then setting my inventory file as such
compose:
  ansible_user: "{{ username }}"
But when Ansible tries to connect, it's using an admin username and not what's set in my group vars.
Is there a way to set the different usernames needed to connect to the different type of servers?

Per the example for the constructed plugin, you can use the keyed_groups feature to create groups by ansible_distribution:
keyed_groups:
  # this creates a group per distro (distro_CentOS, distro_Debian) and assigns
  # the hosts that have matching values to it, using the default separator "_"
  - prefix: distro
    key: ansible_distribution
And then set ansible_user inside group_vars/distro_Ubuntu.yaml and group_vars/distro_RedHat.yaml.
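For example, each file only needs to set the connection user (a minimal sketch; the exact contents are assumed, not quoted from the plugin docs):
# group_vars/distro_Ubuntu.yaml
ansible_user: ubuntu

# group_vars/distro_RedHat.yaml
ansible_user: ec2-user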
Also from the documentation, this requires fact caching to operate (because otherwise Ansible doesn't know the value of ansible_distribution at the time it's processing the keyed_groups setting).
I don't have access to AWS at the moment, but here's how I'm testing everything locally. Given an inventory that looks like:
$ tree inventory
inventory/
├── 00-hosts.yaml
└── 10-constructed.yaml
Where inventory/00-hosts.yaml looks like:
all:
  hosts:
    host0:
      ansible_host: localhost
And inventory/10-constructed.yaml looks like:
plugin: constructed
strict: false
groups:
  ipmi_hosts: ipmi_host|default(false)
keyed_groups:
  - prefix: "distro"
    key: ansible_distribution
And ansible.cfg looks like:
[defaults]
inventory = inventory
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./.facts

[inventory]
# enable_plugins lives in the [inventory] section; both the yaml and
# constructed plugins are needed for the two inventory files above.
enable_plugins = yaml, constructed
The first time I run this playbook:
- hosts: all
  gather_facts: true
  tasks:
    - debug:
        var: group_names
The output of the debug task is:
TASK [debug] ***************************************************************
ok: [host0] => {
    "group_names": [
        "ungrouped"
    ]
}
But because of the fact gathering and caching performed by the previous playbook run, the second time I run it the output is:
TASK [debug] ***************************************************************
ok: [host0] => {
    "group_names": [
        "distro_Fedora"
    ]
}
Similarly, before the first playbook run ansible-inventory --graph outputs:
@all:
  |--@ungrouped:
  |  |--host0
But after running the playbook once, I get:
@all:
  |--@distro_Fedora:
  |  |--host0
  |--@ungrouped:
I've bundled this all into an example repository.

Related

Passing hostname to ansible playbook through extravars

I have to pass the host on which the Ansible command will be executed through extra vars.
I don't know in advance which hosts the tasks will be applied to, and therefore my inventory file is currently missing the hosts: variable.
If I understood the article "How to pass extra variables to an Ansible playbook" correctly, overwriting hosts is only possible with already composed groups of hosts.
From the post Ansible issuing warning about localhost I gathered that referencing the hosts to be managed in an Ansible inventory is a must. However, I still have doubts about it, since the usage of extra vars was not mentioned in that question.
So my question is: what can I do to make this playbook work?
- hosts: "{{ host }}"
tasks:
- name: KLIST COMMAND
command: klist
register: klist_result
- name: TEST COMMAND
ansible.builtin.shell: echo hi > /tmp/test_result.txt
... referencing hosts to be managed in an Ansible inventory is a must
Yes, that's the case. Regarding your question
What can I do in order to make this playbook work? (annot. without a "valid" inventory file)
you could try with the following workaround.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - add_host:
        hostname: "{{ target_hosts }}"
        group: dynamic

- hosts: dynamic
  become: true
  gather_facts: true
  tasks:
    - name: Show hostname
      shell:
        cmd: "hostname && who am i"
      register: result

    - name: Show result
      debug:
        var: result
A call with
ansible-playbook hosts.yml --extra-vars="target_hosts=test.example.com"
results in execution on
TASK [add_host] ***********
changed: [localhost]
PLAY [dynamic] ************
TASK [Show hostname] ******
changed: [test.example.com]
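If several hosts need to be passed at once, a comma-separated extra var can be split inside the first play (a sketch, not from the original answer; it assumes target_hosts is given as e.g. host1.example.com,host2.example.com):
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - add_host:
        hostname: "{{ item }}"
        group: dynamic
      loop: "{{ target_hosts.split(',') }}"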
In any case it is recommended to check how to build your inventory.
Further Documentation
add_host module – Add a host (and alternatively a group) to the ansible-playbook in-memory inventory

Executing playbooks in groupings created in hosts.yaml file

Ansible Version: 2.8.3
I have the following hosts.yaml file for use in Ansible
I have applications that I want to deploy on potentially both rp_1 and rp_2
---
all:
  vars:
    docker_network_name: devopsNet
    http_protocol: http
    http_host: ansiblenode01_new.example.com
    http_url: "{{ http_protocol }}://{{ http_host }}:{{ http_port }}/{{ http_context }}"
  hosts:
    ansiblenode01_new.example.com:
    ansiblenode02_new.example.com:
  children:
    ##################################################################
    rp_1:
      children:
        httpd:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            number_of_tools: 6
            outside_port: 443
        jenkins:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            http_port: 4444
            http_context: jenkins
        artifactory:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            http_port: 8000
            http_context: artifactory
    rp_2:
      children:
        httpd:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            number_of_tools: 4
            outside_port: 7090
        jenkins:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            http_port: 7990
            http_context: jenkins
        artifactory:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            http_port: 8000
            http_context: artifactory
The following Python wrapper script calls ansible-playbook in a loop to deploy the applications:
#!/usr/bin/python
import yaml
import os
import getpass

with open('hosts.yaml') as f:
    var = yaml.safe_load(f)  # safe_load avoids PyYAML's unsafe-load deprecation warning

sudo_pass = getpass.getpass(prompt="Please enter sudo password: ")

# Run an individual ansible-playbook deployment for each application listed
# (and uncommented) under each network's 'children' key.
# Note: the sudo password becomes visible in the process list this way.
for network in var['all']['children']:
    for app in var['all']['children'][network]['children']:
        os.system('ansible-playbook deploy.yml --extra-vars "application=' + app + ' ansible_sudo_password=' + sudo_pass + '"')
The problem, as I see it, is that both Ansible and Python use the hosts.yaml file, but not in the way I thought they would, as I'm not too familiar with Ansible.
The hosts.yaml file was written in the format required by Ansible.
The Python script opens the YAML file, turns it into a dictionary, and steps through it looking for application names to pass to the command-line call. The problem is that Python only passes the name of the app as a string when invoking ansible-playbook; the dictionary structure obviously doesn't get passed. Ansible then opens the hosts.yaml file as well, but all it does is look for the first occurrence of the app name that was passed as an argument, completely disregarding the structure I've created in the YAML file.
So basically only the rp_1 group in the YAML file will be executed, since Ansible reads the YAML top-down and stops at the first occurrence. All or parts of the rp_2 group will never be processed if it contains some or all of the same apps as rp_1, so the same deployment runs twice.
Is there a way to invoke Ansible, or some way to set the playbooks up, so that Ansible recognizes that my hosts file contains networks (rp_1, rp_2) I want to set up, and executes the playbooks against the groupings I've created in the YAML file?
Ansible already has this built-in. You do not need a wrapper script.
To run the deploy.yml playbook on all hosts in your hosts.yaml (this is called "inventory" btw.) do this:
ansible-playbook -i hosts.yaml deploy.yml -bK
To only run it on rp_1, do this:
ansible-playbook -i hosts.yaml deploy.yml --limit rp_1 -bK
-b makes ansible become root
-K will make ansible ask for the password to become root
-i <file> specifies the inventory file
--limit <host/group> limits the execution to certain hosts or groups; you can also give more than one, as a comma-separated list (e.g., rp_1,rp_2)
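If you still want one deployment run per group, as the wrapper script was doing, a shell loop over --limit achieves it without Python (a sketch using the group names from the inventory above):
for grp in rp_1 rp_2; do
    ansible-playbook -i hosts.yaml deploy.yml --limit "$grp" -bK
done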
You can also specify a list of hosts/groups in your playbook like this:
- name: do whatever you like
  hosts:
    - rp_1
    - rp_2
  become: yes
  tasks:
    - debug:
        msg: "I'm running on {{ inventory_hostname }}!"
Further reading:
Discovering variables: facts and magic variables
How to build your inventory
Special variables
Using variables
Ansible examples
Accessing variables of "other" hosts: on serverfault and stackoverflow

Ansible alias in separate file

I'm trying to separate my vars from the playbook file.
My vars use an alias to make things easier.
Example of my playbook.yaml:
---
- hosts: localhost
  name: playbook name
  vars:
    login: &login
      username: "{{ login.user }}"
      password: "{{ login.pass }}"
  vars_files:
    - creds.yaml
  tasks:
    - name: using creds
      some_module:
        some_command: true
        <<: *login
How can I move the "vars" section to a file outside the playbook and keep the same template inside "tasks"?
The way to achieve that is to create a proper inventory and use inventory variables; see the Ansible inventory docs.
Create this schema tree:
hosts/              (directory)
├── group_vars/     (directory)
├── host_vars/      (directory)
├── hosts-groups/   (directory)
└── hosts-all       (file)
Then you can either define your hosts individually in hosts-all:
localhost ansible_ssh_host=x.x.x.x {{ potentially any other needed argument/variable }}
Or you can group hosts and define a new group in hosts/hosts-groups: create a file with the desired group name, e.g. hosts/hosts-groups/local, and inside it define your host group:
[local]
localhost
Then, in either host_vars or group_vars (depending on your first step), create a file named after the host or group, and write your variables inside it. For example (by group): create hosts/group_vars/local and write your variables in it:
login: user1
pass: pass1
Ansible will then automatically pick up the right vars for each host according to your inventory structure.
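One caveat worth noting: YAML anchors and aliases are resolved while a single file is parsed, so the &login / <<: *login trick cannot span separate files. If you move the credentials out, reference the keys explicitly instead (a sketch; some_module and the key names are placeholders carried over from the question):
# creds.yaml
login:
  username: user1
  password: pass1

# playbook.yaml
- hosts: localhost
  vars_files:
    - creds.yaml
  tasks:
    - name: using creds
      some_module:
        some_command: true
        username: "{{ login.username }}"
        password: "{{ login.password }}"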

Ansible: How to declare global variable within playbook?

How can I declare a global variable within an Ansible playbook? I have searched on Google and found the solution below, but it's not working as expected.
- hosts: all
  vars:
    prod-servers:
      - x.x.x.x
      - x.x.x.x

- hosts: "{{ prod-servers }}"
  tasks:
    - name: ping
      action: ping
When I try the above code, it says the variable prod-servers is undefined.
You cannot define a variable accessible on a playbook level (global variable) from within a play.
Variable Scopes
Ansible has 3 main scopes:
Global: this is set by config, environment variables and the command line
Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars.
Host: variables directly associated to a host, like inventory, include_vars, facts or registered task outputs
Anything you declare inside a play can thus only be either a play variable, or a (host) fact.
To define a variable, which you can use in the hosts declaration:
run ansible-playbook with --extra-vars option and pass the value in the argument;
or to achieve the same functionality (decide which hosts to run a play on, from within a preceding play):
define an in-memory inventory and run the subsequent play against that inventory.
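To illustrate the in-memory inventory option (a sketch, not from the original answer; note the group name uses an underscore, since a hyphen like prod-servers would be parsed as subtraction in Jinja):
- hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: prod_servers
      loop:
        - x.x.x.x
        - x.x.x.x

- hosts: prod_servers
  tasks:
    - name: ping
      ping: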
What you seem to want is an inventory (http://docs.ansible.com/ansible/latest/intro_inventory.html). It looks like you have a static list of IPs that may be prod servers (or dev, or whatever), so you can create a static inventory.
In your second play you want to use the list of IP's as hosts to run the tasks, that's not what Ansible expects. After the "hosts" keyword in a play declaration, Ansible expects a group name from the inventory.
If, on the contrary, your prod servers change from time to time, you may need to create a dynamic inventory. You can have a look at the examples in https://github.com/ansible/ansible/tree/devel/contrib/inventory (for instance, there are dynamic inventory scripts for Amazon EC2 or vSphere).
regards
Well, this can be done using set_fact.
I don't know the best practice for this, but it works for me.
Here's my example playbook:
- hosts: all
  gather_facts: false
  tasks:
    - set_fact: host='hostname'
- hosts: host-name1
  gather_facts: false
  tasks:
    - name: CheckHostName
      shell: "{{ host }}"
      register: output

    - debug: msg="{{ output }}"

- hosts: host-name2
  gather_facts: false
  tasks:
    - name: CheckHostName
      shell: "{{ host }}"
      register: output

    - debug: msg="{{ output }}"

Override hosts variable of Ansible playbook from the command line

This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
    - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override as a one time off thing, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either command-line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
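The vars-file variant of the same call uses the @ prefix to load extra vars from a YAML file (a sketch; vars.yml and its contents are assumed):
# vars.yml
variable_host: droplets

ansible-playbook server.yml --extra-vars "@vars.yml"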
For anyone who might come looking for the solution:
Playbook
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name in the extra vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user to specify
  # a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts. This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit.
NOTE: Single quotes MUST be used to prevent bash
interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts (version 2.3.2.0).
You could have
- hosts: all (or group)
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
Another solution is to use the special variable ansible_limit, which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning, but does nothing since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
the second play runs on this known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group etc. I like the logic from the comment by @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
Just came across this googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to localhost (the Ansible control node) and call the ansible-playbook command from there.
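A minimal sketch of that idea (assumed, not quoted from the answer; other-playbook.yml is a placeholder):
- hosts: web
  tasks:
    - name: Run this on the control node instead of the target host
      command: ansible-playbook other-playbook.yml
      delegate_to: localhost
      run_once: true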
I am using Ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
become: yes
vars_files:
- vars/project.yml
tasks:
...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        inventory_hosts: []

    - set_fact:
        inventory_hosts: "{{ inventory_hosts + [item] }}"
      with_items: "{{ hostvars.keys() | list }}"

    - meta: end_play
      when: "(play_hosts | length) == (inventory_hosts | length)"

    - debug:
        msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me, as I am using Azure DevOps to deploy an application using CI/CD pipelines. I had to make the hosts entry (in the YAML file) more dynamic, so in the release pipeline I can set its value, for example:
--extra-vars "host=$(target_host)"
My Ansible playbook looks like this:
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
