How to set common ansible_user to all hosts in a group? - ansible

Does anyone know if Ansible lets users share the same ansible_user setting across all hosts included in a group? This would be particularly useful on cloud computing platforms such as OpenStack, which let users launch multiple instances sharing the same configuration, such as user accounts and SSH keys.

There are several behavioral parameters you can configure to modify the way Ansible connects to your hosts. Among them is the ansible_user variable. You can set it per host or per group. You can also define a general ansible_user variable under the all hosts group and override it at the group or host level.
If you were writing your inventory in just one hosts.yml file, you'd do it like this:
all:
  children:
    ubuntu_linux:
      hosts:
        ubuntu_linux_1:
        ubuntu_linux_2:
    aws_linux:
      hosts:
        aws_linux_host_1:
        aws_linux_host_2:
        aws_linux_host_3:
      vars:
        ansible_user: ec2-user
  vars:
    ansible_user: ubuntu
And if you are building your inventory as a directory, you can set it in the corresponding group_vars file, for example ./inventory/group_vars/aws_linux.yml.
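For example, a minimal layout following the group_vars convention could look like this (the file names are just an illustration, not part of the original setup):
inventory/
  hosts.yml            # groups and hosts only
  group_vars/
    all.yml            # ansible_user: ubuntu   (default for every host)
    aws_linux.yml      # ansible_user: ec2-user (override for that group)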
Check the "Connecting to hosts: behavioral inventory parameters" section of Ansible docs to see what other parameters you can configure.
I hope it helps

Related

Running Ansible playbooks with two different ansible_user accounts

I have the following playbook that runs a role against two inventory groups. win_domain contains domain-joined Windows targets, while win_workgroup contains non-domain-joined targets.
---
- name: Windows Test
  hosts: win_domain, win_workgroup
  roles:
    - Windows_Test
The ansible_user used to run this playbook is a domain account that is not accessible to any target in win_workgroup. Is there a way to run this playbook using a different ansible_user for each group?
I would use group variables and set the connection type and user on a per-group basis:
[win_domain]
domain__server_01
[win_workgroup]
workgroup_server_01
[win_domain:vars]
ansible_connection=ssh
ansible_user=domain_user
[win_workgroup:vars]
ansible_connection=ssh
ansible_user=localaccount
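If the inventory grows, the same per-group settings can also live in group_vars files next to the hosts file; a sketch of what that might look like (the paths are assumed, not taken from the answer):
# group_vars/win_domain.yml
ansible_connection: ssh
ansible_user: domain_user

# group_vars/win_workgroup.yml
ansible_connection: ssh
ansible_user: localaccount
You can then check which account each group actually connects with via an ad-hoc call such as ansible win_domain -i hosts -m ansible.windows.win_whoami (assuming the ansible.windows collection is installed).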

Ansible Tower/AWX multi credentials

I have Ansible Tower running. I would like to be able to assign multiple credentials (SSH keys) to a job template and then, when I run the job, have it use the correct credentials for the machine it connects to, so it does not cause user lockouts by cycling through the available credentials.
I have a job template and have added multiple credentials using custom credential types, but how can I tell it to use the correct key for each host?
Thanks
I'm using community Ansible, so forgive me if Ansible Tower behaves differently.
inventory.yml
all:
  children:
    web_servers:
      vars:
        ansible_user: www_user
        ansible_ssh_private_key_file: ~/.ssh/www_rsa
      hosts:
        node1-www:
          ansible_host: 192.168.88.20
        node2-www:
          ansible_host: 192.168.88.21
    database_servers:
      vars:
        ansible_user: db_user
        ansible_ssh_private_key_file: ~/.ssh/db_rsa
      hosts:
        node1-db:
          ansible_host: 192.168.88.20
        node2-db:
          ansible_host: 192.168.88.21
What you can see here is that:
Both servers run both the webapp and the DB
The webapp and the DB use different users
Group host names differ based on the user (this is important)
You have to create credential types per host and then create a credential for each of your credential types. You should transform the injector configurations into host variables.
Please see the following post:
https://faun.pub/lets-do-devops-ansible-awx-tower-authentication-power-user-2d4c3c68ca9e
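As a rough sketch of that idea (all field and variable names below are assumptions for illustration, not taken from the linked post), a custom credential type in AWX/Tower could expose a key through its injector configuration:
# Input configuration of the custom credential type
fields:
  - id: www_ssh_key
    type: string
    label: SSH private key for the web servers
    secret: true
    multiline: true

# Injector configuration: write the key to a temp file and expose its path
file:
  template.www_key: "{{ www_ssh_key }}"
extra_vars:
  www_ssh_key_file: "{{ tower.filename.www_key }}"
The group that should use it can then map the injected variable onto the connection setting, e.g. ansible_ssh_private_key_file: "{{ www_ssh_key_file }}" under the web_servers vars.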

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes it think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, but anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...
    - name: gather facts
      setup:
    - name: install things
      apk:
In these examples, testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps on these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod], and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?

Ansible: ansible_user in inventory vs remote_user in playbook

I am trying to run an Ansible playbook against a server using an account other than the one I am logged in as on the control machine. I tried to specify an ansible_user in the inventory file according to the documentation on Inventory:
[srv1]
192.168.1.146 ansible_connection=ssh ansible_user=user1
However Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf
What worked for me was adding the remote_user argument to the playbook:
- hosts: srv1
  remote_user: user1
Now the same Ansible command connects as user1:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: user1
Also, adding the remote_user variable to ansible.cfg makes Ansible use the intended user instead of the logged-on one.
Are the ansible_user in inventory file and remote_user in playbook/ansible.cfg for different purposes?
What is the ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
You're likely running into a common issue: the published Ansible docs are for the development version (2.0 right now), and we don't keep the old ones around. It's a big point of contention... Assuming you're using something pre-2.0, the inventory var name you need is ansible_ssh_user. ansible_user works in 2.0 (as does ansible_ssh_user; it gets aliased in).
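If the same inventory has to work on both 1.9 and 2.x, a host line using the old name should be understood by both (the host and address here are just the question's example):
[srv1]
192.168.1.146 ansible_connection=ssh ansible_ssh_user=user1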
I usually add my remote username to /etc/ansible/ansible.cfg as follows:
[defaults]
remote_user = MY_REMOTE_USERNAME
This way you don't have to configure ansible_user in the inventory file for each host entry.
