Running Ansible playbooks with two different ansible_user accounts - ansible

I have the following playbook that runs a role against two inventory groups. win_domain contains domain-joined Windows targets, while win_workgroup contains non-domain-joined targets.
---
- name: Windows Test
  hosts: win_domain, win_workgroup
  roles:
    - Windows_Test
The ansible_user used to run this playbook is a domain account that is not available on any of the targets in win_workgroup. Is there a way to run this playbook with a different ansible_user for each group?

I would use group variables and set the connection settings on a per-group basis:
[win_domain]
domain_server_01
[win_workgroup]
workgroup_server_01
[win_domain:vars]
ansible_connection=ssh
ansible_user=domain_user
[win_workgroup:vars]
ansible_connection=ssh
ansible_user=localaccount
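The same per-group settings can also live in group_vars files next to the inventory, which keeps the host list itself clean. A minimal sketch reusing the values from the inventory above (the file layout follows the standard group_vars convention):
# group_vars/win_domain.yml
ansible_connection: ssh
ansible_user: domain_user

# group_vars/win_workgroup.yml
ansible_connection: ssh
ansible_user: localaccount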

Related

Run a task only for hosts that belong to a certain group

I'm trying to skip a task in a playbook based on the group in my inventory file.
Inventory:
[test]
testserver1
[qa]
qaserver1
[prod]
prodserver1
prodserver2
I'm pretty sure I need to add the when clause to my task, but not sure how to reference the inventory file.
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: inventory is not prod
Basically, I want to run the task above on all the servers except for the ones in the group called prod.
You can have all your hosts in one inventory, but I suggest using a separate inventory file per environment (dev, staging, qa and prod). If you separate them only by conditions, it can happen quite fast that you mess up a condition or forget to add it to a new task altogether and accidentally run a task on a prod host that should not run there.
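As a sketch of that suggestion (directory and file names are illustrative, not from the original answer), each environment gets its own inventory file and you choose one at run time:
# inventory/prod
[prod]
prodserver1
prodserver2

# inventory/qa
[qa]
qaserver1

ansible-playbook -i inventory/qa playbook.yml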
If you still want to have all your hosts in the same inventory, you can separate them into different plays (you can have multiple plays in the same playbook) and use hosts to specify where each should run.
For example:
- name: play one
  hosts:
    - qa
    - test
  tasks:
    <all your tasks for qa and test>

- name: play two
  hosts: prod
  tasks:
    <all your tasks for prod>
If you want to do it on a per-task level, you can use the group_names variable.
For example:
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: '"prod" not in group_names'
In that case you need to be really careful when you change things, so that your conditions still behave the way they are supposed to.
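If you are unsure what group_names contains for a given host, a quick diagnostic task (not part of the original answer) prints it:
- name: Show the groups this host belongs to
  ansible.builtin.debug:
    var: group_names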

How to set common ansible_user to all hosts in a group?

Does anyone know if Ansible enables users to share the same ansible_user settings across all hosts included in a group? This feature would be particularly useful when using some cloud computing platforms such as OpenStack, which enables users to launch multiple instances that share the same config, such as user accounts and SSH keys.
There are several behavioral parameters you can configure to modify the way Ansible connects to your hosts. Among them is the ansible_user variable. You can set it per host or per group. You can also define a general ansible_user variable under the all hosts group, which you then override at the group or host level.
If you were writing your inventory in just one hosts.yml file, you'd do it like this:
all:
  children:
    ubuntu_linux:
      hosts:
        ubuntu_linux_1:
        ubuntu_linux_2:
    aws_linux:
      hosts:
        aws_linux_host_1:
        aws_linux_host_2:
        aws_linux_host_3:
      vars:
        ansible_user: ec2-user
  vars:
    ansible_user: ubuntu
And, if you are using a directory to build your inventory, you can set it inside a ./inventory/group_vars/<group_name>.yml file.
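For example, a directory-style inventory could look like this (a sketch reusing the aws_linux group and user from the example above):
inventory/hosts.yml
inventory/group_vars/aws_linux.yml

# inventory/group_vars/aws_linux.yml
ansible_user: ec2-user

Running with ansible-playbook -i inventory/ playbook.yml picks up the group_vars file automatically.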
Check the "Connecting to hosts: behavioral inventory parameters" section of Ansible docs to see what other parameters you can configure.
I hope it helps

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
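A minimal illustration of that point (not from the original answer): if the same host is listed in two groups that both define a variable, only one value survives for that host. For sibling groups, Ansible merges them in alphabetical order, so the group loaded last wins by default:
[groupA]
host1
[groupB]
host1
[groupA:vars]
myvar=aaa
[groupB:vars]
myvar=zzz

Here host1 ends up with myvar=zzz, because groupB is merged after groupA.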
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...
    - name: gather facts
      setup:
    - name: install things
      apk:
In these examples testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod] and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?
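A common refinement of this workaround (not part of the original answer) is to give the variable a default, so the playbook still runs when no extra var is passed:
- name: add role fail2ban
  hosts: "{{ target | default('prod') }}"
  remote_user: root
  roles:
    - fail2ban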

Ansible: how to handle servers on the same host?

I have been trying for days to set up Ansible to bring up a dev environment for my project and, secondly, to deploy to beta and live servers. The project is not that big, but it seems that Ansible is not flexible enough when it comes to small projects.
Inventory
[development]
web_server ansible_connection=docker
db_server ansible_connection=docker
[production]
web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
I want to keep the web_server and db_server aliases intact so I can switch between development and production in my scripts without hassle. The main issue is that I can't figure out how to create a playbook that will work nicely with the above setup.
This solution doesn't work since it runs all tasks twice!
---
- hosts: staging
  tasks:
    - name: Setup web server
      command: uptime
      delegate_to: web_server
    - name: Setup db server
      command: ls
      delegate_to: db_server
This solution solves the above problem but it prints the wrong alias (web_server even when running the db task)!
---
- hosts: staging
  run_once: true
  tasks:
    - name: Setup web servers
      command: uptime
      delegate_to: web_server
    - name: Setup db servers
      command: ls
      delegate_to: db_server
This solution would be plausible but Ansible does not support access to an individual host from a group:
---
- hosts: staging:web_server
  tasks:
    - name: Deploy to web server
      command: uptime

---
- hosts: staging:db_server
  tasks:
    - name: Deploy to db server
      command: ls
Is there a way to accomplish what I want? Ansible feels quite restrictive until this point which is a bummer after all the praise I heard about it.
-------------------------- Edit after udondan's suggestion ----------------------
I tried udondan's suggestion and it seemed to work. However when I add a new group to the inventory it breaks.
[development]
web_server ansible_connection=docker
db_server ansible_connection=docker
[staging]
web_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh
db_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh
[production]
web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
In this case the IP of the staging server (20.20.20.20) will be used when running the production playbook.
This solution doesn't work since it runs all tasks twice!
Assuming that hosts: staging is what you have defined in development, this is the expected behavior. You defined a group of hosts, and by running tasks or roles against this group, all hosts of that group will be processed. By delegating the task to a different host you only force the task to be executed elsewhere, but the task still is executed once for each host of the group.
I think what you want is this:
---
- hosts: web_server
  tasks:
    - name: Setup web server
      command: uptime

- hosts: db_server
  tasks:
    - name: Setup db server
      command: ls
Update after response:
The problem is that you use the same hostnames for all environments and try to point them at different connection settings. What Ansible is actually doing is this:
It reads your inventory from top to bottom, finds the groups and processes them in alphabetical order: development, production, staging.
It finds the host web_server in the group development, so it creates that host, adds it to the group and sets the var ansible_connection for it.
It proceeds to the group production, again finds the host web_server, adds it to that group and sets the vars ansible_host, ansible_user and ansible_connection. These are not specific to the group: they are set on the host web_server, overriding the previous value of ansible_connection.
Ansible then continues to the staging group and again overrides all the settings.
The host web_server therefore belongs to all 3 groups and its vars have been merged, even if you target only one group, e.g. ansible-playbook ... --limit=development. The limit only restricts the play to matching hosts, but the host still belongs to all groups and carries all the (merged) vars that have been defined.
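You can verify this merging yourself with ansible-inventory (a diagnostic step, not part of the original answer):
ansible-inventory -i inventory --host web_server

With the three groups above, this prints the merged vars for web_server, i.e. the staging values (ansible_host=20.20.20.20, ansible_user=tom, ansible_connection=ssh), no matter which group you later target with --limit.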
It is highly recommended and best practice to have an inventory file per environment. Not specifically for the problem you are facing, but simply to add another layer of safety so you don't accidentally do anything to your production hosts when you actually want to target staging boxes. But it will also solve your problem. So instead of one inventory file, you'd have:
inventory/development:
[development]
web_server
db_server
[development:vars]
ansible_connection=docker
inventory/staging:
[staging]
web_server
db_server
[staging:vars]
ansible_host=20.20.20.20
ansible_user=tom
ansible_connection=ssh
inventory/production:
[production]
web_server
db_server
[production:vars]
ansible_host=10.10.10.10
ansible_user=tom
ansible_connection=ssh
Then you'd call Ansible with the appropriate inventory file:
ansible-playbook -i inventory/staging ... playbook.yml
If you somehow cannot do that and need a single inventory file, you need to make sure the host names are unique so the connection details are not overridden.
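A minimal sketch of that (the prefixed aliases are illustrative, not from the original answer), assuming the development containers are actually named web_server and db_server so ansible_host can still point the docker connection at them:
[development]
dev_web_server ansible_connection=docker ansible_host=web_server
dev_db_server ansible_connection=docker ansible_host=db_server

[production]
prod_web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
prod_db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh

Since dev_web_server and prod_web_server are different inventory hosts, their vars no longer overwrite each other.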
