Ansible: how to handle servers on the same host?

I have been trying for days to set up Ansible, first to bring up a dev environment for my project and second to deploy to the beta and live servers. The project is not that big, but it seems that Ansible is not flexible enough when it comes to small projects.
Inventory
[development]
web_server ansible_connection=docker
db_server ansible_connection=docker
[production]
web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
I want to keep the web_server and db_server aliases intact so I can switch between development and production in my scripts without hassle. The main issue is that I can't figure out how to create a playbook that will work nicely with the above setup.
This solution doesn't work since it runs all tasks twice!
---
- hosts: staging
  tasks:
    - name: Setup web server
      command: uptime
      delegate_to: web_server
    - name: Setup db server
      command: ls
      delegate_to: db_server
This solution solves the above problem but it prints the wrong alias (web_server even when running the db task)!
---
- hosts: staging
  run_once: true
  tasks:
    - name: Setup web servers
      command: uptime
      delegate_to: web_server
    - name: Setup db servers
      command: ls
      delegate_to: db_server
This solution would be plausible but Ansible does not support access to an individual host from a group:
---
- hosts: staging:web_server
  tasks:
    - name: Deploy to web server
      command: uptime

---
- hosts: staging:db_server
  tasks:
    - name: Deploy to db server
      command: ls
Is there a way to accomplish what I want? Ansible feels quite restrictive up to this point, which is a bummer after all the praise I have heard about it.
-------------------------- Edit after udondan's suggestion ----------------------
I tried udondan's suggestion and it seemed to work. However when I add a new group to the inventory it breaks.
[development]
web_server ansible_connection=docker
db_server ansible_connection=docker
[staging]
web_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh
db_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh
[production]
web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
In this case the IP of the staging server (20.20.20.20) will be used when running the production playbook.

This solution doesn't work since it runs all tasks twice!
Assuming that hosts: staging is what you have defined as development, this is the expected behavior. You defined a group of hosts, and by running tasks or roles against this group, all hosts of that group will be processed. By delegating a task to a different host you only force the task to be executed elsewhere, but it is still executed once for each host of the group.
I think what you want is this:
---
- hosts: web_server
  tasks:
    - name: Setup web server
      command: uptime

- hosts: db_server
  tasks:
    - name: Setup db server
      command: ls
Update after response:
The problem is that you use the same hostnames for all environments and try to point them at different connection settings. What Ansible actually does is this:
It reads your inventory from top to bottom and processes the groups in alphabetical order (development, production, staging). It finds the host web_server in the group development, so it creates that group, adds the host, and sets the variable ansible_connection for it. It proceeds to the group production, again finds the host web_server, adds it to the group production, and sets the variables ansible_host, ansible_user and ansible_connection. But these are not specific to that group: they are set on the hostname web_server, overriding the previous value of ansible_connection. Ansible then continues to the staging group and again overrides all the settings. The host web_server belongs to all three groups and its variables have been merged, even if you have targeted only one group, e.g. with ansible-playbook ... --limit=development. The limit only restricts the play to matching hosts; the host still belongs to all groups and carries all the (merged) variables that have been defined.
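A quick way to see that merging for yourself (a hedged aside; the inventory filename here is an assumption) is to ask Ansible what it has resolved for the host:

# print the variables Ansible has resolved for web_server from this inventory
ansible-inventory -i inventory --host web_server
# with the edited three-group inventory above this will typically show the
# staging values (ansible_host=20.20.20.20), matching what you observed,
# no matter which group you later --limit the play to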
It is highly recommended, and best practice, to have an inventory file per environment. Not specifically for the problem you are facing, but simply to add another layer of safety so you don't accidentally do anything to your production hosts when you actually want to target staging boxes. But it will also solve your problem. So instead of one inventory file, you'd have:
inventory/development:
[development]
web_server
db_server
[development:vars]
ansible_connection=docker
inventory/staging:
[staging]
web_server
db_server
[staging:vars]
ansible_host=20.20.20.20
ansible_user=tom
ansible_connection=ssh
inventory/production:
[production]
web_server
db_server
[production:vars]
ansible_host=10.10.10.10
ansible_user=tom
ansible_connection=ssh
Then you'd call Ansible with the appropriate inventory file:
ansible-playbook -i inventory/staging ... playbook.yml
If you somehow cannot do that and need a single inventory file, you need to make sure the hostnames are unique so the connection details are not overridden.
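For completeness, a minimal sketch of that single-file variant; the prefixed aliases below are hypothetical and only serve to keep the per-host variables from colliding:

[development]
dev_web_server ansible_connection=docker
dev_db_server ansible_connection=docker

[staging]
staging_web_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh
staging_db_server ansible_host=20.20.20.20 ansible_user=tom ansible_connection=ssh

[production]
prod_web_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh
prod_db_server ansible_host=10.10.10.10 ansible_user=tom ansible_connection=ssh

The trade-off is that your plays can no longer target a single web_server alias across environments, which is exactly why the separate inventory files above are the nicer option.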

Related

Run a task only for hosts that belong to a certain group

I'm trying to skip a task in a playbook based on the group in my inventory file.
Inventory:
[test]
testserver1
[qa]
qaserver1
[prod]
prodserver1
prodserver2
I'm pretty sure I need to add the when clause to my task, but not sure how to reference the inventory file.
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: inventory is not prod
Basically, I want to run the task above on all the servers except for the ones in the group called prod.
You can have all your hosts in one inventory, but I suggest using different inventory files for dev, staging, qa and prod. If you separate them by condition, it can happen quite quickly that you mess up some condition, or forget to add it to a new task altogether, and accidentally run a task on a prod host where it should not run.
If you still want to have all your hosts in the same inventory, you can separate them into different plays (you can have multiple in the same playbook) and use hosts to specify where each should run.
For example:
- name: play one
  hosts:
    - qa
    - test
  tasks:
    <all your tasks for qa and test>

- name: play two
  hosts: prod
  tasks:
    <all your tasks for prod>
If you want to do it on a per-task level, you can use the group_names variable.
For example:
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: '"prod" not in group_names'
In that case you need to be really careful when you change things, so that your conditions still behave the way they are supposed to.
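If you do go the conditional route, a quick ad-hoc check (the inventory filename here is an assumption) shows which groups each host actually ends up in, which is what the condition above keys on:

ansible all -i inventory -m debug -a "var=group_names"
# prodserver1 and prodserver2 should report ["prod"], testserver1 ["test"], qaserver1 ["qa"]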

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like; anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...
    - name: gather facts
      setup:
    - name: install things
      apk:
In these examples, testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.
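One more pattern worth sketching here, not taken from the answer above but commonly used for exactly this "pseudo-host for localhost" wish: give each group its own inventory alias that happens to point at the control machine. The alias names below are hypothetical:

[cloud_instance_a]
infra_a ansible_host=127.0.0.1 ansible_connection=local ansible_python_interpreter=python

[cloud_instance_b]
infra_b ansible_host=127.0.0.1 ansible_connection=local ansible_python_interpreter=python

A play against cloud_instance_a:cloud_instance_b then runs the role twice on the control machine, once per alias, and each alias carries only its own group's variables, because the variables are bound to two distinct inventory hosts.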

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod], and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?
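A small, hedged refinement on this pattern: give target a default that matches nothing (no_hosts below is a deliberately nonexistent group), so forgetting the extra var simply skips the play instead of aborting on an undefined variable:

- name: add role fail2ban
  hosts: "{{ target | default('no_hosts') }}"
  remote_user: root
  roles:
    - fail2ban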

How to create a file locally with ansible templates on the development machine

I'm starting out with ansible and I'm looking for a way to create a boilerplate project on the server and on the local environment with ansible playbooks.
I want to use ansible templates locally to create some generic files.
But how would I get Ansible to execute something locally?
I read something about local_action, but I guess I did not get it right.
This is for the webserver... but how do I take this and create some files locally?
- hosts: webservers
  remote_user: someuser
  tasks:
    - name: create some file
      template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
You can delegate tasks with the param delegate_to to any host you like, for example:
- name: create some file
  template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
  delegate_to: localhost
See Playbook Delegation in the docs.
If your playbook should in general run locally and no external hosts are involved though, you can simply create a group which contains localhost and then run the playbook against this group. In your inventory:
[local]
localhost ansible_connection=local
and then in your playbook:
hosts: local
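Putting those two pieces together, a minimal sketch of a purely local playbook, reusing the template paths from the question, could look like this:

- hosts: local
  tasks:
    - name: create some file locally
      template:
        src: ~/workspace/ansible_templates/somefile_template.j2
        dest: /etc/somefile/apps-available/someproject.ini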
Ansible has a local_action directive to support these scenarios which avoids the localhost and/or ansible_connection workarounds and is covered in the Delegation docs.
To modify your original example to use local_action:
- name: create some file
  local_action: template src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
which looks cleaner.
If you cannot do/allow SSH to localhost, you can split the playbook into local actions and remote actions.
connection: local tells Ansible not to use SSH for a play, as shown here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#local-playbooks
Example:
# myplaybook.yml
- hosts: remote_machines
  tasks:
    - debug: msg="do stuff in the remote machines"

- hosts: 127.0.0.1
  connection: local
  tasks:
    - debug: msg="ran in local ansible machine"

- hosts: remote_machines
  tasks:
    - debug: msg="do more stuff in remote machines"

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of Ansible ec2 provisioning projects; they invariably either provision pre-existing ec2 instances, or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers          # important bit
  connection: local          # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1    # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first step of each layer's play adds a new host to the apiservers group, with the second step (Configure ... apiservers) then being able to exclude the localhost without getting a no hosts matching error.
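For illustration only, here is a rough sketch of what the tail end of the aws role might look like to produce that behaviour; the module arguments, tag names and AMI are placeholders, not the asker's actual role:

# roles/aws/tasks/main.yml (hypothetical sketch)
- name: launch an instance for this layer
  ec2:
    image: ami-00000000              # placeholder AMI
    instance_type: t2.micro
    group: "{{ security_group }}"
    instance_tags:
      layer: "{{ host_group }}"
    count: 1
    wait: yes
  register: created

- name: add the new instance to the in-memory host group
  add_host:
    name: "{{ item.public_ip }}"
    groups: "{{ host_group }}"
  with_items: "{{ created.instances }}"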
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regard to Ansible, so please take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
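One hedged usage note on this layout: assuming the per-layer playbooks share the same inventory file, each layer can be provisioned in isolation by running its playbook directly, e.g.:

ansible-playbook -i inventory apiservers.yml    # inventory path is a placeholder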
The ec2 module supports an exact_count parameter, not just a count parameter.
It will create (or terminate!) instances so that the number matching the specified tags (instance_tags, counted via count_tag) equals exact_count.
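A hedged sketch of how that reads in practice; the AMI, tag name and count are placeholders:

- name: ensure exactly two instances tagged role=webserver exist
  ec2:
    image: ami-00000000        # placeholder AMI
    instance_type: t2.micro
    instance_tags:
      role: webserver
    exact_count: 2
    count_tag:
      role: webserver
    wait: yes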
