Executing specific tasks against specific hosts in an Ansible playbook

I need to deploy my apps to two VMs using an Ansible playbook. Each VM serves a different purpose and my app consists of several different components, so each component has its own set of tasks in the playbook.
My inventory file looks like this:
[vig]
192.168.10.11
[websvr]
192.168.10.22
Ansible playbooks only have one place for declaring hosts, which is right at the top, and all the tasks execute against those hosts. But what I hope to achieve is:
Tasks 1 to 10 execute against the vig group
Tasks 11 to 20 execute against the websvr group
All in the same playbook, as in: ansible-playbook -i <inventory file> deploy.yml.
Is that possible? Do I have to use Ansible roles to achieve this?

Playbooks can have multiple plays (see https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html).
Playbooks can contain multiple plays. You may have a playbook that
targets first the web servers, and then the database servers. For
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf

- hosts: databases
  remote_user: root
  tasks:
    - name: ensure postgresql is at the latest version
      yum:
        name: postgresql
        state: latest
    - name: ensure that postgresql is started
      service:
        name: postgresql
        state: started
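Applied to the question's inventory, deploy.yml would simply contain two plays, one per group; no roles are required. A minimal sketch (the debug tasks are placeholders for the real tasks 1 to 10 and 11 to 20):

---
- hosts: vig
  tasks:
    # tasks 1 to 10 go here
    - name: example task for the vig group
      debug:
        msg: "runs only against 192.168.10.11"

- hosts: websvr
  tasks:
    # tasks 11 to 20 go here
    - name: example task for the websvr group
      debug:
        msg: "runs only against 192.168.10.22"

Running ansible-playbook -i <inventory file> deploy.yml executes both plays in order, each against its own group.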

Related

Ansible multiple hosts for a specific playbook?

Normally in Ansible you always define a specific host (or group) for a specific playbook. But what if I want to run a playbook for multiple hosts?
For a bit more clarity: I have following groups:
Webservers
Dev Clients
and following playbooks:
install main software
install certbot
install nginx
install vscode
For obvious reasons the nginx playbook should have the entry hosts: Webservers, but what about the main software playbook? That playbook should target both groups.
In the end I want to have many different playbooks and groups, and the only thing I want to execute is ansible-playbook ~/playbooks/webservers.yml, which then triggers "main software, certbot, nginx..." via includes. But this doesn't work without editing the hosts section of every playbook that is included in webservers.yml.
Is it possible to tell a playbook that it should run on the hosts from the main-playbook which has included it?
like: webservers.yml -> hosts -> nginx.yml -> run on all clients which are entered in webservers.yml?
Does anyone have an idea? Or is this just not realizable with Ansible?
You could use the include module with a when statement that checks the group_names variable, as below:
webservers.yml
- hosts: all
  tasks:
    - name: Include Main Software playbook
      include: main_software.yml

    - name: Include Nginx playbook
      include: nginx.yml
      when: "'webservers' in group_names"

    - name: Include Vscode playbook
      include: vscode.yml
      when: "'DevClients' in group_names"
Remember that main_software.yml, nginx.yml and vscode.yml must contain a list of tasks, not a play.
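For reference, such a file holds bare tasks with no hosts: line. A minimal sketch of main_software.yml (the package is purely illustrative, since the question doesn't show its contents); note also that on Ansible 2.4+ the task-level include module is deprecated in favour of include_tasks:

# main_software.yml -- a list of tasks, not a play
- name: Install main software packages
  package:
    name: git
    state: present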

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for SSH to become available on a host before installing a role? There's wait_for_connection, but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails since the SSH service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task into pre_tasks, as the first entry, and it will block everything until the connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on the execution order, see the roles documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into pre_tasks too, so that facts are gathered and curl is installed before anything else runs.
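Putting the answer and the note together, the second play from the question would become (a sketch; the apache2 and common-tools roles are the ones named in the question):

- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: curl
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools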

How to consolidate roles and templates to the same playbook

I need to use a provisioning tool called vCommander. It looks like this tool only supports running Ansible locally on the provisioned host, and you can only provide it with the playbook to run.
I already have working Ansible roles which include tasks and templates. For example, the ntp role looks roughly like this:
ls -1 roles/ntp/
defaults
LICENSE
meta
README.md
tasks
templates
The main task file looks something like this:
cat roles/ntp/tasks/main.yml
---
- name: "Configure ntp.conf"
  template:
    src: "ntp.conf.j2"
    dest: "/etc/ntp.conf"
    mode: "0644"
  become: True

- name: Updating time zone
  shell: tzdata-update

- name: Ensure NTP-related packages are installed.
  package:
    name: ntp
    state: present

- name: Ensure NTP is running and enabled as configured.
  service:
    name: ntpd
    state: started
    enabled: yes
and the template looks like this:
cat roles/ntp/templates/ntp.conf.j2
{{ ansible_managed }}
restrict 127.0.0.1
restrict -6 ::1
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
Is there a way to combine the main.yml, the ntp.conf.j2 template, and the other files included in the role into one yml file?
Please provide an example.
Are you asking for a playbook that applies your ntp role to localhost?
It would look like this:
cat playbook.yml
- hosts: localhost
  roles:
    - ntp
See docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-roles
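If the goal is literally a single self-contained YAML file with no role directory at all, one option (not part of the answer above, and losing the {{ ansible_managed }} header) is to inline the template into the playbook via the copy module's content parameter:

- hosts: localhost
  tasks:
    - name: Configure ntp.conf inline
      become: True
      copy:
        dest: /etc/ntp.conf
        mode: "0644"
        content: |
          restrict 127.0.0.1
          restrict -6 ::1
          server 127.127.1.0
          fudge 127.127.1.0 stratum 10
          driftfile /var/lib/ntp/drift
          keys /etc/ntp/keys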

To execute a task file from a role in ansible

I have set up a basic directory architecture for my Ansible playbooks.
I have defined two roles:
1) www: to manage all the site deployment
2) root: to do the root-related tasks
My root role contains the following tasks:
1) Set up a new site on the target server
2) Start the web server (apache, nginx)
I want to restart my Apache server after the site deployment, and for that I have created a task file called apache-restart.yml under tasks for the root role. This is how my directory structure looks.
This is what I am trying in my site.yml:
---
- name: Deploy Application
  hosts: "{{ host }}"
  roles:
    - www
  become: true
  become_user: www-data
  tags:
    - site-deployment

- name: Restart Apache Server
  hosts: "{{ host }}"
  roles:
    - root
  tasks:
    include: roles/root/apache-restart.yml
  become: true
  become_user: root
  tags:
    - site-deployment
When I run this playbook, it throws this error:
ERROR! A malformed block was encountered.
The error appears to have been in '/Users/Atul/Workplace/infra-automation/deployments/LAMP/site.yml': line 18, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Restart Apache Server
^ here
Is there any better way so that I can directly include the apache-restart.yml file in my site.yml by specifying the root role? Because if I include only the role, Ansible will start looking for main.yml.
tasks should be a list, so:
tasks:
  - include: roles/root/apache-restart.yml
Rename apache-restart.yml to main.yml in the root/tasks directory and the tasks specified in that file will run automatically when you call the root role.
After renaming the file, you can simplify the Restart Apache Server play to:
- name: Restart Apache Server
  hosts: "{{ host }}"
  roles:
    - root
  become: true
  become_user: root
  tags:
    - site-deployment
The Ansible best practice for conditionally restarting a server or daemon is to use handlers.
In your case, you'd create a handler called Restart Apache Server in the www role and notify it from the relevant tasks in that role. With the restart handler in www, you can get rid of the root role altogether.
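A minimal sketch of that handler pattern (the service name and file paths are illustrative assumptions):

# roles/www/handlers/main.yml
- name: Restart Apache Server
  service:
    name: apache2 # assumed service name; httpd on RHEL-family systems
    state: restarted

# roles/www/tasks/main.yml -- a task that triggers the handler
- name: Deploy site configuration
  template:
    src: site.conf.j2 # illustrative template
    dest: /etc/apache2/sites-available/site.conf
  notify: Restart Apache Server

The handler runs at most once per play, at the end, and only if a notifying task reported a change.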

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
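For context, the "creates a host group [webservers] and adds the created instance IP to it" step is typically done with the add_host module inside the provisioning role. A hedged sketch, since the aws role's contents aren't shown in the question (the registered variable name is hypothetical):

- name: Add new instances to the webservers group
  add_host:
    name: "{{ item.public_ip }}"
    groups: webservers
  with_items: "{{ created_instances.instances }}"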
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of ansible ec2 provisioning projects, they are all invariably either provisioning pre-existing ec2 instances, or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers #important bit
  connection: local #important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1 #important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first step of each layer's play adds a new host to the apiservers group, with the second step (Configure ... apiservers) then being able to exclude the localhost without getting a no hosts matching error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner w/regards to ansible, so please do take this for what it is, some guy's attempt to find something that works. There may be better options and this could violate best practice all over the place.
The ec2 module supports an exact_count parameter, not just count.
It will create (or terminate!) instances until the number of running instances matching the count_tag tags equals exact_count; instance_tags sets the tags applied to any instances it creates.
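A hedged sketch of that approach (AMI id, key name and security group are placeholders):

- name: Ensure exactly two webserver instances are running
  ec2:
    key_name: mykey # placeholder
    instance_type: t2.micro
    image: ami-0123456789abcdef0 # placeholder AMI
    group: webserver-sg # placeholder security group
    exact_count: 2
    count_tag:
      role: webserver # instances counted by this tag
    instance_tags:
      role: webserver # tags applied to newly created instances
    wait: yes
  register: ec2_result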
