How to override the hosts variable in an Ansible playbook

I am new to Ansible...
I want to make sure mongo is running on all the hosts with the tag tag_role_mongo, and that node is running on the hosts with the tag tag_role_node.
Is it possible to override the hosts variable?
hosts: "{{ item.tag_name }}"
tasks:
  - command: ... # check ps output for {{ item.process_name }}
with_items:
  - tag_name: tag_role_mongo
    process_name: "mongo"
  - tag_name: tag_role_node
    process_name: "node"
I am pretty sure my syntax is not correct; my question is whether it is even possible to do such a thing in a playbook.
In all the playbook examples, hosts is fixed or is overridden from the command line using the --extra-vars option.
Any examples would be very helpful.

I'm not fully sure I understand your question correctly, but tags are not added at the host level but at the task level - see the documentation. What you probably mean is to execute the same command for two different groups (mongo and node).
For this you can just split your playbook into two parts:
- hosts: mongo_hosts
  tasks:
    - command: ...

- hosts: node_hosts
  tasks:
    - command: ...
Not very nice, but it should solve the problem. You can also create a role and pass it just the name of the process you want to check for, so maintenance becomes easier.
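As a sketch (the role name and the pgrep-based check are made up for illustration), such a role could take the process name as a parameter:
# roles/check_process/tasks/main.yml -- hypothetical role
- name: Check that {{ process_name }} is running
  command: pgrep -x {{ process_name }}
  changed_when: false  # a check never changes the host

# site.yml -- the same role reused for both groups
- hosts: tag_role_mongo
  roles:
    - { role: check_process, process_name: mongo }

- hosts: tag_role_node
  roles:
    - { role: check_process, process_name: node }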

Related

In Ansible, can playbooks pass tags to other playbooks?

We have a "periodic" tag in our roles that is intended to be run at regular intervals by our Ansible box for file assurance, etc. Would it be possible to have a playbook for periodic runs that calls the other playbooks with the appropriate host groups and tags?
The only way to execute an Ansible playbook "with the appropriate host groups and tags" is to run the ansible-playbook executable. This is the only case in which all the data structures, starting from the inventory, are created in isolation from the currently running playbook.
You can simply call the executable using the command module on the control machine:
- hosts: localhost
  tasks:
    - command: ansible-playbook {{ playbook }} --tags {{ tags }}
You can also use local_action or delegate_to.
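For example, a minimal sketch of the delegate_to variant (same command as above, just delegated to the control machine):
- hosts: all
  tasks:
    - command: ansible-playbook {{ playbook }} --tags {{ tags }}
      delegate_to: localhost
      run_once: true  # avoid launching one copy per host in the play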
It might be that you want to include plays, or use roles, however given the problem description in the question, it's impossible to tell.
Here is what we ended up with: it turns out that tags and variables passed on the command line are inherited all the way down the line. This allowed us to pass this on the command line:
ansible-playbook -t periodic periodic.yml
Which calls a playbook like this:
---
- name: This playbook must be called with the "periodic" tag.
  hosts: 127.0.0.1
  any_errors_fatal: True
  tasks:
    - fail:
      when: not (periodic | default(false))

- name: Begin periodic runs for type 1 servers
  include: type1-server.yml
  vars:
    servers:
      - host_group1
      - host_group2
      - ...

- name: Begin periodic runs for type 2 servers
  ...
Our "real" playbooks have - hosts: "{{ servers }}" so that the host list can be inherited from the parent. The tasks in our roles are tagged with "periodic" for things that need to run on a schedule. We then use systemd to schedule the runs. You can use cron, but systemd is better IMHO. Examples can be provided upon request.
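For illustration, a child playbook such as type1-server.yml could then look like this (a sketch; "base" is a hypothetical role name), with the host list inherited from the parent:
# type1-server.yml -- sketch
- hosts: "{{ servers }}"
  roles:
    - { role: base, tags: ["periodic"] }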

Ansible: Include playbook according to inventory variable

I am trying to set up Ansible to be able to run a playbook according to what inventory group the host is in. For example, in the inventory, we have:
[group1]
host1.sub.domain.tld ansible_host=10.0.0.2
...
[group1:vars]
whatsmyplaybook=build-server.yml
Then we want to make a simple playbook that will more or less redirect to the playbook that is in the inventory:
---
- name: Load Playbook from inventory
  include: "{{ hostvars[server].whatsmyplaybook }}"
Where the "server" variable would be the host's FQDN, passed in from the command line:
ansible-playbook whatsmyplaybook.yml -e "server=host1.sub.domain.tld"
Our reasoning for this would be to have a server bootstrap itself from a fresh installation (PXE boot), where it will only really know its FQDN, then have a firstboot script SSH to our Ansible host and kick off the above command. However, when we do this, we get the below error:
ERROR! 'hostvars' is undefined
This suggests that the inventory is not parsed until a host list is provided, which sucks a lot. Is there another way to accomplish this?
A bit of a strange workflow, honestly.
Your setup doesn't work because most variables are not defined at playbook parse time.
You may have more luck defining a single playbook with different plays for different groups (no need to set a group var; just use the correct host pattern, the group name in my example) and executing it limited to a specific host:
site.yml:
---
- hosts: group1
  tasks:
    - include: build-web-server-tasks.yml

- hosts: group2
  tasks:
    - include: build-db-server-tasks2.yml
The command to provision a specific server:
ansible-playbook -l host1.sub.domain.tld site.yml
You can also develop your own dynamic inventory script so that all machines that need to be bootstrapped are automatically added to your inventory, in the right groups, without any manual entry in the inventory file.
For developing a dynamic inventory you can follow the link below:
http://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html
You can also include multiple task files targeted at different groups, as follows.
---
- hosts: all
  tasks:
    - include: build-web-server-tasks.yml
      when: inventory_hostname in groups['group1']

    - include: build-db-server-tasks2.yml
      when: inventory_hostname in groups['group2']
inventory_hostname is the name of the host as configured in Ansible's inventory file. This can be useful when you don't want to rely on the discovered hostname ansible_hostname, or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the domain.
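For instance, a quick debug task (a minimal sketch) shows both values:
- debug:
    msg: "full: {{ inventory_hostname }}, short: {{ inventory_hostname_short }}"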

How do I get the Ansible expect module to properly wait for pid file creation with with_items

I'm trying to start a bunch of services on a node with a service startup shell script we use. It seems like the services do not fully start up because Ansible doesn't wait for the script to finish running (part of it starts a thin webserver in the background). I want the with_items loop to wait until the pid file is in place before starting the second service.
- name: startup all the services
  hosts: all
  gather_facts: no
  tasks:
    - expect:
        command: /bin/bash -c "/home/vagrant/app-src/app_global/bin/server_tool server_daemon {{ item }}"
        creates: "/home/vagrant/app-src/{{ item }}/tmp/pids/thin.pid"
      with_items:
        - srvc1
        - srvc2
I want the with_items loop to apply both to the command and to the thin.pid file it creates.
But it doesn't seem to do anything when I run it.
🍺 vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PLAY [startup all the services] *******************************************
PLAY RECAP ********************************************************************
If I understand your intentions correctly, you shouldn't be using the expect module at all. It is for automating programs that require interactive input (see: Expect).
To start the services sequentially and suspend processing of the playbook until the pid file has been created, you can (currently) split your playbook into two files and use the include module with a with_items loop:
Main playbook:
- name: startup all the services
  hosts: all
  gather_facts: no
  tasks:
    - include: start_daemon.yml srvcname={{ item }}
      with_items:
        - srvc1
        - srvc2
Sub-playbook start_daemon.yml:
- shell: "/home/vagrant/app-src/app_global/bin/server_tool server_daemon {{ srvcname }}"
  args:
    creates: "/home/vagrant/app-src/{{ srvcname }}/tmp/pids/thin.pid"

- name: Waiting for {{ srvcname }} to start
  wait_for: path=/home/vagrant/app-src/{{ srvcname }}/tmp/pids/thin.pid state=present
Remarks:
I think you don't need to specify /bin/bash for the command module (however, it might depend on your configuration). If for some reason server_tool requires a shell environment, use the shell module (as I suggested above).
With name: in the wait_for task you'll get on-screen info about which service Ansible is currently waiting for.
For the future: a natural way to do this would be to use a block with with_items. This feature has been requested, but as of today it is not implemented.

Specifying variables in master Ansible playbook

I am writing a master Ansible playbook which includes other playbooks. Now, I want to set variables that all the playbooks specified within it can use. How do I specify variables in this playbook?
I know that one of the options is to use vars_files with each of the included playbooks. Example:
- include: abc.yml
  vars_files:
    - vars.yml
I am using Ansible 1.9.3.
First, I would really recommend you update your Ansible to the latest version. It is very easy to do, and there is no reason to stay behind.
Having said that, there are many ways to specify variables in your master playbook. All of these are more or less the same as in any other playbook. Briefly:
a. Define them in your playbook itself:
- hosts: webservers
  vars:
    http_port: 80
b. Separate them into a variable file, as you already said:
- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml
vars/external_vars.yml:
somevar: somevalue
password: magic
Other possibilities include:
c. Using facts
d. Registering output into variables
(a short sketch of c and d follows after this list)
Additionally, which may be important for your case:
e. You can pass variables into includes:
tasks:
  - include: wordpress.yml wp_user=timmy
  - include: wordpress.yml wp_user=alice
  - include: wordpress.yml wp_user=bob
f. Passing variables on the command line:
ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"
-e is shorthand for --extra-vars.
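As a sketch of (c) and (d) above (the task contents are made up):
- hosts: webservers
  tasks:
    # (c) gathered facts are ordinary variables
    - debug:
        msg: "This host runs {{ ansible_distribution }}"

    # (d) register stores a task's output in a variable
    - command: uptime
      register: uptime_result
    - debug:
        msg: "{{ uptime_result.stdout }}"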
There might be other ways too that I may be missing at the moment.

How to install multiple instances of a service on a host with Ansible?

I have a host on which I want to install the same service multiple times, but with different paths, service names, etc. (stuff that can be configured via variables).
I usually don't use the same host for this, but this is a special use-case scenario and I can't change the architecture.
What is the optimal way of doing this using Ansible (I am already using 2.0)?
Given you have a role to install your application, you could use role parameters to configure all the moving pieces.
- role: cool-app
  location: /some/path/A
  config:
    some: stuff

- role: cool-app
  location: /some/path/B
  config:
    some: other stuff
Then inside your role you can directly access {{ location }}, {{ config.some }}, etc.
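For example, the tasks inside the role could consume the parameters like this (a sketch; the file names are made up):
# roles/cool-app/tasks/main.yml -- hypothetical
- name: Install the instance into its own location
  unarchive:
    src: cool-app.tgz
    dest: "{{ location }}"

- name: Write per-instance configuration
  copy:
    content: "some: {{ config.some }}"
    dest: "{{ location }}/config.yml"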
A bit more dynamic, but also more complex to create (especially if you already have this working role and now need to change it), is to loop all tasks over a set of instances.
You could again pass this as role parameters:
- role: cool-app
  instances:
    - location: /some/path/A
      config:
        some: stuff
    - location: /some/path/B
      config:
        some: other stuff
Or, better, define it in your host or group vars.
Then every task that is unique to an instance needs to loop over the instances variable. For example, unpacking the archive:
- unarchive:
    src: cool-app.tgz
    dest: "{{ item.location }}"
  with_items: "{{ instances }}"
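If you define the list in your host or group vars instead, as suggested above, the same structure would simply live in a vars file (a sketch; the file name is an example):
# host_vars/myhost.yml -- hypothetical
instances:
  - location: /some/path/A
    config:
      some: stuff
  - location: /some/path/B
    config:
      some: other stuff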
In addition to udondan's response, there is a third solution. Let's consider the following directory structure:
host_vars/myapp01.yml
host_vars/myapp02.yml
roles/touch/tasks/main.yml
inventory.ini
play.yml
and the following file contents:
# host_vars/myapp01.yml
myvar: myval01

# host_vars/myapp02.yml
myvar: myval02

# roles/touch/tasks/main.yml
- name: touch
  command: touch {{ myvar }}

# inventory.ini
myapp01 ansible_host=192.168.0.1
myapp02 ansible_host=192.168.0.1

# play.yml
- hosts: all
  roles:
    - touch
Idea
The idea is to alias the host with app instance names (one alias per instance of the application). In the example, the two aliases (myapp01 and myapp02) target the same host: 192.168.0.1. These two instances of the application are then treated by Ansible exactly as two separate hosts, and:
ansible-playbook play.yml -i inventory.ini
will install two instances of the application (touching files myval01 and myval02) on host 192.168.0.1.
Advantages
This solution allows you, for example, to execute the play on only one instance of the application:
ansible-playbook play.yml -i inventory.ini --limit myapp01
Note
Two DNS names or IP addresses can also target the same machine.
