Running tasks on dynamically collected EC2 instances - amazon-ec2

I'm trying to write a role with tasks that filter out several EC2 instances, add them to the inventory, and then stop a PHP service on them.
This is how far I've gotten; the add_host idea is copied from here: http://docs.catalystcloud.io/tutorials/ansible-create-x-servers-using-in-memory-inventory.html
My service task does not appear to run on the target instances, but instead on the host specified in the playbook that runs this role.
---
- name: Collect ec2 data
  connection: local
  ec2_remote_facts:
    region: "us-east-1"
    filters:
      "tag:Name": MY_TAG
  register: ec2_info

- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- name: Stop the service
  hosts: foobar
  become: yes
  gather_facts: false
  service: name=yii-queue#1 state=stopped enabled=yes
UPDATE: when I try baptistemm's suggestion, I get this:
PLAY [MAIN_HOST_NAME] ***************************
TASK [ec2-manage : Collect ec2 data]
*******************************************
ok: [MAIN_HOST_NAME]
TASK [ec2-manage : Add the hosts to a new group]
*******************************************************
PLAY RECAP **********************************************************
MAIN_HOST_NAME : ok=1 changed=0 unreachable=0 failed=0
UPDATE #2 - yes, the ec2_remote_facts tag filter does return instances (using the real tag value, not the fake one I put in this post). Also, I have seen ec2_instance_facts, but I ran into some issues with it (boto3 is required, although I have a workaround there; still, I'm trying to fix the current issue first).

If you want to run tasks on a group of targets, you need to define a new play after you have created the in-memory list of targets with add_host (as mentioned in your link). So you need something like this:
… # omit the code before
- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- hosts: foobar
  gather_facts: false
  become: yes
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes
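Note that plays can only be defined at playbook level; a role's tasks file cannot contain one, which is also why the hosts: line on your original "Stop the service" task had no effect. A minimal sketch of the calling playbook, assuming the role is named ec2-manage as in your output:

---
- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - ec2-manage          # runs ec2_remote_facts and add_host

- hosts: foobar           # the in-memory group created by add_host
  become: yes
  gather_facts: false
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes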

Related

Access hosts in play filtered by task

I have a task that checks the redis service status on the host list below
- hosts: 192.168.0.1, 192.168.0.2, 192.168.0.3
  tasks:
    - command:
        cmd: service redis-server status
      register: result
    - debug:
        var: result
After checking, I need to find the hosts where the service does not exist, and they should be accessible as a variable so I can proceed with them in the next tasks. Can someone please help?
Similar to Ansible facts, it is also possible to gather service_facts. For example:
- name: Set facts SERVICE
  set_fact:
    SERVICE: "redis-server.service"

- name: Gathering Service Facts
  service_facts:

- name: Show ansible_facts.services
  debug:
    msg:
      - "{{ ansible_facts.services[SERVICE].status }}"
If you would like to perform tasks afterwards on a service whose status you don't know, you can check its state with Conditionals. If the service is not installed at that time, the specific variable (key) will not be defined, so you base the conditional on whether the key exists:
when: ansible_facts.services[SERVICE] is defined
when: ansible_facts.services['redis-server.service'] is defined
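For instance, a minimal sketch tying the conditional to a follow-up task (using the service name from the example above):

- name: Restart redis-server only if it is installed
  service:
    name: redis-server
    state: restarted
  when: ansible_facts.services['redis-server.service'] is defined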
It is also recommended to use the Ansible service module to perform tasks on the service
- name: Start redis-server, if not started
  service:
    name: redis-server
    state: started
instead of using the command module.
Further service-related Q&A:
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
Ansible: How to start stopped services?
Ansible: How to get disabled but running services?
How to list only the running services with ansible_facts?
Finally, I found a solution that matches perfectly:
- name: Check that redis service exists
  systemd:
    name: "redis"
  register: redis_status
  changed_when: redis_status.status.ActiveState == "inactive"

# Build a dict mapping every play host to its registered redis_status
- set_fact:
    _dict: "{{ dict(ansible_play_hosts|zip(
                ansible_play_hosts|map('extract', hostvars, 'redis_status'))) }}"
  run_once: true

# Collect the hosts whose result was marked changed (redis inactive);
# json_query needs the jmespath Python library on the control host
- set_fact:
    _changed: "{{ (_dict|dict2items|json_query('[?value.changed].key'))|join(',') }}"
  run_once: true
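_changed then holds a comma-separated string of the hosts where redis was inactive. A sketch of consuming it in a later task (variable names as built above):

- name: Proceed only on hosts where redis was inactive
  debug:
    msg: "redis needs attention on {{ inventory_hostname }}"
  when: inventory_hostname in _changed.split(',')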

How do I tell Ansible to include localhost on the task?

I have this task:
- name: Install OpenJDK
  become: true
  apt:
    name: openjdk-8-jre-headless
    cache_valid_time: 60
    state: latest
I want to run it on all hosts, including localhost. How can I tell Ansible to include localhost in the hosts for just one play?
You just add localhost to the pattern of targeted hosts in your play. Note that, unless you re-define it in your inventory, localhost is implicit and does not match the all special group.
The general idea:
---
- name: This play will target all hosts in inventory
  hosts: all
  tasks:
    - debug:
        msg: I'm a dummy task

- name: This play will target all inventory hosts AND implicit localhost
  hosts: all:localhost
  tasks:
    - debug:
        msg: Yet another dummy task

ansible - consul kv listing recursive and compare the key values

I am getting an error while trying to retrieve key values from the consul KV store. Our key-value pairs are stored under the config/app-name/ folder; there are many keys, and I want to retrieve all of their values with Ansible.
But I get the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{lookup('consul_kv','config/app-name/')}}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
Removing the folder and adding the folder work well, but getting the key values from the consul cluster throws the error above. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this, but I still get the same error:
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: msg="{{lookup('consul_kv','config/app-name/')}}"
All lookup plugins in Ansible are always evaluated on localhost, see docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by executing a consul fetch from the {{ target }} server.
But the lookup is actually executed on your Ansible control host (localhost), and if you have no ANSIBLE_CONSUL_URL set there, you get the No known Consul servers error.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
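Note also that the environment: keyword only applies to tasks executed on hosts; it does not affect lookup plugins, which is why your second attempt still fails. A minimal sketch of a working approach: export the URL in the shell that launches ansible-playbook and run the lookup from a play on localhost (the playbook filename is illustrative, and the recurse=true term parameter is my assumption for fetching the whole subtree; check the consul_kv lookup docs for your version):

$ export ANSIBLE_CONSUL_URL="http://consul-1.abcaa.com"
$ ansible-playbook consul_kv_test.yml

---
- name: Read consul kv on the control host
  hosts: localhost
  tasks:
    - name: show a single key's value
      debug:
        msg: "{{ lookup('consul_kv', 'config/app-name/') }}"
    - name: show the whole subtree
      debug:
        msg: "{{ lookup('consul_kv', 'config/app-name/ recurse=true') }}"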

Does Ansible delegate_to work with handlers?

I wrote a playbook to modify the IP address of several remote systems. I wrote the playbook to change only a few systems at a time, so I wanted to use delegate_to to change the DNS record on the nameservers as each system was modified, instead of adding a separate play targeted at the nameservers that would change all the host IPs at once.
However, it seems the handler is being run on the primary playbook target, not my delegate_to target. Does anyone have recommendations for working around this?
Here's my playbook:
---
host: hosts-to-modify
serial: 1
tasks:
  - Modify IP for host-to-modify
  //snip//
  - name: Modify DNS entry
    delegate_to: dns-servers
    become: yes
    replace:
    args:
      backup: yes
      regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
      replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
      dest: /etc/bind/db.my.domain
    notify:
      - reload dns service
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
With an inventory file like the following:
[dns-servers]
ns01
ns02
[hosts-to-modify]
host1 new_ip=10.1.1.10
host2 new_ip=10.1.1.11
host3 new_ip=10.1.1.12
host4 new_ip=10.1.1.13
Output snippet, including error message:
TASK [Modify DNS entry] ********************************************************
Friday 02 September 2016 14:46:09 -0400 (0:00:00.282) 0:00:35.876 ******
changed: [host1 -> ns01]
changed: [host1 -> ns02]
RUNNING HANDLER [reload dns service] *******************************************
Friday 02 September 2016 14:47:00 -0400 (0:00:38.925) 0:01:27.385 ******
fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "no service or tool found for: bind9"}
First of all, your example playbook is invalid in several ways: the play syntax is flawed, and delegate_to can't be targeted at a group of hosts.
If you want to delegate to multiple servers, you should iterate over them.
And answering your main question: yes, you can use delegate_to with handlers:
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"

Ansible run task once per database-name

I'm using ansible to deploy several sites to the same server. Each site is a separate 'host' in the ansible hosts inventory, which works really well.
However, there are only two databases: production and testing.
How can I make sure my database-migration task only runs once per database?
I've read into the group_by, run_once and delegate_to features, but I'm not sure how to combine those.
The hosts look something like:
[production]
site1.example.com ansible_ssh_host=webserver.example.com
site2.example.com ansible_ssh_host=webserver.example.com
[beta]
beta-site1.example.com ansible_ssh_host=webserver.example.com
beta-site2.example.com ansible_ssh_host=webserver.example.com
[all:children]
production
beta
The current playbook looks like this:
---
- hosts: all
  tasks:
    # ...
    - name: "postgres: Create PostgreSQL database"
      sudo: yes
      sudo_user: postgres
      postgresql_db: db="{{ DATABASES.default.NAME }}" state=present template=template0 encoding='UTF-8' lc_collate='en_US.UTF-8' lc_ctype='en_US.UTF-8'
      tags: postgres
      register: createdb
      delegate_to: "{{ DATABASES.default.HOST|default(inventory_hostname) }}"
    # ...
    - name: "django-post: Create Django database tables (migrate)"
      django_manage: command=migrate app_path={{ src_dir }} settings={{ item.settings }} virtualenv={{ venv_dir }}
      with_items: django_projects
      #run_once: true
      tags:
        - django-post
        - django-db
        - migrate
The best way I found is to restrict the execution of the task to the first host of a group. For that, add the group name and the databases to group_vars files like:
group_vars/production
---
dbtype: production
django_projects:
  - name: project_1
    settings: ...
  - name: project_n
    settings: ...

group_vars/beta
---
dbtype: beta
django_projects:
  - name: project_1
    settings: ...
  - name: project_n
    settings: ...
hosts
[production]
site1.example.com ansible_ssh_host=localhost ansible_connection=local
site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta]
beta-site1.example.com ansible_ssh_host=localhost ansible_connection=local
beta-site2.example.com ansible_ssh_host=localhost ansible_connection=local
[all:children]
production
beta
and limit the task execution to the first host that matches that group:
- name: "django-post: Create Django database tables (migrate)"
django_manage: command=migrate app_path={{ src_dir }} settings={{ item.settings }} virtualenv={{ venv_dir }}
with_items: django_projects
when: groups[dbtype][0] == inventory_hostname
tags:
- django-post
- django-db
- migrate
The example below illustrates why I say "once per group" is generally not supported out of the box. Naturally, I would love to see a cleaner built-in way, and do let me know if "once per group" isn't what you're after.
Hosts:
[production]
site1.example.com ansible_ssh_host=localhost ansible_connection=local
site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta]
beta-site1.example.com ansible_ssh_host=localhost ansible_connection=local
beta-site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta:vars]
dbhost=beta-site1.example.com
[production:vars]
dbhost=site1.example.com
[all:children]
production
beta
Example playbook which will do something on the dbhost once per group (the two real groups):
---
- hosts: all
  tasks:
    - name: "do this once per group"
      sudo: yes
      delegate_to: localhost
      debug:
        msg: "do something on {{hostvars[groups[item.key].0]['dbhost']}} for {{item}}"
      register: create_db
      run_once: yes
      with_dict: groups
      when: item.key not in ['all', 'ungrouped']
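The debug task is only a stand-in; a sketch of swapping in the real migration from the question, delegated to each group's dbhost (the per-project settings loop is omitted for brevity):

- name: "run the migration once per group"
  django_manage: command=migrate app_path={{ src_dir }} virtualenv={{ venv_dir }}
  delegate_to: "{{ hostvars[groups[item.key].0]['dbhost'] }}"
  run_once: yes
  with_dict: groups
  when: item.key not in ['all', 'ungrouped']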
