How to run a particular task on a specific host in Ansible

my inventory file's contents -
[webservers]
x.x.x.x ansible_ssh_user=ubuntu
[dbservers]
x.x.x.x ansible_ssh_user=ubuntu
In my tasks file, which is in a common role (i.e. it runs on both hosts), I want to run the following task only on the webservers host group defined in the inventory file, not on dbservers:
- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
Is the when keyword helpful here, or is there another way? How could I do this?

If you want to run your role on all hosts but have a single task limited to the webservers group, then, as you already suggested, when is your friend.
You could define a condition like:
when: inventory_hostname in groups['webservers']
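Applied to the task from the question, a minimal sketch (using the same variables) would look like:
- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
  when: inventory_hostname in groups['webservers']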

Thank you, this helps me too.
hosts file:
[production]
host1.dns.name
[internal]
host2.dns.name
requirements.yml file:
- name: install the sphinx-search rpm from a remote repo on x86_64 - internal host
  when: inventory_hostname in groups['internal']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-1.rhel7.x86_64.rpm
    state: present
- name: install the sphinx-search rpm from a remote repo on i386 - Production
  when: inventory_hostname in groups['production']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-2.rhel6.i386.rpm
    state: present

An alternative to consider in some scenarios is -
delegate_to: hostname
There is also this example from the Ansible docs, to loop over a group. https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html -
- hosts: app_servers
  tasks:
    - name: gather facts from db servers
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      loop: "{{ groups['dbservers'] }}"

How to run a task only once during an entire Ansible playbook?

I have an Ansible playbook which does multiple things, as below -
Download artifacts from Nexus onto the local server (Ansible master).
Copy those artifacts onto multiple remote machines, say server1/2/3 etc.
I have used roles in my playbook, and I want the role that downloads the artifacts (repodownload) to run only once, because why would I want to download the same thing again. I have tried run_once: true, but I guess that won't work because it only applies within a single play run, and my play runs multiple times for multiple hosts.
---
- name: Deploy my Application to tomcat nodes
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
Here target_env is being passed from the command line and it's the remote host group.
Any help is appreciated.
Below is the code from main.yml in the repodownload role -
- connection: local
  name: Downloading files from Nexus to local server
  get_url: url="{{ nexus_url }}/{{ item }}/{{ myvm_release_version }}/{{ item }}-{{ release_ver }}.war" dest={{ local_server_location }}
  with_items:
    - "{{ temps }}"
This is a really simple one that I battled with too.
Try this:
- connection: local
  name: Downloading files from Nexus to local server
  get_url:
    url: "{{ nexus_url }}/{{ item }}/{{ myvm_release_version }}/{{ item }}-{{ release_ver }}.war"
    dest: "{{ local_server_location }}"
  with_items:
    - "{{ temps }}"
  run_once: true
Just something else, unrelated to your main question:
When you use a module that has really long args, like in your example above, rather break the params into their own lines nested under the module. It makes for easier reading, and it makes it easier to spot any potential typos or syntax errors early.
Okay, extending from your conversation with Zeitounator: the following workaround will work without changing your vars files. Just remember that this is a workaround and might not be the most efficient way to do the job.
---
- name: Download my repo to localhost
  # Executes only for the first host in target_env and has visibility of the group vars of target_env
  hosts: '{{ target_env }}[0]'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Deploy my Application to tomcat nodes
  # Executes for all hosts in target_env
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy

How to add a host to a group in an Ansible Tower inventory?

How can I add a host to a group using the tower_group or tower_host modules?
The following code creates a host and a group, but they are unrelated to each other:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - tower_inventory:
        name: My Inventory
        organization: Default
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_host:
        name: myhost
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_group:
        name: mygroup
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
The docs mention the instance_filters parameter ("Comma-separated list of filter expressions for matching hosts."), however they do not provide any usage example.
Adding instance_filters: myhost to the tower_group task has no effect.
I solved it using the Ansible shell module and tower-cli. I know that writing an Ansible module would be better than this, but as a fast solution...
- hosts: awx
  vars:
  tasks:
    - name: Create Inventory
      tower_inventory:
        name: "Foo Inventory"
        description: "Our Foo Cloud Servers"
        organization: "Default"
        state: present
    - name: Create Group
      tower_group:
        inventory: "Foo Inventory"
        name: Testes
      register: fs_group
    - name: Create Host
      tower_host:
        inventory: "Foo Inventory"
        name: "host"
      register: fs_host
    - name: Associate host group
      shell: tower-cli host associate --host "{{ fs_host.id }}" --group "{{ fs_group.id }}"
This isn't natively available in the modules included with Tower, which are older and use the deprecated tower-cli package.
But it is available in the newer AWX collection, which uses the awx CLI, as long as you have a recent enough Ansible (2.9 should be fine).
In essence, install the awx collection through a requirements file, or directly like
ansible-galaxy collection install awx.awx -p ./collections
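If you prefer the requirements-file route, a minimal sketch of such a file (the path collections/requirements.yml is just an assumption here) could be:
# collections/requirements.yml (assumed location)
---
collections:
  - name: awx.awx
which you would then install with ansible-galaxy collection install -r collections/requirements.yml -p ./collections.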
Add the awx.awx collection to your playbook
collections:
  - awx.awx
and then use the hosts: option of tower_group:
- tower_group:
    name: mygroup
    inventory: My Inventory
    hosts:
      - myhost
    state: present
You can see a demo playbook here.
Be aware though that you may need preserve_existing_hosts: True if your group already contains other hosts. Unfortunately there does not seem to be an easy way to remove a single host from a group.
In terms of your example this would probably work:
---
- hosts: localhost
  connection: local
  gather_facts: false
  collections:
    - awx.awx
  tasks:
    - tower_inventory:
        name: My Inventory
        organization: Default
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_host:
        name: myhost
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_group:
        name: mygroup
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
        hosts:
          - myhost
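If mygroup already contains hosts that should stay in it, the preserve_existing_hosts option mentioned above could be added to that last task, e.g. a sketch:
    - tower_group:
        name: mygroup
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
        preserve_existing_hosts: True
        hosts:
          - myhost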

Trying to include a list of tasks from a playbook in Ansible

My folder structure:
First I'll give you this so you can see how this is laid out and reference it when reading below:
/environments
    /development
        hosts                  // Inventory file
        /group_vars
            proxies.yml
        /custom_tasks
            firewall_rules.yml // File I'm trying to bring in
playbook.yml                   // Root playbook, just brings in the plays
rev-proxy.yml                  // Reverse-proxy playbook, included by playbook.yml
playbook.yml:
---
- include: webserver.yml
- include: rev-proxy.yml
proxies.yml just contains firewall_custom_include_file: custom_tasks/firewall_rules.yml
firewall_rules.yml:
tasks:
- name: "Allowing traffic from webservers on 80"
ufw: src=10.10.10.3, port=80, direction=in, rule=allow
- name: "Allowing traffic all on 443"
ufw: port=443, rule=allow
and finally rev-proxy.yml play:
---
- hosts: proxies
  become: yes
  roles:
    - { role: firewall }
    - { role: geerlingguy.nginx }
  pre_tasks:
    # jessie-backports for nginx-extras 1.10
    - name: "Adding jessie-backports repo"
      copy: content="deb http://ftp.debian.org/debian jessie-backports main" dest="/etc/apt/sources.list.d/jessie-backports.list"
    - name: Updating apt-cache.
      apt: update_cache="yes"
    - name: "Installing htop"
      apt:
        name: htop
        state: present
    - name: "Copying SSL certificates"
      copy: src=/vagrant/ansible/files/ssl/ dest=/etc/ssl/certs force=no
  tasks:
    - name: "Including custom firewall rules."
      include: "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml"
      when: firewall_custom_include_file is defined
  vars_files:
    - ./vars/nginx/common.yml
    - ./vars/nginx/proxy.yml
What I'm trying to do:
Using Ansible 2.2.1.0
I'm trying to include a list of tasks that will be run if a variable firewall_custom_include_file is set. The list is included relative to the inventory directory by doing "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml" - in this case that works out to /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml
Essentially the idea here is that I need to have different firewall rules be executed based on what environment I'm in, and what hosts are being provisioned.
To give a simple example: I might want to whitelist a database server IP on the production webserver, but not on the reverse proxy, and also not on my development box.
The problem:
Whenever I include firewall_rules.yml like above, it tells me:
TASK [Including custom firewall rules.] ****************************************
fatal: [proxy-1]: FAILED! => {"failed": true, "reason": "included task files must contain a list of tasks"}
I'm not sure what it's expecting, so I tried taking out the tasks: at the beginning of the file, making it:
- name: "Allowing traffic from webservers on 80"
ufw: src=10.10.10.3, port=80, direction=in, rule=allow
- name: "Allowing traffic all on 443"
ufw: port=443, rule=allow
But then it gives me the error:
root@ansible-control:/vagrant/ansible# ansible-playbook -i environments/development playbook.yml
ERROR! Attempted to execute "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as inventory script: problem running /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml --list ([Errno 8] Exec format error)
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as YAML: 'AnsibleSequence' object has no attribute 'keys'
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as ini file: /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml:2: Expected key=value host variable assignment, got: name:
At this point I'm not really sure what it's looking for in the included file, and I can't seem to really find clear documentation on this, or other people having this issue.
Try executing with -i environments/development/hosts instead of the directory.
But I bet that storing a tasks file inside the inventory directory is far from best practice.
You may want to define the list of custom rules as an inventory variable, e.g.:
custom_rules:
  - src: 10.10.10.3
    port: 80
    direction: in
    rule: allow
  - port: 443
    rule: allow
And instead of the include task, do something like this:
- ufw:
    port: "{{ item.port | default(omit) }}"
    rule: "{{ item.rule | default(omit) }}"
    direction: "{{ item.direction | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
  with_items: "{{ custom_rules }}"

Ansible delegate_to: how to set the user that is used to connect to the target?

I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for password of the wrong user:
local-mac-user@my-staging-host's password:
So instead of using the ansible_user defined in the inventory or the remote_user defined in the task to connect to the target (the hosts specified in the play), it connects to the target hosts as the user that was used to reach the delegate-to box.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, doesn't work in 2.1.x
The remote_user setting is used at the playbook level to set the user a particular play runs as.
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If you only have a certain task that needs to be run as a different user you can use the become and become_user settings.
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a user within a play, you can group them with block.
example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
This one works for me, but please note that it is for Windows; Linux does not require become_method: runas and basically does not have it.
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
Try setting become: yes and become_user: stager in your YAML file... That should fix it...
https://docs.ansible.com/ansible/2.5/user_guide/become.html
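Applied to the synchronize task from the question, a sketch of that suggestion would be (src/dest placeholders kept as in the question):
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      become: yes
      become_user: stager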

Ansible: run a task once per database name

I'm using Ansible to deploy several sites to the same server. Each site is a separate 'host' in the Ansible hosts inventory, which works really well.
However, there are only two databases: production and testing.
How can I make sure my database-migration task only runs once per database?
I've read into the group_by, run_once and delegate_to features, but I'm not sure how to combine those.
The hosts look something like:
[production]
site1.example.com ansible_ssh_host=webserver.example.com
site2.example.com ansible_ssh_host=webserver.example.com
[beta]
beta-site1.example.com ansible_ssh_host=webserver.example.com
beta-site2.example.com ansible_ssh_host=webserver.example.com
[all:children]
production
beta
The current playbook looks like this:
---
- hosts: all
  tasks:
    # ...
    - name: "postgres: Create PostgreSQL database"
      sudo: yes
      sudo_user: postgres
      postgresql_db: db="{{ DATABASES.default.NAME }}" state=present template=template0 encoding='UTF-8' lc_collate='en_US.UTF-8' lc_ctype='en_US.UTF-8'
      tags: postgres
      register: createdb
      delegate_to: "{{ DATABASES.default.HOST|default(inventory_hostname) }}"
    # ...
    - name: "django-post: Create Django database tables (migrate)"
      django_manage: command=migrate app_path={{ src_dir }} settings={{ item.settings }} virtualenv={{ venv_dir }}
      with_items: django_projects
      #run_once: true
      tags:
        - django-post
        - django-db
        - migrate
The best way I found was to restrict the execution of a task to the first host of a group. Therefore you need to add the group name and the databases to a group_vars file like:
group_vars/production
---
dbtype: production
django_projects:
  - name: project_1
    settings: ...
  - name: project_n
    settings: ...
group_vars/beta
---
dbtype: beta
django_projects:
  - name: project_1
    settings: ...
  - name: project_n
    settings: ...
hosts
[production]
site1.example.com ansible_ssh_host=localhost ansible_connection=local
site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta]
beta-site1.example.com ansible_ssh_host=localhost ansible_connection=local
beta-site2.example.com ansible_ssh_host=localhost ansible_connection=local
[all:children]
production
beta
and limit the task execution to the first host that matches that group:
- name: "django-post: Create Django database tables (migrate)"
django_manage: command=migrate app_path={{ src_dir }} settings={{ item.settings }} virtualenv={{ venv_dir }}
with_items: django_projects
when: groups[dbtype][0] == inventory_hostname
tags:
- django-post
- django-db
- migrate
So, the below will illustrate why I say "once per group" is generally not supported. Naturally, I would love to see a cleaner out-of-the-box way, and do let me know if "once per group" isn't what you're after.
Hosts:
[production]
site1.example.com ansible_ssh_host=localhost ansible_connection=local
site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta]
beta-site1.example.com ansible_ssh_host=localhost ansible_connection=local
beta-site2.example.com ansible_ssh_host=localhost ansible_connection=local
[beta:vars]
dbhost=beta-site1.example.com
[production:vars]
dbhost=site1.example.com
[all:children]
production
beta
Example playbook which will do something on the dbhost once per group (the two real groups):
---
- hosts: all
  tasks:
    - name: "do this once per group"
      sudo: yes
      delegate_to: localhost
      debug:
        msg: "do something on {{ hostvars[groups[item.key].0]['dbhost'] }} for {{ item }}"
      register: create_db
      run_once: yes
      with_dict: groups
      when: item.key not in ['all', 'ungrouped']
