Running a task on a single host always with Ansible?

I am writing a task to download a database dump from a specific location. It will always be run on the same host.
So I am including the task as follows in the main playbook:
tasks:
  - include: tasks/dl-db.yml
The content of the task is:
---
- name: Fetch the Database
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
But I want it to fetch from a single specific host not all hosts.
Is a task the right approach for this?

If all you need is for the task to run once, rather than on every host, you can use run_once like so:
---
- name: Fetch the Database
  run_once: true
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
The task will then run only on the first host that reaches it. If you want to target one particular host, you can combine this with delegate_to:
---
- name: Fetch the Database
  run_once: true
  delegate_to: node1
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
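An alternative (a sketch, assuming the host is named node1 in your inventory) is to guard the task with a when condition on inventory_hostname, so it only fires on that one host:

```yaml
---
- name: Fetch the Database
  fetch:
    src: /home/ubuntu/mydb.sql.gz
    dest: /tmp/mydb.sql.bz
    fail_on_missing: yes
  # Runs only when the current play host is node1; all other hosts skip it.
  when: inventory_hostname == "node1"
```

run_once with delegate_to is usually the cleaner choice, though, since the when version still evaluates the condition on every host in the play and merely skips most of them.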

Related

How to run tasks file as a playbook and as an include_file

I have some generic tasks, like updating DNS records based on my machine -> server mapping, which I use in my playbooks with include_tasks like so:
- name: (include) Update DNS
  include_tasks: task_update_dns.yml
I have many of these "generic tasks" that simply act as "shared tasks" across many playbooks.
But in some cases I just want to run one of these generic tasks on a server directly, and running the following gives the error below:
ansible-playbook playbooks/task_update_dns.yml --limit $myserver
# ERROR
ERROR! 'set_fact' is not a valid attribute for a Play
That is simply because the file is not a "playbook"; it contains only "tasks".
File: playbooks/task_update_dns.yml
---
- name: dig mydomain.com
  set_fact:
    my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
  tags: always

- name: Set entries
  blockinfile:
    ....
I know I could write an empty playbook that only includes the task file, but I don't want to create a shallow playbook for every task.
Is there a way to structure the task file so that I can run it both via include_tasks and as a stand-alone play from the command line?
Have you tried using custom tags for those tasks? Create a playbook with all the tasks:
---
- name: Update web servers
  hosts: webservers
  remote_user: root
  tasks:
    - name: dig mydomain.com
      set_fact:
        my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
      tags: always

    - name: Set entries
      blockinfile:
      tags:
        - set_entries

    - name: (include) Update DNS
      include_tasks: task_update_dns_2.yml
      tags:
        - dns
...
When you need to run only one specific task, add the --tag parameter to the command with the tag name:
ansible-playbook playbooks/task_update_dns.yml --limit $myserver --tag set_entries

Ansible: Trigger the task only when previous task is successful and the output is created

I am deploying a VM in Azure using Ansible and using the public IP it creates in subsequent tasks. But creating the public IP takes so long that the subsequent task fails when it runs. The creation time also varies; it's not fixed. I want to introduce some logic so that the next task only runs once the IP exists.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It's greatly appreciated.
I think the wait_for module is a bad choice here: while it can test for port availability, it will often give you false positives, since the port opens before the service is actually ready to accept connections.
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task.
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...
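For Azure specifically, the same add_host / wait_for_connection pattern should carry over. The sketch below is an assumption to verify against your installed collection's docs: the azure_rm_publicipaddress_info module, its publicipaddresses return key, and the public-IP resource name are all guesses, not taken from the question.

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    # After azure_rm_virtualmachine has created the VM, look up its public IP.
    # Module name and return structure are assumptions -- check your collection docs.
    - azure_rm_publicipaddress_info:
        resource_group: myResourceGroup
        name: testvm10-pip        # hypothetical public-IP resource name
      register: pip

    - add_host:
        name: master
        ansible_host: "{{ pip.publicipaddresses[0].ip_address }}"
        ansible_user: chouseknecht

- hosts: master
  gather_facts: false
  tasks:
    # Blocks until Ansible can actually open an SSH session to the new VM.
    - wait_for_connection:
        delay: 10
        timeout: 600
```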

Ansible connect via ssh after certain tasks have populated passwords

In Ansible I need to execute a set of tasks that obtain passwords from a third party (that part is handled) and then use those SSH credentials to connect.
The problem: the best way I've found to loop through my inventory is to include a list of tasks, which is fine. The major problem is that this only works if I set hosts in my main.yml playbook to localhost (or set it to the server group name and specify connection: local), which makes the command module execute locally and defeats the purpose.
I have tried looking into the SSH connection plugin, but it does not seem to register anything and gives me a no_action detected. I am aware I am likely overlooking something glaring.
I will be posting closer to exact code later but what I have now is
main.yml
---
- hosts: localhost
  tasks:
    - name: subplay
      include: secondary.yml
      vars:
        user: myUser
        address: "{{ hostvars[item].address }}"
      with_items: "{{ groups['mygroup'] }}"
secondary.yml
---
- name: fetch password
  [...fetchMyPassword, it works]
  register: password

- name:
  [...Need to connect with fetched user for this task and this task only..]
  command: /my/local/usr/task.sh
I am wanting to connect and execute the script there but it seems no matter what I try it either fails to execute at all or executes locally.
Additionally, I might note I checked out https://docs.ansible.com/ansible/latest/plugins/connection/paramiko_ssh.html and
https://docs.ansible.com/ansible/latest/plugins/connection/ssh.html but must be doing something wrong
It looks to me like only your fetch task needs to be delegated to localhost; the rest should run on mygroup. Once you have all your connection info, set up the connection with set_fact by setting ansible_user, ansible_ssh_pass and ansible_host. Try this:
main.yml
---
- hosts: mygroup # each host in mygroup runs the included tasks in turn
  tasks:
    - name: subplay
      include: secondary.yml
      vars:
        user: myUser
        address: "{{ hostvars[inventory_hostname].address }}"
secondary.yml
---
- name: fetch password
  [...fetchMyPassword, it works]
  delegate_to: localhost # this task is only run on localhost
  register: password

- set_fact: # use the registered password and vars to set up the connection
    ansible_user: "{{ user }}"
    ansible_ssh_pass: "{{ password }}"
    ansible_host: "{{ address }}"

- name: Launch task # this task is run on each host of mygroup
  [...Need to connect with fetched user for this task and this task only..]
  command: /my/local/usr/task.sh
Launch this with:
ansible-playbook main.yml
Then try turning your secondary.yml into a role, with your main.yml as the playbook that applies it.
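As a sketch of that suggestion (file and role names here are hypothetical): move the contents of secondary.yml into a role's tasks/main.yml, and keep main.yml as a thin playbook that applies the role:

```yaml
# Layout:
#   roles/fetch_and_run/tasks/main.yml   <- the contents of secondary.yml
#   main.yml                             <- the play below
---
- hosts: mygroup
  roles:
    - fetch_and_run
```

The role's tasks run once per host in mygroup, so the per-host delegation and set_fact logic from secondary.yml works unchanged inside the role.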

How to define when condition based on matching string

I am writing a playbook where I need to select hosts that belong to a group whose name starts with "hadoop". The hosts will be supplied as an extra variable in the form of a parent group. The task is to upgrade Java on all machines from the repo, but certain servers don't have the repo configured or sit in a DMZ and can only use their local repo. I need to set local_rpm: true so that, when the playbook executes, the servers belonging to a hadoop group have this fact enabled.
I tried it like below:
- hosts: '{{ target }}'
  gather_facts: no
  become: true
  tasks:
    - name: enable local rpm
      set_fact:
        local_rpm: true
      when: "'hadoop' in group_names"
      tags: always
and then I import my role based on the tag.
It's probably better to use group_vars in this case.
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#group-variables
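A minimal sketch of that approach: put the variable in a group_vars file named after the group, and every host in the hadoop group picks it up automatically, with no set_fact task needed (filenames follow Ansible's group_vars convention; the hadoop group name comes from the question):

```yaml
# group_vars/all.yml -- default for every host
local_rpm: false
```

```yaml
# group_vars/hadoop.yml -- overrides the default for hosts in the hadoop group
local_rpm: true
```

Group variables have higher precedence than the all group's defaults, so hosts in hadoop see local_rpm: true while everyone else keeps false, and the conditional set_fact task can be dropped entirely.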

How to create a file locally with ansible templates on the development machine

I'm starting out with Ansible and I'm looking for a way to create a boilerplate project on the server and in the local environment with Ansible playbooks.
I want to use Ansible templates locally to create some generic files.
But how would I get Ansible to execute something locally?
I read something about local_action, but I guess I did not get it right.
This works for the webserver... but how do I adapt it to create some files locally?
- hosts: webservers
  remote_user: someuser
  tasks:
    - name: create some file
      template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
You can delegate tasks to any host you like with the delegate_to parameter, for example:
- name: create some file
  template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
  delegate_to: localhost
See Playbook Delegation in the docs.
If your playbook should in general run locally and no external hosts are involved though, you can simply create a group which contains localhost and then run the playbook against this group. In your inventory:
[local]
localhost ansible_connection=local
and then in your playbook:
hosts: local
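Putting those two pieces together, a minimal local-only playbook might look like this (a sketch reusing the question's template paths; writing under /etc locally may additionally require become: true):

```yaml
---
- hosts: local
  tasks:
    - name: create some file locally
      template:
        src: ~/workspace/ansible_templates/somefile_template.j2
        dest: /etc/somefile/apps-available/someproject.ini
```

Because the local group's host sets ansible_connection=local in the inventory, the template task renders and writes the file on the machine running ansible-playbook, with no SSH involved.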
Ansible has a local_action directive to support these scenarios which avoids the localhost and/or ansible_connection workarounds and is covered in the Delegation docs.
To modify your original example to use local_action:
- name: create some file
  local_action: template src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
which looks cleaner.
If you cannot do/allow SSH to localhost, you can split the playbook into local actions and remote actions.
Setting connection: local tells Ansible not to use SSH for a play, as shown here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#local-playbooks
Example:
# myplaybook.yml
- hosts: remote_machines
  tasks:
    - debug: msg="do stuff in the remote machines"

- hosts: 127.0.0.1
  connection: local
  tasks:
    - debug: msg="ran in local ansible machine"

- hosts: remote_machines
  tasks:
    - debug: msg="do more stuff in remote machines"
