How to let Ansible run a command with host as argument locally

I installed services on multiple hosts using an Ansible playbook. The hosts are defined in the inventory file:
[services]
host_ip1
host_ip2
...
Now I need to test whether each host works properly using the command:
service-cli -h <host_ip1> -p 6380 ping
...
How do I write that using Ansible? The command must run locally, not on the remote host. How do I pass the host_ipX values to it?

I know how to use delegate_to: 127.0.0.1, but I do not know how to get the IPs from the inventory file using with_items. What keyword should I use? services?
Your use case is covered in the documented examples for the extract filter:
- debug:
    msg: service-cli -h {{ item }} -p 6380 ping
  with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'
  delegate_to: localhost
  # this is required if you want the task as part of a playbook
  # that contains "hosts: all"
  run_once: yes
Or, as an alternative to run_once, you can isolate that as its own play:
- hosts: localhost
  gather_facts: no
  tasks:
  - debug:
      msg: service-cli -h {{ item }} -p 6380 ping
    with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'

- hosts: all
  ... # and now back to normal life
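The debug task above only prints the command. A minimal sketch that actually runs the health check from the control node could look like this; the check for "PONG" in the output is an assumption about what service-cli replies on success, so adjust it to whatever your tool actually prints:
- hosts: localhost
  gather_facts: no
  tasks:
  - name: Ping each service host from the control node
    command: service-cli -h {{ item }} -p 6380 ping
    register: ping_result
    changed_when: false
    # "PONG" is an assumed success marker; replace with the real reply of service-cli
    failed_when: "'PONG' not in ping_result.stdout"
    with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'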

Related

Ansible: loops with inventory groups elements

I got a bit stuck on the following problem. I have an inventory file similar to this:
[server]
server0 ansible_host=10.254.1.35
server1 ansible_host=10.254.1.46
[worker]
worker0 ansible_host=10.254.1.10
worker1 ansible_host=10.254.1.12
worker2 ansible_host=10.254.1.13
and I want to make a role that tests the connectivity between all machines from the two groups (server0 to all workers, server1 to all workers, etc.).
So something like:
- hosts: "{{ target | default('server') }}"
tasks
- name: Test connectivity
shell: |
ping {{ worker_machine_ip }}
args:
executable: /bin/bash
where worker_machine_ip would be replaced with the IP of each worker from the inventory file. Is this possible?
You can use the special variable groups to list all hosts in a specific group, then the other special variable hostvars to get the IP of the other host you want to ping:
- hosts: "{{ target | default('server') }}"
tasks:
- name: Test connectivity
shell: ping -c 3 {{ hostvars[item].ansible_host }}
loop: "{{ groups['worker'] }}"
args:
executable: /bin/bash
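As a usage sketch (assuming the play above is saved as test_connectivity.yml and the inventory shown earlier is in a file named inventory, both hypothetical names), you can run it against the default server group or point target at another group:
# ping all workers from the hosts in the 'server' group (the default)
ansible-playbook -i inventory test_connectivity.yml
# or run the same checks from the workers themselves
ansible-playbook -i inventory test_connectivity.yml -e target=worker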

ansible multiple per-host output to file

I want to run an Ansible playbook on multiple hosts and register the output to a variable. Then, using this variable, I want to copy the output to a single file. The issue is that in the end the file contains the output of only one host. How can I append the output of all hosts to the file, one after the other? I don't want to use serial: 1 as it slows down execution considerably when there are multiple hosts.
-
  hosts: all
  remote_user: cisco
  connection: local
  gather_facts: no
  vars_files:
  - group_vars/passwords.yml
  tasks:
  - name: Show command collection
    ntc_show_command:
      connection: ssh
      template_dir: /ntc-ansible/ntc-templates/templates
      platform: cisco_ios
      host: "{{ inventory_hostname }}"
      username: "{{ ansible_ssh_user }}"
      password: "{{ ansible_ssh_pass }}"
      command: "{{ commands }}"
    register: result
  - local_action: copy content="{{ result.response }}" dest='/home/user/show_cmd_ouput.txt'
The result variable will be registered as a fact on each host the ntc_show_command task ran on, so you should access the values through the hostvars dictionary.
- local_action:
    module: copy
    content: "{{ groups['all'] | map('extract', hostvars, 'result') | map(attribute='response') | list }}"
    dest: /home/user/show_cmd_ouput.txt
  run_once: true
You also need run_once, because otherwise the action would still run as many times as there are hosts in the group.
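If you want the resulting file to be more readable than a raw list dump, one possible variation (same register name and destination path as above, just serialized with the to_nice_json filter) is:
- local_action:
    module: copy
    content: "{{ groups['all'] | map('extract', hostvars, 'result') | map(attribute='response') | list | to_nice_json }}"
    dest: /home/user/show_cmd_ouput.txt
  run_once: true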

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 ansible node.
I am getting the hostname of the VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to combine each VM's IP (collected with gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append that to a file temp_hostname on localhost, hence I am delegating the task to localhost.
The issue is that on the console the lineinfile module reports the line as added for each node when the delegated task runs, but when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
  - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
    shell: hostname
    register: hostname_output
  - name: writing hostname_output in ansible node in file on ansible node
    lineinfile:
      line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
      dest: temp_hostname
      state: present
    delegate_to: 127.0.0.1
I even tried the copy module, as suggested in "Ansible writing output from multiple task to a single file", but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
  - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
    shell: hostname
    register: hostname_output
  - name: writing hostname_output in ansible node in file on ansible node
    copy:
      content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
      dest: /volume200gb/sushil/test/code_hostname/temp_hostname
    delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
  - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
    shell: hostname
    register: hostname_output
  - name: writing hostname_output in ansible node in file on ansible node
    shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
    delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0.
I am using the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrong?
I tried to reproduce your issue and it did not happen for me, so I suspect this is a problem with your version of Ansible; try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably an issue with file locking that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
  - name: Get hostnames
    setup:
      filter: ansible_hostname
  - name: writing hostname_output in ansible node in file on ansible node
    lineinfile:
      line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
      dest: temp_hostname
      state: present
    delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, each delegated task writes to its own file (named here after the inventory_hostname value), so there are no concurrent writes any more.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
  - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
    shell: hostname
    register: hostname_output
  - name: deleting tmp folder
    file: path=/tmp/temp_hostname state=absent
    delegate_to: 127.0.0.1
    run_once: true
  - name: create tmp folder
    file: path=/tmp/temp_hostname state=directory
    delegate_to: 127.0.0.1
    run_once: true
  - name: writing hostname_output in ansible node in file on ansible node
    template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
    delegate_to: 127.0.0.1
  - name: assemble hostnames
    assemble: src=/tmp/temp_hostname/ dest=temp_hostname
    delegate_to: 127.0.0.1
    run_once: true
Obviously you have to create the tpl.j2 file.
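For reference, a minimal tpl.j2 matching the line format used in the question could be just:
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}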

How to get value of --limit argument inside an Ansible playbook?

In Ansible, is it possible to get the value of the argument to the --limit option within a playbook? What I want to do is something like this:
---
- hosts: all
  remote_user: root
  tasks:
  - name: The value of the --limit argument
    debug:
      msg: "argument of --limit is {{ ansible-limit-arg }}"
Then when I run the command:
$ ansible-playbook getLimitArg.yaml --limit webhosts
I'll get this output:
argument of --limit is webhosts
Of course, I made up the variable name "ansible-limit-arg", but is there a valid way of doing this? I could specify "webhosts" twice, the second time with --extra-vars, but that seems a roundabout way of doing it.
Since Ansible 2.5 you can access the value using a new magic variable ansible_limit, so:
- debug:
    var: ansible_limit
Have you considered using the {{ ansible_play_batch }} built-in variable?
- hosts: all
  become: "False"
  gather_facts: "False"
  tasks:
  - name: The value of the --limit argument
    debug:
      msg: "argument of --limit is {{ ansible_play_batch }}"
    delegate_to: localhost
It won't tell you exactly what was entered as the argument but it will tell you how Ansible interpreted the --limit arg.
You can't do this without an additional plugin or module. If you really need it, write an action plugin and access the options via cli.options (see example).
P.S. If you try to use --limit to run playbooks against different environments, don't do it; you can accidentally blow away your whole infrastructure. Use different inventories instead.
Here is a small code block to achieve the same:
- block:
  - shell: 'echo {{ inventory_hostname }} >> /tmp/hosts'
  - shell: cat /tmp/hosts
    register: servers
  - file:
      path: /tmp/hosts
      state: absent
  delegate_to: 127.0.0.1
- debug: var=servers.stdout_lines
Then use the stdout_lines output as you wish; for example, mine:
- add_host:
    name: "{{ item }}"
    groups: cluster
    ansible_user: "ubuntu"
  with_items:
  - "{{ servers.stdout_lines }}"

Override hosts variable of Ansible playbook from the command line

This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
  - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override it as a one-time thing, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either command-line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
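Since the value is just an Ansible host pattern, you could also target more than one group in a single run; a sketch using the groups from the question:
ansible-playbook server.yml --extra-vars "variable_host=web:droplets"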
For anyone who might come looking for the solution.
Playbook
- hosts: '{{ host }}'
  tasks:
  - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name via extra-vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user to specify a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
  - name: checking limit arg
    fail:
      msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
    when: ansible_limit is not defined
    run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts. This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit.
NOTE: Single quotes MUST be used to prevent bash interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts (version 2.3.2.0).
You could have
- hosts: all   # or a group
  tasks:
  - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
Another solution is to use the special variable ansible_limit, which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning, but does nothing since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory;
the second play runs on this known group.
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
  - add_host:
      name: "{{ working_host }}"
      groups: working_group
    changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
  - debug:
      msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron job is forced to provide a single host, group, etc. I like the logic from the comment from #wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
- name: Fail script if required variable_host parameter is missing
  fail:
    msg: "You have to add the --extra-vars='variable_host='"
  when: (variable_host is not defined) or (variable_host == "")
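For reference, the matching inventory entry is just an empty group definition (assuming the group is literally named empty_group):
[empty_group]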
Just came across this while googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that is associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to localhost (the Ansible control node) when calling the ansible-playbook command.
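A minimal illustration of delegate_to against the droplets group from the question (the script path here is purely hypothetical):
- hosts: droplets
  tasks:
  - name: Run this step on the control node instead of the target host
    command: /usr/local/bin/check_host.sh {{ inventory_hostname }}
    delegate_to: localhost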
I am using Ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts parameter is evaluated. So you can set the host in a vars file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
  - vars/project.yml
  tasks:
  ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
  - set_fact:
      inventory_hosts: []
  - set_fact:
      inventory_hosts: "{{ inventory_hosts + [item] }}"
    with_items: "{{ hostvars.keys() | list }}"
  - meta: end_play
    when: "(play_hosts | length) == (inventory_hosts | length)"
  - debug:
      msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me; I am using Azure DevOps to deploy an application through CI/CD pipelines. I had to make the hosts value (in the yml file) more dynamic so that in the release pipeline I can set its value, for example:
--extra-vars "host=$(target_host)"
My Ansible playbook looks like this:
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
