I'm trying to get it to work for more than just one host. Is it possible?
In this case it will fail on other hosts, of course, because the process id is different on each host.
- name: Fetching PID file from remote server
  fetch: src="some.pid" dest=/tmp/ flat=yes fail_on_missing=yes
  register: result
  ignore_errors: True
- name: Is pid_file matching process ?
  wait_for: path=/proc/{{ lookup('file', '/tmp/some.pid') }}/status state=present
  when: result|success
  register: result2
  ignore_errors: True
The formal answer to your question: to make it work on multiple hosts at once, use a templated filename or flat=no, e.g.:
- name: Fetching PID file from remote server
  fetch: src="some.pid" dest="/tmp/{{inventory_hostname}}.pid" flat=yes fail_on_missing=yes
  register: result
  ignore_errors: True
- name: Is pid_file matching process ?
  wait_for: path="/proc/{{ lookup('file', '/tmp/'+inventory_hostname+'.pid') }}/status" state=present
  when: result|success
  register: result2
  ignore_errors: True
But a better way is to replace these tasks with a single shell command that checks everything at once, without fetching any files to the Ansible control host.
You can also use retries/until on a command to wait for a specific result.
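For example, a minimal sketch of that approach (untested; it assumes some.pid sits in the remote user's home directory, as in the tasks above):
- name: Wait until the PID from some.pid belongs to a running process
  shell: test -e "/proc/$(cat some.pid)/status"   # mirrors the /proc/<pid>/status check without fetching the file
  register: pid_check
  changed_when: false
  retries: 10
  delay: 5
  until: pid_check.rc == 0
  ignore_errors: True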
I'm creating a playbook which will be applied to new Docker Swarm manager(s). The server(s) are not configured before the playbook run.
We already have some Swarm managers. I can find all of them (including the new one) with:
- name: 'Search for SwarmManager server IPs'
  ec2_instance_facts:
    region: "{{ ec2_region }}"
    filters:
      vpc-id: "{{ ec2_vpc_id }}"
      "tag:aws:cloudformation:logical-id": "AutoScalingGroupSwarmManager"
  register: swarmmanager_instance_facts_result
Now I can use something like this to get the join-token:
- set_fact:
    swarmmanager_ip: "{{ swarmmanager_instance_facts_result.instances[0].private_ip_address }}"
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ swarmmanager_ip }}"
  run_once: true
Successful shell output looks like this: just one line starting with "SWMTKN-1":
SWMTKN-1-11xxxyyyzzz-xxxyyyzzz
But I see some possible problems here with swarmmanager_ip:
it can be a new instance which is still unconfigured,
it can be an instance whose Swarm manager is not working.
So I decided to loop over the results until I get a join-token. But none of the code variants I've tried work. For example, this one runs over the whole list without breaking:
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  # ignore_errors: true
  # until: docker_swarm_token_result.stdout_lines|length == 1
  when: docker_swarm_token_result is not defined or docker_swarm_token_result.stdout_lines is not defined or docker_swarm_token_result.stdout_lines|length == 1
  run_once: true
  check_mode: false
Do you know how to iterate over a list until the first successful shell output?
I use Ansible 2.6.11; an answer for 2.7 is also fine.
P.S.: I've already read How to break `with_lines` cycle in Ansible?; it doesn't work with modern Ansible versions.
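One hedged sketch of a possible approach (untested): query every manager while ignoring failures, then pick the first output that looks like a token; note that ignore_unreachable needs Ansible 2.7:
- name: 'Try to get the docker swarm join-token from every manager'
  shell: docker swarm join-token -q manager
  changed_when: False
  failed_when: false
  ignore_unreachable: true   # Ansible 2.7+; unconfigured instances may not be reachable
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  run_once: true
  check_mode: false
- name: 'Keep the first result that starts with SWMTKN-1'
  set_fact:
    docker_swarm_token: "{{ docker_swarm_token_result.results | selectattr('stdout', 'defined') | selectattr('stdout', 'match', 'SWMTKN-1') | map(attribute='stdout') | first }}"
  run_once: true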
The machine I am targeting should, in theory, have a process running for each individual client called 'marketaccess {client_name}', and I want to ensure that this process is running. Ansible is proving very challenging for checking whether processes are running. Below is the playbook I am trying to use to see if there is a process running on a given machine. I plan to then run a conditional on the 'stdout' and say that if it does not contain the customer's name, run a restart-process script against that given customer. The issue is that when I run this playbook it tells me that the dictionary object has no attribute 'stdout', yet when I remove the '.stdout' it runs fine and I can clearly see the stdout value for service_status.
- name: Check if process for each client exists
  shell: ps aux | grep {{ item.key|lower }}
  ignore_errors: yes
  changed_when: false
  register: service_status
  with_dict: "{{ customers }}"
- name: Report status of service
  debug:
    msg: "{{ service_status.stdout }}"
Your problem is that service_status is the result of a looped task, so it has a service_status.results list which contains the result of every iteration.
To see stdout for every iteration, you can use:
- name: Report status of service
  debug:
    msg: "{{ item.stdout }}"
  with_items: "{{ service_status.results }}"
But you may want to read this note about idempotent shell tasks and rewrite your code as a clean single task.
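For example, a hedged sketch in that direction (untested; pgrep on the targets and the restart script path are assumptions, not part of the question):
- name: Check whether a marketaccess process exists for each client
  shell: pgrep -f "marketaccess {{ item.key|lower }}"
  register: service_status
  failed_when: false     # a missing process is expected here, not an error
  changed_when: false
  with_dict: "{{ customers }}"
- name: Restart the process for clients whose process is missing
  shell: /opt/marketaccess/restart_process.sh "{{ item.item.key }}"   # hypothetical restart script
  when: item.rc != 0
  with_items: "{{ service_status.results }}"
Here item.item.key is the original dictionary key from the first task's with_dict loop.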
I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to combine each VM's IP (collected using gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is that the lineinfile module reports on the console that the line has been added for each node (delegated to localhost), yet when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as specified in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am running this playbook, get_hostname.yml, with the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0.
I am using the default ansible.cfg only, with no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrongly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably a file-locking issue that I don't see happening in Ansible 2.3. Also, instead of using a shell task to gather the hostname, you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution for that is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, each delegated task writes to its own file (named here with the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
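A minimal tpl.j2 could contain just the one line per host that the other answers write (the exact content is an assumption, since the original answer leaves it to you):
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}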
I have the following playbook
- hosts: all
gather_facts: False
tasks:
- name: Check status of applications
shell: somecommand
register: result
changed_when: False
always_run: yes
After this task, I want to run a mail task that will mail the accumulated output of all the commands for the above task registered in the variable result. As of right now, when I try and do this, I get mailed for every single host. Is there some way to accumulate the output across multiple hosts and register that to a variable?
You can extract the results from hostvars inside a run_once task:
- hosts: mygroup
  gather_facts: false
  tasks:
    - shell: date
      register: date_res
      changed_when: false
    - debug:
        msg: "{{ ansible_play_hosts | map('extract', hostvars, 'date_res') | map(attribute='stdout') | list }}"
      run_once: yes
This will print out a list of all date_res.stdout from all hosts in the current play and run this task only once.
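Applied to your mail use case, a hedged sketch might look like this (the mail module parameters below are placeholders, not values from the question):
- name: Mail the accumulated output once for all hosts
  mail:
    host: smtp.example.com          # placeholder SMTP relay
    to: ops@example.com             # placeholder recipient
    subject: Application status
    body: "{{ ansible_play_hosts | map('extract', hostvars, 'result') | map(attribute='stdout') | list | to_nice_yaml }}"
  delegate_to: localhost
  run_once: yes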
While trying to copy the results of date_res.stdout to a file on the host, only a single host's data is copied; the data from all hosts is not available:
- name: copy all
  copy:
    content: "{{ allhost_out.stdout }}"
    dest: "/ngs/app/user/outputsecond-{{ inventory_hostname }}.txt"
I have an Ansible playbook that has a task to output the list of installed Jenkins plugins for each server.
Here is the hosts file:
[masters]
server1
server2
server3
server4
server5
server6
Here is the task that prints out the list of plugins installed on each of the Jenkins servers:
- name: Obtaining a list of Jenkins Plugins
  jenkins_script:
    script: 'println(Jenkins.instance.pluginManager.plugins)'
    url: "http://{{ inventory_hostname }}.usa.com:8080/"
    user: 'admin'
    password: 'password'
What I want to do next is do a comparison with all of the installed plugins across all of the servers -- to ensure that all of the servers are running the same plugins.
I don't necessarily want to force an update, since that could break things, just inform the user that they are running a different version of the plugin than the rest of the servers.
I am fairly new to ansible, will gladly accept any suggestions on how to accomplish this.
This is a bit ugly, but should work:
- hosts: master
  tasks:
    - jenkins_script:
        script: 'println(Jenkins.instance.pluginManager.plugins)'
        url: "http://{{ inventory_hostname }}.usa.com:8080/"
        user: 'admin'
        password: 'password'
      register: call_result
    - copy:
        content: '{{ call_result.output }}'
        dest: '/tmp/{{ inventory_hostname }}'
      delegate_to: 127.0.0.1
    - shell: 'diff /tmp/{{groups.master[0]}} /tmp/{{ inventory_hostname }}'
      delegate_to: 127.0.0.1
      register: diff_result
      failed_when: false
    - debug:
        var: diff_result.stdout_lines
      when: diff_result.stdout_lines | length != 0
This will save the result of the jenkins_script module onto the calling host (where you are running ansible-playbook), into /tmp/{{hostname}}. Afterwards it will run a normal diff against the first server's result and each of the others', and then print out if there are any differences.
It's a bit ugly, as it:
Uses /tmp on the calling host to store some temporary data
Does not clean up after itself
Uses the diff shell command for something that might be doable with some clever use of Jinja
Ansible 2.3 will have the tempfile module, which you could use to clean up /tmp
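For reference, a hedged sketch of that Jinja route (untested): compare each host's registered call_result directly against the first master's via hostvars, with no temp files:
- name: Report plugin differences against the first master
  debug:
    msg: "{{ inventory_hostname }} differs from {{ groups.master[0] }}"
  when: call_result.output != hostvars[groups.master[0]].call_result.output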