I'm creating a playbook which will be applied to new Docker Swarm manager(s). The server(s) are not configured before the playbook runs.
We already have some Swarm managers. I can find all of them (including the new one) with:
- name: 'Search for SwarmManager server IPs'
  ec2_instance_facts:
    region: "{{ ec2_region }}"
    filters:
      vpc-id: "{{ ec2_vpc_id }}"
      "tag:aws:cloudformation:logical-id": "AutoScalingGroupSwarmManager"
  register: swarmmanager_instance_facts_result
Now I can use something like this to get the join-token:
- set_fact:
    swarmmanager_ip: "{{ swarmmanager_instance_facts_result.instances[0].private_ip_address }}"

- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ swarmmanager_ip }}"
  run_once: true
A successful run outputs just one line, starting with "SWMTKN-1":
SWMTKN-1-11xxxyyyzzz-xxxyyyzzz
But I see some possible problems here with swarmmanager_ip:
it can be the new instance, which is still unconfigured,
it can be an instance whose Swarm manager is not working.
So I decided to loop over the results until I get a join-token. But none of the code variants I've tried work. For example, this one runs over the whole list without breaking:
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  # ignore_errors: true
  # until: docker_swarm_token_result.stdout_lines|length == 1
  when: docker_swarm_token_result is not defined or docker_swarm_token_result.stdout_lines is not defined or docker_swarm_token_result.stdout_lines|length == 1
  run_once: true
  check_mode: false
Do you know how to iterate over a list until the first successful shell output?
I use Ansible 2.6.11; an answer for 2.7 is also fine.
P.S.: I've already read "How to break `with_lines` cycle in Ansible?"; it doesn't work for modern Ansible versions.
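One way around the "break on first success" limitation (a rough, untested sketch based on the task above; docker_swarm_token_attempts and docker_swarm_token are names I made up) is to skip early termination entirely: try the command on every existing manager with failures suppressed, then pick the first successful token afterwards:

- name: 'Try to get the docker swarm join-token from every manager'
  shell: docker swarm join-token -q manager
  changed_when: false
  failed_when: false          # a broken manager should not abort the play
  ignore_unreachable: true    # Ansible 2.7+, for managers that are not reachable yet
  register: docker_swarm_token_attempts
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  run_once: true

- name: 'Keep the first successful join-token'
  set_fact:
    # fails with an undefined value if no manager returned a token
    docker_swarm_token: >-
      {{ docker_swarm_token_attempts.results
         | selectattr('rc', 'defined')
         | selectattr('rc', 'equalto', 0)
         | map(attribute='stdout')
         | select('match', '^SWMTKN-1')
         | list
         | first }}
  run_once: true

This still loops over the whole list, but new or broken managers simply contribute failed results that are filtered out; the selected token can then be passed to docker swarm join on the new manager.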
Ansible version - 2.9
I'm facing an issue writing output to a CSV file: it's not writing the output consistently.
I have an inventory file with three server IPs; the playbook runs a command to check the disk space of each server and writes the output to a CSV file.
Sometimes it writes all three servers' details into the file, sometimes only one or two.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    filext: ".csv"
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: username_on_the_host
    - name: get current dir
      local_action: command pwd
      register: current_dir
    - name: create dir
      file: path={{ current_dir.stdout }}/HCT state=directory
    - name: Set file path here
      set_fact:
        file_path: "{{ current_dir.stdout }}/HCT/HCT_check{{ filext }}"
    - name: Creates file
      file: path={{ file_path }} state=touch
# Writing to a csv file
- hosts:
    - masters
  become: false
  vars:
    disk_space: "Able to get disk space for the CM {{ hostname }} "
    disk_space_error: "The server {{ hostname }} is down for some reason. Please check manually."
    disk_space_run_status: "{{ disk_space }}"
    cur_date: "{{ ansible_date_time.iso8601 }}"
  tasks:
    - name: running command to get file systems which are occupied
      command: bash -c "df -h | awk '$5>20'"
      register: disk_space_output
      changed_when: false
      ignore_errors: True
      no_log: True
    - name: Log the task get list of file systems with space occupied
      lineinfile:
        dest: "{{ hostvars['localhost']['file_path'] }}"
        line: "File system occupying disk space, {{ hostname }}, {{ ip_address }}, {{ cur_date }}"
        insertafter: EOF
        state: present
      delegate_to: localhost
Please help to resolve this issue.
The issue is that the task "Log the task get list of file systems with space occupied" is executed in parallel for the 3 servers, so you're running into concurrent-write problems.
One solution is to use the serial keyword at the play level with a value of 1; this way, all the tasks will be executed for each server one at a time.
- hosts:
    - masters
  become: false
  serial: 1
  vars:
    [...]
Another solution is to have the task executed on only one host while looping over the results of all servers by using hostvars:
- name: Log the task get list of file systems with space occupied
  lineinfile:
    dest: "{{ hostvars['localhost']['file_path'] }}"
    line: "File system occupying disk space, {{ hostvars[item].hostname }}, {{ hostvars[item].ip_address }}, {{ hostvars[item].cur_date }}"
    insertafter: EOF
    state: present
  run_once: True
  loop: "{{ ansible_play_hosts }}" # Looping over all hosts of the play
  delegate_to: localhost
I have 3 remote VMs and 1 ansible node.
I get the hostname of each VM by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to pair each VM's IP (collected using gather_facts: True, {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is that the console output shows lineinfile reporting the line as added for each node when delegated to localhost, yet when I check the file on localhost, only one entry is there instead of three.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as suggested in "Ansible writing output from multiple task to a single file", but that also gave the same result, i.e. only one entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. three entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, with the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0
I am using the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrong?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably a file-locking issue that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if the hostname is all you want by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
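One caveat with the play above: setup with filter: ansible_hostname returns only that fact, so if you also turn off the play's default fact gathering (gather_facts: no), the ansible_default_ipv4 variable used in the lineinfile line would be undefined. A second filtered setup task (my addition, not part of the original answer) would cover it:

    - name: Also gather the default IPv4 fact used below
      setup:
        filter: ansible_default_ipv4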
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
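A small note on the loops above: newer Ansible releases deprecate play_hosts in favour of ansible_play_hosts, so the template would then read the same way with just the variable renamed:

{% for host in ansible_play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}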
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution for that is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, each host delegation writes to its own file (here named after the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
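The answer doesn't show tpl.j2; based on the lineinfile line earlier in this thread, it would presumably contain just:

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}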
My host needs time (about 20 s) to initialize its CLI session, etc., before it can run CLI commands.
I'm trying to run a command via an Ansible playbook:
---
- name: Run show sub command
  hosts: em
  gather_facts: no
  remote_user: duypn
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for host=em port=22 delay=60 state=started
    - name: run show sub command
      raw: show sub id=xxxxx;display=term-type
After 10 minutes, Ansible gives me output which is not the result of the show sub command :(
...
["CLI Session initializing..", "Autocompleter initializing..", "CLI>This session has been IDLE for too long.",
...
I'd be glad to hear your suggestions. Thank you :)
I don't have a copy-paste solution for you, but one thing I learned is to put a sleep after SSH is 'up' to allow the machine to finish its work. This might give you a nudge in the right direction.
- name: Wait for SSH to come up
  local_action: wait_for
                host={{ item.public_ip }}
                port=22
                state=started
  with_items: "{{ ec2.instances }}"

- name: waiting for a few seconds to let the machine start
  pause:
    seconds: 20
So I had the same problem and this is how I solved it:
---
- name: "Get instances info"
  ec2_instance_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    filters:
      vpc-id: "{{ vpc_id }}"
      private-ip-address: "{{ ansible_ssh_host }}"
  delegate_to: localhost
  register: my_ec2

- name: "Waiting for {{ hostname }} to respond"
  wait_for:
    host: "{{ item.public_ip_address }}"
    state: "{{ state }}"
    sleep: 1
    port: 22
  delegate_to: localhost
  with_items:
    - "{{ my_ec2.instances }}"
That is the playbook named aws_ec2_status.
The playbook I ran looks like this:
---
# Create an ec2 instance in aws
- hosts: nodes
  gather_facts: false
  serial: 1
  vars:
    state: "present"
  roles:
    - aws_create_ec2

- hosts: nodes
  gather_facts: no
  vars:
    state: "started"
  roles:
    - aws_ec2_status
The reason I split the create and the check into two different plays is that I want the playbook to create the instances without waiting for one to be ready before creating the next.
But if the second instance depends on the first one, you should combine them.
FYI, let me know if you want to see my aws_create_ec2 playbook.
I'm creating a deployment playbook for our web services. Each web service is in its own directory, such as:
/webapps/service-one/
/webapps/service-two/
/webapps/service-three/
I want to check to see if the service directory exists, and if so, I want to run a shell script that stops the service gracefully. Currently, I am able to complete this step by using ignore_errors: yes.
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  ignore_errors: yes
While this works, the output is very messy if one of the directories doesn't exist or a service is being deployed for the first time. I effectively want to do something like one of these:
This:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  when: shell: [ -d /webapps/{{ item }} ]
or this:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  stat:
    path: /webapps/{{ item }}
  register: path
  when: path.stat.exists == True
I'd collect facts first and then do only the necessary things.
- name: Check existing services
  stat:
    path: "/webapps/{{ item }}"
  with_items: "{{ services_to_stop }}"
  register: services_stat

- name: Stop existing services
  with_items: "{{ services_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
Also note that bare variables in with_items don't work since Ansible 2.2, so you should template them.
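In other words, the bare form from the question needs to become a Jinja2 expression; for example (same task as above, only the loop changed):

- name: Stop services
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  with_items: "{{ services_to_stop }}"   # templated, instead of the bare services_to_stop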
This will let you get a list of existing directory names into the list variable dir_names (use recurse: no to read only the first level under webapps):
---
- hosts: localhost
  connection: local
  vars:
    dir_names: []
  tasks:
    - find:
        paths: "/webapps"
        file_type: directory
        recurse: no
      register: tmp_dirs
    - set_fact: dir_names="{{ dir_names + [item['path']] }}"
      no_log: True
      with_items:
        - "{{ tmp_dirs['files'] }}"
    - debug: var=dir_names
You can then use dir_names in your "Stop services" task via with_items. It looks like you intend to use only the name of the directory under "webapps", so you probably want the | basename Jinja2 filter, something like this:
- name: Stop services
  with_items: "{{ dir_names }}"
  shell: "/webapps/scripts/stopService.sh {{ item | basename }}"
I want to pass a variable to a notification handler, but can't find anywhere, be it here on SO, the docs, or the issues in the GitHub repo, how to do it. What I'm doing is deploying multiple webapps, and when the code for one of those webapps is changed, it should restart the service for that webapp.
From this SO question, I got this to work, somewhat:
- hosts: localhost
  tasks:
    - name: "task 1"
      shell: "echo {{ item }}"
      register: "task_1_output"
      with_items: [a, b]
    - name: "task 2"
      debug:
        msg: "{{ item.item }}"
      when: item.changed
      with_items: task_1_output.results
(Put it in test.yml and run it with ansible-playbook test.yml -c local.)
But this registers the result of the first task and conditionally loops over that in the second task. My problem is that it gets messy when you have two or more tasks that need to notify the second task! For example, restart the web service if either the code was updated or the configuration was changed.
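To make the "messy" part concrete, a combined restart over two registered results would have to look roughly like this (a sketch only; code_update_result, config_update_result and the webapp-{{ item }} service name are made up for illustration):

- name: restart webapps whose code or config changed
  service:
    name: "webapp-{{ item }}"
    state: restarted
  with_items: >-
    {{ (code_update_result.results + config_update_result.results)
       | selectattr('changed')
       | map(attribute='item')
       | list
       | unique }}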
AFAICT, there's no way to pass a variable to a handler. That would cleanly fix it for me. I found some issues on GitHub where other people ran into the same problem, and some syntaxes were proposed, but none of them actually work.
Including a sub-playbook won't work either, because using with_items together with include was deprecated.
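For readers on newer Ansible versions: include_tasks does accept loop again, so a per-item include is possible there. A minimal sketch, with restart_app.yml being a hypothetical task file and webapps the list from group_vars:

- name: run per-webapp tasks from a separate file
  include_tasks: restart_app.yml
  loop: "{{ webapps }}"
  loop_control:
    loop_var: webapp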
In my playbooks, I have a site.yml that lists the roles of a group; in the group_vars for that group I define the list of webapps (including the versions) that should be installed. This seems correct to me, because this way I can use the same playbook for staging and production. But maybe the only solution is to define the role multiple times and duplicate the list of roles for staging and production.
So what is the wisdom here?
Variables in Ansible are global, so there is no reason to pass a variable to a handler. If you are trying to parameterize a handler by using a variable in its name, you won't be able to do that in Ansible.
What you can do is create a handler that loops over a list of services easily enough; here is a working example that can be tested locally:
- hosts: localhost
  tasks:
    - file: >
        path=/tmp/{{ item }}
        state=directory
      register: files_created
      with_items:
        - one
        - two
      notify: some_handler
  handlers:
    - name: "some_handler"
      shell: "echo {{ item }} has changed!"
      when: item.changed
      with_items: files_created.results
I finally solved it by splitting the apps out over multiple instances of the same role. This way, the handler in the role can refer to variables that are defined as role variables.
In site.yml:
- hosts: localhost
  roles:
    - role: something
      name: a
    - role: something
      name: b
In roles/something/tasks/main.yml:
- name: do something
  shell: "echo {{ name }}"
  notify: something happened

- name: do something else
  shell: "echo {{ name }}"
  notify: something happened
In roles/something/handlers/main.yml:
- name: something happened
  debug:
    msg: "{{ name }}"
Seems a lot less hackish than the first solution!
To update jarv's answer above, Ansible 2.5 replaces with_items with loop. When getting results, item by itself will not work. You will need to explicitly get the name, e.g., item.name.
- hosts: localhost
  tasks:
    - file: >
        path=/tmp/{{ item }}
        state=directory
      register: files_created
      loop:
        - one
        - two
      notify: some_handler
  handlers:
    - name: "some_handler"
      shell: "echo {{ item.name }} has changed!"
      when: item.changed
      loop: "{{ files_created.results }}"
I got mine to work like this; I had to add some curly brackets:
tasks:
  - name: Enable security, backport and non-security upgrades
    lineinfile:
      path: /etc/apt/apt.conf.d/50unattended-upgrades
      regexp: '^[^"//"]*"\${distro_id}:\${distro_codename}-{{ item }}";'
      line: ' "${distro_id}:${distro_codename}-{{ item }}";'
      insertafter: "Unattended-Upgrade::Allowed-Origins {"
      state: present
    register: aenderung
    loop:
      - updates
      - security
      - backports
    notify: Remove commented-out lines

handlers:
  - name: Remove commented-out lines
    lineinfile:
      path: /etc/apt/apt.conf.d/50unattended-upgrades
      regexp: '^\/\/.*{{ item.item }}";.*'
      state: absent
    when: item.changed
    loop: "{{ aenderung.results }}"