I have the following playbook to add multiple lines to chrony.conf:
- name: add line
  lineinfile:
    backup: no
    backrefs: no
    state: present
    path: "{{ file_path }}"
    insertafter: "{{ line.replace_with }}"
    line: "{{ line.line_to_add }}"
  with_items:
    - { search: "{{ line.replace_with }}", add: "{{ line.line_to_add }}" }
In my vars file I have it like this:
line:
  line_to_add:
    - "server ntp1.domain.com iburst"
    - "server ntp2.domain.com iburst"
    - "server ntp3.domain.com iburst"
But the change puts all 3 NTP servers on one line instead of on 3 separate lines.
Any idea?
When I change my YAML to
- name: add line
  lineinfile:
    backup: no
    backrefs: no
    state: present
    path: "{{ file_path }}"
    #regexp: '^(\s*)[#]?{{ item.search }}(: )*'
    insertafter: "{{ line.replace_with }}"
    line: "{{ item }}"
    create: true
  loop: "{{ line.line_to_add }}"
  with_items:
    - { search: "{{ line.replace_with }}", add: "{{ line.line_to_add }}" }
I get "duplicate loop in task: items".
You can use loop for this to iterate over each line in the line.line_to_add variable. I also assumed that you have another line.replace_with variable.
- name: Example of multiple lines
  hosts: localhost
  gather_facts: no
  vars:
    line:
      line_to_add:
        - "server ntp1.domain.com iburst"
        - "server ntp2.domain.com iburst"
        - "server ntp3.domain.com iburst"
      replace_with:
        - test_line
  tasks:
    - name: add line
      lineinfile:
        backup: no
        backrefs: no
        state: present
        path: test_file
        insertafter: "{{ line.replace_with }}"
        line: "{{ item }}"
        create: true
      loop: "{{ line.line_to_add }}"
Gives:
server ntp1.domain.com iburst
server ntp2.domain.com iburst
server ntp3.domain.com iburst
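As an aside, if the goal is simply to keep a fixed group of server lines together in chrony.conf, blockinfile may be worth a look instead of looping lineinfile. A minimal sketch, assuming the same file_path and line.line_to_add variables from the question:
    - name: add ntp servers as one block
      blockinfile:
        path: "{{ file_path }}"
        create: true
        # join the list into one newline-separated block of lines
        block: "{{ line.line_to_add | join('\n') }}"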
I have been trying to get this conditional to work for a few hours now.
The solution is eluding me.
The "when" is always evaluating as false.
This is confusing as template/src works perfectly.
- name: Copy PHP Pool Config
  template:
    src: "{{ src_conf }}"
    dest: /etc/php/{{ php_version }}/fpm/pool.d/{{ user.name }}.conf
    mode: u=rw,g=r,o=r
  with_flattened:
    - "{{ users|default([]) }}"
  notify: reload PhpFpm
  vars:
    user: "{{ item }}"
    src_conf: "../files/etc/php/fpm/pool.d/{{ group_names | first }}/{{ user.name }}.conf"
  tags:
    - php
  when: src_conf is exists
This works as expected if I remove the "when".
The issue is that not all "{{ user.name }}.conf" files exist.
Does "when: src_conf is exists" look in a different place?
Update:
I also tried it with local_action, but it still resulted in false.
- name: Copy PHP Pool Config files
  local_action: stat path="../../files/etc/php/fpm/pool.d/{{ group_names | first }}/{{ user.name }}.conf"
  register: phpPools
  with_flattened:
    - "{{ users|default([]) }}"
  become: no
  vars:
    user: "{{ item }}"
  tags:
    - php

- name: Copy PHP Pool Config
  template:
    src: item.invocation.module_args.path
    dest: /etc/php/{{ php_version }}/fpm/pool.d/{{ user.name }}.conf
    mode: u=rw,g=r,o=r
  with_flattened:
    - "{{ phpPools.results }}"
  notify: reload PhpFpm
  vars:
    user: "{{ item.item }}"
  tags:
    - php
  when: item.stat.exists == true
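As an aside, here is a minimal sketch of how the second task could look if the stat results drive both the condition and the source path. It assumes src is meant to be templated (the literal item.invocation.module_args.path above is not expanded) and that each users entry has a name key, as in the first task:
- name: Copy PHP Pool Config
  template:
    # the stat result carries the path it was invoked with
    src: "{{ item.invocation.module_args.path }}"
    dest: /etc/php/{{ php_version }}/fpm/pool.d/{{ item.item.name }}.conf
    mode: u=rw,g=r,o=r
  with_flattened:
    - "{{ phpPools.results }}"
  notify: reload PhpFpm
  tags:
    - php
  when: item.stat.exists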
I have several files that I need to back up in different directories. I have tried the code below, and it's not working for me.
vars:
  file_vars:
    - {name: /file1}
    - {name: /etc/file2}
    - {name: /etc/file/file3}

tasks:
  - name: "Checking if config files exists"
    stat:
      path: "{{ item.name }}"
    with_items: "{{ file_vars }}"
    register: stat_result

  - name: Backup Files
    copy: src={{ item.name }} dest={{ item.name }}{{ ansible_date_time.date }}.bak
    with_items: "{{ file_vars }}"
    remote_src: yes
    when: stat_result.stat.exists == True
The problem is the condition
when: stat_result.stat.exists == True
There is no attribute stat_result.stat. Instead, the attribute stat_result.results is a list of the results from the loop. It's possible to create a dictionary of files and their statuses. For example
- set_fact:
    files_stats: "{{ dict(my_files|zip(my_stats)) }}"
  vars:
    my_files: "{{ stat_result.results|json_query('[].item.name') }}"
    my_stats: "{{ stat_result.results|json_query('[].stat.exists') }}"
Then simply use this dictionary in the condition
when: files_stats[item.name]
Below is a shorter version, which creates the dictionary more efficiently:
- set_fact:
    files_stats: "{{ dict(stat_result.results|
                          json_query('[].[item.name, stat.exists]')) }}"
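For completeness, a sketch of the Backup Files task from the question using this dictionary in its condition:
- name: Backup Files
  copy:
    src: "{{ item.name }}"
    dest: "{{ item.name }}{{ ansible_date_time.date }}.bak"
    remote_src: yes
  with_items: "{{ file_vars }}"
  when: files_stats[item.name]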
Please try the below; it worked for me:
---
- name: Copy files
  hosts: localhost
  become: yes
  become_user: root
  vars_files:
    - files.yml
  tasks:
    - name: "Checking if config files exists"
      stat:
        path: "{{ item }}"
      with_items: "{{ files }}"
      register: stat_result

    - name: Ansible
      debug:
        msg: "{{ stat_result }}"

    - name: Backup Files
      copy:
        src: "{{ item }}"
        dest: "{{ item }}.bak"
      with_items: "{{ files }}"
      when: stat_result == "True"
and files.yml will look like:
---
files:
  - /tmp/file1
  - /tmp/file2
You can check your playbook syntax using the command below:
ansible-playbook copy.yml --syntax-check
Also, you can do a dry run of your playbook before the actual execution:
ansible-playbook -i localhost copy.yml --check
I've written an Ansible playbook to collect results from many network devices. The playbook below is working fine, but if I have to collect results for a lot of commands, say 20, I have to create many tasks in my playbook to write the results into a file.
For now, I manually create the tasks that write the logs into the file. Below is an example with 3 commands.
- name: run multiple commands and evaluate the output
  hosts: <<network-host>>
  gather_facts: no
  connection: local
  vars:
    datetime: "{{ lookup('pipe', 'date +%Y%m%d%H') }}"
    backup_dir: "/backup/"
    cli:
      host: "{{ ansible_host }}"
      username: <<username>>
      password: <<password>>
  tasks:
    - sros_command:
        commands:
          - show version
          - show system information
          - show port
        provider: "{{ cli }}"
      register: result

    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show version\n{{ result.stdout[0] }}"
        create: yes
      changed_when: False

    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show system information\n{{ result.stdout[1] }}"
        create: yes
      changed_when: False

    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show port\n{{ cmd_result.stdout[2] }}"
        create: yes
      changed_when: False
Is it possible to loop over the commands and their results within one task?
Please kindly advise.
Thanks.
Try this one task in place of the above three tasks:
- name: Writing output
  local_action:
    module: lineinfile
    dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
    line: "{{ inventory_hostname }}:# show {{ item.command }}\n{{ cmd_result.stdout[item.outnum] }}"
    create: yes
  changed_when: False
  with_items:
    - { command: version, outnum: 0 }
    - { command: system information, outnum: 1 }
    - { command: port, outnum: 2 }
The playbook below worked for me:
- name: Writing output
  local_action:
    module: lineinfile
    dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
    line: "{{ inventory_hostname }}:# show {{ item.command }}\n{{ item.cmdoutput }}"
    create: yes
  changed_when: False
  with_items:
    - { command: "version", cmdoutput: "{{ cmd_result.stdout[0] }}" }
    - { command: "system information", cmdoutput: "{{ cmd_result.stdout[1] }}" }
    - { command: "port", cmdoutput: "{{ cmd_result.stdout[2] }}" }
I'm currently building Ansible image playbook templates across AWS and vSphere, and I'd like to be able to define multiple additional disks, but only when they're defined via variables.
Playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Launch Windows 2016 VM Instance
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 | default(80) }}"
            type: "{{ vm_disk_type0 | default(thin) }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
          - size_gb: "{{ vm_disk_size2 }}"
            type: "{{ vm_disk_type2 }}"
            datastore: "{{ vm_disk_datastore2 }}"
          - size_gb: "{{ vm_disk_size3 }}"
            type: "{{ vm_disk_type3 }}"
            datastore: "{{ vm_disk_datastore3 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus | default(4) }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: Redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
Variables:
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_datastore1: "C6200_T2_FCP_3Days"
vm_disk_size1: "50"
vm_disk_type1: "thin"
vm_disk_datastore2: "C6200_T2_FCP_3Days"
vm_disk_size2: "20"
vm_disk_type2: "thin"
vm_disk_datastore3: ""
vm_disk_size3: ""
vm_disk_type3: ""
Error:
{
"_ansible_parsed": false,
"exception": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"_ansible_no_log": false,
"module_stderr": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"changed": false,
"module_stdout": "",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
The idea being that if a variable isn't defined when the job is launched, it will be skipped by the vmware_guest module.
Is this the best method? Is anyone able to suggest a successful way forward?
Update: Probably the best method would be to build the instance and then use vmware_guest_disk to add the additional disks, as suggested below. That module is available in v2.8.
Our version of Tower is 2.7.9, so I'm going to use a more verbose method of multiple vmware_guest tasks gated on a num_disks variable:
Variables
num_disks: 0
vm_cluster: C6200_NPE_PC_ST
vm_datacenter: DC2
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_datastore1: ''
vm_disk_datastore2: ''
vm_disk_datastore3: ''
vm_disk_datastore4: ''
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_type1: ''
vm_disk_type2: ''
vm_disk_type3: ''
vm_disk_type4: ''
vm_domain: corp.local
vm_folder: /DC2/vm/ap-dev
vm_hostname: xyzvmserver.corp.local
vm_memory_mb: 16000
vm_network: C6200_10.110.64.0_24_VL1750
vm_num_cpus: 4
vm_servername: server05
vm_template: Windows2016_x64_AN_ESX_v1.1
Playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Launch Windows 2016 VM Instance - No additional Disk
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 0

    - name: Launch Windows 2016 VM Instance 1 Additional Disk
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 1

    - name: Launch Windows 2016 VM Instance 2 Additional Disks
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
          - size_gb: "{{ vm_disk_size2 }}"
            type: "{{ vm_disk_type2 }}"
            datastore: "{{ vm_disk_datastore2 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 2
...and so on for each additional disk count.
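For comparison, here is a sketch of a more compact alternative, assuming the per-disk values are reshaped into a single hypothetical vm_disks list: entries with an empty size are filtered out, and the resulting list could then be passed directly to the disk parameter of a single vmware_guest task.
- name: Example of filtering empty disk definitions
  hosts: localhost
  gather_facts: no
  vars:
    vm_disks:
      - { size_gb: "80", type: "thin", datastore: "C6200_T2_FCP_3Days" }
      - { size_gb: "50", type: "thin", datastore: "C6200_T2_FCP_3Days" }
      - { size_gb: "", type: "", datastore: "" }
  tasks:
    - name: Show the disk list that would be passed to vmware_guest
      debug:
        # selectattr with no test keeps only entries whose size_gb is non-empty
        msg: "{{ vm_disks | selectattr('size_gb') | list }}"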
It looks like you should build your VM with the 1 disk that you know for sure will always be there when you run the play, which you could make the first entry in this list:
vms:
  0:
    vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "80"
    vm_disk_type: "thin"
  1:
    vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "50"
    vm_disk_type: "thin"
  2:
    vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "20"
    vm_disk_type: "thin"
  3:
    vm_disk_datastore: ""
    vm_disk_size: ""
    vm_disk_type: ""
Since I don't know of a way to loop through sub-module arguments to make them present or not, build your VM with the one disk you know for sure will be there, which, as mentioned above, should for consistency be the first one in your list. Otherwise you will need to rebuild the logic to drill into the list looking for a specific value that you can key on with Jinja filters like selectattr and map.
- name: Launch Windows 2016 VM Instance
  vmware_guest:
    validate_certs: no
    datacenter: "{{ vm_datacenter }}"
    folder: "{{ vm_folder }}"
    name: "{{ vm_servername }}"
    state: poweredon
    template: "{{ vm_template }}"
    cluster: "{{ vm_cluster }}"
    disk:
      - size_gb: "{{ vms.0.vm_disk_size }}"
        type: "{{ vms.0.vm_disk_type }}"
        datastore: "{{ vms.0.vm_disk_datastore }}"
    hardware:
      memory_mb: "{{ vm_memory_mb | default(8192) }}"
      num_cpus: "{{ vm_num_cpus | default(4) }}"
    networks:
      - name: "{{ vm_network }}"
        start_connected: yes
        vlan: "{{ vm_network }}"
        device_type: vmxnet3
        type: dhcp
        domain: "{{ vm_domain }}"
    customization:
      hostname: "{{ vm_servername }}"
      orgname: Redacted
      password: "{{ winlocal_admin_pass }}"
      timezone: 255
    wait_for_ip_address: yes
  register: vm
Once it's built, use this different module to add the additional disks:
- name: Add disks to virtual machine
  vmware_guest_disk:
    hostname: "{{ vm_servername }}"
    datacenter: "{{ vm_datacenter }}"
    disk:
      - size_gb: "{{ item.vm_disk_size }}"
        type: "{{ item.vm_disk_type }}"
        datastore: "{{ item.vm_disk_datastore }}"
        state: present
  loop: "{{ vms }}"
  loop_control:
    label: "Disk {{ my_idx }} - {{ item.vm_disk_datastore }}"
    index_var: my_idx
  when:
    - item.vm_disk_datastore != ""
    - item.vm_disk_size != ""
    - item.vm_disk_type != ""
    - my_idx > 0
That will pull the variable in from wherever you decided to define it, then loop through that list of dictionaries pulling the values with the correct labels, but only doing so if they have a value (assuming leaving blanks in that list is something you plan on doing; if not, the when isn't even needed). This also lets you scale up by simply adding more items to the list without having to expand your task. If you need to edit the first task because you can't rely on list order and instead need to key on a specific variable, make sure you also change the disks in the second task so it omits the disk added by the first task.
Also, the documentation on this module states that existing disks addressed by this module will be adjusted to the given size and can't be reduced, so if you want to avoid that, put in some data checks comparing the currently configured disk size to what is in your variable, so you can skip the task if the disk would be reduced (or if you just don't want an existing VM to be altered).
I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
Now I want to check the exit status of each task, whether it is a success or a failure.
If any one of the tasks fails, my other tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Example:
  register: ec2

- fail:
  when: "ec2.rc == 1"
Here, rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
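For reference, a minimal sketch of the block/rescue pattern with placeholder debug tasks: if any task inside block fails, the remaining block tasks are skipped and the rescue tasks run instead.
- name: Provision and register
  block:
    - name: Instance provisioning
      debug:
        msg: "provisioning happens here"

    - name: adding group to inventory file
      debug:
        msg: "inventory update happens here"
  rescue:
    - name: Handle the failure
      debug:
        msg: "A task in the block failed; the remaining block tasks were skipped."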
Kumar
If you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
  when: fileoutput | changed
In your code, register a variable in each and every task; if the previous task's changed status is true, the following task will execute, otherwise it will be skipped.
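Note that on newer Ansible releases the filter-style check (ec2 | changed) is deprecated; the same condition is usually written with the test syntax, for example:
when: ec2 is changed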