I want to install one of the Filebeat prospectors only if the redis service is running on the host. For that I create a default list of prospectors (redis, aerospike, postgres) with {{ item.id }}. But now I want to put some expression into "when:" that will install the prospector for redis only if it is running. How can I do that?
- name: Configure Filebeat prospectors
  template: src=filebeat_conf.yml.j2 dest=/etc/filebeat/conf.d/{{ item.id }}.yml
  notify: restart filebeat
  with_items: prospectors
  when: { " service: " }
Split your task in two: collect facts, act depending on facts.
In your case, you can list installed services first, then configure only existing ones.
For example:
- hosts: test-host
  vars:
    services:
      - name: apache2
        id: 1
      - name: nginx
        id: 2
      - name: openvpn
        id: 3
  tasks:
    - name: get list of services
      shell: "service --status-all 2>&1 | awk {'print $4'}"
      args:
        warn: false
      register: services_list

    - name: process only existing services
      debug: msg="service {{ item.name }} with id={{ item.id }} exists"
      with_items: "{{ services }}"
      when: item.name in services_list.stdout_lines
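Applied back to the original Filebeat question, a minimal sketch (assuming each entry in the prospectors list also carries a name field matching the service name; the id field and template task are from the question):

- name: get list of services
  shell: "service --status-all 2>&1 | awk {'print $4'}"
  args:
    warn: false
  register: services_list

# assumes each prospector item has a name field matching its service name
- name: Configure Filebeat prospectors for present services only
  template: src=filebeat_conf.yml.j2 dest=/etc/filebeat/conf.d/{{ item.id }}.yml
  notify: restart filebeat
  with_items: "{{ prospectors }}"
  when: item.name in services_list.stdout_lines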
I'm trying to create a series of firewalld rules using a variable imported from a YAML file. The file defines a dictionary keyed by service name, with the associated ports as a list within each entry. A segment of the YAML looks like this:
---
myservice:
  description: My service desc
  ports:
    - 1234/tcp
    - 1235/tcp
another:
  description: Another service
  ports:
    - 2222/tcp
    - 3333/tcp
The Ansible role I have so far is:
- name: Read services from file
  include_vars:
    file: "services.yml"
    name: services

- name: Create firewalld services
  command: >
    firewall-cmd --permanent
    --new-service={{ item.key }}
    --set-description="{{ item.value.description }}"
  register: addserv
  failed_when: addserv.rc != 26 and addserv.rc != 0
  changed_when: not "NAME_CONFLICT" in addserv.stderr
  with_dict: "{{ services }}"

- name: Add ports to firewalld service
  command: >
    firewall-cmd --permanent
    --service={{ item.key }} --add-port={{ item.value.ports }}
  register: addport
  changed_when: not "ALREADY_ENABLED" in addport.stderr
The first segment to create the firewalld service works fine, but I'm stumped on the second part: how to loop over the list of ports while retaining the dictionary key. I've tried using subelements to extract the ports list, and that works, but I can't figure out how to retain the service name from the key.
Use subelements. For example:

- debug:
    msg: "{{ item.0.key }} - {{ item.0.value.description }} - {{ item.1 }}"
  with_subelements:
    - "{{ services|dict2items }}"
    - value.ports
gives
"msg": "myservice - My service desc - 1234/tcp"
"msg": "myservice - My service desc - 1235/tcp"
"msg": "another - Another service - 2222/tcp"
"msg": "another - Another service - 3333/tcp"
I'm sort of trying to build an inventory file from an Ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and to store the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not(libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
I was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines where libvirtd isn't running and that have an empty or no list of VMs), because the above doesn't really handle that.
But at least there is output from all the KVM hosts!
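One way to filter those false positives, as a sketch: gate the write on both checks, assuming all_vms was registered from the virt module above, which returns the guest names under a list_vms key (verify that key name on your Ansible version):

- name: saving cumulative result
  lineinfile:
    # list_vms is the virt module's return field (assumption to verify)
    line: '{{ ansible_hostname }} has {{ all_vms.list_vms | join(", ") }}'
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - libvirtd_status.rc == 0
    - (all_vms.list_vms | default([])) | length > 0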
Hey, still kind of new with Ansible.
I made a test playbook that performs a rolling update for a MariaDB Galera cluster that uses HAProxy as a load balancer.
I have no idea how to use the dictionary (bottom of the code) for all my tasks in the play. It also has to loop in a fixed order: first server 1, then server 3, then server 2, and then server 4. The idea is that if a host or IP changes, you only have to change it in the dictionary.
For example, task1 needs to use the key/value of host1, the same for task2, and when it's done, loop to the next host.
I tried to use the vars module, but it only worked per task. I was thinking of using the vars folder, but I'm not using the roles architecture.
- hosts: DBserver
  become: yes
  tasks:
    - name: disable the haproxy server
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server "{{}}" "{{}}" check weight 0'
      with_items:
        - 'server "{{}}" "{{}}" check weight 1'

- hosts: "{{}}"
  become: yes
  tasks:
    - name: stop the mariadb
      service:
        name: mariadb
        state: stopped

- hosts: DBserver
  become: yes
  tasks:
    - name: enable the haproxy server
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server "{{}}" "{{}}" check weight 1'
      with_items:
        - 'server "{{}}" "{{}}" check weight 0'

dictionary:
  - { 'name': 'host1', 'key': 'ipxxx' }
  - { 'name': 'host2', 'key': 'ipxxxx' }
  - { 'name': 'host3', 'key': 'ipxxx' }
  - { 'name': 'host4', 'key': 'ipxxx' }
I solved my problem with this play.
---
- hosts: clients
  become: yes
  serial: 1
  tasks:
    - name: disable the haproxy server
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server {{ name }} {{ ip }} check weight 0'
      with_items:
        - 'server {{ name }} {{ ip }} check weight 1'
      delegate_to: deploy

    - name: stop the mariadb
      service:
        name: mariadb
        state: stopped

    - name: update mariadb
      apt:
        name: "{{ packages }}"
        state: latest
        update_cache: yes
      vars:
        packages:
          - mariadb-server
          - mariadb-client
          - mariadb-common

    - name: start the mariadb
      service:
        name: mariadb
        state: started

    - name: status mariadb
      shell: systemctl status mariadb
      register: server_status

    - name: debug
      debug:
        msg: "{{ server_status['stdout_lines'] }}"

    - name: enable proxy
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server {{ name }} {{ ip }} check weight 1'
      with_items:
        - 'server {{ name }} {{ ip }} check weight 0'
      delegate_to: deploy
I used the host_vars and that did the trick in combination with delegate_to.
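For reference, a sketch of the host_vars layout this relies on (the file name and values here are hypothetical; one file per inventory host, so {{ name }} and {{ ip }} resolve differently on each host in the serial loop):

# host_vars/host1.yml (hypothetical example file)
name: host1
ip: 10.0.0.11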
New Ansible user (v2.3) here; I pulled a playbook from GitHub but am not understanding idempotence. Once Elasticsearch is installed and started with the playbook, I want to be able to just modify or add an es_config parameter and rerun the playbook, expecting an updated elasticsearch.yml config file and a restarted Elasticsearch. That include_role of elasticsearch, however, is skipped on rerun, and I'm not sure what to modify to change that.
ansible-playbook --ask-become-pass -vv elk.yml
elk.yml
---
#
# Playbook to install the ELK stack
#
- hosts: servers
  strategy: debug
  remote_user: ansible
  become: yes
  become_user: root
  tasks:
    - debug: msg="elk main"
    - include_role:
        name: java
        name: elasticsearch
        name: kibana
        name: nginx
        name: logstash
      vars:
        es_instance_name: "node1"
        es_config: {
          discovery.zen.ping.unicast.hosts: "logstash-2, logstash-3",
          network.host: _eno1_,
          cluster.name: logstash,
          node.name: "{{ ansible_hostname }}",
          http.port: 9200,
          transport.tcp.port: 9300,
          node.data: true,
          node.master: true,
          bootstrap.memory_lock: true,
          index.number_of_shards: 3,
          index.number_of_replicas: 1
        }
        es_major_version: "6.x"
        es_version: "6.2.1"
        es_heap_size: "26g"
        es_cluster_name: "logstash"
roles/elasticsearch/tasks/main.yml
---
- name: os-specific vars
  include_vars: "{{ansible_os_family}}.yml"
  tags:
    - always

- debug: msg="es parms"

- name: check-set-parameters
  # include: elasticsearch-parameters.yml
  include_tasks: elasticsearch-parameters.yml
  tags:
    - always

#- include: java.yml
#  when: es_java_install
#  tags:
#    - java
#- include: elasticsearch.yml
- include_tasks: elasticsearch.yml
roles/elasticsearch/tasks/elasticsearch.yml
---
- name: Include optional user and group creation.
  when: (es_user_id is defined) and (es_group_id is defined)
  include_tasks: elasticsearch-optional-user.yml

- name: Include specific Elasticsearch
  include_tasks: elasticsearch-Debian.yml
  when: ansible_os_family == 'Debian'
roles/elasticsearch/tasks/elasticsearch-Debian.yml
---
- set_fact: force_install=no

- set_fact: force_install=yes
  when: es_allow_downgrades

- name: Debian - Install apt-transport-https to support https APT downloads
  become: yes
  apt: name=apt-transport-https state=present
  when: es_use_repository

- name: Debian - Add Elasticsearch repository key
  become: yes
  apt_key: url="{{ es_apt_key }}" state=present
  when: es_use_repository and es_apt_key

- name: Debian - Add elasticsearch repository
  become: yes
  apt_repository: repo={{ item.repo }} state={{ item.state }}
  with_items:
    - { repo: "{{ es_apt_url_old }}", state: "absent" }
    - { repo: "{{ es_apt_url }}", state: "present" }
  when: es_use_repository

- name: Debian - Include versionlock
  include: elasticsearch-Debian-version-lock.yml
  when: es_version_lock

- name: Debian - Ensure elasticsearch is installed
  become: yes
  apt: name=elasticsearch{% if es_version is defined and es_version != "" %}={{$
  when: es_use_repository
  register: debian_elasticsearch_install_from_repo
  notify: restart elasticsearch

- name: Debian - Download elasticsearch from url
  get_url: url={% if es_custom_package_url is defined %}{{ es_custom_package_ur$
  when: not es_use_repository

- name: Debian - Ensure elasticsearch is installed from downloaded package
  become: yes
  apt: deb=/tmp/elasticsearch-{{ es_version }}.deb
  when: not es_use_repository
  register: elasticsearch_install_from_package
  notify: restart elasticsearch
roles/elasticsearch/tasks/elasticsearch-parameters.yml
# Check for mandatory parameters
- fail: msg="es_instance_name must be specified and cannot be blank"
  when: es_instance_name is not defined or es_instance_name == ''

- fail: msg="es_proxy_port must be specified and cannot be blank when es_proxy_$
  when: (es_proxy_port is not defined or es_proxy_port == '') and (es_proxy_hos$

- debug: msg="WARNING - It is recommended you specify the parameter 'http.port'"
  when: es_config['http.port'] is not defined

- debug: msg="WARNING - It is recommended you specify the parameter 'transport.$
  when: es_config['transport.tcp.port'] is not defined

- debug: msg="WARNING - It is recommended you specify the parameter 'discovery.$
  when: es_config['discovery.zen.ping.unicast.hosts'] is not defined

#If the user attempts to lock memory they must specify a heap size
- fail: msg="If locking memory with bootstrap.memory_lock a heap size must be s$
  when: es_config['bootstrap.memory_lock'] is defined and es_config['bootstrap.$

#Check if working with security we have an es_api_basic_auth_username and es_ap$
- fail: msg="Enabling security requires an es_api_basic_auth_username and es_ap$
  when: es_enable_xpack and ("security" in es_xpack_features) and es_api_basic_$

- set_fact: file_reserved_users={{ es_users.file.keys() | intersect (reserved_x$
  when: es_users is defined and es_users.file is defined and (es_users.file.key$

- fail:
    msg: "ERROR: INVALID CONFIG - YOU CANNOT CHANGE RESERVED USERS THROUGH THE$
  when: file_reserved_users | default([]) | length > 0

- set_fact: instance_default_file={{default_file | dirname}}/{{es_instance_name$
- set_fact: instance_init_script={{init_script | dirname}}/{{es_instance_name}$
- set_fact: conf_dir={{ es_conf_dir }}/{{es_instance_name}}
- set_fact: m_lock_enabled={{ es_config['bootstrap.memory_lock'] is defined and$

#TODO - if transport.host is not local maybe error on boostrap checks
#Use systemd for the following distributions:
#Ubuntu 15 and up
#Debian 8 and up
#Centos 7 and up
#Relies on elasticsearch distribution installing a serviced script to determine$
- set_fact: use_system_d={{(ansible_distribution == 'Debian' and ansible_distri$
- set_fact: instance_sysd_script={{sysd_script | dirname}}/{{es_instance_name}$
  when: use_system_d

#For directories we also use the {{inventory_hostname}}-{{ es_instance_name }} $
- set_fact: instance_suffix={{inventory_hostname}}-{{ es_instance_name }}
- set_fact: pid_dir={{ es_pid_dir }}/{{instance_suffix}}
- set_fact: log_dir={{ es_log_dir }}/{{instance_suffix}}
- set_fact: data_dirs={{ es_data_dirs | append_to_list('/'+instance_suffix) }}
It was a formatting issue with include_role: the duplicate name: keys in a single mapping collapse in YAML, so only one role was actually being included. Fixed the problem with one role per include, as follows:

- include_role:
    name: java
- include_role:
    name: elasticsearch

and so forth.
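On newer Ansible releases the same set of includes can be expressed as a loop; a sketch (the loop keyword arrived after the 2.3 used in the question, so this form would not have worked there):

- include_role:
    name: "{{ item }}"
  # loop requires Ansible 2.5 or later
  loop:
    - java
    - elasticsearch
    - kibana
    - nginx
    - logstash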
My vars file:

applications:
  - name: server_1
    deploy: True
    branch: master
    group: frontend
  - name: server_2
    deploy: True
    branch: master
    group: backend
My task is:

- name: deploy boxer
  command: "curl http://example.com?hosts={{ item.1 }}"
  with_subelements:
    - applications
    - "{{ groups[group] }}" # This is not working.
inventory.yml
[frontend]
host1.example.com
host2.example.com
[backend]
host11.example.com
host12.example.com
I want to iterate through applications and loop through each host in a group (only one host at a time) to pass it to the curl command.
This is what I want at the end:
curl http://example.com?hosts=host1.example.com
curl http://example.com?hosts=host2.example.com
curl http://example.com?hosts=host11.example.com
curl http://example.com?hosts=host12.example.com
Did you mean to use with_nested?
with_nested:
  - "{{ applications }}"
  - "{{ groups }}"
when: item.0.group == item.1
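If the goal is one curl call per host, as in the desired output, a variant sketch of the same idea nests against the host list instead and matches each host back to the application's group (groups['all'] is Ansible's built-in list of every inventory host):

- name: deploy boxer
  command: "curl http://example.com?hosts={{ item.1 }}"
  with_nested:
    - "{{ applications }}"
    # groups['all'] covers every inventory host
    - "{{ groups['all'] }}"
  when: item.1 in groups[item.0.group]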