I have the following playbook to modify an ASA object-group:
---
- hosts: us_asa
  connection: local
  gather_facts: false
  tasks:
    - name: change config
      asa_config:
        auth_pass: "{{ ansible_ssh_password }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_password }}"
        authorize: yes
        timeout: 45
        lines:
          - network-object host 1.2.3.4
          - network-object host 2.3.2.3
        parents: ['object-group network BAD_IPs']
This works fine for a single group. Any suggestion on how to modify multiple groups over the same connection? If I add another object-group after parents: ['object-group network BAD_IPs'], for example:
---
- hosts: us_asa
  connection: local
  gather_facts: false
  tasks:
    - name: change config
      asa_config:
        auth_pass: "{{ ansible_ssh_password }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_password }}"
        authorize: yes
        timeout: 45
        lines:
          - network-object host 1.2.3.4
          - network-object host 2.3.2.3
        parents: ['object-group network BAD_IPs']
        - network-object host 4.4.4.4
        parents: ['object-group network Good_IPs']
This fails.
The offending line appears to be:
parents: ['object-group network BAD_IPs']
- network-object host 4.4.4.4
^ here
Any recommendation on syntax I should use?
Thank you in advance!
You just have a basic YAML syntax error there. A YAML dictionary key with a list value looks either like this:
key: [item1, item2, item3]
Or like this:
key:
  - item1
  - item2
  - item3
You have some weird combination of the two:
parents: ['object-group network BAD_IPs']
- network-object host 4.4.4.4
I don't know exactly what structure you want, but what you have there is simply invalid.
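If the goal is to update several object-groups over the same connection, one shape that keeps every lines/parents pair valid is to loop the task over a list of group definitions. A sketch (untested, built only from the values in the question):

- name: change config, one object-group per iteration
  asa_config:
    auth_pass: "{{ ansible_ssh_password }}"
    username: "{{ ansible_ssh_user }}"
    password: "{{ ansible_ssh_password }}"
    authorize: yes
    timeout: 45
    lines: "{{ item.lines }}"
    parents: "{{ item.parents }}"
  loop:
    # each iteration carries its own complete parents + lines pair
    - parents: ['object-group network BAD_IPs']
      lines:
        - network-object host 1.2.3.4
        - network-object host 2.3.2.3
    - parents: ['object-group network Good_IPs']
      lines:
        - network-object host 4.4.4.4

Each item is a complete, valid dictionary, so the YAML parses, and the module still receives exactly one parents value per invocation.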
Related
I'm working on a deployment project for SNMP on a larger list of servers. The idea is for the script to utilise the inventory file, where servers are listed in the following format:
# AMRS
[AMRS_CENTRAL]
server1
server2
server3
[AMRS_EASTERN]
server4
server5
server6
I want to run an Ansible playbook on all those hosts and get their MGMT IP address, which I would then insert into the snmpd.conf file along with 127.0.0.1.
So far I've come up with the below, but I'm not sure how to get the set_fact to gather the IPs of the servers.
---
- name: Gather Facts
  hosts: all
  gather_facts: yes
  tasks:
    - name: set_fact (target_ip) ..
      set_fact:
        target_ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"

    - name: Write the target IP to a file
      copy:
        content: "{{ target_ip }}"
        dest: /home/AABB/deployment/ansible/playbooks/snmpd/host-vars.ini

    - name: Install a list of packages for snmpd
      yum:
        name:
          - net-snmp-utils
          - net-snmp-devel
          - net-snmp
        state: present

    - name: "disable and stop snmpd.service"
      service:
        name: snmpd.service
        enabled: no
        state: stopped

    - name: Write this target IP to a file
      lineinfile:
        path: /etc/snmp/snmpd.conf
        insertafter: "# manual page."
        line: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }},127.0.0.1"
        firstmatch: yes
        state: present

    - name: line insert
      lineinfile:
        path: /etc/snmp/snmpd.conf
        insertbefore: BOF
        line: "{{ item }}"
      with_items:
        - 'rwcommunity private 127.0.0.2'
        - 'rocommunity public 127.0.0.1'
        - 'rwcommunity private 10.129.165.50'
        - 'rocommunity public 10.129.165.50'

    - name: add text to the end of file
      blockinfile:
        state: present
        insertafter: EOF
        path: /etc/snmp/snmpd.conf
        marker: "<!-- add services ANSIBLE MANAGED BLOCK -->"
        block: |
          #Traps To Sink
          trapsink 10.129.165.50 public
          #Event MIBS
          iquerySecName User123
          rouser User123
          #Generate Traps on UCD error conditions
          defaultMonitors yes
          #Generate traps on linkUp/Down
          linkUpDownNotifications yes
          #LINKDOWN/LINKUP Configurations 1 Second Generate alert
          notificationEvent linkUpTrap linkUp ifIndex ifAdminStatus ifOperStatus
          notificationEvent linkDownTrap linkDown ifIndex ifAdminStatus ifOperStatus
          monitor -r 1 -e linkUpTrap "Generate linkUp" ifOperStatus != 2

    - name: "enable and start snmpd.service"
      service:
        name: snmpd.service
        enabled: yes
        state: started
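For what it's worth, since the play runs with gather_facts: yes on every target, the per-host address is already available as ansible_default_ipv4.address on that host; the hostvars[inventory_hostname] indirection (and the set_fact step) isn't strictly needed. A minimal sketch of the snmpd.conf insertion using the fact directly:

- name: Insert the management IP alongside the loopback
  lineinfile:
    path: /etc/snmp/snmpd.conf
    insertafter: "# manual page."
    line: "{{ ansible_default_ipv4.address }},127.0.0.1"
    firstmatch: yes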
I'm trying to get the server name as user input, and if the server OS is RHEL 7, it will proceed with further tasks. I'm trying with hostvars, but it is not helping; kindly help me find the OS version with a when condition:
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - debug:
        msg: "{{ hostvars['server1'].ansible_distribution_major_version }}"
When I execute the playbook, I'm getting the below error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['server1']\" is undefined\n\nThe error appears to be in '/var/lib/awx/projects/pacemaker_RHEL_7_ST/main_2.yml': line 33, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
You need to gather facts on the newly added hosts before you consume the variable. As an example, this will do it with automatic fact gathering:
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"

- name: Gather facts for newly added targets
  hosts: cluster_nodes
  # gather_facts: true <= this is the default

- name: Do <whatever> targeting localhost again
  hosts: localhost
  gather_facts: false  # already gathered in play1
  tasks:
    # Warning!! bad practice. Looping on a group usually
    # shows you should have a play targeting that specific group
    - debug:
        msg: "OS version for {{ item }} is 7"
      when: hostvars[item].ansible_distribution_major_version | int == 7
      loop: "{{ groups['cluster_nodes'] }}"
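As the inline warning says, looping over a group from localhost is usually a sign that a dedicated play should target that group instead. A sketch of that shape (same facts, consumed on the hosts themselves):

- name: Report OS version on the cluster nodes themselves
  hosts: cluster_nodes
  gather_facts: false  # facts were already gathered in the previous play
  tasks:
    - debug:
        msg: "OS version for {{ inventory_hostname }} is 7"
      when: ansible_distribution_major_version | int == 7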
If you don't want to rely on automatic gathering, you can run the setup module manually, e.g. for the second play:
- name: Gather facts for newly added targets
  hosts: cluster_nodes
  gather_facts: false
  tasks:
    - name: get facts from targets
      setup:
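Another option, if you would rather stay inside the localhost play, is to delegate the setup call to each new host and store the results under that host's name via delegate_facts (a sketch, assuming the cluster_nodes group from above):

- name: Gather facts for the new hosts from the localhost play
  setup:
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['cluster_nodes'] }}"

After this, hostvars['<node>'].ansible_distribution_major_version resolves as expected.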
I want to use a specific host / host list for an imported playbook, which I get from a vars_prompt input. How can I do this? So far I wasn't able to get this running.
I have two playbooks which I need to run separately, and ios_check_routerports.yaml is the parent playbook:
ios_check_routerports.yaml
---
- hosts: '{{ branch_number }}'
  connection: network_cli
  gather_facts: False
  any_errors_fatal: no
  throttle: 75
  vars_prompt:
    - name: "branch_number"
      prompt: "Which branch do you want to check?"
      default: all
      private: no
  tasks:
    - name: Check facts
      ios_facts:
        gather_subset: hardware
    - name: Create directory
      file:
        path: /root/ansible/pb-outputs/ios_check_routerports/
        state: directory
      delegate_to: 127.0.0.1

- name: Run playbook
  import_playbook: ios_check_routerports_main.yaml
ios_check_routerports_main.yaml
---
- hosts: '{{ branch_number }}'
  connection: network_cli
  gather_facts: False
  any_errors_fatal: no
  throttle: 75
  tasks:
    - name: Check default-gateway
      ios_command:
        commands: sh run | i default-gateway
      register: default_gateway
I tried to set a fact for the var {{ branch_number }} like this:
ios_check_routerports.yaml
- set_fact:
    devices: "{{ branch_number }}"
ios_check_routerports_main.yaml
---
- hosts: '{{ devices }}'
  connection: network_cli
The playbook always runs into an error because the hosts var is not defined. What am I doing wrong here?
Try this: there is no need to create a new variable devices; use a dummy host instead.
In ios_check_routerports.yaml, add a task:
- name: Register dummy host with variable
  add_host:
    name: "DUMMY_HOST"
    DEVICES: "{{ branch_number }}"
Then, in ios_check_routerports_main.yaml:
- hosts: "{{ hostvars['DUMMY_HOST']['DEVICES'] }}"
  connection: network_cli
Since this creates a new inventory host, I suggest you get rid of it once you no longer need the variable branch_number. A remove_host module doesn't exist, so either run - meta: refresh_inventory as a first task, or modify your hosts pattern to exclude it:
- hosts: "{{ hostvars['DUMMY_HOST']['DEVICES'] }},!DUMMY_HOST"
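Put together, a minimal sketch of the parent playbook (the prompt has to move to a play that runs before the pattern is evaluated, here localhost; file and host names as in the question):

---
- hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: branch_number
      prompt: "Which branch do you want to check?"
      default: all
      private: no
  tasks:
    - name: Register dummy host with variable
      add_host:
        name: "DUMMY_HOST"
        DEVICES: "{{ branch_number }}"

- name: Run playbook
  import_playbook: ios_check_routerports_main.yaml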
I have a play where I collect the reachable host names before running a task; I am using this for a specific purpose.
My play code:
---
- name: check reachable side A hosts
  hosts: ????ha???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
When I print this using

- debug:
    msg: "group: {{ groups['reachable_other_pairs'] }}"
I am getting the below result:
"this group : ['testha1', 'testha2', 'testha3']",
Now, if I call the same play again with different hosts, grouping with the same key, the new host names are appended to the existing values, like below:
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
If I print reachable_other_pairs, I am getting the below results:
"msg": " new group: ['testhb1', 'testhb2', 'testhb3', 'testha1', 'testha2', 'testha3']"
All I want is the first 3 entries: ['testhb1', 'testhb2', 'testhb3'].
Can someone let me know how to achieve this?
Add this as a task just before your block. It will refresh your inventory and clean up all groups that are not in there:
- meta: refresh_inventory
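In context, the second play's tasks section would then begin like this (placement sketch only; the block and rescue stay exactly as in the question):

  tasks:
    - meta: refresh_inventory

    - block:
        # ... existing juniper_junos_ping and group_by tasks ...
      rescue:
        # ... existing debug task ...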
I have two types of server host names added in the Ansible main.yml var file:
main.yml file:
foo_server1: 10.10.1.1
foo_server2: 10.10.1.2
bar_server1: 192.168.1.3
bar_server2: 192.168.1.4
bar_server3: 192.168.1.5
I have an Ansible playbook which essentially runs on foo_server1 and initializes/formats all the other servers in the list one at a time - starting with foo_server2, then bar_server1, bar_server2, and so on...
---
- name: Reading variables from var files
  hosts: localhost
  connection: local
  vars_files:
    - main.yml
  tasks:
    - name: Initialize foo server2
      command: initialize --host1 {{ foo_server1 }} to --host2 {{ foo_server2 }}
    - name: Initialize bar server1
      command: initialize --host1 {{ foo_server1 }} to --host2 {{ bar_server1 }}
    - name: Initialize bar server2
      command: initialize --host1 {{ foo_server1 }} to --host2 {{ bar_server2 }}
    - name: Initialize bar server3
      command: initialize --host1 {{ foo_server1 }} to --host2 {{ bar_server3 }}
I don't want to add multiple lines in the playbook for each server; rather, I want to loop over the host names from the variable file. I am not sure how I would get this done. I tried looping over the hostnames with the below, but no luck, as I am getting an undefined variable name.
---
server_list:
  foo_server1: 10.10.1.1
  foo_server2: 10.10.1.2
  bar_server1: 192.168.1.3
  bar_server2: 192.168.1.4
  bar_server3: 192.168.1.5
Ansible playbook...
---
- hosts: localhost
  gather_facts: no
  vars_files:
    - input.yml
  tasks:
    - name: Enable replication
      local_action: shell initialize --host1 {{ item.foo_server1 }} --host2 {{ item.foo_server2 }}
      with_items:
        - "{{ server_list }}"
Can someone please suggest how I can run the same command on multiple servers? I would appreciate any help offered.
Here is an example for you:
---
- hosts: localhost
  gather_facts: no
  vars:
    servers:
      foo_server1: 10.10.1.1
      foo_server2: 10.10.1.2
      bar_server1: 192.168.1.3
      bar_server2: 192.168.1.4
      bar_server3: 192.168.1.5
  tasks:
    - debug:
        msg: shell initialize --host1 {{ servers.foo_server1 }} --host2 {{ item.value }}
      when: item.key != 'foo_server1'
      with_dict: "{{ servers }}"
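Once the loop output looks right, the debug task can be swapped for the real command (same loop and condition; initialize is the question's own tool, so its exact flags are taken from there):

- name: Enable replication
  command: initialize --host1 {{ servers.foo_server1 }} --host2 {{ item.value }}
  when: item.key != 'foo_server1'
  with_dict: "{{ servers }}"

The when guard skips the iteration where the source server would be paired with itself.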