Multiple variables in with_items cause an error - Ansible

I created the following playbook to configure ufw settings.
---
- name: setup ufw for multi ports
  hosts: db
  become: yes
  tasks:
    - name: 'Allow all access for multi ports'
      community.general.ufw:
        rule: allow
        port: "{{ item.port_num }}"
        src: "{{ item.dest_ip }}"
      with_items:
        - { port_num: "33787", dest_ip: "{{ web_ip_band }}" }
And this is my group_vars file.
web_ip_band:
  - '192.168.101.13/24'
  - '192.168.101.44/24'
When I execute this playbook, I get this error:
failed: [dbserver01] (item={'port_num': '33787', 'dest_ip': ['192.168.101.13/24', '192.168.101.44/24']}) => {"ansible_loop_var": "item", "changed": false, "commands": ["/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw allow from ['192.168.101.13/24', '192.168.101.44/24'] to any port 33787"], "item": {"dest_ip": ["192.168.101.13/24", "192.168.101.44/24"], "port_num": "33787"}, "msg": "ERROR: Wrong number of arguments\n"}
Is there a syntax error in my playbook?

From the community.general.ufw module documentation (extract rearranged to fit in an SO answer):
from_ip (aliases: from, src)
    string - Default: "any"
You are passing a list of IPs, which explains your error message:
ERROR: Wrong number of arguments
You have to run that task for each combination of port_num and the individual entries in dest_ip. What you need here is a subelements loop:
- name: 'Allow all access for multi ports'
  community.general.ufw:
    rule: allow
    port: "{{ item.0.port_num }}"
    src: "{{ item.1 }}"
  vars:
    my_rules:
      - { port_num: "33787", dest_ip: "{{ web_ip_band }}" }
  loop: "{{ my_rules | subelements('dest_ip') }}"
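For illustration: with web_ip_band from your group_vars, subelements('dest_ip') pairs the rule dict (item.0) with each individual address (item.1), so the task runs once per address. Judging by the command shown in your error output, the resulting ufw invocations would be:
/usr/sbin/ufw allow from 192.168.101.13/24 to any port 33787
/usr/sbin/ufw allow from 192.168.101.44/24 to any port 33787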

Related

Ansible & Juniper Junos - Unable to make a PyEZ connection: ConnectError() [duplicate]

I am trying to use juniper_junos_facts from the Ansible Junos module to query some VMs that I provisioned using Vagrant. However, I am getting the following error.
fatal: [r1]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r1)"}
fatal: [r2]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r2)"}
I see in the following document Here on juniper.net that this error occurs when you don't have the host defined correctly in the inventory file. I don't believe this to be an issue with my inventory file, because when I run ansible-inventory --host everything appears to be in order:
~/vagrant-projects/junos$ ansible-inventory --host r1
{
"ansible_ssh_host": "127.0.0.1",
"ansible_ssh_port": 2222,
"ansible_ssh_private_key_file": ".vagrant/machines/r1/virtualbox/private_key",
"ansible_ssh_user": "root"
}
~/vagrant-projects/junos$ ansible-inventory --host r2
{
"ansible_ssh_host": "127.0.0.1",
"ansible_ssh_port": 2200,
"ansible_ssh_private_key_file": ".vagrant/machines/r2/virtualbox/private_key",
"ansible_ssh_user": "root"
}
My playbook is copied from the following document, which I got from Here on juniper.net.
My Inventory File
[vsrx]
r1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=.vagrant/machines/r1/virtualbox/private_key
r2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/r2/virtualbox/private_key
[vsrx:vars]
ansible_ssh_user=root
My Playbook
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        host: "{{ inventory_hostname }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version
As you're using connection: local, you need to give the module full connection details (usually packaged in a provider dictionary at the play level to reduce repetition):
- name: retrieve facts
  juniper_junos_facts:
    host: "{{ ansible_ssh_host }}"
    port: "{{ ansible_ssh_port }}"
    user: "{{ ansible_ssh_user }}"
    passwd: "{{ ansible_ssh_pass }}"
    ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
    savedir: "{{ playbook_dir }}"
Full docs are here (watch out for the correct role version in the URL): https://junos-ansible-modules.readthedocs.io/en/2.1.0/juniper_junos_facts.html where you can also see what the defaults are.
To fully explain the "provider" method, your playbook should look something like this:
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars:
    connection_info:
      host: "{{ ansible_ssh_host }}"
      port: "{{ ansible_ssh_port }}"
      user: "{{ ansible_ssh_user }}"
      passwd: "{{ ansible_ssh_pass }}"
      ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        provider: "{{ connection_info }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version
This answer is for people who find this question via the error message.
If you use a connection plugin other than local, the error can be, and usually is, caused by this bug related to variable ordering.
The bug is fixed in release 2.2.1 and later; try updating the role from Galaxy.
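For example, a minimal update, assuming the role was originally installed from Ansible Galaxy:
# re-install the Juniper.junos role, overwriting the existing copy
ansible-galaxy install Juniper.junos --force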

Looping through the children host group in Ansible is failing

My inventory file has the host groups below:
[uat1]
123.11.23.22 ansible_user="xxx"
[OS_uat2]
123.45.6.7 ansible_user="yyy"
[uat1_childs:children]
uat1
OS_uat2
I have a vars file with parameters for the hosts below. I am running a playbook that runs a shell command, and I pass some parameters with the playbook, including deployment_environment as uat1_childs. This gives me an error. The playbook is:
- name: play to ping test
  gather_facts: false
  hosts: "{{ deployment_environment }}"
  ignore_unreachable: yes
  vars_files:
    - r_params.yml
  vars:
    package: "{{ package }}"
  tasks:
    - set_fact:
        env_param: "{{ deployment_environment }}"
    - name: ping test
      ping:
        data: pong
    - name: Deploy Services on "{{ deployment_environment }}"
      shell: cd "{{ env_select[env_param].script_path }}"; sh "{{ env_select[env_param].script_path }}/deploy.sh" "param1" "param2" "{{ env_select[env_param].repo }}" "{{ artifact_version }}" "{{ env_select[env_param].ENV }}" "{{ arti_username }}" "{{ arti_pass }}" "{{ deployer }}" "{{ package }}" "{{ env_select[env_param].deployment_path }}"
      when: (package == "abc")
      with_items: "{{ groups[{{ 'deployment_environment' }}] }}"
This gives me the error:
fatal: [123.11.23.22]: FAILED! =>
{
"msg": "'dict object' has no attribute 'deployment_environment'"
}
fatal: [123.45.6.7]: FAILED! =>
{
"msg": "'dict object' has no attribute 'deployment_environment'"
}
I tried removing the apostrophes in with_items, but it still gives me an error. I can't work out how to run the task on all the children host groups.
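For reference, a sketch of that loop with the nested braces removed; Jinja2 expressions are not nested, so inside {{ ... }} the variable is referenced by name:
      when: package == "abc"
      with_items: "{{ groups[deployment_environment] }}"
Note that with hosts: "{{ deployment_environment }}" the play already targets every host in the group, so this loop additionally repeats the shell command once per inventory entry on each targeted host.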

How to loop over inventory host/IP and push it through an API in Ansible

So I'm fairly new to Ansible. I'm trying to get the IP address and hostname from my inventory:
- set_fact:
    ip_out: "{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}"
    host_out: "{{ hostvars[inventory_hostname].inventory_hostname }}"
And then I want to add it to my monitoring system through an API. I'm just not sure how to make my loop work; it works when adding one host at a time, but not multiple.
- name: Add host to Check_MK site via WebAPI
  uri:
    url: '{{ cmkclient__connection_string }}?action=add_host&_username={{ cmkclient_api_user }}&_secret={{ cmkclient_api_password }}&output_format=json'
    method: 'POST'
    body: 'request={"attributes":{"alias": "Test", "ipaddress": "{{ item[0] }}", "hostname": "{{ item[1] }}", "create_folders": "0", "folder": "" }'
    return_content: yes
  delegate_to: localhost
  when: '"No such host" in cmkclient__host_query.json.result'
  register: cmkclient__host_add
  changed_when: (cmkclient__host_add.json is defined) and
                (cmkclient__host_add.json.result_code == 0)
  failed_when: (cmkclient__host_add.json is not defined) or
               (cmkclient__host_add.json.result_code != 0)
  with_nested:
    - "{{ ip_out }}"
    - "{{ host_out }}"
I get a JSON parsing error.
Any ideas would be helpful.
Thanks!
It seems you'd like to use the IP address and hostname of the hosts under the targeted group in the body of the API request. Instead of delegating this task to localhost, we could have a play on localhost like:
# gather required facts from all hosts
- hosts: all
  gather_facts: false
  tasks:
    - setup:
        gather_subset: network

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - debug:
        msg: "ipaddress: {{ ip_address }}, hostname: {{ host_name }}"
      vars:
        ip_address: "{{ hostvars[item]['ansible_default_ipv4']['address'] }}"
        host_name: "{{ hostvars[item]['inventory_hostname'] }}"
      loop: "{{ groups['all'] }}"
I've used a debug task, but the same loop and vars can be applied to your uri task as well.
Note:
If you are running the uri task on my_group (or all) hosts, then you should simply be able to refer to the required variables directly, without delegating. In this case the task will run on each host of the group, using its own IP address and hostname.
body: 'request={"attributes":{"alias": "Test", "ipaddress": "{{ ansible_default_ipv4.address }}", "hostname": "{{ inventory_hostname }}", "create_folders": "0", "folder": ""}}'
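Putting the two together, a sketch of the uri task driven by the same loop and vars as the debug example above (the cmkclient_* variables are as in your question):
- name: Add host to Check_MK site via WebAPI
  uri:
    url: '{{ cmkclient__connection_string }}?action=add_host&_username={{ cmkclient_api_user }}&_secret={{ cmkclient_api_password }}&output_format=json'
    method: 'POST'
    body: 'request={"attributes":{"alias": "Test", "ipaddress": "{{ ip_address }}", "hostname": "{{ host_name }}", "create_folders": "0", "folder": ""}}'
    return_content: yes
  vars:
    ip_address: "{{ hostvars[item]['ansible_default_ipv4']['address'] }}"
    host_name: "{{ hostvars[item]['inventory_hostname'] }}"
  loop: "{{ groups['all'] }}"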

Trying to get IPs from file and use it as Inventory in Ansible

I get a list of IP addresses in a test.text file, from which I am trying to read the IPs in a loop, put them into a group (or variable), and use that group as hosts (dynamic_groups).
Below is my playbook:
---
- name: provision stack
  hosts: localhost
  connection: local
  gather_facts: no
  serial: 1
  tasks:
    - name: Get Instance IP Addresses From File
      shell: cat /home/user/test.text
      register: serverlist

    - debug: msg={{ serverlist.stdout_lines }}

    - name: Add Instance IP Addresses to temporary inventory groups
      add_host:
        groups: dynamic_groups
        hostname: "{{ item }}"
      with_items: serverlist.stdout_lines

- hosts: dynamic_groups
  become: yes
  become_user: root
  become_method: sudo
  gather_facts: True
  serial: 1
  vars:
    ansible_connection: "{{ connection_type }}"
    ansible_ssh_user: "{{ ssh_user_name }}"
    ansible_ssh_private_key_file: "{{ ssh_private_key_file }}"
  tasks:
    .....
    .....
After running the above playbook, I get the error below:
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
"192.168.1.10",
"192.168.1.11",
"192.168.1.50"
]
}
TASK [Add Instance IP Addresses to temporary inventory groups] ***************************************************************************************************************************************************************************
changed: [localhost] => (item=serverlist.stdout_lines)
PLAY [dynamic_groups] *********************************************************************************************************************************************************************************************************************
TASK [Some Command] **********************************************************************************************************************************************************************************************************************
fatal: [serverlist.stdout_lines]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname serverlist.stdout_lines: Name or service not known", "unreachable": true}
What am I missing here?
Below is the correct way to use the variable:
- name: Add Instance IP Addresses to temporary inventory groups
  add_host:
    groups: dynamic_groups
    hostname: "{{ item }}"
  with_items: "{{ serverlist.stdout_lines }}"
It should solve your problem.
As reported in the fatal error message, "Failed to connect to the host via ssh: ssh: Could not resolve hostname serverlist.stdout_lines", Ansible is trying to connect to "serverlist.stdout_lines", not to a valid IP.
This is caused by an error in how the variable is passed to with_items. In your task:
with_items: serverlist.stdout_lines
passes the literal string serverlist.stdout_lines, not its value.
with_items requires variables to be templated with "{{ ... }}" (https://docs.ansible.com/ansible/2.7/user_guide/playbooks_loops.html#with-items).
This is the correct way to write your task:
- name: Add Instance IP Addresses to temporary inventory groups
  add_host:
    groups: dynamic_groups
    hostname: "{{ item }}"
  with_items: "{{ serverlist.stdout_lines }}"
Alternatively, you can simply use ansible-playbook -i inventory_file_name playbook.yaml for this, where inventory_file_name is the file containing your groups and IPs.
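For illustration, a static inventory file built from the IPs in your debug output might look like this (group name assumed to match the second play):
[dynamic_groups]
192.168.1.10
192.168.1.11
192.168.1.50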

