I am new to Ansible. I am trying to get info from an ESXi host about the VMs installed on it. No vCenter is available.
I have created the playbook below. The connection works, but I am getting an error in the debug task. Any help is much appreciated. Thanks.
Playbook:
- hosts: esxi
  gather_facts: false
  become: false
  tasks:
    - name: Gather all registered virtual machines
      community.vmware.vmware_vm_info:
        hostname: '{{ hostname }}'
        username: '{{ ansible_user }}'
        password: '{{ password }}'
        validate_certs: no
      delegate_to: localhost
      register: vminfo
    - debug:
        msg: "{{ item.guest_name }}, {{ item.ip_address }}"
        with_items:
          - "{{ vminfo.virtual_machines }}"
The error is:
TASK [debug] *************************************************************************************************************************
fatal: [192.168.233.202]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'item' is undefined\n\nThe error appears to be in '/home/sciclunam/task4/playbook11.yaml': line 14, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************
192.168.233.202 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Correct indentation is essential in Ansible.
Your with_items is not correctly indented: it must sit at the task level, alongside the module name, not inside the module's arguments.
Please notice the indentation of with_items in my sample:
# Create Directory Structure
- name: Create directory application
  file:
    path: "{{ item }}"
    state: directory
    mode: 0755
  with_items:
    - "/opt/project1"
    - "/opt/project1/script"
    - "/opt/project1/application"
Related
I'm using Ansible to create 2 AWS EC2 instances and update them.
The playbook succeeds in creating the instances and checking that SSH is available, but
it can't find the hosts using my tag.
Ansible playbook:
---
- name: "Create 2 webservers"
  hosts: localhost
  tasks:
    - name: "Start instances with a public IP address"
      ec2_instance:
        name: "Ansible host"
        key_name: "vockey"
        instance_type: t2.micro
        security_group: sg-017a67d545bce2047
        image_id: "ami-0ed9277fb7eb570c9"
        region: "us-east-1"
        state: running
        exact_count: 2
        tags:
          Role: Webservers
      register: result
    - name: "Wait for SSH to come up"
      wait_for:
        host: "{{ item.public_ip_address }}"
        port: 22
        sleep: 10
      loop: "{{ result.instances }}"

- name: "Update the webServers"
  hosts: tag_Role_Webservers
  become: yes
  tasks:
    - name: "Upgrade all packages"
      yum:
        name: '*'
        state: latest
The warning:
[WARNING]: Could not match supplied host pattern, ignoring: tag_Role_Webservers
PLAY [Update the webServers] ******************************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
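A likely cause (an assumption, since the inventory is not shown): inventory is read once at playbook start, so a group like tag_Role_Webservers only exists if a dynamic inventory (e.g. the aws_ec2 plugin) already knew about the instances. One workaround is to register the freshly created instances in the in-memory inventory with add_host at the end of the first play, for example:

```yaml
    - name: "Add the new instances to an in-memory group"
      add_host:
        name: "{{ item.public_ip_address }}"
        groups: tag_Role_Webservers
      loop: "{{ result.instances }}"
```

Alternatively, a meta: refresh_inventory task re-reads a dynamic inventory mid-run, which would let the tag-based group match.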
So I'm trying to write roles and plays for our environment that are as OS agnostic as possible.
We use RHEL, Debian, Gentoo, and FreeBSD.
Gentoo needs getbinpkg, which I can make the default for all calls to community.general.portage with module_defaults.
But I can, and do, install some packages on the other 3 systems by setting variables and just calling package.
Both of the following plays work for Gentoo, but both fail on Debian etc.: the first because of the unknown option getbinpkg, and the second because it explicitly calls portage on systems that don't have portage.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
  tasks:
    - package:
        name: "{{ packages }}"

- hosts: localhost
  module_defaults:
    community.general.portage:
      getbinpkg: yes
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
Is there any way to pass module_defaults "through" the package module to the portage module?
I did try a when on module_defaults to restrict when the defaults are set; interestingly, it was completely ignored.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
      when: ansible_facts['distribution'] == 'Gentoo'
  tasks:
    - package:
        name: "{{ packages }}"
I really don't want to mess with EMERGE_DEFAULT_OPTS.
Q: "How to load distribution-specific module defaults?"
Update
Unfortunately, the solution described below doesn't work anymore. For some reason, the keys of the dictionary module_defaults must be static. The play below fails with the error
ERROR! The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, the keys of which must be static action, module, or group names. Only the values may contain templates. For example: {'ping': "{{ ping_defaults }}"}
See details, workaround, and solution.
A: The variable ansible_distribution, like other setup facts, is not available when a playbook starts; when fact gathering is not disabled, you can see "Gathering Facts" run as the first task of a play. The solution is to collect the facts and declare a dictionary with module defaults in the first play, then use it in the rest of the playbook. For example
shell> cat pb.yml
- hosts: test_01
  gather_subset: min
  tasks:
    - debug:
        var: ansible_distribution
    - set_fact:
        my_module_defaults:
          Ubuntu:
            debug:
              msg: "Ubuntu"
          FreeBSD:
            debug:
              msg: "FreeBSD"

- hosts: test_01
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] }}"
  tasks:
    - debug:
gives
shell> ansible-playbook pb.yml
PLAY [test_01] ****
TASK [Gathering Facts] ****
ok: [test_01]
TASK [debug] ****
ok: [test_01] =>
ansible_distribution: FreeBSD
TASK [set_fact] ****
ok: [test_01]
PLAY [test_01] ****
TASK [debug] ****
ok: [test_01] =>
msg: FreeBSD
PLAY RECAP ****
test_01: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
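Per the error message quoted in the update, only the values of module_defaults may be templated; the keys must be static. A possible workaround sketch (an untested assumption, reusing the my_module_defaults dictionary from the first play) keeps each module name static and templates only its value:

```yaml
- hosts: test_01
  gather_facts: false
  module_defaults:
    # Static key "debug"; only the value is templated, as the error
    # message's own example {'ping': "{{ ping_defaults }}"} suggests.
    debug: "{{ my_module_defaults[ansible_distribution]['debug'] | default({}) }}"
  tasks:
    - debug:
```

The trade-off is that each module whose defaults vary by distribution needs its own static entry.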
I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like the NetBox ones (and the same goes for GCP or AWS) must use as their host not the target device but the host that will execute the module and call the API. Most of the time this is localhost, but it can also be a dedicated node such as a bastion.
You can see in the example in the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
I created an Ansible playbook to configure :
- a database server
- a nginx & php server
The database server needs to know the php server ip address to configure database access rights.
The webserver needs to know the database server ip address to configure the database access for the website.
I tried with delegate_facts:

vars:
  Database_server_name: "SVWEB-03"
  Web_server_name: "SVWEB-02"

- name: Web server ip address
  set_fact:
    Web_server_ip: "{{ ansible_default_ipv4.address }}"
  delegate_to: "{{ Web_server_name }}"
  delegate_facts: True
  when:
    - "Database_server_name in inventory_hostname"

- name: Database server ip address
  set_fact:
    Database_server_ip: "{{ ansible_default_ipv4.address }}"
  delegate_to: "{{ Database_server_name }}"
  delegate_facts: True
  when:
    - "Web_server_name in inventory_hostname"

- debug:
    msg: "{{ Web_server_ip }}"
  when:
    - "Database_server_name in inventory_hostname"

- debug:
    msg: "{{ Database_server_ip }}"
  when:
    - "Web_server_name in inventory_hostname"
But I have these errors:
TASK [Web server ip address] ***************************************************************************************************
skipping: [SVWEB-02]
ok: [SVWEB-03 -> SVWEB-02]
TASK [Database server ip address] ************************************************************************************************
skipping: [SVWEB-03]
ok: [SVWEB-02 -> SVWEB-03]
TASK [debug] ***************************************************************************************************************
skipping: [SVWEB-02]
fatal: [SVWEB-03]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'Web_server_ip' is undefined\n\nThe error appears to have been in '/home/murmure/ansible/playbooks/Install_Wordpress_Separate_DB_and_Web.yml': line 64, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
TASK [debug] ***************************************************************************************************************
fatal: [SVWEB-02]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'Database_server_ip' is undefined\n\nThe error appears to have been in '/home/murmure/ansible/playbooks/Install_Wordpress_Separate_DB_and_Web.yml': line 69, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
Does anyone have an idea what my mistake is, please?
I believe what you need is in hostvars. Note the bracket syntax: hostvars[Database_server_name] dereferences the variable, whereas hostvars.Database_server_name would look up a host literally named "Database_server_name":

- name: Database server ip address
  set_fact:
    Database_server_ip: "{{ hostvars[Database_server_name].ansible_default_ipv4.address }}"
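A minimal sketch of the same idea for both servers (assuming both hosts are in the play and facts have been gathered for them, since hostvars only contains facts that were actually collected); with hostvars there is no need for delegation or set_fact conditionals at all:

```yaml
- hosts: SVWEB-02,SVWEB-03
  vars:
    Database_server_name: "SVWEB-03"
    Web_server_name: "SVWEB-02"
  tasks:
    # Each host can read the other's gathered facts directly from hostvars.
    - debug:
        msg: "Web server IP: {{ hostvars[Web_server_name].ansible_default_ipv4.address }}"
    - debug:
        msg: "DB server IP: {{ hostvars[Database_server_name].ansible_default_ipv4.address }}"
```
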
I am getting an error while trying to retrieve key/values from the Consul KV store.
Our key/values are stored under the config/app-name/ folder; there are many keys. I want to retrieve all the key/values from Consul using Ansible.
But I am getting the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
Removing and adding folders work well; only getting the key values from the Consul cluster throws the error. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this but still the same error.
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: kv_info= "{{lookup('consul_kv','config/app-name/')}}"
All lookup plugins in Ansible are always evaluated on localhost, see the docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by fetching from Consul on the {{ target }} server.
But this lookup is actually executed on your Ansible control host (localhost), and since no ANSIBLE_CONSUL_URL is set there, you get the "No known Consul servers" error.
The consul_kv module (which you use to create/delete folders), in contrast to the consul_kv lookup plugin, is executed on the {{ target }} host, which is why those tasks work.
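Since the lookup runs in the controller process, the Consul URL must be visible to that process itself; the play-level environment: keyword only affects modules executed on targets, which is presumably why that attempt still failed. A sketch (reusing the URL from the question):

```yaml
# Run with the variable exported in the controller's shell, e.g.:
#   ANSIBLE_CONSUL_URL=http://consul-1.abcaa.com ansible-playbook pb.yml
- name: Testing consul kv on the control node
  hosts: localhost
  tasks:
    - name: show the lookups
      # The lookup now runs on localhost with a known Consul server.
      debug:
        msg: "{{ lookup('consul_kv', 'config/app-name/') }}"
```
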