Need ansible inventory file details

Could someone please help me to write ansible inventory file to connect to bitbucket - clone a file and place into ansible machine.
Playbook
---
- hosts: bitbucketURL
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
Inventory file
[bitbucketURL]
p-bitbucket.com:5999
[bitbucketURL:vars]
ansible_connection=winrm
ansible_user=xxx
ansible_pass=<passwd>
I am getting an error while using this playbook and inventory file:
-bash-4.2$ ansible-playbook -i inv demo_draft1.yml
PLAY [bitbucketURL] *****************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************
fatal: [p-bitbucket.nl.eu.abnamro.com]: UNREACHABLE! => {"changed": false, "msg": "ssl: auth method ssl requires a password", "unreachable": true}
to retry, use: --limit #/home/c55016a/demo/demo_draft1.retry
PLAY RECAP **************************************************************************************************************************************************
p-bitbucket.nl.eu.abnamro.com : ok=0 changed=0 unreachable=1 failed=0
Please help me write a proper inventory file with the correct parameters.

You need no inventory at all. All you need to do is to set the play to execute on localhost:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
That said, the URL should point to a Git repository, not to a single file (if hello.txt is a single file).
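If hello.txt really is just a single file rather than a repository, a minimal sketch using the get_url module may be closer to what you want. The raw-file URL and the bitbucket_password variable below are assumptions; use whatever raw/download link and credentials your Bitbucket instance actually exposes:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # Download a single file instead of cloning a whole repository.
    # The .../raw/hello.txt path is hypothetical - check the raw link in your Bitbucket UI.
    - get_url:
        url: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/raw/hello.txt
        dest: /home/xxx/demo/output/hello.txt
        url_username: xxx
        url_password: "{{ bitbucket_password }}"  # hypothetical variable, e.g. loaded from ansible-vault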

Related

How to fix "Infoblox IPAM is misconfigured"?

I'm calling infoblox from ansible using the following playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Include infoblox_vault
      include_vars:
        file: 'infoblox_vault.yml'
    - name: Install infoblox-client for DDI
      pip:
        name: infoblox-client
      environment:
        HTTP_PROXY: http://our_internal_proxy.net:8080
        HTTPS_PROXY: http://our_internal_proxy.net:8080
      delegate_to: localhost
    - debug:
        msg: can I decrypt username?--> "{{ vault_infoblox_username }}"
    - name: Check if DNS Record exists
      set_fact:
        miqCreateVM_ddiRecord: "{{ lookup('nios', 'record:a', filter={'name': 'infoblox-devtest.net' }, provider={'host': 'ddi-qa.net', 'username': vault_infoblox_username, 'password': vault_infoblox_password }) }}"
    - debug:
        msg: check var miqCreateVM_ddiRecord "{{ miqCreateVM_ddiRecord }}"
    - debug:
        msg: test to see amazing vm_name! "{{ vm_name }}"
... code snipped
When the job runs, I get:
Vault password:
PLAY [localhost] ***************************************************************
TASK [Include infoblox_vault] **************************************************
ok: [127.0.0.1]
TASK [Install infoblox-client for DDI] *****************************************
ok: [127.0.0.1 -> localhost]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "can I decrypt username?--> \"manageiq-ddi\""
}
TASK [Check if DNS Record exists] **********************************************
fatal: [127.0.0.1]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here's the main part: "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."
This playbook used to work in the past. I haven't worked on it for a few months. Not sure why it's broken.
I confirmed that I can log into infoblox client manually using the credentials. I also tried manually logging the username to ensure it's decrypting the creds from the ansible-vault file. That worked fine. So it's not the credentials, not the vault decryption. It's something else.
I found the following three related topics online, but none of them seem to resolve the problem:
This one (which references adding certs to the request. Anyone know how to do this? I can't find instructions)
This one (which mentions problems from upgrading. I showed the versions mentioned in that post to our networking folks and they said the version numbers didn't correlate at all with what we have in our environment, so it's hard to evaluate whether that's relevant.)
Last one (which calls for using a property 'http_request_timeout' : None that doesn't strike me as being the problem as I can't get it to work at all.)
Any theories? Thanks!
This might not solve it for others, but this solved it for me:
- Got a new password for Ansible to use to log into Infoblox.
- Created a new Ansible Vault file containing the new Infoblox password. I also made a new password for the vault file encryption.
- Created a new credential object in Ansible so that Ansible can read the new vault file.
- Updated the playbook to use the new vault.
It works now. Something was wrong with the encryption.
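For reference, a minimal sketch of what the re-created vault file could contain before encrypting it (the variable names are the ones referenced in the playbook above; the values are placeholders):
# infoblox_vault.yml - create/encrypt with: ansible-vault create infoblox_vault.yml
vault_infoblox_username: manageiq-ddi               # placeholder value
vault_infoblox_password: "<new_infoblox_password>"  # placeholder value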

Using Netbox Ansible Modules

I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right in the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https:// URL set up with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify has been replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
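If you would rather pin the collection than force-reinstall it ad hoc, a small requirements.yml sketch (the version constraint is only an example, not a recommendation):
# requirements.yml - install with: ansible-galaxy collection install -r requirements.yml -f
collections:
  - name: netbox.netbox
    version: ">=3.0.0"   # example constraint; pick one that matches your pynetbox version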
All playbooks using API modules like the NetBox ones (the same applies to GCP or AWS modules) must target not the device being configured but the host that will execute the playbook and call the API. Most of the time this is localhost, but it can also be a dedicated node such as a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
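Note that netbox_url and netbox_token still have to be defined for this play to run; a minimal sketch using play-level vars that would slot in under the play above (both values are placeholders):
  vars:
    netbox_url: https://netbox.example.com       # placeholder URL
    netbox_token: "{{ vault_netbox_token }}"     # e.g. loaded from an ansible-vault file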

Executing python script on remote server using ansible Error

I am logged in as root@x.x.x.12 with Ansible 2.8.3 on RHEL 8.
I wish to copy a few files to root@x.x.x.13 (RHEL 8) and then execute a Python script.
I am able to copy the files successfully using Ansible. I have even copied the SSH keys, so the connection is now password-less.
But during execution of the script I get:
'fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}'
Please note that I am a novice with Ansible.
I guess there are some permission issues.
Please help me out if possible.
Thanks in anticipation.
yaml_file
---
- name: Copy_all_ansible_files_to_servers
  hosts: copy_Servers
  become: true
  become_user: root
  tasks:
    - name: copy_to_all
      copy:
        src: /home/testuser/ansible_project/{{ item }}
        dest: /root/ansible_copy/{{ item }}
        owner: root
        group: root
        mode: u=rxw,g=rxw,o=rxw
      with_items:
        - write_file.py
        - sink.txt
        - ansible_playbook_task.yaml
        - copy_codes_2.yaml
      notify:
        - Run_date_command
    - name: Run_python_script
      script: /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
      args:
        #chdir: '{{ role_path }}'
        executable: /usr/bin/python3.6
inventory_file
web_node1 ansible_host=x.x.x.13

[control]
thisPc ansible_connection=local

#Groups
[copy_Servers]
web_node1
Command: ansible-playbook copy_codes_2.yaml -i inventory.dat =>
PLAY [Copy_all_ansible_files_to_servers] *******************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [web_node1]
TASK [copy_to_all] *****************************************************************************************************************************************************************************************
ok: [web_node1] => (item=write_file.py)
ok: [web_node1] => (item=sink.txt)
ok: [web_node1] => (item=ansible_playbook_task.yaml)
ok: [web_node1] => (item=copy_codes_2.yaml)
TASK [Run_python_script] ***********************************************************************************************************************************************************************************
fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
PLAY RECAP *************************************************************************************************************************************************************************************************
web_node1 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The script module actually copies the script from the Ansible controller to the remote server before running it. Thus, when it complains about not being able to find or access the script, it's because it's trying to copy /root/ansible_copy/write_file.py from the controller to the server.
If you don't really need the script to remain on the server after you execute it, you could remove write_file.py from the copy task and change the script task to point at /home/testuser/ansible_project/write_file.py instead, as in the sketch below.
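A minimal sketch of that variant, keeping the interpreter and redirection exactly as in the original task (the paths are taken from the question):
    - name: Run_python_script
      script: /home/testuser/ansible_project/write_file.py > /root/ansible_copy/sink.txt
      args:
        executable: /usr/bin/python3.6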
Alternatively, instead of using the script module, you can run the copy of the script that has already been transferred. Because of the > redirection this needs the shell module rather than command (command does not interpret shell operators such as redirection):
- name: run write_file.py after it has already been transferred
  shell: python3.6 /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
(Note: you may need to provide the full path to your python3.6 executable.)

Error trying to create a new vm in ansible

I just started learning Ansible. It has been a pain so far. I have this code to create a new VM. I followed this tutorial.
---
- hosts: localhost
  connection: local
  tasks:
    - vsphere_guest:
        vcenter_hostname: 1.1.1.12
        username: root
        password: pasword
        guest: newvm001
        state: powered_on
        validate_certs: no
        vm_extra_config:
          vcpu.hotadd: yes
          mem.hotadd: yes
          notes: This is a test VM
          folder: MyFolder
        vm_disk:
          disk1:
            size_gb: 10
            type: thin
            datastore: storage001
        vm_nic:
          nic1:
            type: vmxnet3
            network: VM Network
            network_type: standard
        vm_hardware:
          memory_mb: 256
          num_cpus: 1
          osid: ubuntu64Guest
          scsi: paravirtual
        esxi:
          datacenter: 1.1.1.12
          hostname: 1.1.1.12
However, I keep getting this error:
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [vsphere_guest] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot find datacenter named: 9.1.142.86"}

NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'testing.retry'. [Errno 2] No such file or directory: ''

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
Why is that so? And what is the difference between a host file and an inventory file?
what is the difference between a host file and an inventory file?
They are the same. However, since you're doing everything on your local machine, it's fine that you only have localhost available.
This is your error:
TASK [vsphere_guest] *********************************************************** fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot find datacenter named: 9.1.142.86"}
It's not clear to me why you're receiving this with the playbook you've provided, as it doesn't mention that IP at all and the line I suspect is causing the problem is
datacenter: 1.1.1.12
Are you sure this is the file you're running, and that you've saved any changes you've made to it?

Accessing Ansible host variables in playbook - error

I have a simple ansible playbook with the following data
inventory
----------
[admin]
127.0.0.1
[admin:vars]
service_ip=12.1.2.1
[targets]
admin ansible_connection=local
main.yaml
----------
---
- hosts: admin
  roles:
    - admin
  tags: admin
roles/admin/tasks/main.yaml
---------------------------
- debug: msg="{{ service_ip }}"
When I run the playbook using the command ansible-playbook -i inventory main.yaml, I get the following error:
PLAY [admin] ******************************************************************
GATHERING FACTS ***************************************************************
ok: [admin]
TASK: [admin | debug msg="{{ service_ip }}"] **********************************
fatal: [admin] => One or more undefined variables: 'service_ip' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/karthi/main.yaml.retry
admin : ok=1 changed=0 unreachable=1 failed=0
Any help appreciated. Thanks.
It looks like the issue is the inclusion of
[targets]
admin ansible_connection=local
This defines a separate host literally named admin, which clashes with the [admin] group, so the play runs against that bare host and never picks up the group variable service_ip. As per the Ansible documentation, you can select the connection type and user on a per-host basis. So, changing your inventory file to have
[targets]
localhost ansible_connection=local
should fix the issue.
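For completeness, with only that [targets] entry changed, the whole inventory from the question would then read:
[admin]
127.0.0.1

[admin:vars]
service_ip=12.1.2.1

[targets]
localhost ansible_connection=local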
