Ansible does not recognize default inventory set in config

I can use my Ansible inventory file to ping all hosts if I specify it explicitly:
ansible -i mmp_default/mmp_static_default all -m ping
mmp-websockets002.prod01.company.com | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
mmp-staticweb001.prod01.company.com | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
But setting it up as a default inventory in my config doesn't work:
ansible all -m ping
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
This is my config:
sudo cat /etc/ansible/ansible.cfg
[defaults]
ansible_managed = This file is managed by Merlin. Do not edit directly.
deprecation_warnings = False
timeout=30
remote_user = centos
private_key_file = /home/centos/AWS.pem
[privilege_escalation]
become=True
become_user=root
[inventory]
## enable inventory plugins, default: 'host_list', 'script', 'yaml', 'ini'
enable_plugins = auto, ini
inventory = /home/centos/R2.4.1/merlin/mmp_default/mmp_static_default
I have my inventory listed as: inventory = /home/centos/R2.4.1/merlin/mmp_default/mmp_static_default
Why doesn't Ansible recognize the inventory file I set up in the config?

According to the documentation, the inventory setting belongs in the [defaults] section, not in [inventory]:
[defaults]
...
inventory = /home/centos/R2.4.1/merlin/mmp_default/mmp_static_default
...
[privilege_escalation]
....
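If it still doesn't seem to be picked up, Ansible can report which config file it loaded and which inventory it resolved. For example:
ansible --version | grep 'config file'
ansible-config dump --only-changed | grep -i HOST_LIST
ansible all --list-hosts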

Related

Ansible - #CLIXML error on a windows host

I have an issue when I run my playbook.yaml with -vvvv: during Gathering Facts I get the following error message:
fatal: [HOST.DOMAIN.COM]: FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"ansible.legacy.setup": {
"failed": true,
"module_stderr": "#< CLIXML\r\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
},
"msg": "The following modules failed to execute: ansible.legacy.setup\n"
I searched on the internet and tried different things, such as changing the maximum memory per shell, but nothing changed.
Does anyone know how to resolve this, or a direction I could explore to solve it? Do I need to change my config?
Playbook.yaml:
- name: Who am I
  hosts: win
  tasks:
    - name: Check my user name
      ansible.windows.win_whoami:
win.yaml (variables):
ansible_user: admin#DOMAIN.COM
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore
ansible_winrm_port: 5986
My Windows host:
OS: Microsoft Windows Server 2016 Standard
PowerShell version: 5.1.14393.5127
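For reference, a minimal connectivity check that skips fact gathering (where the failure occurs), using the same win group and WinRM variables, would look roughly like this:
- name: WinRM connectivity check
  hosts: win
  gather_facts: no
  tasks:
    - name: Ping over WinRM
      ansible.windows.win_ping: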

Ansible Tower: Custom Credential Type

I created a Custom Credential in Ansible Tower and need to use it in a role.
The credential name is custom_cred; it has two keys: custom username and custom password.
I've tried hostvars[inventory_hostname][custom_cred]['custom username'] but it's not working.
To debug your Custom Credential Types you could use
- hosts: localhost
  gather_facts: yes
  tasks:
    - name: Get environment
      debug:
        msg: "{{ ansible_env }}"
resulting in output like:
TASK [Get environment] *********************************************************
ok: [localhost] => {
"msg": [
{
...
"custom_username": "username",
"custom_password": "********",
...
}
...
if such custom test credentials are configured. This works for AWX/Tower. You can then follow up with:
Ansible Tower - How to pass credentials as an extra vars to the job template?
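If the credential type's injectors expose the two fields as environment variables (as the ansible_env output above suggests), a sketch of reading them in a task could look like this; the names custom_username and custom_password are assumed to match your injector configuration:
- name: Use the injected credential
  debug:
    msg: "User from custom credential: {{ lookup('env', 'custom_username') }}"
  # The password is available the same way via lookup('env', 'custom_password');
  # avoid printing it in real jobs, e.g. by setting no_log: true on the task.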

Ansible DellOS6 Config Backup

EDIT: After some research, I wonder if this may be related to the on_become() function, as described in this issue: https://github.com/Dell-Networking/ansible-dellos-examples/issues/12
I am trying to back up the current configurations on our Dell 2048p switches running OS6. No matter what I set the timeout to (using persistent_connection in ansible.cfg), it still errors out. I have checked the logs on the switch: it receives both the show version and show running-config commands, but the output just isn't making it back. I have looked at the Network Debug and Troubleshooting Guide, but am having trouble getting a proper error. Does anyone have this working, or spot anything I can change?
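For reference, the persistent connection timeouts I was adjusting live in ansible.cfg; a minimal sketch of that section (the values are just examples, not a recommendation):
[persistent_connection]
command_timeout = 60
connect_timeout = 60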
Version
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
Playbook
- name: Show ver
  hosts: Dell
  connection: network_cli
  gather_facts: yes
  tasks:
    - name: "Get Dell EMC OS6 Show version"
      dellos6_command:
        commands: ['show version']
      register: show_ver

    - name: "Backup config file locally"
      dellos6_config:
        backup: yes
        backup_options:
          dir_path: "/mnt/c/Users/me/Documents/Programming Projects/netBackupPlaybooks"
          filename: "{{ inventory_hostname }}"
        authorize: yes
      register: backup_dellos6_location
      when: ansible_network_os == 'dellos6'

    - debug: var=show_ver
    - debug: var=backup_dellos6_location
Inventory
[Dell]
sw1 ansible_host=10.10.10.10 ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_network_os=dellos6 ansible_connection=network_cli ansible_become_method=enable ansible_become_password=admin ansible_user=admin ansible_password=admin
sw2 ansible_host=10.10.10.11 ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_network_os=dellos6 ansible_connection=network_cli ansible_become_method=enable ansible_become_password=admin ansible_user=admin ansible_password=admin
Command
sudo ansible-playbook -i inventory.ini DellPB.yaml -vvvv
Error
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_dellos6_config_payload_pjEND4/ansible_dellos6_config_payload.zip/ansible/module_utils/network/dellos6/dellos6.py", line 86, in get_config
return _DEVICE_CONFIGS[cmd]
fatal: [sw2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"after": null,
"auth_pass": null,
"authorize": true,
"backup": true,
"backup_options": null,
"before": null,
"config": null,
"host": null,
"lines": null,
"match": "line",
"parents": null,
"password": null,
"port": null,
"provider": null,
"replace": "line",
"save": false,
"src": null,
"ssh_keyfile": null,
"timeout": null,
"update": "merge",
"username": null
}
},
"msg": "unable to retrieve current config",
"stderr": "command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.",
"stderr_lines": [
"command timeout triggered, timeout value is 30 secs.",
"See the timeout setting options in the Network Debug and Troubleshooting Guide."
]
EDIT: For anyone else experiencing this issue, it looks like it was a bug in the module that will be fixed in the latest release of Ansible:
https://github.com/ansible/ansible/pull/63272

Ansible destination /etc is not writable

I'm getting an error that says 'destination /etc not writable' when I run my playbook:
fatal: [B-mmp-edge-90c9.stg01.aws.company.net]: FAILED! => {"changed": false, "checksum": "686f224b9b97fe890014e1320f48d31cae90abc2", "msg": "Destination /etc not writable"}
fatal: [B-mmp-edge-9df4.stg01.aws.company.net]: FAILED! => {"changed": false, "checksum": "23cd3c17b1f9f84d48dc67affd5d3f4e09506b48", "msg": "Destination /etc not writable"}
My playbook common.yml just has this in it:
---
- hosts: all, !ansible
  roles:
    - common
And in roles/common/meta/main.yml I have:
---
dependencies:
  - { role: selinux_disable }
  # - { role: iptables_disable }
  - { role: motd }
  - { role: ntp }
  # - { role: epel }
  # - { role: hosts }
  - { role: users }
  # - { role: limits }
  # - { role: sysctl }
  - { role: snmp }
I'm using Ansible version 2.6.4.
I think I need to have root privileges. But I don't know how to do that in an Ansible role. Can somebody help with that?
To execute Ansible tasks with root privileges, you would need to add the following:
- name: YourPlaybookName
  hosts: YourHosts
  become: yes
Note the become: yes directive on the third line.
For more information about privilege escalation in Ansible, please take a look at: https://docs.ansible.com/ansible/latest/user_guide/become.html
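Applied to the common.yml from the question, that would be (a sketch; hosts pattern and roles unchanged):
---
- hosts: all, !ansible
  become: yes
  roles:
    - common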

Ansible way of replacing a line in config

I have a JSON config as follows:
{
  "bootstrap": true,
  "server": true,
  "datacenter": "aws",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true
}
This is on 3 servers, which are in the Ansible inventory file as:
[consul]
10.0.0.1
10.0.0.2
10.0.0.3
Now, to make the nodes join the cluster, I will have to add the following config line as well:
"start_join": ["ip_of_other_node_1", "ip_of_other_node_2"]
and this will go on each of the 3 servers.
So if 10.0.0.1 is one of the nodes in the cluster, its config will look like:
{
  "bootstrap": true,
  "server": true,
  "datacenter": "aws",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": ["10.0.0.2","10.0.0.3"]
}
I am trying to do this via Ansible as follows:
- name: Add the ip's of other servers to join cluster
  lineinfile:
    path: /etc/consul.d/server/config.json
    regexp: '^"enable_syslog"'
    insertafter: '^"enable_syslog"'
    line: '"start_join": ["{{ groups['consul'][1] }}", "{{ groups['consul'][2] }}"]'
  when: inventory_hostname == '{{ groups['consul'][0] }}'
This is not helping me: it fails with a syntax error at that line. I am not sure what the best way is to achieve something like this via Ansible, and also how to handle the case when I add more servers to the inventory.
You can use the template module to replace the whole config file on your servers instead of changing it in place with a regex. That way you can add a task that generates a new config file with the start_join field built from the hosts in your inventory (or any other, more complex configuration) using regular Jinja2 templates.
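A minimal sketch of that approach, assuming the template is saved as templates/config.json.j2 and that the members of the consul group are the addresses the nodes should join on:
templates/config.json.j2:
{
  "bootstrap": true,
  "server": true,
  "datacenter": "aws",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": {{ groups['consul'] | difference([inventory_hostname]) | to_json }}
}
Task:
- name: Render consul config with start_join
  template:
    src: config.json.j2
    dest: /etc/consul.d/server/config.json
Because start_join is built from the group membership, adding servers to the [consul] group updates the generated config without touching the task.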
