Groups in my Ansible inventory do not work as expected

My YAML inventory (dev environment) looks like this:
$> more inventory/dev/hosts.yml
all:
  children:
    dmz1:
      children:
        ch:
          children:
            amq:
              hosts:
                myamqdev01.company.net: nodeId=1
                myamqdev02.company.net: nodeId=2
            smx:
              hosts:
                mysmxdev01.company.net: nodeId=1
                mysmxdev02.company.net: nodeId=2
    intranet:
      children:
        ch:
          children:
            amq:
              hosts:
                amqintradev01.company.net: nodeId=1
                amqintradev02.company.net: nodeId=2
            smx:
              hosts:
                smxintradev01.company.net: nodeId=1
                smxintradev02.company.net: nodeId=2
and when I try to ping (with ansible -i inventory/dev -m ping all) I get the error:
children: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname children:: Temporary failure in name resolution",
    "unreachable": true
}
ch: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ch:: Temporary failure in name resolution",
    "unreachable": true
}
all: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname all:: Temporary failure in name resolution",
    "unreachable": true
}
lan: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lan:: Temporary failure in name resolution",
    "unreachable": true
}
amq: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname amq:: Temporary failure in name resolution",
    "unreachable": true
}
hosts: | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname hosts:: Temporary failure in name resolution",
    "unreachable": true
}
etc...
For troubleshooting, when I execute ansible -i inventory/dev --list-hosts all I get:
  hosts (16):
    all:
    children:
    dmz1:
    ch:
    amq:
    hosts:
    myamqdev01.company.net:
    myamqdev02.company.net:
    smx:
    mysmxdev01.company.net:
    mysmxdev02.company.net:
    intranet:
    amqintradev01.company.net:
    amqintradev02.company.net:
    smxintradev01.company.net:
    smxintradev02.company.net:
I think this command should only list the hosts, no?
I am not sure what the issue is; I followed the example from the official docs, so I think there's a problem with my hosts.yml file, but I cannot see what I am missing.
UPDATE
When I correct the nodeId variables according to the answer, listing all works fine. However, filtering by an intermediate parent does not work:
ansible -i inventory/dev --list-hosts intranet
does not return only the intranet hosts but all of them.
When I try ansible -i inventory/dev --list-hosts amq,
only the amq servers are properly returned.

Your inventory file does not respect Ansible's format, so the yaml inventory plugin fails to parse it. Since Ansible tries the inventory plugins in order, it eventually succeeds with the ini format (for reasons I don't really get) and yields one host for each line in your file.
Moreover, you have to understand that a group (e.g. smx) contains all the hosts that have been defined in it anywhere in the inventory, wherever they have been defined (e.g. as children of intranet or dmz1).
So in your actual inventory structure, dmz1 and intranet both contain as children the groups amq and smx, which themselves contain all hosts defined in either the intranet or dmz1 section. Hence the groups all, dmz1 and intranet are all equivalent here and contain every host in the inventory.
Here is an inventory fixing your format problems and with a slightly different structure to address your expectation in terms of group targeting:
---
all:
  children:
    dmz1:
      hosts:
        myamqdev01.company.net:
          nodeId: 1
        myamqdev02.company.net:
          nodeId: 2
        mysmxdev01.company.net:
          nodeId: 1
        mysmxdev02.company.net:
          nodeId: 2
    intranet:
      hosts:
        amqintradev01.company.net:
          nodeId: 1
        amqintradev02.company.net:
          nodeId: 2
        smxintradev01.company.net:
          nodeId: 1
        smxintradev02.company.net:
          nodeId: 2
    amq:
      hosts:
        myamqdev01.company.net:
        myamqdev02.company.net:
        amqintradev01.company.net:
        amqintradev02.company.net:
    smx:
      hosts:
        mysmxdev01.company.net:
        mysmxdev02.company.net:
        smxintradev01.company.net:
        smxintradev02.company.net:
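As a quick sanity check (my addition, not part of the original answer), ansible-inventory can show the variables parsed for a single host; with the inventory above it should report the nodeId set on that host:
$ ansible-inventory -i dev/ --host myamqdev01.company.net
{
    "nodeId": 1
}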
And here are a few examples of how to target the desired groups of machines:
$ # All machines
$ ansible -i dev/ --list-hosts all
  hosts (8):
    myamqdev01.company.net
    myamqdev02.company.net
    mysmxdev01.company.net
    mysmxdev02.company.net
    amqintradev01.company.net
    amqintradev02.company.net
    smxintradev01.company.net
    smxintradev02.company.net

$ # Intranet
$ ansible -i dev/ --list-hosts intranet
  hosts (4):
    amqintradev01.company.net
    amqintradev02.company.net
    smxintradev01.company.net
    smxintradev02.company.net

$ # All smx machines
$ ansible -i dev/ --list-hosts smx
  hosts (4):
    mysmxdev01.company.net
    mysmxdev02.company.net
    smxintradev01.company.net
    smxintradev02.company.net

$ # amq machines only on dmz1
$ # 1. Only with patterns
$ ansible -i dev/ --list-hosts 'amq:&dmz1'
  hosts (2):
    myamqdev01.company.net
    myamqdev02.company.net

$ # 2. Using limit
$ ansible -i dev/ --list-hosts amq -l dmz1
  hosts (2):
    myamqdev01.company.net
    myamqdev02.company.net
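One more pattern that may be handy here (my addition, not in the original answer): '!' excludes a group, so with the inventory above this should give the amq machines that are not in dmz1:
$ # amq machines NOT on dmz1
$ ansible -i dev/ --list-hosts 'amq:!dmz1'
  hosts (2):
    amqintradev01.company.net
    amqintradev02.company.net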

Related

Use an ODBC connection to a managed Azure SQL Database

I need to run a SQL query on Azure SQL Database from an Ansible playbook.
My task is:
- name: Sql server - rights
  vars:
    sql_groups:
      - { group_name: "{{ reader_group }}", db_access: "db_datareader" }
      - { group_name: "{{ contributer_group }}", db_access: "db_datawriter" }
      - { group_name: "{{ owner_group }}", db_access: "db_owner" }
  community.general.odbc:
    dsn: "Driver={ODBC Driver 13 for SQL Server};Server=tcp:{{ sql_server_host }},1433;Database={{ sql_server_db }};Uid={{ mssql_login_user }};Pwd={{ mssql_login_password }};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;Authentication=ActiveDirectoryPassword"
    query: |
      CREATE USER ["{{ group_name }}"] FROM EXTERNAL PROVIDER
      EXEC sp_addrolemember '{{ db_access }}', '{{ group_name }}'
  loop: "{{ sql_groups }}"
When I run the playbook with the following command, Ansible tries to communicate via SSH.
ansible-playbook -i inventory.yml playbook.yml --check
The error is:
[WARNING]: Unhandled error in Python interpreter discovery for host XXXXXX: Failed to connect to the host via ssh: ssh: Could not resolve hostname XXXXXX: Name or service not known
fatal: [XXXXXX]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"XXXXXX\". Make sure this host can be reached over ssh: ssh: Could not resolve hostname XXXXXX: Name or service not known\r\n", "unreachable": true}
I think I need to force the use of an ODBC connection with something like below (this example is for a Windows server):
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore
What should I do? ansible_port: 1433? And what other parameters?
I don't see how to communicate via ODBC.
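A hedged sketch of one possible direction (my addition; the excerpt contains no answer to this question): community.general.odbc is an ordinary module that Ansible executes on the inventory host, which is why SSH is attempted at all. Delegating the task to localhost makes it run on the control node instead, where pyodbc and the ODBC driver would have to be installed. Note also that inside a loop the current element is referenced as item.group_name, not group_name:

- name: Sql server - rights (run the ODBC call from the controller)
  community.general.odbc:
    # same DSN as above, abbreviated here
    dsn: "Driver={ODBC Driver 13 for SQL Server};Server=tcp:{{ sql_server_host }},1433;..."
    query: |
      CREATE USER ["{{ item.group_name }}"] FROM EXTERNAL PROVIDER
      EXEC sp_addrolemember '{{ item.db_access }}', '{{ item.group_name }}'
  loop: "{{ sql_groups }}"
  delegate_to: localhost   # execute pyodbc on the control node; no SSH to the DB host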

Why do I get the below error with the Ansible cisco.asa module "cisco.asa.asa_acls"?

I'm running a basic ACL creation with Ansible but get this error:
TASK [Merge provided configuration with device configuration] ********************************************************************
fatal: [192.168.0.140]: FAILED! => {"changed": false, "msg": "sh access-list\r\n ^\r\nERROR: % Invalid input detected at '^' marker.\r\n\rASA> "}
---
- name: "ACL TEST 1"
  hosts: ASA
  connection: local
  gather_facts: false
  collections:
    - cisco.asa
  tasks:
    - name: Merge provided configuration with device configuration
      cisco.asa.asa_acls:
        config:
          acls:
            - name: purple_access_in
              acl_type: extended
              aces:
                - grant: permit
                  line: 1
                  protocol_options:
                    tcp: true
                  source:
                    address: 10.0.3.0
                    netmask: 255.255.255.0
                  destination:
                    address: 52.58.110.120
                    netmask: 255.255.255.255
                  port_protocol:
                    eq: https
                  log: default
        state: merged
The hosts file is:
[ASA]
192.168.0.140
[ASA:vars]
ansible_user=admin
ansible_ssh_pass=admin
ansible_become_method=enable
ansible_become_pass=cisco
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
ansible_python_interpreter=python
There's not much to the code, but I am struggling to get past the error. I don't even need the "sh access-list" output.
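A hedged observation (my addition; the excerpt contains no answer to this question): the "ASA>" prompt in the error suggests the session is still in user EXEC mode, where "sh access-list" is rejected. The hosts file sets ansible_become_method=enable and ansible_become_pass but never turns privilege escalation on, so one thing worth trying is enabling it explicitly in the group vars:

[ASA:vars]
ansible_become=yes
ansible_become_method=enable
ansible_become_pass=cisco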

Ansible Dynamic Inventory with OpenStack

I am deploying several Linux hosts to an OpenStack environment and attempting to configure them with Ansible. I'm having some difficulty with the stock dynamic inventory script from https://github.com/ansible/ansible/blob/devel/contrib/inventory/openstack.py
If I run ansible with a static hosts file, everything works fine
# inventory/static-hosts
localhost ansible_connection=local
linweb01 ansible_host=10.1.1.101
% ansible linweb01 -m ping -i ./inventory/static-hosts \
    --extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
But if I use the dynamic inventory, the host isn't found
% ansible linweb01 -m ping -i ./inventory/openstack.py \
    --extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linweb01: Name or service not known\r\n",
    "unreachable": true
}
When I run the inventory script manually, the host is found and the address returned is correct
% ./inventory/openstack.py --host linweb01
[...]
    "name": "linweb01",
    "networks": {},
    "os-extended-volumes:volumes_attached": [],
    "power_state": 1,
    "private_v4": "10.1.1.101",
[...]
My guess is that the inventory script doesn't know to use the "private_v4" value for the IP address, although I can't seem to find a reference for this.
How do I get ansible to use the "private_v4" value returned by the inventory script as the "ansible_host" value for the host?
A quick look into the code suggests that the IP address is expected to be in the interface_ip key:
hostvars[key] = dict(
    ansible_ssh_host=server['interface_ip'],
    ansible_host=server['interface_ip'],
    openstack=server)
If you need a workaround, you can try adding this to your group_vars/all.yml:
ansible_host: "{{ private_v4 }}"
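For context (my addition, assuming a conventional layout): group_vars/ is looked up relative to the inventory source, so it would sit next to the script, and ansible_host is then resolved per host from its private_v4 variable:

inventory/
├── openstack.py
└── group_vars/
    └── all.yml        # contains: ansible_host: "{{ private_v4 }}"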

Ansible: Can I execute a role from the command line?

Suppose I have a role called "apache"
Now I want to execute that role on host 192.168.0.10 from the command line on the Ansible control host:
ansible-playbook -i "192.168.0.10" --role "path to role"
Is there a way to do that?
With ansible 2.7 you can do this:
$ ansible localhost --module-name include_role --args name=<role_name>
localhost | SUCCESS => {
    "changed": false,
    "include_variables": {
        "name": "<role_name>"
    }
}
localhost | SUCCESS => {
    "msg": "<role_name>"
}
This will run the role from /path/to/ansible/roles or the configured roles path.
Read more here:
https://github.com/ansible/ansible/pull/43131
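A small extension (my addition, not from the original answer): variables for the role can be supplied on the same command line with -e, as in any ad-hoc invocation; role_var here is a hypothetical variable name:
$ ansible localhost -m include_role -a name=<role_name> -e role_var=value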
I am not aware of this feature, but you can use tags to run just one role from your playbook.
roles:
  - { role: 'mysql', tags: 'mysql' }
  - { role: 'apache', tags: 'apache' }
ansible-playbook webserver.yml --tags "apache"
There is no such thing in Ansible, but if this is a frequent use case for you, try this script.
Put it somewhere within your searchable PATH under the name ansible-role:
#!/bin/bash
if [[ $# -lt 2 ]]; then
  cat <<HELP
Wrapper script for ansible-playbook to apply a single role.

Usage: $0 <host-pattern> <role-name> [ansible-playbook options]

Examples:
  $0 dest_host my_role
  $0 custom_host my_role -i 'custom_host,' -vv --check
HELP
  exit
fi

HOST_PATTERN=$1
shift
ROLE=$1
shift

echo "Trying to apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."

export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"

# Feed a generated one-play playbook to ansible-playbook via stdin;
# "$@" forwards any remaining ansible-playbook options.
ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
  roles:
    - $ROLE
END
You could also check the ansible-toolbox repository. It will allow you to use something like
ansible-role --host 192.168.0.10 --gather --user centos --become my-role
I have written a small Ansible plugin, called auto_tags, that dynamically generates for each role in your playbook a tag of the same name. You can find it here.
After installing it (instructions are in the gist above) you could then execute a specific role with:
ansible-playbook -i "192.168.0.10" --tags "name_of_role"
Have you tried this? It's super cool. I'm using an 'update-os' role instead of 'apache' to give a more meaningful example. I have a role at ./roles/update-os/ and in ./ I add a file called ./role-update-os.yml which looks like:
#!/usr/bin/ansible-playbook
---
- hosts: all
  gather_facts: yes
  become: yes
  roles:
    - update-os
Make this file executable (chmod +x role-update-os.yml). Now you can run it and limit it to whatever you have in your inventory: ./role-update-os.yml -i inventory-dev --limit 192.168.0.10. You can pass group names to --limit as well:
--limit web,db (web and db are groups defined in your inventory)
--limit 192.168.0.10,192.168.0.201
$ cat inventory-dev
[web]
192.168.0.10
[db]
192.168.0.201
Note that you can configure SSH keys and a sudoers policy so that this can execute without having to type passwords, which is ideal for automation, but there are security implications; you have to analyze your environment to see whether it's suitable.
Since Ansible 2.4, two options are available: import_role and include_role.
wohlgemuth@leela:~/workspace/rtmtb-ansible/kvm-cluster$ ansible localhost -m import_role -a name=rtmtb
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | CHANGED => {
    "changed": true,
    "checksum": "d31b41e68997e1c7f182bb56286edf993146dba1",
    "dest": "/root/.ssh/id_rsa.github",
    "gid": 0,
    "group": "root",
    "md5sum": "b7831c4c72f3f62207b2b96d3d7ed9b3",
    "mode": "0600",
    "owner": "root",
    "size": 3389,
    "src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.46-139127672211209/source",
    "state": "file",
    "uid": 0
}
localhost | CHANGED => {
    "changed": true,
    "checksum": "1972ebcd25363f8e45adc91d38405dfc0386b5f0",
    "dest": "/root/.ssh/config",
    "gid": 0,
    "group": "root",
    "md5sum": "f82552a9494e40403da4a80e4c528781",
    "mode": "0644",
    "owner": "root",
    "size": 147,
    "src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.99-214274671218454/source",
    "state": "file",
    "uid": 0
}
ansible.builtin.import_role – Import a role into a play
ansible.builtin.include_role – Load and execute a role
Yes, import_role is an Ansible module and as such it may be invoked through the ansible command. The following executes the role pki on my_server:
ansible my_server -m import_role \
-a "name=pki tasks_from=gencert" \
-e cn=etcdctl \
-e extended_key_usage=clientAuth
You can create the playbook files from the command line:
Install the role (if not already installed)
ansible-galaxy install git+https://github.com/user/apache-role.git
Create playbook and hosts files
cat >> playbook.yml <<EOL
---
- name: Run apache
hosts: all
roles:
- apache-role
EOL
cat >> hosts <<EOL
192.168.0.10
EOL
Run ansible
ansible-playbook playbook.yml -i hosts
Delete the files
rm playbook.yml hosts

Ansible AWS Dynamic Inventory Groups Fail to Match Play Hosts

I'm having trouble getting my Ansible play's hosts to match the AWS dynamic groups that are coming back for my dynamic inventory. Let's break this problem down.
Given this output of ec2.py --list:
$ ./devops/inventories/dynamic/ec2.py --list
{
  "_meta": {
    "hostvars": {
      "54.37.213.132": {
        "ec2__in_monitoring_element": false,
        "ec2_ami_launch_index": "0",
        "ec2_architecture": "x86_64",
        "ec2_client_token": "",
        "ec2_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
        "ec2_ebs_optimized": false,
        "ec2_eventsSet": "",
        "ec2_group_name": "",
        "ec2_hypervisor": "xen",
        "ec2_id": "i-d352c50b",
        "ec2_image_id": "ami-63b25203",
        "ec2_instance_profile": "",
        "ec2_instance_type": "t2.micro",
        "ec2_ip_address": "54.37.213.132",
        "ec2_item": "",
        "ec2_kernel": "",
        "ec2_key_name": "peaker-v1-keypair",
        "ec2_launch_time": "2016-03-11T20:45:44.000Z",
        "ec2_monitored": false,
        "ec2_monitoring": "",
        "ec2_monitoring_state": "disabled",
        "ec2_persistent": false,
        "ec2_placement": "us-west-2a",
        "ec2_platform": "",
        "ec2_previous_state": "",
        "ec2_previous_state_code": 0,
        "ec2_private_dns_name": "ip-172-31-43-132.us-west-2.compute.internal",
        "ec2_private_ip_address": "172.31.43.132",
        "ec2_public_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
        "ec2_ramdisk": "",
        "ec2_reason": "",
        "ec2_region": "us-west-2",
        "ec2_requester_id": "",
        "ec2_root_device_name": "/dev/xvda",
        "ec2_root_device_type": "ebs",
        "ec2_security_group_ids": "sg-824ac0e5",
        "ec2_security_group_names": "peaker-v1-security-group",
        "ec2_sourceDestCheck": "true",
        "ec2_spot_instance_request_id": "",
        "ec2_state": "running",
        "ec2_state_code": 16,
        "ec2_state_reason": "",
        "ec2_subnet_id": "subnet-b96e1bce",
        "ec2_tag_Environment": "v1",
        "ec2_tag_Name": "peaker-v1-ec2",
        "ec2_virtualization_type": "hvm",
        "ec2_vpc_id": "vpc-5fe8ae3a"
      }
    }
  },
  "ec2": [
    "54.37.213.132"
  ],
  "tag_Environment_v1": [
    "54.37.213.132"
  ],
  "tag_Name_peaker-v1-ec2": [
    "54.37.213.132"
  ],
  "us-west-2": [
    "54.37.213.132"
  ]
}
I should be able to write a playbook that matches some of the groups coming back:
---
# playbook
- name: create s3 bucket with policy
  hosts: localhost
  gather_facts: yes
  tasks:
    - name: s3
      s3:
        bucket: "fake"
        region: "us-west-2"
        mode: create
        permission: "public-read-write"
      register: s3_output
    - debug: msg="{{ s3_output }}"

- name: test on remote machine
  hosts: ec2
  gather_facts: yes
  tasks:
    - name: test on remote machine
      file:
        dest: "/home/ec2-user/test/"
        owner: ec2-user
        group: ec2-user
        mode: 0700
        state: directory
      become: yes
      become_user: ec2-user
However, when I run --list-hosts for these plays, it's obvious that the play hosts are not matching anything coming back:
$ ansible-playbook -i devops/inventories/dynamic/ec2/ec2.py devops/build_and_bundle_example.yml --ask-vault-pass --list-hosts
Vault password:
 [WARNING]: provided hosts list is empty, only localhost is available

playbook: devops/build_and_bundle_example.yml

  play #1 (localhost): create s3 bucket with policy    TAGS: []
    pattern: [u'localhost']
    hosts (1):
      localhost

  play #2 (ec2): test on remote machine    TAGS: []
    pattern: [u'ec2']
    hosts (0):
The quick fix for what you're doing: change hosts: localhost in your playbook to hosts: all.
It would never work with the dynamic inventory alone if you're going to keep hosts: localhost in your playbook...
If so, you must combine dynamic and static inventories. Create a file at ./devops/inventories/dynamic/static.ini (on the same level as ec2.py and ec2.ini) and put in this content:
[localhost]
localhost

[ec2_tag_Name_peaker_v1_ec2]

[aws-hosts:children]
localhost
ec2_tag_Name_peaker_v1_ec2
After that, you will be able to run a quick check:
ansible -i devops/inventories/dynamic/ec2 aws-hosts -m ping
and your playbook itself:
ansible-playbook -i devops/inventories/dynamic/ec2 \
devops/build_and_bundle_example.yml --ask-vault-pass
NOTE: devops/inventories/dynamic/ec2 is a path to the folder, but it will be automatically resolved into a hybrid dynamic & static inventory with access to the aws-hosts group name.
In fact, this isn't the best use of inventory, but it's important to understand that by combining dynamic & static inventories you're just appending new group names to a particular dynamic host:
ansible -i devops/inventories/dynamic/ec2 all -m debug \
-a "var=hostvars[inventory_hostname].group_names"
