I'm trying to use the nmap plugin in Ansible to create a dynamic inventory, and then group the hosts the plugin returns. Unfortunately, I'm missing something, because I can't seem to get a group created.
In this scenario, I have a couple of hosts named unknownxxxxxxxx that I would like to group.
plugin: nmap
strict: false
address: 10.0.1.0/24
ports: no
groups:
  unknown: "'unknown' in hostname"
I run my plugin -
ansible-inventory -i nmap.yml --export --output=inv --list
but the return is always the same...
By now, I've resorted to guessing possible var names
host, hosts, hostnames, hostname, inventory_hostname, hostvars, host.fqdn, and the list goes on and on...
I'm obviously missing something basic, but I can't seem to find anything via search that has yielded any results.
Can someone help me understand what I'm doing wrong with Jinja?
Perhaps I need to use compose: and keyed_groups: ?
I'm obviously missing something basic...
I'm not sure that you are. I agree that according to the documentation the nmap plugin is supposed to work the way you're trying to use it, but like you I'm not able to get the groups or compose keys to work as described.
Fortunately, we can work around that problem by directly using the constructed inventory plugin.
We'll need to use an inventory directory, rather than an inventory file, since we need multiple inventory files. We'll put the following into our ansible.cfg:
[defaults]
inventory = inventory
And then we'll create a directory inventory, into which we'll place two files. First, we'll put your nmap inventory in inventory/10nmap.yml. It will look like this:
plugin: nmap
strict: false
address: 10.0.1.0/24
ports: false
And then we'll put the configuration for the constructed plugin to inventory/20constructed.yml:
plugin: constructed
strict: False
groups:
  unknown: "'unknown' in inventory_hostname"
We've named the files 10nmap.yml and 20constructed.yml because we need to ensure that the constructed plugin runs after the nmap plugin (also, we're checking against inventory_hostname here because that's the canonical name of a host in your Ansible inventory).
With all this in place, you should see the behavior you're looking for: hosts with unknown in the inventory_hostname variable will end up in the unknown group.
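If you also want to experiment with the compose and keyed_groups keys the question mentions, they can live in the same 20constructed.yml. Here is a minimal sketch; the short_name variable and the net prefix are purely illustrative and not something either plugin requires:

plugin: constructed
strict: false
compose:
  # illustrative composed variable derived from the canonical hostname
  short_name: inventory_hostname | lower
groups:
  unknown: "'unknown' in inventory_hostname"
keyed_groups:
  # illustrative: builds groups such as net_unknownxxxxxxxx from the first
  # dot-separated component of the inventory hostname
  - prefix: net
    key: inventory_hostname.split('.')[0]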
I believe that groups apply only to attributes that the plugin returns, so you should be looking at its output.
For example, running ansible-inventory -i nmap-inventory.yml --list with
---
plugin: nmap
address: 192.168.122.0/24
strict: false
ipv4: yes
ports: yes
sudo: true
groups:
  without_hostname: "'192.168.122' in name"
  with_ssh: "ports | selectattr('service', 'equalto', 'ssh')"
produces
{
    "_meta": {
        "hostvars": {
            "192.168.122.102": {
                "ip": "192.168.122.102",
                "name": "192.168.122.102",
                "ports": [
                    {
                        "port": "22",
                        "protocol": "tcp",
                        "service": "ssh",
                        "state": "open"
                    }
                ]
            },
            "192.168.122.204": {
                "ip": "192.168.122.204",
                "name": "192.168.122.204",
                "ports": [
                    {
                        "port": "22",
                        "protocol": "tcp",
                        "service": "ssh",
                        "state": "open"
                    },
                    {
                        "port": "8080",
                        "protocol": "tcp",
                        "service": "http",
                        "state": "open"
                    }
                ]
            },
            "fedora": {
                "ip": "192.168.122.1",
                "name": "fedora",
                "ports": [
                    {
                        "port": "53",
                        "protocol": "tcp",
                        "service": "domain",
                        "state": "open"
                    },
                    {
                        "port": "6000",
                        "protocol": "tcp",
                        "service": "X11",
                        "state": "open"
                    }
                ]
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "with_ssh",
            "without_hostname"
        ]
    },
    "ungrouped": {
        "hosts": [
            "fedora"
        ]
    },
    "with_ssh": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    },
    "without_hostname": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    }
}
As you can see, I'm using name and ports because the entries have these attributes. I could've also used ip.
To further clarify the point, when I run the plugin with ports: no, the with_ssh grouping filter doesn't produce anything because there are no ports in the output.
{
    "_meta": {
        "hostvars": {
            "192.168.122.102": {
                "ip": "192.168.122.102",
                "name": "192.168.122.102"
            },
            "192.168.122.204": {
                "ip": "192.168.122.204",
                "name": "192.168.122.204"
            },
            "fedora": {
                "ip": "192.168.122.1",
                "name": "fedora"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "without_hostname"
        ]
    },
    "ungrouped": {
        "hosts": [
            "fedora"
        ]
    },
    "without_hostname": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    }
}
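For reference, the inventory file for that second run is the same as the one above, just with ports turned off; roughly:

---
plugin: nmap
address: 192.168.122.0/24
strict: false
ipv4: yes
ports: no
sudo: true
groups:
  without_hostname: "'192.168.122' in name"
  with_ssh: "ports | selectattr('service', 'equalto', 'ssh')"

With strict: false the with_ssh expression simply evaluates to nothing instead of failing, which is why that group silently disappears from the output.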
I'm trying to transform NFS exports, described in a complex data structure, into the config option accepted by the nfs-server daemon, which will later be used in Ansible.
I have:
nfs_exports:
  - path: /export/home
    state: present
    options:
      - clients: "192.168.0.0/24"
        permissions:
          - "rw"
          - "sync"
          - "no_root_squash"
          - "fsid=0"
  - path: /export/public
    state: present
    options:
      - clients: "192.168.0.0/24"
        permissions:
          - "rw"
          - "sync"
          - "root_squash"
          - "fsid=0"
      - clients: "*"
        permissions:
          - "ro"
          - "async"
          - "all_squash"
          - "fsid=1"
which must become:
[
    {
        "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)",
        "path": "/export/home",
        "state": "present"
    },
    {
        "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0) *(ro,async,all_squash,fsid=1)",
        "path": "/export/public",
        "state": "present"
    }
]
So far, using {{ nfs_exports | json_query(query) }} with
query: "[].{path:path,state:state,options:options.join(` `,[].join(``,[clients,`(`,join(`,`,permissions),`)`]))}"
I was able to get
{
    "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)",
    "path": "/export/home",
    "state": "present"
},
{
    "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0)*(ro,async,all_squash,fsid=1)",
    "path": "/export/public",
    "state": "present"
}
It's probably simple, but I can't get past that last options join; the space ' ' gets removed.
So if someone knows the correct query, an additional explanation would be much appreciated.
Given the query:
[].{ path: path, state: state, options: join(' ', options[].join('', [clients, '(', join(',', permissions), ')'])) }
On the JSON
{
    "nfs_exports": [
        {
            "path": "/export/home",
            "state": "present",
            "options": [
                {
                    "clients": "192.168.0.0/24",
                    "permissions": [
                        "rw",
                        "sync",
                        "no_root_squash",
                        "fsid=0"
                    ]
                }
            ]
        },
        {
            "path": "/export/public",
            "state": "present",
            "options": [
                {
                    "clients": "192.168.0.0/24",
                    "permissions": [
                        "rw",
                        "sync",
                        "root_squash",
                        "fsid=0"
                    ]
                },
                {
                    "clients": "*",
                    "permissions": [
                        "ro",
                        "async",
                        "all_squash",
                        "fsid=1"
                    ]
                }
            ]
        }
    ]
}
It would give you your expected output:
[
    {
        "path": "/export/home",
        "state": "present",
        "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)"
    },
    {
        "path": "/export/public",
        "state": "present",
        "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0) *(ro,async,all_squash,fsid=1)"
    }
]
Please mind: the string literal `` won't work for a string containing a space character, because, as pointed out in the documentation, it will be parsed as JSON:
A literal expression is an expression that allows arbitrary JSON objects to be specified
Source: https://jmespath.org/specification.html#literal-expressions
This is quite easy when you get to the point of:
[].{ path: path, state: state, options: options[].join('', [clients, '(', join(',', permissions), ')']) }
Which is something you seem to have achieved, and that gives
[
    {
        "path": "/export/home",
        "state": "present",
        "options": [
            "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)"
        ]
    },
    {
        "path": "/export/public",
        "state": "present",
        "options": [
            "192.168.0.0/24(rw,sync,root_squash,fsid=0)",
            "*(ro,async,all_squash,fsid=1)"
        ]
    }
]
This is easy because you are then just left with joining the whole options array with a space as the glue character.
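For completeness, here is a minimal sketch of how this could be wired into a play, assuming the nfs_exports variable from the question is already defined (the data is trimmed to one export for brevity, and the name nfs_export_lines is only illustrative):

---
- hosts: localhost
  gather_facts: false
  vars:
    # trimmed to a single export for brevity; in practice nfs_exports
    # would hold the full structure from the question
    nfs_exports:
      - path: /export/home
        state: present
        options:
          - clients: "192.168.0.0/24"
            permissions: ["rw", "sync", "no_root_squash", "fsid=0"]
  tasks:
    - name: build the flattened export definitions
      set_fact:
        # nfs_export_lines is an illustrative name; the json_query filter
        # needs the jmespath Python library on the controller
        nfs_export_lines: "{{ nfs_exports | json_query(nfs_query) }}"
      vars:
        nfs_query: "[].{ path: path, state: state, options: join(' ', options[].join('', [clients, '(', join(',', permissions), ')'])) }"

    - name: show the result
      debug:
        var: nfs_export_lines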
I'm trying to find out whether setup, or any combination of Ansible modules, can return the LVM LV / VG name when the mount point name is given as input to the playbook. As of now, the only option I can see in setup is to fetch the device name ("device": "/dev/mapper/rhel-root") using ansible_mounts.device, but splitting the LV and VG name out of "/dev/mapper/rhel-root" is another challenge. Kindly suggest any options.
This has actually been possible since 2015 through the ansible_lvm facts returned by setup, which are part of the hardware subset.
To get a result, you need to run setup as root and the lvm utilities must be installed on the target.
You can make a quick test on your local machine (if relevant, adapt to whatever target where you have privilege escalation rights):
ansible localhost -b -m setup \
-a 'gather_subset=!all,!min,hardware' -a 'filter=ansible_lvm'
Here is an example output from the first test vm I could connect to:
localhost | SUCCESS => {
    "ansible_facts": {
        "ansible_lvm": {
            "lvs": {
                "docker_data": {
                    "size_g": "80.00",
                    "vg": "docker"
                },
                "root": {
                    "size_g": "16.45",
                    "vg": "system"
                },
                "swap": {
                    "size_g": "3.00",
                    "vg": "system"
                }
            },
            "pvs": {
                "/dev/sda2": {
                    "free_g": "0.05",
                    "size_g": "19.50",
                    "vg": "system"
                },
                "/dev/sdb": {
                    "free_g": "0",
                    "size_g": "80.00",
                    "vg": "docker"
                }
            },
            "vgs": {
                "docker": {
                    "free_g": "0",
                    "num_lvs": "1",
                    "num_pvs": "1",
                    "size_g": "80.00"
                },
                "system": {
                    "free_g": "0.05",
                    "num_lvs": "2",
                    "num_pvs": "1",
                    "size_g": "19.50"
                }
            }
        }
    },
    "changed": false
}
This would be a very simple playbook doing the same as what @Zeitounator suggested:
---
- hosts: localhost
  become: true
  tasks:
    - name: detect lvm setup
      setup:
        filter: "ansible_lvm"
        gather_subset: "!all,!min,hardware"
      register: lvm_facts

    - debug: var=lvm_facts
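If the actual goal is to go from a given mount point to its VG/LV, one possible sketch is below. It assumes the usual /dev/mapper/<vg>-<lv> device naming and no hyphens inside the VG or LV names (device-mapper escapes hyphens in names as double dashes, which this naive split does not handle); mount_point and the derived variable names are only illustrative:

---
- hosts: localhost
  become: true
  vars:
    # the mount point given as input; /boot, /home, etc. work the same way
    mount_point: /
  tasks:
    - name: gather mount and lvm facts
      setup:
        gather_subset: "!all,!min,hardware"

    - name: find the device backing the mount point
      set_fact:
        mount_device: "{{ (ansible_mounts | selectattr('mount', 'equalto', mount_point) | list | first).device }}"

    - name: split /dev/mapper/<vg>-<lv> into VG and LV (naive, assumes no hyphens in the names)
      set_fact:
        vg_name: "{{ (mount_device | basename).split('-') | first }}"
        lv_name: "{{ (mount_device | basename).split('-') | last }}"
      when: mount_device is search('^/dev/mapper/')

    - debug:
        msg: "{{ mount_point }} -> VG {{ vg_name | default('n/a') }}, LV {{ lv_name | default('n/a') }}"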
I'm looking for the correct way to construct an inventory where groups share the same vars.
Here is my inventory
{
    "groupA": {
        "hosts": [
            "192.168.1.1"
        ]
    },
    "groupB": {
        "hosts": [
            "192.168.1.2"
        ]
    },
    "vars": {
        "ansible_ssh_user": "admin",
        "ansible_ssh_private_key_file": "/admin.pem",
        "ansible_become": "yes",
        "ansible_become_method": "sudo"
    }
}
I want both groupA and groupB to use the same vars declared above.
Moreover, how can I specify in the playbook to run against both groupA and groupB? The following doesn't seem to work:
hosts: groupA, groupB
[UPDATE] Below is the correct construct after getting support from Konstantin Suvorov.
{
    "groupA": {
        "hosts": [
            "192.168.1.1"
        ]
    },
    "groupB": {
        "hosts": [
            "192.168.1.2"
        ]
    },
    "root": {
        "children": [
            "groupA",
            "groupB"
        ],
        "vars": {
            "ansible_ssh_user": "admin"
        }
    }
}
Drop your vars into some dummy group that is parent to both groups:
"root": {
    "children": ["groupA", "groupB"],
    "vars": {
        "ansible_ssh_user": "admin"
    }
},
The correct pattern is hosts: groupA:groupB or hosts: group[AB].
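For example, a play targeting both groups could look like this (the task is just a placeholder):

---
- hosts: groupA:groupB   # or: hosts: group[AB]
  gather_facts: false
  tasks:
    - name: show which host we are on
      debug:
        var: inventory_hostname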
Newbie to Microservices here.
I have been looking into developing a microservice with Spring Actuator while using Consul for service discovery and failure recovery.
I have configured a cluster as explained in the Consul documentation.
Now what I'm trying to do is configure a Consul watch that triggers when any of my services goes down and executes a shell script to restart the service. Following is my configuration file.
{
    "bind_addr": "127.0.0.1",
    "datacenter": "dc1",
    "encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
    "data_dir": "/data",
    "log_level": "INFO",
    "enable_syslog": true,
    "enable_debug": true,
    "enable_script_checks": true,
    "ui": true,
    "node_name": "SpringConsulClient",
    "server": false,
    "service": {
        "name": "Apache",
        "tags": ["HTTP"],
        "port": 8080,
        "check": {
            "script": "curl localhost >/dev/null 2>&1",
            "interval": "10s"
        }
    },
    "rejoin_after_leave": true,
    "watches": [
        {
            "type": "service",
            "handler": "/Consul-Script.sh"
        }
    ]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It has an example of how you can specify it:
{
    "type": "service",
    "service": "redis",
    "args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property but instead takes the path to the script as an argument. And one more thing:
It requires the "service" parameter
It seems that in your case you need to specify it as follows:
"watches": [
    {
        "type": "service",
        "service": "Apache",
        "args": ["/fully/qualified/path/to/Consul-Script.sh"]
    }
]
I am trying to run a Consul container on each of my Mesos slave nodes.
With Marathon I have the following JSON script:
{
    "id": "consul-agent",
    "instances": 10,
    "constraints": [["hostname", "UNIQUE"]],
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "consul",
            "privileged": true,
            "network": "HOST"
        }
    },
    "args": ["agent", "-bind", "$MESOS_SLAVE_IP", "-retry-join", "$MESOS_MASTER_IP"]
}
However, it seems that Marathon treats the args as plain text.
That's why I always got errors:
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Failed to parse advertise address!
So I just wonder if there is any workaround so that I can start a Consul container on each of my Mesos slave nodes.
Update:
Thanks @janisz for the link.
After taking a look at the following discussions:
#3416: args in marathon file does not resolve env variables
#2679: Ability to specify the value of the hostname an app task is running on
#1328: Specify environment variables in the config to be used on each host through REST API
#1828: Support for more variables and variable expansion in app definition
as well as the Marathon documentation on Task Environment Variables.
My understanding is that:
Currently it is not possible to pass environment variables in args
Some posts indicate that one could pass environment variables in "cmd", but those are Task Environment Variables provided by Marathon, not the environment variables on your host machine.
Please correct me if I'm wrong.
You can try this.
{
    "id": "consul-agent",
    "instances": 10,
    "constraints": [["hostname", "UNIQUE"]],
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "consul",
            "privileged": true,
            "network": "HOST",
            "parameters": [
                {
                    "key": "env",
                    "value": "YOUR_ENV_VAR=VALUE"
                }
            ]
        }
    }
}
Or
{
    "id": "consul-agent",
    "instances": 10,
    "constraints": [["hostname", "UNIQUE"]],
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "consul",
            "privileged": true,
            "network": "HOST"
        }
    },
    "env": {
        "ENV_NAME": "VALUE"
    }
}