I get my network inventory for Ansible dynamically from the NetBox plugin, and I want to use this data to edit the /etc/hosts file and run a script on a particular host (librenms) to add all the network devices there.
Does anyone have an idea how to run a playbook only on the librenms host, but with the data from the NetBox inventory plugin?
Here is my inventory (the librenms host is not part of it):
plugin: netbox
api_endpoint: https://netbox.mydomain.com
token: abcdefghijklmopqrstuvwxyz
validate_certs: False
config_context: True
group_by:
  - device_roles
query_filters:
  - role: test-switch
  - has_primary_ip: True
Many, many thanks in advance!
If the host you want to run the playbook on is not part of your dynamic inventory, and at the same time you want to use variables defined in the dynamic inventory in a play for that host, you have to construct a "hybrid" inventory containing both your dynamic inventory and additional static entries.
This is covered in the documentation on inventories: Using multiple inventory sources.
Very basically you have two solutions:
Use two different inventories on the command line:
ansible-playbook -i dynamic_inventory -i static_inventory your_playbook.yml
Create a compound inventory with different sources in a directory. See chapter "Aggregating inventory sources with a directory" in the above documentation.
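As a rough sketch of that second option (the directory name, the librenms hostname, the static group name and the generated NetBox group name below are all assumptions, so adjust them to your environment): keep the NetBox plugin config and a small static file side by side in one inventory directory, then target the LibreNMS host while iterating over the device groups the plugin builds.
# inventory/netbox_inventory.yml  <- the plugin config from your question
# inventory/static_hosts          <- static entries, e.g.:
#     [monitoring]
#     librenms.mydomain.com
#
# librenms.yml -- run with: ansible-playbook -i inventory/ librenms.yml
- name: Register NetBox devices on the LibreNMS host
  hosts: librenms.mydomain.com
  become: yes
  tasks:
    - name: Add each device from the dynamic inventory to /etc/hosts
      lineinfile:
        path: /etc/hosts
        line: "{{ hostvars[item].ansible_host | default(item) }} {{ item }}"
      loop: "{{ groups['device_roles_test_switch'] | default([]) }}"
Check ansible-inventory -i inventory/ --graph first to confirm the exact group name the plugin generates for your role filter.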
Is there any way to template hostnames with the aws_ec2 plugin? This is my inventory config:
plugin: aws_ec2
keyed_groups:
  - key: tags.role
    prefix: aws
hostnames:
  - tag:Name
  - private-ip-address
groups:
  integration: group_names
Running ansible-inventory -i integration.aws_ec2.yml --graph gives me this
@all:
  |--@aws_apilb:
  |  |--apilb001.core.int.ec2.corp.pvt
  |--@aws_bastion:
  |  |--core-bastion
  |  |--ops-bastion
  |  |--webint-bastion-bastion
But I want something like this, where the private IP is included. I can get one or the other by either removing or keeping tag:Name, but I want both.
@all:
  |--@aws_apilb:
  |  |--apilb001.core.int.ec2.corp.pvt (10.111.0.111)
  |--@aws_bastion:
  |  |--core-bastion (10.111.0.112)
  |  |--ops-bastion (10.111.0.113)
  |  |--webint-bastion-bastion (10.111.0.113)
I doubt there is a way at this moment (Ansible v2.9) to do what you are looking for. However, I could find a pending issue requesting a similar feature, which could be available in Ansible v2.10.
In the meantime, you can override the existing plugin as suggested here.
Use this content for hostnames:
hostnames:
  - name: 'private-ip-address'
    separator: '_'
    prefix: 'tag:Name'
sample output:
myNameTag_myPrivate_ip
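Putting that together with the inventory config from the question gives something like the sketch below (the name/prefix/separator form of hostnames is only supported by newer versions of the aws_ec2 plugin, so treat it as untested against older releases):
plugin: aws_ec2
keyed_groups:
  - key: tags.role
    prefix: aws
hostnames:
  - name: 'private-ip-address'
    separator: '_'
    prefix: 'tag:Name'
With this, each host should show up under its aws_* group as <Name tag>_<private IP> rather than one or the other.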
I'm just getting started with ansible and have successfully been able to configure ansible to get dynamic inventory from GCP.
I am able to successfully run the ping module against all instances:
ansible -i ~/git/ansible/inventory all -m ping
I am also able to successfully run the ping module against a single instance based on hostname:
ansible -i ~/git/ansible/inventory instance-2 -m ping
I would now like to utilize tags to group instances. For example, I have a set of instances that are labeled 'env:dev':
https://www.evernote.com/l/AfcLWLkermxMyIK7GvGpQXjXdIDFVAiT_z0
I have attempted multiple variations of the command below with no luck
ansible -i ~/git/ansible/inventory tag_env:dev -m ping
How can I filter and group my dynamic inventory on GCP?
So you need to add a network tag in the instance settings, not a label. I don't know why, but gce.py doesn't return GCP labels, so you can only use network tags, which are more limited (i.e. not key=value, just a value).
For example, add the network tag 'dev' and then run ansible -i ~/git/ansible/inventory tag_dev -m ping.
Also, if you need to filter by several tags, the only way I found is:
- name: test stuff
  hosts: tag_api:&tag_{{ environment }}
  vars_files:
    - vars/{{ environment }}
    - vars/api
  tasks:
    - name: test
      command: echo "test"
Run the playbook like this: ansible-playbook -i inventory/ -u user playbook/test.yml -e environment=dev
Maybe someone knows a better way; with AWS's ec2.py I could filter in the ec2.ini config, but gce.py is very limited.
I also noticed that sometimes you need to clear the cache with gce.py --refresh-cache.
So, here's the setup.
I am passing a list of FQDNs via extra vars to the playbook, but the control machine's SSH keys were copied to those hosts using their corresponding IP addresses. Also, there is no specific group for these hosts in the inventory.
The host for this playbook is a server which may or may not be part of the list I am passing via extra vars, so the playbook doesn't have facts about these nodes at all.
So, my question is: how do I get the IP addresses of the respective hosts on the fly, so that I can loop over them and check some properties that I previously set via another playbook/role using their FQDNs? (Note: I strictly need FQDNs to set those properties.)
I have used delegate_to with the FQDNs and it works, but on the initial run it asks whether I want to continue connecting (yes/no) and waits for user input (if I answer yes, it works as expected). Is there any simpler, cleaner way to solve this? (Note: bear in mind I cannot modify the contents of the inventory, e.g. by adding aliases.)
Here is the snippet of the code:
- name: Checks
  become_user: hdfs
  shell: hdfs dfsadmin -report
  delegate_to: "{{ item.host_name }}"
  with_items:
    - "{{ rackDetails }}"
Here is the snippet of the master yml file where the above role is called:
- name: Checks for the property
  hosts: servernode
  roles:
    - Checks
Inventory code snippet looks like this:
[servernode]
10.0.2.15 ansible_ssh_user=grant ansible_ssh_pass=grant
[all]
10.0.2.15 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.16 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.17 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.18 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.19 ansible_ssh_user=grant ansible_ssh_pass=grant
And finally, this is what I am passing via extra vars:
{"rackDetails":[{"host_name":"prod_node1.paas.com","rack_name":"/rack1"},{"host_name":"prod_node2.paas.com","rack_name":"/rack2"},{"host_name":"prod_node3.paas.com","rack_name":"/rack3"}]}
I wonder whether delegate_to works when the hosts are passed as extra vars but don't live in the inventory.
Anyway, you can use the dig lookup plugin:
http://docs.ansible.com/ansible/playbooks_lookups.html#the-dns-lookup-dig
{{ lookup('dig', item.host_name) }}
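For example, a sketch of using that lookup inside the loop from the question to resolve each FQDN on the fly (the dig lookup requires the dnspython library on the control node, and assumes those FQDNs are resolvable from there):
- name: Show the IP behind each FQDN passed via extra vars
  debug:
    msg: "{{ item.host_name }} -> {{ lookup('dig', item.host_name) }}"
  with_items: "{{ rackDetails }}"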
What I think you want, instead, is to convert those FQDNs into IP addresses and pass them to the -l (limit) option of the playbook:
ansible-playbook my_playbook -l $( getent hosts $( cat fqdns.txt ) | cut -d' ' -f1 | tr '\n' ',' )
I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1
- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used for which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
TL;DR: Specify the key file in a group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
|-- tag_Name_server1.yml
+-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
Further Improvements
There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
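For example, since the EC2 inventory script exposes the keypair name as the ec2_key_name host variable, a group vars entry along these lines could remove the per-group hardcoding (just a sketch, assuming every key lives in SSH_KEYDIR and is named after its keypair):
# group_vars/tag_Name_server1.yml
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"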
The best solution I could find for this problem is to specify the private key file in ansible.cfg (I usually keep it in the same folder as the playbook):
[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa
Note that this still sets the private key globally for all hosts in the playbook.
Note: you have to specify the full path to the key file; ~user/.ssh/some_key_rsa is silently ignored.
You can simply define the key to use directly when running the command:
ansible-playbook \
\ # Super verbose output incl. SSH-Details:
-vvvv \
\ # The Server to target: (Keep the trailing comma!)
-i "000.000.0.000," \
\ # Define the key to use:
--private-key=~/.ssh/id_rsa_ansible \
\ # The `env` var is needed if `python` is not available:
-e 'ansible_python_interpreter=/usr/bin/python3' \
\ # Dry run:
--check \
deploy.yml
Copy/paste:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml
I'm using the following configuration:
# site.yml
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
I had a similar issue and solved it with a patch to ec2.py plus some configuration parameters in ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes the result out as ansible_ssh_private_key_file.
The following variables have to be added to ec2.ini in a new 'ssh' section (this is optional if the defaults match your environment):
[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem
Here is the patch for ec2.py:
204a205,206
> 'ssh_key_path': '~/.ssh',
> 'ssh_key_suffix': '.pem',
422a425,428
> # SSH key setup
> self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
> self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
>
1490a1497
> instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)
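With the defaults above, an instance whose EC2 keypair is named, say, mykey (a made-up name) would get a host variable roughly like this in the ec2.py --list output:
"ansible_ssh_private_key_file": "/home/youruser/.ssh/mykey.pem"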