Multiple Inventories Not Working - No Hosts Found - amazon-ec2

I'm using the Ansible EC2 dynamic inventory script to access my hosts in EC2. The hosts I am looking for have a tag named sisyphus_environment with a value of development and another tag named sisyphus_project with a value of reddot, so I need the intersection of these tags.
However, I can't even seem to get filtering on one tag working properly with the EC2 inventory.
I can clearly find the instances in question using the AWS CLI:
$ aws ec2 describe-instances --filters \
    Name=tag:sisyphus_environment,Values=development \
    Name=tag:sisyphus_project,Values=reddot \
    Name=tag:sisyphus_role,Values=nginx | \
    jq '.Reservations[].Instances[] | { InstanceId: .InstanceId }'
{
  "InstanceId": "i-deadbeef"
}
{
  "InstanceId": "i-cafebabe"
}
I'm attempting to do the same thing with Ansible expressed as a static group consisting of dynamic members filtered by these tags.
My ansible.cfg:
[defaults]
retry_files_enabled = false
roles_path = roles:galaxy_roles
inventory = inventory/
[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
My EC2 inventory is directly sourced from upstream along with the ec2.ini.
My inventory directory looks like this:
$ tree inventory/
inventory/
├── ec2.ini
├── ec2.py
└── inventory.ini
0 directories, 3 files
I had to give the inventory file an .ini extension; otherwise Ansible tried to load it as YAML and failed.
My inventory/inventory.ini file looks like this:
[tag_sisyphus_project_reddot]
# blank
[reddot:children]
tag_sisyphus_project_reddot
This follows Ansible's documentation on static groups of dynamic children. Unfortunately, even this simple setup fails:
$ inventory/ec2.py --refresh-cache
...
$ ansible reddot -m command -a 'echo true'
[WARNING]: No hosts matched, nothing to do
If I query by the tag, I get results:
$ ansible tag_sisyphus_project_reddot -m command -a 'echo true'
10.0.0.1 | SUCCESS | rc=0 >>
true
...
Is there a step I'm missing?
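One way to narrow this down (a sketch; the group names are taken from the tag naming above, and the commands assume the default ec2.ini tag-based grouping and a local jq):

```shell
# List the hosts Ansible resolves for each pattern:
ansible tag_sisyphus_project_reddot --list-hosts   # dynamic group built by ec2.py
ansible reddot --list-hosts                        # static group from inventory.ini

# Dump every group name ec2.py emits, to check the exact spelling
# of the tag-derived group:
inventory/ec2.py --list | jq 'keys'
```

If the tag group resolves hosts but reddot does not, the static file and the script are not being merged into a single inventory.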
My Ansible version:
$ ansible --version
ansible 2.2.0.0
config file = /home/naftuli/Documents/Development/ansible-reddot/ansible.cfg
configured module search path = Default w/o overrides

Related

Issue reaching boxes behind a bastion host with the Ansible aws_ec2 dynamic inventory plugin

I have looked around and I can say this post is not a duplicate. I have been using Ansible 2.9.x, and connectivity through the bastion host has always worked fine for me with the ec2.py dynamic inventory script. I am now switching to the aws_ec2 inventory plugin, for reasons covered in another Stack Overflow post of mine.
Below are my inventory file and my ansible.cfg file:
#myprofile.aws_ec2.yml
plugin: amazon.aws.aws_ec2
boto_profile: my profile
strict: True
regions:
  - eu-west-1
  - eu-central-1
  - eu-north-1
keyed_groups:
  - key: tags
    prefix: tag
hostnames:
  - ip-address
  # - dns-name
  # - tag:Name
  - private-ip-address
compose:
  ansible_host: private_ip_address
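A quick way to see what this configuration produces, before involving SSH at all, is to render the inventory on its own (a sketch; the file name and profile are the ones from the question):

```shell
# Show the group tree the aws_ec2 plugin builds from the tags:
AWS_PROFILE=myprofile ansible-inventory -i myprofile.aws_ec2.yml --graph

# Show the variables (ansible_host, tags, ...) computed for one host:
AWS_PROFILE=myprofile ansible-inventory -i myprofile.aws_ec2.yml --host <some-host>
```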
# folder/project level ansible.cfg configuration
[defaults]
roles_path = roles
host_key_checking = False
hash_behaviour = merge ### Note to self: Extremely important settings
interpreter_python = auto ### Note to self: Very important settings for running from localhost
[inventory]
enable_plugins = aws_ec2, host_list, script, auto, yaml, ini, toml
# inventory = plugin_inventory/bb.aws_ec2.yaml
The inventory has group_vars files
➜ plugin_inventory git:(develop) ✗ tree
.
├── myprofile.aws_ec2.yml
└── group_vars
    ├── tag_Name_main_productname_uat_jumpbox.yml
    ├── tag_Name_main_productname_uat_mongo.yml
    ├── tag_Name_main_productname_uat_mongo_arb.yml
    ├── tag_Name_main_productname_uat_mysql.yml
    ├── tag_Name_xxx.yml
    └── tag_Name_yyy.yml
To reach MongoDB, which is in a private subnet, the group_vars files look like this:
#ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {{ hostvars.localhost.reg_jumpbox_ssh_key }} -W %h:%p -q ubuntu@{{ hostvars.localhost.reg_jumpbox_facts.instances.0.public_ip_address }}"'
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i ~/Dropbox/creds/pemfiles/ProductUATOps.pem -W %h:%p -q ubuntu@xxx.xxx.xxx.xxx"'
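To separate the SSH problem from Ansible, the ProxyCommand hop can be reproduced by hand (a sketch; BASTION_IP and PRIVATE_IP are placeholders for the question's redacted addresses):

```shell
# Same jump-host hop Ansible would make, with verbose output to see
# which known_hosts files and keys SSH actually consults:
ssh -v -i ~/Dropbox/creds/pemfiles/ProductUATOps.pem \
    -o StrictHostKeyChecking=no \
    -o ProxyCommand="ssh -o StrictHostKeyChecking=no -i ~/Dropbox/creds/pemfiles/ProductUATOps.pem -W %h:%p -q ubuntu@BASTION_IP" \
    ubuntu@PRIVATE_IP
```

If this prompts or fails the same way, the issue is in SSH/known_hosts handling, not in the inventory plugin.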
Each time I run the command
AWS_PROFILE=myprofile ansible -i ~/infrastructure_as_code/ansible_projects/productname/plugin_inventory/myprofile.aws_ec2.yml tag_Name_main_productname_uat_mongo -m ping -u ubuntu --private-key ~/Dropbox/creds/pemfiles/ProductUATOps.pem -vvvv
it doesn't connect; the full output and some other information are on pastebin.
Something odd I have seen: even though ansible.cfg sets host_key_checking = False, I still get the prompt Are you sure you want to continue connecting (yes/no/[fingerprint])? in the output.
I have also seen that SSH looks for ~/.ssh/known_hosts2, /etc/ssh/ssh_known_hosts, and /etc/ssh/ssh_known_hosts2, but ~/.ssh/known_hosts is what's there.
There is also one confusing error in the logs: "module_stdout": "/bin/sh: 1: /Users/joseph/.pyenv/shims/python: not found\r\n". But the Python installation with pyenv has been consistent, OS-wise:
➜ ~ which python
/Users/joseph/.pyenv/shims/python
➜ ~ python --version
Python 3.8.12 (9ef55f6fc369, Oct 25 2021, 05:10:01)
[PyPy 7.3.7 with GCC Apple LLVM 13.0.0 (clang-1300.0.29.3)]
➜ ~ ls -lh /Users/joseph/.pyenv/shims/python
-rwxr-xr-x 1 joseph staff 183B Feb 14 22:47 /Users/joseph/.pyenv/shims/python
➜ ~ /usr/bin/env python --version
Python 3.8.12 (9ef55f6fc369, Oct 25 2021, 05:10:01)
[PyPy 7.3.7 with GCC Apple LLVM 13.0.0 (clang-1300.0.29.3)]
I suspect the error is due to something preventing the fingerprints from getting into the known_hosts file. I am tempted to set up the SSH tunneling manually myself, but I would like to understand why this is happening, and whether it's because this is a new machine. Can anyone shed some light on this for me? Thanks.
After running ansible-config dump with that ansible.cfg, it emits AnsibleOptionsError: Invalid value "merge ##..., so it seems Ansible silently rejected the config file, or may be using a different one.
It seems that while # is supported as a beginning-of-line comment character, ansible-config (as of 2.12.1) only tolerates ; as an end-of-line comment character:
[defaults]
roles_path = roles
host_key_checking = False
hash_behaviour = merge ;;; Note to self: Extremely important settings
interpreter_python = auto ;;; Note to self: Very important settings for running from localhost
[inventory]
enable_plugins = aws_ec2, host_list, script, auto, yaml, ini, toml
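After editing, a quick sanity check confirms the file parses and is actually the one being read (both commands exist in Ansible 2.12):

```shell
# Show only the settings that differ from the defaults; this fails loudly
# if the config file still has a syntax problem:
ansible-config dump --only-changed

# Confirm which ansible.cfg is being consulted:
ansible --version | grep 'config file'
```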

Endpoint from aws_rds plugin - Ansible

I'm using the aws_rds plugin to create an inventory of my RDS cluster/instances and to create group_vars based on the Environment tags.
plugin: aws_rds
regions:
  - xx-xxxx-x
include_clusters: true
keyed_groups:
  - key: tags.Environment
    prefix: "tag_Environment_"
    separator: ""
https://docs.ansible.com/ansible/2.9/plugins/inventory/aws_rds.html
The problem is that this plugin doesn't let me retrieve the endpoint name, so I was wondering if it's possible to append the last part of cluster-xxxxxxxxxx.xx-xxxx-x.rds.amazonaws.com to each group_var created, like ansible_host=${ansible_host}.cluster-xxxxxxxxxx.xx-xxxx-x.rds.amazonaws.com, or something like that.
These AWS inventory plugins gather a lot of information from the API. I had quite a similar requirement lately with the aws_ec2 inventory plugin, and here is how I proceeded to find the information I needed for the compose parameter of the inventory's configuration.
To fetch what I needed to put under the compose parameter, I first configured the inventory, like you did.
Then I displayed all the collected hosts, using
ansible-inventory --graph
This yields something like
@all:
  |--@nodes:
  |  |--node1
  |  |--node2
  |  |--node3
  |--@ungrouped:
With this, I targeted one node, say node1 here, and displayed all the information Ansible had on it, doing:
ansible -m setup node1
and
ansible -m debug -a "var=vars" node1
In all this, I searched for the information I needed.
In your case, that could be achieved by doing:
ansible -m setup <one-of-the-hosts> | grep "cluster-" | grep ".rds.amazonaws.com"
and
ansible -m debug -a "var=vars" <one-of-the-hosts> | grep "cluster-" | grep ".rds.amazonaws.com"
Last but not least, I configured the variable I found in the compose parameter of the inventory's configuration:
compose:
  ansible_host: <variable-name-found-at-step-4>
e.g., in my EC2 configuration, it ended up being public_dns_address:
plugin: aws_ec2
hostnames:
  - tag:Name
  # ^-- Given that you indeed have a tag named "Name" on your EC2 instances
compose:
  ansible_host: public_dns_address
In the end, I achieved what I wanted by defining this variable in group_vars (so I could define one for each environment):
ansible_host: "{{ inventory_hostname }}.cluster-xxxxxxxx.xx-xxxx-x.rds.amazonaws.com"
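The resulting value can then be checked per group with an ad-hoc call (a sketch; the inventory file name is a placeholder, and the group name is one of the keyed groups built from the Environment tag above):

```shell
# Print the ansible_host each RDS host ends up with after compose/group_vars:
ansible -i <your-aws_rds-inventory>.yml tag_Environment_<env> -m debug -a "var=ansible_host"
```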

How does Ansible find group_vars when working with Terraform

I'm setting up a project using terraform and Ansible. After Terraform creates all the needed resources, I need to provision the servers using Ansible, where I am using the following:
provisioner "local-exec" {
  environment = {
    ANSIBLE_SSH_RETRIES       = "30"
    ANSIBLE_HOST_KEY_CHECKING = "False"
  }
  command = "ansible-playbook ${var.ec2_ansible_playbook} --extra-vars env_name=${var.env} -u ubuntu -i ${aws_eip.main.0.public_ip},"
}
Ansible fails to find my group_vars. How do I point Ansible to the correct path where the group_vars are located? Previously, I was only using Ansible, and I would run something like
ansible-playbook provision.yml -i inventory/prod/hosts
Ansible wouldn't have any problems finding group_vars since they were sitting at the same location as the hosts. Any help is appreciated.
You can specify the working_dir, as mentioned in the documentation.
I think by default it uses the module directory as the working directory.
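Concretely, one way to get the same effect (a sketch; the paths and IP are hypothetical) is to change into the Ansible project directory inside the local-exec command, so that group_vars/ again sits next to the playbook:

```shell
# group_vars/ is resolved relative to the playbook and the inventory source,
# so run the playbook from the directory that contains both:
cd /path/to/ansible-project && \
  ansible-playbook provision.yml -u ubuntu -i '203.0.113.10,'
```

The local-exec provisioner's working_dir argument achieves the same thing without the explicit cd.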

How to make sure that ansible playbook uses hostname from different file or uses it from command line?

I have an ansible playbook that I run from below command line and it works fine.
ansible-playbook -e 'host_key_checking=False' -e 'num_serial=10' test.yml -u golden
It works on the hosts specified in the /etc/ansible/hosts file. But is there any way to pass hostnames directly on the command line, or to generate a new file with one hostname per line, so that my playbook works on those hostnames instead of the ones in the default /etc/ansible/hosts file?
Below is my ansible file:
# This will copy files
---
- hosts: servers
  serial: "{{ num_serial }}"
  tasks:
    - name: copy files to server
      shell: "(ssh -o StrictHostKeyChecking=no abc.host.com 'ls -1 /var/lib/workspace/data/*' | parallel -j20 'scp -o StrictHostKeyChecking=no abc.host.com:{} /data/holder/files/procs/')"
    - name: sleep for 3 sec
      pause: seconds=3
Now I want to generate a new file listing all the servers line by line, and have my playbook work on that file instead. Is this possible?
I am running ansible 2.6.3 version.
The question has probably been answered already, but I will answer again to add a few points.
Always check the command-line help for anything related to the arguments:
ansible-playbook --help | grep inventory
  -i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
      specify inventory host path or comma separated host
      list. --inventory-file is deprecated
Ansible supports inventories in file form with two extensions:
yml
ini --> specifying the ini extension is not mandatory.
The inventory link provides more info on the formats and should be consulted before choosing one to implement.
Adding @HermanTheGermanHesse's answer so that all the possible points are covered.
If the above is not used (or you don't want to use it), Ansible will finally fall back to ansible.cfg for the hosts and variable definitions:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
You can use the -i flag to specify the inventory to use. For example:
ansible-playbook -i hosts play.yml
A way to specify the inventory file to use is to set inventory in the ansible.cfg-file as such:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
EDIT
From your comment:
[WARNING]: Could not match supplied host pattern, ignoring: servers PLAY [servers]
It seems that Ansible doesn't recognize hosts passed with the -i flag as belonging to a group. Since you mentioned in chat that you generate a list with the passed hosts, I'd suggest creating a file where the list of hosts is made to belong to a group called [servers], and passing its path with the -i flag.
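That suggestion can be sketched as follows (hostlist.txt, one hostname per line, and generated_hosts are assumed names; the playbook, user, and variable come from the question):

```shell
# Wrap the generated host list in a [servers] group so the
# `hosts: servers` pattern in test.yml matches:
printf '[servers]\n' > generated_hosts
cat hostlist.txt >> generated_hosts

ansible-playbook -i generated_hosts test.yml -e 'num_serial=10' -u golden
```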

Ansible: How to get a value of a variable from the inventory or group_vars file?

How do I get the value of a variable from the inventory if the variable could be either in the inventory file or group_vars directory?
For example, region=place-a could be in the inventory file or in a file in the group_vars directory. I would like a command to retrieve that value, something like:
$ ansible -i /somewhere/production/web --get-value region
place-a
That would help me with deployments and knowing which region is being deployed to.
A longer explanation to clarify, my inventory structure looks like this:
/somewhere/production/web
/somewhere/production/group_vars/web
The contents with the variables of the inventory file, /somewhere/production/web looks like this:
[web]
web1 ansible_ssh_host=10.0.0.1
web2 ansible_ssh_host=10.0.0.2
[web:vars]
region=place-a
I could get the value from the inventory file simply by parsing it, like so:
$ awk -F "=" '/^region/ {print $2}' /somewhere/production/web
place-a
But that variable could be in the group_vars file, too. For example:
$ cat /somewhere/production/group_vars/web
region: place-a
Or it could look like an array:
$ cat /somewhere/production/group_vars/web
region:
- place-a
I don't want to look for and parse all the possible files.
Does Ansible have a way to get the values? Kind of like how it does with --list-hosts?
$ ansible web -i /somewhere/production/web --list-hosts
web1
web2
Short Answer
No, there is no CLI option to get the value of a variable.
Long Answer
First of all, you should look into how variable precedence works in Ansible.
The exact order is defined in the documentation. Here is a summary excerpt:
Basically, anything that goes into “role defaults” (the defaults
folder inside the role) is the most malleable and easily overridden.
Anything in the vars directory of the role overrides previous versions
of that variable in namespace. The idea here to follow is that the
more explicit you get in scope, the more precedence it takes with
command line -e extra vars always winning. Host and/or inventory
variables can win over role defaults, but not explicit includes like
the vars directory or an include_vars task.
With that cleared up, the only way to see what value a variable takes is to use the debug task, as follows:
- name: debug a variable
  debug:
    var: region
This will print out the value of the variable region.
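The same debug module also works ad hoc from the CLI against the question's inventory, which gets close to the --get-value behaviour asked for:

```shell
# Print the merged value of `region` for every host in the web group:
ansible web -i /somewhere/production/web -m debug -a "var=region"
```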
My suggestion would be to maintain a single value type, either a string or a list to prevent confusion across tasks in different playbooks.
You could use something like this:
- name: deploy to regions
  <task_type>:
    <task_options>: <task_option_value>
    # use the region variable as you wish
    region: "{{ item }}"
    ...
  with_items: "{{ regions.split(',') }}"
In this case you could have a variable regions=us-west,us-east comma-separated. The with_items statement would split it over the , and repeat the task for all the regions.
The CLI version of this is important for people who are trying to connect ansible to other systems. Building on the usage of the copy module elsewhere, if you have a POSIX mktemp and a copy of jq locally, then here's a bash one-liner that does the trick from the CLI:
export TMP=`mktemp` && ansible localhost -c local -i inventory.yml -m copy -a "content={{hostvars['SOME_HOSTNAME']}} dest=${TMP}" >/dev/null && cat ${TMP}|jq -r .SOME_VAR_NAME && rm ${TMP}
Breaking it down
# create a tempfile
export TMP=`mktemp`
# quietly use Ansible in local only mode, loading inventory.yml
# digging up the already-merged global/host/group vars
# for SOME_HOSTNAME, copying them to our tempfile from before
ansible localhost --connection local \
    --inventory inventory.yml --module-name copy \
    --args "content={{hostvars['SOME_HOSTNAME']}} dest=${TMP}" \
    > /dev/null
# CLI-embedded ansible is a bit cumbersome. After all the data
# is exported to JSON, we can use `jq` to get a lot more flexibility
# out of queries/filters on this data. Here we just want a single
# value though, so we parse out SOME_VAR_NAME from all the host variables
# we saved before
cat ${TMP}|jq -r .SOME_VAR_NAME
rm ${TMP}
Another, easier way to get a variable through the CLI from Ansible could be this:
export tmp_file=/tmp/ansible.$RANDOM
ansible -i <inventory> localhost -m copy -a "content={{ $VARIABLE_NAME }} dest=$tmp_file"
export VARIABLE_VALUE=$(cat $tmp_file)
rm -f $tmp_file
It looks quite ugly but is really helpful.
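On newer Ansible versions, a temp file is not needed at all; ansible-inventory can dump the merged inventory/group/host variables for one host directly (a sketch using the same placeholder names as above):

```shell
# --host prints all merged inventory-sourced vars for one host as JSON:
ansible-inventory -i inventory.yml --host SOME_HOSTNAME | jq -r .SOME_VAR_NAME
```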
