Ansible aws_ec2 inventory plugin issue - amazon-ec2

I am trying to get started using Ansible and the aws_ec2 plugin.
I have the following in my ./ansible.cfg file:
[inventory]
enable_plugins = aws_ec2
and the following in my ./inventory.yml file:
plugin: aws_ec2
aws_access_key_id: **********
aws_secret_access_key: **********
regions:
- us-east-2
When I run ansible-inventory -i inventory.yml --graph I get the following error:
inventory.yml did not meet aws_ec2 requirements, check plugin documentation if this is unexpected

As of Ansible 2.7.6:
aws_ec2 inventory filename must end with 'aws_ec2.yml' or 'aws_ec2.yaml'
(the plugin's verify_file() method enforces this).
So rename your inventory.yml to inventory_aws_ec2.yml and you are good to go.
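For example (a minimal sketch; the file contents stay exactly the same, only the name changes):
mv inventory.yml inventory_aws_ec2.yml
ansible-inventory -i inventory_aws_ec2.yml --graph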

Just make the inventory file's name end with the plugin name.
For example: rename a.yaml to aws_ec2.yaml.

Related

Ansible Collections: How to use Google Cloud Compute Engine inventory source

What are the correct commands for using the inventory file along with a playbook when using the Ansible Collection google.cloud.gcp_compute? You can find an example of an inventory file very similar to the one I'm using at the bottom of this article.
I'm using this update.yml playbook:
- name: Update apt-get repo and cache
  apt: update_cache=yes force_apt_get=yes cache_valid_time=3600
This is my inventory-gcp_compute.yml inventory file:
plugin: google.cloud.gcp_compute
zones: # populate inventory with instances in these regions
  - us-central1-a
projects:
  - vpn-server-sasp
auth_kind: serviceaccount
scopes:
  - 'https://www.googleapis.com/auth/cloud-platform'
  - 'https://www.googleapis.com/auth/compute.readonly'
keyed_groups:
  # Create groups from GCE labels
  - prefix: gcp
    key: labels
hostnames:
  # List host by name instead of the default public ip
  - name
compose:
  # Set an inventory parameter to use the Public IP address to connect to the host
  # For Private ip use "networkInterfaces[0].networkIP"
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP
I've tried this command:
ansible-playbook -i inventory-gcp_compute.yml update.yml
I got this error:
ansible-playbook 2.9.14
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/cheo/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as it did not pass its verify_file() method
virtualbox declined parsing /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory
File "/usr/lib/python2.7/dist-packages/ansible/inventory/manager.py", line 280, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml with constructed plugin: Incorrect plugin name in file: google.cloud.gcp_compute
File "/usr/lib/python2.7/dist-packages/ansible/inventory/manager.py", line 280, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/constructed.py", line 109, in parse
self._read_config_data(path)
File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/__init__.py", line 224, in _read_config_data
raise AnsibleParserError("Incorrect plugin name in file: %s" % config.get('plugin', 'none found'))
[WARNING]: Unable to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! 'apt' is not a valid attribute for a Play
The error appears to be in '/home/cheo/sergio/ansible-gce/update.yml': line 3, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Update apt-get repo and cache
^ here
Check the docs. It's looking for a file ending in .gcp_compute.yml or .gcp.yml, so while inventory.gcp_compute.yml would qualify, inventory-gcp_compute.yml does not. The errors you're getting say that the file isn't an inventory only because none of the plugin parsers claimed it.
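Assuming that is the case, renaming fixes the inventory side:
mv inventory-gcp_compute.yml inventory.gcp_compute.yml
Separately, the last error ('apt' is not a valid attribute for a Play) points at update.yml itself: a play needs a hosts entry, and the apt call belongs under tasks. A sketch of a valid version (hosts: all and become: yes are assumptions here, not from the question):
- name: Update apt-get repo and cache
  hosts: all
  become: yes
  tasks:
    - name: Update apt cache
      apt: update_cache=yes force_apt_get=yes cache_valid_time=3600
Then run: ansible-playbook -i inventory.gcp_compute.yml update.yml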

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works from the command line using ansible-playbook but fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip and this seems to provide the output I am looking for.
So things appear to be working; I just don't know whether this is best-practice use.
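For reference, the plain command-line form of that (a sketch using the playbook name from the log above):
ansible-playbook -i hosts addpool.yaml --limit bigip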
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to only run on hosts in the bigip group. You then specify connection: local which causes the task to run on the controller node (i.e. localhost), rather than the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
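A sketch of the play with that change applied (everything else left exactly as in the question):
---
- name: Sample pool playbook
  hosts: bigip
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no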
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
Reference: here
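With that layout the parent group can be targeted directly, e.g. for a quick connectivity check (assuming SSH access to both hosts):
ansible devs -i /etc/ansible/hosts -m ping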

Ansible aws_ec2 inventory plugin - dynamic boto_profile

I am using the aws_ec2 inventory plugin and would like to pass the boto_profile in as a var at runtime.
I am trying to run the following command:
ansible-playbook playbook.yml --extra-vars profile=foo
Inside my aws_ec2.yml plugin file I have:
boto_profile: "{{ profile }}"
This returns the error:
The config profile ({{ profile }}) could not be found
I am able to use the profile var inside my playbook: I am using the ec2 module with profile: "{{ profile }}", and that works if I define a static inventory.
Is it possible to pass the profile var into the dynamic inventory file?
Jinja2 templates are not applicable to inventory configuration files.
Use the environment variable AWS_PROFILE or AWS_DEFAULT_PROFILE to set the profile at runtime.
Like: AWS_PROFILE=foo ansible-playbook playbook.yml
I'm using this in GitLab CI/CD and had the same issue, however you can look up the env variables in the dynamic inventory like this:
plugin: amazon.aws.aws_ec2
aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
regions:
  - eu-central-1
groups:
  webservers: "'app-server' in tags.Type"
and I can set these up in the CI/CD settings under Variables; they are passed into the Docker container.
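A minimal sketch of that wiring in .gitlab-ci.yml (the job name, image, and file names are placeholders; the AWS_* variables come from the project's CI/CD settings, not from the file):
deploy:
  image: my-ansible-image   # placeholder: any image with ansible and boto3 installed
  script:
    # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are injected by GitLab
    - ansible-playbook -i inventory_aws_ec2.yml playbook.yml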

Running an ansible playbook on localhost but referring to inventory's group_var

I am trying to run a playbook locally, but I want all the vars in the role's tasks/main.yml file to refer to group_vars from a specific inventory file.
Unfortunately the playbook is unable to access the group_vars directory, and it fails to recognize the vars specified in the role.
The command run is the following:
/usr/local/bin/ansible-playbook --connection=local /opt/ansible/playbooks/create.yml -i ./inventory-file
but it fails to find the group_vars in the group_vars directory at the same directory level as the inventory file:
fatal: [127.0.0.1]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'admin_user_name' is undefined\n\nThe error appears to have been in '/opt/roles/create/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: create org\n ^ here\n"
}
This is my configuration:
ansible-playbook 2.7.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ansible-modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]
Any help is appreciated.
Thanks,
dom
So, theoretically adding localhost to the inventory would have been a good solution, but in my specific case (and in general for large deployments) it was not an option.
I also added --extra-vars "myvar.json" but that did not work either.
Turns out (evil detail...) that the right way to pass a vars file on the command line is: --extra-vars "@myvar.json"
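For example (a sketch; the file name and contents are hypothetical, using the variable from the error above):
myvar.json:
{ "admin_user_name": "admin" }
/usr/local/bin/ansible-playbook --connection=local /opt/ansible/playbooks/create.yml -i ./inventory-file --extra-vars "@myvar.json"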
Posting it here in the hope nobody else struggles for days to find this solution.
Cheers,
dom
As per the error, your Ansible is not able to read the group_vars. Please make sure your group_vars directory has a folder called localhost, matching the group name.
Example playbook (host is localhost):
- hosts: localhost
  become: true
  roles:
    - { role: common, tags: [ 'common' ] }
    - { role: docker, tags: [ 'docker' ] }
So in group_vars there should be a folder called localhost, and in that folder a file main.yml.
Or
You can create a folder called all in group_vars and create a file called all.yml in it.
This should solve the issue.
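A sketch of the resulting layout, next to the inventory file:
inventory-file
group_vars/
    all.yml          # vars applied to every host
    localhost/
        main.yml     # vars applied to the localhost group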

Ansible : remote_user in playbook file issues

Actually I've defined the remote_user variable for each host group, but the remote_user value is not taken from the one I defined; rather, the value assigned at the top is used.
Ansible version:
# ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609]
Playbook file: info.yml
---
- hosts: all
  remote_user: demo
  roles:
    - common

- hosts: devlocal
  remote_user: root
  become: yes
  roles:
    - common

- hosts: testlocal
  remote_user: test
  become: yes
  roles:
    - common
When I run the playbook limited to hosts [devlocal], the user name is taken from the first assignment (i.e. "demo"). In my case it should use the remote_user "root" instead.
Logs:
# ansible-playbook -i hosts -l devlocal info.yml --ask-pass -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: site.yml ********************************************************************************************************************************
3 plays in site.yml
PLAY [all] ****************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/setup.py
<10.11.12.213> ESTABLISH SSH CONNECTION FOR USER: demo
Can someone please tell me what the issue is here? Thanks in advance.
Can someone please tell me what the issue is here?
The issue here is that you specified the first play to run as demo:
- hosts: all
  remote_user: demo
  roles:
    - common
And Ansible runs it as demo, which seems not to be your objective.
That's why Ansible provides the inventory mechanism, so you can specify connection details per host, not in plays.
I've defined the remote_user variable for each host group
Wrong. You've defined remote_user for each play, not for each host group.
Hosts and groups are defined via the inventory.
So you should define the devlocal and testlocal groups with ansible_user assigned.
And have a single play:
- hosts: all
  roles:
    - common
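A sketch of such an inventory (the first address comes from your log; the second is a placeholder):
[devlocal]
10.11.12.213 ansible_user=root

[testlocal]
10.11.12.214 ansible_user=test   # placeholder address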
