Getting error fatal: [local -> localhost]: FAILED! => {"msg": "The module ec2 was redirected to amazon.aws.ec2, which could not be loaded."} - ansible

I have installed Ansible on one EC2 server, created an IAM role with "AmazonEC2FullAccess", and attached it to the instance. I use this instance as the Ansible controller node and am trying to create an EC2 instance with the playbook below.
- name: provisioning EC2 instance using Ansible
  hosts: localhost
  connection: local
  gather_facts: false
  tags: provisioning
  vars:
    keypair: test
    instance_type: t2.micro
    image: ami-0c9978668f8d55984
    wait: yes
    group: webserver
    count: 2
    region: us-east-1
    security_group: my-ansible-security-group
  tasks:
    - name: Task#1 - Create my security group
      local_action:
        module: ec2_group
        name: "{{ security_group }}"
        description: security group for webserver
        region: "{{ region }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 8080
            to_port: 8080
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 443
            to_port: 443
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0
      register: basic_firewall
    - name: Task#2 - Launch the new EC2 instance
      local_action:
        module: ec2
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
      register: ec2
    - name: Task#3 - Add Tagging to EC2 instance
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: "{{ ec2.instances }}"
      args:
        tags:
          Name: MyTargetEc2Instance
But I am getting the below error:
ubuntu@ip-172-31-60-239:~/playbooks$ sudo ansible-playbook create_ec2.yml
PLAY [provisioning EC2 instance using Ansible] **********************************************************************************************************************************************
TASK [Task#1 - Create my security group] ****************************************************************************************************************************************************
ok: [local -> localhost]
TASK [Task#2 - Launch the new EC2 instance] *************************************************************************************************************************************************
fatal: [local -> localhost]: FAILED! => {"msg": "The module ec2 was redirected to amazon.aws.ec2, which could not be loaded."}
PLAY RECAP **********************************************************************************************************************************************************************************
local : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Please help me to resolve it.
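The error means the legacy ec2 module name was redirected to the amazon.aws collection, but that collection (or its boto3/botocore Python dependencies) could not be loaded on the controller. A likely fix, sketched here as an assumption about the environment, is to install the collection and the AWS SDK:

```yaml
# requirements.yml -- a minimal sketch; install with:
#   ansible-galaxy collection install -r requirements.yml
#   pip install boto3 botocore   # Python SDK the amazon.aws modules need
collections:
  - name: amazon.aws
```

Note that running the playbook under sudo can also matter here: the collection and SDK must be installed for the same user and Python interpreter that ansible-playbook actually runs as.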

Related

Variable is empty or not defined in Ansible

When I run my playbook I see the following error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: list object has no element 0\n\nThe error appears to be in '/etc/ansible/loyalty/tasks/create_ec2_stage.yaml': line 63, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n      register: ec2_metadata\n  - name: Parse < JSON >\n    ^ here\n"}
I run the playbook this way:
/usr/bin/ansible-playbook -i hosts --extra-vars "CARRIER=xx" tasks/create_ec2_stage.yaml
Here is my playbook:
---
- name: New EC2 instances
  hosts: localhost
  gather_facts: no
  vars_files:
    - /etc/ansible/loyalty/vars/vars.yaml
  tasks:
    - name: Run EC2 Instances
      amazon.aws.ec2_instance:
        name: "new-{{ CARRIER }}.test"
        aws_secret_key: "{{ ec2_secret_key }}"
        aws_access_key: "{{ ec2_access_key }}"
        region: us-east-1
        key_name: Kiu
        instance_type: t2.medium
        image_id: xxxxxxxxxxxxx
        wait: yes
        wait_timeout: 500
        volumes:
          - device_name: /dev/xvda
            ebs:
              volume_type: gp3
              volume_size: 20
              delete_on_termination: yes
        vpc_subnet_id: xxxxxxxxxxxx
        network:
          assign_public_ip: no
        security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
        tags:
          Enviroment: TEST
        count: 1
    - name: Pause Few Seconds
      pause:
        seconds: 20
        prompt: "Please wait"
    - name: Get Information for EC2 Instances
      ec2_instance_info:
        region: us-east-1
        filters:
          "tag:Name": new-{{ CARRIER }}.test
      register: ec2_metadata
    - name: Parse JSON
      set_fact:
        ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
If I create a slightly smaller playbook to query the private IP address of an existing instance, I don’t see any error.
---
- name: New EC2 Instances
  hosts: localhost
  gather_facts: no
  vars_files:
    - /etc/ansible/loyalty/vars/vars.yaml
  vars:
    pwd_alias: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
    CARRIER_UPPERCASE: "{{ CARRIER | upper }}"
  tasks:
    - set_fact:
        MY_PASS: "{{ pwd_alias }}"
    - name: Get EC2 info
      ec2_instance_info:
        region: us-east-1
        filters:
          "tag:Name": new-{{ CARRIER }}.stage
      register: ec2_metadata
    - name: Parsing JSON
      set_fact:
        ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
    - name: Show Result
      debug:
        msg: "{{ ip_addr }}"
Results in
TASK [Show Result] ******************************************************
ok: [localhost] => {
    "msg": "172.31.x.x"
}
I am creating an EC2 instance on Amazon and querying its private IP to use with other services like Route 53 and Cloudflare; I have left out the other tasks because the error occurs in the set_fact task.
You don't have to query AWS with the ec2_instance_info module, since the amazon.aws.ec2_instance module already returns the newly created EC2 instance when the parameter wait is set to yes, as it is in your case.
You just need to register the return of this task.
So, given the two tasks:
- name: Run EC2 Instances
  amazon.aws.ec2_instance:
    name: "new-{{ CARRIER }}.test"
    aws_secret_key: "{{ ec2_secret_key }}"
    aws_access_key: "{{ ec2_access_key }}"
    region: us-east-1
    key_name: Kiu
    instance_type: t2.medium
    image_id: xxxxxxxxxxxxx
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        ebs:
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
    vpc_subnet_id: xxxxxxxxxxxx
    network:
      assign_public_ip: no
    security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
    tags:
      Enviroment: TEST
    count: 1
  register: ec2

- set_fact:
    ip_addr: "{{ ec2.instances[0].private_ip_address }}"
You should have the private IP address of the newly created EC2 instance.

Point Ansible to .pem file for a dynamic set of EC2 nodes

I'm pretty new to Ansible.
I get:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).", "unreachable": true}
at the last step when I run this Ansible playbook:
---
- name: find EC2 instances
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ansible_python_interpreter: "/usr/bin/python3"
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    aws_region: "us-west-2"
    vpc_subnet_id: "subnet-xxx"
    ec2_filter:
      "tag:Name": "airflow-test"
      "tag:Team": 'data-science'
      "tag:Environment": 'staging'
      "instance-state-name": ["stopped", "running"]
  vars_files:
    - settings/vars.yml
  tasks:
    - name: Find EC2 Facts
      ec2_instance_facts:
        region: "{{ aws_region }}"
        filters: "{{ ec2_filter }}"
      register: ec2
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      loop: "{{ ec2.instances }}"
    - name: Wait for the instances to boot by checking the ssh port
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        sleep: 10
        timeout: 120
        state: started
      loop: "{{ ec2.instances }}"

- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
  tasks:
    - name: ls
      command: ls
I know I need to point Ansible to the .pem file. I tried adding ansible_ssh_private_key_file to the inventory file, but since the nodes are dynamic, I'm not sure how to do it.
Adding ansible_ssh_user solved the problem
- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
  tasks:
    - name: ls
      command: ls
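For the original question about pointing Ansible at the .pem file, the private key path can be supplied the same way, as a play-level variable; the path below is a hypothetical placeholder:

```yaml
- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
    # hypothetical path -- point this at your own key file
    ansible_ssh_private_key_file: "~/.ssh/my-ec2-key.pem"
  tasks:
    - name: ls
      command: ls
```

Because the variable is set on the play that targets the dynamically added launched group, it applies to every host added there, regardless of what the inventory file contains.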

Trying to get IPs from file and use it as Inventory in Ansible

I get a list of IP addresses in a test.text file, from which I am trying to read the IPs in a loop, add them to a group or variable, and use that as hosts (dynamic_groups).
Below is my playbook:
---
- name: provision stack
  hosts: localhost
  connection: local
  gather_facts: no
  serial: 1
  tasks:
    - name: Get Instance IP Addresses From File
      shell: cat /home/user/test.text
      register: serverlist
    - debug: msg={{ serverlist.stdout_lines }}
    - name: Add Instance IP Addresses to temporary inventory groups
      add_host:
        groups: dynamic_groups
        hostname: "{{item}}"
      with_items: serverlist.stdout_lines

- hosts: dynamic_groups
  become: yes
  become_user: root
  become_method: sudo
  gather_facts: True
  serial: 1
  vars:
    ansible_connection: "{{ connection_type }}"
    ansible_ssh_user: "{{ ssh_user_name }}"
    ansible_ssh_private_key_file: "{{ ssh_private_key_file }}"
  tasks:
    .....
.....
After running the above playbook I get the error below:
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        "192.168.1.10",
        "192.168.1.11",
        "192.168.1.50"
    ]
}
TASK [Add Instance IP Addresses to temporary inventory groups] ***************************************************************************************************************************************************************************
changed: [localhost] => (item=serverlist.stdout_lines)
PLAY [dynamic_groups] *********************************************************************************************************************************************************************************************************************
TASK [Some Command] **********************************************************************************************************************************************************************************************************************
fatal: [serverlist.stdout_lines]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname serverlist.stdout_lines: Name or service not known", "unreachable": true}
What am I missing here?
Below is the correct way to use the variable:
- name: Add Instance IP Addresses to temporary inventory groups
  add_host:
    groups: working_hosts
    hostname: "{{item}}"
  with_items: "{{ serverlist.stdout_lines }}"
It should solve your problem.
As reported in the fatal error message "Failed to connect to the host via ssh: ssh: Could not resolve hostname serverlist.stdout_lines", Ansible is trying to connect to the literal string "serverlist.stdout_lines", not to a valid IP.
This is caused by how the variable is passed to with_items. In your task:
with_items: serverlist.stdout_lines
the string serverlist.stdout_lines is passed, not its value.
with_items requires the variable to be referenced with "{{ ... }}" (https://docs.ansible.com/ansible/2.7/user_guide/playbooks_loops.html#with-items).
This is the correct way for your task:
- name: Add Instance IP Addresses to temporary inventory groups
  add_host:
    groups: dynamic_groups
    hostname: "{{item}}"
  with_items: "{{ serverlist.stdout_lines }}"
You can simply use ansible-playbook -i inventory_file_name playbook.yaml for this, where inventory_file_name is the file containing your groups and IPs.
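As a side note, the shell/cat step can be dropped entirely by reading the file with a lookup; a sketch using the path from the question:

```yaml
- name: Add Instance IP Addresses to temporary inventory groups
  add_host:
    groups: dynamic_groups
    hostname: "{{ item }}"
  # lookup('file') reads the file on the controller; splitlines()
  # turns its contents into one item per line
  loop: "{{ lookup('file', '/home/user/test.text').splitlines() }}"
```

This keeps the loop logic in one task and avoids registering an intermediate variable.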

ansible ec2 with_items failing

Here we have an ec2 module in a playbook:
---
# This playbook creates dev-test instances at AWS.
- name: test creation
  hosts: localhost
  gather_facts: False
  tasks:
    - name: instance creation
      ec2:
        key_name: dev-key
        group_id: sg-55667788
        instance_type: t2.micro
        image: ami-cd0f5cb6
        wait: yes
        wait_timeout: 300
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 32
            delete_on_termination: True
          - device_name: /dev/xvdb
            volume_type: gp2
            volume_size: 1
            delete_on_termination: True
          - device_name: /dev/xvdc
            volume_type: gp2
            volume_size: 200
            delete_on_termination: True
        vpc_subnet_id: "{{ item.vpc_subnet_id }}"
        zone: "{{ item.zone }}"
        region: us-east-1
        assign_public_ip: no
        private_ip: "{{ item.private_ip }}"
        instance_tags:
          Name: "{{ item.tag_name }}"
        user_data: |
          #!/bin/bash
          mkswap /dev/xvdb
          swapon /dev/xvdb
          echo "/dev/xvdb none swap sw 0 0" >> /etc/fstab
          echo "n
          e
          1
          n
          l
          w
          " | fdisk /dev/xvdc
        with_items:
          - { vpc_subnet_id: 'subnet-11223344', zone: 'us-east-1d', private_ip: '172.31.5.15', tag_name: 'dev-test.vpc-01' }
          - { vpc_subnet_id: 'subnet-44332211', zone: 'us-east-1e', private_ip: '172.31.7.13', tag_name: 'dev-test.vpc-01' }
And here the error output:
test@test:~$ ansible-playbook --check create-ec2.yml
[WARNING]: Could not match supplied host pattern, ignoring: all
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [instance creation] *********************************************************************************************************
TASK [instance creation] ************************************************************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: 'item' is undefined\n\nThe error appears to have been in 'create-ec2.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: instance creation\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'item' is undefined"}
to retry, use: --limit @create-ec2.retry
PLAY RECAP ***********************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
I don't see where the mistake is. Where is this item undefined?
with_items is wrongly indented. It should be at the task level, not at the task-parameters level.
As a result, the loop directive is ignored and the variable item is never defined.
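A sketch of the fix (most module options elided for brevity): with_items must sit at the same indentation level as the ec2: key, not inside it.

```yaml
- name: instance creation
  ec2:
    key_name: dev-key
    # ... remaining module options unchanged ...
    vpc_subnet_id: "{{ item.vpc_subnet_id }}"
    zone: "{{ item.zone }}"
    private_ip: "{{ item.private_ip }}"
    instance_tags:
      Name: "{{ item.tag_name }}"
  # loop directive belongs here, as a sibling of ec2:
  with_items:
    - { vpc_subnet_id: 'subnet-11223344', zone: 'us-east-1d', private_ip: '172.31.5.15', tag_name: 'dev-test.vpc-01' }
    - { vpc_subnet_id: 'subnet-44332211', zone: 'us-east-1e', private_ip: '172.31.7.13', tag_name: 'dev-test.vpc-01' }
```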

Ansible: VPC redefine: TypeError: string indices must be integers, not str

I need help adding the NAT instance ID as the gateway for the private subnet in the routing table.
Here is my vpc playbook:
/tasks/vpc.yml
---
- name: VPC | Creating an AWS VPC inside mentioned Region
  local_action:
    module: ec2_vpc
    region: "{{ vpc_region }}"
    state: present
    cidr_block: "{{ vpc_cidr_block }}"
    resource_tags: { "Name":"{{ vpc_name }}_vpc" }
    subnets: "{{ vpc_subnets }}"
    internet_gateway: yes
    route_tables: "{{ public_subnet_rt }}"
Here is my vars/vpc.yml file:
---
ec2_inst_id: i-abc1432c

# Variables for VPC
vpc_name: tendo
vpc_region: ap-southeast-2
vpc_cidr_block: 172.25.0.0/16
public_cidr: 172.25.10.0/24
public_az: "{{ vpc_region }}b"
private_cidr: 172.25.20.0/24
private_az: "{{ vpc_region }}a"
nat_private_ip: 172.25.10.10

# Please don't change the variables below, unless you know what you are doing
#
# Subnets Definition for VPC
vpc_subnets:
  - cidr: "{{ public_cidr }}"   # Public Subnet
    az: "{{ public_az }}"
    resource_tags: { "Name":"{{ vpc_name }}_public_subnet" }
  - cidr: "{{ private_cidr }}"  # Private Subnet
    az: "{{ private_az }}"
    resource_tags: { "Name":"{{ vpc_name }}_private_subnet" }

## Routing Table for Public Subnet
public_subnet_rt:
  - subnets:
      - "{{ public_cidr }}"
    routes:
      - dest: 0.0.0.0/0
        gw: igw
When I run the above playbook it works fine:
ansible-playbook -i 'localhost,' --connection=local site.yml -vvvv
PLAY [all] ********************************************************************
TASK: [VPC | Creating an AWS VPC inside mentioned Region] *********************
<127.0.0.1> region=ap-southeast-2 cidr_block=172.25.0.0/16 state=present
<127.0.0.1>
<127.0.0.1>
<127.0.0.1> u'LANG=C LC_CTYPE=C /usr/bin/python /Users/arbab/.ansible/tmp/ansible-tmp-1427103212.79-152394513704427/ec2_vpc; rm -rf /Users/arbab/.ansible/tmp/ansible-tmp-1427103212.79-152394513704427/ >/dev/null 2>&1']
changed: [localhost -> 127.0.0.1] => {"changed": true, "subnets": [{"az": "ap-southeast-2b", "cidr": "172.25.10.0/24", "id": "subnet-70845e15", "resource_tags": {"Name": "tendo_public_subnet"}}, {"az": "ap-southeast-2a", "cidr": "172.25.20.0/24", "id": "subnet-8d1fdffa", "resource_tags": {"Name": "tendo_private_subnet"}}], "vpc": {"cidr_block": "172.25.0.0/16", "dhcp_options_id": "dopt-261e0244", "id": "vpc-9cea26f9", "region": "ap-southeast-2", "state": "available"}, "vpc_id": "vpc-9cea26f9"}
Here is the problem: when I redefine the VPC with the NAT instance ID as the gateway.
---
- name: NAT | NAT Route
  set_fact:
    private_subnet_rt: '{{ lookup("template", "../templates/nat_routes.json.j2") }}'

- name: redefine vpc
  local_action:
    module: ec2_vpc
    region: "{{ vpc_region }}"
    state: present
    cidr_block: "{{ vpc_cidr_block }}"
    resource_tags: { "Name":"{{ vpc_name }}_vpc" }
    subnets: "{{ vpc_subnets }}"
    internet_gateway: yes
    route_tables: "{{ private_subnet_rt }}"
Here are the contents of nat_routes.json.j2:
- subnets:
    - {{ public_cidr }}
  routes:
    - dest: 0.0.0.0/0
      gw: "igw"
- subnets:
    - {{ private_cidr }}
  routes:
    - dest: 0.0.0.0/0
      gw: {{ ec2_inst_id }}
I get this error when I run the above playbook after creating the NAT instance:
TASK: [redefine vpc] **********************************************************
<127.0.0.1> region=ap-southeast-2 cidr_block=172.25.0.0/16 state=present route_tables=- subnets:
- 172.25.10.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
- subnets:
- 172.25.20.0/24
routes:
- dest: 0.0.0.0/0
gw: i-abc1432c
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
Traceback (most recent call last):
  File "/Users/arbab/.ansible/tmp/ansible-tmp-1427101746.8-192243069214182/ec2_vpc", line 2413, in <module>
    main()
  File "/Users/arbab/.ansible/tmp/ansible-tmp-1427101746.8-192243069214182/ec2_vpc", line 618, in main
    (vpc_dict, new_vpc_id, subnets_changed, changed) = create_vpc(module, vpc_conn)
  File "/Users/arbab/.ansible/tmp/ansible-tmp-1427101746.8-192243069214182/ec2_vpc", line 425, in create_vpc
    for route in rt['routes']:
TypeError: string indices must be integers, not str
Can you please point out where I am making a mistake?
Thanks
Perhaps private_subnet_rt: '{{ lookup("template", "../templates/nat_routes.json.j2") }}' reads the contents of nat_routes.json.j2 and assigns it to private_subnet_rt as a string, not as the YAML list of dicts you expected.
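If you'd rather keep the set_fact approach, one alternative sketch (an assumption, not tested against your setup) is to parse the rendered template back into a data structure with the from_yaml filter:

```yaml
- name: NAT | NAT Route
  set_fact:
    # from_yaml converts the rendered template string into a list of dicts
    private_subnet_rt: "{{ lookup('template', '../templates/nat_routes.json.j2') | from_yaml }}"
```

That way route_tables receives a list rather than a string, which is what the ec2_vpc code iterating over rt['routes'] expects.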
What you need is something like:
Content of the nat_routes.json.yml:
private_subnet_rt:
  - subnets:
      - "{{ public_cidr }}"
    routes:
      - dest: 0.0.0.0/0
        gw: "igw"
  - subnets:
      - "{{ private_cidr }}"
    routes:
      - dest: 0.0.0.0/0
        gw: "{{ ec2_inst_id }}"
and then get that variable into your playbook using include_vars, instead of set_fact:
- include_vars: nat_routes.json.yml

- name: redefine vpc
  local_action:
    module: ec2_vpc
    region: "{{ vpc_region }}"
    state: present
    cidr_block: "{{ vpc_cidr_block }}"
    resource_tags: { "Name":"{{ vpc_name }}_vpc" }
    subnets: "{{ vpc_subnets }}"
    internet_gateway: yes
    route_tables: "{{ private_subnet_rt }}"
You can also put nat_routes.json.yml under the sub-folders group_vars, host_vars, or roles/<role>/vars (see the recommended folder structure), in which case you don't have to call include_vars at all: Ansible loads it implicitly based on the host, group, or role you're operating on.
HTH
