After a day of googling I decided to give up and ask here.
I'm still quite new to Ansible and AWS, so this question might lack background information, which I'm happy to provide on request.
What I'm trying to achieve:
Write an Ansible playbook, which creates a new ec2 instance within my vpc.
This instance shall be provided with two new NICs, eth0 and eth1, and each NIC should be associated with its own specific security group.
My playbook so far is built like this:
Create eth0
Create eth1
Create ec2 instance
My problem:
All documentation says I need to provide the eni-id of the interface I'd like to attach to my instance. I can't provide this, since the IDs don't exist yet. The only thing I know is the name of the interfaces, so I tried to look up the IDs of the interfaces in a separate step, which also didn't work.
If I register the output of the eth{0,1} creation tasks in Ansible, the whole module result is stored, which breaks validation later when I reference those variables in the instance-creation section. The same happens with the extra lookup step after the creation process.
More about the setup:
Running a VPC in AWS, hosts inside VPC are only accessible through VPN.
Running Ansible on macOS:
ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/Users/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/7.1.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.1 (main, Dec 23 2022, 09:40:27) [Clang 14.0.0 (clang-1400.0.29.202)] (/usr/local/Cellar/ansible/7.1.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Playbook:
---
- name: Create ec2 instances
  hosts: localhost
  gather_facts: false
  tasks:
    # A block is a group of tasks combined together
    - name: Get Info Block
      block:
        - name: Get Running instance Info
          ec2_instance_info:
          register: ec2info
        - name: Print info
          debug: var="ec2info.instances"
      # By specifying "always" in the tags,
      # this block runs every time by default.
      # This is a safeguard to not create ec2 instances accidentally.
      tags: ['always', 'getinfoonly']
    - name: Create ec2 block
      block:
        - amazon.aws.ec2_vpc_net_info:
            vpc_ids: vpc-XXXXXXXXXXXXXXXXX
        - name: Create ec2 network interface eth0_lan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
            description: "eth0_lan_{{ vpc_hostname }}"
            subnet_id: "{{ vpc_subnetid }}"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: "sg-XXXXXXXXXXXXXXXXX"
        - name: Get id of eth0
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
          register: eth0
        - name: Create ec2 network interface eth1_wan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
            description: "eth1_wan_{{ vpc_hostname }}"
            subnet_id: "subnet-XXXXXXXXXXXXXXXXX"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: 'sg-XXXXXXXXXXXXXXXXX'
        - name: Get id of eth1
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
          register: eth1
        - name: Launch ec2 instances
          tags: ec2-create
          delegate_to: localhost
          amazon.aws.ec2_instance:
            name: "{{ vpc_hostname }}"
            region: "eu-central-1"
            key_name: "MyKey"
            image_id: ami-XXXXXXXXXXXXXXXXX
            vpc_subnet_id: "{{ vpc_subnetid }}"
            instance_type: "{{ instance_type }}"
            volumes:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 30
                  delete_on_termination: true
            network:
              interfaces:
                - id: "{{ eth0 }}"
                - id: "{{ eth1 }}"
            detailed_monitoring: true
          register: ec2
      # By specifying "never" in the tags of this block,
      # it runs only when explicitly called
      tags: ['never', 'ec2-create']
(If you're wondering about the tag-stuff, this comes from the tutorial I followed initially, credits: https://www.middlewareinventory.com/blog/ansible-aws-ec2/#How_Ansible_works_with_AWS_EC2_Setup_Boto_for_Ansible)
The execution of the ansible playbook breaks with this error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter NetworkInterfaces[1].NetworkInterfaceId, value: {'changed': True, 'interface': {'id': 'eni-XXXXXXXXXXXXXXXXX', 'subnet_id': 'subnet-XXXXXXXXXXXXXXXXX', 'vpc_id': 'vpc-XXXXXXXXXXXXXXXXX', 'description': 'somedescription', 'owner_id': 'XXXXXXXXXXXXXXXXX', 'status': 'available', 'mac_address': 'xx:xx:xx:xx:xx:xx', 'private_ip_address': 'xx.xx.xxx.xx', 'source_dest_check': True, 'groups': {'sg-XXXXXXXXXXXXXXXXX': 'SGNAME'}, 'private_ip_addresses': [{'private_ip_address': 'xx.xx.xxx.xx', 'primary_address': True}], 'name': 'eth1_wan_<fqdn>', 'tags': {'Name': 'eth1_wan_<fqdn>'}}, 'failed': False}, type: <class 'dict'>, valid types: <class 'str'>
So, my colleague and I managed to solve this: use "{{ eth0.interface.id }}" instead. However, all instances continued to terminate themselves on creation, with Client.InternalError in the AWS console. This was related to the KMS/EBS encryption I had turned on by default that day.
It turned out that I had tried to use an asymmetric customer-managed key for default EBS encryption. As soon as I replaced it with a symmetric one, it worked and the instances would start.
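For anyone hitting the same error: the registered variable holds the entire module result, so the ENI id has to be pulled out of its interface key. The relevant part of the launch task becomes:

```yaml
network:
  interfaces:
    - id: "{{ eth0.interface.id }}"
    - id: "{{ eth1.interface.id }}"
```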
I am new to Ansible and AWX. I'm making good progress with my project, but I'm stuck on one part and hoping you can help.
I have a Workflow Template as follows:
1. Deploy VM
2. Get the IP of the deployed VM and create a temporary host configuration
3. Change the deployed machine's hostname
Where I am getting stuck: once I create the host entry, the host group is missing when the next template kicks off. I assume this is because each template runs in some sort of separate runspace. How do I pass this information along? I need this because later on I want to build more complicated flows.
What I have so far:
1.
- name: Deploy VM from Template
  hosts: xenservers
  tasks:
    - name: Deploy VM
      shell: xe vm-install new-name-label="{{ vm_name }}" template="{{ vm_template }}"
    - name: Start VM
      shell: xe vm-start vm="{{ vm_name }}"
---
- name: Get VM IP
  hosts: xenservers
  remote_user: root
  tasks:
    - name: Get IP
      shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
      register: vm_ip
      until: vm_ip.stdout != ""
      retries: 15
      delay: 5
Here I was setting the hosts, and this works locally within the template, but when the workflow moves on it fails. I tried this both as a task and as a play.
- name: Add host to group
  add_host:
    name: "{{ vm_ip.stdout }}"
    groups: deploy_vm

- hosts: deploy_vm

- name: Set hosts
  hosts: deployed_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"
      when: ansible_distribution == 'Rocky Linux'
Thanks
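For what it's worth, within a single playbook run this pattern does work when the add_host task and the follow-up play agree on the group name (a sketch, assuming vm_ip and vm_name are already set; note that add_host's in-memory inventory does not carry over into a separate workflow template run):

```yaml
- name: Register the freshly deployed VM
  hosts: xenservers
  tasks:
    - name: Add host to group
      add_host:
        name: "{{ vm_ip.stdout }}"
        groups: deployed_vm

- name: Configure the deployed VM
  hosts: deployed_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"
      when: ansible_distribution == 'Rocky Linux'
```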
I'm running an ansible-playbook that provisions EC2 and configures the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I need to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so the directory creation executes on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it is used for all tasks in that play, so don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting from your play, delegate_to might work the way you expect... but I don't think you want to do that.
If your playbook provisions a new host, the way to target that host with Ansible is a new play that lists the host (or its corresponding group) in the play's hosts:. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      amazon.aws.ec2_instance: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
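For example, a wait_for_connection task at the top of the second play would cover that (the delay and timeout values here are just illustrative):

```yaml
- hosts: myhost
  tasks:
    - name: wait until the new host is reachable
      wait_for_connection:
        delay: 10
        timeout: 300

    - name: do something on the new host
      command: uptime
```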
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
I'm trying to provision new machines on AWS with the ec2 module and to update my hosts file locally so that subsequent tasks can already use it.
Provisioning isn't an issue, and neither is creating the local hosts file:
- name: Provision a set of instances
  ec2:
    key_name: AWS
    region: eu-west-1
    group: default
    instance_type: t2.micro
    image: ami-6f587e1c # For Ubuntu 14.04 LTS use ami-b9b394ca; for Ubuntu 16.04 LTS use ami-6f587e1c
    wait: yes
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 50
    count: 2
    vpc_subnet_id: subnet-xxxxxxxx
    assign_public_ip: yes
    instance_tags:
      Name: Ansible
  register: ec2

- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
  with_items: "{{ ec2.instances }}"

- local_action: file path=./hosts state=absent
  ignore_errors: yes

- local_action: file path=./hosts state=touch

- local_action: lineinfile line="[all]" insertafter=EOF dest=./hosts

- local_action: lineinfile line="{{ item.private_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./hosts
  with_items: "{{ ec2.instances }}"

- name: Wait for SSH to come up
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 600
    state: started
  with_items: "{{ ec2.instances }}"

- name: refreshing inventory cache
  meta: refresh_inventory

- hosts: all
  gather_facts: false
  tasks:
    - command: hostname -i
However, the next task, a simple hostname -i (just for the test), fails because Ansible can't find Python 2.7 on Ubuntu 16.04 LTS (only python3 is present).
To handle that, my dynamic hosts file adds the following line for each host:
ansible_python_interpreter=/usr/bin/python3
But Ansible seems to ignore it and goes straight to Python 2.7, which is missing.
I've tried to reload the inventory file with
meta: refresh_inventory
but that didn't help either.
What am I doing wrong?
I'm not sure why the refresh did not work, but I suggest setting the variable in the add_host section; it accepts any variable.
- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
    ansible_python_interpreter: /usr/bin/python3
  with_items: "{{ ec2.instances }}"
Also, I find it useful to debug with this task:
- debug: var=hostvars[inventory_hostname]
I'm trying to configure Ansible 1.9 to launch some OpenStack Nova instances.
For each instance, I'm attempting to auto-assign a floating IP, connecting it to a public segment.
When I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
    - neutron_floating_ip:
        state=present
        login_username=tenant_2_user
        login_password=hello
        login_tenant_name=tenant_2
        network_name=ext-net
        instance_name=Web01
I get : ERROR: neutron_floating_ip is not a legal parameter in an Ansible task or handler
And when I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
        auto_floating_ip: yes
I get: msg: unsupported parameter for module: auto_floating_ip
Here is my Ansible version:
ansible --version
ansible 1.9
  configured module search path = /usr/share/ansible
What can I do to have Ansible assign these floating IPs?
-Eugene
I got it working. You don't need
auto_floating_ip: yes
Just use:
floating_ip_pools:
  - Your-external/public-network-id
Hope this helps.
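Put together with the values from the question, the launch task might look like this (a sketch; ext-net is taken from the question's neutron_floating_ip attempt, and floating_ip_pools expects a list of pool/network names):

```yaml
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
        floating_ip_pools:
          - ext-net
```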