When I run my playbook I see the following error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: list object has no element 0\n\nThe error appears to be in '/etc/ansible/loyalty/tasks/create_ec2_stage.yaml': line 63, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n      register: ec2_metadata\n    - name: Parse JSON\n      ^ here\n"}
I run the playbook this way:
/usr/bin/ansible-playbook -i hosts --extra-vars "CARRIER=xx" tasks/create_ec2_stage.yaml
Here is my playbook:
---
- name: New EC2 Instances
  hosts: localhost
  gather_facts: no
  vars_files:
    - /etc/ansible/loyalty/vars/vars.yaml
  tasks:
    - name: Run EC2 Instances
      amazon.aws.ec2_instance:
        name: "new-{{ CARRIER }}.test"
        aws_secret_key: "{{ ec2_secret_key }}"
        aws_access_key: "{{ ec2_access_key }}"
        region: us-east-1
        key_name: Kiu
        instance_type: t2.medium
        image_id: xxxxxxxxxxxxx
        wait: yes
        wait_timeout: 500
        volumes:
          - device_name: /dev/xvda
            ebs:
              volume_type: gp3
              volume_size: 20
              delete_on_termination: yes
        vpc_subnet_id: xxxxxxxxxxxx
        network:
          assign_public_ip: no
        security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
        tags:
          Enviroment: TEST
        count: 1
    - name: Pause Few Seconds
      pause:
        seconds: 20
        prompt: "Please wait"
    - name: Get Information for EC2 Instances
      ec2_instance_info:
        region: us-east-1
        filters:
          "tag:Name": new-{{ CARRIER }}.test
      register: ec2_metadata
    - name: Parse JSON
      set_fact:
        ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
If I create a slightly smaller playbook to query the private IP address of an existing instance, I don’t see any error.
---
- name: New EC2 Instances
  hosts: localhost
  gather_facts: no
  vars_files:
    - /etc/ansible/loyalty/vars/vars.yaml
  vars:
    pwd_alias: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
    CARRIER_UPPERCASE: "{{ CARRIER | upper }}"
  tasks:
    - set_fact:
        MY_PASS: "{{ pwd_alias }}"
    - name: Get EC2 info
      ec2_instance_info:
        region: us-east-1
        filters:
          "tag:Name": new-{{ CARRIER }}.stage
      register: ec2_metadata
    - name: Parsing JSON
      set_fact:
        ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
    - name: Show Result
      debug:
        msg: "{{ ip_addr }}"
Results in:
TASK [Show Result] ******************************************************
ok: [localhost] => {
    "msg": "172.31.x.x"
}
I am creating an EC2 instance on Amazon and querying its private IP so I can use that IP in other services like Route 53 and Cloudflare; the tasks that would add those records never run because the error occurs in the set_fact task.
You don't have to query AWS with the ec2_instance_info module: amazon.aws.ec2_instance already returns the newly created EC2 instance when the wait parameter is set to yes, as it is in your case.
You just need to register the return value of that task.
So, given the two tasks:
- name: Run EC2 Instances
  amazon.aws.ec2_instance:
    name: "new-{{ CARRIER }}.test"
    aws_secret_key: "{{ ec2_secret_key }}"
    aws_access_key: "{{ ec2_access_key }}"
    region: us-east-1
    key_name: Kiu
    instance_type: t2.medium
    image_id: xxxxxxxxxxxxx
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        ebs:
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
    vpc_subnet_id: xxxxxxxxxxxx
    network:
      assign_public_ip: no
    security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
    tags:
      Enviroment: TEST
    count: 1
  register: ec2

- set_fact:
    ip_addr: "{{ ec2.instances[0].private_ip_address }}"
You should have the private IP address of the newly created EC2 instance.
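Since you mentioned Route 53, here is a minimal sketch of how the registered fact could then be consumed; the zone and record names are hypothetical, not from your playbook:
# Hypothetical follow-up task: publish the new private IP as an A record.
# Zone and record names are placeholders; adjust them to your hosted zone.
- name: Create DNS record for the new instance
  community.aws.route53:
    state: present
    zone: example.internal
    record: "new-{{ CARRIER }}.example.internal"
    type: A
    ttl: 300
    value: "{{ ip_addr }}"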
Related
I am building an AWS EC2 Instance using Ansible and want to set three environment variables to be used by the roles. The playbook is launched from AWX. I get the STS Assume Role values in the "Assume Credentials" task.
---
- name: Create Instance
  hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - import_tasks: usshared.yml
      when: (buildenv == "us-shared")
    - import_tasks: ossettings.yml
    - name: Assume Credentials
      sts_assume_role:
        region: "{{ target_region }}"
        role_arn: "{{ awsarnrole }}"
        role_session_name: "AWXBuildServer"
      register: assumed_role
    # - name: Assume Role Data
    #   debug:
    #     var: assumed_role
    - name: Find AMI Target
      ec2_ami_info:
        filters:
          name: "cmpc*{{ osselection }}*base*"
        owners: self
        region: "{{ target_region }}"
        aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
        aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
        security_token: "{{ assumed_role.sts_creds.session_token }}"
      register: found_base_ami
    - import_tasks: osbuildlinux.yml
      when: (osselection == "centos7") or (osselection == "rhel7")
    - import_tasks: osbuildwin.yml
      when: (osselection == "win2019")

- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - appgroup
  environment:
    AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
    AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"

- name: Configure New Windows Instance
  hosts: new_launch_windows
  gather_facts: true
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - winadjoin
    - appgroup
Here are the contents of the assumed_role register:
"assumed_role": {
"changed": true,
"sts_creds": {
"access_key": "HIDDEN",
"secret_key": "HIDDEN",
"session_token": "HIDDEN",
"expiration": "2022-06-08T16:58:38+00:00"
},
"sts_user": {
"assumed_role_id": "HIDDEN:AWXBuildServer",
"arn": "arn:aws:sts::0123456789:assumed-role/sre-ec2-role-assumed/AWXBuildServer"
}
}
Now I want to use some of the register values as environment variables for roles like so:
roles:
  - systemupdates
  - generalostasks
  - networkconfig
  - appgroup
environment:
  AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
  AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
  AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
However, I am getting this error:
The field 'environment' has an invalid value, which includes an undefined variable. The error was: 'assumed_role' is undefined. 'assumed_role' is undefined
How do I correctly pass the register values to the environment?
OK, you have more than one play in the same playbook, with different hosts, so I suggest using add_host after you have registered the result:
- name: Assume Credentials
  sts_assume_role:
    region: "{{ target_region }}"
    role_arn: "{{ awsarnrole }}"
    role_session_name: "AWXBuildServer"
  register: assumed_role

- name: Add variables to dummy host
  add_host:
    name: "variable_holder"
    shared_variable: "{{ assumed_role }}"
Then, in the second play:
- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  vars:
    assumed_role: "{{ hostvars['variable_holder']['shared_variable'] }}"
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - appgroup
  environment:
    AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
    AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
I have a lot of inventories inside Ansible; I am not looking for a dynamic inventory solution.
When I create an instance in AWS from Ansible, I need to run some tasks inside that new server, but I don't know the best way to do it.
tasks:
  - ec2:
      aws_secret_key: "{{ ec2_secret_key }}"
      aws_access_key: "{{ ec2_access_key }}"
      region: us-west-2
      key_name: xxxxxxxxxx
      instance_type: t2.medium
      image: ami-xxxxxxxxxxxxxxxxxx
      wait: yes
      wait_timeout: 500
      volumes:
        - device_name: /dev/xvda
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
      vpc_subnet_id: subnet-xxxxxxxxxxxx
      assign_public_ip: no
      instance_tags:
        Name: new-instances
      count: 1
  - name: Look up the IP of the created instance
    ec2_instance_facts:
      filters:
        "tag:Name": new-instances
    register: ec2_instance_info
  - set_fact:
      msg: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }} "
  - debug: var=msg
I can currently show the IP of the new instance, but I need to create a new host entry with the IP of the new instance, since I have to run several tasks on it after creating it.
Any help doing this?
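One way to do this is with add_host, which registers the new IP in the in-memory inventory so a second play in the same playbook can target it. A minimal sketch; the group name launched and the remote user are assumptions, not from the original question:
# Sketch: put the new private IP(s) into an in-memory group.
- name: Add the new instance to an in-memory group
  add_host:
    hostname: "{{ item }}"
    groupname: launched
  with_items: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"
- name: Wait for SSH to come up
  wait_for:
    host: "{{ item }}"
    port: 22
    delay: 60
    timeout: 320
  with_items: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"
# Second play in the same playbook file:
- hosts: launched
  remote_user: ubuntu   # assumption: adjust to your AMI's default user
  become: yes
  tasks:
    - name: Run your post-creation tasks here
      ping: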
Using Ansible, I am trying to create EC2 instances and attach an extra network interface to each instance so that they will have two private IP addresses. However, for some reason, it seems that the ec2_eni module can create network interfaces but will not attach them to the instances specified. What am I doing wrong? Below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create new servers
      ec2:
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group_id: "{{ sec_group }}"
        assign_public_ip: yes
        wait: true
        key_name: '{{ key }}'
        instance_type: t2.micro
        image: '{{ ami }}'
        exact_count: '{{ count }}'
        count_tag:
          Name: "{{ server_name }}"
        instance_tags:
          Name: "{{ server_name }}"
      register: ec2
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: Allocate new elastic IPs and associate them with instances
      ec2_eip:
        region: "{{ region }}"
        device_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
      register: eips
    - name: Show eip instance json data
      debug:
        msg: "{{ eips['results'] }}"
    - ec2_eni:
        subnet_id: "{{ subnet }}"
        state: present
        secondary_private_ip_address_count: 1
        security_groups: "{{ sec_group }}"
        region: "{{ region }}"
        device_index: 1
        description: "test-eni"
        instance_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
The strange thing is that the ec2_eni task succeeds, saying that it has attached the network interface to each instance when in reality it just creates the network interface and then does nothing with it.
As best I can tell, since attached defaults to None, and the module docs say:
Specifies if network interface should be attached or detached from instance. If omitted, attachment status won't change
the code does what they claim and skips the attachment step.
This appears to be a bug in the documentation, which claims the default is 'yes', but that is not accurate.
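The practical workaround is to set the parameter explicitly. A minimal sketch, keeping the rest of the task from the question unchanged:
- ec2_eni:
    subnet_id: "{{ subnet }}"
    state: present
    attached: true   # explicit, so the interface is attached, not just created
    secondary_private_ip_address_count: 1
    security_groups: "{{ sec_group }}"
    region: "{{ region }}"
    device_index: 1
    description: "test-eni"
    instance_id: "{{ item['id'] }}"
  with_items: "{{ ec2['tagged_instances'] }}"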
I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later should attach an existing volume to the new instance:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination

- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good, everything works.
I would like to add a clean-up step so that, if there is an error of any type in the later modules, I won't have any garbage instances.
I understand the idea here is to use a block; however, when I tried working with block here, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launching EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns ok status, so:
1. it does not notify a handler (that requires changed status),
2. the rescue section is never triggered (that requires failed status).
Thus it is expected that "nothing really happens".
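As a quick illustration (a sketch, not the actual fix): forcing a changed status on the debug task would make the notify fire, which shows that the block itself was never the problem:
- debug:
    msg: "Launching EC2"
  changed_when: true   # debug normally reports "ok", so notify never fires
  notify:
    - launch_instance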
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
I have an Amazon console with multiple instances running. All instances have tags, for example:
- tag Name: jenkins
- tag Name: Nginx
- tag Name: Artifactory
I want to run an Ansible playbook against the hosts that are tagged as Nginx.
I use dynamic inventory but how do I limit where the playbook is run?
My playbook looks like this:
- name: Provision an EC2 node
  hosts: local
  connection: local
  gather_facts: False
  vars:
    instance_type: t2.micro
    security_group: somegroup
    #image: ami-a73264ce
    image: ami-9abea4fb
    region: us-west-2
    keypair: ansible_ec2
  tasks:
    - name: Step 1 Create a new AWS EC2 Ubuntu Instance
      local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
      register: ec2
    - name: Step 2 Add new instance to local host group
      local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
      with_items: ec2.instances
    - name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
      with_items: ec2.instances

- name: Step 5 Install nginx steps
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
Try with:
roles/create-instance/defaults/main.yml
quantity_instance: 1
key_pem: "ansible_ec2"
instance_type: "t2.micro"
image_base: "ami-9abea4fb"
sec_group_id: "somegroup"
tag_Name: "Nginx"
tag_Service: "reverseproxy"
aws_region: "us-west-2"
aws_subnet: "somesubnet"
root_size: "20"
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: False
  tasks:
    - name: Adding Vars
      include_vars: roles/create-instance/defaults/main.yml
    - name: run instance
      ec2:
        key_name: "{{ key_pem }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image_base }}"
        wait: yes
        group_id: "{{ sec_group_id }}"
        wait_timeout: 500
        count: "{{ quantity_instance }}"
        instance_tags:
          Name: "{{ tag_Name }}"
          Service: "{{ tag_Service }}"
        vpc_subnet_id: "{{ aws_subnet }}"
        region: "{{ aws_region }}"
        volumes:
          - device_name: /dev/xvda
            volume_size: "{{ root_size }}"
            delete_on_termination: true
        assign_public_ip: yes
      register: ec2
    - name: Add new instance to host group
      add_host: hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- hosts: launched
  vars:
    ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
  gather_facts: true
  user: ubuntu
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
To avoid adding ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem as a variable, use the ~/.ssh/config file and add the following:
IdentityFile ~/.ssh/ansible_ec2.pem
Remember that the config file needs chmod 600.
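For example, scoped to your EC2 hosts (the host pattern here is an assumption; adjust it to match your instances):
Host *.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/ansible_ec2.pem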
If you don't want to create the instances again, launch another playbook like this:
- hosts: tag_Name_Nginx
  vars:
    ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
  gather_facts: true
  user: ubuntu
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
And note how we target the specific group tag_Name_Nginx.
All tags become groups in the dynamic inventory, so you can specify the tag in the hosts parameter:
- name: Provision an EC2 node
  hosts: local
  connection: local
  gather_facts: False
  vars:
    instance_type: t2.micro
    security_group: somegroup
    #image: ami-a73264ce
    image: ami-9abea4fb
    region: us-west-2
    keypair: ansible_ec2
  tasks:
    - name: Step 1 Create a new AWS EC2 Ubuntu Instance
      local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
      register: ec2
    - name: Step 2 Add new instance to local host group
      local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
      with_items: ec2.instances
    - name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
      with_items: ec2.instances

- name: Step 5 Install nginx steps
  hosts: tag_Name_Nginx
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx