I would like to create and provision Amazon EC2 machines with the help of Ansible.
Now, I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair '~/.keys/EC2-Kibi-Enterprise-Deployment.pem' does not exist"}
But the .pem key exists:
$ ls -lh ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
-r-------- 1 sergey sergey 1.7K Apr 6 09:56 /home/sergey/.keys/EC2-Kibi-Enterprise-Deployment.pem
And it was created in EU (Ireland) region.
Here is my playbook:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key }}"
        region: "{{ region }}"
        zone: "{{ zone }}"
        group: "{{ group }}"
        instance_type: "{{ machine }}"
        image: "{{ image }}"
        wait: true
        wait_timeout: 500
        #vpc_subnet_id: "{{ subnet }}"
        #assign_public_ip: yes
      register: ec2

    - name: Amazon EC2 | Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 10
        timeout: 60
        state: started
      with_items: "{{ ec2.instances }}"

    - name: Amazon EC2 | Add hosts to the kibi_servers in-memory inventory group
      add_host: hostname={{ item.public_ip }} groupname=kibi_servers
      with_items: "{{ ec2.instances }}"
    ### END

### Provision roles
- name: Amazon EC2 | Provision new instances
  hosts: kibi_servers
  become: yes
  roles:
    - common
    - java
    - elasticsearch
    - logstash
    - nginx
    - kibi
    - supervisor
### END
And my var file:
count: 2
region: eu-west-1
zone: eu-west-1a
group: default
image: ami-d1ec01a6
machine: t2.medium
subnet: subnet-3a2aa952
key: ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
What is wrong with the .pem file here?
The key parameter of the ec2 module expects the name of a key pair that has already been uploaded to AWS, not a path to a local key file.
If you want to get Ansible to upload a public key you can use the ec2_key module.
So your playbook would look like this:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 key pair
    - name: Amazon EC2 | Create Key Pair
      ec2_key:
        name: "{{ key_name }}"
        region: "{{ region }}"
        key_material: "{{ item }}"
      with_file: /path/to/public_key.id_rsa.pub

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key_name }}"
    ...
Do not specify an extension for the key, so the key name should be "EC2-Kibi-Enterprise-Deployment" only. Ansible doesn't care whether the key exists on your local machine at this stage; it checks whether it exists in your AWS account. Go to the 'EC2 > Key Pairs' section in your AWS console and you'll see that keys are listed without file extensions.
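In other words, following this answer the key line in the vars file becomes just the key pair name, exactly as it appears under EC2 > Key Pairs:

key: EC2-Kibi-Enterprise-Deployment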
The solution has been found. EC2 doesn't like it when you give a full path to the .pem key file.
So, I moved EC2-Kibi-Enterprise-Deployment.pem into ~/.ssh and added it to the authentication agent with ssh-add:
ssh-add ~/.ssh/EC2-Kibi-Enterprise-Deployment.pem
And corrected the key line in my var file to key: EC2-Kibi-Enterprise-Deployment.pem
The same applies if you use the EC2 CLI tools: don't specify a full path to the key file.
ec2-run-instances ami-d1ec01a6 -t t2.medium --region eu-west-1 --key EC2-Kibi-Enterprise-Deployment.pem
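A hedged equivalent with the newer aws CLI, following the same rule (pass the key pair name as registered in AWS, never a file path):

aws ec2 run-instances --image-id ami-d1ec01a6 --instance-type t2.medium \
    --region eu-west-1 --key-name EC2-Kibi-Enterprise-Deployment.pem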
When providing the key in a variable, don't include the file extension (.pem); just give the file name. For example, if akshay.pem is my key, then in the vars file I just provide akshay as the key.
I am launching 2 AWS EC2 instances with Ansible using count: 2. Please check the playbook below:
- name: Create an EC2 instance
  ec2:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ key }}"
    key_name: "{{ keypair }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: yes
    count: 2
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    assign_public_ip: "{{ assign_public_ip }}"
  register: ec2

- name: Add the newly created first EC2 instance to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"

- name: Add the newly created remaining EC2 instance to the db group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"
Here I want to add one IP to the webserver host group and the remaining one to the db host group, but it's not working with the above playbook. Please help me achieve this.
I don't want to use add_host here.
Since you are using AWS, have you considered using the aws_ec2 plugin for dynamic inventory?
As long as you are tagging your instances correctly and you set up the YAML file, it will do what you want.
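A minimal sketch of such an inventory config, assuming your instances carry a Role tag (the tag key, file name, and region here are assumptions, not from your playbook):

# inventory/aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # builds one group per tag, e.g. tag_Role_webserver and tag_Role_db_server
  - key: tags
    prefix: tag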
Otherwise, note that your register: ec2 has two elements in it. The way you are looping through ec2 (if it worked) would add both instances to each group. You would need a when condition matching something like the tag/subnet/CIDR to know which server to add to which group.
One way to see what the return contains is to print out the ec2 variable:
- debug: var=ec2
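For exactly two instances, one workable sketch (untested, and assuming ec2.instances preserves creation order) is to index into the registered list instead of looping over it:

- name: Add the first instance to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ ec2.instances[0].private_ip }} {{ hoststring }}"
    state: present

- name: Add the second instance to the db-server group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ ec2.instances[1].private_ip }} {{ hoststring }}"
    state: present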
I was trying to create an Ansible playbook in such a way that the playbook first creates an EC2 instance, using localhost as the host.
After that, the task above must return the IP of the new instance, and I want to install Splunk on the newly created instance. Can someone help me?
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: create a new ec2 key pair, returns generated private key
      ec2_key:
        name: my_keypair3
        force: false
        region: us-east-1
      register: ec2_key_result

    - name: Save private key
      copy: content="{{ ec2_key_result.key.private_key }}" dest="./akey.pem" mode=0600
      when: ec2_key_result.changed

    - name: Provision a set of instances
      ec2:
        key_name: my_keypair3
        group: SplunkSecurityGroup
        instance_type: t2.micro
        image: ami-04b9e92b5572fa0d1
        wait: true
        region: us-east-1
        exact_count: 1
        count_tag:
          Name: Demo
        instance_tags:
          Name: v3

    - name: Downloading Splunk
      get_url:
        url: "https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.0.1&product=splunk&filename=splunk-8.0.1-6db836e2fb9e-linux-2.6-amd64.deb&wget=true"
        dest: ~/splunk.deb
        checksum: md5:29723caba24ca791c6d30445f5dfe6
See Splunkenizer for an Ansible configuration for deploying Splunk.
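To actually run the Splunk download and installation on the new machine rather than on localhost, one common pattern (a sketch, not the asker's code: it assumes you add register: ec2_result to the "Provision a set of instances" task, that with exact_count the ec2 module reports matching instances under tagged_instances, and that the SSH user is ubuntu, which depends on the AMI) is to add the host to an in-memory group and target it from a second play:

    - name: Add the new instance to an in-memory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: splunk_servers
      with_items: "{{ ec2_result.tagged_instances }}"

    - name: Wait for SSH on the new instance
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 10
        timeout: 320
      with_items: "{{ ec2_result.tagged_instances }}"

# second play: runs on the instance created above
- hosts: splunk_servers
  become: yes
  remote_user: ubuntu    # assumption: the SSH user depends on the AMI
  tasks:
    - name: Download Splunk on the new instance
      get_url:
        url: "https://www.splunk.com/bin/splunk/DownloadActivityServlet?..."    # the full URL from the task above
        dest: /tmp/splunk.deb

    - name: Install the Splunk package
      apt:
        deb: /tmp/splunk.deb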
I'm trying to use Ansible to create two instances, where each instance must have a different subnet ID. I'm using exact_count with the Name tag to keep track of instances. The main issue is that I can't work out how to provide two different subnet IDs in the same playbook.
This task can be used to create multiple EC2 instances with different subnets:
- name: 7. Create EC2 server
  ec2:
    image: "{{ image }}"
    wait: true
    instance_type: t2.micro
    group_id: "{{ security_group.group_id }}"
    vpc_subnet_id: "{{ item }}"
    key_name: "{{ key_name }}"
    count: 1
    region: us-east-1
  with_items:
    - "{{ subnet1.subnet.id }}"
    - "{{ subnet2.subnet.id }}"
  register: ec2
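One caveat worth adding (an observation, not part of the original answer): because register is attached to a looped task here, ec2 ends up holding a list of per-iteration results, so the created instances live under ec2.results rather than ec2.instances. A quick way to inspect them:

- name: Show the instances created by each loop iteration
  debug:
    msg: "{{ item.instances }}"
  with_items: "{{ ec2.results }}"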
I have a playbook which creates an instance and adds it to a load balancer. How can I remove/stop an old instance that is already assigned to the ELB? I want to make sure that we stop the old instance first and then add the new ones, or vice versa.
I am using an AWS ELB and EC2 instances.
I hope this helps. Once you have the instance ID you can do whatever you want with it; below I tag the old instances via the ELB facts and then use the ec2 module to remove them.
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Get the facts about the ELB
      ec2_elb_facts:
        names: rbgeek-dev-web-elb
        region: "eu-west-1"
      register: elb_facts

    - name: Instance(s) ID that are currently registered to the ELB
      debug:
        msg: "{{ elb_facts.elbs.0.instances }}"

    - name: Tag the old instances as zombie
      ec2_tag:
        resource: "{{ item }}"
        region: "eu-west-1"
        state: present
        tags:
          Instance_Status: "zombie"
      with_items: "{{ elb_facts.elbs.0.instances }}"

    - name: Refresh the ec2.py cache
      shell: ./inventory/ec2.py --refresh-cache
      changed_when: no

    - name: Refresh inventory
      meta: refresh_inventory

# must check this step with ec2_remote_facts_module.html
# http://docs.ansible.com/ansible/ec2_remote_facts_module.html
- hosts: tag_Instance_Status_zombie
  gather_facts: no
  tasks:
    - name: Get the facts about the zombie instances
      ec2_facts:

    - name: Remove instance by instance id
      ec2:
        state: absent
        region: "eu-west-1"
        instance_ids: "{{ ansible_ec2_instance_id }}"
        wait: true
      delegate_to: localhost
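If you would rather just take the old instances out of the ELB without terminating them, the ec2_elb module can deregister an instance; a sketch reusing the ELB name and region from above:

    - name: Deregister the zombie instance from the ELB (leave it running)
      ec2_elb:
        instance_id: "{{ ansible_ec2_instance_id }}"
        ec2_elbs: rbgeek-dev-web-elb
        region: "eu-west-1"
        state: absent
      delegate_to: localhost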
I am running an Ansible playbook on a cluster to add a new machine. I want it to run only against the new machine, as if the old machines didn't exist. I can limit the playbook to one machine with "--limit", but in this case I don't know the machine's name or IP before it is created.
How can I skip the existing machines in the cluster while adding a new one with Ansible?
Thanks
You could use the add_host module: when your playbook creates a new machine, catch its public or private IP, associate it with a new group, and then use that group in your next play.
Take a look at the following example:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    key_name: my_keypair
    instance_type: m1.small
    security_group: my_securitygroup
    image: my_ami_id
    region: us-east-1
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ key_name }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: subnet-29e63245
        assign_public_ip: yes
      register: ec2

    - name: Add new instance to host group
      add_host: hostname={{ item.public_ip }} groupname=launched
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: "{{ ec2.instances }}"

- name: Configure instance(s)
  hosts: launched
  become: True
  gather_facts: True
  roles:
    - my_awesome_role
    - my_awesome_test
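Note that no --limit is needed with this pattern: the launched group exists only in memory and contains nothing but the machines created during that run, so the second play never touches the existing cluster members. Run it as a single playbook (the file name here is just an example):

ansible-playbook launch_and_configure.yml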