Point Ansible to .pem file for a dynamic set of EC2 nodes - amazon-ec2

I'm pretty new to Ansible.
I get:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).", "unreachable": true}
at the last step when I run this Ansible playbook:
---
- name: find EC2 instances
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ansible_python_interpreter: "/usr/bin/python3"
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    aws_region: "us-west-2"
    vpc_subnet_id: "subnet-xxx"
    ec2_filter:
      "tag:Name": "airflow-test"
      "tag:Team": "data-science"
      "tag:Environment": "staging"
      "instance-state-name": ["stopped", "running"]
  vars_files:
    - settings/vars.yml
  tasks:
    - name: Find EC2 Facts
      ec2_instance_facts:
        region: "{{ aws_region }}"
        filters: "{{ ec2_filter }}"
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      loop: "{{ ec2.instances }}"

    - name: Wait for the instances to boot by checking the ssh port
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        sleep: 10
        timeout: 120
        state: started
      loop: "{{ ec2.instances }}"

- name: install required packages on instances
  hosts: launched
  become: true
  gather_facts: true
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
  tasks:
    - name: ls
      command: ls
I know I need to point Ansible at the .pem file. I tried adding ansible_ssh_private_key_file to the inventory file, but since the nodes are dynamic I'm not sure how to do it.

Adding ansible_ssh_user solved the problem:
- name: install required packages on instances
  hosts: launched
  become: true
  gather_facts: true
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
  tasks:
    - name: ls
      command: ls
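If the right key isn't already picked up from ssh-agent or ~/.ssh/config, the .pem file can be pointed to in the same place; a minimal sketch, assuming the key pair's file lives at ~/.ssh/airflow-test.pem (a made-up path):
- name: install required packages on instances
  hosts: launched
  become: true
  gather_facts: true
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
    # Assumed key location; point this at the .pem for the instances' key pair.
    ansible_ssh_private_key_file: "~/.ssh/airflow-test.pem"
  tasks:
    - name: ls
      command: ls
The same effect can be had without editing the playbook by passing --private-key ~/.ssh/airflow-test.pem to ansible-playbook, or by loading the key into ssh-agent.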

Related

Ansible Tower how to pass inventory to my playbook variables

I am setting up a VMware job in Ansible Tower to snapshot a list of VMs. Ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX and uses either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
  vcenter_datacenter: "dc1"
  vcenter_validate_certs: false
  vm_name: "EVE-NG"
  vm_template: "Win2019-Template"
  vm_folder: "Network Labs"
My playbook:
---
- name: vm snapshot
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        # hostname: "{{ host }}"
        # username: "{{ user }}"
        # password: "{{ password }}"
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ vm_name }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
  hosts: all
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
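If so, one small tweak (an untested suggestion) is to work the inventory hostname into the snapshot name so each guest gets its own:
        snapshot_name: "Ansible Managed Snapshot - {{ inventory_hostname }}"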

Ansible how to remove groups value by key

I have a play where I collect the reachable host names before running a task; I am using this for a specific purpose.
My play code:
---
- name: check reachable side A hosts
  hosts: ????ha???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
When I print this using:
- debug:
    msg: "group: {{ groups['reachable_other_pairs'] }}"
I get the following result:
"this group : ['testha1', 'testha2', 'testha3']"
Now if I call the same play again with a different host pattern but the same group key, the new host names are appended to the existing values, like below:
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
If I print reachable_other_pairs I now get these results:
"msg": " new group: ['testhb1', 'testhb2', 'testhb3', 'testha1', 'testha2', 'testha3']"
All I want is the first 3 entries, ['testhb1', 'testhb2', 'testhb3'].
Can someone let me know how to achieve this?
Add this as a task just before your block. It will refresh your inventory and clean up all groups that are not in there:
- meta: refresh_inventory
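For example, placed as the first task of the side-B play (a trimmed sketch of the play above, with the ping block omitted):
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  tasks:
    # Re-reading the inventory drops in-memory groups created by earlier
    # group_by/add_host calls, so reachable_other_pairs starts out empty here.
    - meta: refresh_inventory

    - name: Add devices with connectivity to the "reachable" group
      group_by:
        key: "reachable_other_pairs"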

Cannot use ansible_user with local connection

I have a playbook for running on localhost:
- hosts: localhost
  connection: local
  gather_facts: true
  roles:
    - role: foo
    - role: bar
    - role: baz
Then in various tasks I want to use ansible_user, but I get the error:
fatal: [localhost]: FAILED! => {"msg": "'ansible_user' is undefined"}
Okay, I figured it out.
In any task that uses become: true, e.g.:
- name: foo
  become: true
  file:
    path: /tmp/foo
    state: directory
    owner: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    group: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    mode: "0755"
instead of ansible_user, we must use ansible_env.USER or ansible_env.USERNAME.
I think this is because Ansible doesn't know which user to use: if it's a local connection and we elevate permissions, is the "ansible user" actually root, or the user running the playbook? Ansible gets confused, so the variable is empty... I think.
(For this to work we must have gather_facts: true at the beginning of the play.)
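Another option (my own suggestion, not from the original answers) is to define ansible_user explicitly for the local play so the variable always exists, even under become; here it is read from the control node's USER environment variable:
- hosts: localhost
  connection: local
  gather_facts: true
  vars:
    # Assumed workaround: populate ansible_user from the controller's environment.
    ansible_user: "{{ lookup('env', 'USER') }}"
  roles:
    - role: foo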
Q: "Cannot use ansible_user with local connection."
A: There must be some kind of misconfiguration. The playbook below works as expected and there is no reason it wouldn't work with these tasks inside a role.
shell> cat playbook.yml
- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: false
    - file:
        path: /tmp/foo
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: false
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: true
    - file:
        path: /tmp/bar
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: true

Ansible: How to use multiple vCenter?

I use Ansible to take snapshots of VMs in VMware vCenter.
With a single vCenter it works fine, but what if there are many vCenters?
How do I describe multiple VMware vCenters in the current role?
My current example for one vCenter:
# cat snapshot.yml
- name: ROLE_update
  hosts: all
  gather_facts: False
  become: yes
  become_method: su
  become_user: root
  vars_files:
    - /ansible/vars/privileges.txt
  roles:
    - role: snapshot
# cat tasks/main.yml
- block:
    - name: "Create vmware snapshot of {{ inventory_hostname }}"
      vmware_guest_snapshot:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        folder: find_folder.yml
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: snap check
        validate_certs: False
      delegate_to: localhost
      register: vm_facts
# cat find_folder.yml
- name: Find folder path of an existing virtual machine
  hosts: all
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
  tasks:
    - name: "Find folder for VM - {{ vm_name }}"
      vmware_guest_find:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        validate_certs: False
        name: "{{ vm_name }}"
      delegate_to: localhost
      register: vm_facts
# cat vars/main.yml  (variables declared)
vcenter_hostname: <secret>
vcenter_username: <secret>
vcenter_password: <secret>
Thanks!
Is having the variables for the different vCenters in different .yml files an option? If so, you could easily pass those variables at runtime, for example:
ansible-playbook your_playbook.yml --extra-vars "@your_vcenter_1.yml"
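Each per-vCenter vars file would then carry that vCenter's connection details; a sketch with made-up values:
# your_vcenter_1.yml (hypothetical contents)
vcenter_hostname: vcenter1.example.com
vcenter_username: <secret>
vcenter_password: <secret>
datacenter_name: dc1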
There are other ways I can think of, such as tagging each task in your playbook and invoking the playbook with a specific tag. On each tagged import you could set the vCenter variables like this:
tasks:
  - import_tasks: your_role.yml
    vars:
      vcenter_hostname: your_vcenter_1
    tags:
      - vcenter1
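You would then run just the tasks for one vCenter by selecting its tag on the command line:
ansible-playbook your_playbook.yml --tags vcenter1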

Ansible AWS EC2 tags

I have an Amazon console with multiple instances running. All instances have tags,
for example:
- tag Name: jenkins
- tag Name: Nginx
- tag Name: Artifactory
I want to run an Ansible playbook against the hosts that are tagged Nginx.
I use dynamic inventory, but how do I limit where the playbook is run?
My playbook looks like this:
- name: Provision an EC2 node
  hosts: local
  connection: local
  gather_facts: False
  vars:
    instance_type: t2.micro
    security_group: somegroup
    #image: ami-a73264ce
    image: ami-9abea4fb
    region: us-west-2
    keypair: ansible_ec2
  tasks:
    - name: Step 1 Create a new AWS EC2 Ubuntu Instance
      local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
      register: ec2
    - name: Step 2 Add new instance to local host group
      local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
      with_items: ec2.instances
    - name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
      with_items: ec2.instances

- name: Step 5 Install nginx steps
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
Try with:
roles/create-instance/defaults/main.yml
quantity_instance: 1
key_pem: "ansible_ec2"
instance_type: "t2.micro"
image_base: "ami-9abea4fb"
sec_group_id: "somegroup"
tag_Name: "Nginx"
tag_Service: "reverseproxy"
aws_region: "us-west-2"
aws_subnet: "somesubnet"
root_size: "20"
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: False
  tasks:
    - name: Adding Vars
      include_vars: roles/create-instance/defaults/main.yml
    - name: run instance
      ec2:
        key_name: "{{ key_pem }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image_base }}"
        wait: yes
        group_id: "{{ sec_group_id }}"
        wait_timeout: 500
        count: "{{ quantity_instance }}"
        instance_tags:
          Name: "{{ tag_Name }}"
          Service: "{{ tag_Service }}"
        vpc_subnet_id: "{{ aws_subnet }}"
        region: "{{ aws_region }}"
        volumes:
          - device_name: /dev/xvda
            volume_size: "{{ root_size }}"
            delete_on_termination: true
        assign_public_ip: yes
      register: ec2
    - name: Add new instance to host group
      add_host: hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- hosts: launched
  vars:
    ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
  gather_facts: true
  user: ubuntu
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
To avoid adding ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem as a variable, use your ~/.ssh/config file and add the following:
IdentityFile ~/.ssh/ansible_ec2.pem
Remember the config file needs chmod 600.
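For example, scoped to the EC2 public DNS names so the key is only offered to those hosts (the host pattern below is an assumption based on the us-west-2 region used above):
# ~/.ssh/config
Host *.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/ansible_ec2.pem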
If you don't want to create the instances again, launch another playbook like this:
- hosts: tag_Name_Nginx
  vars:
    ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
  gather_facts: true
  user: ubuntu
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
Note how we target the specific tag_Name_Nginx group.
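You can confirm which hosts the dynamic inventory puts in that group before running anything, e.g. with the classic ec2.py inventory script:
ansible -i ec2.py tag_Name_Nginx --list-hosts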
All tags become groups in dynamic inventory, so you can specify the tag in the "hosts" parameter:
- name: Provision an EC2 node
  hosts: local
  connection: local
  gather_facts: False
  vars:
    instance_type: t2.micro
    security_group: somegroup
    #image: ami-a73264ce
    image: ami-9abea4fb
    region: us-west-2
    keypair: ansible_ec2
  tasks:
    - name: Step 1 Create a new AWS EC2 Ubuntu Instance
      local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
      register: ec2
    - name: Step 2 Add new instance to local host group
      local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
      with_items: ec2.instances
    - name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
      with_items: ec2.instances

- name: Step 5 Install nginx steps
  hosts: tag_Name_Nginx
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - motd
    - javaubuntu
    - apt-get
    - nginx
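If the instances already exist in the dynamic inventory, you can also leave the play's hosts pattern alone and restrict a run from the command line with --limit (the playbook name here is hypothetical):
ansible-playbook -i ec2.py site.yml --limit tag_Name_Nginx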
