I have an Amazon console with multiple instances running. All of the instances have tags, for example:
- tag Name: jenkins
- tag Name: Nginx
- tag Name: Artifactory
I want to run an Ansible playbook against the hosts that are tagged as Nginx.
I use dynamic inventory but how do I limit where the playbook is run?
My playbook looks like this:
- name: Provision an EC2 node
hosts: local
connection: local
gather_facts: False
vars:
instance_type: t2.micro
security_group: somegroup
#image: ami-a73264ce
image: ami-9abea4fb
region: us-west-2
keypair: ansible_ec2
tasks:
- name: Step 1 Create a new AWS EC2 Ubuntu Instance
local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type}} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
register: ec2
- name: Step 2 Add new instance to local host group
local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
with_items: "{{ ec2.instances }}"
- name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
with_items: "{{ ec2.instances }}"
- name: Step 5 Install nginx steps
hosts: launched
become: yes
remote_user: ubuntu
gather_facts: True
roles:
- motd
- javaubuntu
- apt-get
- nginx
Try with the following. First, roles/create-instance/defaults/main.yml:
quantity_instance: 1
key_pem: "ansible_ec2"
instance_type: "t2.micro"
image_base: "ami-9abea4fb"
sec_group_id: "somegroup"
tag_Name: "Nginx"
tag_Service: "reverseproxy"
aws_region: "us-west-2"
aws_subnet: "somesubnet"
root_size: "20"
---
- hosts: 127.0.0.1
connection: local
gather_facts: False
tasks:
- name: Adding Vars
include_vars: roles/create-instance/defaults/main.yml
- name: run instance
ec2:
key_name: "{{ key_pem }}"
instance_type: "{{ instance_type }}"
image: "{{ image_base }}"
wait: yes
group_id: "{{ sec_group_id }}"
wait_timeout: 500
count: "{{ quantity_instance }}"
instance_tags:
Name: "{{ tag_Name }}"
Service: "{{ tag_Service }}"
vpc_subnet_id: "{{ aws_subnet }}"
region: "{{ aws_region }}"
volumes:
- device_name: /dev/xvda
volume_size: "{{ root_size }}"
delete_on_termination: true
assign_public_ip: yes
register: ec2
- name: Add new instance to host group
add_host: hostname={{ item.public_ip }} groupname=launched
with_items: "{{ ec2.instances }}"
- name: Wait for SSH to come up
wait_for: host={{ item.public_ip }} port=22 delay=60 timeout=320 state=started
with_items: "{{ ec2.instances }}"
- hosts: launched
vars:
ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
gather_facts: true
user: ubuntu
become: yes
become_method: sudo
become_user: root
roles:
- motd
- javaubuntu
- apt-get
- nginx
To avoid adding ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem as a variable, use the ~/.ssh/config file and add the following:
IdentityFile ~/.ssh/ansible_ec2.pem
Remember that the config file needs chmod 600.
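For example, a minimal entry scoped to your EC2 hosts could look like the following (the Host pattern is an assumption; adjust it to the hostnames your instances get):
Host *.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/ansible_ec2.pem
    StrictHostKeyChecking no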
If you don't want to create the instances again, launch another playbook like this:
- hosts: tag_Name_Nginx
vars:
ansible_ssh_private_key_file: ~/.ssh/ansible_ec2.pem
gather_facts: true
user: ubuntu
become: yes
become_method: sudo
become_user: root
roles:
- motd
- javaubuntu
- apt-get
- nginx
Note how we target the specific group tag_Name_Nginx.
All tags become groups in the dynamic inventory, so you can specify the tag group in the "hosts" parameter:
- name: Provision an EC2 node
hosts: local
connection: local
gather_facts: False
vars:
instance_type: t2.micro
security_group: somegroup
#image: ami-a73264ce
image: ami-9abea4fb
region: us-west-2
keypair: ansible_ec2
tasks:
- name: Step 1 Create a new AWS EC2 Ubuntu Instance
local_action: ec2 instance_tags="Name=nginx" group={{ security_group }} instance_type={{ instance_type}} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
register: ec2
- name: Step 2 Add new instance to local host group
local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
with_items: "{{ ec2.instances }}"
- name: Step 3 Wait for SSH to come up delay 180 sec timeout 600 sec
local_action: wait_for host={{ item.public_dns_name }} port=22 delay=180 timeout=600 state=started
with_items: "{{ ec2.instances }}"
- name: Step 5 Install nginx steps
hosts: tag_Name_Nginx
become: yes
remote_user: ubuntu
gather_facts: True
roles:
- motd
- javaubuntu
- apt-get
- nginx
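Those tag_* groups come straight from the dynamic inventory. With the classic ec2.py script they are created automatically; if you use the newer amazon.aws.aws_ec2 inventory plugin instead, a keyed_groups entry produces the same group names. A minimal sketch (the file name aws_ec2.yml is an assumption):
# aws_ec2.yml - dynamic inventory config (sketch)
plugin: amazon.aws.aws_ec2
regions:
  - us-west-2
keyed_groups:
  # turn instance tags into groups such as tag_Name_Nginx and tag_Service_reverseproxy
  - prefix: tag
    key: tags
Run the play against it with: ansible-playbook -i aws_ec2.yml playbook.yml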
Related
I am setting up a VMware job in Ansible Tower to snapshot a list of VMs; ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX, and I use either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
vmware:
host: '{{ lookup("env", "VMWARE_HOST") }}'
username: '{{ lookup("env", "VMWARE_USER") }}'
password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
vcenter_datacenter: "dc1"
vcenter_validate_certs: false
vm_name: "EVE-NG"
vm_template: "Win2019-Template"
vm_folder: "Network Labs"
My playbook:
---
- name: vm snapshot
hosts: localhost
become: false
gather_facts: false
collections:
- community.vmware
pre_tasks:
- include_vars: vars.yml
tasks:
- name: create snapshot
vmware_guest_snapshot:
# hostname: "{{ host }}"
# username: "{{ user }}"
# password: "{{ password }}"
datacenter: "{{ vcenter_datacenter }}"
validate_certs: False
name: "{{ vm_name }}"
state: present
snapshot_name: "Ansible Managed Snapshot"
folder: "{{ vm_folder }}"
description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
hosts: all
become: false
gather_facts: false
collections:
- community.vmware
pre_tasks:
- include_vars: vars.yml
tasks:
- name: create snapshot
vmware_guest_snapshot:
datacenter: "{{ vcenter_datacenter }}"
validate_certs: False
name: "{{ inventory_hostname }}"
state: present
snapshot_name: "Ansible Managed Snapshot"
folder: "{{ vm_folder }}"
description: "This snapshot is created by Ansible Playbook"
delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
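If you do want one snapshot name per guest and per run, a simple option (a sketch, not part of the answer above) is to template the host name and a timestamp into snapshot_name; a pipe lookup is used here because gather_facts is false, so ansible_date_time is not available:
snapshot_name: "ansible-{{ inventory_hostname }}-{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"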
When I run my playbook I see the following error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: list object has no element 0\n\nThe error appears to be in '/etc/ansible/loyalty/tasks/create_ec2_stage.yaml': line 63, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n    register: ec2_metadata\n  - name: Parse JSON\n    ^ here\n"}
I run the playbook this way:
/usr/bin/ansible-playbook -i hosts --extra-vars "CARRIER=xx" tasks/create_ec2_stage.yaml
Here is my playbook:
---
- name: New EC2 instances
hosts: localhost
gather_facts: no
vars_files:
- /etc/ansible/loyalty/vars/vars.yaml
tasks:
- name: Run EC2 Instances
amazon.aws.ec2_instance:
name: "new-{{ CARRIER }}.test"
aws_secret_key: "{{ ec2_secret_key }}"
aws_access_key: "{{ ec2_access_key }}"
region: us-east-1
key_name: Kiu
instance_type: t2.medium
image_id: xxxxxxxxxxxxx
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/xvda
ebs:
volume_type: gp3
volume_size: 20
delete_on_termination: yes
vpc_subnet_id: xxxxxxxxxxxx
network:
assign_public_ip: no
security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
tags:
Enviroment: TEST
count: 1
- name: Pause Few Seconds
pause:
seconds: 20
prompt: "Please wait"
- name: Get Information for EC2 Instances
ec2_instance_info:
region: us-east-1
filters:
"tag:Name": new-{{ CARRIER }}.test
register: ec2_metadata
- name: Parse JSON
set_fact:
ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
If I create a slightly smaller playbook to query the private IP address of an existing instance, I don’t see any error.
---
- name: New EC2 Instances
hosts: localhost
gather_facts: no
vars_files:
- /etc/ansible/loyalty/vars/vars.yaml
vars:
pwd_alias: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
CARRIER_UPPERCASE: "{{ CARRIER | upper }}"
tasks:
- set_fact:
MY_PASS: "{{ pwd_alias }}"
- name: Get EC2 info
ec2_instance_info:
region: us-east-1
filters:
"tag:Name": new-{{ CARRIER }}.stage
register: ec2_metadata
- name: Parsing JSON
set_fact:
ip_addr: "{{ ec2_metadata.instances[0].network_interfaces[0].private_ip_address }}"
- name: Show Result
debug:
msg: "{{ ip_addr }}"
Results in
TASK [Show Result] ******************************************************
ok: [localhost] => {
"msg": "172.31.x.x"
}
I am creating an EC2 instance on Amazon and querying its private IP so that I can use it with other services such as Route 53 and Cloudflare. I have not added those other tasks yet, because the error occurs in the set_fact task.
You don't have to query AWS with the ec2_instance_info module, since the amazon.aws.ec2_instance module already returns the newly created EC2 instance when the wait parameter is set to yes, as it is in your case.
You just need to register the return value of that task.
So, given the two tasks:
- name: Run EC2 Instances
amazon.aws.ec2_instance:
name: "new-{{ CARRIER }}.test"
aws_secret_key: "{{ ec2_secret_key }}"
aws_access_key: "{{ ec2_access_key }}"
region: us-east-1
key_name: Kiu
instance_type: t2.medium
image_id: xxxxxxxxxxxxx
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/xvda
ebs:
volume_type: gp3
volume_size: 20
delete_on_termination: yes
vpc_subnet_id: xxxxxxxxxxxx
network:
assign_public_ip: no
security_groups: ["xxxxxxxxxx", "xxxxxxxxxxx", "xxxxxxxxxxxxxx"]
tags:
Enviroment: TEST
count: 1
register: ec2
- set_fact:
ip_addr: "{{ ec2.instances[0].private_ip_address }}"
You should have the private IP address of the newly created EC2 instance.
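From there the registered fact can be consumed directly by the next task. As a hedged sketch of what the question describes (the zone and record names are invented, and depending on your collection versions the module may live in community.aws rather than amazon.aws):
- name: Create a Route 53 A record for the new instance (sketch)
  amazon.aws.route53:
    state: present
    zone: example.internal            # hypothetical private hosted zone
    record: "new-{{ CARRIER }}.example.internal"
    type: A
    ttl: 300
    value: "{{ ip_addr }}"
    overwrite: true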
I have a lot of inventories in Ansible; I am not looking for a dynamic inventory solution.
When I create an instance in AWS from Ansible, I need to run some tasks inside that new server, but I don't know the best way to do it.
tasks:
- ec2:
aws_secret_key: "{{ ec2_secret_key }}"
aws_access_key: "{{ ec2_access_key }}"
region: us-west-2
key_name: xxxxxxxxxx
instance_type: t2.medium
image: ami-xxxxxxxxxxxxxxxxxx
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/xvda
volume_type: gp3
volume_size: 20
delete_on_termination: yes
vpc_subnet_id: subnet-xxxxxxxxxxxx
assign_public_ip: no
instance_tags:
Name: new-instances
count: 1
- name: Look up the IP of the newly created instance
ec2_instance_facts:
filters:
"tag:Name": new-instances
register: ec2_instance_info
- set_fact:
msg: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }} "
- debug: var=msg
Right now I just display the IP of the new instance, but I need to create a new hosts file and insert the IP of the new instance under an alias, because I have to run several tasks on it after creating it.
Any help doing this?
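One common pattern, shown earlier in this thread, is to skip the hosts file entirely and push the new IP into an in-memory group with add_host, then run a second play against that group. The following is only a sketch under assumptions (the key path, remote user, and group name are invented):
    - name: Add the new instance to an in-memory group
      add_host:
        hostname: "{{ item }}"
        groupname: just_created
        ansible_ssh_private_key_file: ~/.ssh/your_key.pem   # hypothetical key path
      loop: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"

- hosts: just_created
  remote_user: ubuntu        # assumption: an Ubuntu AMI
  become: yes
  tasks:
    - name: Example follow-up task
      command: uptime
If the IP really must land in a file, the lineinfile approach from the first question in this thread works as well.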
I use Ansible to take snapshots of VMs in VMware vCenter.
If there is only one vCenter it works fine, but what if there are many vCenters?
How do I handle multiple VMware vCenters in the current role?
My current example for one vCenter:
# cat snapshot.yml
- name: ROLE_update
hosts: all
gather_facts: False
become: yes
become_method: su
become_user: root
vars_files:
- /ansible/vars/privileges.txt
roles:
- role: snapshot
# cat tasks/main.yml
- block:
- name: "Create vmware snapshot of {{ inventory_hostname }}"
vmware_guest_snapshot:
hostname: "{{vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{datacenter_name}}"
folder: find_folder.yml
name: "{{inventory_hostname }}"
state: present
snapshot_name: snap check
validate_certs: False
delegate_to: localhost
register: vm_facts
# cat find_folder.yml
- name: Find folder path of an existing virtual machine
hosts: all
gather_facts: False
vars:
ansible_python_interpreter: "/usr/bin/env python3"
tasks:
- name: "Find folder for VM - {{ vm_name }}"
vmware_guest_find:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: False
name: "{{ vm_name }}"
delegate_to: localhost
register: vm_facts
# cat vars/main.yml variables declared
vcenter_hostname: <secret>
vcenter_username: <secret>
vcenter_password: <secret>
Thanks!
Is having the variables for the different vCenters in different ".yml" files an option? If so, you could easily pull those variables in at runtime, for example:
ansible-playbook your_playbook.yml --extra-vars "@your_vcenter_1.yml"
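A hypothetical your_vcenter_1.yml would then just carry that vCenter's connection details, e.g. (all values below are invented):
vcenter_hostname: vcenter1.example.com
vcenter_username: svc_ansible
vcenter_password: "{{ vault_vcenter1_password }}"   # assumed to come from an Ansible Vault file
datacenter_name: dc1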
There are other ways I can think of, like using tags in your playbook for each task and invoking the playbook with a specific tag. On each task with a different tag, on the import_tasks, you could define the new variables like this:
tasks:
- import_tasks: your_role.yml
vars:
vcenter_hostname: your_vcenter_1
tags:
- vcenter1
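You would then run only the tasks for that vCenter by passing the matching tag, for example:
ansible-playbook your_playbook.yml --tags vcenter1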
I'm pretty new to Ansible.
I get:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).", "unreachable": true}
at the last step when I try to run this Ansible playbook:
---
- name: find EC2 instaces
hosts: localhost
connection: local
gather_facts: false
vars:
ansible_python_interpreter: "/usr/bin/python3"
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
aws_region: "us-west-2"
vpc_subnet_id: "subnet-xxx"
ec2_filter:
"tag:Name": "airflow-test"
"tag:Team": 'data-science'
"tag:Environment": 'staging'
"instance-state-name": ["stopped", "running"]
vars_files:
- settings/vars.yml
tasks:
- name: Find EC2 Facts
ec2_instance_facts:
region: "{{ aws_region }}"
filters:
"{{ ec2_filter }}"
register: ec2
- name: Add new instance to host group
add_host:
hostname: "{{ item.public_dns_name }}"
groupname: launched
loop: "{{ ec2.instances }}"
- name: Wait for the instances to boot by checking the ssh port
wait_for:
host: "{{ item.public_dns_name }}"
port: 22
sleep: 10
timeout: 120
state: started
loop: "{{ ec2.instances }}"
- name: install required packages on instances
hosts: launched
become: True
gather_facts: True
vars:
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
tasks:
- name: ls
command: ls
I know I need to point Ansible to the .pem file. I tried adding ansible_ssh_private_key_file to the inventory file, but since the nodes are dynamic, I am not sure how to do it.
Adding ansible_ssh_user solved the problem:
- name: install required packages on instances
hosts: launched
become: True
gather_facts: True
vars:
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
ansible_ssh_user: "ec2-user"
tasks:
- name: ls
command: ls
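If the .pem file still needs to be referenced explicitly (instead of relying on ssh-agent or ~/.ssh/config), it can be attached to the dynamically added hosts in the first play. A sketch, where the key path is an assumption:
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
        ansible_ssh_user: ec2-user
        ansible_ssh_private_key_file: ~/.ssh/your_key.pem   # hypothetical path to the key pair's .pem
      loop: "{{ ec2.instances }}"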