Multi VM create from templates using an Ansible playbook

Problem Statement:
While trying to create a VM from a template, specifically a Windows OS template (it works fine with a Linux OS template), the VM is "powered off" after being created through the Ansible playbook.
Here is my playbook to create multiple VMs from a template. Please also help me make this playbook more efficient. Thanks in advance.
---
- name: Create multiple VMs asynchronously
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Clone the template
      vmware_guest:
        hostname: xxx.xxx.xx
        username: username
        password: password
        validate_certs: False
        name: "{{ item }}"
        template: Template-Win-2016
        cluster: cluster
        datacenter: Dev and QA DC
        folder: /
        state: poweredon
        hardware:
          memory_mb: 12288
          num_cpus: 4
        disk:
          - size_gb: 60
            type: thin
            autoselect_datastore: True
        wait_for_ip_address: yes
      with_items:
        - testvm001
        - testvm002
      register: vm_create
      async: 600
      poll: 0

    - name: Waiting for status
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      delay: 60
      retries: 60
      with_items: "{{ vm_create.results }}"
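On the efficiency side of the question, one small, hedged refinement is to keep the VM names in a play-level variable and loop over it, so adding a machine only means editing one list. This is a fragment, not a full playbook: vm_names is an assumed variable name, and the elided vmware_guest parameters are exactly the ones shown above.

  vars:
    vm_names:            # assumed variable name; add or remove machines here only
      - testvm001
      - testvm002
  tasks:
    - name: Clone the template
      vmware_guest:
        # ... same connection, template, hardware and disk parameters as above ...
        name: "{{ item }}"
      loop: "{{ vm_names }}"
      register: vm_create
      async: 600
      poll: 0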

Related

Ansible - vmware: how to find a guest on a vCenter with multiple hosts?

One of the parameters shared by the many Ansible community.vmware modules is 'hostname', which is the name of the ESXi server.
In my case, a guest could be on any one of multiple ESXi servers (8, for now), and a new server could also be added by the support team at any time.
Is there a way to find which ESXi server a guest is on? Or is it mandatory that I know this at the start?
I could keep a list of the ESXi servers, update it on demand, and loop over this list with the 'community.vmware.vmware_guest_find' module and "with_items", but I don't actually know how I would do this (iterate over the servers, changing the 'hostname', and stopping when I finally find the guest).
Any help?
I came up with the solution below. It requires having the list of ESXi hosts beforehand.
[...]
  vars:
    vcenter_hostnames:
      - vcenter01
      - vcenter02
      - ...
[...]
    - block:
        - name: Navigate through all vcenters looking for the guest
          community.vmware.vmware_guest_find:
            hostname: "{{ item }}"
            username: "{{ vcenter_username }}"
            password: "{{ vcenter_password }}"
            name: "{{ guest_name }}"
            validate_certs: no
          delegate_to: localhost
          register: guest_find_result
          with_items: "{{ vcenter_hostnames }}"
      rescue:
        - name: Do nothing, only to avoid raising a fail message
          meta: noop
      always:
        - name: Record which vcenter and folder the guest is in
          ansible.builtin.set_fact:
            guest_folder: "{{ item['folders'][0] }}"
            vcenter_hostname: "{{ item['item'] }}"
          with_items: "{{ guest_find_result['results'] }}"
          when: item['failed'] == false
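A variation on the same idea (a sketch only; it assumes the block above has already registered guest_find_result, and that successful vmware_guest_find items expose a 'folders' key while failed ones do not) filters the loop results once instead of looping with a when condition:

      always:
        - name: Record which vcenter and folder the guest is in (single pass over the results)
          vars:
            hits: "{{ guest_find_result.results | selectattr('folders', 'defined') | list }}"
          ansible.builtin.set_fact:
            guest_folder: "{{ hits[0]['folders'][0] }}"
            vcenter_hostname: "{{ hits[0]['item'] }}"
          when: hits | length > 0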

Ansible: how to restart a service one after another on different hosts

Hi everyone!
Please help.
I have 3 servers. Each of them has one systemd service. I need to restart these services one by one.
So after I restart the service on host 1, and the service is up (I can check the TCP port), I need to restart the service on host 2, and so on.
How can I do this with Ansible?
This is the playbook I have now:
---
- name: Install business service
  hosts: business
  vars:
    app_name: "e-service"
    app: "{{ app_name }}-{{ tag }}.war"
    app_service: "{{ app_name }}.service"
    app_bootstrap: "{{ app_name }}_bootstrap.yml"
    app_folder: "{{ eps_client_dir }}/{{ app_name }}"
    archive_folder: "{{ app_folder }}/archives/arch_{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}_{{ ansible_date_time.minute }}"
    app_distrib_dir: "{{ eps_distrib_dir }}/{{ app_name }}"
    app_dependencies: "{{ app_distrib_dir }}/dependencies.tgz"
  tasks:
    - name: Copy app {{ app }} to {{ app_folder }}
      copy:
        src: "{{ app_distrib_dir }}/{{ app }}"
        dest: "{{ app_folder }}/{{ app }}"
        group: ps_group
        owner: ps
        mode: 0644
      notify:
        - restart app

    - name: Copy service settings to /etc/systemd/system/{{ app_service }}
      template:
        src: "{{ app_distrib_dir }}/{{ app_service }}"
        dest: /etc/systemd/system/{{ app_service }}
        mode: 0644
      notify:
        - restart app

    - name: Start service {{ app }}
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: started
        enabled: true

  handlers:
    - name: restart app
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: restarted
        enabled: true
and all the services restart at the same time.
Try serial and max_fail_percentage. The max_fail_percentage value is a percentage of the total number of hosts; if server 1 fails, the remaining servers will not run:
---
- name: Install eps-business service
  hosts: business
  serial: 1
  max_fail_percentage: 10
Try with
---
- name: Install eps-business service
  hosts: business
  serial: 1
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword
https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
Create a script that restarts your service, then add a loop that checks whether the service is up or not; once the restart has completed successfully, based on the status it returns, you can move on to the next service.
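Putting the two suggestions together, a rolling restart that waits for the service port before moving on might look like the sketch below (the port number 8443 and the five-minute timeout are assumptions, not values from the original question):

---
- name: Rolling restart, one host at a time
  hosts: business
  serial: 1
  vars:
    app_name: "e-service"
    app_service: "{{ app_name }}.service"
  tasks:
    - name: Restart the service on this host
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: restarted

    - name: Block until the service answers on its TCP port before the next host starts
      wait_for:
        port: 8443      # assumed port; replace with the service's real listen port
        delay: 5
        timeout: 300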

How to change a VMware network adapter with Ansible

I'm using Ansible 2.9.2, and I need to replace the network adapter on a specific VMware VM.
In my vCenter VM settings I see:
Networks: Vlan_12
My playbook doesn't see that network name.
tasks:
  - name: Changing network adapter
    vmware_guest_network:
      datacenter: "{{ datacenter }}"
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      folder: "{{ folder }}"
      cluster: "{{ cluster }}"
      validate_certs: no
      name: test
      networks:
        - name: "Vlan_12"
          vlan: "Vlan_12"
          connected: false
          state: absent
    register: output
I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Network 'Vlan_12' does not exist."}
I'm trying to replace Vlan_12 with another network adapter named Vlan_13, so I first tried to delete the existing network adapter. The Ansible docs have only very limited examples.
Thanks.
You don't have to power off the machine while adding/removing networks.
You can remove/add a NIC on the fly, at least on Linux VMs; I'm not sure about Windows VMs.
Changing networks live works just fine. All you need is state: present, the label of your current interface (almost certainly 'Network adapter 1', which is the default for the first network interface), and the name of the port group that you'd like to connect to this interface.
Here's the playbook I've been using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: migrate network
      vmware_guest_network:
        hostname: '{{ vcenter_hostname }}'
        username: '{{ vcenter_username }}'
        password: '{{ vcenter_password }}'
        datacenter: '{{ datacenter }}'
        validate_certs: False
        name: '{{ vm_hostname }}'
        gather_network_info: False
        networks:
          - state: present
            label: "Network adapter 1"
            name: '{{ new_net_name }}'
      delegate_to: localhost
You need the specific VMware machine to be powered off so you can change the network adapter:
tasks:
  - name: Changing network adapter
    vmware_guest_network:
      datacenter: "{{ datacenter }}"
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      folder: "{{ folder }}"
      cluster: "{{ cluster }}"
      validate_certs: no
      name: test
      networks:
        - name: "Vlan_12"
          label: "Network adapter 1"
          connected: False
          state: absent
        - label: "Network adapter 1"
          state: new
          connected: True
          name: Vlan_13
This playbook deletes the present network adapter and then adds the new adapter instead. I couldn't find a way to change it in place, only to delete and add.
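Combining the two answers (a sketch only; it assumes, as the first answer does, that the existing NIC is labelled "Network adapter 1" and that a Vlan_13 port group already exists), the adapter could instead be re-pointed in a single networks entry while the VM stays powered on:

tasks:
  - name: Move the existing NIC from Vlan_12 to Vlan_13
    vmware_guest_network:
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      datacenter: "{{ datacenter }}"
      validate_certs: no
      name: test
      networks:
        - state: present
          label: "Network adapter 1"   # existing adapter to re-point
          name: "Vlan_13"              # target port group
    delegate_to: localhost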

Ansible dynamic hosts refuse to use custom interpreter

I'm trying to provision new machines on AWS with the ec2 module
and to update my hosts file locally so that the following tasks already use that hosts file.
Provisioning isn't an issue, and neither is the creation of the local hosts file:
    - name: Provision a set of instances
      ec2:
        key_name: AWS
        region: eu-west-1
        group: default
        instance_type: t2.micro
        image: ami-6f587e1c # For Ubuntu 14.04 LTS use ami-b9b394ca # For Ubuntu 16.04 LTS use ami-6f587e1c
        wait: yes
        volumes:
          - device_name: /dev/xvda
            volume_type: gp2
            volume_size: 50
        count: 2
        vpc_subnet_id: subnet-xxxxxxxx
        assign_public_ip: yes
        instance_tags:
          Name: Ansible
      register: ec2

    - name: Add all instance private IPs to host group
      add_host:
        hostname: "{{ item.private_ip }}"
        ansible_ssh_user: ubuntu
        groups: aws
      with_items: "{{ ec2.instances }}"

    - local_action: file path=./hosts state=absent
      ignore_errors: yes

    - local_action: file path=./hosts state=touch

    - local_action: lineinfile line="[all]" insertafter=EOF dest=./hosts

    - local_action: lineinfile line="{{ item.private_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./hosts
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.private_ip }}"
        port: 22
        delay: 60
        timeout: 600
        state: started
      with_items: "{{ ec2.instances }}"

    - name: refreshing inventory cache
      meta: refresh_inventory

- hosts: all
  gather_facts: False
  tasks:
    - command: hostname -i
However, the next task, which is a simple print of hostname -i (just for the test),
fails because it can't find Python 2.7 on Ubuntu 16.04 LTS (only python3 is there).
For that, in my dynamic hosts file I add the following line:
ansible_python_interpreter=/usr/bin/python3
But it seems that Ansible ignores it and goes straight to Python 2.7, which is missing.
I've tried to reload the inventory file with
meta: refresh_inventory
but that didn't help either.
What am I doing wrong?
I'm not sure why the refresh did not work, but I suggest setting it in the add_host task; add_host accepts any variable.
    - name: Add all instance private IPs to host group
      add_host:
        hostname: "{{ item.private_ip }}"
        ansible_ssh_user: ubuntu
        groups: aws
        ansible_python_interpreter: "/usr/bin/python3"
      with_items: "{{ ec2.instances }}"
Also, I find it useful to debug with this task:
- debug: var=hostvars[inventory_hostname]
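To confirm the interpreter actually took effect in the second play, a quick check could look like this (a sketch only; it just prints what each host resolved before running the original command):

- hosts: all
  gather_facts: False
  tasks:
    - name: Show which Python interpreter Ansible will use for this host
      debug:
        var: ansible_python_interpreter

    - command: hostname -i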

Ansible - How to loop and get attributes of new EC2 instances

I'm creating a set of new EC2 instances using this play
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        assign_public_ip: yes
        aws_access_key: XXXXXXXXXX
        aws_secret_key: XXXXXXXXXX
        group_id: XXXXXXXXXX
        instance_type: t2.micro
        image: ami-32a85152
        vpc_subnet_id: XXXXXXXXXX
        region: XXXXXXXXXX
        user_data: "{{ lookup('file', '/SOME_PATH/cloud-config.yaml') }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: Demo
        instance_tags:
          Name: Demo
      register: ec2

    - name: Add new CoreOS machines to coreos-launched group
      add_host: hostname="{{ item.public_ip }}" groups=coreos-launched
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host="{{ item.public_dns_name }}" port=22 delay=60 timeout=320
      with_items: "{{ ec2.instances }}"
Now, I need to create an SSL/TLS certificate for each of those new machines. To do so, I require their private IPs. Yet, I don't know how to access the "{{ ec2.instances }}" I registered in the previous play.
In the same playbook, I tried to do something like this:
- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Find the current machine's IP address
      command: echo "{{ item.private_ip }}" > /tmp/private_ip
      with_items: "{{ ec2.instances }}"
      sudo: yes
But without any success. Is there a way to use the "{{ ec2.instances }}" items inside the same playbook but in a different play?
-- EDIT --
Following Theo's advice, I managed to get the instance attributes using:

- name: Gather current facts
  action: ec2_facts
  register: ec2_facts

- name: Use the current facts
  command: echo "{{ ec2_facts.ansible_facts.ansible_ec2_local_ipv4 }}"
  with_items: "{{ ec2_facts }}"
The best way to learn about a return structure (short of the documentation) is to wrap it in a debug task.
- name: Debug ec2 variable
  debug: var=ec2.instances
From there, follow the structure to get the variable you seek.
Also (following the idempotence model), you can use the ec2_remote_facts module to get the facts from the instances and call them in future plays/tasks as well.
See Variables -- Ansible Documentation for more info about calling registered variables.
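Another route (a sketch only, building on the add_host task already in the question): add_host accepts arbitrary variables, so the private IP can simply ride along into the new group and be read as an ordinary host variable in the later play.

    - name: Add new CoreOS machines to coreos-launched group, carrying the private IP along
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: coreos-launched
        private_ip: "{{ item.private_ip }}"
      with_items: "{{ ec2.instances }}"

- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Write this machine's private IP to a file for the certificate step
      copy:
        content: "{{ private_ip }}"
        dest: /tmp/private_ip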
