I am using Ansible to deploy a VM. My playbook looks like this:
- name: Create a virtual machine on given ESXi hostname
  # This is the VMware guest module provided by Ansible
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    folder: /DC1/vm/
    name: test_vm_0001
    state: poweredon
    guest_id: centos64Guest
    # This is the hostname of the particular ESXi server on which the user wants the VM to be deployed
    esxi_hostname: "{{ esxi_hostname }}"
    # This is the hardware and network information
    disk:
    - size_gb: 10
      type: thin
      datastore: datastore1
    hardware:
      memory_mb: 512
      num_cpus: 4
      scsi: paravirtual
    networks:
    - name: VM Network
      mac: aa:bb:dd:aa:00:14
      ip: 10.10.10.100
      netmask: 255.255.255.0
      device_type: vmxnet3
    cdrom:
      type: iso
      iso_path: "[datastore1] livecd.iso"
    wait_for_ip_address: yes
  delegate_to: localhost
  register: deploy_vm
But how should I provide the boot-time input (i.e. username, pressing Enter, password), i.e. drive the boot process? How should I automate this step after the VM is deployed on the ESXi server?
I know we can use a template for this, but is there any way to provision it through the playbook?
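One possible direction, sketched rather than definitive: once VMware Tools is running inside the guest (which a live CD may not provide), the vmware_vm_shell module can execute commands in the guest OS, which sidesteps scripting the console login entirely. The guest credentials and the command below are illustrative assumptions, not values from the question:

- name: Run a post-deploy command inside the guest via VMware Tools
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: DC1
    folder: /DC1/vm
    vm_id: test_vm_0001
    vm_username: root                   # guest OS login, not vCenter's (assumed)
    vm_password: "{{ guest_password }}" # hypothetical variable holding the guest password
    vm_shell: /bin/bash
    vm_shell_args: "-c 'echo deployed > /tmp/deploy.log'"
    wait_for_process: yes
  delegate_to: localhost

For a truly interactive boot prompt (pressing Enter, typing a password at the console), neither vmware_guest nor vmware_vm_shell helps; that usually calls for kickstart/preseed answers baked into the ISO instead.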
Problem statement:
While trying to create a VM from a template, specifically a Windows OS template (it works fine for a Linux OS template), the VM is "powered off" after being created through the Ansible playbook.
Here is my playbook to create multiple VMs from a template; please help me make this playbook more efficient. Thanks in advance.
---
- name: Create multiple VMs asynchronously
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - name: Clone the template
    vmware_guest:
      hostname: xxx.xxx.xx
      username: username
      password: password
      validate_certs: False
      name: "{{ item }}"
      template: Template-Win-2016
      cluster: cluster
      datacenter: Dev and QA DC
      folder: /
      state: poweredon
      hardware:
        memory_mb: 12288
        num_cpus: 4
      disk:
      - size_gb: 60
        type: thin
        autoselect_datastore: True
      wait_for_ip_address: yes
    with_items:
    - testvm001
    - testvm002
    register: vm_create
    async: 600
    poll: 0

  - name: Waiting for status
    async_status:
      jid: "{{ item.ansible_job_id }}"
    register: job_result
    until: job_result.finished
    delay: 60
    retries: 60
    with_items: "{{ vm_create.results }}"
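As for the Windows clones ending up powered off: one workaround, sketched here with the same connection values as above (it enforces the desired power state rather than explaining why the Windows template behaves differently), is a final task that powers on each VM after the async jobs finish:

- name: Ensure every cloned VM is powered on
  vmware_guest:
    hostname: xxx.xxx.xx
    username: username
    password: password
    validate_certs: False
    datacenter: Dev and QA DC
    name: "{{ item }}"
    state: poweredon
  with_items:
  - testvm001
  - testvm002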
I'm using Ansible 2.9.2, and I need to replace the network adapter on a specific VMware VM.
In my vCenter VM settings I see:
Networks: Vlan_12
My playbook doesn't see that network name.
tasks:
- name: Changing network adapter
  vmware_guest_network:
    datacenter: "{{ datacenter }}"
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    folder: "{{ folder }}"
    cluster: "{{ cluster }}"
    validate_certs: no
    name: test
    networks:
    - name: "Vlan_12"
      vlan: "Vlan_12"
      connected: false
      state: absent
  register: output
I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Network 'Vlan_12' does not exist."}
I'm trying to replace Vlan_12 with another network adapter named Vlan_13, so I tried first to delete the existing network adapter. The Ansible docs have only very limited examples.
Thanks.
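Before deleting anything, it can help to see which adapters and port groups the module can actually find. A read-only sketch using the same module's gather_network_info option and the connection values above (the registered variable name is illustrative):

- name: List the network adapters the module sees
  vmware_guest_network:
    datacenter: "{{ datacenter }}"
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
    name: test
    gather_network_info: true
  register: net_info

- debug:
    var: net_info

The output shows the adapter labels and network names as vCenter reports them, which is useful when a name like Vlan_12 is rejected.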
You don't have to power off the machine while adding/removing networks.
You can remove/add a NIC on the fly, at least on Linux VMs; I'm not sure about Windows VMs.
Changing networks live works just fine. All you need is state: present, the label of your current interface (almost certainly 'Network adapter 1', which is the default for the first network interface), and the name of the port group that you'd like to connect to this interface.
Here's the playbook I've been using:
---
- hosts: localhost
  gather_facts: no
  tasks:
  - name: migrate network
    vmware_guest_network:
      hostname: '{{ vcenter_hostname }}'
      username: '{{ vcenter_username }}'
      password: '{{ vcenter_password }}'
      datacenter: '{{ datacenter }}'
      validate_certs: False
      name: '{{ vm_hostname }}'
      gather_network_info: False
      networks:
      - state: present
        label: "Network adapter 1"
        name: '{{ new_net_name }}'
    delegate_to: localhost
You need the specific VMware machine to be powered off so you can change the network adapter:
tasks:
- name: Changing network adapter
  vmware_guest_network:
    datacenter: "{{ datacenter }}"
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    folder: "{{ folder }}"
    cluster: "{{ cluster }}"
    validate_certs: no
    name: test
    networks:
    - name: "Vlan_12"
      label: "Network adapter 1"
      connected: False
      state: absent
    - label: "Network adapter 1"
      state: new
      connected: True
      name: Vlan_13
This playbook deletes the present network adapter and then adds the new adapter instead. I couldn't find a way to change an adapter in place, only to delete and add.
I know we can do this in PowerCLI, but I would like to know if there is any other method to register a VM in vCenter, such as the vCenter API or Ansible.
For migrating a machine from vCenter A to vCenter B: my vRO does not have access to vCenter B, but I need to register the machine in vCenter B. It would be easy if I could use a REST API or an Ansible module to do it. Or is there a way I can use VMware Converter with vRO?
Recently a module was added to Ansible that can do just that: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_guest_register_operation.py
Some example code:
- name: Register VM to inventory
  vmware_guest_register_operation:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    folder: "/vm"
    esxi_hostname: "{{ esxi_hostname }}"
    name: "{{ vm_name }}"
    template: no
    path: "[datastore1] vm/vm.vmx"
    state: present
I am trying to create a new Windows host on Azure, add it to Ansible's in-memory inventory, and run a play against it. However, it seems Ansible is trying to use SSH to connect to my Windows host.
Below is my playbook:
- name: Windows VM Playbook
  hosts: localhost
  vars:
    nicName: "Blue-xxxx"
    vmName: "Blue-xxxx"
    vmPubIp: "51.141.x.x"
  tasks:
  - name: Create Windows VM
    azure_rm_virtualmachine:
      resource_group: AnsibleVMxxx
      name: "{{ vmName }}"
      vm_size: Standard_B2ms
      storage_account: vmstoragedisksxxx
      admin_username: xxxxxxxx
      admin_password: xxxxxxxx
      network_interface_names: "{{ nicName }}"
      managed_disk_type: Standard_LRS
      data_disks:
      - lun: 0
        disk_size_gb: 64
        managed_disk_type: Standard_LRS
        storage_container_name: vhd
        storage_account_name: AnsibleVMxxxxtorageaccount
      os_type: Windows
      image:
        offer: "WindowsServer"
        publisher: MicrosoftWindowsServer
        sku: '2012-R2-Datacenter'
        version: latest
      location: 'uk west'
  - name: Custom Script Extension
    azure_rm_deployment:
      state: present
      location: 'uk west'
      resource_group_name: 'AnsibleVMxxxx'
      template: "{{ lookup('file', '/etc/ansible/playbooks/ConfigureAnsibleForPowershellRemoting.json') | from_json }}"
      deployment_mode: incremental
      parameters:
        vmName:
          value: "{{ vmName }}"
  - name: Add machine to in-memory inventory
    add_host:
      name: 51.141.x.x
      groups: webservers
      ansible_user: adminUser
      ansible_password: xxxxxxxx
      ansible_port: 5986
      ansible_connection: winrm
      ansible_winrm_server_cert_validation: ignore
      ansible_winrm_scheme: https

- name: Ping
  hosts: webservers
  gather_facts: false
  connection: local
  tasks:
  - win_ping:
I also have a webserver.yml file under the /playbooks/group_vars folder with the following content:
ansible_user: adminUser
ansible_password: xxxxxxxx
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
Can anyone please advise me on what I am doing wrong here?
Figured it out.
The problem was occurring because I had hosts: localhost at the beginning of the playbook. I had to move the plays relying on the in-memory inventory out of the localhost block, as sketched below.
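A minimal sketch of the corrected shape, reusing the names from the question: the provisioning and add_host tasks stay in the localhost play, and the webservers play gets its own top-level block without connection: local, so the winrm connection variables actually take effect:

- name: Provision on localhost
  hosts: localhost
  tasks:
  - name: Add machine to in-memory inventory
    add_host:
      name: 51.141.x.x
      groups: webservers
      ansible_user: adminUser
      ansible_password: xxxxxxxx
      ansible_port: 5986
      ansible_connection: winrm
      ansible_winrm_server_cert_validation: ignore
      ansible_winrm_scheme: https

- name: Ping the new Windows host
  hosts: webservers
  gather_facts: false
  tasks:
  - win_ping: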
I'm trying to provision new machines on AWS with the ec2 module and to update my local hosts file so that the subsequent tasks can already use it.
Provisioning isn't an issue, and neither is the creation of the local hosts file:
- name: Provision a set of instances
  ec2:
    key_name: AWS
    region: eu-west-1
    group: default
    instance_type: t2.micro
    image: ami-6f587e1c # For Ubuntu 14.04 LTS use ami-b9b394ca; for Ubuntu 16.04 LTS use ami-6f587e1c
    wait: yes
    volumes:
    - device_name: /dev/xvda
      volume_type: gp2
      volume_size: 50
    count: 2
    vpc_subnet_id: subnet-xxxxxxxx
    assign_public_ip: yes
    instance_tags:
      Name: Ansible
  register: ec2

- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
  with_items: "{{ ec2.instances }}"

- local_action: file path=./hosts state=absent
  ignore_errors: yes

- local_action: file path=./hosts state=touch

- local_action: lineinfile line="[all]" insertafter=EOF dest=./hosts

- local_action: lineinfile line="{{ item.private_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./hosts
  with_items: "{{ ec2.instances }}"

- name: Wait for SSH to come up
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 600
    state: started
  with_items: "{{ ec2.instances }}"

- name: refreshing inventory cache
  meta: refresh_inventory

- hosts: all
  gather_facts: False
  tasks:
  - command: hostname -i
However, the next task, which is a simple print of hostname -i (just for the test), fails because it can't find Python 2.7 on Ubuntu 16.04 LTS (there is python3).
For that, in my dynamic hosts file I add the following line:
ansible_python_interpreter=/usr/bin/python3
But it seems that Ansible ignores it and goes straight to Python 2.7, which is missing.
I've tried to reload the inventory file with
meta: refresh_inventory
but that didn't help either.
What am I doing wrong?
I'm not sure why the refresh did not work, but I suggest setting it in the add_host section; it accepts any variable.
- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
    ansible_python_interpreter: "/usr/bin/python3"
  with_items: "{{ ec2.instances }}"
Also, I find it useful to debug with this task:
- debug: var=hostvars[inventory_hostname]