I believe I am very close to getting Ansible to launch a VM from a template in oVirt. I can authenticate without an issue and, I believe, specify the quota, but even though it launches from a template it cannot find its data_center. I can't find any example that defines data_center for the ovirt_vm module. This is what I have:
---
# tasks file for ovirt-template-launch
- name: Login to oVirt
  ovirt_auth:
    hostname: "{{ ovirt_hostname }}"
    url: "{{ ovirt_ind_url }}"
    username: "{{ ovirt_login_username }}"
    password: "{{ ovirt_login_password }}"
    insecure: true

- name: Create persistent SSO token
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    state: absent
    name: myvm

# Gather quota/datacenter
- ovirt_quota_info:
    data_center: "{{ ovirt_datacenter }}"
    name: "{{ ovirt_quota }}"
    auth: "{{ ovirt_auth }}"
  register: result
- name: Run VM with cloud init
  ovirt_vm:
    name: "{{ vmname }}"
    template: centos7-template1
    cluster: Default
    memory: 2GiB
    high_availability: true
    high_availability_priority: 50
    disks:
      - size: 10GiB
        name: data
        storage_domain: mydomain
        interface: virtio
    auth: "{{ ovirt_auth }}"
    quota_id: "{{ ovirt_quota_id }}"
    cloud_init:
      nic_boot_protocol: static
      nic_ip_address: 10.0.1.5
      nic_netmask: 255.255.254.0
      nic_gateway: 10.0.1.1
      nic_name: eth0
      nic_on_boot: true
      host_name: testdomain.com
      custom_script: |
        write_files:
          - content: |
              Hello, world!
            path: /tmp/greeting.txt
            permissions: '0644'
Error is:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'NoneType' object has no attribute 'data_center'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "'NoneType' object has no attribute 'data_center'"}
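One detail worth trying (a sketch only, assuming ovirt_quota_info returns its matches under result.ovirt_quotas): take quota_id directly from the registered lookup so an empty result fails loudly up front instead of surfacing later as a NoneType error:

# Sketch: assumes the ovirt_quota_info lookup registered above returned
# at least one quota in result.ovirt_quotas.
- name: Fail early if the quota lookup came back empty
  fail:
    msg: "Quota {{ ovirt_quota }} not found in data center {{ ovirt_datacenter }}"
  when: result.ovirt_quotas | length == 0

- name: Run VM with cloud init
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: "{{ vmname }}"
    template: centos7-template1
    cluster: Default
    quota_id: "{{ result.ovirt_quotas[0].id }}"   # instead of a separate ovirt_quota_id variable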
Related
I have built a playbook that builds a virtual server in F5. I want one line to execute only if someone supplies the variable: the default_persistence_profile line uses the variable "{{ persistenceProfile }}". Sometimes the developers don't want persistence applied to their app, but sometimes they do. When I make the variable optional in the run task and don't select a persistence profile, the task errors out. See the playbook below:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
Ansible has a mechanism for omitting parameters using the default filter, like this:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile|default(omit) }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
If persistenceProfile is unset, the default_persistence_profile parameter should not be passed to the bigip_virtual_server module.
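As a side note, if persistenceProfile can be supplied but empty (for example from a survey field), a ternary form of the same idea covers that case too; this is just a sketch, not required for the basic problem:

    # Treat an empty persistenceProfile the same as an unset one.
    default_persistence_profile: "{{ persistenceProfile if (persistenceProfile | default('') | length > 0) else omit }}"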
I am trying to automate VMware builds using Ansible. I expect a workflow engine to output a file that acts as a vars file and holds all of the values needed to build the VM with the vmware_guest module. It works great until you get to the networks dictionary portion of the module; then it falls apart.
I initially tried setting up a vars_file with all of the variables like this:
---
validate_certs: no
datacenter: this is the DC
cluster: this is the cluster
folder: "this is the folder"
name: some-server
template: template-name
datastore: "datastore-name"
netname: This is the network
ip: 10.6.6.10
netmask: 255.255.255.0
gateway: 10.6.6.1
mac: aa:bb:dd:aa:00:14
domain: domain.com
However, that returned:
argument networks is of type <type 'dict'> and we were unable to convert to list: <type 'dict'> cannot be converted to a list"}
Where the code fails is on this task:
- name: Clone a virtual machine from Windows template and customize
  vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: "{{ validate_certs }}"
    datacenter: "{{ datacenter }}"
    cluster: "{{ cluster }}"
    folder: "{{ folder }}"
    name: "{{ name }}"
    template: "{{ template }}"
    datastore: "{{ datastore }}"
    networks:
      name: "{{ netname }}"
      ip: "{{ ip }}"
      netmask: "{{ netmask }}"
      gateway: "{{ gateway }}"
      mac: "{{ mac }}"
      domain: "{{ domain }}"
I tried creating a dictionary in the variable file like this:
---
validate_certs: no
datacenter: this is the DC
cluster: this is the cluster
folder: "this is the folder"
name: some-server
template: template-name
datastore: "datastore-name"
bnetworks:
  name: This is the network
  ip: 10.6.6.10
  netmask: 255.255.255.0
  gateway: 10.6.6.1
  mac: aa:bb:dd:aa:00:14
  domain: americas.global-legal.com
And changed the task to include this:
networks:
  name: "{{ item.value.name }}"
  ip: "{{ item.value.ip }}"
  netmask: "{{ item.value.netmask }}"
  gateway: "{{ item.value.gateway }}"
  mac: "{{ item.value.mac }}"
  domain: "{{ item.value.domain }}"
with_dict: bnetworks
And I get this error:
The task includes an option with an undefined variable. The error was: 'item' is undefined
Any help would be appreciated.
argument networks is of type <type 'dict'> and we were unable to convert to list: cannot be converted to a list"}
There might be more than one network on a VM, therefore a list is needed. The correct syntax is below:
- name: Clone a virtual machine from Windows template and customize
  vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: "{{ validate_certs }}"
    datacenter: "{{ datacenter }}"
    cluster: "{{ cluster }}"
    folder: "{{ folder }}"
    name: "{{ name }}"
    template: "{{ template }}"
    datastore: "{{ datastore }}"
    networks:
      - name: "{{ netname }}"
        ip: "{{ ip }}"
        netmask: "{{ netmask }}"
        gateway: "{{ gateway }}"
        mac: "{{ mac }}"
        domain: "{{ domain }}"
This is a list:

list:
  - key: value

This is a dictionary:

dictionary:
  key: value

This is a list of dictionaries:

dictionary:
  - key1: value-1-1
    key2: value-1-2
  - key1: value-2-1
    key2: value-2-2
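For reference, here is how each of the shapes above is accessed in a template (a small sketch reusing the illustration's variable names; the last lookup assumes the list-of-dictionaries version of dictionary):

- debug:
    msg: "{{ list[0].key }}"          # first element of the list -> "value"

- debug:
    msg: "{{ dictionary.key }}"       # plain dictionary lookup -> "value"

- debug:
    msg: "{{ dictionary[1].key2 }}"   # second entry of the list of dictionaries -> "value-2-2"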
The task includes an option with an undefined variable. The error was: 'item' is undefined
The indentation of with_dict is wrong. The correct syntax is below.
vmware_guest:
  ...
  networks:
    - name: "{{ item.value.name }}"
      ip: "{{ item.value.ip }}"
      netmask: "{{ item.value.netmask }}"
      gateway: "{{ item.value.gateway }}"
      mac: "{{ item.value.mac }}"
      domain: "{{ item.value.domain }}"
with_dict: "{{ bnetworks }}"
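One caveat worth spelling out: with_dict combined with item.value.name assumes bnetworks is a dictionary of dictionaries (one entry per NIC), not the flat dictionary shown in the question. A vars file shaped like the following would match that loop; the nic1 key name is just an assumption for illustration:

# Assumed shape for with_dict + item.value.*: each key names a NIC,
# and its value holds that NIC's settings.
bnetworks:
  nic1:
    name: This is the network
    ip: 10.6.6.10
    netmask: 255.255.255.0
    gateway: 10.6.6.1
    mac: aa:bb:dd:aa:00:14
    domain: domain.com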
I deploy some less well supported OSes such as Debian 9, Debian 8, Red Hat 6/7, and CentOS 7.
The IP configuration is unsupported at boot time, so I add only the VLAN/virtual network interface, then I use vmware_vm_shell to configure the OS step by step.
What I'm looking for is a trick to wait for an event, such as /proc/net/dev existing on the remote VM, before continuing with the other steps.
What I have tried so far:
- hosts: localhost
  tasks:
    - name: Create a virtual machine "{{ vm_name }}"
      vmware_guest:
        datacenter: '{{ datacenter }}'
        hostname: '{{ vcenter }}'
        username: "{{ login }}"
        password: "{{ passwd }}"
        folder: "{{ folder }}"
        name: "{{ vm_name }}"
        template: '{{ template }}'
        cluster: "{{ cluster }}"
        state: poweredon
        disk:
          - size_gb: "{{ disksizeGB }}"
            datastore: '{{ datastore }}'
        hardware:
          memory_mb: '{{ ramsizeMB }}'
          num_cpus: '{{ vcpu_num }}'
          hotadd_cpu: True
          hotremove_cpu: True
          hotadd_memory: True
        networks: '{{ vlans }}'
        #wait_for_ip_address: yes # ERR: there are ifaces, but no IP at this time
      register: deploy

    - name: Wait for server to start
      local_action:
        module: wait_for
        timeout: 15
      when: deploy.changed
The last wait block is crude (it just waits N seconds); I would like something smarter.
Any ideas?
If I don't wait, I sometimes get the error below because the VM has not booted yet (the template does have VMware Tools installed):
fatal: [localhost]: FAILED! => {"changed": false, "msg": "VMWareTools is not installed or is not running in the guest. VMware Tools are necessary to run this module."}
https://docs.ansible.com/ansible/2.6/modules/vmware_guest_module.html#vmware-guest-module
https://docs.ansible.com/ansible/latest/modules/vmware_vm_shell_module.html
OK, I found it myself :)
- name: Wait for server to boot
  vmware_vm_shell:
    datacenter: '{{ datacenter }}'
    hostname: 'vcenter{{ vcenter }}'
    username: "{{ login }}"
    password: "{{ passwd }}"
    validate_certs: False
    folder: "{{ folder }}"
    vm_id: "{{ vm_name }}"
    cluster: "{{ cluster }}"
    vm_password: '{{ passwd }}'
    vm_username: root
    vm_shell: '/bin/sleep'
    vm_shell_args: 0
  when: deploy.changed and 'debian' in distro
  register: has_reboot
  # retry until the in-guest command succeeds, i.e. VMware Tools is responding
  until: has_reboot is succeeded
  delay: 2
  retries: 150
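A possibly cleaner alternative, assuming the vmware_guest_tools_wait module is available in your Ansible version, is to wait explicitly for VMware Tools instead of retrying a no-op shell command in the guest; a sketch:

# Sketch: wait until VMware Tools reports ready in the freshly deployed VM.
- name: Wait for VMware Tools to become available in the guest
  vmware_guest_tools_wait:
    hostname: 'vcenter{{ vcenter }}'
    username: "{{ login }}"
    password: "{{ passwd }}"
    validate_certs: False
    folder: "{{ folder }}"
    name: "{{ vm_name }}"
  when: deploy.changed
  register: tools_ready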
I have created an Ansible role to create multiple Lambda functions, passing some parameters from a variable file. My variable file looks like this:
Variable file
S3BucketName: "test_bucket"
S3Key1: "test.zip"
runtime: "python3.6"
handler1: "test.lambda_handler"
role1: "test_role_arn"
memory_size: "128"
timeout: "180"
s3_key2: "temp.zip"
role2: "temp_role_Arn"
handler2: "temp.lambda_handler"
In my playbook, I am using an Ansible loop to create multiple AWS Lambda functions at the same time, but it fails when I reference variables inside with_items.
Playbook file
- hosts: localhost
  roles:
    - ansible-lambda
  vars_files:
    - "ansible-lambda/vars/cf_vars.yaml"
  tasks:
    - lambda:
        name: '{{ item.name }}'
        region: "{{ aws_region }}"
        state: "{{ state }}"
        runtime: "{{ runtime }}"
        timeout: "{{ timeout }}"
        memory_size: "{{ memory_size }}"
        s3_bucket: "{{ S3BucketName }}"
        s3_key: '{{ item.s3_key }}'
        role: '{{ item.role }}'
        handler: '{{ item.handler }}'
      with_items:
        - name: test
          s3_key: "{{ S3Key1 }}"  # referring to variable 1
        - name: temp
          s3_key: "{{ S3Key2 }}"  # referring to variable 2

    - debug:
        msg: "Lambda creation Complete!!"
Following is the error:
fatal: [localhost]: FAILED! => {"msg": "'S3Key1' is undefined"}
This playbook works when I pass literal values instead of variables, e.g. s3_key: test.zip.
How do I use variables in with_items?
-------------- var file ---------------
aws_region: austin
lambda_list:
  - name: lambda1
    state: "UR STATE HERE"
    S3BucketName: "test_bucket"
    S3Key: "test.zip"
    runtime: "python3.6"
    handler: "test.lambda_handler"
    role_desc: "test_role_arn"
    memory_size: "128"
    timeout: "180"
  - name: lambda2
    state: "UR STATE HERE"
    S3BucketName: "test_bucket"
    S3Key: "test2.zip"
    runtime: "python2.7"
    handler: "test.lambda_handler"
    role_desc: "test_role_ARN"
    memory_size: "256"
    timeout: "150"
---------------playbook------------------------
- hosts: localhost
  vars_files:
    - "ansible-lambda/vars/cf_vars.yaml"
  tasks:
    - lambda:
        name: '{{ item.name }}'
        region: "{{ aws_region }}"
        state: "{{ item.state }}"
        runtime: "{{ item.runtime }}"
        timeout: "{{ item.timeout }}"
        memory_size: "{{ item.memory_size }}"
        s3_bucket: "{{ item.S3BucketName }}"
        s3_key: "{{ item.S3Key }}"
        role: "{{ item.role_desc }}"
        handler: "{{ item.handler }}"
      with_items:
        - "{{ lambda_list }}"
Here's a snippet of what IMHO is the correct way to achieve what you're trying to do. There are certainly other ways, but this one is both efficient and easy to configure: in the example above there is a list in which each entry is a dict holding one lambda's info, and with_items iterates over the entries, letting you use each entry's data as {{ item.name }}.
You could even put a dict/list inside a dict, for example:
lambda_list:
  - name: lambda1   # <-- each dash ('-') starts a list entry, and that entry is a dict
    S3:             # <-- a nested dict inside the entry
      S3BucketName: "test_bucket"
      S3Key: "test.zip"
  - name: lambda2
    S3:
      S3BucketName: "test_bucket"
      S3Key: "test2.zip"
In this case, to access the nested values you would use {{ item.S3.S3BucketName }} or {{ item['S3']['S3BucketName'] }}.
If it were a dict of dicts rather than a list of dicts, you would get each key/value pair but no direct way to pick a specific entry by position (with loops you can iterate the dict and use 'when' to select the desired key).
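As a quick illustration of the nested access, a sketch using the lambda_list structure just above:

# Sketch: print the nested S3 values for each entry in lambda_list.
- debug:
    msg: "{{ item.name }} uses {{ item.S3.S3BucketName }}/{{ item.S3.S3Key }}"
  with_items: "{{ lambda_list }}"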
Here are a few references worth reading about loops, dicts, and how to access them:
http://ansible-docs.readthedocs.io/zh/stable-2.0/rst/playbooks_loops.html#nested-loops
I have the following dict:
endpoint:
  esxi_hostname: servername.domain.com
I'm trying to use it as an option via Jinja2 for vmware_guest but have been unsuccessful. The reason I'm trying to do it this way is that the dict is dynamic: it can be either cluster: clustername or esxi_hostname: hostname, which are mutually exclusive in the vmware_guest module.
Here is how I'm presenting it to the module:
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    datacenter: "{{ ansible_host_datacenter }}"
    folder: "/DCC/{{ ansible_host_datacenter }}/vm"
    "{{ endpoint }}"
    name: "{{ guest }}"
    state: present
    guest_id: "{{ osid }}"
    disk: "{{ disks }}"
    networks: "{{ niclist }}"
    hardware:
      memory_mb: "{{ memory_gb|int * 1024 }}"
      num_cpus: "{{ num_cpus|int }}"
      scsi: "{{ scsi }}"
    customvalues: "{{ customvalues }}"
    cdrom:
      type: client
  delegate_to: localhost
And here is the error I'm getting when including the tasks file:
TASK [Preparation : Include VM tasks] *********************************************************************************************************************************************************************************
fatal: [10.10.10.10]: FAILED! => {"reason": "Syntax Error while loading YAML.
The error appears to have been in '/data01/home/hit/tools/ansible/playbooks/roles/Preparation/tasks/prepareVM.yml': line 36, column 4, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
"{{ endpoint }}"
hostname: "{{ vcenter_hostname }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
exception type: <class 'yaml.parser.ParserError'>
exception: while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 36, column 4"}
So in summary, I'm not sure how to format this or if it is even possible.
The post from techraf sums up your problem, but for a possible solution, in the docs, especially regarding Jinja filters, there is the following bit:
Omitting Parameters
As of Ansible 1.8, it is possible to use the default filter to omit
module parameters using the special omit variable:
- name: touch files with an optional mode
  file: dest={{item.path}} state=touch mode={{item.mode|default(omit)}}
  with_items:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"
For the first two files in the list, the default mode will be
determined by the umask of the system as the mode= parameter will not
be sent to the file module while the final file will receive the
mode=0444 option.
So it looks like what should be tried is:
esxi_hostname: "{{ endpoint.esxi_hostname | default(omit) }}"
# however you want the alternative cluster settings done.
# I don't know this module.
cluster: "{{ cluster | default(omit) }}"
This is obviously reliant on the vars to only have one choice set.
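Put together in the context of the original task, that could look something like the sketch below; it assumes the dynamic endpoint dict carries whichever one of the two mutually exclusive keys applies, so the other is omitted:

# Sketch: pass whichever key the dynamic endpoint dict provides; omit the rest.
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    datacenter: "{{ ansible_host_datacenter }}"
    folder: "/DCC/{{ ansible_host_datacenter }}/vm"
    esxi_hostname: "{{ endpoint.esxi_hostname | default(omit) }}"
    cluster: "{{ endpoint.cluster | default(omit) }}"
    name: "{{ guest }}"
    state: present
  delegate_to: localhost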
There is no way you could ever use the syntax you tried in the question, because first and foremost Ansible requires a valid YAML file.
The closest workaround would be to use a YAML anchor/alias, although it would work only with literals:
# ...
vars:
  endpoint: &endpoint
    esxi_hostname: servername.domain.com

tasks:
  - name: Create VM pysphere
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      validate_certs: no
      datacenter: "{{ ansible_host_datacenter }}"
      folder: "/DCC/{{ ansible_host_datacenter }}/vm"
      <<: *endpoint
      name: "{{ guest }}"
      state: present
      guest_id: "{{ osid }}"
      disk: "{{ disks }}"
      networks: "{{ niclist }}"
      hardware:
        memory_mb: "{{ memory_gb|int * 1024 }}"
        num_cpus: "{{ num_cpus|int }}"
        scsi: "{{ scsi }}"
      customvalues: "{{ customvalues }}"
      cdrom:
        type: client
    delegate_to: localhost
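For clarity, once the YAML parser resolves the merge key, the module receives the anchored mapping inlined; a sketch of the equivalent expanded form, showing only the relevant keys:

    # Equivalent after <<: *endpoint is resolved (other keys unchanged).
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      datacenter: "{{ ansible_host_datacenter }}"
      folder: "/DCC/{{ ansible_host_datacenter }}/vm"
      esxi_hostname: servername.domain.com   # injected by <<: *endpoint
      name: "{{ guest }}"
      state: present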