I created this playbook to deploy a new VM from a template.
Deploying a new Linux VM works correctly, but if I try to deploy a Windows VM I receive this error:
TASK [Clone the template] ***************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to create a virtual machine : A specified parameter was not correct: spec.identity"}
This is my playbook:
...
- name: Clone the template
  vmware_guest:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vsphere_pass_result.user_input }}"
    validate_certs: False
    name: "{{ vm_name }}"
    template: "{{ template_name_sel[tmpl] }}"
    datacenter: "{{ datacenter_name }}"
    cluster: "{{ cluster_name }}"
    folder: "/{{ host_location_sel[loc] }}/{{ vm_env_sel[env] }}"
    datastore: "{{ vm_datastore_sel[dstg] }}"
    guest_id: "{{ vm_name }}"
    networks:
      - name: "{{ vm_network_sel[vlan] }}"
        ip: "{{ vm_ip_result.user_input }}"
        netmask: "{{ vm_mask_result.user_input }}"
        gateway: "{{ vm_gw_result.user_input }}"
        type: static
        dns_servers: "8.8.8.8"
    customization:
      autologon: yes
      dns_servers:
        - "{{ vm_dns1_result.user_input }}"
        - "{{ vm_dns2_result.user_input }}"
    hardware:
      memory_mb: "{{ vm_ram_sel[ram] }}"
      num_cpus: "{{ vm_cpu_result.user_input }}"
      num_cpu_cores_per_socket: "{{ vm_core_result.user_input }}"
    state: poweredon
    wait_for_ip_address: yes
  delegate_to: localhost
  register: vm_info

- debug:
    var: vm_info
Thanks for the support.
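A note in case it helps with the spec.identity error: that part of the clone spec is the guest customization identity, and for a Windows template vCenter usually expects a valid Sysprep identity, so the customization block typically needs Windows-specific fields; also, guest_id (if set at all) must be a vSphere guest OS identifier rather than the VM name. A rough sketch of the relevant pieces, assuming community.vmware's vmware_guest and a Windows Server 2016 template (the guest_id value, the win_admin_pass_result variable, and the fullname/orgname strings are placeholders, not taken from the playbook above):

- name: Clone the template (Windows)
  vmware_guest:
    # ... connection, name, template, folder and datastore options as above ...
    # Must be a vSphere guest identifier (or omitted to inherit it from the
    # template), not the VM name.
    guest_id: windows9Server64Guest
    networks:
      - name: "{{ vm_network_sel[vlan] }}"
        ip: "{{ vm_ip_result.user_input }}"
        netmask: "{{ vm_mask_result.user_input }}"
        gateway: "{{ vm_gw_result.user_input }}"
        type: static
    customization:
      hostname: "{{ vm_name }}"
      # Windows/Sysprep-specific fields (placeholder values)
      fullname: Administrator
      orgname: Example Org
      password: "{{ win_admin_pass_result.user_input }}"
      autologon: yes
      dns_servers:
        - "{{ vm_dns1_result.user_input }}"
        - "{{ vm_dns2_result.user_input }}"
    state: poweredon
    wait_for_ip_address: yes
  delegate_to: localhost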
The Ansible theforeman.foreman.content_view module has a repositories argument that needs to be used like this:
- name: "Create or update Fedora content view"
theforeman.foreman.content_view:
username: "admin"
password: "changeme"
server_url: "https://foreman.example.com"
name: "Fedora CV"
organization: "My Cool new Organization"
repositories:
- name: 'Fedora 26'
product: 'Fedora'
I need to generate it dynamically. I tried it like the following:
- name: "Create or update content views"
theforeman.foreman.content_view:
username: "{{ foreman.user }}"
password: "{{ foreman.password }}"
server_url: "{{ foreman.url }}"
organization: "{{ org.name }}"
name: "{{ item.0.name }}_content"
repositories: |
- name: '{{ item.1.name }}'
product: "{{ item.0.name }}_repos"
loop: "{{ os|subelements('repos') }}"
loop_control:
label: "{{ item.0.name }}"
This also did not work:
- name: "Create or update content views"
theforeman.foreman.content_view:
username: "{{ foreman.user }}"
password: "{{ foreman.password }}"
server_url: "{{ foreman.url }}"
organization: "{{ org.name }}"
name: "{{ item.0.name }}_content"
repositories: |
{% for n in item.0.name %}
- name: '{{ item.1.name }}'
product: "{{ item.0.name }}_repos"
{% endfor %}
loop: "{{ os|subelements('repos') }}"
loop_control:
label: "{{ item.0.name }}"
but it always overwrites the previous one, so only the last executed item remains. How could we generate the repositories list with a loop? (One possible approach is sketched after the example vars below.)
Example vars:
os:
  - name: alma9
    repos:
      - name: "almalinux9_base_x86_64"
        url: "https://repo.almalinux.org/almalinux/9/BaseOS/x86_64/os/"
        gpg_key: "RPM-GPG-KEY-AlmaLinux9"
      - name: "almalinux9_appstream_x86_64"
        url: "https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/"
        gpg_key: "RPM-GPG-KEY-AlmaLinux9"
Is there a way to check if all loop items from a previous step have been skipped?
I want to download the latest files from the GitHub API and compare them to the templated ones. Only the files that have changed will be committed as blobs. Based on the skipped property, I can check what needs to be included in the tree.
But how can I skip the "Create tree" task when all items in blobs have been skipped?
The when: "not skipped(blobs)" in "Create tree" below is only for illustration purposes.
- name: Download current files
  uri:
    url: https://github.com/api/v3/repos{{ repository.path }}/contents{{ item.path }}
    user: "{{ API_USER }}"
    password: "{{ API_TOKEN }}"
    force_basic_auth: yes
    status_code: [200, 404]
  register: current_files
  loop:
    - { path: "/.github/workflows/build.yml" }
    - { path: "/.github/workflows/deploy-dev.yml" }
    - { path: "/.github/workflows/deploy-int.yml" }
    - { path: "/.github/workflows/deploy-prod.yml" }
- name: Commit file blob
  uri:
    url: https://github.com/api/v3/repos{{ repository.path }}/git/blobs
    method: "POST"
    user: "{{ API_USER }}"
    password: "{{ API_TOKEN }}"
    force_basic_auth: yes
    body_format: json
    status_code: [200, 201]
    body: |
      {
        "encoding": "base64",
        "content": "{{ lookup('template', item.src) | b64encode }}"
      }
  vars:
    target_group: "{{ item.target_group }}"
    service_deployment_env: "{{ item.service_deployment_env }}"
  when: "(lookup('template', item.src) | b64encode) != current_files['results'][index]['json']['content'].replace('\n', '')"
  register: blobs
  loop:
    - { src: 'build.yml.jinja2', dest: '.github/workflows/build.yml', service_deployment_env: '' }
    - { src: 'deploy.yml.jinja2', dest: '.github/workflows/deploy-dev.yml', service_deployment_env: 'dev' }
    - { src: 'deploy.yml.jinja2', dest: '.github/workflows/deploy-int.yml', service_deployment_env: 'int' }
    - { src: 'deploy.yml.jinja2', dest: '.github/workflows/deploy-prod.yml', service_deployment_env: 'prod' }
  loop_control:
    index_var: index

- name: Create tree
  uri:
    url: https://github.com/api/v3/repos{{ repository.path }}/git/trees
    method: "POST"
    user: "{{ API_USER }}"
    password: "{{ API_TOKEN }}"
    force_basic_auth: yes
    body_format: json
    status_code: [200, 201]
    body: "{{ lookup('template', 'create_tree_body.json.jinja2') }}"
  when: "not skipped(blobs)" # HOW IS THIS POSSIBLE??
  register: tree
OK... this was too easy. Apparently, Ansible resolves the blobs list by itself, so simply using
when: "blobs is not skipped"
works just fine.
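For context on why this works (as far as I can tell): when every iteration of a loop is skipped, Ansible adds a top-level skipped: true to the registered variable next to results, and that flag is what the skipped test checks. A quick way to inspect it:

# Shows the top-level flag that "blobs is skipped" evaluates; it only appears
# to be defined when all loop iterations were skipped.
- name: Show the flag that "blobs is skipped" evaluates
  debug:
    var: blobs.skipped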
Team, I have the task below that runs on localhost, but I need to run it on all hosts under a group defined in the inventory, for which I think I need delegate_to and with_items. Any hint on how I can run this task on all hosts?
- name: verify if kernel modules exists
  stat:
    path: /lib/modules/{{ kernel_version }}/kernel/{{ item.dir }}/{{ item.module_name }}
    checksum_algorithm: sha1
  register: res
  failed_when: res.stat.checksum != item.sha1
  with_items:
    - { dir: fs/fscache, module_name: fscache.ko, sha1: "{{ checksum.fscache }}" }
    - { dir: fs/cachefiles, module_name: cachefiles.ko, sha1: "{{ checksum.cachefiles }}" }
    - { dir: fs/isofs, module_name: isofs.ko, sha1: "{{ checksum.isofs }}" }
    - { dir: drivers/target, module_name: target_core_user.ko, sha1: "{{ checksum.target_core_user }}" }
    - { dir: drivers/target/loopback, module_name: tcm_loop.ko, sha1: "{{ checksum.tcm_loop }}" }

- fail:
    msg: "FAILED TO FIND {{ item.item }}"
  with_items: "{{ res.results }}"
  when: item.stat.exists == False
Do I need to use the following?
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-gpu-node'] }}"
I tried to use hosts: directly in the task as below, but I get a syntax error:
# tasks file for maglev-deployment
- name: "verify if kernel modules exists"
  hosts: kube-gpu-node
  stat:
    path: /lib/modules/{{ gpu_node_kernel }}/kernel/{{ item.dir }}/{{ item.module_name }}
    checksum_algorithm: sha1
  register: res
  #failed_when: res.results.item.sha1 != item.sha1
  #failed_when: res.results[0] == ''
  with_items:
    - { dir: fs/fscache, module_name: fscache.ko, sha1: "{{ checksum.fscache }}" }
    - { dir: fs/cachefiles, module_name: cachefiles.ko, sha1: "{{ checksum.cachefiles }}" }
    - { dir: fs/isofs, module_name: isofs.ko, sha1: "{{ checksum.isofs }}" }
    - { dir: drivers/target, module_name: target_core_user.ko, sha1: "{{ checksum.target_core_user }}" }
    - { dir: drivers/target/loopback, module_name: tcm_loop.ko, sha1: "{{ checksum.tcm_loop }}" }
  ignore_errors: yes

- debug:
    var: res

- name: check if cachefilesd.conf exists
  stat:
    path: /etc/cachefilesd.conf
  register: result
ERROR! 'hosts' is not a valid attribute for a Task
The error appears to be in '/home/ssh/tasks/main.yaml': line 2, column 9, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:

# tasks file for maglev-deployment
- name: "verify if kernel modules exists"
  ^ here
This error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration
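The error is accurate: hosts: is a play-level keyword and is not valid on a task, and a single task also cannot combine delegate_to: "{{ item }}" with a second with_items, because a task only takes one loop. The usual approach is to target the group in the play that runs these tasks. A sketch under that assumption (only the first two modules shown):

# site.yml (sketch) - run the existing check on every host in the group
- hosts: kube-gpu-node
  tasks:
    - name: verify if kernel modules exists
      stat:
        path: /lib/modules/{{ kernel_version }}/kernel/{{ item.dir }}/{{ item.module_name }}
        checksum_algorithm: sha1
      register: res
      # Guard on exists so the checksum comparison does not error out
      # when the module file is missing.
      failed_when: res.stat.exists and res.stat.checksum != item.sha1
      with_items:
        - { dir: fs/fscache, module_name: fscache.ko, sha1: "{{ checksum.fscache }}" }
        - { dir: fs/cachefiles, module_name: cachefiles.ko, sha1: "{{ checksum.cachefiles }}" }
        # ... remaining modules as in the original list ...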
I have this Ansible playbook code:
- name: Send file to server
  uri:
    url: "{{ url }}:{{ port }}/application/{{ item | basename }}"
    method: POST
    body: "{{ lookup('file','{{ item }}') }}"
    force_basic_auth: yes
    status_code: 201
    body_format: json
  with_fileglob: "{{ tmp_dir_for_files }}/configs/*"
  register: code_result
  until: code_result.status == 201
  retries: "{{ uri_retries }}"
  delay: "{{ uri_delays }}"
What I need to achieve is that a file is uploaded to the server only if it is not already there.
Here is my approach to this problem:
- name: Get file from server
  uri:
    url: "{{ url }}:{{ port }}/application/{{ item | basename }}"
    method: GET
    body: "{{ lookup('file','{{ item }}') }}"
    force_basic_auth: yes
    status_code: 404
    body_format: json
  with_fileglob: "{{ tmp_dir }}/configs/*"
  register: get_code_result
  ignore_errors: true

- name: Send file to server
  uri:
    url: "{{ url }}:{{ port }}/application/{{ item.item | basename }}"
    method: POST
    body: "{{ lookup('file','{{ item.item }}') }}"
    force_basic_auth: yes
    status_code: 201
    body_format: json
  with_items:
    - "{{ get_code_result.results }}"
  when: item.status == 404
  register: code_result
  until: code_result.status == 201
  retries: "{{ uri_retries }}"
  delay: "{{ uri_delays }}"
It appears to be working
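One possible refinement (just a sketch): since the GET is only used for its status code, the request body can be dropped and both 200 and 404 accepted as valid responses, which removes the need for ignore_errors and keeps real connection errors visible; passing item.item straight to lookup() also avoids the nested "{{ }}" moustaches:

- name: Check whether the file already exists on the server
  uri:
    url: "{{ url }}:{{ port }}/application/{{ item | basename }}"
    method: GET
    force_basic_auth: yes
    # 200 = already there, 404 = needs uploading; anything else is a real error
    status_code: [200, 404]
  with_fileglob: "{{ tmp_dir }}/configs/*"
  register: get_code_result

- name: Send file to server
  uri:
    url: "{{ url }}:{{ port }}/application/{{ item.item | basename }}"
    method: POST
    body: "{{ lookup('file', item.item) }}"
    force_basic_auth: yes
    status_code: 201
    body_format: json
  with_items: "{{ get_code_result.results }}"
  when: item.status == 404
  register: code_result
  until: code_result.status == 201
  retries: "{{ uri_retries }}"
  delay: "{{ uri_delays }}"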
I have an inventory file with the following variables:
vpc_public_net1=["10.30.0.0/24","AZ=a"]
vpc_public_net2=["10.30.1.0/24","AZ=b"]
I can extract the AZ value with "{{ item[1].split('=')[1] }}", but I'm having difficulties extracting both the subnet and the AZ in the same Ansible task.
My Ansible code:
- name: Create Public Subnets
  ec2_vpc_subnet: state="present"
                  vpc_id="{{ vpc_id }}"
                  cidr="{{ item.subnet }}"
                  az="{{ item.az }}"
                  region="{{ aws_region }}"
                  aws_access_key="{{ aws_access_key }}"
                  aws_secret_key="{{ aws_secret_key }}"
Redefine as a dictionary:
vpc_public_net1='{"subnet": "10.30.0.0/24", "az": "a"}'
vpc_public_net2='{"subnet": "10.30.1.0/24", "az": "b"}'
and:
- name: Create Public Subnets
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ vpc_id }}"
    cidr: "{{ item.subnet }}"
    az: "{{ item.az }}"
    region: "{{ aws_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  with_items:
    - "{{ vpc_public_net1 }}"
    - "{{ vpc_public_net2 }}"