Ansible, vmware_guest and mounting a disk
I'm using Ansible and the vmware_guest module to spin up VMs.
Specifically, I'm creating VMs from a template, like so:
- name: Deploy VM from template
  vmware_guest:
    annotation: "{{ lookup('template', './templates/annotations.j2') }}"
    hostname: '{{ deploy_vsphere_host }}'
    username: '{{ deploy_vsphere_user }}'
    password: '{{ deploy_vsphere_password }}'
    validate_certs: no
    datacenter: '{{ deploy_vsphere_datacenter }}'
    esxi_hostname: '{{ deploy_vsphere_cluster }}'
    folder: '{{ deploy_vsphere_folder }}'
    name: '{{ inventory_hostname }}'
    guest_id: '{{ guest_id }}'
    disk:
      - size_gb: '{{ disk_size }}'
        type: thin
        datastore: '{{ deploy_vsphere_datastore }}'
    networks:
      - name: '{{ guest_network }}'
        start_connected: true
        allow_guest_control: true
        ip: "{{ ansible_host }}"
        netmask: '{{ guest_netmask }}'
        gateway: '{{ guest_gateway }}'
        dns_servers:
          - '{{ guest_dns_server }}'
    hardware:
      memory_mb: '{{ guest_memory }}'
      num_cpus: '{{ guest_vcpu }}'
    customization:
      dns_servers:
        - '{{ guest_dns_server }}'
      domain: '{{ guest_domain_name }}'
      hostname: '{{ inventory_hostname }}'
    template: '{{ guest_template }}'
    wait_for_ip_address: yes
    wait_for_customization: yes
    state: "{{ state }}"
  delegate_to: localhost
However, the disk space is not automatically partitioned or mounted.
For example, I'm provisioning a 1000GB disk via {{ disk_size }}, but when I SSH into the VM I see the following:
lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0 1000G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   15G  0 part
  ├─centos-root 253:0    0 13.4G  0 lvm  /
  └─centos-swap 253:1    0  1.6G  0 lvm
sr0              11:0    1 1024M  0 rom
df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   14G   12G  2.3G  84% /
devtmpfs                  12G     0   12G   0% /dev
tmpfs                     12G     0   12G   0% /dev/shm
tmpfs                     12G  2.3M   12G   1% /run
tmpfs                     12G     0   12G   0% /sys/fs/cgroup
/dev/sda1               1014M  189M  826M  19% /boot
tmpfs                    2.4G     0  2.4G   0% /run/user/1000
fdisk
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a708c
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    33554431    15727616   8e  Linux LVM
Disk /dev/mapper/centos-root: 14.4 GB, 14382268416 bytes, 28090368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
blkid
/dev/mapper/centos-root: UUID="1f352c63-6b7d-4005-89ce-a0dbc149d2e0" TYPE="xfs"
/dev/sda2: UUID="KvUYqI-biVh-jl4C-yi2F-ZPtK-Fokq-1CHzwH" TYPE="LVM2_member"
/dev/sda1: UUID="cdaaa519-be25-4ada-badb-28edbfae5750" TYPE="xfs"
/dev/mapper/centos-swap: UUID="c614ac9a-665f-4a0c-bd13-31c377db3c43" TYPE="swap"
Question
What is the best way, using Ansible, to mount and use the 1000GB disk?
I think the vmware_guest_disk module will help you. It adds and manages virtual disks on an existing VM; keep in mind that attaching (or enlarging) a disk only changes the virtual hardware, so you still have to partition, format and mount the space inside the guest.
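For illustration, here is a minimal, untested sketch that attaches a dedicated data disk with community.vmware.vmware_guest_disk, reusing the connection variables from the playbook above (the scsi_controller and unit_number values are assumptions, pick a free slot on your VM):

- name: Add a data disk to the VM
  community.vmware.vmware_guest_disk:
    hostname: '{{ deploy_vsphere_host }}'
    username: '{{ deploy_vsphere_user }}'
    password: '{{ deploy_vsphere_password }}'
    validate_certs: no
    datacenter: '{{ deploy_vsphere_datacenter }}'
    name: '{{ inventory_hostname }}'
    disk:
      # the SCSI address below is an assumption: use any free controller/unit on the VM
      - size_gb: '{{ disk_size }}'
        type: thin
        datastore: '{{ deploy_vsphere_datastore }}'
        scsi_controller: 0
        unit_number: 1
        state: present
  delegate_to: localhost

For the guest side, a rough sketch with the community.general.parted, community.general.filesystem and ansible.posix.mount modules, assuming the new disk shows up as /dev/sdb and that /data is the desired mount point (both names are assumptions):

- name: Partition, format and mount the data disk
  hosts: all
  become: true
  tasks:
    - name: Create one partition spanning the whole disk
      community.general.parted:
        device: /dev/sdb        # assumption: adjust to whatever device the new disk gets
        number: 1
        state: present
        label: gpt

    - name: Put an XFS filesystem on the partition
      community.general.filesystem:
        fstype: xfs
        dev: /dev/sdb1

    - name: Mount it and persist the entry in /etc/fstab
      ansible.posix.mount:
        path: /data             # assumption: pick the mount point you actually need
        src: /dev/sdb1
        fstype: xfs
        state: mounted

If you would rather grow the existing centos volume group on /dev/sda than add a second disk, the guest-side steps change (grow the partition, then pvresize and lvextend, then grow the filesystem), but the idea is the same: the VMware modules only size the virtual disk; partitioning, formatting and mounting it is up to the guest.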