Before Catalina, I never had issues resizing the macOS hard disk with VMware Fusion.
I am using:
VMware Fusion Professional Version 11.5.0 (14634996)
Host: macOS Mojave 10.14.6
Steps to Reproduce
Create a new VM: File > New > Create a custom virtual machine (or install from disk or image)
Select Apple OS X > macOS 10.15
[x] Create a new virtual disk
Select button Customize Settings
In the VM Settings > Hard Disk (SATA), set Disk size to 400 GB.
Note: VMware does not actually consume physical disk space on your host until needed.
Continue with Catalina install
When install is completed, the Catalina partition is only 42 GB.
I expected 400 GB.
Solution:
In case it may help others:
Open a terminal in the Catalina VM
Use diskutil list to identify the Apple_APFS Container disk
Use diskutil apfs resizeContainer to resize the Apple_APFS Container (disk1 here) to use all free space
Before
jenkins@Jenkinss-Mac ~ % diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *425.2 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_APFS Container disk1 42.6 GB disk0s2
/dev/disk1 (synthesized):
#: TYPE NAME SIZE IDENTIFIER
0: APFS Container Scheme - +42.6 GB disk1
Physical Store disk0s2
1: APFS Volume Macintosh HD - Data 3.6 GB disk1s1
2: APFS Volume Preboot 83.9 MB disk1s2
3: APFS Volume Recovery 529.0 MB disk1s3
4: APFS Volume VM 1.1 MB disk1s4
5: APFS Volume Macintosh HD 10.6 GB disk1s5
Fix: Resize Apple_APFS Container disk1 to use all free space
***** Warning ***** Make sure the disk number and partition slice (in my case "s2") match the diskutil list output on your system
diskutil apfs resizeContainer /dev/disk0s2 0
After
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *425.2 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_APFS Container disk1 425.0 GB disk0s2
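To double-check from inside the guest, diskutil and df should now also report the larger container and the free space (exact output will vary on your system):
diskutil list /dev/disk1
df -h /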
Related
The exFAT partition of my external drive is not mounted automatically on my Mac.
I created a mytest directory under /Volumes and tried to mount the partition there, but I get the following error and cannot mount it.
This works for everything else, but I cannot mount this partition.
How can I get it mounted?
sh-3.2# diskutil list rdisk7
/dev/disk7 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *8.0 TB disk7
1: Microsoft Reserved 16.8 MB disk7s1
2: Microsoft Basic Data myexfhdtosi 4.4 TB disk7s2
3: Microsoft Basic Data 1.6 TB disk7s3
4: Apple_HFS myhfshddtosi 1000.0 GB disk7s4
sh-3.2# mount -v -t exfat /dev/rdisk7s2 /Volumes/mytest
mount_exfat: /dev/rdisk7s2 on /Volumes/mytest: Block device required
mount: /Volumes/mytest failed with 71
Try killing the fsck process and mounting again.
sudo pkill -f fsck
See this thread for a detailed discussion.
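A minimal sequence to try, assuming fsck_exfat is the process holding the device (and noting that mount_exfat expects the block device node disk7s2 rather than the raw rdisk7s2 node the error complains about):
ps aux | grep fsck_exfat                          # check whether fsck is still scanning the volume
sudo pkill -f fsck                                # stop it
sudo mount -t exfat /dev/disk7s2 /Volumes/mytest  # mount the block device, not rdisk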
I am trying to deploy the all-in-one configuration using kolla-ansible with Ceph enabled:
enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"
My setup is a VirtualBox VM running Ubuntu 18.04.4 desktop with 2 CPU cores, a 30 GB disk (single disk), and 2 GB RAM; the partition table type is msdos.
ansible version==2.9.7
kolla-ansible version==9.1.0
In order to install a Ceph OSD using kolla-ansible, I read that a partition should have the name KOLLA_CEPH_OSD_BOOTSTRAP_BS.
Hence, I created a 20 GB root partition (/dev/sda1), then an extended partition (/dev/sda2) for the remaining 20 GB, followed by two logical partitions (/dev/sda5 and /dev/sda6) of 10 GB each for the OSD. But msdos-type partitioning has no feature for assigning names to partitions.
So my questions are:
How do I go about labeling partitions on an msdos-type partition table so that kolla-ansible recognizes /dev/sda5 and /dev/sda6 as designated for the Ceph OSD?
Is it mandatory to have a separate storage drive for the Ceph OSD, other than the one containing the operating system (I know it's not recommended to put everything on a single disk)?
How should I provision my single drive's space in order to install the Ceph OSD using kolla-ansible?
P.S.: I also tried to install Ceph using kolla-ansible on an OpenStack VM (4 CPU cores, 80 GB disk space on a single drive, as I didn't install Cinder in my OpenStack infra) with an Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:
/dev/vda1 for root partition
/dev/vda2 for ceph OSD
/dev/vda3 for ceph OSD
But the drawback was that kolla-ansible wiped the complete disk, and the installation failed.
Any help is highly appreciated. Thanks a lot in advance.
I also installed a Kolla-Ansible single-node all-in-one deployment with Ceph as the storage backend, so I had the same problem.
Yes, the bluestore installation of Ceph doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.
For labeling, I used the following bash script:
#!/bin/bash
DEV="/dev/vdb"
(
echo g # create GPT partition table
echo n # new partition
echo # partition number (automatic)
echo # start sector (automatic)
echo +10G # end sector (use 10G size)
echo w # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS
Make sure that DEV at the beginning is set correctly for your setup. This creates a new partition table and one 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and wipes the whole disk, so the size value doesn't matter; it only applies to the temporary partition on the disk.
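To verify the label was applied before running the deploy (assuming /dev/vdb as above):
parted /dev/vdb print                     # the name column should show KOLLA_CEPH_OSD_BOOTSTRAP_BS
lsblk -o NAME,SIZE,PARTLABEL /dev/vdb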
A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup at /etc/kolla/config/ceph.conf (assuming the default kolla installation path), with the content:
[global]
osd pool default size = 1
osd pool default min size = 1
This makes sure that pools only require a single replica, so a single OSD is enough for kolla-ansible. If your kolla directory with globals.yml is not under /etc/kolla/, you have to change the path of the config file accordingly.
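After the deployment, the cluster state can be checked from the monitor container (ceph_mon is the default kolla container name here; adjust if yours differs):
docker exec ceph_mon ceph -s
docker exec ceph_mon ceph osd tree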
The solution for a setup with one single disk with multiple partitions is to switch the Ceph storage type in the kolla-ansible setup from bluestore to the older filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive .
With filestore you need one partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required...). To switch your kolla installation to the filestore OSD type and override the default bluestore, edit the [storage] section of the all-in-one file by adding ceph_osd_store_type=filestore next to the host, as follows; a labeling sketch follows the inventory snippet below.
[storage]
localhost ansible_connection=local ceph_osd_store_type=filestore
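A rough labeling sketch for the filestore variant on a GPT-partitioned disk, using the questioner's /dev/vda layout as an example (partition numbers are assumptions; check parted print first):
parted /dev/vda -- name 2 KOLLA_CEPH_OSD_BOOTSTRAP_FOO     # data partition
parted /dev/vda -- name 3 KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J   # journal partition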
The above method has been tested with ansible==2.9.7, kolla-ansible==9.1.0, and the OpenStack Train release as well as prior releases.
I'm launching a fresh Amazon EC2 image running Amazon Linux (Amazon Linux AMI 2014.03.2 (HVM)). I'm using a larger instance type (e.g. m3.large), which comes with 1 x 32 GB (SSD) instance storage according to the launch page.
However, I'm not able to find that storage anywhere. The documentation I found mentions that it should be listed by lsblk and mounted under /media, but /media is empty and lsblk only shows me the 8 GB root disk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
Does anyone have an idea how to access the 'default' instance storage that comes with my instance?
I have since received a solution via the AWS Forums. See here: https://forums.aws.amazon.com/thread.jspa?messageID=565737
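In case the link goes stale, the gist (as I understand it) is that instance store volumes on HVM AMIs only appear if they were added as a block device mapping at launch; once mapped (e.g. as /dev/sdb), something along these lines makes the volume usable — the device name is an assumption, so check lsblk first:
lsblk                                    # the ephemeral volume typically shows up as xvdb
sudo mkfs -t ext4 /dev/xvdb              # format it (this destroys any data on it)
sudo mkdir -p /media/ephemeral0
sudo mount /dev/xvdb /media/ephemeral0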
I have a Vagrant-configured VirtualBox VM running Ubuntu 14.04. Whenever I do a large MySQL import within the box, I notice that the import is incomplete. If I try to restart MySQL, this error occurs:
[FAIL] /etc/init.d/mysql: ERROR: The partition with /var/lib/mysql is too full! ... failed!
I resized the VMDK image by cloning to VDI, resizing, and cloning back to VMDK as I had read this might fix the problem.
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.1G  8.7G  8.0K 100% /
I'm aware that the /tmp filesystem is mounted separately in Ubuntu, so I even tried changing the tmp directory in mysql.ini, and that didn't work either.
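For reference, a tmpdir override of that sort typically goes in the [mysqld] section (illustrative path only; the directory must exist and be writable by the mysql user):
[mysqld]
tmpdir = /var/tmp/mysql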
$ VBoxManage showhdinfo <guid>
Parent UUID: base
State: created
Type: normal (base)
Storage format: VMDK
Format variant: dynamic default
Capacity: 24000 MBytes
Size on disk: 6894 MBytes
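Note that enlarging the VMDK alone does not grow the partition or filesystem inside the guest. A rough sketch of the remaining steps on Ubuntu 14.04, assuming the root filesystem is ext4 on /dev/sda1 (verify the device with lsblk first):
sudo apt-get install cloud-guest-utils   # provides growpart (on some releases the package is cloud-utils)
sudo growpart /dev/sda 1                 # grow partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1                 # grow the ext4 filesystem online
df -h /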
I am running a micro instance in EC2 with 592 MB of available RAM.
Jenkins was crashing with out-of-memory build errors while running an UPDATE on a big SQL table in the backend.
Disk utilisation is 83%, with 6 GB of the 8 GB EBS volume used.
sudo du -hsx * | sort -rh | head -10   # run from /
2.7G opt
1.5G var
1.2G usr
I found only 6 MB was free (using free -m) with these services running:
(i) LAMPP
(ii) Jenkins
(iii) MySQL 5.6
I stopped LAMPP, and that freed 70 MB.
Then I closed Jenkins, which brought it to 320 MB free.
Closing MySQL 5.6 brought it up to 390 MB free.
So about 200 MB of RAM is still being used with none of my services running.
Is 200 MB of RAM the minimum required for an Ubuntu micro instance running on Amazon EC2?
No, I believe it can run until RAM is 100% used.
If a task requires more memory than is available, the task is killed.
To free up more memory, you can run this from your terminal:
sudo apt-get autoremove
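To see where the remaining ~200 MB goes, it can also help to list the top memory consumers (base daemons, kernel structures and caches typically account for it):
free -m
ps aux --sort=-%mem | head -n 10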