As the title says, I want to get the mount point for a given path. For example, running the command df /etc gives:
(My system is macOS Catalina 10.15.1, go version go1.13.3 darwin/amd64.)
➜ / df /etc
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1s1 976490576 628632776 308713384 68% 2457719 4879995161 0% /System/Volumes/Data
then "/System/Volumes/Data" is what I need.
So how can I get the mount point from a given path in Go? (I tried syscall.Stat_t{}, but failed)
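One way to do this in Go, offered here as a sketch rather than a tested answer: on darwin, syscall.Statfs fills a Statfs_t whose Mntonname field holds the mount point of the filesystem containing the path. Roughly:

package main

import (
    "fmt"
    "syscall"
)

// mountPoint returns the mount point of the filesystem containing path,
// read from the darwin-specific Mntonname field of syscall.Statfs_t.
func mountPoint(path string) (string, error) {
    var st syscall.Statfs_t
    if err := syscall.Statfs(path, &st); err != nil {
        return "", err
    }
    // Mntonname is a NUL-terminated C string stored in a fixed-size int8 array.
    buf := make([]byte, 0, len(st.Mntonname))
    for _, c := range st.Mntonname {
        if c == 0 {
            break
        }
        buf = append(buf, byte(c))
    }
    return string(buf), nil
}

func main() {
    mp, err := mountPoint("/etc")
    if err != nil {
        panic(err)
    }
    fmt.Println(mp) // expected: /System/Volumes/Data on the system above
}

The golang.org/x/sys/unix package exposes the same statfs call if you prefer to avoid the frozen syscall package; the field names are the same there, though the array element types may differ.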
Related
The exFAT partition of my external drive is not mounted automatically on my Mac.
I created a mytest directory in /Volumes and tried to mount the partition there, but I get the following error and cannot mount it.
Mounting works for everything else, but I cannot mount this partition.
How can I get it mounted?
sh-3.2# diskutil list rdisk7
/dev/disk7 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *8.0 TB disk7
1: Microsoft Reserved 16.8 MB disk7s1
2: Microsoft Basic Data myexfhdtosi 4.4 TB disk7s2
3: Microsoft Basic Data 1.6 TB disk7s3
4: Apple_HFS myhfshddtosi 1000.0 GB disk7s4
sh-3.2# mount -v -t exfat /dev/rdisk7s2 /Volumes/mytest
mount_exfat: /dev/rdisk7s2 on /Volumes/mytest: Block device required
mount: /Volumes/mytest failed with 71
Try killing the fsck process and mounting again:
sudo pkill -f fsck
See this thread for a detailed discussion.
I'm trying to expand the disk space of my virtual server (Homestead) using the Parallels provider on a MacBook.
The default disk size is 18GB:
vagrant@homestead:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 964M 0 964M 0% /dev
tmpfs 199M 7.7M 192M 4% /run
/dev/mapper/homestead--vg-root 18G 11G 5.9G 65% /
tmpfs 994M 8.0K 994M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/mapper/homestead--vg-mysql--master 9.8G 234M 9.1G 3% /homestead-vg/master
10.211.55.2:/Users/orange/code 234G 234G 165G 59% /home/vagrant/code
vagrant 234G 69G 165G 30% /vagrant
tmpfs 199M 0 199M 0% /run/user/1000
I don't understand why the default disk size of the VM is 64G while the Homestead server only sees 18GB:
☁ homestead-7.pvm prl_disk_tool resize --info --units G --hdd harddisk1.hdd
Operation progress 100 %
Disk information:
SectorSize: 512
Size: 64G
Minimum: 64G
Minimum without resizing the last partition: 64G
Maximum: 2047G
Warning! The last partition cannot be resized because its file system is either not supported or damaged.
Make sure that the virtual HDD is not used by another process.
Warning! The disk image you specified has snapshots.
You need to delete all snapshots using the prlctl command line utility before resizing the disk.
I've searched a lot, but I still haven't solved it.
How can I solve this?
(Sorry for my bad English.)
Based on the discussion in the reported GitHub issue, the following command should help:
lvextend -r -l +100%FREE /dev/mapper/homestead--vg-root
There's a commit that should handle the issue, but it hasn't been released in a tagged Vagrant version yet.
The reason for this whole dance is that Vagrant packages the VirtualBox disk as a .vmdk, which doesn't have the same resizing options as a .vdi does.
I'm running a Spark job that consumes 50GB+ of data, and my guess is that shuffle data written to disk is causing the space to run out.
I'm using the current Spark 1.6.0 EC2 script to build my cluster; close to finishing, I get this error:
16/03/16 22:11:16 WARN TaskSetManager: Lost task 29948.1 in stage 3.0 (TID 185427, ip-172-31-29-236.ec2.internal): java.io.FileNotFoundException: /mnt/spark/spark-86d64093-d1e0-4f51-b5bc-e7eeffa96e82/executor-b13d39ba-0d17-428d-846a-b1b1f69c0eb6/blockmgr-12c0d9df-3654-4ff8-ba16-8ed36ca68612/29/shuffle_1_29948_0.index.3065f0c8-2511-48ab-8bf0-d0f40ab524ba (No space left on device)
I've tried various EC2 instance types, but they all seem to have just 8GB mounted for / when they start. df -h doesn't show any other storage mounted for /mnt/spark, so does that mean it's only using the little bit of space that's left?
My df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 4.1G 3.7G 53% /
devtmpfs 30G 56K 30G 1% /dev
tmpfs 30G 0 30G 0% /dev/shm
How do you expand the disk space? I've created my own AMI for this, based on the default Amazon Spark one, because of the extra packages I need.
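As an aside (not from the thread), if you want to check programmatically how much space a path such as /mnt/spark really has, the same statfs approach used for the Go question at the top works here too; a minimal sketch, assuming linux/amd64 field types:

package main

import (
    "fmt"
    "os"
    "syscall"
)

// freeBytes reports the free space, in bytes, on the filesystem that
// contains path. Field types here match linux/amd64.
func freeBytes(path string) (uint64, error) {
    var st syscall.Statfs_t
    if err := syscall.Statfs(path, &st); err != nil {
        return 0, err
    }
    return st.Bavail * uint64(st.Bsize), nil
}

func main() {
    for _, p := range []string{"/", "/mnt/spark"} {
        n, err := freeBytes(p)
        if err != nil {
            fmt.Fprintln(os.Stderr, p+":", err)
            continue
        }
        fmt.Printf("%s: %.1f GiB free\n", p, float64(n)/(1<<30))
    }
}

Comparing the numbers for / and /mnt/spark quickly shows whether the shuffle directory shares the small root volume.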
I'm running Spark jobs on a standalone cluster (built with spark-ec2 1.5.1) from crontab, and my worker nodes are getting hammered by the app files that each job creates.
java.io.IOException: Failed to create directory /root/spark/work/app-<app#>
I've looked at http://spark.apache.org/docs/latest/spark-standalone.html and changed my spark-env.sh (located in spark/conf on the master and worker nodes) to reflect the following:
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=3600"
Am I doing something wrong? I've added the line to the end of each spark-env.sh file on the master and both workers.
On a possibly related note, what are these mounts pointing to? I'd use them, but I don't want to do so blindly.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 8256952 0 100% /
tmpfs 3816808 0 3816808 0% /dev/shm
/dev/xvdb 433455904 1252884 410184716 1% /mnt
/dev/xvdf 433455904 203080 411234520 1% /mnt2
Seems like a 1.5.1 issue; I'm no longer using the spark-ec2 script to spin up the cluster. I ended up creating a cron job to clear out the directory, as mentioned in my comment.
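The cron job itself isn't shown in the thread. As a rough illustration only (the work directory comes from the error message above; the app- prefix and the 24-hour TTL are assumptions), a small Go stand-in for that cleanup could look like this:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "time"
)

// Hypothetical stand-in for the cron cleanup: delete app-* directories
// under the Spark work dir that are older than maxAge. The TTL is an
// assumed value, not one taken from the thread.
func main() {
    workDir := "/root/spark/work"
    maxAge := 24 * time.Hour
    cutoff := time.Now().Add(-maxAge)

    entries, err := os.ReadDir(workDir)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, e := range entries {
        if !e.IsDir() || !strings.HasPrefix(e.Name(), "app-") {
            continue
        }
        info, err := e.Info()
        if err != nil {
            continue
        }
        if info.ModTime().Before(cutoff) {
            dir := filepath.Join(workDir, e.Name())
            fmt.Println("removing", dir)
            if err := os.RemoveAll(dir); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
}

In practice the spark.worker.cleanup.* settings in spark-env.sh, shown earlier, are the intended way to do this; a sketch like the above is only a fallback when those settings don't take effect.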
I'm using Amazon EMR and I'm able to run most jobs fine. I'm running into a problem when I start loading and generating more data within the EMR cluster. The cluster runs out of storage space.
Each data node is a c1.medium instance. According to the links here and here, each data node should come with 350GB of instance storage. Through the ElasticMapReduce Slave security group I've been able to verify in my AWS Console that the c1.medium data nodes are running and are instance-store backed.
When I run hadoop dfsadmin -report on the namenode, each data node has only about 10GB of storage. This is further verified by running df -h:
hadoop@domU-xx-xx-xx-xx-xx:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 2.6G 6.8G 28% /
tmpfs 859M 0 859M 0% /lib/init/rw
udev 10M 52K 10M 1% /dev
tmpfs 859M 4.0K 859M 1% /dev/shm
How can I configure my data nodes to launch with the full 350GB storage? Is there a way to do this using a bootstrap action?
After more research and a post on the AWS forum, I found a solution, although not a full understanding of what happened under the hood. I thought I would post this as an answer, if that's okay.
It turns out there is a bug in AMI version 2.0, which of course was the version I was trying to use. (I had switched to 2.0 because I wanted Hadoop 0.20 to be the default.) The bug in AMI version 2.0 prevents instance storage from being mounted on 32-bit instances, which is what c1.mediums launch as.
By telling the CLI tool to use the "latest" AMI version, the problem was fixed and each c1.medium launched with the expected 350GB of storage.
For example:
./elastic-mapreduce --create --name "Job" --ami-version "latest" --other-options
More information about using AMIs and "latest" can be found here. Currently "latest" points to AMI 2.0.4. AMI 2.0.5 is the most recent release, but it looks like it is also still a little buggy.