I created my EC2 machine using a community image of CentOS 6.3 x64 and added a 35 GB disk. Now when I run # df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.2G 6.4G 16% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
My disk is 35 GB, but it shows only 8 GB on / and about 7 GB as tmpfs.
I tried to use resize2fs, but it didn't work on CentOS; the disk has an ext4 partition:
# resize2fs /dev/xvda
resize2fs 1.41.12 (17-May-2010)
resize2fs: Device or resource busy while trying to open /dev/xvda
Couldn't find valid filesystem superblock.
Even if I try resize2fs /dev/xvda1, it says the device has nothing to do.
Any idea or other way? It's my root disk (/), so I can't unmount it.
I found a way to do it. resize2fs was not working in my case (I'm not sure why; it kept saying "device or resource busy"), but I found a very good article on resizing a disk with fdisk: you grow the partition by deleting it, recreating it larger, and making it bootable again. All it requires is a reboot, and it won't affect your data as long as you use the same start cylinder.
# df -h <<1>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 6.0G 2.0G 3.7G 35% /
tmpfs 15G 0 15G 0% /dev/shm
# fdisk -l <<2>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders
Units = cylinders of 1649 * 512 = 844288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2 7632 6291456 83 Linux
# fdisk /dev/xvda <<3>>
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): u <<4>>
Changing display/entry units to sectors
Command (m for help): p <<5>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 12584959 6291456 83 Linux
Command (m for help): d <<6>>
Selected partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): <<11>>
Using default value 41943039
Command (m for help): p <<12>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 2048 41943039 20970496 83 Linux
Command (m for help): a <<13>>
Partition number (1-4): 1 <<14>>
Command (m for help): w <<15>>
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
# reboot <<16>>
<wait>
# df -h <<17>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 2.0G 17G 11% /
tmpfs 15G 0 15G 0% /dev/shm
# resize2fs /dev/xvda1 <<18>>
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 5242624 blocks long. Nothing to do!
The following very simple steps worked very well for me:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 8G 0 part /
Perform the following commands as root:
# yum install cloud-utils-growpart
# growpart /dev/xvda 1
# reboot
After the reboot:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
I got the same problem. All I needed to do was:
reboot the instance
run the command
sudo resize2fs -f /dev/xxxx
and it worked well for me.
An Addition to Adeel Ahmad's Answer:
If you are attempting to start an instance from an AMI with a swap partition, then additional steps will have to be performed.
For example, if the AMI contains the following layout:
# fdisk -l
Disk /dev/xvde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
If I have to upgrade my capacity to 20 GB, I will create an AMI and launch another instance with 20 GB of space. After this, if I try the above steps, the disk space won't increase, because the xvde2 swap partition sits between xvde1 and the new space.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 9.8G 7.5G 1.8G 81% /
$ fdisk -l
Disk /dev/xvde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
$ resize2fs /dev/xvde1
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 2592481 blocks long. Nothing to do!
In this case, do the following:
Delete both the partitions
Create new Primary partition with the new required size minus the size for swap space
Add bootable flag for this partition
Create second partition
Mark it as swap
write changes and reboot
Extend partition 1
Setup swap
OR
Deleting partition 1
Command (m for help): d <<6>>
Partition number (1-4): 1 <<6.0.1>>
Selected partition 1
Deleting partition 2
Command (m for help): d <<6.2>>
Selected partition 2
Creating resized primary partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):<<NEW_UPPER_LIMIT>> <<11>>
TAKE CARE: 2048 should be replaced by your original starting sector or the system
won't boot. NEW_UPPER_LIMIT is the new last sector of partition 1; everything after
it is left for swap. To keep the same amount of swap, take the size of the original
swap partition in sectors (end minus start, plus one) and subtract it from 41943039
(or whatever your disk's last sector is).
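As a worked example (hypothetical numbers, not taken from the session above): suppose the original swap partition ran from sector 41680896 to sector 41943039.
$ echo $((41943039 - 41680896 + 1))   # original swap size: 262144 sectors (128 MiB)
262144
$ echo $((41943039 - 262144))         # NEW_UPPER_LIMIT, the last sector for partition 1
41680895
Entering 41680895 as the last sector of partition 1 leaves sectors 41680896-41943039 free for recreating the swap partition at its original size.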
Creating swap partition
Command (m for help): n <<12>>
Command action
e extended
p primary partition (1-4)
p <<13>>
Partition number (1-4): 2 <<14>>
First sector (<<NEW_UPPER_LIMIT+1>>-41943039, default <<NEW_UPPER_LIMIT+1>>): <<USE_DEFAULT>> <<15>>
Last sector, +sectors or +size{K,M,G}(<<NEW_UPPER_LIMIT+1>>-41943039,default 41943039):<<USE_DEFAULT>> <<16>>
Using default value 41943039
Adding bootable bit for partition 1
Command (m for help): a <<17>>
Partition number (1-4): 1 <<18>>
Marking partition 2 as swap
Command (m for help): l <<19>>
Now you will see a list of partition type codes. Note the one corresponding to Linux swap (82).
Command (m for help): t <<20>>
Partition number (1-4): 2 <<21>>
Hex Code (type l to list codes) : 82 <<22>>
Write changes and reboot
Command (m for help): w <<23>>
The partition table has been altered!
....
$ sudo reboot
After the reboot, run:
resize2fs /dev/xvde1
This will resize your filesystem.
Now to use the second partition as swap
$ mkswap /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
$ swapon /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
You can check the /proc/swaps file to verify
$ cat /proc/swaps
Now add the following at the end of /etc/fstab (open it with nano, vi, etc.) so these changes persist:
/dev/<<SECOND SWAP PARTITION>> swap swap defaults 0 0
Save and Exit
Reboot and check
I faced the same issue with my Debian 8 EC2 instance and got the error below:
FAILED: failed to get CHS from /dev/xvda
Solution:
$ sudo parted /dev/xvda resizepart 1
Warning: Partition /dev/xvda1 is being used. Are you sure you want to continue?
Yes/No? yes
End? [8588MB]? 100
$ sudo resize2fs /dev/xvda1
$ lsblk
$ df -h
You will see that the EBS volume has now increased.
Related
I have a question about my disk partition. Here is the result of the fdisk -l command:
Disk /dev/loop0: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvda: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00050d75
Device Boot Start End Blocks Id System
/dev/xvda1 * 1 26109 209714176 83 Linux
As you can see, I have 500 GB of space (/dev/xvda) and our cPanel is using only 200 GB (/dev/xvda1).
Here is the result of the lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 4G 0 loop /tmp
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 200G 0 part /
Here you can see I have a 500 GB disk.
My question is: how can I resize xvda1 so it uses the available space, or how can I create new disk space for cPanel to use?
My aim is to increase the disk space in cPanel, but I don't know how this is possible.
Thanks for your help!
You can use "growpart" to resize the partition and then reszie the file system.
install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils
resize partition
growpart /dev/xvda 1
check the result
lsblk
resize filesystem
resize2fs /dev/xvda1
Check after resizing
df -h
Take a snapshot of your volume before trying this.
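If you prefer doing that from the command line, the AWS CLI can create the snapshot (the volume ID below is only a placeholder; substitute your own):
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before resizing root partition"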
Run the following:
sudo yum install cloud-guest-utils
growpart /dev/xvda 1
then reboot
Hi, I'm a newbie to embedded Linux. I'm following this tutorial (https://e2e.ti.com/support/embedded/linux/f/354/t/398780?Script-to-Erase-Emmc-independently-Beagle-Bone-Black) to flash my Linux system to the BeagleBone eMMC.
But I have an error: umount: can't umount /dev/mmcblk1p1: Invalid argument
This is my session:
Disk /dev/mmcblk1: 3825 MB, 3825205248 bytes
4 heads, 16 sectors/track, 116736 cylinders
Units = cylinders of 64 * 512 = 32768 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 * 2048 2536 15648 e Win95 FAT16 (LBA)
/dev/mmcblk1p2 1 2047 65496 83 Linux
Partition table entries are not in disk order
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table
[ 235.274729] mmcblk1: p1 p2
root#beaglebone:/# umount /dev/mmcblk1p1
umount: can't umount /dev/mmcblk1p1: Invalid argument
Sorry, my English is not good. Does anybody have any idea what I did wrong, or did I miss something?
This is an error in the script you are following. If you have created new partitions without a file system, you would not expect them to be mounted.
Creating the 2nd partition in sectors 1 - 2047 is probably not what you want to do. You should use all the space after partition 1.
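For both points, a rough sketch (device name and numbers assumed from the fdisk listing above; double-check them on your own board before writing anything to the eMMC):
# If this prints nothing, the partition is not mounted and the umount error is harmless:
grep mmcblk1p1 /proc/mounts
# In fdisk /dev/mmcblk1, delete partition 2 (d, 2) and recreate it (n, p, 2) starting at
# the first free unit after partition 1's end (2537 in the listing above, not 1), accept
# the default end so it uses all remaining space, then write the table with w.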
I'm attempting to rebuild a development VM using the debian/stretch64 box (i.e. Debian 9). I'm using Vagrant 2.0.2 with VirtualBox 5.2.6 on macOS Sierra 10.12.6.
In my Vagrantfile I've specified a 30GB disk:
config.disksize.size = "30GB"
Virtual Media Manager (File menu in VirtualBox) shows the "virtual size" (capacity) of the stretch.vdi as 30GB. VBoxManage showhdinfo "stretch.vdi" also gives me the same information and indicates it's a dynamic default (i.e. resizable) disk, unlike .vmdk.
However, Debian reports a much smaller file system:
/dev/sda1 8.7G 8.7G 0 100% /
(it did have some space, but rsyncing a large shared folder on boot keeps filling it up).
Before it was full:
I ran sudo cfdisk /dev/sda and found the volume was reporting 20.1G free space and only 8.7G on /dev/sda1.
I also did apt-get install lvm2 so I would have the tools with which to manage volumes.
I then used fdisk (not the curses version) to reconfigure the partitions (i.e. I deleted them all, made the first one 29G and added an extended 1G partition with the 'type' set to Linux swap).
Although I saw the message "Re-reading the partition table failed. Device or resource busy.", after a reboot the cfdisk /dev/sda output all looked correct:
Device Boot Start End Sectors Size Id Type
>> /dev/sda1 2048 60819455 60817408 29G 83 Linux
/dev/sda2 60819456 62914559 2095104 1023M 5 Extended
└─/dev/sda5 60821504 62914559 2093056 1022M 82 Linux swap / Solaris
Still however, df returns:
/dev/sda1 8.7G 8.7G 0 100% /
Various tutorials mention pvcreate and pvresize, however for the latter I get:
sudo pvresize /dev/sda
Failed to find physical volume "/dev/sda".
0 physical volume(s) resized / 0 physical volume(s) not resized
Here's my complete fdisk -l output:
Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe133a040
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 60819455 60817408 29G 83 Linux
/dev/sda2 60819456 62914559 2095104 1023M 5 Extended
/dev/sda5 60821504 62914559 2093056 1022M 82 Linux swap / Solaris
What else should I be doing to get Debian to see the full 29G?
Fixed simply with:
sudo resize2fs /dev/sda1
Now I have:
/dev/sda1 29G 8.7G 19G 32% /
Which I found in this answer
(Have voted to close my own question).
My server at Hetzner crashed. Not all data was backed up. Please help me mount and repair the data from the crashed disk.
Any help would be very much appreciated.
Here is mdstat information:
root#rescue ~ # cat /proc/mdstat
Personalities : [raid1] md2 : active raid1 sdb3[1]
1927689152 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
25149312 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0]
523968 blocks super 1.2 [2/1] [U_]
unused devices: <none>
Fdisk information:
root#rescue ~ # fdisk -l
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006cff0
Device Boot Start End Blocks Id System
/dev/sda1 2048 50333696 25165824+ fd Linux raid autodetect
/dev/sda2 50335744 51384320 524288+ fd Linux raid autodetect
/dev/sda3 51386368 3907027120 1927820376+ fd Linux raid autodetect
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e2728
Device Boot Start End Blocks Id System
/dev/sdb1 2048 50333696 25165824+ fd Linux raid autodetect
/dev/sdb2 50335744 51384320 524288+ fd Linux raid autodetect
/dev/sdb3 51386368 3907027120 1927820376+ fd Linux raid autodetect
Disk /dev/md1: 536 MB, 536543232 bytes
2 heads, 4 sectors/track, 130992 cylinders, total 1047936 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md0: 25.8 GB, 25752895488 bytes
2 heads, 4 sectors/track, 6287328 cylinders, total 50298624 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md2: 1974.0 GB, 1973953691648 bytes
2 heads, 4 sectors/track, 481922288 cylinders, total 3855378304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md2 doesn't contain a valid partition table
/dev/md0 is swap:
root#rescue ~ # mount /dev/md0 ./mnt2
/dev/md0 looks like swapspace - not mounted
mount: you must specify the filesystem type
Successfully mounted /dev/md1:
mount /dev/md1 ./mnt/
But these files are useless:
root#rescue ~/mnt # ls
abi-3.13.0-37-generic boot config-3.13.0-66-generic initrd.img-3.13.0-49-generic System.map-3.13.0-37-generic vmlinuz-3.13.0-37-generic
abi-3.13.0-49-generic config-3.13.0-37-generic grub initrd.img-3.13.0-66-generic System.map-3.13.0-49-generic vmlinuz-3.13.0-49-generic
abi-3.13.0-66-generic config-3.13.0-49-generic initrd.img-3.13.0-37-generic lost+found System.map-3.13.0-66-generic vmlinuz-3.13.0-66-generic
Next one..
root#rescue ~ # mount /dev/md2 ./mnt2
mount: Stale NFS file handle
mdadm says:
root#rescue ~ # mdadm -E -s
ARRAY /dev/md/1 metadata=1.2 UUID=81310c9d:bffcc81e:a2da516c:17950be1 name=rescue:1
ARRAY /dev/md/0 metadata=1.2 UUID=b8c0db78:11d75c0c:ee97c975:7f6dcfd5 name=rescue:0
ARRAY /dev/md/2 metadata=1.2 UUID=df88a4f2:bc751f21:609301cf:7336becf name=rescue:2
Going next..
root#rescue ~ # fsck /dev/md2
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Attempt to read block from filesystem resulted in short read while using the backup blocksfsck.ext4: going back to original superblock
/dev/md2 contains a file system with errors, check forced.
fsck.ext4: A block group is missing an inode table while reading bad blocks inode
This doesn't bode well, but we'll try to go on...
Pass 1: Checking inodes, blocks, and sizes
Group 2's inode table at 33554432 conflicts with some other fs block.
Relocate<y>? yes
Group 2's block bitmap at 33554432 conflicts with some other fs block.
Relocate<y>? yes
Restarting with -y:
root#rescue ~ # fsck /dev/md2 -y
...
Error reading block 24117281 (Attempt to read block from filesystem resulted in short read) while getting next inode from scan. Ignore error? yes
Force rewrite? yes
... (many of them)
...
fsck.ext4: e2fsck_read_bitmaps: illegal bitmap block(s) for /dev/md2
/dev/md2: ***** FILE SYSTEM WAS MODIFIED *****
e2fsck: aborted
/dev/md2: ***** FILE SYSTEM WAS MODIFIED *****
Please give me a direction or any advice on how to proceed.
Looks like I found a solution:
root#rescue ~ # mdadm -A -R /dev/md9 /dev/sda3
mdadm: /dev/md9 has been started with 1 drive (out of 2).
root#rescue ~ # mount /dev/md9 ./mnt
Thanks to https://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/
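One extra precaution that is not part of the original fix: when copying data off a degraded array, mounting it read-only (instead of the plain mount above) avoids any further writes while you recover files:
root#rescue ~ # mount -o ro /dev/md9 ./mnt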
Today I started getting errors on simple operations, like creating small files in vim; bash completion started to complain as well.
Here is the result of df -h :
vagrant#machine:/vagrant$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 38G 249M 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 12K 2.0G 1% /dev
tmpfs 396M 396K 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 148K 876K 15% /tmp
192.168.50.1:/Users/nha/repo/assets 233G 141G 93G 61% /var/www/assets
vagrant 233G 141G 93G 61% /vagrant
So apparently / doesn't have any space anymore? Isn't that weird, since I have space in the other filesystems (or am I misreading something)?
How do I get more space on my VM?
Even though you have space on your Guest OS, the VM is limited. There are a couple of steps required in order to increase the size of your disk:
first, vagrant halt to close your VM
resize disk
VBoxManage clonehd box-disk1.vmdk box-disk1.vdi --format vdi
VBoxManage modifyhd box-disk1.vdi --resize 50000
start VirtualBox and change the configuration of the VM to associate the new disk
use fdisk to resize the disk
you need to create a new partition with the new space and allocate it, so first start the VM and log on as super user
vagrant up && vagrant ssh
su -
The commands (as illustrated from my instance) are:
[root#oracle ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 52.4 GB, 52428800000 bytes
255 heads, 63 sectors/track, 6374 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00041a53
Device Boot Start End Blocks Id System
/dev/sda1 * 1 39 307200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 39 2611 20663296 8e Linux LVM
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (2611-6374, default 2611):
Using default value 2611
Last cylinder, +cylinders or +size{K,M,G} (2611-6374, default 6374):
Using default value 6374
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root#oracle ~]#
note: you might need to change /dev/sda depending on your configuration
create a new physical volume on the new partition (again logged on as super user with su -)
su -
[root#oracle ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 linux lvm2 a-- 19.70g 0
[root#oracle ~]# pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created
[root#oracle ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 linux lvm2 a-- 19.70g 0
/dev/sda3 lvm2 a-- 28.83g 28.83g
[root#oracle ~]# vgextend linux /dev/sda3
Volume group "linux" successfully extended
[root#oracle ~]# lvextend -l +100%FREE /dev/linux/root
[root#oracle ~]# resize2fs /dev/linux/home
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/linux/home is mounted on /home; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/linux/home to 7347200 (4k) blocks.
The filesystem on /dev/linux/home is now 7347200 blocks long.
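If the volume that filled up is / rather than /home, the same pair of commands would target the root logical volume instead (a sketch, assuming the root LV really is /dev/linux/root as the lvextend line above suggests, and that it carries an ext filesystem):
[root#oracle ~]# lvextend -l +100%FREE /dev/linux/root
[root#oracle ~]# resize2fs /dev/linux/root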
You can increase space in your box, without losing data or creating new partitions.
Halt your VM;
Go to /home_dir/VirtualBox VMs
Change the file format from .vmdk to .vdi, then use the commands from the answer above to increase the space.
Change the file extension back and change the file name (see the note after this list).
Attach an extended disk to your VM.
VBoxManage storageattach <your_box_name> --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium new_extended_file.vmdk
In your VirtualBox application go to Your_VM -> Settings -> Storage. Click on the controller and choose 'add new disk' below. Choose from existing disks the one you have just expanded.
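Regarding the "change the file extension back" step above: if a plain rename is not accepted by VirtualBox, the resized disk can be converted back explicitly (a sketch; the target file name is taken from the storageattach command above):
VBoxManage clonehd box-disk1.vdi new_extended_file.vmdk --format vmdk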
Here's a step-by-step instruction for expanding the space in your vagrant box or virtual machine.
The easiest way to increase the size of the vagrant box is with the vagrant-disksize plugin.
In your vagrant root folder, run vagrant plugin install vagrant-disksize
Then add the new size to the Vagrantfile:
Vagrant.configure('2') do |config|
...
config.disksize.size = '60GB'
end
Then vagrant halt and vagrant up.
vagrant reload will not work.
I have read that the plugin has issues shrinking disk size if you overshoot.
EDIT:
On Mac, this plugin also resized the partition within the Guest OS (Ubuntu in my case).
On Windows, Vagrant reserves the space on the host OS (it enlarges the disk), but you can't use the space until resizing the partition from within the Guest OS.
I used GParted, but other solutions look simpler, such as: https://nguyenhoa93.github.io/Increase-VM-Partition
I sometimes have to destroy the machine and build it up again, which in my case frees up quite a lot of space. You can do that by running:
vagrant destroy
vagrant up
Please note this will result in database data being lost.