What are some (reliable) tests to find the disk a certain partition is on and put that result into a variable?
For example, output of lsblk:
...
sda 8:0 0 9.1T 0 disk
└─sda1 8:1 0 9.1T 0 part /foopath
...
mmcblk0 179:0 0 29.7G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /barpath
└─mmcblk0p2 179:2 0 29.5G 0 part /foobarpath
If partition="/dev/mmcblk0p2", how can I put mmcblk0 as the disk it is a part of into a variable? Or similarly, if partition="/dev/sda1", how to put sda as the disk it is a part of into a variable?
disk=${partition::-1} seemed to be a hack until I encountered partitions such as mmcblk0p1, hence the request for a more reliable test...
The purpose of isolating the disk and putting it in a variable is to pass it to smartctl -n standby /dev/sda to find out whether the disk is currently spinning, etc.
Operating environment is Linux Mint 19.3 and Ubuntu 20.
Any ideas?
Thanks to @KamilCuk and @don_crissti ;)
"Print just the parent device" using lsblk
#!/bin/bash
partition="/dev/sda1"
disk="$(lsblk -no pkname "${partition}")"
We are trying to index large datasets into Elasticsearch, but indexing has stopped because the disk watermark was reached and the nodes were set to read-only.
We ran the command
GET /_cat/allocation?v
and from the output we learned that the disk space allocated to Elasticsearch is 10 GB, of which 95% is occupied.
We have some more free space on our machine that can be allocated to Elasticsearch.
We are trying to figure out how to increase the space allocated to Elasticsearch.
Any pointers would be helpful.
Increase the disk capacity from 10 GB to 100 GB (based on your data needs; in AWS, just resize the EBS volume) and follow the steps below.
Connect to your instance:
[ec2-user ~]$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 8.0G 1.6G 6.5G 20% /
/dev/nvme1n1 xfs 8.0G 33M 8.0G 1% /data
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 30G 0 disk /data
nvme0n1 259:1 0 16G 0 disk
├─nvme0n1p1 259:2 0 8G 0 part /
└─nvme0n1p128 259:3 0 1M 0 part
[ec2-user ~]$ sudo growpart /dev/nvme0n1 1
[ec2-user ~]$ sudo xfs_growfs /
(The root filesystem here is XFS, per df -hT above, so grow it with xfs_growfs; resize2fs applies to ext2/3/4 filesystems.)
Reference: we followed the recommendations from here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
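One hedged addition (not part of the original answer): after space has been added, Elasticsearch may keep indices read-only if the flood-stage watermark block (index.blocks.read_only_allow_delete) was applied; depending on the version, it may need to be cleared manually. This assumes Elasticsearch is reachable on localhost:9200:
$ curl -X PUT "localhost:9200/_all/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index.blocks.read_only_allow_delete": null}'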
A relatively old Dell R620 server (32 cores / 128 GB RAM) worked perfectly for years with Ubuntu. Plain OS install, no virtualization.
2 system disks in mirror (XFS)
6 RAID 5 disks for /var (XFS)
The server is used for a nightly check of a MySQL Xtrabackup file.
Before the reformat and move to CentOS 7, the process would finish by 08:00; now it runs until past noon.
99% of the job is opening a large tar.gz file.
htop: there are only two processes doing anything:
1. gzip -d : about 20% CPU
2. tar zxf Xtrabackup.tar.gz : about 4-7% CPU
iotop: steady at around 3 MB/s read / 20-25 MB/s write, which is about 25% of what I would expect at minimum.
Memory : Used : 1GB of 128GB
The server is fully updated: OS, hardware, and firmware, including the disks' firmware.
iDRAC shows no problems.
Bottom line: the server is not working hard (to say the least), but performance is way off.
Any ideas would be appreciated.
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 2 0 469072 0 130362040 0 0 57 341 0 0 0 0 98 2 0
0 2 0 456916 0 130374568 0 0 3328 24576 1176 3241 2 1 94 4 0
You have blocked processes and also I/O operations (around 20 MB/s). To me this means you have a few processes concurrently accessing disk resources. What you can do to improve performance is, instead of
tar zxf Xtrabackup.tar.gz
use
gzip -dc Xtrabackup.tar.gz | tar xvf -
(note the -c so gzip writes to stdout rather than decompressing in place). The second form adds parallelism, since decompression and extraction run as separate processes, and can benefit from multiple processors. You can also benefit from increasing the pipe (FIFO) buffer; check this answer for some ideas.
Also consider tuning the filesystem where tar's output files are stored.
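A hedged further option (an addition, assuming the pigz package is available): pigz cannot fully parallelize gzip decompression, but it moves reading, writing, and CRC checking into separate threads, which often speeds up exactly this kind of job:
pigz -dc Xtrabackup.tar.gz | tar xf -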
I am adding a physical disk of 8GB for glusterfs storage
physical drive-xvdf, partition-xvdf1
[root@ip-10-xx-x-xx replicated1]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 8G 0 part /data/brick1
Install XFS and format the partition (the device here is xvdf1, per lsblk above):
mkfs.xfs -i size=512 /dev/xvdf1
Now, mount the directory /data/brick1 on the newly created partition:
echo "/dev/xvdf1 /data/brick1 xfs defaults 1 2" >> /etc/fstab
mount -a && mount
[root@ip-10-xx-x-xx replicated1]# gluster volume status test-volume detail
Status of volume: rep-volume
------------------------------------------------------------------------------
Brick : Brick 10.xx.x.xx:/data/brick1/replicated1
Port : 49154
Online : Y
Pid : 2103
File System : xfs
Device : /dev/xvdf1
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 8.0GB
Total Disk Space : 8.0GB
Inode Count : 4193792
Free Inodes : 4193697
There is also another option, gluster volume status test-volume mem.
My question is: what is my brick size here?
Also, can I have multiple bricks in a single partition?
my question is what is my brick size here ?
8 GB
Also, Can i have multiple bricks in a single partition?
You could, for testing purposes, but usually there is a 1:1 mapping between bricks and their mount points.
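A hedged illustration of that (hostnames and volume names below are hypothetical): multiple bricks can share one partition by giving each its own subdirectory under the same mount point:
# Two single-brick volumes sharing the /data/brick1 mount point
mkdir -p /data/brick1/vol1 /data/brick1/vol2
gluster volume create vol1 server1:/data/brick1/vol1
gluster volume create vol2 server1:/data/brick1/vol2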
I created my EC2 machine using a community image of CentOS 6.3 x64 and added a 35 GB disk. Now when I do # df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.2G 6.4G 16% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
My disk is 35 GB, but it shows 8 GB for root and 7 GB as tmpfs.
I tried to use resize2fs, but it didn't work on CentOS. The disk has an ext4 partition.
# resize2fs /dev/xvda
resize2fs 1.41.12 (17-May-2010)
resize2fs: Device or resource busy while trying to open /dev/xvda
Couldn't find valid filesystem superblock.
Even if I try resize2fs /dev/xvda1, it says the device has nothing to do.
Any idea or other way? It's my root disk (/), so I can't unmount it.
I found a way to do this. resize2fs was not working in this case, not sure why, but it said device or resource busy. I found a very good article on resizing a disk using fdisk: we can increase the partition size by deleting and recreating it, then making the partition bootable. All it requires is a reboot, and it won't affect your data as long as you use the same start sector.
# df -h <<1>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 6.0G 2.0G 3.7G 35% /
tmpfs 15G 0 15G 0% /dev/shm
# fdisk -l <<2>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders
Units = cylinders of 1649 * 512 = 844288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2 7632 6291456 83 Linux
# fdisk /dev/xvda <<3>>
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): u <<4>>
Changing display/entry units to sectors
Command (m for help): p <<5>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 12584959 6291456 83 Linux
Command (m for help): d <<6>>
Selected partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): <<11>>
Using default value 41943039
Command (m for help): p <<12>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 2048 41943039 20970496 83 Linux
Command (m for help): a <<13>>
Partition number (1-4): 1 <<14>>
Command (m for help): w <<15>>
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
# reboot <<16>>
<wait>
# df -h <<17>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 2.0G 17G 11% /
tmpfs 15G 0 15G 0% /dev/shm
# resize2fs /dev/xvda1 <<18>>
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 5242624 blocks long. Nothing to do!
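A hedged aside (an assumption, not part of the original answer): the fdisk warning above mentions partprobe(8), so on some kernels the reboot at step <<16>> can be skipped by re-reading the partition table and growing the filesystem online; if partprobe fails with "device or resource busy", the reboot is still required:
# partprobe /dev/xvda
# resize2fs /dev/xvda1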
The following simple steps worked very well for me:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 8G 0 part /
Perform the following command as root:
# yum install cloud-utils-growpart
# growpart /dev/xvda 1
# reboot
After the reboot:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
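A hedged note to append here (not part of the original answer): after growpart enlarges the partition, the filesystem itself may still need to be grown. cloud-init often does this automatically at boot, but if df -h still shows the old size:
For an ext2/3/4 root filesystem:
# resize2fs /dev/xvda1
For an XFS root filesystem:
# xfs_growfs /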
I had the same problem. All I needed to do was:
Reboot the instance
Run the command
sudo resize2fs -f /dev/xxxx
and it worked well for me.
An addition to Adeel Ahmad's answer:
If you are attempting to start an instance from an AMI with a swap partition, then additional steps will have to be performed.
For example, if the AMI contains the following:
# fdisk -l
Disk /dev/xvde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
If I have to upgrade my capacity to 20 GB, I will create an AMI and launch another instance with 20 GB of space. After this, if I try the above steps, the disk space won't increase, as there is an xvde2 partition between xvde1 and the new space.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 9.8G 7.5G 1.8G 81% /
$ fdisk -l
Disk /dev/xvde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
$ resize2fs /dev/xvde1
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 2592481 blocks long. Nothing to do!
In this case, do the following:
Delete both the partitions
Create new Primary partition with the new required size minus the size for swap space
Add bootable flag for this partition
Create second partition
Mark it as swap
Write changes and reboot
Extend partition 1
Setup swap
OR
Deleting partition 1:
Command (m for help): d <<6>>
Partition number (1-4): 1 <<6.0.1>>
Deleting partition 2:
Command (m for help): d <<6.2>>
Selected partition 2
Creating resized primary partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):<<NEW_UPPER_LIMIT>> <<11>>
TAKE CARE: 2048 should be replaced by your original starting sector,
or the system won't boot. NEW_UPPER_LIMIT will be the new upper-limit
sector number, and the rest will be left for swap. To keep the same
swap space, subtract the original swap partition's start sector from
its end sector, then subtract that result from 41943039 (or your
upper limit).
Creating swap partition
Command (m for help): n <<12>>
Command action
e extended
p primary partition (1-4)
p <<13>>
Partition number (1-4): 2 <<14>>
First sector (<<NEW_UPPER_LIMIT+1>>-41943039, default <<NEW_UPPER_LIMIT+1>>): <<USE_DEFAULT>> <<15>>
Last sector, +sectors or +size{K,M,G}(<<NEW_UPPER_LIMIT+1>>-41943039,default 41943039):<<USE_DEFAULT>> <<16>>
Using default value 41943039
Adding bootable bit for partition 1
Command (m for help): a <<17>>
Partition number (1-4): 1 <<18>>
Marking partition 2 as swap
Command (m for help): l <<19>>
Now you will see a list of filesystems. Note the one corresponding to Linux swap (say 82)
Command (m for help): t <<20>>
Partition number (1-4): 2 <<21>>
Hex Code (type l to list codes) : 82 <<22>>
Write changes and reboot
Command (m for help): w <<23>>
The partition table has been altered!
....
$ sudo reboot
After the reboot, run
resize2fs /dev/xvde1
This will resize your filesystem.
Now to use the second partition as swap
$ mkswap /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
$ swapon /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
You can check the /proc/swaps file to verify
$ cat /proc/swaps
Now add the following to /etc/fstab for these changes to be persistent.
At the end of /etc/fstab (open with nano or vi, etc.):
/dev/<<SECOND SWAP PARTITION>> swap swap defaults 0 0
Save and Exit
Reboot and check
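A hedged verification sketch (an addition, using standard tools) for after the final reboot:
$ swapon -s    # lists active swap devices
$ free -m      # the "Swap:" row should show the expected total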
I faced the same issue with my Debian 8 EC2 instance and got the error below:
FAILED: failed to get CHS from /dev/xvda
Solution:
$ sudo parted /dev/xvda resizepart 1
Warning: Partition /dev/xvda1 is being used. Are you sure you want to continue?
Yes/No? yes
End? [8588MB]? 100%
$ sudo resize2fs /dev/xvda1
$ lsblk
$ df -h
You will see that the EBS volume increase has now been applied.