Bash: Find the disk a certain partition is on and put the result into a variable

What are some (reliable) tests to find the disk a certain partition is on and put that result into a variable?
For example, output of lsblk:
...
sda 8:0 0 9.1T 0 disk
└─sda1 8:1 0 9.1T 0 part /foopath
...
mmcblk0 179:0 0 29.7G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /barpath
└─mmcblk0p2 179:2 0 29.5G 0 part /foobarpath
If partition="/dev/mmcblk0p2", how can I put mmcblk0 as the disk it is a part of into a variable? Or similarly, if partition="/dev/sda1", how to put sda as the disk it is a part of into a variable?
disk=${partition::-1} seemed to be a hack until I encountered partitions such as mmcblk0p1, hence the request for a more reliable test...
The purpose of isolating the disk and using variable is to pass it to smartctl -n standby /dev/sda to find if disk is currently spinning, etc.
Operating environment is Linux Mint 19.3 and Ubuntu 20.
Any ideas?

Thanks to @KamilCuk and @don_crissti ;)
"Print just the parent device" using lsblk
#!/bin/bash
partition="/dev/sda1"
disk="$(lsblk -no pkname "${partition}")"

Related

growpart "failed to get start sector" on expanding partition

Following the instructions at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html, I can run:
lsblk
nvme0n1       259:1  0  200G  0 disk
├─nvme0n1p1   259:2  0    1M  0 part
└─nvme0n1p2   259:3  0  100G  0 part /
I then attempt to grow nvme0n1p2 by running:
growpart /dev/nvme0n1 3
FAILED: disk=/dev/nvme0n1 partition=3: failed to get start sector
Any thoughts on what I could be doing wrong? Running this as root. I read the other similar threads but was unable to resolve based on them.
This is because partition 3 does not exist.
The command you run is
growpart /dev/nvme0n1 3
However, the first argument is the drive and the number (3 in this case) is the partition number. As you can see from the lsblk output, there are only two partitions, hence the "failed to get start sector" error.
nvme0n1       259:1  0  200G  0 disk   -> device
├─nvme0n1p1   259:2  0    1M  0 part   -> first partition (leave this alone)
└─nvme0n1p2   259:3  0  100G  0 part / -> your root partition, the one to grow to add room
The command you need to run is
growpart /dev/nvme0n1 2
as this enlarges the partition mounted at /. You can then continue following the guide from this point.
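For completeness, a hedged sketch of the full sequence from the linked AWS guide: grow the partition, then grow the filesystem on it. The filesystem-resize command depends on what the root partition actually uses, which the question does not state:
# Enlarge partition 2 of the NVMe disk into the newly added space
growpart /dev/nvme0n1 2

# Then grow the filesystem itself -- pick the line matching your root FS
xfs_growfs /                 # if the root filesystem is XFS
# resize2fs /dev/nvme0n1p2   # if it is ext2/3/4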

Bad disk performance after moving from Ubuntu to CentOS 7

A relatively old Dell R620 server (32 cores / 128GB RAM) worked perfectly for years with Ubuntu. Plain OS install, no virtualization.
2 system disks in mirror (XFS)
6 RAID 5 disks for /var (XFS)
The server is used for a nightly check of a MySQL Xtrabackup file.
Before the format and move to CentOS 7 the process would finish by 08:00; now it is still running at noon.
99% of the job is opening a large tar.gz file.
htop : there are only two processes doing something :
1. gzip -d : about 20% CPU
2. tar zxf Xtrabackup.tar.gz : about 4-7% CPU
iotop: it's steady at around 3 M/s read / 20-25 M/s write, which is about 25% of what I would expect at minimum.
Memory : Used : 1GB of 128GB
Server is fully updated both OS / HW / Firmware including the disks firmware.
IDRAC shows no problems.
Bottom line : Server is not working hard (to say the least) but performance is way off.
Any ideas would be appreciated.
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 2 0 469072 0 130362040 0 0 57 341 0 0 0 0 98 2 0
0 2 0 456916 0 130374568 0 0 3328 24576 1176 3241 2 1 94 4 0
You have blocked processes and also I/O activity (around 20 MB/s), which suggests to me that a few processes are concurrently accessing the disk. What you can do to improve performance is, instead of
tar zxf Xtrabackup.tar.gz
use
gzip -dc Xtrabackup.tar.gz | tar xvf -
The second form adds parallelism and can benefit from multiple processors. You can also benefit from increasing the pipe (FIFO) buffer; check this answer for some ideas.
Also consider tuning the filesystem where the files extracted by tar are stored.
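To illustrate the buffering idea, a hedged sketch that puts a large user-space buffer between the decompressor and tar (this assumes the mbuffer utility is available; it is not part of the original answer):
# gzip decompresses on one core, mbuffer smooths out bursts with a 1 GiB
# in-memory buffer, and tar extracts on another core.
gzip -dc Xtrabackup.tar.gz | mbuffer -m 1G | tar xf -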

How to make cpuset.cpu_exclusive function of cpuset work correctly

I'm trying to use the kernel's cpuset to isolate my process. To do this, I followed the instructions (2.1 Basic Usage) from the kernel cpusets documentation; however, it didn't work in my environment.
I tried on both my CentOS 7 server and my Ubuntu 16.04 work PC, but it did not work on either.
centos kernel version:
[root@node ~]# uname -r
3.10.0-327.el7.x86_64
ubuntu kernel version:
4.15.0-46-generic
What I have tried is as follows.
root@Latitude:/sys/fs/cgroup/cpuset# pwd
/sys/fs/cgroup/cpuset
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.cpus
0-3
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.mems
0
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.cpu_exclusive
1
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.mem_exclusive
1
root@Latitude:/sys/fs/cgroup/cpuset# find . -name cpuset.cpu_exclusive | xargs cat
0
0
0
0
0
1
root@Latitude:/sys/fs/cgroup/cpuset# mkdir my_cpuset
root@Latitude:/sys/fs/cgroup/cpuset# echo 1 > my_cpuset/cpuset.cpus
root@Latitude:/sys/fs/cgroup/cpuset# echo 0 > my_cpuset/cpuset.mems
root@Latitude:/sys/fs/cgroup/cpuset# echo 1 > my_cpuset/cpuset.cpu_exclusive
bash: echo: write error: Invalid argument
root@Latitude:/sys/fs/cgroup/cpuset#
It just printed the error bash: echo: write error: Invalid argument.
Googling it, I couldn't find the right answer.
As pasted above, before these operations I confirmed that the root cpuset has cpu_exclusive enabled and that none of the CPUs are claimed exclusively by another sub-cpuset.
Using ps -o pid,psr,comm -p $PID, I can confirm that the CPUs can be assigned to a process if I don't care about cpu_exclusive. But I have also observed that if cpu_exclusive is not set, the same CPUs can be assigned to other processes as well.
I don't know whether some prerequisite setting is missing.
What I expect is to "use cpuset to obtain exclusive use of CPUs". Can anybody give any clues?
Thanks very much.
I believe this is a misunderstanding of the cpu_exclusive flag, one I shared. Here is the doc, https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt, quoting:
If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendant, may share any of the same CPUs or
Memory Nodes.
So one possible reason you get bash: echo: write error: Invalid argument is that some other cgroup's cpuset is enabled and conflicts with your echo 1 > my_cpuset/cpuset.cpu_exclusive.
Run find . -name cpuset.cpus | xargs cat to list every cgroup's assigned CPUs.
Assume you have 12 CPUs: if you want to set cpu_exclusive on my_cpuset, you need to carefully modify all the other cgroups to use, e.g., CPUs 0-7, then set the cpus of my_cpuset to 8-11. After all these CPU configurations, you can set cpu_exclusive to 1 (a sketch of this sequence is shown at the end of this answer).
Even so, other processes can still use CPUs 8-11; only tasks that belong to the other cgroups will not use them.
In my case, I had some Docker containers running, which prevented me from setting cpu_exclusive on my cpuset.
Going by the kernel doc, I do not think cgroups alone can reserve CPUs exclusively. One approach (which I know is running in production) is to isolate CPUs (e.g. via the isolcpus kernel boot parameter) and manage CPU affinity/cpusets ourselves.
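A minimal sketch of that sequence, assuming 12 CPUs as in the example above and a single hypothetical sibling cpuset named other_group (in practice there may be several, e.g. Docker's):
#!/bin/bash
cd /sys/fs/cgroup/cpuset

# Restrict every sibling cpuset to CPUs 0-7 so that 8-11 are unshared.
# "other_group" is a placeholder for whatever child cpusets actually exist.
echo 0-7 > other_group/cpuset.cpus

# Create our cpuset and give it the now-unshared CPUs 8-11.
mkdir -p my_cpuset
echo 8-11 > my_cpuset/cpuset.cpus
echo 0    > my_cpuset/cpuset.mems

# Only after no sibling shares these CPUs does this write succeed.
echo 1 > my_cpuset/cpuset.cpu_exclusive

# Move a task into the exclusive cpuset (replace $PID with a real process ID).
echo "$PID" > my_cpuset/tasks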

How to script sfdisk or parted for multiple partitions?

For QA purposes I need to be able to partition a drive via a bash script up to 30 or more partitions for both RHEL and SLES.
I have attempted to do this in bash with fdisk via a "here document", which mostly works but, as you can guess, blows up at various steps. I assume this is because the input commands arrive at the wrong times and get out of sync; only 1 out of 10 runs of my script works correctly.
I have looked at parted and sfdisk and don't really understand how to use these tools.
I have only ever used fdisk.
My issue is that with fdisk you can state something like "new partition +1gb" over and over and this works for me because in my script I don't need to keep track of prior partitions or remaining space or do any calculations. Every time I run this function this just makes an additional 1gb partition from any unused space.
Is there a way to use parted or sfdisk (or any other tool that would already be a part of these distros) so that I could script a loop from 1 to x operations without having to do this accounting of remaining space? Does anyone have any examples they could share?
Update
Here is an example of one of my functions. At the start of the script we ask the user for the number of partitions to create, their size (this is static for all), and type of FS if any. This functions creates partition 1 thru 3 then a different function handles the extended (4th) and another handles 5th to nn.
As I said before, this script is fully functional; my problem is that at times the commands sent to fdisk seem to arrive with the wrong timing, which breaks the entire script and therefore any automation.
So the commands are being sent like this:
n
p
1
+1M
w
I have been reading up on fdisk and have learned it is not well suited to scripting, so what I think is happening is that while fdisk is still asking for the p, my script already thinks it's time to send the 1.
The thing about fdisk that worked for me is that after you specify the partition number it has already calculated the next free sector, so all I have to do at that point is send a blank line for the start and then +1M for the size. parted and sfdisk don't appear to work this way as far as I can tell, and I am still too new at this to understand how to automate those tools.
Create1to3Primary_Func() {
    Size=\+$partSize\MB
    for i in {1..3}
    do
        echo " this loop i= $i"
        echo "Creating Partition $i on $targetFull as $targetFull$i using Create1to3Primary_Func()"
        rm -f /tmp/myScript
        echo -e "n" >> /tmp/myScript
        echo -e "p" >> /tmp/myScript
        echo -e "$i" >> /tmp/myScript
        echo -e " " >> /tmp/myScript
        echo -e "$Size" >> /tmp/myScript
        echo -e "w" >> /tmp/myScript
        echo -e "EOF" >> /tmp/myScript
        fdisk $targetFull < /tmp/myScript
        echo " sleeping Create1to3Primary_Func()"
        sleep 4s
        if [ "$RawOrFs" == "f" ]; then
            mkfsCMD="mkfs.$fsType"
            mkfsFullTarget="$targetFull$i"
            cmdline="$mkfsCMD $mkfsFullTarget -L 'Partition$i'"
            echo "Creating $fsType File System on $mkfsFullTarget"
            $cmdline
        fi
        void="/mnt/mymnt$i"
        if [ ! -d $void ] ; then
            echo "Creating Mount Point /mnt/mymnt$i"
            void="/mnt/mymnt$i"
            mkdir $void
        fi
        echo "Part Probe on $targetFull "
        partprobe $targetFull ; sleep 4s
    done
}
Not sure I fully get what you want, but you may be interested in the fact that sfdisk can dump a partition layout and then use that layout to partition other disks. For instance:
sfdisk -d /dev/sda > mydiskpartitionslayout
Then, in your script (with care, of course), you can run:
sfdisk /dev/sdx < mydiskpartitionslayout
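If the same layout needs to be replayed onto several QA drives, a hedged sketch of a loop around the two commands above (device names are placeholders; double-check the targets, since this overwrites partition tables):
#!/bin/bash
# Capture the reference layout once...
sfdisk -d /dev/sda > mydiskpartitionslayout

# ...then replay it onto each target disk (placeholders -- adjust to your hardware).
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    sfdisk "$disk" < mydiskpartitionslayout
done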
sfdisk
sfdisk is a Scripted version of fdisk
It is part of util-linux, just like fdisk, so availability should be the same.
A partition table with a single partition that takes the whole disk can be created with:
echo 'type=83' | sudo sfdisk /dev/sdX
and more complex partition tables are explained below.
To generate an example script, get the setup of one of your disks:
sudo sfdisk -d /dev/sda > sda.sfdisk
Sample output on my Lenovo T430 Windows 7 / Ubuntu dual boot:
label: dos
label-id: 0x7ddcbf7d
device: /dev/sda
unit: sectors
/dev/sda1 : start= 2048, size= 3072000, type=7, bootable
/dev/sda2 : start= 3074048, size= 195430105, type=7
/dev/sda3 : start= 948099072, size= 28672000, type=7
/dev/sda4 : start= 198504446, size= 749594626, type=5
/dev/sda5 : start= 198504448, size= 618891264, type=83
/dev/sda6 : start= 940277760, size= 7821312, type=82
/dev/sda7 : start= 817397760, size= 61437952, type=83
/dev/sda8 : start= 878837760, size= 61437500, type=83
Once you have the script saved to a file, you can apply it to sdX with:
sudo sfdisk /dev/sdX < sda.sfdisk
For sfdisk input, you can just omit the device names, and use lines of type:
start= 2048, size= 3072000, type=7, bootable
They are just ignored if present, and the device name is taken from the command line argument.
Some explanations:
header lines: all optional:
label: type of partition table. dos (MBR) is the old and widely supported one, gpt the new shiny thing.
unit: only sector is supported. One sector usually equals 512 bytes; find it with cat /sys/block/sda/queue/hw_sector_size. See also: https://unix.stackexchange.com/questions/2668/finding-the-sector-size-of-a-partition
device: informative only I think
partition lines:
start: offset inside the disk at which the partition starts.
start has very good defaults and can often be omitted:
on the first line, start is 2048 sectors, i.e. 1 MiB (2048 × 512 bytes), which is a sane default for disk compatibility
on subsequent lines, start defaults to the first unallocated position
size: man sfdisk says: The default value of size indicates "as much as possible". So to fill the disk with a single partition use: /dev/sda : start=2048, type=83
type: magic byte stored on the boot sector for each partition entry. Possible values: https://en.wikipedia.org/wiki/Partition_type On this example we observe:
7 (sda1, 2 and 3): filesystems that Windows supports: preinstalled Windows partitions and the Lenovo recovery partitions. The labels shown by sudo blkid help identify them.
5 (sda4): extended primary partition, which will contain other logical partitions (because we can only have 4 primary partitions with MBR)
83 (sda5, 7 and 8): partitions which Linux supports. For me, one home and two root partitions with different Ubuntu versions.
82 (sda6): swap
fdisk can also read sfdisk scripts with the I command, which "sources" them during an interactive fdisk session, allowing you further customization before writing the partition.
Tested on Ubuntu 16.04, sfdisk 2.27.1.
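To address the original goal of creating many equal-sized partitions without tracking free space by hand, a hedged sketch: with start omitted, sfdisk places each partition in the first free region, so the script only has to emit one line per partition. A GPT label is assumed here since MBR allows only four primary partitions, and the L type shortcut stands for a Linux partition:
#!/bin/bash
disk=/dev/sdX    # placeholder -- set to the actual QA drive
count=30         # number of 1 GiB partitions to create

{
    echo "label: gpt"
    for _ in $(seq 1 "$count"); do
        echo "size=1GiB, type=L"   # start omitted: next free region is used
    done
} | sfdisk "$disk"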
Format and populate the partitions of an image file without sudo
This is a good way to learn to use sfdisk without blowing up your hard disks: How to create a multi partition SD disk image without root privileges?
An approach I like (which I saw in this article) is to "script" the fdisk input directly, since it's smarter than sfdisk about creating a partition "until the end of the disk" or "2 GB large". Example:
echo "d
1
d
2
d
3
n
p
1
+2G
n
p
2
w
" | fdisk /dev/sda
This script deletes up to 3 existing partitions, creates a 2 GB partition (e.g. swap) and then creates a partition that would extend over the remaining disk space.
In contrast, if a dumped partition layout were replayed with sfdisk, it would not cover the whole disk if more space were available.
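In the same spirit, the OP's loop can feed fdisk in a single pass instead of one invocation per partition. A hedged sketch, assuming an empty MBR disk and three 1 GB primary partitions, with the blank line accepting fdisk's default first sector:
#!/bin/bash
disk=/dev/sdX   # placeholder -- set to the actual target disk

{
    for i in 1 2 3; do
        printf 'n\np\n%s\n\n+1G\n' "$i"   # new, primary, number, default start, +1G
    done
    printf 'w\n'                          # write the table once at the end
} | fdisk "$disk"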
Automating repetitive tasks is the norm in automation, and we need a way to automatically provide answers to interactive programs if we are to include them in our scripts.
This is where a program called "Expect" steps in. For Red Hat based systems, execute the command below to install Expect:
yum install expect
For Debian-based systems or Ubuntu, execute the command below:
apt-get install expect
Below is an Expect script that creates a partition on /dev/sdc:
#!/usr/bin/expect
log_file -a "/tmp/expect.log"
set timeout 600
spawn /sbin/fdisk /dev/sdc
expect "Command (m for help): " { send "n\r" }
expect "p primary partition (1-4)"
expect "" { send "p\r" }
expect "Partition number (1-4): " { send "1\r" }
expect "First cylinder (1-133544, default 1): " { send "1\r" }
expect ": " { send "\r" }
expect "Command (m for help): " { send "w\r" }
interact

How to obtain the virtual private memory of a process from the command line under OSX?

I would like to obtain the virtual private memory consumed by a process under OSX from the command line. This is the value that Activity Monitor reports in the "Virtual Mem" column. ps -o vsz reports the total address space available to the process and is therefore not useful.
You can obtain the virtual private memory use of a single process by running
top -l 1 -s 0 -i 1 -stats vprvt -pid PID
where PID is the process ID of the process you are interested in. This results in about a dozen lines of output ending with
VPRVT
55M+
So by parsing the last line of output, one can at least obtain the memory footprint in MB. I tested this on OSX 10.6.8.
update
I realized (after I got downvoted) that @user1389686 gave an answer in the comment section of the OP that was better than my paltry first attempt. What follows is based on user1389686's own answer. I cannot take credit for it -- I've just cleaned it up a bit.
original, edited with -stats vprvt
As Mahmoud Al-Qudsi mentioned, top does what you want. If PID 8631 is the process you want to examine:
$ top -l 1 -s 0 -stats vprvt -pid 8631
Processes: 84 total, 2 running, 82 sleeping, 378 threads
2012/07/14 02:42:05
Load Avg: 0.34, 0.15, 0.04
CPU usage: 15.38% user, 30.76% sys, 53.84% idle
SharedLibs: 4668K resident, 4220K data, 0B linkedit.
MemRegions: 15160 total, 961M resident, 25M private, 520M shared.
PhysMem: 917M wired, 1207M active, 276M inactive, 2400M used, 5790M free.
VM: 171G vsize, 1039M framework vsize, 1523860(0) pageins, 811163(0) pageouts.
Networks: packets: 431147/140M in, 261381/59M out.
Disks: 487900/8547M read, 2784975/40G written.
VPRVT
8631
Here's how I get at this value using a bit of Ruby code:
# Return the virtual memory size of the current process
def virtual_private_memory
  s = `top -l 1 -s 0 -stats vprvt -pid #{Process.pid}`.split($/).last
  return nil unless s =~ /\A(\d*)([KMG])/
  $1.to_i * case $2
            when "K"
              1000
            when "M"
              1000000
            when "G"
              1000000000
            else
              raise ArgumentError.new("unrecognized multiplier in #{s}")
            end
end
Updated answer that works under Yosemite, from user1389686:
top -l 1 -s 0 -stats mem -pid PID
