Is there a way, using wmic, to reverse engineer which volume maps to which partition(s)? [closed]

Problem: I only have access to wmic... lame, I know, but I need to figure out which volume corresponds to which partition(s), and which partition(s) correspond to which disk. I know how to map a partition to its disk, because the disk ID is directly in the results of the wmic query. The first part of the problem is more difficult, though: how do I correlate which volume belongs to which partition(s)?
Is there a way, using wmic, to reverse engineer which volume maps to which partition(s)?
If so, how would this query look?

wmic logicaldisk get name, volumename
For more info, use wmic logicaldisk get /?
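If the goal is to consume this from a script rather than read it by eye, a minimal sketch along these lines can parse wmic's CSV output (assumptions: Python is available on the box, wmic.exe is on PATH, and the UTF-16 decode fallback matches how wmic encodes redirected output on your system):

# Sketch: list logical disks (drive letter + volume label) by parsing the CSV output of wmic.
# Property names come from the Win32_LogicalDisk class.
import csv
import subprocess

def logical_disks():
    raw = subprocess.run(
        ["wmic", "logicaldisk", "get", "Name,VolumeName", "/format:csv"],
        capture_output=True, check=True,
    ).stdout
    # wmic usually emits UTF-16 when its output is redirected (assumption); fall back if not.
    try:
        text = raw.decode("utf-16")
    except UnicodeDecodeError:
        text = raw.decode(errors="replace")
    rows = [line for line in text.splitlines() if line.strip()]
    return list(csv.DictReader(rows))

if __name__ == "__main__":
    for d in logical_disks():
        print(d["Name"], d.get("VolumeName") or "")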

The easiest way to do this is with diskpart from a command prompt:
C:\>diskpart
Microsoft DiskPart version 10.0.10586
Copyright (C) 1999-2013 Microsoft Corporation.
On computer: TIMSPC
DISKPART> select disk 0
Disk 0 is now the selected disk.
DISKPART> detail disk
HGST HTS725050A7E630 (Note: this is the model of my hard disk)
Disk ID: 00C942C7
Type : SATA
Status : Online
Path : 0
Target : 0
LUN ID : 0
Location Path : PCIROOT(0)#PCI(1F02)#ATA(C00T00L00)
Current Read-only State : No
Read-only : No
Boot Disk : Yes
Pagefile Disk : Yes
Hibernation File Disk : No
Crashdump Disk : Yes
Clustered Disk : No
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 System NTFS Partition 350 MB Healthy System
Volume 1 C OSDisk NTFS Partition 464 GB Healthy Boot
Volume 2 NTFS Partition 843 MB Healthy Hidden
DISKPART> exit
Leaving DiskPart...
C:\>
You have access to a command line since you have access to WMIC, so this method should work.
Based on the comments below:
No, there is no way to use WMIC to determine with 100% accuracy exactly which volume corresponds to which partition on a specific drive. The problem with determining this information via WMI is that not all drives are basic drives. Some disks may be dynamic disks containing a RAID volume that spans multiple drives. Some may be a completely hardware-implemented abstraction such as a storage array (for example, a P410i RAID controller in an HP ProLiant). In addition, there are multiple partitioning schemes (e.g. UEFI/GPT vs. BIOS/MBR). WMI, however, is independent of its environment. That is, it doesn't care about the hardware. It is simply another form of abstraction that provides a common interface model, unifying and extending existing instrumentation and management standards.
Getting the level of detail you desire will require a tool that can interface at a much lower level, such as the device driver, and hope that the driver provides the information you need. If it doesn't, you will be looking at very low-level programming to interface with the device itself, essentially creating a new driver that provides the information you want. But given your limitation of only having command-line access, diskpart is the closest prebuilt tool you will find.
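That said, for the simple case of basic disks whose volumes have drive letters, WMI does expose association classes that chain a logical disk to its partition and that partition to its physical disk. The sketch below (Python calling wmic; the regex parsing and UTF-16 decoding are assumptions about the output format) shows the idea, with exactly the caveats above: dynamic disks, hardware RAID abstractions, and unlettered or RAW volumes will not appear in this mapping.

# Sketch: map drive letter -> partition -> physical disk via the WMI association classes
# Win32_LogicalDiskToPartition and Win32_DiskDriveToDiskPartition. Basic, lettered volumes only.
import re
import subprocess

PAIR = re.compile(r'DeviceID="([^"]+)".*?DeviceID="([^"]+)"')

def _wmic(*args):
    raw = subprocess.run(["wmic", *args], capture_output=True, check=True).stdout
    try:
        return raw.decode("utf-16")
    except UnicodeDecodeError:
        return raw.decode(errors="replace")

def letter_to_partition_to_disk():
    mapping = {}
    # Antecedent = Win32_DiskPartition, Dependent = Win32_LogicalDisk (e.g. "C:").
    text = _wmic("path", "Win32_LogicalDiskToPartition", "get", "Antecedent,Dependent")
    for partition, letter in PAIR.findall(text):
        mapping[letter] = {"partition": partition}
    # Antecedent = Win32_DiskDrive (e.g. "\\.\PHYSICALDRIVE0"), Dependent = Win32_DiskPartition.
    text = _wmic("path", "Win32_DiskDriveToDiskPartition", "get", "Antecedent,Dependent")
    part_to_disk = {partition: disk for disk, partition in PAIR.findall(text)}
    for info in mapping.values():
        info["disk"] = part_to_disk.get(info["partition"])
    return mapping

if __name__ == "__main__":
    for letter, info in sorted(letter_to_partition_to_disk().items()):
        print(letter, "->", info["partition"], "->", info["disk"])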
There are volumes which do not have traditional letters.
And? Diskpart can select disks, partitions, and volumes by the number assigned to each. The drive letter is irrelevant.
At no point in diskpart is any kind of ID listed that would let a user know with 100% certainty which partition they are dealing with when they reference a volume.
Here is an example from one of my servers with two 500 GB hard drives. The first is the boot/OS drive. The second has 2 GB of unallocated space.
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- ------
Volume 0 System NTFS Partition 350 MB Healthy System
Volume 1 C OSDisk NTFS Partition 465 GB Healthy Boot
Volume 2 D New Volume NTFS Partition 463 GB Healthy
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 465 GB 0 B
* Disk 1 Online 465 GB 2049 MB
DISKPART> list partition
Partition ### Type Size Offset
------------- ---------------- ------- -------
* Partition 1 Primary 463 GB 1024 KB
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- ------
Volume 0 System NTFS Partition 350 MB Healthy System
Volume 1 C OSDisk NTFS Partition 465 GB Healthy Boot
* Volume 2 D New Volume NTFS Partition 463 GB Healthy
DISKPART>
Notice the asterisks? They denote the currently selected disk, partition, and volume. While this is not the ID you require to let a user know with 100% certainty which partition they are dealing with, you can at least clearly see that Volume 2 (D:) is on Partition 1 of Disk 1.
There are volumes that are RAW disks, which is essentially saying: these are raw disks and I want to find out where they are.
As you can see, after I create a volume with no file system on the 2 GB of free space, this makes no difference:
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- -------
Volume 0 System NTFS Partition 350 MB Healthy System
Volume 1 C OSDisk NTFS Partition 465 GB Healthy Boot
Volume 2 D New Volume NTFS Partition 463 GB Healthy
Volume 3 RAW Partition 2048 MB Healthy
DISKPART> select volume 3
Volume 3 is the selected volume.
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- -------
Volume 0 System NTFS Partition 350 MB Healthy System
Volume 1 C OSDisk NTFS Partition 465 GB Healthy Boot
Volume 2 D New Volume NTFS Partition 463 GB Healthy
* Volume 3 RAW Partition 2048 MB Healthy
DISKPART> list partition
Partition ### Type Size Offset
------------- ---------------- ------- -------
Partition 1 Primary 463 GB 1024 KB
* Partition 2 Primary 2048 MB 463 GB
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 465 GB 0 B
* Disk 1 Online 465 GB 1024 KB
The reason I am using wmic is that I need to script many disk operations. Have you ever tried to script getting information out of diskpart?
No, but it is scriptable.
In your sample data, you can enumerate the disks, volumes, and partitions. By looping through each object and selecting it, you can build a map of which volume is on which partition and which drive contains that partition. Diskpart may not provide 100% of the data you need 100% of the time with 100% of the accuracy you want, but it is the closest command-line tool you are going to find to meet your goal.
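As a sketch of what "scriptable" means here: diskpart accepts a script file through its /s switch, so a thin wrapper can feed it commands and capture the output for parsing (Python used purely for illustration; run it from an elevated prompt, and note that the commands below only list objects, they change nothing):

# Sketch: drive diskpart non-interactively via "diskpart /s <script>" and return its text output.
import os
import subprocess
import tempfile

def run_diskpart(commands):
    """Write the commands to a temporary script file and run diskpart against it."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("\n".join(commands) + "\n")
        script = f.name
    try:
        return subprocess.run(
            ["diskpart", "/s", script], capture_output=True, text=True, check=True
        ).stdout
    finally:
        os.unlink(script)

if __name__ == "__main__":
    # Enumerate everything, then inspect volume 2 as in the session shown above.
    print(run_diskpart(["list disk", "list volume"]))
    print(run_diskpart(["select volume 2", "list disk", "list partition"]))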

Related

How to identify the ideal size for an application host C drive?

I want to make sure that the application drive has enough space. For that, I wanted to understand the ideal size for the drive where I am hosting the application. Can somebody help me find the ideal size of the drive?
Application size: 24 GB
Log data: 14-20 GB
Other files: 10 GB
For the above specification, what should the ideal size of the drive be?
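Taking the numbers in the question at face value, the sizing is just the sum of the stated components plus some growth headroom; the 30% margin in this sketch is an illustrative assumption, not something stated in the question:

# Sketch: add up the stated space requirements plus an assumed growth margin.
app_gb = 24        # application size
logs_gb = 20       # log data, upper bound of the stated 14-20 GB range
other_gb = 10      # other files
headroom = 0.30    # assumed growth/overhead margin, purely illustrative

required_gb = (app_gb + logs_gb + other_gb) * (1 + headroom)
print(f"Suggested drive size: ~{required_gb:.0f} GB")   # ~70 GB under these assumptions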

Memory management by OS

I am trying to understand memory management by the OS.
What I understand so far is that on a 32-bit system, each process is allocated 4 GB [2 GB user + 2 GB kernel] of virtual address space.
What confuses me is: is this 4 GB space unique to every process? If I have, say, 3 processes p1, p2, p3 running, would I need 12 GB of space on the hard disk?
Also, if I have 2 GB of RAM on a 32-bit system, how will it manage a process which needs 4 GB? Through the paging file?
[2gb user + 2gb kernel]
That is a constraint imposed by the OS. On a 32-bit x86 system, each process's virtual address space is 4 GiB; PAE extends only the physical address space, not the per-process virtual one. (Note that GB usually denotes 1000 MB, while GiB stands for 1024 MiB.)
What confuses me is that is this 4gb space unique for every process .
Yes, every process has its own 4 GiB virtual address space.
if I have say 3 processes p1 ,p2 ,p3 running would I need 12 gb of
space on the hard disk ?
No. With three processes, they can occupy a maximum of 12 GiB of storage. Whether that's primary or secondary storage is left to the kernel (primary is preferred, of course). So, you'd need your primary memory size + some secondary storage space to be at least 12 GiB to contain all three processes if all those processes really occupied the full range of 4 GiB, which is pretty unlikely to happen.
Also if say I have 2gb ram on a 32 bit system ,how will it manage to
handle a process which needs 4gb ?[through the paging file ] ?
Yes, in a way. You mean the right thing, but the "paging file" is just an implementation detail. It is used by Windows; Linux, for example, uses a separate swap partition instead. So, to be technically correct, "secondary storage (an HDD, for example) is needed to store the remaining 2 GiB of the process" would be right.
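To make the "RAM plus paging file" point concrete, a Windows-only sketch like the following (Python with ctypes; the fields mirror the MEMORYSTATUSEX structure used by GlobalMemoryStatusEx) prints the physical RAM, the commit limit that RAM and the paging file provide together, and the per-process virtual address space:

# Sketch: query memory figures on Windows via GlobalMemoryStatusEx (Windows only).
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit: RAM + paging file(s)
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),    # per-process virtual address space
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def memory_status():
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    if not ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status)):
        raise ctypes.WinError()
    return status

if __name__ == "__main__":
    s = memory_status()
    gib = 1024 ** 3
    print(f"Physical RAM:                {s.ullTotalPhys / gib:.1f} GiB")
    print(f"Commit limit (RAM + paging): {s.ullTotalPageFile / gib:.1f} GiB")
    print(f"Per-process virtual space:   {s.ullTotalVirtual / gib:.1f} GiB")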

Ridiculously slow ZFS

Am I misinterpreting iostat results or is it really writing just 3.06 MB per minute?
# zpool iostat -v 60
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 356G 588G 465 72 1.00M 3.11M
xvdf 356G 588G 465 72 1.00M 3.11M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 356G 588G 568 58 1.26M 3.06M
xvdf 356G 588G 568 58 1.26M 3.06M
---------- ----- ----- ----- ----- ----- -----
Currently rsync is writing files from the other HDD (ext4). Based on our file characteristics (~50 KB files), it seems the math checks out: 3.06 * 1024 / 58 ≈ 54 KB per write.
For the record:
primarycache=metadata
compression=lz4
dedup=off
checksum=on
relatime=on
atime=off
The server is on EC2, currently 1 core and 2 GB RAM (t2.small); the HDD is the cheapest one on Amazon. The OS is Debian Jessie, with zfs-dkms installed from the Debian testing repository.
If it's really that slow, then why? Is there a way to improve performance without moving everything to SSD and adding 8 GB of RAM? Can it perform well on VPS at all, or was ZFS designed with bare metal in mind?
EDIT
I've added a 5 GB general-purpose SSD to be used as the ZIL, as suggested in the answers. It didn't help much, as the ZIL doesn't seem to be used at all. 5 GB should be more than plenty for my use case, since according to the Oracle article it should be about half the size of the RAM.
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 504G 440G 47 36 272K 2.74M
xvdf 504G 440G 47 36 272K 2.74M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 504G 440G 44 37 236K 2.50M
xvdf 504G 440G 44 37 236K 2.50M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
EDIT
dd test shows pretty decent speed.
# dd if=/dev/zero of=/mnt/zfs/docstore/10GB_test bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 29.3561 s, 366 MB/s
However, the iostat output hasn't changed much bandwidth-wise. Note the higher number of write operations.
# zpool iostat -v 10
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 0 40 1.05K 2.36M
xvdf 529G 415G 0 40 1.05K 2.36M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 2 364 3.70K 3.96M
xvdf 529G 415G 2 364 3.70K 3.96M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 0 613 0 4.48M
xvdf 529G 415G 0 613 0 4.48M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 0 490 0 3.67M
xvdf 529G 415G 0 490 0 3.67M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 0 126 0 2.77M
xvdf 529G 415G 0 126 0 2.77M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zfs-backup 529G 415G 0 29 460 1.84M
xvdf 529G 415G 0 29 460 1.84M
logs - - - - - -
xvdg 0 4.97G 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
Can it perform well on VPS at all, or was ZFS designed with bare metal in mind?
Yes to both.
Originally it was designed for bare metal, and that is where you naturally get the best performance and the full feature set (otherwise you have to trust the underlying storage, for example that writes are really committed to disk when synchronized writes are requested). It is quite flexible, though, as your vdevs can consist of any files or devices you have available; of course, performance can only be as good as the underlying storage.
Some points for consideration:
Moving files between different ZFS file systems is always a full copy/remove, not just rearranging of links (does not apply to your case, but may in the future)
Sync writing is much, much slower than async (ZFS has to wait for every single request to be committed and cannot queue the writes in the usual fashion*), and can only be sped up by moving the ZFS intent log (ZIL) to a dedicated vdev suited to high write IOPS, low latency, and high endurance (in most cases this will be an SLC SSD or similar, but it could be any device different from the devices already in the pool). A system with normal disks that can easily saturate 110 MB/s async might see sync performance of about 0.5 to 10 MB/s (depending on the vdevs) without separating the ZIL onto a dedicated SLOG device. Therefore I would not consider your values out of the ordinary.
Even with good hardware, ZFS will never be as fast as simpler file systems, because of the overhead it pays for flexibility and safety. This was stated by Sun from the beginning and should not surprise you. If you value performance over everything else, choose something else.
Block size of the file system in question can affect performance, but I do not have reliable test numbers at hand.
More RAM will not help you much (beyond a low threshold of about 1 GB for the system itself), because it is used only as a read cache (unless you have deduplication enabled).
Suggestions:
Use faster (virtual) disks for your pool
Separate the ZIL from your normal pool by using a different (virtual) device, preferably faster than the pool, but even a device of same speed but not linked to the other devices improves your case
Use async instead of sync writes and verify the result yourself after your transaction (or after sizeable chunks of it); a quick way to measure the sync/async difference on your pool is sketched after the footnote below
*) To be more precise: in general, all small sync writes below a certain size are additionally collected in the ZIL before being written to disk from RAM, which happens either every five seconds or after about 4 GB of dirty data, whichever comes first (all of those parameters can be modified). This is done because:
writing from RAM to spinning disks every 5 seconds can be done as one continuous stream and is therefore faster than many small writes
in case of sudden power loss, the aborted in-flight transactions are stored safely in the ZIL and can be replayed upon reboot. This works like a transaction log in a database and guarantees a consistent state of the file system (for old data) and also that no data waiting to be written is lost (for new data).
Normally the ZIL resides on the pool itself, which should be protected by redundant vdevs, making the whole operation very resilient against power loss, disk crashes, bit errors, etc. The downside is that the pool disks need to do the random small writes before they can flush the same data to disk in a more efficient continuous transfer; therefore it is recommended to move the ZIL onto another device, usually called an SLOG device (Separate LOG device). This can be another disk, but an SSD performs much better at this workload (and will wear out pretty fast, as most transactions go through it). If you never experience a crash, your SSD will never be read, only written to.
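To see the sync-versus-async gap described above on your own pool, a small benchmark along these lines can be run once without and once with fsync (the directory path and file count are placeholders; the ~50 KB payload roughly mimics the rsync workload from the question):

# Sketch: compare throughput of many small (~50 KB) file writes with and without fsync.
import os
import time

TARGET_DIR = "/mnt/zfs/docstore/sync_test"   # placeholder: a directory on the ZFS dataset
FILE_COUNT = 500
PAYLOAD = os.urandom(50 * 1024)              # ~50 KB per file

def write_files(sync):
    os.makedirs(TARGET_DIR, exist_ok=True)
    start = time.time()
    for i in range(FILE_COUNT):
        with open(os.path.join(TARGET_DIR, f"f{i:05d}"), "wb") as f:
            f.write(PAYLOAD)
            if sync:
                f.flush()
                os.fsync(f.fileno())   # force a synchronous write (goes through the ZIL)
    elapsed = time.time() - start
    written_mb = FILE_COUNT * len(PAYLOAD) / 1024 / 1024
    print(f"{'sync' if sync else 'async'}: {written_mb / elapsed:.2f} MB/s")

if __name__ == "__main__":
    write_files(sync=False)
    write_files(sync=True)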
This particular problem may be due to a noisy neighbor. Since it's a t2 instance, you will end up with the lowest priority. In this case you can stop/start your instance to get a new host.
Unless you are using instance storage (which is not really an option for t2 instances anyway), all disk writing is done to what are essentially SAN volumes. The network interface to the EBS system is shared by all instances on the same host. The size of the instance will determine the priority of the instance.
If you are writing from one volume to another, you are passing all read and write traffic over the same interface.
There may be other factors at play depending on which volume types you have and whether you have any CPU credits left on your t2 instance.

Trying to load the MBR from sector 2

I wanted to make code that runs before the MBR, so I moved the MBR to sector 2 and my code to sector zero. The code in sector 1 loads sector 2 (which contains the MBR), then calls address 7C00 (hex) to begin the MBR code.
So the hard disk looks like this:
sector 0: my program that does I/O and loads sector 1
sector 1: code that loads sector 2
sector 2: MBR code
When I boot, I receive this message:
"could not open drive multi 0 disk 0 rdisk 0 partition 1"
It's important to note that I want Windows XP to run after my code.
What you describe is exactly how the MBR code works:
The MBR of a hard disk is located at the first sector of the hard disk. BIOS will load that sector.
The MBR sector will move itself to some other address and load the first sector of the bootable hard disk partition to address 7C00 (hex). Then it will jump to 7C00 (hex).
However:
The MBR also contains the partition table in its final bytes: the four 16-byte partition entries plus the two-byte 0x55AA signature occupy the last 66 bytes of the sector, and Windows additionally stores a 4-byte disk signature at offset 0x1B8. If you want to replace the MBR with your own boot sector, you have to copy that data across. Otherwise hard disk access won't work any longer, because the OS will look for the partition information in those bytes of the first sector of the hard disk.
If you want to replace the boot sector of the bootable partition, you have a similar problem. Depending on the file system used, there is file system information stored in certain bytes of the boot sector. For FAT or NTFS, the first three bytes must be a JMP instruction, and the following few dozen bytes (the BIOS Parameter Block) contain file system information.
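As a concrete illustration of "copy the partition data", a sketch like the following grafts the partition table and signatures from a dump of the original MBR into a custom boot sector. It works on image files only (the file names are placeholders); writing the result back to a live disk is deliberately left out:

# Sketch: splice the disk signature (offset 0x1B8), the 64-byte partition table (offset 0x1BE)
# and the 0x55AA boot signature from the original MBR into a custom 512-byte boot sector.
ORIGINAL_MBR = "original_mbr.bin"   # placeholder: dump of the original sector 0
CUSTOM_CODE = "my_bootcode.bin"     # placeholder: your boot code, at most 440 bytes
OUTPUT = "patched_sector0.bin"

with open(ORIGINAL_MBR, "rb") as f:
    original = f.read(512)
with open(CUSTOM_CODE, "rb") as f:
    code = f.read()

assert len(original) == 512, "expected a full 512-byte sector dump"
assert len(code) <= 440, "boot code must leave room for the signatures and partition table"

sector = bytearray(512)
sector[:len(code)] = code
sector[0x1B8:0x1BE] = original[0x1B8:0x1BE]   # 4-byte disk signature + 2 reserved bytes
sector[0x1BE:0x1FE] = original[0x1BE:0x1FE]   # four 16-byte partition table entries
sector[0x1FE:0x200] = b"\x55\xAA"             # boot signature

with open(OUTPUT, "wb") as f:
    f.write(bytes(sector))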

Get the hard disk information using Perl

I want to use Perl to check the hard disk space on Windows. Is there any way to do it?
Most of the time, a portable solution such as Filesys::DfPortable is the better choice. Recognise the opportunity to be virtuously lazy.
See Win32::DriveInfo:
($SectorsPerCluster, $BytesPerSector, $NumberOfFreeClusters,
$TotalNumberOfClusters, $FreeBytesAvailableToCaller, $TotalNumberOfBytes, $TotalNumberOfFreeBytes) = Win32::DriveInfo::DriveSpace( drive );
$SectorsPerCluster - number of sectors per cluster.
$BytesPerSector - number of bytes per sector.
$NumberOfFreeClusters - total number of free clusters on the disk.
$TotalNumberOfClusters - total number of clusters on the disk.
$FreeBytesAvailableToCaller - total number of free bytes on the disk available to the user associated with the calling thread.
$TotalNumberOfBytes - total number of bytes on the disk.
$TotalNumberOfFreeBytes - total number of free bytes on the disk.
Win32::DriveInfo should do the trick. I guess you are looking for
$TotalNumberOfFreeBytes
Another way to query the system state is to query Windows Management Instrumentation (WMI) using DBD::WMI.
The following query should give you the basic info about disk space:
Select DeviceID,Size,FreeSpace from Win32_LogicalDisk
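If all you need is a sanity check of the numbers those modules return, the equivalent portable call in Python's standard library is shutil.disk_usage (shown here only as a cross-check sketch, since the question asks for Perl):

# Sketch: portable free-space check, analogous to Filesys::DfPortable / Win32::DriveInfo.
import shutil

usage = shutil.disk_usage("C:\\")   # any path on the drive of interest
gib = 1024 ** 3
print(f"Total: {usage.total / gib:.1f} GiB")
print(f"Used:  {usage.used / gib:.1f} GiB")
print(f"Free:  {usage.free / gib:.1f} GiB")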
