IPMI: get disk volumes and RAID volumes (Dell iDRAC 5/6/7)

How can we get the disk volumes and RAID volumes over IPMI on a Dell iDRAC 5/6/7?
At the moment we only get the status and fan speed.
We couldn't find any documentation about the disk part.
Kind regards

The disk part and RAID part are handled by the OS.
IPMI is only used as a remote control for the BIOS: essentially the processor (board), display, general peripherals, and monitoring of the hardware used in the server chassis.
Most disk-based components come either from external hardware vendors or from custom OS/software solutions, which makes them difficult to integrate into the IPMI/KVM firmware.
Thus the best way to get RAID and disk volumes is to query the OS for that information.
This could eventually become future IPMI work.
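As a rough illustration, here is a minimal Python sketch of what "querying the OS" could look like on a Linux host. It assumes lsblk is available; the omreport call is only an illustration and works only if Dell's OpenManage Server Administrator (OMSA) happens to be installed.
```python
# Minimal sketch: read disk/RAID volume information from the OS instead of IPMI.
# Assumes a Linux host with lsblk; the omreport call is optional and only
# works if Dell OpenManage Server Administrator (OMSA) is installed.
import subprocess

def run(cmd):
    """Run a command and return its stdout, or None if the tool is unavailable."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None

# Block devices with their size and type (disk, part, raid, ...)
blk = run(["lsblk", "-o", "NAME,SIZE,TYPE,MOUNTPOINT"])
if blk:
    print(blk)

# Hardware RAID virtual disks, if OMSA is present
vdisks = run(["omreport", "storage", "vdisk"])
if vdisks:
    print(vdisks)
```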

Related

Network problems on one device

I currently have a problem where some packets are getting dropped in my local network, but only on one device.
[Image: ping to local router]
Here you can see a ping to my router. I only have this problem on my PC; my mobile phone and laptops are completely fine.
I tried a network card and two WLAN USB sticks, all with the same problem.
Does somebody have a clue what could cause these problems?
*OS: Windows 10 21H2
*CPU usage idling around 4-10%
*RAM usage 40%
*Network usage 0-1%
Your question is a bit broad - there are so many things that can disturb a network connection, from physical issues (e.g. cable defects, Wi-Fi interference) to driver problems, CPU bottlenecks, etc. That being said, my tip would be a CPU bottleneck (an app using most or all of your CPU), but even that is by no means certain.
Take a look at your CPU usage with Task Manager or Process Explorer (from the Sysinternals package). They both also show network usage. If your machine shows excessive CPU usage (constantly over 30% with frequent peaks), then you might want to explore the reasons for that, and there can be many.
Using those same tools you can also try to identify apps that are possibly using a lot of network bandwidth.
Windows has much happening in the background, and those processes require resources (CPU, RAM, network, hard disk, etc.). Should any of those resources be limited, you can easily see issues like the ones you describe, as there is a certain interdependence between them: e.g. if you have many apps running with limited RAM, that leads to paging, and since the hard disk is slow, the CPU is then busy shoveling data and can't keep up with the NIC requests.
However, I am theorizing here. Supply some hard data (machine config, OS info, network info/config, task list, CPU usage, etc.) and we can continue.
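If you prefer something scriptable over Task Manager, a small Python sketch using the third-party psutil package (an assumption: pip install psutil) can log CPU and network usage side by side:
```python
# Sketch: sample overall CPU and network usage once per second.
# Requires the third-party psutil package (pip install psutil).
import psutil

last = psutil.net_io_counters()
for _ in range(10):                       # ten one-second samples
    cpu = psutil.cpu_percent(interval=1)  # blocks for ~1 s, returns percent
    now = psutil.net_io_counters()
    out_kb = (now.bytes_sent - last.bytes_sent) / 1024
    in_kb = (now.bytes_recv - last.bytes_recv) / 1024
    last = now
    print(f"CPU {cpu:5.1f}%   out {out_kb:8.1f} KB/s   in {in_kb:8.1f} KB/s")
```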

Redis versus hardware cache

Do tools like Redis provide control over the hardware cache present in the computer, or do they run in the computer's RAM? If it is the latter, how can they give better performance than the existing hardware cache, which is controlled by the operating system?
After a lot of scattered reading I think I have got a better idea about this. So answering the question in case someone else has this question too.
The cache in a computer is not controlled by the operating system. It is part of the microarchitecture, and no software access can 'alter' the cache configuration. On a Linux machine, viewing /proc/cpuinfo (e.g. with cat /proc/cpuinfo) will show the cache size and alignment as prescribed by the chip manufacturer.
Tools like Redis and memcached 'cache' data by keeping it in the physical memory (RAM) of a machine. It is still called caching because it avoids having to fetch the data from disk and hence gives faster access.
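To make the distinction concrete, here is a small Python sketch of the usual Redis caching pattern. It assumes a Redis server on localhost:6379 and the redis-py client; the key name and the "database" function are made up for illustration.
```python
# Sketch: Redis "caches" by holding values in the RAM of the Redis server
# process, so repeated lookups avoid the slow disk/database path.
# Assumes a Redis server on localhost:6379 and redis-py (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_profile_from_db(user_id):
    # Stand-in for a slow database or disk lookup
    return f"profile-of-{user_id}"

def get_user_profile(user_id):
    key = f"user:{user_id}:profile"     # illustrative key name
    cached = r.get(key)                 # served from Redis' RAM if present
    if cached is not None:
        return cached
    profile = load_profile_from_db(user_id)
    r.setex(key, 300, profile)          # keep it in RAM for 5 minutes
    return profile
```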

What is the difference between a hot-pluggable device and a removable device?

I have read that USB HDDs are hot-pluggable but not removable, whereas USB flash drives are both removable and hot-pluggable. Internally, the Windows DEVICE_OBJECT structure has a Characteristics flag that can have the value FILE_REMOVABLE_MEDIA for removable media (not a removable device). Also, the STORAGE_HOTPLUG_INFO structure has a DeviceHotplug boolean member that says whether the device is hot-pluggable or not. Can you please justify your answer with a little detail?
David Zeuthen explains it best:
[...] "removable" means that the media of the device is removable. For
example, CD-ROM drives or Nin1 card readers for flash media. [...]
ATA disks connected via eSATA aren't removable, you can't remove the
platters.
Yet of course, you can intuitively understand that even non-removable devices can be hotpluggable (i.e. you can plug and unplug the entire device as a whole, as opposed to inserting/removing the media it contains).
Now, all (modern) buses in use in current systems are hotpluggable - most new systems allow you to add/remove SATA disks while the system is running.
Indeed you shouldn't have to care much about whether something is hotpluggable or not anymore: virtually all storage devices are. (In the past, you had to shutdown the machine to manipulate the storage devices).
So it should follow that external USB drives (whether HDDs or flash sticks), for example, should be non-removable and hopefully always hotpluggable.
Unfortunately:
Of course, hardware sucks so virtually all USB keyfobs report "removable==1", probably because the maker of the device wanted to be "helpful" and make things work better on Windows.
I have no sources regarding the real reasons, but it turns out that many USB drives report themselves as removable too. David's suggestion that it might be because of certain operating systems which didn't use to support hotplugging but did support removable devices (CD-ROMs, etc) sounds reasonable: the manufacturers reused the same technique to trick the OS into letting the user "eject" USB drives.
Nowadays I would guess all modern operating systems make the distinction clear, and this has many advantages from a management standpoint (e.g. you might have a hotpluggable DVD drive with removable DVDs and you would thus need to be more clear about which you want to interact with). Still, older drives and old habits die hard, so you'll still find some "removable" USB drives even if they're really not.
Note: The bug report linked is about udisks, which is more often found in the free-software world. But again, I'm sure all systems make the distinction now, even if the terminology is not exactly the same. Also note that the terminology is really rather arbitrary, though whichever terms you use for these two concepts had best be well understood.
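If you want to see what your own drives report, a small Python sketch is enough (Linux only, since it just reads the kernel's "removable" flag in sysfs):
```python
# Sketch: print the kernel's "removable" flag for each block device (Linux only).
# Whether a device is hotpluggable is a separate property of the bus it sits on.
import os

for dev in sorted(os.listdir("/sys/block")):
    try:
        with open(f"/sys/block/{dev}/removable") as f:
            removable = f.read().strip()
    except OSError:
        continue
    # Many USB sticks report 1 here even though only the whole device,
    # not any media inside it, can actually be removed.
    print(f"{dev}: removable={removable}")
```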
A simple Google search could have answered your question...
Hot plugging is the ability to replace or install a device without shutting down the attached computer. Hot plugging is implemented when a peripheral device is added or removed; a device or working system requires reconfiguration; a defective component requires replacement; or a device and computer require data synchronization. Also known as hot swapping. Hot swapping allows easy accessibility to equipment and the convenience of uninterrupted systems.
Removable media are data storage devices capable of computer system removal without powering off the system. Removable media devices are used for backup, storage or transportation of data.
Source: techopedia.com

Disable Linux hard disk support

I'm building a custom Linux kernel that should be able to access CD-ROM and USB mass storage devices, but not hard disks.
I tried disabling CONFIG_BLK_DEV_SD, but I lose usb mass storage support.
How can I achieve that? If not possible, is there a way to remove hard disk nodes in /dev at startup?
First, you need to define what exactly "hard disk" means.
Second, you need to express the above definition as a set of udev rules. This way, device nodes for devices you don't want would not even get created in /dev/ in the first place.
One nice tutorial for udev rules is here:
http://www.reactivated.net/writing_udev_rules.html
Relevant Q/A:
https://unix.stackexchange.com/questions/66897/what-is-the-udev-rule-to-allow-specific-thumb-drive-vendors
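Before committing to udev rules, it can help to prototype your "hard disk" definition in a few lines of Python. The sketch below treats any sd* device whose sysfs path goes through a USB controller as USB mass storage and everything else as a hard disk, and only prints what a matching rule would do; the path test is an assumption that holds on typical systems, not a udev feature.
```python
# Sketch: classify sd* block devices as "USB mass storage" vs "hard disk"
# by checking whether their sysfs path passes through a USB controller.
# It only prints the decision; actually hiding disks belongs in udev rules
# (or, as a crude fallback, removing the /dev nodes at startup).
import os

for dev in sorted(os.listdir("/sys/block")):
    if not dev.startswith("sd"):
        continue                                  # skip sr* (CD-ROM), loop*, etc.
    real = os.path.realpath(f"/sys/block/{dev}")  # e.g. .../usb1/.../host6/.../sdb
    is_usb = "/usb" in real
    action = "keep (USB mass storage)" if is_usb else "hide (hard disk)"
    print(f"/dev/{dev}: {action}")
```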
Frankly, I'm amazed you even managed a bootable system with CONFIG_BLK_DEV_SD disabled: modern Linux kernels funnel virtually all storage I/O through the SCSI layer, then treat the specific types (SATA, PATA, USB mass storage, etc.) as flavors of SCSI.
I'd try disabling things at the next layer down in the system: enable SCSI disk and CD-ROM support, but disable all methods of actually talking to those disks (low-level SCSI drivers, ATA SFF support, AHCI support, etc.).

Using PCI to interconnect motherboards

I've got a few old mobos and I was wondering whether it might be possible to create a pair of PCI header cards with interconnect wires, and write some software to drive the interconnect cards so that one of the mobos can access the CPU and RAM on the other. I'm sure it would be an arduous undertaking involving writing a device driver for the header boards and then writing an application to make use of the interconnect; perhaps a simple demo demonstrating a thread running on each processor and use of both sets of RAM, or perhaps a mini virtual machine that maps 2x3 GB of RAM on 32-bit mobos to a single 6 GB address space. A microcontroller may be needed on each PCI header card to act as a translator.
Given that mobos almost always have multiple PCI slots, I wonder if these interconnected card pairs could be used to daisy-chain mobos in a sort of high-speed Beowulf cluster.
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
Does anyone have any advice, or has this sort of thing been done before?
Update:
Thanks Martin. What you say makes sense, and it would also seem that if it were possible, it would have already been done before.
Instead, would it be possible to indirectly control the slave CPU by booting it from a "pretend" bootable storage device (hard disk, USB stick, etc.)? As long as the slave mobo thinks it's being operated by an operating system on a real device, it should work.
This could potentially extend to any interface (SATA, IDE, USB, etc.): if you connected two PCs together with a SATA/IDE/USB cable (plug one end of an IDE ribbon into one mobo and the other end into another mobo), that would be all the hardware you need. The key is in creating a new driver for that interface on the master PC, so that instead of the master PC treating that interface as having a storage device on it, it would be driven as a dummy bootable hard disk for the slave computer. This would still be a pretty difficult job for me because I've never done device drivers before, but at least I wouldn't need a soldering iron (which would be much further beyond me). I might be able to take an open-source IDE driver for Linux, study it, and then butcher it to create something that kind of acts in reverse (instead of getting data off it, an application puts data onto it for the slave machine to access like a hard disk). I could then take a basic Linux kernel and try booting the slave computer from an application on the master computer (via the butchered master-PC IDE/SATA/USB device driver). For safety, I would probably try to isolate my customised driver as much as possible by targeting an interface not being used for anything else on the master PC (the master PC might use all SATA hard disks with the IDE bus normally unused, so a custom IDE driver might cause fewer problems with the host system, because the host is SATA-driven).
Does anyone know if anything like this (faking a bootable hard disk from another PC) has ever been tried before? It would make a pretty cool Hackaday video on YouTube, but seriously, it could also add a new dimension to parallel computing if it proved promising.
The PCI bus can't take over the other CPU.
You could make an interconnect that can transfer data from a program on one machine to another. An Ethernet card is the most common implementation, but for high-performance clusters there are faster direct connections such as InfiniBand.
Unfortunately PCI is more difficult to build cards for than the old ISA bus: you need surface-mount controller chips and specific track layouts to match the impedance requirements of PCI.
Going faster than a few megabits/s involves understanding things like transmission lines and the characteristics of the connection cable.
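For completeness, the "transfer data from a program on one machine to another" part is exactly what a plain TCP socket over those Ethernet cards gives you. A minimal Python sketch (hostname and port are placeholders) looks like this:
```python
# Sketch: move bytes between two machines over an ordinary Ethernet link.
# Run receive() on one machine and send("other-host") on the other;
# the port number is an arbitrary example.
import socket

PORT = 5000

def receive():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(4096)
            print(f"got {len(data)} bytes from {addr}")

def send(peer_host):
    with socket.create_connection((peer_host, PORT)) as s:
        s.sendall(b"hello from the other board")
```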
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
LOL. PCI is a 32-bit, 33 MHz bus at minimum, so simply out of reach for an ATmega.
But your idea of:
a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards to allow one of the mobos to access the CPU and RAM on the other [...]
This is cheaply possible with just a pair of PCI FireWire (IEEE 1394) cards (and a FireWire cable). There is even a Linux driver that allows remote debugging over FireWire.
