Disable Linux hard disk support - linux-kernel

I'm building a custom Linux kernel that should be able to access CD-ROM and USB mass storage devices, but not hard disks.
I tried disabling CONFIG_BLK_DEV_SD, but then I lose USB mass storage support.
How can I achieve this? If it's not possible, is there a way to remove the hard disk nodes in /dev at startup?

First, you need to define what exactly "hard disk" means.
Second, you need to express that definition as a set of udev rules. This way, device nodes for devices you don't want won't even get created in /dev/ in the first place.
One nice tutorial for udev rules is here:
http://www.reactivated.net/writing_udev_rules.html
Relevant Q/A:
https://unix.stackexchange.com/questions/66897/what-is-the-udev-rule-to-allow-specific-thumb-drive-vendors
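For illustration only, here is a rough sketch of such a rule file; the file name, the "everything under sd* that is not reached over USB counts as a hard disk" match and the udisks hint are my assumptions, to be adapted to whatever definition you settle on:

    # Hypothetical /etc/udev/rules.d/99-no-hard-disks.rules
    # Leave sd* devices that sit on the USB bus (flash sticks, USB HDDs) alone.
    ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*", SUBSYSTEMS=="usb", GOTO="hd_end"
    # Everything else under sd* gets no permissions and is hidden from udisks.
    # On systems with devtmpfs the node may still appear, but it is unusable.
    ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*", MODE="0000", ENV{UDISKS_IGNORE}="1"
    LABEL="hd_end"

KERNEL=="sd*" also matches partitions (sda1, ...), which is what you want here.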

Frankly, I'm amazed you even managed a bootable system with CONFIG_BLK_DEV_SD disabled: modern Linux kernels funnel virtually all storage I/O through the SCSI layer, then treat the specific types (SATA, PATA, USB mass storage, etc.) as flavors of SCSI.
I'd try disabling things at the next layer down: enable SCSI disk and CD-ROM support, then disable all the methods of actually talking to those disks: low-level SCSI drivers, ATA SFF support, AHCI support, etc.
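Expressed as kernel configuration symbols, that suggestion looks roughly like this (a sketch only - the symbol names are from mainline kernels, dependencies differ between versions, and dropping AHCI/SFF also cuts off SATA/PATA CD-ROM drives, so it works best if the optical drive is on USB):

    # .config fragment: keep the SCSI disk/CD-ROM glue that USB storage needs,
    # but leave no way of reaching SATA/PATA disks or other SCSI host adapters.
    CONFIG_SCSI=y
    CONFIG_BLK_DEV_SD=y         # SCSI disk support (USB sticks appear as sd*)
    CONFIG_BLK_DEV_SR=y         # SCSI CD-ROM support
    CONFIG_USB_STORAGE=y        # USB mass storage
    # CONFIG_SCSI_LOWLEVEL is not set    (no low-level SCSI host adapter drivers)
    # CONFIG_SATA_AHCI is not set        (no AHCI)
    # CONFIG_ATA_SFF is not set          (no legacy/SFF ATA controllers)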

Related

what is the difference between hot pluggable device and removable device?

I have read that USB HDDs are hot-pluggable but not removable, whereas USB flash drives are both removable and hot-pluggable. Internally, the Windows DEVICE_OBJECT structure has a Characteristics flag that can have the value FILE_REMOVABLE_MEDIA for removable media (not a removable device). Also, the STORAGE_HOTPLUG_INFO structure has a DeviceHotplug boolean member that says whether the device is hot-pluggable or not. Can you please justify your answer with a few details?
David Zeuthen explains it best:
[...] "removable" means that the media of the device is removable. For
example, CD-ROM drives or Nin1 card readers for flash media. [...]
ATA disks connected via eSATA aren't removable, you can't remove the
platters.
Yet of course, you can intuitively understand that even non-removable devices can be hotpluggable (i.e. you can plug and unplug the entire device as a whole, as opposed to inserting/removing the media it contains).
Now, all (modern) buses in use in current systems are hotpluggable - most new systems allow you to add/remove SATA disks while the system is running.
Indeed, you shouldn't have to care much about whether something is hotpluggable or not anymore: virtually all storage devices are. (In the past, you had to shut down the machine to manipulate storage devices.)
So it should follow that external USB drives (whether HDDs or flash sticks), for example, should be non-removable and hopefully always hotpluggable.
Unfortunately:
Of course, hardware sucks so virtually all USB keyfobs reports "removable==1" probably because the maker of the device wanted to be "helpful" and make things work better on windows.
I have no sources regarding the real reasons, but it turns out that many USB drives report themselves as removable too. David's suggestion that it might be because of certain operating systems which didn't use to support hotplugging but did support removable devices (CD-ROMs, etc) sounds reasonable: the manufacturers reused the same technique to trick the OS into letting the user "eject" USB drives.
Nowadays I would guess all modern operating systems make the distinction clear, and this has many advantages from a management standpoint (e.g. you might have a hotpluggable DVD drive with removable DVDs and you would thus need to be more clear about which you want to interact with). Still, older drives and old habits die hard, so you'll still find some "removable" USB drives even if they're really not.
Note: The bug report linked is about udisks which is more often found in the free software world. But again, I'm sure all systems make the distinction now even if the terminology is not exactly the same. Also note that the terminology is really rather arbitrary, though whichever terms you use for these two concepts best be well understood.
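On Linux you can see exactly what a given device reported through sysfs; here is a tiny sketch (the device name is just an example):

    /* Print the "removable" flag Linux records for a block device.
     * A USB stick will typically print 1 here even though, strictly
     * speaking, it is a hotpluggable device rather than removable media. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *dev = (argc > 1) ? argv[1] : "sda";   /* e.g. "sda", "sdb" */
        char path[64];
        snprintf(path, sizeof(path), "/sys/block/%s/removable", dev);

        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }
        printf("%s: removable=%c\n", dev, fgetc(f));
        fclose(f);
        return 0;
    }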
A simple Google search could have answered your question...
Hot plugging is the ability to replace or install a device without shutting down the attached computer. Hot plugging is implemented when a peripheral device is added or removed; a device or working system requires reconfiguration; a defective component requires replacement; or a device and computer require data synchronization. Also known as hot swapping. Hot swapping allows easy accessibility to equipment and the convenience of uninterrupted systems.
Removable media are data storage devices capable of computer system removal without powering off the system. Removable media devices are used for backup, storage or transportation of data.
Source: techopedia.com

where is device driver code executed? Kernel space or User space?

Part 1:
To the Linux/Unix experts out there: could you please help me understand device drivers? As I understand it, a driver is a piece of code that directly interacts with hardware and exposes some APIs to access the device. My question is: where does this piece of code run, user space or kernel space?
I know that code executed in kernel space has some extra privileges, like accessing any memory location (please correct me if I'm wrong). If we install a third-party driver and it runs in kernel space, wouldn't this be harmful for the whole system? How does any OS handle this?
Part 2:
Let's take the example of a USB device (camera, keyboard, ...). How does the system recognize these devices? How does it know which driver to load? How does the driver know the address of the device in order to read and write data?
(If this is too big to answer here, please provide links to some good documentation or tutorials; I've tried and couldn't find answers to these. Please help.)
Part 1
On Linux, drivers run in kernel space. And yes, as you state, there are significant security implications to this. Most exceptions in drivers will take down the kernel and potentially corrupt kernel memory (with all manner of consequences). Buggy drivers also have an impact on system security, and malicious drivers can do absolutely anything they want.
A trend seen in the MacOSX and Windows NT kernels is user-space drivers. Microsoft has for some time been pushing the Windows User-Mode Driver Framework, and MacOSX has long provided user-space APIs for Firewire and USB drivers, and class-compliant drivers for many USB peripherals. It is quite unusual to install third-party kernel-mode device drivers on MacOSX.
Arguably, the bad reputation Windows used to have for kernel panics can be attributed to the (often poor quality) kernel mode drivers that came with just about every mobile phone, camera and printer.
Linux graphics drivers are pretty much all implemented in user space with a minimal kernel-resident portion, and FUSE allows file systems to be implemented in user space.
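Linux has some of this too: besides FUSE and the user-space parts of the graphics stack, the kernel exposes raw USB devices to user space, and a library such as libusb (my example here, not something mentioned above) lets an ordinary process enumerate and talk to devices without any kernel module of its own:

    /* Minimal user-space sketch using libusb-1.0: list every USB device
     * with its vendor/product IDs and device class.
     * Build with something like: gcc list_usb.c -lusb-1.0 */
    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        libusb_device **devs;

        if (libusb_init(&ctx) != 0)
            return 1;

        ssize_t n = libusb_get_device_list(ctx, &devs);
        for (ssize_t i = 0; i < n; i++) {
            struct libusb_device_descriptor desc;
            if (libusb_get_device_descriptor(devs[i], &desc) == 0)
                printf("vid=%04x pid=%04x class=0x%02x\n",
                       desc.idVendor, desc.idProduct, desc.bDeviceClass);
        }
        libusb_free_device_list(devs, 1);   /* 1 = also unreference the devices */
        libusb_exit(ctx);
        return 0;
    }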
Part 2
USB, Firewire and PCI (and also PCI-e) all have enumeration mechanisms through which a bus driver can match the device to a driver. In practice this means that all devices expose metadata describing what they are.
Contained within the metadata are a DeviceID, a VendorID and a description of the functions the device provides, along with associated ClassIDs. ClassIDs facilitate generic class drivers.
Conceptually, the operating system will attempt to find a driver that specifically supports the VendorID and DeviceID, and then fall back to one that supports the ClassID(s).
Matching devices to drivers is a core concept at the heart of the Linux Device Model, and the exact criteria used for matching are implemented in the match() function of the specific bus driver.
Once a device driver is bound to a device, it uses the bus driver (or addressing information given by it) to perform reads and writes. In the case of PCI and Firewire, this is a memory-mapped I/O address. For USB, it is bus addressing information.
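As a concrete (and heavily simplified) sketch of that matching step on Linux: a USB driver publishes an ID table, and the USB core compares it against the descriptors read during enumeration. The VID/PID values below are made up, and the class entry uses mass storage purely as an example:

    #include <linux/module.h>
    #include <linux/usb.h>

    static const struct usb_device_id demo_id_table[] = {
        { USB_DEVICE(0x1234, 0x5678) },               /* hypothetical VID/PID   */
        { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE,  /* or match by class:     */
                             0x06, 0x50) },           /* SCSI set, bulk-only    */
        { }                                           /* terminating entry      */
    };
    MODULE_DEVICE_TABLE(usb, demo_id_table);

    static int demo_probe(struct usb_interface *intf,
                          const struct usb_device_id *id)
    {
        dev_info(&intf->dev, "matched by VID/PID or class, binding driver\n");
        return 0;
    }

    static void demo_disconnect(struct usb_interface *intf)
    {
    }

    static struct usb_driver demo_driver = {
        .name       = "demo_driver",
        .id_table   = demo_id_table,
        .probe      = demo_probe,
        .disconnect = demo_disconnect,
    };
    module_usb_driver(demo_driver);
    MODULE_LICENSE("GPL");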
The Linux Documentation tree provides some insight into the design of the Linux Device Model, but isn't really entry-level reading.
I'd also recommend reading Linux Device Drivers (3rd Edition).

usb hard drive/pen drive emulation via bios

I am trying to read/write USB hard drives and flash drives from a DOS application.
I read the EDD spec, and it mentions that INT 13h function 48h can be used to get the interface path and device path for a particular disk drive. This covers USB as well as ATA interfaces.
This function also returns a Device Parameter Table Extension (DPTE) that gives I/O addresses through which software bypassing INT 13h can read/write the device. But as far as I know, this table only exists for ATA.
I want to read/write USB disks/pen drives without using INT 13h. Is that possible?
It is actually a disk-I/O-intensive application running in protected mode, so using INT 13h would carry a heavy performance penalty due to mode switching. That is why I am trying to avoid it.
Does the BIOS also initialize USB drives to appear as ATA drives? If so, I could use the DPTE to get the I/O base addresses of the command block and control block, and then access USB drives just like ATA drives. Am I right?
Thanks
This particular issue has been discussed at length in the FreeDOS community. The best guide to the whole topic is the FreeDOS technote "USB with DOS".
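For reference, the data structure behind the INT 13h extensions mentioned in the question (AH=42h/43h, extended read/write) is the Disk Address Packet. A sketch of its layout per the EDD spec (the struct and field names are mine):

    #include <stdint.h>

    #pragma pack(push, 1)
    struct disk_address_packet {
        uint8_t  size;        /* 0x10: size of this packet in bytes        */
        uint8_t  reserved;    /* must be 0                                 */
        uint16_t count;       /* number of sectors to transfer             */
        uint16_t buffer_off;  /* real-mode offset of the transfer buffer   */
        uint16_t buffer_seg;  /* real-mode segment of the transfer buffer  */
        uint64_t lba;         /* starting logical block address            */
    };
    #pragma pack(pop)
    /* Issued via INT 13h, AH=42h (read) or 43h (write), DL = drive number,
     * DS:SI -> struct disk_address_packet. */

Note that issuing it still means calling INT 13h from real mode (or a VM86 monitor), which is exactly the mode-switching overhead you are trying to avoid.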

using pci to interconnect motherboards

I've got a few old mobos, and I was wondering whether it might be possible to create a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards, allowing one of the mobos to access the CPU and RAM on the other. I'm sure it would be an arduous undertaking, involving writing a device driver for the header boards and then writing an application to make use of the interconnect; perhaps a simple demo demonstrating a thread running on each processor and use of both sets of RAM, or perhaps a mini virtual machine that maps 2x3 GB of RAM on 32-bit mobos into a single 6 GB address space. A microcontroller may be needed on each PCI header card to act as a translator.
Given that mobos almost always have multiple PCI slots, I wonder if these interconnected card pairs could be used to daisy-chain mobos into a sort of high-speed Beowulf cluster.
I would use Debian on each mobo and probably just an ATmega128 on each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
Does anyone have any advice, or has this sort of thing been done before?
Update:
Thanks Martin. What you say makes sense, and it would also seem that if it were possible, it would have already been done.
Instead, would it be possible to indirectly control the slave CPU by booting it from a "pretend" bootable storage device (hard disk, USB stick, etc.)? As long as the slave mobo thinks it's being operated by an operating system on a real device, it should work.
This could potentially extend to any interface (SATA, IDE, USB, etc.): if you connected two PCs together with a SATA/IDE/USB cable (plug one end of an IDE ribbon into one mobo and the other end into another mobo), that would be all the hardware you need. The key is in creating a new driver for that interface on the master PC, so that instead of treating the interface as having a storage device on it, the master PC drives it as a dummy bootable hard disk for the slave computer. This would still be a pretty difficult job for me, because I've never done device drivers before, but at least I wouldn't need a soldering iron (which would be much further beyond me). I might be able to take an open-source IDE driver for Linux, study it, and then butcher it to create something that kind of acts in reverse (instead of getting data off it, an application puts data onto it for the slave machine to access like a hard disk). I could then take a basic Linux kernel and try booting the slave computer from an application on the master computer (via the butchered master-PC IDE/SATA/USB device driver). For safety, I would probably try to isolate my customised driver as much as possible by targeting an interface not used for anything else on the master PC (the master PC might use all SATA hard disks with the IDE bus normally unused, so a custom IDE driver might cause fewer problems for the host system, because it is SATA-driven).
Does anyone know if anything like this (faking a bootable hard disk from another PC) has ever been tried before? It would make a pretty cool Hackaday video on YouTube, but, more seriously, it could add a new dimension to parallel computing if it proved promising.
The PCI bus can't take over the other CPU.
You could make an interconnect that can transfer data from a program on one machine to another. An Ethernet card is the most common implementation, but for high-performance clusters there are faster direct connections like InfiniBand.
Unfortunately, PCI is more difficult to build cards for than the old ISA bus: you need surface-mount controller chips and specific track layouts to match PCI's impedance requirements.
Going faster than a few megabits per second involves understanding things like transmission lines and the characteristics of the connection cable.
I would use Debian on each mobo and probably just an ATmega128 on each card with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
LOL. PCI is a 32-bit, 33 MHz bus at minimum - simply out of reach for an ATmega.
But your idea of:
a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards to allow one of the mobos to access the CPU and RAM on the other [...]
This is cheaply possible with just a pair of PCI FireWire (IEEE 1394) cards (and a FireWire cable). There is even a Linux driver that allows remote debugging over FireWire.

How to read/write a hard disk when CPU is in Protected Mode?

I am doing an OS experiment. Until now, all my code has used the real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enters Protected Mode, the real-mode BIOS interrupt service routines are no longer available.
I have a feeling that I need to do some hardware drivers now. Am I right? Is this why an OS is so difficult to develop?
I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the "Command Block Registers" of a hard disk range from 0x1F0 to 0x1F7. I am wondering whether the register addresses of so many different hardware devices are consistent across different platforms, or whether I have to detect them before using them. How would I do that?
Since I am not sure how to read/write a floppy or a hard disk in Protected Mode, I have to use BIOS interrupts to load all my necessary kernel files from the floppy before entering Protected Mode. What can I do if my kernel files exceed the real-mode 1 MB limit?
How do I read/write a hard disk when the CPU is in Protected Mode?
I have a feeling that I need to do some hardware drivers now. Am I right?
Strictly speaking (and depending on your requirements), "need" may be too strong - in theory you can switch back to real mode to use BIOS functions, use a virtual 8086 monitor, or write an interpreter that interprets the firmware's instructions instead of executing them directly.
However, the BIOS is awful (designed for an "only one thing can happen at a time" environment that is completely unsuitable for modern systems, where it's expected that all devices are able to do useful work at the same time), the BIOS is deprecated (replaced by UEFI), and it's hard to call something an OS when it doesn't have control over the hardware (because the firmware still has control of the hardware).
Note that if you do continue using BIOS functions; the state of various pieces of hardware (interrupt controller, PCI configuration space for various devices, any PCI bridges, timer/s, etc) has to match the expectations of the BIOS. What this means is that you will either be forced to accept huge limitations (e.g. never being able to use IO APICs, etc. properly) because it will break BIOS functions used by other pre-existing code, or you will be forced to do a huge amount of work to make the BIOS happy (emulating various pieces of hardware so the BIOS thinks the hardware is still in the state it expects even though it's not).
In other words; if you want an OS that's good then you do need to write drivers; but if you only want an OS that doesn't work on modern computers (UEFI), has severe performance problems ("only one thing can happen at a time"), is significantly harder to improve, doesn't support any devices that the BIOS doesn't support (e.g. sound cards), and doesn't support any kind of "hot-plug" (e.g. plugging in a USB device), then you don't need to write drivers.
Is this why an OS is so difficult to develop?
A bad OS is easy to develop. For example, something that is as horrible as MS-DOS (but not compatible with MS-DOS) could probably be slapped together in 1 month.
What makes an OS difficult to develop is making it good. Things like caring about security, trying to get acceptable performance, supporting multi-CPU, providing fault tolerance, trying to make it more future-proof/extensible, providing a nice GUI, creating well thought-out standards (for APIs, etc), and power management - these are what makes an OS difficult.
Device drivers add to the difficulty. Before you can write drivers you'll need support for the things drivers depend on (memory management, IRQ handling, etc. - possibly including a scheduler and some kind of communication); then something to auto-detect devices (e.g. to scan PCI configuration space) and try to start the drivers for whatever was detected (possibly/hopefully from a file system or initial RAM disk, with the ability to add/unload/replace drivers without rebooting); and something to manage the tree of devices - e.g. so that you know which "child devices" will be affected when you put a "parent device" to sleep (or the "parent device" has hardware faults, or its driver crashes, or the device is unplugged). Of course then you'd need to write the device drivers themselves, where the difficulty depends on the device (e.g. a device driver for an NVidia GPU is probably harder to write than a device driver for an RS-232 serial port controller).
For storage devices themselves (assuming an "80x86 PC") there are about 8 standards that matter (ATA/ATAPI, AHCI and NVMe; then OHCI, UHCI, EHCI and xHCI for USB controllers; then the USB mass storage device spec). However, there are also various RAID controllers and/or SCSI controllers for which there is no standard (each of these controllers needs its own driver), and some obsolete stuff (the floppy controller, tape drives that plugged into the floppy controller or parallel port, three proprietary CD-ROM interfaces that were built into sound cards).
Please understand that supporting all of this isn't the goal. The goal should be to provide things device drivers depend on (described above), then provide specifications that describe the device driver interfaces (possibly/hopefully including things like IO priorities and synchronization, and notifications for device/media removal, error handling, etc) so that other people can write device drivers for you. Once that's done you might implement a few specific device drivers yourself (e.g. maybe just AHCI initially - everything else could be left until much later or until someone else writes it).
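To make "write drivers yourself" a little less abstract, here is a rough sketch of about the simplest possible storage access: a polled ATA PIO read of one sector over the legacy primary-channel ports (0x1F0-0x1F7) that the question mentions. A real driver must discover the controller (e.g. via PCI configuration space) instead of assuming these addresses, and must handle errors, timeouts, slave drives and interrupts:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    static inline uint16_t inw(uint16_t port)
    {
        uint16_t val;
        __asm__ volatile ("inw %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Read one 512-byte sector at 'lba' (28-bit) from the master drive. */
    void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
    {
        outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));  /* drive/head: master, LBA  */
        outb(0x1F2, 1);                            /* sector count = 1         */
        outb(0x1F3, lba & 0xFF);                   /* LBA low                  */
        outb(0x1F4, (lba >> 8) & 0xFF);            /* LBA mid                  */
        outb(0x1F5, (lba >> 16) & 0xFF);           /* LBA high                 */
        outb(0x1F7, 0x20);                         /* command: READ SECTORS    */

        while (inb(0x1F7) & 0x80)                  /* wait for BSY to clear    */
            ;
        while (!(inb(0x1F7) & 0x08))               /* wait for DRQ             */
            ;
        for (int i = 0; i < 256; i++)              /* 256 words = 512 bytes    */
            buf[i] = inw(0x1F0);
    }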
You don't necessarily HAVE to write drivers. You could drop back into real mode to call the BIOS service, and then hop back into protected mode when you're done. This is essentially how DPMI DOS extenders (DOS4GW, Causeway, etc) work.
The source code for the Causeway DOS extender is public domain, you can look at that for a reference. http://www.devoresoftware.com/freesource/cwsrc.htm
