I am new to Linux kernel device driver code.
One question on top of another: what is the difference between:
Character Device
Platform Driver
Kernel Module
I am asking because the code I am examining contains three sections, one for each.
Platform Device Driver:
A platform device driver is generally written for on-chip components/devices and for on-chip/off-chip non-discoverable devices.
If there is an on-chip/off-chip device which has no self-identifying capability, such as I2C devices, GPIO-line-based devices, or on-chip timers, then the driver must identify the device, because the device has no self-ID and cannot announce itself. This generally happens with bus lines and on-chip components.
Example platform devices: I2C devices; see kernel/Documentation/i2c/instantiating-devices.
Basically, all device drivers can be categorized as character or block drivers, based on the data-transaction size.
Though there are many sub-classifications, such as network device drivers and X device drivers, they too can be reduced to devices whose data transactions (operations) are carried out a few bytes at a time.
Typically, a platform device driver falls into the character device driver category, as it generally involves on-chip operations, for initialization and for transferring a few bytes whenever needed, but not blocks (KB, MB, GB) of data.
Kernel Module?
Now, a driver can either be compiled into the kernel image (zImage/bzImage/...), or it can be compiled outside the kernel as an optionally loadable modular driver. Such a module is not part of the kernel image; it lives in the filesystem as a .ko (kernel object) file (find /lib/modules/`uname -r`/ -name "*.ko") and can be inserted (using modprobe/insmod) or removed (using rmmod/modprobe -r) as necessary.
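As a minimal sketch of such a modular driver (this is the standard hello-world module shape from the kernel documentation, not the code the question refers to; it builds against kernel headers with kbuild, so it is shown for shape only):

```c
/* hello.c - minimal loadable kernel module skeleton.
 * Build against kernel headers with kbuild; insert with insmod,
 * remove with rmmod. Not a user-space program. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");
    return 0;              /* 0 = success; module stays resident */
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);   /* runs at insmod/modprobe time */
module_exit(hello_exit);   /* runs at rmmod time */
```

After `insmod hello.ko`, the module appears in `lsmod`; `rmmod hello` removes it again, which is exactly the dynamic behavior a built-in driver cannot offer.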
On the other hand, a built-in driver cannot be removed dynamically, even if we don't need it momentarily. A built-in driver remains in the kernel, and hence in RAM, as long as the system is running, even if the respective device is not found, not necessary, or shut down, wasting memory (RAM).
The module (modular driver) steps in only when necessary, from secondary storage to RAM, and can be removed when the device is removed or not in action. This saves RAM and helps dynamic allocation of resources.
Related
I wonder how Windows distinguishes between different drives and memory modules; I mean, how can Windows write something specifically to disk C or disk D?
In every programming language, when you declare a variable it is stored in memory, and when you need to store something on the HDD, you have to use some library.
So, how does Windows handle it?
Does it treat all disks and memory modules as a single line of data, saving only each medium's beginning address? Like: 0x00000 is where disk C begins, 0x15616 is where disk D begins.
Like @MSalters said,
C: is a symlink to something like Device\HarddiskVolume1.
What it means is that disk drivers on Windows are implemented as virtual filesystems a bit like on Linux. I'll explain for Linux since there's much more documentation but the answer is quite similar for Windows even though both OSes do things differently.
Basically, on Linux everything is a file. Linux ships with disk drivers as these are at the basis of every computer. Linux exposes a driver model like every OS. The Linux driver model for files (including hard disks) exposes functions that will be called by the kernel to read/write to disk. There are open, read and write functions that the kernel expects to be present for a file driver.
If you wanted, you could write a disk driver and replace the existing one. You write drivers as modules that you can then load in the kernel using certain utilities that ship with Linux. I won't share more details as I'm not that much aware. Once your code is loaded in the kernel, it has access to all kernel code and all hardware since it runs in kernel mode.
Today, disk drivers probably use PCI DMA: a controller connected to the PCI bus that performs disk operations without involving the CPU, loading disk data into RAM directly. The PCI convention says that all compatible devices (like PCI DMA controllers) must expose a certain interface to the computer. This interface is mostly a set of memory-mapped registers that can be used to send commands to the controller. The OS writes data into these registers to tell the DMA controller to perform disk operations. The DMA controller then triggers an interrupt once it is done. The OS then knows that the data is loaded into RAM and ready for use. The same applies for writing.
The OS knows the location of these registers by looking in the ACPI tables at boot.
In modern Windows (2000 or later) C: is a symlink to something like Device\HarddiskVolume1. The number there can vary. Typically, \Device\Bootpartition is also a symlink to the same HarddiskVolume.
Windows doesn't use libraries to write to disk. Instead, it uses drivers. The chief difference is that drivers run as part of the OS kernel, while libraries run as part of applications.
I'm reading a book and it says:
After U-Boot loads Linux kernel, the kernel will claim all the resources of U-Boot
What does this mean? Does it mean that all data structures allocated in U-Boot will be discarded?
For example, during U-Boot, PCIe and the network device are initialized.
After booting the Linux kernel, will the PCIe and network device data structures be discarded? Will the Linux kernel initialize PCIe and the network device again? Or will U-Boot transfer some data to the kernel?
How the communication happens depends on your CPU architecture, but it is usually via a special place in RAM, flash, or the filesystem. No data structures are transferred; they would be meaningless to the kernel, and the memory space will differ between the two. U-Boot generally passes boot parameters, like what type of hardware is present, what memory to use for something, or which mode to use for a specific driver. So yes, the kernel will re-initialize the hardware. The exception may be some low-level CPU specifics which the kernel may expect U-Boot or a BIOS to have set up already.
Depending on your architecture, there may be different mechanisms for U-Boot to communicate with the Linux kernel.
Actually, there may be some structures defined by U-Boot which are transferred to and used by the kernel via ATAGs. The address at which these structures are passed is stored in the r2 register on ARM. They convey information such as available RAM size and location, the kernel command line, etc.
Note that some architectures (like ARM again) support a device tree, which is intended to describe the hardware the kernel is going to run on, as well as the kernel command line, memory, and other things. This description is usually created at kernel compile time and loaded into memory by U-Boot; in the case of ARM, its address is passed through the r2 register.
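For illustration, a hedged, minimal device-tree source fragment (node names and addresses are invented) showing how RAM and the kernel command line are described; U-Boot loads the compiled blob and, on ARM, passes its address in r2:

```dts
/ {
    model = "example-board";            /* hypothetical board */
    #address-cells = <1>;
    #size-cells = <1>;

    chosen {
        /* kernel command line, as described above */
        bootargs = "console=ttyS0,115200 root=/dev/mmcblk0p2";
    };

    memory@80000000 {
        device_type = "memory";
        /* available RAM: start address and size */
        reg = <0x80000000 0x20000000>;  /* 512 MiB at 0x80000000 */
    };
};
```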
The interesting thing about this (regarding your question) is that U-Boot can change this device-tree structure before passing it to the kernel, through the device-tree overlay mechanism. So this is a (relatively) new way of U-Boot/kernel communication. Note that the device tree is not supported on some architectures.
And in the end, yes, the hardware is re-initialized by the kernel even if it has already been initialized by U-Boot, except for the memory controller and some other very low-level initialization, AFAIK.
I'm evaluating to port a device driver I wrote several years ago from 32 to 64 bits. The physical device is a 32-bit PCI card. That is, the device is 32 bits but I need to access it from Win7x64. The device presents some registers to the Windows world and then performs heavy bus master data transferring into a chunk of driver-allocated memory.
I've read in the Microsoft documentation that you can signal whether the driver supports 64-bit DMA or not. If it doesn't, then the DMA is double buffered. However, I'm not sure if this is the case. My driver would/could be a full 64-bit one, so it could support 64-bit addresses in the processor address space, but the actual physical device WON'T support it. In fact, the device BARs must be mapped under 4 GB and the device must get a PC RAM address to perform bus master below 4 GB. Does this mean that my driver will go through double buffering always? This is a very performance-sensitive process and the double buffering could prevent the whole system from working.
Of course, designing a new 64-bit PCI (or PCI-E) board is out of the question.
Anybody could give me some resources for this process (apart from MS pages)?
Thanks a lot!
This is an old post, I hope the answer is still relevant...
There are two parts here, PCI target and PCI master access.
PCI target access: The driver maps PCI BARs to 64bit virtual address space and the driver just reads/writes through a pointer.
PCI master access: You need to create a DmaAdapter object by calling IoGetDmaAdapter(). When creating, you also describe your device is a 32bit (see DEVICE_DESCRIPTION parameter). Then you call DmaAdapter::AllocateCommonBuffer() method to allocate a contiguous DMA buffer in PC RAM.
I am not sure about double-buffering, though. From my experience, double-buffering is not used; instead, DmaAdapter::AllocateCommonBuffer() simply fails if it cannot allocate a buffer that satisfies the DEVICE_DESCRIPTION (in your case, 32-bit DMA addressing).
There's no problem writing a 64-bit driver for a device only capable of 32-bit PCI addressing. As Alexey pointed out, the DMA adapter object you create specifies the HW addressing capabilities of your device. As you allocate DMA buffers, the OS takes this into account and will make sure to allocate these within your HW's accessible region. Linux drivers behave similar, where your driver must supply a DMA address mask to associate with your device that DMA functions used later will refer to.
The performance hit you could run into is if your application allocates a buffer that you need to DMA to/from. This buffer could be scattered all throughout memory, with pages in memory above 4G. If your driver plans to DMA to these, it will need to lock the buffer pages in RAM during the DMA and build an SGL for your DMA engine based on the page locations. The problem is, for those pages above 4G, the OS would then have to copy/move them to pages under 4G so that your DMA engine is able to access them. That is where the potential performance hit is.
Part1:
To the Linux/Unix experts out there: could you please help me understand device drivers? As I understand it, a driver is a piece of code that directly interacts with hardware and exposes some APIs to access the device. My question is: where does this piece of code run, user space or kernel space?
I know that code executed in kernel space has some extra privileges, like accessing any memory location (please correct me if I'm wrong). If we install a third-party driver and it runs in kernel space, wouldn't this be harmful to the whole system? How does any OS handle this?
Part2:
Let's take the example of a USB device (camera, keyboard, ...). How does the system recognize these devices? How does the system know which driver to install? How does the driver know the address of the device to read and write data?
(If this is too big to answer here, please provide links to some good documentation or tutorials. I've tried and couldn't find answers for these. Please help.)
Part 1
On Linux, drivers run in kernel space. And yes, as you state, there are significant security implications. Most exceptions in drivers will take down the kernel, and can potentially corrupt kernel memory (with all manner of consequences). Buggy drivers also have an impact on system security, and malicious drivers can do absolutely anything they want.
A trend seen in the MacOSX and Windows NT kernels is user-space drivers. Microsoft has for some time been pushing the Windows User-Mode Driver Framework, and MacOSX has long provided user-space APIs for Firewire and USB drivers, and class-compliant drivers for many USB peripherals. It is quite unusual to install third-party kernel-mode device drivers on MacOSX.
Arguably, the bad reputation Windows used to have for kernel panics can be attributed to the (often poor quality) kernel mode drivers that came with just about every mobile phone, camera and printer.
Linux graphics drivers are pretty much all implemented in user space with a minimal kernel-resident portion, and FUSE allows the implementation of file systems in user space.
Part 2
USB, Firewire, PCI (and also PCI-e) all have enumeration mechanisms through which a bus driver can match a device to a driver. In practice this means that all devices expose metadata describing what they are.
Contained within the metadata is a DeviceID, VendorID and a description of functions the device provides and associated ClassIDs. ClassIDs facilitate generic Class Drivers.
Conceptually, the operating system will attempt to find a driver that specifically supports the VendorID and DeviceID, and then fall back to one that supports the ClassID(s).
Matching devices to drivers is a core concept at the heart of the Linux device model; the exact criteria used for matching are implemented in the match() function of the specific bus driver.
Once a device driver is bound to a device, it uses the bus driver (or addressing information given by it) to perform reads and writes. In the case of PCI and Firewire, this is a memory-mapped I/O address. For USB, it is bus addressing information.
The Linux Documentation tree provides some insight into the design of the Linux Device Model, but isn't really entry-level reading.
I'd also recommend reading Linux Device Drivers, 3rd Edition.
I am doing an OS experiment. Until now, all my code utilized the real mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables Protected Mode, all the real mode BIOS interrupt service routines won't be available.
I have a feeling that I need to do some hardware drivers now. Am I right? Is this why an OS is so difficult to develop?
I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the "Command Block Registers" of a hard disk range from 0x1F0 to 0x1F7. I am wondering whether the register addresses of so many different hardware devices are consistent on different platforms? Or do I have to detect that before using them? How would I do that?
Since I am not sure about how to read/write a floppy or a hard disk in Protected Mode, I have to use BIOS interrupts to load all my necessary kernel files from the floppy before entering protected mode. What could I do if my kernel file exceeds the real mode 1M space limit?
How do I read/write a hard disk when the CPU is in Protected Mode?
I have a feeling that I need to do some hardware drivers now. Am I right?
Strictly speaking; (and depending on your requirements) "need" may be too strong - in theory you can switch back to real mode to use BIOS functions, or use a virtual8086 monitor, or write an interpreter that interprets the firmware's instructions instead of executing them directly.
However, the BIOS is awful (designed for an "only one thing can happen at a time" environment that is completely unsuitable for modern systems, where it's expected that all devices can do useful work at the same time), the BIOS is deprecated (replaced by UEFI), and it's hard to call something an OS when it doesn't have control over the hardware (because the firmware still has control of the hardware).
Note that if you do continue using BIOS functions; the state of various pieces of hardware (interrupt controller, PCI configuration space for various devices, any PCI bridges, timer/s, etc) has to match the expectations of the BIOS. What this means is that you will either be forced to accept huge limitations (e.g. never being able to use IO APICs, etc. properly) because it will break BIOS functions used by other pre-existing code, or you will be forced to do a huge amount of work to make the BIOS happy (emulating various pieces of hardware so the BIOS thinks the hardware is still in the state it expects even though it's not).
In other words; if you want an OS that's good then you do need to write drivers; but if you only want an OS that doesn't work on modern computers (UEFI), has severe performance problems ("only one thing can happen at a time"), is significantly harder to improve, doesn't support any devices that the BIOS doesn't support (e.g. sound cards), and doesn't support any kind of "hot-plug" (e.g. plugging in a USB device), then you don't need to write drivers.
Is this why an OS is so difficult to develop?
A bad OS is easy to develop. For example, something that is as horrible as MS-DOS (but not compatible with MS-DOS) could probably be slapped together in 1 month.
What makes an OS difficult to develop is making it good. Things like caring about security, trying to get acceptable performance, supporting multi-CPU, providing fault tolerance, trying to make it more future-proof/extensible, providing a nice GUI, creating well thought-out standards (for APIs, etc), and power management - these are what makes an OS difficult.
Device drivers add to the difficulty. Before you can write drivers you'll need support for the things drivers depend on (memory management, IRQ handling, etc., possibly including a scheduler and some kind of communication); then something to auto-detect devices (e.g. to scan PCI configuration space) and try to start the drivers for whatever was detected (possibly/hopefully from a file system or initial RAM disk, with the ability to add/unload/replace drivers without rebooting); and something to manage the tree of devices, e.g. so that you know which "child devices" will be affected when you put a "parent device" to sleep (or the "parent device" has hardware faults, or its driver crashes, or the device is unplugged). Of course then you'd need to write the device drivers themselves, where the difficulty depends on the device (e.g. a device driver for an NVidia GPU is probably harder to write than one for an RS232 serial port controller).
For storage devices themselves (assuming "80x86 PC") there are about 8 standards that matter (ATA/ATAPI, AHCI and NVMe; then OHCI, UHCI, EHCI and xHCI for USB controllers; then the USB mass storage device spec). However, there are also various RAID and/or SCSI controllers for which there is no standard (each of these controllers needs its own driver), and some obsolete stuff (the floppy controller, tape drives that plugged into the floppy controller or parallel port, and three proprietary CD-ROM interfaces that were built into sound cards).
Please understand that supporting all of this isn't the goal. The goal should be to provide things device drivers depend on (described above), then provide specifications that describe the device driver interfaces (possibly/hopefully including things like IO priorities and synchronization, and notifications for device/media removal, error handling, etc) so that other people can write device drivers for you. Once that's done you might implement a few specific device drivers yourself (e.g. maybe just AHCI initially - everything else could be left until much later or until someone else writes it).
You don't necessarily HAVE to write drivers. You could drop back into real mode to call the BIOS services, then hop back into protected mode when you're done. This is essentially how DPMI DOS extenders (DOS4GW, Causeway, etc.) work.
The source code for the Causeway DOS extender is public domain, you can look at that for a reference. http://www.devoresoftware.com/freesource/cwsrc.htm