How does Windows distinguish between disks? - windows

I wonder how Windows distinguishes between different drives and memory modules. I mean, how can Windows write something specifically to disk C or disk D?
In every programming language, when you declare a variable it gets saved into memory, and when you need to store something on the HDD, you have to use some library.
So, how does Windows handle it?
Does it treat all disks and memory modules as a single line of data, and does it only save each medium's beginning address? For example, 0x00000 is where disk C begins and 0x15616 is where disk D begins.

Like #MSalters said,
C: is a symlink to something like \Device\HarddiskVolume1.
What it means is that disk drivers on Windows are implemented a bit like virtual filesystems on Linux. I'll explain for Linux, since there's much more documentation, but the answer is quite similar for Windows even though both OSes do things differently.
Basically, on Linux everything is a file. Linux ships with disk drivers, since these are at the basis of every computer, and like every OS it exposes a driver model. The Linux driver model for files (including hard disks) defines functions that the kernel will call to read from and write to the disk: there are open, read and write functions that the kernel expects a file driver to provide.
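A hedged sketch of what that interface looks like in practice: a Linux character-driver file_operations table. The names my_open/my_read/my_write are placeholders, not a real driver, and the callbacks are stubs.

    #include <linux/fs.h>      /* struct file_operations */
    #include <linux/module.h>

    /* Hypothetical callbacks -- the kernel calls these when a process
     * open()s, read()s or write()s the device node backed by this driver. */
    static int my_open(struct inode *inode, struct file *filp) { return 0; }

    static ssize_t my_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos) { return 0; }

    static ssize_t my_write(struct file *filp, const char __user *buf,
                            size_t count, loff_t *ppos) { return count; }

    static const struct file_operations my_fops = {
        .owner = THIS_MODULE,
        .open  = my_open,
        .read  = my_read,
        .write = my_write,
    };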
If you wanted, you could write a disk driver and replace the existing one. You write drivers as modules that you can then load into the kernel using utilities that ship with Linux (insmod/modprobe). I won't go into more detail as I'm not that familiar with the specifics. Once your code is loaded in the kernel, it has access to all kernel code and all hardware since it runs in kernel mode.
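For illustration, a minimal kernel module skeleton (the standard boilerplate loaded with insmod and unloaded with rmmod; a real disk driver would register itself with the block layer in the init function):

    #include <linux/module.h>
    #include <linux/init.h>

    static int __init mydrv_init(void)
    {
        pr_info("mydrv: loaded\n");   /* runs at insmod time, in kernel mode */
        return 0;                     /* non-zero would abort the load */
    }

    static void __exit mydrv_exit(void)
    {
        pr_info("mydrv: unloaded\n"); /* runs at rmmod time */
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");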
Today, disk drivers probably use DMA over PCI: a DMA controller connected to the PCI bus performs disk operations without involving the CPU and loads disk data into RAM directly. The PCI convention says that all compatible devices (like PCI DMA controllers) must expose a certain interface to the computer. This interface is mostly a set of memory-mapped registers that can be used to send commands to the controller. The OS writes to these registers to tell the DMA controller to perform disk operations, and the DMA controller triggers an interrupt once it is done. The OS then knows that the data has been loaded into RAM and is ready for use. The same applies to writing.
The OS knows the location of these registers by looking in the ACPI tables at boot.

In modern Windows (2000 or later), C: is a symlink to something like \Device\HarddiskVolume1. The number there can vary. Typically, \Device\Bootpartition is also a symlink to the same HarddiskVolume.
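You can see this mapping from user mode; here is a small sketch using the Win32 QueryDosDevice call, which resolves a drive letter to the NT device path it links to (error handling kept minimal):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        char target[512];

        /* Ask the object manager what "C:" is a symbolic link to;
         * this typically prints something like \Device\HarddiskVolume1. */
        if (QueryDosDeviceA("C:", target, sizeof(target)))
            printf("C: -> %s\n", target);
        else
            printf("QueryDosDevice failed: %lu\n", GetLastError());

        return 0;
    }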
Windows doesn't use libraries to write to disk. Instead, it uses drivers. The chief difference is that drivers run as part of the OS kernel, while libraries run as part of applications.

Related

Are PCIe device drivers beneficial if using Linux as a bootloader for bare-metal code?

I am developing an embedded system on a PowerPC processor and there is a need for communication with an FPGA via PCIe. I wish to use Linux/embedded Linux as a bootloader to leverage its PCIe initialization code and driver API for simplified PCIe driver development. However, in the end I want to be running bare-metal code (no OS running). So I am looking at using PetitBoot/kexec to jump from Linux to my own code.
Is this possible?
My current understanding of PCIe drivers leads me to believe that once the device is initialized, so long as I have a pointer to the address space, I should be able to simply execute MMIO R/W operations directly to the memory space. So even if kexec overwrites the driver code I should be able to use the device because the driver has done its job already.
Is this correct?
If not, what are my alternatives?
I don't think this approach would be a good idea. Drivers that were written with the Linux OS in mind are going to assume that all of the OS's resources are available, not just memory allocations. For example, a driver may configure interrupt handlers, but when the OS is no longer available, your hardware may hang because nothing is acknowledging and servicing its interrupt requests.
I'm skeptical of the memory initialization as well. I suppose you could theoretically allocate some DMA memory and pass the resulting physical address to your bare-metal application as it takes over, but the whole process seems sketchy. It would be very difficult to make sure everything in Linux is shut down cleanly while leaving the PCIe subsystem running. You'll have to look at the driver's shut down routines and see what it does to the card to make sure it doesn't shut down the device and make it unresponsive to your bare-metal code.
I would suggest that you instead go through the Linux-based driver and use it as a guide to construct a new bare-metal driver. Copy the initialization code that you need, and leave out the Linux-specific configuration details.

Viewing Linux kernel drivers built into the kernel, and how they get bound/mounted/started

I'm having a bit of a hard time fully understanding how the kernel starts in Linux. I'm a WinCE developer and our company has decided to go with Linux instead now.
We outsourced all of the board bring-up, and the package I received for our prototype board is quite a bit different from the Nitrogen6X we have been using.
Before I start listing the differences between the distros: the kernels are identical. The distro we have been using is a BusyBox system; the one we received from the vendor uses sysvinit. I removed mdev from BusyBox and we are only using udev.
When I use the kernel on our BusyBox build, the touchscreen driver breaks, or doesn't run, or does something totally magical. I'm not quite sure... There is a /dev/input/event0 device which, when running on the sysvinit side, is a touch device. Is the kernel not the mechanism that binds the built-in drivers to a device node? I thought udev was for more dynamic events in the system.
On the other hand, I can't really tell what's been loaded on my device. Is there a way to list running drivers that were built into the kernel, so I can tell whether my touchpad driver is up? On WinCE this is a fairly simple process of looking at the registry to see which devices were loaded.
I guess what I'm really trying to discover isn't so much how to add a driver to the kernel, it's how the whole thing gets plumbed together. I've found plenty of documents on creating kernel modules, but I haven't found a good resource on how to pull everything together from scratch so you can actually use said modules. Going back to the example of the touchscreen driver: it's built into the kernel, so how does it get plugged into /dev/input/event0?
I'm having a difficult time finding good resources, mostly because searching Google for variations of linux/drivers/device nodes pulls in tons of random results from everywhere.
What you probably want to use now is evtest. It will let you see which input devices are present and ready to use on your system.
To get more information on the input subsystem and more generic information on how the kernel is working, I can direct you to our training materials. The materials are free to download, use and redistribute.
The general answer is, there is no single, easy place to look to discover what drivers have been loaded by the kernel if they are compiled in. Of course, lsmod will display any drivers that were dynamically loaded after kernel boot.
The kernel does not create device nodes. That is, to quote your question, the kernel does not "bind" the driver to the device node. The association between kernel driver and device node is contained in the major and minor numbers registered when the driver is initialized. You can have a device node on your file system for which there is no corresponding driver (common especially in older devices where device nodes were statically created on the file system) and you can also have a driver installed for which there is no device node.
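To make that concrete, here is a hedged sketch of how a driver acquires those major/minor numbers when it initializes; modern kernels typically use alloc_chrdev_region plus a cdev (the name "mytouch" and the single minor are placeholders):

    #include <linux/module.h>
    #include <linux/fs.h>     /* alloc_chrdev_region */
    #include <linux/cdev.h>   /* struct cdev, cdev_add */

    static dev_t devno;          /* holds the major/minor pair */
    static struct cdev my_cdev;
    static const struct file_operations my_fops = { .owner = THIS_MODULE };

    static int __init my_init(void)
    {
        int err;

        /* Ask the kernel for one dynamically allocated major number and
         * minor 0; this pair is what a device node on disk refers to. */
        err = alloc_chrdev_region(&devno, 0, 1, "mytouch");
        if (err)
            return err;

        cdev_init(&my_cdev, &my_fops);
        err = cdev_add(&my_cdev, devno, 1);   /* make the driver live */
        if (err)
            unregister_chrdev_region(devno, 1);

        pr_info("mytouch: major %d minor %d\n", MAJOR(devno), MINOR(devno));
        return err;
    }

    static void __exit my_exit(void)
    {
        cdev_del(&my_cdev);
        unregister_chrdev_region(devno, 1);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");

With that major/minor pair registered, a node such as /dev/mytouch could be created manually with mknod, or automatically by udev as described below.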
Modern Linux distros create device nodes dynamically on a mount point called /dev, and this is usually a tmpfs file system, meaning it is volatile: it gets destroyed on every shutdown and recreated dynamically on each new boot.
udev is the magic that creates most device nodes based on events that it receives from the kernel when a new device is discovered (this can be after boot on device plugin, like a USB disk) or on startup when udev reads the queued events and acts on them. As you noted, busybox has a limited udev implementation called mdev.
Study udev and you will get a much better understanding of the process. Hope this helps a little.

Where is device driver code executed? Kernel space or user space?

Part1:
To the Linux/Unix experts out there: could you please help me understand device drivers? As I understand it, a driver is a piece of code that directly interacts with hardware and exposes some APIs to access the device. My question is: where does this piece of code run, user space or kernel space?
I know that code executed in kernel space has some extra privileges, like accessing any memory location (please correct me if I'm wrong). If we install a third-party driver and it runs in kernel space, wouldn't this be harmful for the whole system? How does an OS handle this?
Part2:
Let's take the example of a USB device (camera, keyboard, ...). How does the system recognize these devices? How does it know which driver to install? How does the driver know the address of the device to read and write data?
(If this is too big to answer here, please provide links to some good documentation or tutorials; I've tried and couldn't find answers for these.)
Part 1
On Linux, drivers run in kernel space. And yes, as you state, there are significant security implications to this. Most exceptions in drivers will take down the kernel and potentially corrupt kernel memory (with all manner of consequences). Buggy drivers also have an impact on system security, and malicious drivers can do absolutely anything they want.
A trend seen on Mac OS X and Windows NT kernels is user-space drivers. Microsoft has for some time been pushing the Windows User-Mode Driver Framework, and Mac OS X has long provided user-space APIs for FireWire and USB drivers, and class-compliant drivers for many USB peripherals. It is quite unusual to install third-party kernel-mode device drivers on Mac OS X.
Arguably, the bad reputation Windows used to have for kernel panics can be attributed to the (often poor quality) kernel mode drivers that came with just about every mobile phone, camera and printer.
Linux graphics drivers are pretty much all implemented in user space with a minimal kernel-resident portion, and FUSE allows the implementation of file systems in user space.
Part 2
USB, FireWire, PCI (and also PCIe) all have enumeration mechanisms through which a bus driver can match the device to a driver. In practice this means that all devices expose metadata describing what they are.
Contained within the metadata is a DeviceID, VendorID and a description of functions the device provides and associated ClassIDs. ClassIDs facilitate generic Class Drivers.
Conceptually, the operating system will attempt to find a driver that specifically supports the VendorID and DeviceID, and then fall back to one that supports the ClassID(s).
Matching devices to drivers is a core concept at the heart of the Linux device model, and the exact matching criteria are implemented in the match() function of the specific bus driver.
Once a device driver is bound to a device, it uses the bus driver (or addressing information given by it) to perform reads and writes. In the case of PCI and FireWire, this is a memory-mapped I/O address; for USB it is bus addressing information.
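A hedged sketch of how this looks for a PCI device on Linux: the driver publishes an ID table that the bus code matches against, and in probe() it maps the device's BAR so it can do memory-mapped reads and writes. The vendor/device IDs and the register offset here are made up for illustration.

    #include <linux/module.h>
    #include <linux/pci.h>
    #include <linux/io.h>

    #define MY_VENDOR_ID 0x1234   /* placeholder IDs -- not a real device */
    #define MY_DEVICE_ID 0x5678

    static const struct pci_device_id my_ids[] = {
        { PCI_DEVICE(MY_VENDOR_ID, MY_DEVICE_ID) },
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, my_ids);   /* lets userspace auto-load the module */

    static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *regs;
        int err = pci_enable_device(pdev);
        if (err)
            return err;

        /* Map BAR 0: after this, readl()/writel() on 'regs' talk to the device. */
        regs = pci_iomap(pdev, 0, 0);
        if (!regs)
            return -ENOMEM;

        pr_info("my_pci: register 0x04 = 0x%x\n", readl(regs + 0x04));
        pci_set_drvdata(pdev, regs);
        return 0;
    }

    static void my_remove(struct pci_dev *pdev)
    {
        pci_iounmap(pdev, pci_get_drvdata(pdev));
        pci_disable_device(pdev);
    }

    static struct pci_driver my_driver = {
        .name     = "my_pci",
        .id_table = my_ids,
        .probe    = my_probe,
        .remove   = my_remove,
    };
    module_pci_driver(my_driver);
    MODULE_LICENSE("GPL");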
The Linux Documentation tree provides some insight into the design of the Linux Device Model, but isn't really entry-level reading.
I'd also recommend reading Linux Device Drivers (3rd Edition).

How to read/write a hard disk when CPU is in Protected Mode?

I am doing an OS experiment. Until now, all my code utilized the real mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables Protected Mode, all the real mode BIOS interrupt service routines won't be available.
I have a feeling that I need to do some hardware drivers now. Am I right? Is this why an OS is so difficult to develop?
I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the "Command Block Registers" of a hard disk range from 0x1F0 to 0x1F7. I am wondering whether the register addresses of so many different hardware devices are consistent across different platforms, or whether I have to detect them before using them. How would I do that?
Since I am not sure about how to read/write a floppy or a hard disk in Protected Mode, I have to use BIOS interrupts to load all my necessary kernel files from the floppy before entering protected mode. What could I do if my kernel file exceeds the real mode 1M space limit?
How do I read/write a hard disk when the CPU is in Protected Mode?
I have a feeling that I need to do some hardware drivers now. Am I right?
Strictly speaking; (and depending on your requirements) "need" may be too strong - in theory you can switch back to real mode to use BIOS functions, or use a virtual8086 monitor, or write an interpreter that interprets the firmware's instructions instead of executing them directly.
However, the BIOS is awful (designed for an "only one thing can happen at a time" environment that is completely unsuitable for modern systems where it's expected that all devices are able to do useful work at the same time), the BIOS is deprecated (replaced by UEFI), and it's hard to call something an OS when it doesn't have control over the hardware (because the firmware still has control of the hardware).
Note that if you do continue using BIOS functions; the state of various pieces of hardware (interrupt controller, PCI configuration space for various devices, any PCI bridges, timer/s, etc) has to match the expectations of the BIOS. What this means is that you will either be forced to accept huge limitations (e.g. never being able to use IO APICs, etc. properly) because it will break BIOS functions used by other pre-existing code, or you will be forced to do a huge amount of work to make the BIOS happy (emulating various pieces of hardware so the BIOS thinks the hardware is still in the state it expects even though it's not).
In other words; if you want an OS that's good then you do need to write drivers; but if you only want an OS that doesn't work on modern computers (UEFI), has severe performance problems ("only one thing can happen at a time"), is significantly harder to improve, doesn't support any devices that the BIOS doesn't support (e.g. sound cards), and doesn't support any kind of "hot-plug" (e.g. plugging in a USB device), then you don't need to write drivers.
Is this why an OS is so difficult to develop?
A bad OS is easy to develop. For example, something that is as horrible as MS-DOS (but not compatible with MS-DOS) could probably be slapped together in 1 month.
What makes an OS difficult to develop is making it good. Things like caring about security, trying to get acceptable performance, supporting multi-CPU, providing fault tolerance, trying to make it more future-proof/extensible, providing a nice GUI, creating well thought-out standards (for APIs, etc), and power management - these are what makes an OS difficult.
Device drivers add to the difficulty. Before you can write drivers you'll need support for things that drivers depend on (memory management, IRQ handling, etc. - possibly including a scheduler and some kind of communication); then something to auto-detect devices (e.g. to scan PCI configuration space) and try to start the drivers for whatever was detected (possibly/hopefully from a file system or initial RAM disk, with the ability to add/unload/replace drivers without rebooting); and something to manage the tree of devices - e.g. so that you know which "child devices" will be affected when you put a "parent device" to sleep (or the "parent device" has hardware faults, or its driver crashes, or the device is unplugged). Of course then you'd need to write the device drivers, where the difficulty depends on the device itself (e.g. a device driver for an NVidia GPU is probably harder to write than a device driver for an RS232 serial port controller).
For storage devices themselves (assuming "80x86 PC") there are about 8 standards that matter (ATA/ATAPI, AHCI and NVMe; then OHCI, UHCI, EHCI and xHCI for USB controllers; then the USB mass storage device spec). However, there are also various RAID controllers and/or SCSI controllers for which there's no standard (each of these controllers needs its own driver), and some obsolete stuff (the floppy controller, tape drives that plugged into the floppy controller or parallel port, and three proprietary CD-ROM interfaces that were built into sound cards).
Please understand that supporting all of this isn't the goal. The goal should be to provide things device drivers depend on (described above), then provide specifications that describe the device driver interfaces (possibly/hopefully including things like IO priorities and synchronization, and notifications for device/media removal, error handling, etc) so that other people can write device drivers for you. Once that's done you might implement a few specific device drivers yourself (e.g. maybe just AHCI initially - everything else could be left until much later or until someone else writes it).
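To make the "write a driver" part concrete, here is a rough sketch (not production code) of a polled LBA28 PIO read using the legacy "Command Block Registers" at 0x1F0-0x1F7 mentioned in the question. It assumes the primary ATA channel, drive 0, GCC inline assembly in 32-bit protected mode, and omits error handling and timeouts.

    #include <stdint.h>

    /* Minimal port I/O helpers. */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }
    static inline uint16_t inw(uint16_t port)
    {
        uint16_t v;
        __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    /* Read one 512-byte sector at LBA 'lba' from the primary master drive. */
    void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
    {
        while (inb(0x1F7) & 0x80)                 /* wait while BSY is set        */
            ;
        outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive 0, LBA mode, bits 24-27 */
        outb(0x1F2, 1);                           /* sector count = 1             */
        outb(0x1F3, lba & 0xFF);                  /* LBA bits 0-7                 */
        outb(0x1F4, (lba >> 8) & 0xFF);           /* LBA bits 8-15                */
        outb(0x1F5, (lba >> 16) & 0xFF);          /* LBA bits 16-23               */
        outb(0x1F7, 0x20);                        /* command: READ SECTORS        */

        while ((inb(0x1F7) & 0x88) != 0x08)       /* wait for BSY=0 and DRQ=1     */
            ;
        for (int i = 0; i < 256; i++)             /* 256 words = 512 bytes        */
            buf[i] = inw(0x1F0);
    }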
You don't necessarily HAVE to write drivers. You could drop back into real mode to call the BIOS service, and then hop back into protected mode when you're done. This is essentially how DPMI DOS extenders (DOS4GW, Causeway, etc) work.
The source code for the Causeway DOS extender is public domain, you can look at that for a reference. http://www.devoresoftware.com/freesource/cwsrc.htm

Device drivers and Windows

I am trying to complete the picture of how the PC and the OS interact. And I am at the point where I am a little lost when it comes to device drivers.
Please don't write things like "it's too complicated" or "you don't need to know this when using a high-level programming language and WinAPI functions". I want to know; it's for study purposes.
So, as I see it, the very basic structure of how the OS and the PC (by PC I of course mean the HW) interact is that everything other than direct CPU instructions, which the CPU can execute on its own (arithmetic operations, register access and memory access), must pass through the OS. Mainly because from ring 3 you cannot use the in and out instructions which are used for accessing other HW. I know that there is MMIO, but it must be set up through port communication first.
It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access HW directly, because there was no limitation, no ring mode. So to write a string to the display you could either use a DOS function or access video card memory directly and write it yourself.
But as OSes developed, this possibility went away. That is fine, since the OS now handles all the HW communication, and frankly it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call BIOS-mapped functions or DOS functions, you call a DLL which internally handles everything you don't need to know about.
I understand this. I also understand that a device driver is a piece of code that runs in ring 0, so it can do all the HW interaction. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make a sound card produce a sound. So I call a Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so?
But if it does call the device driver, does that mean all device drivers which can be called by a WinAPI function must have routines named in some specific way? I mean, when I have a new sound card, must its driver have functions named the same as the old one's, so that Windows can actually call the same function from its perspective? But if Windows has a predefined set of functions required from device drivers, then it couldn't use new drivers that didn't exist before the last version of the OS came out.
Please help me understand this mess. I am really going mad. Thanks.
A Windows device driver is a bit like a DLL, except that instead of an application dynamically linking/loading it, it's the O/S that dynamically links/loads it.
Registry entries tell the O/S what device drivers exist (so that the O/S knows which device drivers to dynamic-link/load).
The device drivers run in ring 0. In ring zero, they (device drivers) don't have access to (can't link to or use) Windows APIs: instead they have access to various NT kernel APIs.
But if it does call the device driver, does that mean all device drivers which can be called by a WinAPI function must have routines named in some specific way? I mean, when I have a new sound card, must its driver have functions named the same as the old one's, so that Windows can actually call the same function from its perspective?
Basically yes. All the device drivers within a given type or class (e.g. all video drivers, or all disk drivers) have a similar API, which is invoked by the O/S (and/or invoked by higher-level drivers, for example disk drivers are used/invoked by file system drivers).
The Windows Device Driver Kit defines the various APIs and includes sample drivers for the various types of device.
But if Windows has a predefined set of functions required from device drivers, then it couldn't use new drivers that didn't exist before the last version of the OS came out.
The O/S is dynamic-linking to the device driver functions: because device driver APIs are predefined, device drivers are interchangeable as far as the O/S is concerned; new device drivers can be written, provided they support (are backward-compatible with) the standard device driver API.
The dynamic-linking mechanism is very similar to the way in which COM objects or C++ classes implement any predefined pure-abstract interface: a header file in the DDK declares the pure-abstract interface (like virtual functions), device drivers implement these functions, and the O/S loads the drivers and invokes these functions.
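A hedged sketch of that pattern using the WDM interface from the Windows Driver Kit: DriverEntry fills in a table of dispatch routines that the I/O manager (and higher-level drivers) call by major function code rather than by name. This is a minimal stub, not a working device driver.

    #include <ntddk.h>

    /* One dispatch routine; the O/S calls it for IRP_MJ_CREATE requests. */
    static NTSTATUS MyCreate(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        Irp->IoStatus.Status = STATUS_SUCCESS;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_SUCCESS;
    }

    static VOID MyUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        /* cleanup would go here */
    }

    /* The O/S loads the driver and calls DriverEntry -- one well-known
     * entry point, much like a dynamically loaded module. */
    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);

        /* The "predefined API": one slot per major function code. */
        DriverObject->MajorFunction[IRP_MJ_CREATE] = MyCreate;
        DriverObject->DriverUnload = MyUnload;
        return STATUS_SUCCESS;
    }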
The basics:
Please note that this explanation is simplified and sometimes only true for most cases, not all.
Most HW devices you will ever encounter will have these basic operations:
Write to memory (or registers) on them.
Read from memory (or registers) on them.
This is enough to control the HW, to give it the data it needs, and to get the data you want from it.
These memory areas are mapped by the BIOS and/or the OS into the physical memory range of your PC (which may in turn be accessed by your driver).
So we now have two operations, READ and WRITE, that the device driver knows how to do.
In addition, the driver can read and write in a manner that does not involve the CPU. This is called Direct Memory Access (DMA) and is usually performed by the HW itself.
The last type of operation is called INTERRUPTS and is meant for your HW to notify your driver of something that just happened. This is usually done by the HW interrupting the CPU, which then calls your driver to perform some operation at high priority. For example: an image is ready in the HW to be read by the driver.
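A hedged sketch (hypothetical device, made-up register offsets) of what "write to / read from registers on them" looks like once the device's register block has been mapped into the driver's address space:

    #include <stdint.h>

    /* Imaginary register layout for an imaginary device; real layouts
     * come from the device's datasheet. */
    #define REG_CONTROL  0x00   /* write commands here */
    #define REG_STATUS   0x04   /* read state here     */
    #define REG_DATA     0x08   /* data in/out         */
    #define CTRL_START   0x1
    #define STATUS_DONE  0x1

    /* 'regs' points at the memory-mapped register block (set up by the
     * BIOS/OS mapping described above). 'volatile' stops the compiler from
     * caching or reordering these accesses -- each one must reach the device. */
    static inline void reg_write(volatile uint32_t *regs, uint32_t off, uint32_t v)
    {
        regs[off / 4] = v;
    }
    static inline uint32_t reg_read(volatile uint32_t *regs, uint32_t off)
    {
        return regs[off / 4];
    }

    void start_transfer(volatile uint32_t *regs, uint32_t value)
    {
        reg_write(regs, REG_DATA, value);          /* give the HW its data */
        reg_write(regs, REG_CONTROL, CTRL_START);  /* tell it to start     */
        while ((reg_read(regs, REG_STATUS) & STATUS_DONE) == 0)
            ;                                      /* poll until done (a real driver
                                                      would usually wait for the interrupt) */
    }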

Resources