
What is the kernel's KMS (kernel mode setting) API?

Mode setting refers to the graphics stack: it is the process of setting up the clocks and scanout buffers, initializing the chips, lighting up the displays, and so on.
The kernel subsystem responsible for this is the DRM subsystem. It has a userspace library (libdrm) that is developed in lock-step with the kernel part and gives e.g. Xorg access to the userland-facing part of the interface (normally called the ABI). The hardware-facing side of the kernel interface is usually referred to as the API.
Specifically, you can use the 'xrandr' binary to instruct Xorg, via the RandR protocol, to ask the kernel to change the mode. That binary is installed alongside the X server and also gives you some information about the graphics card and the current mode.
The DRM mode-setting API is IOCTL-based, and the following site gives a technical overview: http://dri.freedesktop.org/wiki/DrmModesetting
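Since the API is just ioctls on the DRM device node, a userspace program can poke at it directly through libdrm, which wraps them. Here is a minimal sketch that lists the connectors a card exposes; the /dev/dri/card0 path is an assumption (the node number varies per system).

/* Sketch: enumerate KMS connectors via libdrm (which wraps the DRM ioctls).
 * Build (typically): gcc kms_probe.c $(pkg-config --cflags --libs libdrm)
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR); /* assumed device node */
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* Wraps the DRM_IOCTL_MODE_GETRESOURCES ioctl. */
    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "no KMS resources (driver without modesetting?)\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n",
               conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                      : "disconnected",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}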
Also, the documentation in the recent linux-3.7 releases is much improved. To check it out, fetch the latest kernel sources and then, in the kernel source tree, run
$ make htmldocs
and then look at the generated file Documentation/DocBook/drm/index.html.
Hth

Mode setting is usually related to graphics setup.
A reference article dated April 19, 2008 notes:
kernel mode-setting involves moving the mode-setting code for video adapters from the user-space X server drivers into the Linux kernel. This may seem like an uninteresting topic for end-users, but having the mode-setting done in the kernel allows for a cleaner and richer boot process, improved suspend and resume support, and more reliable VT switching (along with other advantages). Kernel mode-setting isn't yet in the mainline Linux kernel nor is the API for it frozen, but Fedora 9 shipping next month will be the first major distribution carrying this initial support. In this article we're looking more closely at kernel mode-setting with the Intel X.Org driver as well as showing videos of kernel-based mode-setting in action.
Here is a Fedora wiki KernelModesetting page.

Related

How does the ARM Linux kernel map console output to a hardware device on boot?

I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configuration via boot scripts and whatnot has a chance to execute. I am aware of the device tree, and have a loose understanding of how it works. But the issue I ran into, where a different kernel version broke console availability entirely, confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that it looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console. For earlier kernels, especially if you go all the way back to 2.6, use of device tree was less universal, and for some hardware the boot loader simply said "this is a versatile express board" (for instance) and the kernel had compiled-in data structures to tell it where the devices were for each board that it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so what exactly the situation was for any specific kernel version depends on which board you're using.
The other thing that I rather suspect you're running into is that if the kernel crashes early in bootup (i.e. before it finds the serial port at all) then it will never output anything. So if the kernel is just too early to support the "virt" board properly at all, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes the "earlycon" or "earlyprintk" kernel arguments can assist here, but not always.)

How can the PSCI interface be used to boot the kernel in Hyp/EL2 mode?

I'm trying to understand how the U-Boot PSCI interface is used to boot the kernel into HYP mode.
Going through the U-Boot source, I see there is a generic psci.S as well as a board-specific psci.S, and I have the following doubts.
1) How and where does psci.S fit into the normal U-Boot flow (when and how are PSCI services like cpu_on and cpu_off called while booting U-Boot)?
2) How is this PSCI interface of U-Boot used to boot the kernel in HYP mode (what is it in the PSCI interface that allows the Linux kernel to be booted in HYP mode)?
1) How and where does psci.S fit into the normal U-Boot flow
U-Boot sets aside some secure-world memory and copies the secure monitor code there, so that it can stay resident after U-Boot exits and provide minimal handling of some PSCI SMC calls.
(when and how are PSCI services like cpu_on and cpu_off called while booting U-Boot)
They aren't. U-Boot runs and hands over to Linux on the primary CPU only. Linux may bring up secondary cores later in its own boot process, but U-Boot is long gone by then, except for the aforementioned secure monitor code.
2) How is this PSCI interface of U-Boot used to boot the kernel in HYP mode
It isn't.
(what is it in the PSCI interface that allows the Linux kernel to be booted in HYP mode)?
Nothing.
The point behind the patch series you refer to is that it was (and unfortunately still is) all too common for 32-bit ARM platforms to have TrustZone-aware hardware designs but crap software support. The vendor BSPs implement just enough bootloader to get the thing started, and never switch out of secure SVC mode from boot, so the whole of Linux runs in the secure world, and their kernel is full of highly platform-specific code poking secure-only hardware like the power controller directly. This poses a problem if you want to use virtualisation (which, if you're the KVM/ARM co-maintainer and have recently bought yourself a CubieTruck, is obviously a pressing concern...) - it's easy enough to take the U-Boot code for such a platform and make it switch into NS-HYP before starting Linux, thus enabling KVM (upstream U-Boot already had some rudimentary support for this at the time), but once you've dropped out of the secure world you can't then later bring up your secondary CPUs with the original smp_operations that depend on touching secure-only hardware.
By implementing some trivial runtime "firmware" hanging off the back of the bootloader, you then have the simplest way of calling back into the secure world as needed, and it makes the most sense to move the necessary platform-specific code there to abstract the operations away behind a simple firmware call interface, especially if there's already a suitable one supported by Linux.
There's absolutely nothing special about PSCI itself - there are plenty of ARM platforms that have proper secure firmware, with which they implement power management and SMP operations, but via their own proprietary protocols. The only vaguely relevant aspect is that conforming to the PSCI spec guarantees that all CPUs are going to come up in the same mode; thus if you did initially enter Linux in HYP, you won't see mismatched secondaries coming up in some other incompatible mode or security state.
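For a flavour of what this buys the kernel, here is a condensed sketch modeled on the 32-bit ARM file arch/arm/kernel/psci_smp.c (simplified kernel-internal code, not a drop-in file): bringing up a secondary CPU collapses into a single CPU_ON call into the resident firmware, with no secure-only platform hardware touched by Linux itself.

/* Simplified sketch of PSCI-backed SMP bring-up (kernel-internal code). */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/smp.h>
#include <asm/memory.h>     /* virt_to_idmap() */
#include <asm/psci.h>       /* psci_ops, filled in from the DT/firmware */
#include <asm/smp_plat.h>   /* cpu_logical_map() */

extern void secondary_startup(void);  /* the kernel's secondary entry point */

static int psci_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
    if (psci_ops.cpu_on)
        /* One SMC into the secure world: power the core on and have it
         * enter the kernel at secondary_startup's physical address, in
         * the same mode the primary CPU entered in. */
        return psci_ops.cpu_on(cpu_logical_map(cpu),
                               virt_to_idmap(&secondary_startup));
    return -ENODEV;
}

const struct smp_operations psci_smp_ops __initconst = {
    .smp_boot_secondary = psci_boot_secondary,
};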

View Linux kernel drivers built into the kernel, and how do they get bound/mounted/started?

I'm having a bit of a hard time fully understanding how the kernel starts in Linux. I'm a WinCE developer and our company has decided to run with Linux instead now.
We outsourced all of the board bring-up, and the package I received is quite a bit different for the prototype board we have compared to the Nitrogen6X we have been using.
Before I start listing the differences between the distros: the kernels are identical. The distro we have been using is a busybox system. The one we received from the vendor is sysvinit. I removed mdev from busybox and we are only using udev.
When I use the kernel on our busybox build, the touchscreen driver breaks, or doesn't run, or does something totally magical. I'm not quite sure... there is a /dev/input/event0 device which, when run on the sysvinit side, is a touch device. Is the kernel not the mechanism that binds the built-in drivers to a device node? I thought udev was for more dynamic events in the system.
On the other hand, I can't really tell what's been loaded on my device. Is there a way to list running drivers that were built into the kernel? Is my touchpad up? This is a fairly simple process of looking at the registry on WinCE to see which devices were loaded.
I guess what I'm really trying to discover isn't so much how to add a driver to the kernel, it's how the whole thing gets plumbed together. I've found plenty of documents on creating kernel modules, but I haven't found a good resource on how to pull everything together from scratch so you can actually use said modules. Going back to the example of the touchscreen driver: it's built into the kernel, so how does that get plugged into /dev/input/event0?
I'm kind of having a difficult time finding good resources, mostly because searching Google for variations of linux/drivers/device nodes piles in tons of random crap from everywhere.
What you probably want to use now is evtest. It will let you know which input devices are present and ready to use on your system.
To get more information on the input subsystem and more generic information on how the kernel is working, I can direct you to our training materials. The materials are free to download, use and redistribute.
The general answer is, there is no single, easy place to look to discover what drivers have been loaded by the kernel if they are compiled in. Of course, lsmod will display any drivers that were dynamically loaded after kernel boot.
The kernel does not create device nodes. That is, to quote your question, the kernel does not "bind" the driver to the device node. The association between kernel driver and device node is contained in the major and minor numbers registered when the driver is initialized. You can have a device node on your file system for which there is no corresponding driver (common especially in older devices where device nodes were statically created on the file system) and you can also have a driver installed for which there is no device node.
Modern Linux distros have device nodes created dynamically on a mount point called /dev, and this is usually a tmpfs file system, meaning it is volatile - it gets destroyed on every shutdown and recreated dynamically on each new boot.
udev is the magic that creates most device nodes based on events that it receives from the kernel when a new device is discovered (this can be after boot on device plugin, like a USB disk) or on startup when udev reads the queued events and acts on them. As you noted, busybox has a limited udev implementation called mdev.
Study udev and you will get a much better understanding of the process. Hope this helps a little.
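To tie this back to the touchscreen question, here is a hypothetical sketch of the driver side (all names here are made up). The driver registers an input device with the input core; the evdev layer then exposes it as a character device with dynamically assigned major/minor numbers, the kernel emits a uevent, and udev (or mdev) turns that event into the /dev/input/eventN node. evtest would then list it by name.

/* Hypothetical minimal touchscreen-style input driver. */
#include <linux/input.h>
#include <linux/module.h>

static struct input_dev *ts_dev;

static int __init ts_init(void)
{
    int err;

    ts_dev = input_allocate_device();
    if (!ts_dev)
        return -ENOMEM;

    ts_dev->name = "example-touchscreen";     /* what evtest will show */

    /* Declare the events we can emit: a touch "button" plus absolute X/Y. */
    input_set_capability(ts_dev, EV_KEY, BTN_TOUCH);
    input_set_abs_params(ts_dev, ABS_X, 0, 1023, 0, 0);
    input_set_abs_params(ts_dev, ABS_Y, 0, 1023, 0, 0);

    /* Registration is what creates the evdev node and fires the uevent. */
    err = input_register_device(ts_dev);
    if (err)
        input_free_device(ts_dev);
    return err;
}

static void __exit ts_exit(void)
{
    input_unregister_device(ts_dev);
}

module_init(ts_init);
module_exit(ts_exit);
MODULE_LICENSE("GPL");

A real driver would additionally report touches from its interrupt handler via input_report_abs()/input_report_key() followed by input_sync().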

Where is device driver code executed? Kernel space or user space?

Part 1:
To the Linux/Unix experts out there: could you please help me understand device drivers? As I understand it, a driver is a piece of code that directly interacts with hardware and exposes some APIs to access the device. My question is: where does this piece of code run, user space or kernel space?
I know that code executed in kernel space has some extra privileges, like accessing any memory location (please correct me if I'm wrong). If we install a third-party driver and it runs in kernel space, wouldn't this be harmful for the whole system? How does any OS handle this?
Part 2:
Let's take the example of a USB device (camera, keyboard, ...). How does the system recognize these devices? How does the system know which driver to install? How does the driver know the address of the device to read and write data?
(If this is too big to answer here, please provide links to some good documentation or tutorials; I've tried and couldn't find answers for these.)
Part 1
On Linux, drivers run in kernel space. And yes, as you state, there are significant security implications to this. Most exceptions in drivers will take down the kernel and potentially corrupt kernel memory (with all manner of consequences). Buggy drivers also have an impact on system security, and malicious drivers can do absolutely anything they want.
A trend seen in the Mac OS X and Windows NT kernels is user-space drivers. Microsoft has for some time been pushing the Windows User-Mode Driver Framework, and Mac OS X has long provided user-space APIs for FireWire and USB drivers, and class-compliant drivers for many USB peripherals. It is quite unusual to install third-party kernel-mode device drivers on Mac OS X.
Arguably, the bad reputation Windows used to have for kernel panics can be attributed to the (often poor quality) kernel mode drivers that came with just about every mobile phone, camera and printer.
Linux graphics drivers are pretty much all implemented in user space with a minimal kernel-resident portion, and FUSE allows the implementation of file systems in user space.
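As a sketch of the user-space approach on Linux, here is a minimal libusb-1.0 program; the 0x1234/0x5678 vendor/product IDs and the 0x81 bulk-in endpoint are hypothetical. Given sufficient permissions on the USB device node, the whole transfer happens without any device-specific kernel code.

/* Minimal sketch of a user-space USB "driver" with libusb-1.0.
 * Build (typically): gcc drv.c $(pkg-config --cflags --libs libusb-1.0)
 */
#include <libusb-1.0/libusb.h>
#include <stdio.h>

int main(void)
{
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) != 0)
        return 1;

    libusb_device_handle *h =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!h) {
        fprintf(stderr, "device not found (or no permission)\n");
        libusb_exit(ctx);
        return 1;
    }

    /* Detach a kernel driver if one is bound, then claim interface 0. */
    libusb_set_auto_detach_kernel_driver(h, 1);
    if (libusb_claim_interface(h, 0) == 0) {
        unsigned char buf[64];
        int got = 0;
        /* Read from the (hypothetical) bulk-in endpoint, 1s timeout. */
        if (libusb_bulk_transfer(h, 0x81, buf, sizeof(buf), &got, 1000) == 0)
            printf("read %d bytes entirely from user space\n", got);
        libusb_release_interface(h, 0);
    }

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}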
Part 2
USB, FireWire, PCI (and also PCI-e) all have enumeration mechanisms through which a bus driver can match a device to a driver. In practice this means that all devices expose metadata describing what they are.
Contained within the metadata is a DeviceID, a VendorID, and a description of the functions the device provides along with associated ClassIDs. ClassIDs facilitate generic class drivers.
Conceptually, the operating system will attempt to find a driver that specifically supports the VendorID and DeviceID, and then fall back to one that supports the ClassID(s).
Matching devices to drivers is a core concept at the heart of the Linux device model, and the exact criteria used for matching are implemented in the match() function of the specific bus driver.
Once a device driver is bound to a device, it uses the bus driver (or addressing information given by it) to perform reads and writes. In the case of PCI and FireWire, this is a memory-mapped IO address. For USB, it is bus addressing information.
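Conceptually, the Linux side of this looks like the following hypothetical USB driver skeleton: the ID table is what the USB core's match logic walks for each enumerated device, with the exact vendor/product entry taking priority and a class-based entry acting as the generic fallback. All IDs and names below are made up.

#include <linux/module.h>
#include <linux/usb.h>

/* Hypothetical IDs: the table the bus driver's match() consults. */
static const struct usb_device_id example_ids[] = {
    { USB_DEVICE(0x1234, 0x5678) },              /* exact VID/PID match */
    { USB_INTERFACE_INFO(USB_CLASS_HID, 1, 1) }, /* ClassID fallback:
                                                    HID/boot/keyboard */
    { }                                          /* terminating entry */
};
/* Exported so udev/modprobe can auto-load the module on device plug-in. */
MODULE_DEVICE_TABLE(usb, example_ids);

static int example_probe(struct usb_interface *intf,
                         const struct usb_device_id *id)
{
    dev_info(&intf->dev, "bound (matched vendor=%04x)\n", id->idVendor);
    return 0;
}

static void example_disconnect(struct usb_interface *intf)
{
}

static struct usb_driver example_driver = {
    .name       = "example",
    .id_table   = example_ids,
    .probe      = example_probe,
    .disconnect = example_disconnect,
};
module_usb_driver(example_driver);
MODULE_LICENSE("GPL");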
The Linux Documentation tree provides some insight into the design of the Linux Device Model, but isn't really entry-level reading.
I'd also recommend reading Linux Device Drivers (3rd Edition).

Can anyone please tell me how kernel programming is done in Linux, like with the Windows DDK on Windows?

I am aware of the Windows kernel but am new to the Linux kernel. I just need to know how it's done in Linux, i.e. the program development.
You can check free-electrons.com; it's a good source of information for kernel development (specialized in embedded Linux, but most of the docs apply to standard development).
You also have the classic Linux Device Drivers, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
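To give a concrete flavour of the flow those resources teach, here is the canonical minimal out-of-tree module; it needs only the headers for your running kernel and a one-line Makefile (obj-m += hello.o).

/* hello.c - minimal loadable kernel module. */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");
    return 0;               /* non-zero here aborts the module load */
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");      /* avoids tainting the kernel */
MODULE_DESCRIPTION("Minimal example module");

Build with make -C /lib/modules/$(uname -r)/build M=$PWD modules, load with insmod hello.ko, check dmesg, and remove with rmmod hello.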
Linux does not have a stable kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source & binary kernel compatibility within a major release.
More work is gradually being done in the kernel to reduce the amount of kernel code required to carry out various tasks, such as driver development (for example, USB drivers can typically be done in userspace with libusb), filesystem development (FUSE, sketched below) and network filtering (NFQUEUE). However, there are still some cases where you need kernel code; in particular, block devices still need to be in the kernel to be usable for boot devices and swap.
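To illustrate the FUSE case mentioned above, here is a minimal sketch of a user-space file system against libfuse's version-26 API (the classic single-file "hello" layout; all names are illustrative). It serves one read-only file with no kernel code of its own.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *msg = "hello from user space\n";

/* Report "/" as a directory and "/hello" as a small read-only file. */
static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(msg);
        return 0;
    }
    return -ENOENT;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                         off_t off, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, "hello", NULL, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    size_t len = strlen(msg);
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, msg + off, size);
    return size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main() mounts at the given mountpoint and runs the event loop. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}

Build (typically) with gcc hello_fuse.c $(pkg-config --cflags --libs fuse), then run it with a mount point argument and cat <mountpoint>/hello.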
