Can anyone please tell me how kernel programming is done in Linux, the way it's done with the Windows DDK on Windows?

I am aware of the Windows kernel but new to the Linux kernel. I just need to know how it's done in Linux, i.e. how the program development works.

You can check free-electrons.com; it's a good source of information for kernel development (specialized in embedded Linux, but most of the docs apply to standard kernel development as well).
There is also the classic Linux Device Drivers book, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
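If you just want to see what the workflow looks like, a minimal out-of-tree "hello world" module is a reasonable first experiment. This is only a sketch of the usual kbuild flow; the file and module names are made up:

    /* hello.c - minimal "hello world" module (illustrative names).
     * Build with a one-line kbuild Makefile containing "obj-m += hello.o",
     * then run: make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
     * Load/unload with insmod/rmmod and watch the output with dmesg. */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);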

Linux does not have a stable kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source and binary kernel compatibility across the minor releases of a major version.
More work is gradually being done in the kernel to reduce the amount of kernel code required to carry out various tasks, such as driver development (for example, USB drivers can typically be done in userspace with libusb), filesystem development (FUSE) and network filtering (NFQUEUE). However, there are still some cases where you need kernel code; in particular, block devices still need to be in the kernel to be usable as boot devices and for swap.
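As a rough illustration of the userspace-driver point, something like this (a sketch using libusb-1.0, with error handling mostly omitted) lists attached USB devices without any kernel code of your own:

    /* usblist.c - enumerate USB devices from user space with libusb-1.0
     * (compile with: gcc usblist.c -lusb-1.0) */
    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        libusb_device **list;
        ssize_t n, i;

        if (libusb_init(&ctx) != 0)
            return 1;

        n = libusb_get_device_list(ctx, &list);
        for (i = 0; i < n; i++) {
            struct libusb_device_descriptor desc;
            if (libusb_get_device_descriptor(list[i], &desc) == 0)
                printf("%04x:%04x\n", desc.idVendor, desc.idProduct);
        }

        libusb_free_device_list(list, 1);
        libusb_exit(ctx);
        return 0;
    }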

Related

How does the ARM Linux kernel map console output to a hardware device on boot?

I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configuration via boot scripts and whatnot has a chance to execute. I am aware of the device tree, and have a loose understanding of how it works. But the issue I ran into, where a different kernel version broke console availability entirely, confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that the kernel looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console.

For earlier kernels, especially if you go all the way back to 2.6, use of the device tree was less universal, and for some hardware the boot loader simply said "this is a Versatile Express board" (for instance) and the kernel had compiled-in data structures to tell it where the devices were for each board that it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so what exactly the situation was for any specific kernel version depends on which board you're using.
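For what those pre-device-tree "compiled-in data structures" looked like, board files essentially registered platform devices with hard-coded addresses. A rough sketch (the address and device name below are purely illustrative, not taken from a real board):

    /* sketch of a pre-device-tree board file registering a UART
     * as a platform device with a hard-coded MMIO address */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>

    static struct resource board_uart_resources[] = {
        {
            .start = 0x101f1000,        /* hypothetical UART base address */
            .end   = 0x101f1fff,
            .flags = IORESOURCE_MEM,
        },
    };

    static struct platform_device board_uart_device = {
        .name          = "board-uart",  /* illustrative driver name */
        .id            = 0,
        .resource      = board_uart_resources,
        .num_resources = ARRAY_SIZE(board_uart_resources),
    };

    /* called from the board's machine init hook */
    static void __init board_add_devices(void)
    {
        platform_device_register(&board_uart_device);
    }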
The other thing that I rather suspect you're running into is that if the kernel crashes early in bootup (i.e. before it finds the serial port at all) then it will never output anything. So if the kernel is just too early to support the "virt" board properly at all, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes the "earlycon" or "earlyprintk" kernel arguments can assist here, but not always.)

Does Linux suggest we use sysfs or udev?

When we want to create a device file in the file system, which one should we choose right now: make a node via udev, which will show up in /dev, or use sysfs, which will show up in /sys?
I think I can accomplish most functions for a device through either of these two ways, so it confuses me a lot.
Thanks.
Use udev (and/or define and publish some major & minor device numbers, as for mknod). See makedev(3).
Application programs access physical devices through /dev/ (not /sys/). Data to/from a device usually goes through /dev/ char or block devices. Metadata and configuration can go through sysfs.
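As a concrete sketch of that split: a minimal misc character device like the one below shows up as a node under /dev (created for you by udev from the kernel's uevent), and reads on it return data from the kernel. The names here are made up for illustration:

    /* exampledev.c - minimal misc char device; on load, udev creates
     * /dev/exampledev automatically (device name is illustrative) */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t exampledev_read(struct file *f, char __user *buf,
                                   size_t len, loff_t *off)
    {
        static const char msg[] = "hello from the kernel\n";
        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
    }

    static const struct file_operations exampledev_fops = {
        .owner = THIS_MODULE,
        .read  = exampledev_read,
    };

    static struct miscdevice exampledev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "exampledev",
        .fops  = &exampledev_fops,
    };

    module_misc_device(exampledev);
    MODULE_LICENSE("GPL");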
Read more about udev and about sysfs. See also device file wikipage.
You won't get very useful answers if you don't explain your issues more concretely... What exact kind of device are you thinking about? Very probably similar devices already exist...
Publish your device driver and software source code very early (even in the alpha stage, when it is not fully working) as free software, preferably under GPLv2 (the license used by the Linux kernel). Ask also on kernelnewbies. Expect to work hard (perhaps more than a year) to get your driver incorporated into the official Linux kernel.
You should be familiar with Advanced Linux Programming (in the application userspace world) before attempting to code a kernel driver. After that, read books and resources on Linux kernel driver programming and study the source code of existing drivers in the recent Linux kernels.

Do I need two machines to develop IOKit Mac drivers?

I'm building an IOKit CFPlugin driver for OS X. I'll be working with network data coming in that will be translated to MIDI data. No hardware is involved other than the built-in AirPort. I have experience with drivers on Windows machines and firmware but this is my first dip into doing it on the Mac. So far things are going pretty well, but the Apple documentation says: "For safety reasons, you should not load your driver on your development machine."
I only have one Mac. I really don't want two Macs- sorry, Apple. Should I take this warning seriously? Are there things I need to know?
Thanks, Tom Jeffries
You could also consider running OS X inside a VM as your testbed. It would surely be much more convenient than having a separate boot volume.
The warning is rather poorly worded; what you should consider doing is using a separate boot volume (partition) for trying out your driver, since it's possible to arbitrarily hose your system with your driver.
If you're doing kernel development on any OS that isn't isolated from your main system (via a VM, alternate boot disk, etc.), you're crazy!
What may be a bigger issue is that you can't do any kernel debugging, because the only option for that is to use GDB on a remote OS X system. For this, you may want to consider running OS X in virtualization.
You DEFINITELY want to have some way to recover from a fubar kext installation: a bootable external drive or something you can quickly restore from - this is the main reason for Apple's warning against running in-development kernel extensions on your production machine.
Nicholas is right that in order to debug using gdb (the only way in kernel space) you do need two machines. I've never tried using a VM as Coxy suggests: but I guess it's feasible (assuming that you run your kext on the virtual machine and use the real host machine to run gdb).
My preferred method for tracing and debugging in the kernel is kprintf() routed to FireWire (aka FireWire kprintf; see man fwkpfv). For this you do need two machines with FireWire ports.
Finally, being an old computer musician myself, I wonder why you want to program a MIDI synthesizer (or transformer) at the network stack level. My guess is that you would have a much more gratifying experience working in userland (where you can use floating-point math...).
If you need some hints or tips, feel free to get in touch...
|K<
From the ADC Kernel Programming Guide:
"Kernel programming is a black art that should be avoided if at all possible. Fortunately, kernel programming is usually unnecessary. You can write most software entirely in user space. Even most device drivers (FireWire and USB, for example) can be written as applications, rather than as kernel code. A few low-level drivers must be resident in the kernel's address space, however, and this document might be marginally useful if you are writing drivers that fall into this category."

What's the relationship between a Linux OS and a kernel?

I've been using Linux for several years, but never stepped beyond installing from a CD/DVD. If the app manager didn't have the software I was looking for, then I was a lost cause.
But right now I'm trying to get a grip around what "Linux" is.
The first word that pops into my head is "kernel". After reading on Wikipedia, I understand that a kernel is software running to give other software (OS + apps) access to hardware (CPU, RAM, etc.). It also handles memory, but isn't that what the OS is supposed to do (from what I remember from OS class)?
Is the Linux distro just a packed list of software?
Take my favorite distro: Fedora. It's now in version 14 and ships with kernel 2.6.35.
Does the kernel come from somewhere central and form the core of every Linux distro? If this is true, then is the Linux distro just a way of making the computer with the kernel more user-friendly to use? In that way, the distro+kernel is the OS, because the one without the other is not usable (maybe a bare kernel is, but who runs just that?).
Pretty much correct. To me, "linux" is just the kernel. But it is pretty common to refer to entire distributions as linux. That is what annoys RMS so much. He maintains it should be called GNU/Linux, as he sees distributions as the linux kernel plus the additional software from the GNU project. This makes sense too but I never use the term GNU/Linux. I am either talking about the kernel linux, or "linux distributions", or a specific distribution.
So yes. A distribution is just the kernel (which may include distribution specific patches) plus all the extra programs that make it usable.
The kernel is a central project, and is nominally the same in each distro, but most distros customize it a bit.
And the extra software doesn't just make the kernel more user friendly, it makes it usable at all. A kernel is just interrupt handlers, device drivers, and system calls. It basically virtualizes the hardware and provides a standard environment for programs to work on.
As far as the phrase "operating system" goes, it can be confusing. Some people may say the kernel IS the operating system, and everything else is either a utility or an application or something else. Other people may say the kernel plus some other packages make up the operating system, but most of the software is not part of the operating system. Others may say all the software in the distro forms part of the operating system.
Linux is the kernel. That's what Linus wrote and that's what the kernel developers continue to work on today. It controls the hardware.
An operating system is something that includes a kernel plus quite a few lower-level "applications" to allow you as a user to do useful stuff with your computer (think file manager, control panel and so on).
A distro (distribution) is an operating system packaged with an absolute massive amount of higher-level applications(a) like DVD authoring tools, web browsers, office suites and so on ad-near-infinitum(b).
Now there are grey areas between kernel/OS and even OS/distro but I think that's a fair starting point for understanding how it hangs together.
(a) Even Windows does this to some extent, with the inclusion of Wordpad, Calculator and Paint, though not to the insanely prolific level that Linux distros extend to - do we really need 472 different file managers? Choice is good, yes, but only up to a point :-)
(b) Yes, I'm aware that "ad-near-infinitum" makes no sense since any finite amount subtracted from the infinite is still infinite. But, if you want mathematical accuracy, you should probably be over at https://math.stackexchange.com :-)
An OS is just a kernel and shell(s) which work hand in hand.
A distro is a combination of customized shell(s) working on a kernel. For example, Kali, Ubuntu, Fedora, Mint, etc. are different distros which all work on the Linux kernel.
The shell acts as an interface between the user and the kernel. A shell can be a command line interface or a graphical user interface. Bash, sh, and the Windows GUI are some shells.
The kernel is the hub of the operating system. It allocates time and memory to programs and handles the filestore, etc.
To further explain shell and kernel, suppose you type rm myfile. The shell searches the filestore for the file containing the program rm, and then requests the kernel, through system calls, to execute the program rm on myfile.
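In code, what the shell does for that rm myfile example boils down to something like this userspace sketch (simplified: no error handling for fork, no job control or redirections):

    /* sketch of the shell's role: fork a child, ask the kernel (execvp)
     * to run the program, then wait for it to finish */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* system call: kernel creates a new process */

        if (pid == 0) {
            char *argv[] = { "rm", "myfile", NULL };
            execvp("rm", argv);          /* system call: kernel runs rm in the child */
            perror("execvp");            /* only reached if the exec failed */
            exit(1);
        }

        waitpid(pid, NULL, 0);           /* the "shell" waits for the child to exit */
        return 0;
    }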
To take a simple example: the Windows GUI is a shell, and the Windows OS is a distribution by Microsoft.
Similarly, Ubuntu, Fedora, etc. are distros, each working with various shells on top of the Linux kernel.
A shell or a distro does not just make the kernel more user-friendly to use; it makes it usable for the user at all.
So now, simply put, you can say Linux is a kernel.
Linux + shell(s) (Bash, GNOME, etc.) makes a Linux distro, say Ubuntu, Mint, Kali, etc., and each of them is an OS.
"kernel" and "shell" are the original terms, as in let's say "core" and "shell". "Shell" is the command interpreter. "Distro" is a term that means a customized shell(s) + specific programs included in that distribution. One distribution might several shells though. From a user perspective this is close to the concept of human language. Is the language that you have to talk to the terminal which will talk to shell. Shell will read it and look for a file within the filestore (still inside the shell/ distro). Once the file (executable) is found, shell sends this to the kernel which does the job (process). Think of a car which will have the same basically unmodified engine over many years but will change its frame/ body. I think I need to stop here...

kvm vs. vmware for kernel debugging / USB driver development

I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide) and someone asked me why not use KVM?
So I ask: KVM vs. VMware for kernel debugging / USB driver development:
what are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger on the other.
Edit: Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML, which would allow you to do local debugging as with a regular user process, which is a little less trouble.
Instead of answering the direct question I'll add another option... Depending on if the kernel in question is a Linux kernel, and what part(s) of it you are working on, you might find that UserModeLinux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) may trump both of those options.
As it runs the kernel as a userland process under the host kernel it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to file-system related code. If you are developing/debugging modules that interact directly with hardware it may be much less use to you though.
Reference links: home, other
I recently started building GNU Mach/HURD and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
The bottom line is, for kernel work I just want the minimum of functionality to boot and see the result. VMware is much more about usable virtualization than about down-and-dirty work.
There is however no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C was.
If it is a "real hardware" device, of course, vmware will not emulate it, so you won't be able to debug the driver under it (nor will any other virtualisation software, unless you extend one to do so).
Device driver debugging can be done to some extent with a real hardware machine with a normal kernel - although there are obviously things you can't do - like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect stuff. Moreover, traditional printf() debugging is quite possible (printk, anyone), and there are various features in the kernel which make debugging easier. It's possible to build the kernel with various debug options to try to detect pointer problems, memory leaks etc.
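For instance, a typical pattern looks roughly like this (a sketch only; handle_request and its argument are made-up names, not a real kernel API):

    /* sketch: printk-style debugging hooks in kernel code */
    #include <linux/kernel.h>
    #include <linux/errno.h>
    #include <linux/bug.h>

    static int handle_request(void *req)
    {
        pr_debug("handle_request: req=%p\n", req);  /* visible with dynamic debug enabled */

        if (WARN_ON(!req))      /* logs a warning plus a stack trace, then continues */
            return -EINVAL;

        printk(KERN_INFO "handle_request: processing %p\n", req);
        return 0;
    }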
By default, the kernel even gives a nice-ish stack trace on the log when it encounters an OOPS or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)

Resources