Is the executed instruction stream always the same when Linux boots? - linux-kernel

For the same computer, when the Linux operating system boots, is the executed instruction stream always the same? I guess it might be, but I am not sure. Any help is welcome!
Cheers,

Related

How does the ARM Linux kernel map console output to a hardware device on boot?

I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configuration via boot scripts and whatnot has a chance to execute. I am aware of the device tree and have a loose understanding of how it works. But the issue I ran into, where a different kernel version broke console availability entirely, confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that the kernel looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console. For earlier kernels, especially if you go all the way back to 2.6, use of the device tree was less universal, and for some hardware the boot loader simply said "this is a Versatile Express board" (for instance) and the kernel had compiled-in data structures telling it where the devices were for each board it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so what exactly the situation was for any specific kernel version depends on which board you're using.
The other thing I rather suspect you're running into is that if the kernel crashes early in bootup (i.e., before it finds the serial port at all), then it will never output anything. So if the kernel is just too early to support the "virt" board properly at all, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes the "earlycon" or "earlyprintk" kernel arguments can assist here, but not always.)
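To make that concrete, here is roughly how you could hand the kernel an explicit console, both early and normal, on the asker's setup (a sketch; the zImage path is a placeholder, and the PL011 address is an assumption based on QEMU's virt memory map):

# Boot with an explicit console; -nographic routes the emulated UART to stdio
qemu-system-arm -M virt -cpu cortex-a9 -nographic \
    -kernel zImage \
    -append "console=ttyAMA0 earlycon=pl011,0x09000000"

If even earlycon produces nothing, the crash is happening very early indeed, before the early console is set up.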

Issue with qemu and gdb

I have a device driver that is freezing the OS. The mouse won't even move. I am trying to debug this issue, and I believe one good approach is to use gdb with QEMU, two things I have never used before. Is there a better approach?
So first I need to compile the kernel with debug symbols which I have done already.
Now, there is a new generated file called vmlinux, located in the same folder as the source. It seems that I also need a bzImage file according to this so I can run the newly compiled kernel using:
qemu-system-i386 -kernel bzImage
or in debug mode
qemu-system-i386 -s -S -kernel bzImage
I cannot locate the bzImage file. Where do I find it, or what is missing here? Is the bzImage referring to the OS image I created using qemu-img create?
Also, what I do not understand is: now that the kernel is compiled (vmlinux), how do I run it with QEMU? So my question is: when I run it with QEMU or the debugger, is the kernel running as an app in my main OS?
Also, how can I install my device driver? My understanding is that the kernel is not Ubuntu, so there is no UI?
Also, I installed QEMU, and when I type qemu I get "command not found". I am guessing I have to pick a specific processor emulator, as in qemu-system-i386, qemu-system-x86_64, or qemu-x86_64?
How is qemu different or similar to the kvm command?
Thanks.
So, if I understand the problem correctly, you have a kernel module that needs no specific hardware. When you are working with the module, the system freezes but the kernel log contains nothing special.
The following may be helpful.
Getting the Log
The symptoms you described may still be the result of a kernel oops or panic. The logging facilities sometimes die before they can output the information about the error to the log file. You may try to output the log via a serial port instead; this should be more reliable.
As your kernel module does not need any specific hardware, the easiest way is probably to install the same Linux distro you use into a virtual machine and connect the virtual serial port (COM) of that machine to a pipe on your host system.
This is usually quite easy to do. For example, this blog post contains detailed instructions for the case where the host OS and the guest OS are Ubuntu 11.10.
VirtualBox is used there to manage the virtual machines. If you prefer QEMU, this should be possible as well. I suppose it is a bit easier to go with VirtualBox, but it is a matter of personal preference.
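Should you go the QEMU route instead, the host-pipe equivalent might look like this (a sketch using the older -serial spelling; guest.img and the socket path are placeholders):

# Expose the guest's COM1 as a UNIX socket on the host
qemu-system-x86_64 -m 1024 -hda guest.img \
    -serial unix:/tmp/qemu_serial,server,nowait
# Then, on the host, read from the socket
socat UNIX-CONNECT:/tmp/qemu_serial - | tee guest.log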
Basically, you need to perform the following steps.
Create a virtual machine and install the Linux distro you need as a guest OS there.
Enable a serial port (COM1, ...) in the configuration of the virtual machine and configure it to connect to a special file on the host ("host pipe"), say /tmp/vbox_serial.
Start the guest OS and adjust its boot options: at least, add console=ttyS0,115200 or something like that to the kernel options in the boot loader menu.
On the host, start minicom, socat or whatever else to read from /tmp/vbox_serial.
That is it. Now you should get the kernel log of the guest OS pouring into your host system via /tmp/vbox_serial. If the guest system crashes then, you will get the log even if it was never saved into a file on the guest itself.
To make things easier, you may use socat on your host system rather than minicom, which the author of that blog post suggests. The power of minicom is probably not needed here.
This way, you can use socat and tee to save the log to a guest.log file while still outputting it to the console:
socat /tmp/vbox_serial - | tee guest.log
If there was a kernel oops or panic, the backtrace in the log usually helps to find out what has gone wrong.
Detecting Deadlocks
If you have obtained the full log via a serial connection or some other means, there is still nothing suspicious in it, and you suspect a deadlock in the kernel, the lockdep tool may help. It is included in the kernel (but you may need to rebuild the kernel with lock debugging enabled, e.g. CONFIG_PROVE_LOCKING=y).
Lockdep detects potential deadlocks and outputs the results to the kernel log. This presentation may help you analyse its output.
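Turning it on from a kernel source tree might look like this (a sketch; the exact option names are my assumption and can vary between kernel versions):

# Enable lockdep's deadlock detection ("Kernel hacking" menu) and rebuild
scripts/config --enable PROVE_LOCKING --enable DEBUG_LOCK_ALLOC
make olddefconfig
make -j"$(nproc)"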
Tracing Facilities
If you need tracing of some events in the kernel to debug your system, there are some tools that could be handy.
Kprobes - a kind of breakpoint you can set at almost any place in the kernel. Can be used to trace function calls, among other things, with a moderate performance impact.
SystemTap - a powerful system to analyze what is going on in the kernel. Part of it is based on Kprobes.
Ftrace - a tracing system included in the kernel; it incurs less overhead than Kprobes, if that matters.
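For instance, a minimal ftrace session tracing calls to a single function might look like this (a sketch; my_driver_fn is a hypothetical symbol from your module, and debugfs is assumed to be mounted at /sys/kernel/debug):

cd /sys/kernel/debug/tracing
echo my_driver_fn > set_ftrace_filter   # limit tracing to one function
echo function > current_tracer          # select the function tracer
echo 1 > tracing_on                     # start recording
cat trace                               # dump what was recorded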

Do I need two machines to develop IOKit Mac drivers?

I'm building an IOKit CFPlugin driver for OS X. I'll be working with incoming network data that will be translated to MIDI data. No hardware is involved other than the built-in AirPort. I have experience with drivers on Windows machines and firmware, but this is my first dip into doing it on the Mac. So far things are going pretty well, but the Apple documentation says: "For safety reasons, you should not load your driver on your development machine."
I only have one Mac. I really don't want two Macs (sorry, Apple). Should I take this warning seriously? Are there things I need to know?
Thanks, Tom Jeffries
You could also consider running OS X inside a VM as your testbed. It would surely be much more convenient than having a separate boot volume.
The warning is rather poorly worded; what you should consider doing is using a separate boot volume (partition) for trying out your driver, since it's possible to arbitrarily hose your system with your driver.
If you're doing kernel development on any OS that isn't isolated from your main system (via a VM, alternate boot disk, etc.), you're crazy!
What may be a bigger issue is that you can't do any kernel debugging, because the only option for that is to use GDB on a remote OS X system. For this, you may want to consider running OS X in virtualization.
You DEFINITELY want to have some way to recover from a fubar kext installation: a bootable external drive or something you can quickly restore from. This is the main reason for Apple's warning against running in-development kernel extensions on your production machine.
Nicholas is right that in order to debug using gdb (the only way in kernel space) you do need two machines. I've never tried using a VM as Coxy suggests, but I guess it's feasible (assuming that you run your kext on the virtual machine and use the real host machine to run gdb).
My preferred method for tracing and debugging in the kernel is kprintf() routed over FireWire (aka FireWire kprintf; see man fwkpfv). For this you do need two machines with FireWire ports.
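Setting that up might look roughly like this (a sketch; the boot-args value is my assumption from Apple's kernel debugging docs, and the FireWire kprintf kext must already be installed on the target):

# On the target Mac: turn on kprintf output (0x8 is the DB_KPRT debug flag)
sudo nvram boot-args="debug=0x8"
# On the host Mac, connected over FireWire: view the kprintf stream
fwkpfv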
Finally, being an old computer musician myself, I wonder why you want to program a MIDI synthesizer (or transformer) at the network-stack level. My guess is that you would have a much more gratifying experience working in userland (where you can use floating point math...).
If you need some hints or tips, feel free to get in touch...
|K<
From the ADC Kernel Programming Guide:
Kernel programming is a black art that should be avoided if at all possible. Fortunately, kernel programming is usually unnecessary. You can write most software entirely in user space. Even most device drivers (FireWire and USB, for example) can be written as applications, rather than as kernel code. A few low-level drivers must be resident in the kernel's address space, however, and this document might be marginally useful if you are writing drivers that fall into this category.

Does MC work independently from OS libraries and kernel?

Does the Windows version of Midnight Commander (MC) work independently of the Windows libraries? I mean, does it have its own way of reading data off the disk, or is it using the OS's abilities?
If it's not independent, do you know of any file manager that is? (Is that even possible?)
Any help is greatly appreciated.
Only the Windows kernel and device drivers can access the disks directly; all user-mode programs must use the Windows API (e.g. FindFirstFile).
Do you want MC to access the HDD controller directly? If not, you'll deal with the Windows file system driver stack no matter what file manager you use.

kvm vs. vmware for kernel debugging / USB driver development

I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide), and someone asked me why not use KVM?
So I ask: KVM vs. VMware for kernel debugging / USB driver development.
What are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger on the other.
*Edit:* Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote-debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
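For what it's worth, delegating one USB device to a QEMU/KVM guest while exposing a kernel debug stub might look like this (a sketch; the vendor/product IDs and image name are placeholders):

# Pass one host USB device through to the guest and expose a gdb stub
qemu-system-x86_64 -enable-kvm -m 2048 -hda guest.img \
    -usb -device usb-host,vendorid=0x1234,productid=0x5678 \
    -s
# -s listens on tcp::1234; attach with: gdb vmlinux, then "target remote :1234"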
You might be able to get a bit of traction using UML, which would allow you to do local debugging as with a regular user process; that is a little bit less trouble.
Instead of answering the direct question, I'll add another option... Depending on whether the kernel in question is a Linux kernel, and which part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to filesystem-related code. If you are developing/debugging modules that interact directly with hardware, it may be of much less use to you, though.
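Building and running a UML kernel might look like this (a sketch; rootfs.img is a placeholder root filesystem image):

# Build the kernel for the "um" architecture; the result is ./linux
make ARCH=um defconfig
make ARCH=um -j"$(nproc)"
# Run it as an ordinary process, with rootfs.img as its root disk
./linux ubd0=rootfs.img mem=256M
# Being a normal process, it can be debugged with plain gdb
gdb --args ./linux ubd0=rootfs.img mem=256M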
Reference links: home, other
I recently started building GNU Mach/HURD and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback (see the sketch below)
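That last point might look like this on a Linux host (a sketch; disk.img is a placeholder raw image, and the partition layout is assumed):

# Attach the raw image to a loop device, scanning for partitions
sudo losetup -fP --show disk.img     # prints the device, e.g. /dev/loop0
sudo mount /dev/loop0p1 /mnt/guest   # mount the first partition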
Bottom line is, for kernel work I just want the minimum of functionality needed to boot and see the result. VMware is much more about polished, usable virtualization than about down-and-dirty work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C was.
If it is a "real hardware" device, of course, VMware will not emulate it, so you won't be able to debug the driver under it (nor will any other virtualisation software, unless you extend one to do so).
Device driver debugging can be done to some extent with a real hardware machine with a normal kernel - although there are obviously things you can't do - like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect things. Moreover, traditional printf() debugging is quite possible (printk, anyone?), and there are various features in the kernel that make debugging easier. It's possible to build the kernel with various debug options that try to detect pointer problems, memory leaks, etc.
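As an example, the sort of options meant here can be switched on like this (a sketch; the exact set of useful options depends on the kernel version):

# Typical "Kernel hacking" options for catching pointer bugs and leaks
scripts/config --enable DEBUG_KERNEL \
               --enable DEBUG_SLAB \
               --enable DEBUG_KMEMLEAK
make olddefconfig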
By default, the kernel even gives a nice-ish stack trace in the log when it encounters an oops or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course, a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)
