Where is the kprintf (kernel printf) log on Sierra? - macos

There are lots of pages that explain where it should be, but I can't find it. Many of the articles I find only apply to El Capitan and older systems.
I cannot use fwkpfv right now as I don't have the right dongles. My client is getting me a used MacBook that will support FireWire.
My kernel extension panics my box. Quite oddly, if my coworker builds my extension, it works just fine. I remain flummoxed.

You can get "live" local kernel logs using the command
log stream --process 0
For looking at past logs, use log show instead, e.g.:
log show --predicate 'processID == 0' --last 1h | less
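If you only care about messages from your own kext, you can narrow the query further by also filtering on the sender; for example (the kext name here is just a placeholder):
log show --predicate 'processID == 0 AND senderImagePath CONTAINS "MyKext"' --last 1h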
None of that will help you much with kernel panics, however, as the logging happens asynchronously in user space, so you won't get the very last messages before the panic.
A few more options for debugging KPs without firewire, which you're probably already aware of but I'll mention them for completeness' sake:
Ethernet-based kernel debugging (as opposed to FireWire). Only the test device needs wired/Thunderbolt Ethernet; the Mac running the debugger can be on wifi. (Example boot-args for this are shown after this list.)
You can often extract quite a lot of info from the panic log itself: in addition to symbolicating the stack (use keepsyms=1 boot-arg so you don't have to do it retroactively), looking at the register contents and disassembly can often tell you the values of variables.
If you're missing parts of Apple's code in the stack trace, run a debug or development kernel instead of the release one. Those are built with fewer optimisations enabled, so functions are less likely to be inlined, etc.
There are a bunch of memory debugging and other diagnostic options you can turn on in the kernel, e.g. -zp, -zc and so on.
If you can repro the crash in a VM (VMWare Fusion, Parallels, VirtualBox, KVM/Qemu, whatever), you can use the VM's simulated serial port to log kprintf output. The virtual ethernet ports also tend to support kernel debugging if you set them up right.
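For the two-machine setup and for keeping symbols in panic logs, the relevant settings go into the boot-args NVRAM variable. Something along these lines (the exact debug flags and the interface name depend on your setup):
sudo nvram boot-args="debug=0x144 kdp_match_name=en0 keepsyms=1"
Once the target has dropped into the debugger after a panic or NMI, you attach from the other Mac with lldb's kdp-remote command against the matching Kernel Debug Kit kernel (paths here are placeholders): lldb /Library/Developer/KDKs/<your KDK>/System/Library/Kernels/kernel.development, then (lldb) kdp-remote <target-ip>.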

Related

QEMU ARM is stuck with a black screen - running vanilla kernel

I have tried to run qemu-arm with a compiled Linux kernel (version 4.9)
and with an initramfs that I created with a sample program.
This was based on an excellent post from here.
This is the command that I executed:
qemu-system-arm -M vexpress-a9 -kernel linux-4.9/arch/arm/boot/zImage -initrd initramfs -append "console=tty1"
Then QEMU shows me these errors and its graphical window gets stuck:
pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
Even when I run it without the -initrd parameter, just to load the kernel, nothing happens.
When I tried running it with a vmlinuz-3.2.0-4-vexpress image from this example, it worked for me.
Does someone have a clue what the problem may be? Something to do with the fact that it is a zImage? Is there a way to debug it?
Thanks!
"QEMU sits there and prints nothing" is quite a common symptom, and it almost always means "the guest kernel crashed before being able to print anything, because it wasn't configured correctly". This is pretty much the same effect you get if you try to boot a wrongly configured kernel on real hardware, and the process for debugging it is about the same:
check the obvious kernel config options are set correctly: in particular, that you have built it to support the ARM board and CPU that you're trying to run it on, and that you've enabled support for whatever devices you're trying to use for console output
give yourself the maximum chance of being able to see something: configure QEMU to expose the serial port output, configure the guest to send its console output to serial, and enable any earlycon/earlyprintk options you can (serial output happens much earlier than graphics output, and the Linux kernel earlycon/earlyprintk options make the kernel start printing output earlier than it would by default)
if you have a kernel that works, and one that doesn't, look at the differences between the kernel configs to see if one is missing something
if all else fails, you have to break out the debugger to find out what's going on
Nothing about this is particularly QEMU specific -- it's the same sort of pain you have to go through if you're trying to do kernel bringup on hardware.
PS: my first guess is that the kernel is crashing because it doesn't have enough memory -- you haven't passed QEMU a '-m' option, so it is defaulting to 128MB; the vexpress-a9 board can handle up to 1GB. earlycon would probably be sufficient debug output to identify this issue. You also aren't passing a device tree blob via -dtb, which may be an issue for newer kernels (older kernels would happily boot without one).
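To make those suggestions concrete, an invocation along the following lines gives the guest more RAM, a device tree, and a serial console to print on (the dtb path assumes the in-tree vexpress device tree was built alongside the kernel, and earlyprintk only helps if the kernel was configured with the corresponding early-printk/DEBUG_LL support):
qemu-system-arm -M vexpress-a9 -m 1G -kernel linux-4.9/arch/arm/boot/zImage -dtb linux-4.9/arch/arm/boot/dts/vexpress-v2p-ca9.dtb -initrd initramfs -append "console=ttyAMA0 earlyprintk" -serial stdio
With -serial stdio, the PL011 UART output (console=ttyAMA0) goes to the terminal that launched QEMU, so you can see kernel messages even if the graphical window stays blank.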

Is There Ever an Advantage to User Mode Debug over Kernel Mode Debug?

From what I understand, on a high level, user mode debugging provides you with access to the private virtual address space of a process. A debug session is limited to that process and it cannot overwrite or tamper w/ other processes' virtual address space/data.
Kernel mode debug, I understand, provides access to other drivers and kernel processes that need full access to multiple resources, in addition to the original process address space.
From this, I get to thinking that kernel mode debugging seems more robust than user mode debugging. This raises the question for me: is there a time, when both options of debug mode are available, that it makes sense to choose user mode over a more robust kernel mode?
I'm still fairly new to the concept, so perhaps I am thinking of the two modes incorrectly. I'd appreciate any insight there, as well, to better understand anything I may be missing. I just seem to notice that a lot of people seem to try to avoid kernel debugging. I'm not entirely sure why, as it seems more robust.
The following is mainly from a Windows background, but I guess it should be fine for Linux too. The concepts are not so different.
Some inline answers first
From what I understand, on a high level, user mode debugging provides you with access to the private virtual address space of a process.
Correct.
A debug session is limited to that process
No. You can attach to several processes at the same time, e.g. with WinDbg's .tlist/.attach commands.
and it cannot overwrite or tamper w/ other processes' virtual address space/data.
No. You can modify the memory, e.g. with WinDbg's ed command.
Kernel mode debug, I understand, provides access to other drivers and kernel processes that need full access to multiple resources,
Correct.
in addition to the original process address space.
As far as I know, you have access to physical RAM only. Some of the virtual address space may be swapped out, so not all of the address space is available.
From this, I get to thinking that kernel mode debugging seems more robust than user mode debugging.
I think the opposite. If you write incorrect values somewhere in kernel mode, the PC crashes with a blue screen. If you do that in user mode, it's only the application that crashes.
This raises the question for me: is there a time, when both options of debug mode are available, that it makes sense to choose user mode over a more robust kernel mode?
If you debug an application only and no drivers are involved, I prefer user mode debugging.
IMHO, kernel mode debugging is not more robust, it's more fragile - you can really break everything at the lowest level. User mode debugging provides the typical protection against crashes of the OS.
I just seem to notice that a lot of people seem to try to avoid kernel debugging
I observe the same. And usually it's not so difficult once they try it. In my debugging workshops, I explain processes and threads from kernel point of view and do it live in the kernel. And once people try kernel debugging, it's not such a mystery any more.
I'm not entirely sure why, as it seems more robust.
Well, you really can blow up everything in kernel mode.
User mode debugging
User mode debugging is the default that any IDE will do. The integration is usually good, in some IDEs it feels quite native.
During user mode debugging, things are easy. If you access memory that is paged out to disk, the OS is still running and will simply page it in, so you can read and write it.
You have access to everything that you know from application development. There are threads and you can suspend or resume them. The knowledge you have from application development will be sufficient to operate the debugger.
You can set breakpoints and inspect variables (as long as you have correct symbols).
Some kinds of debugging are only available in user mode. E.g. the SOS extension for WinDbg for debugging .NET applications only works in user mode.
Kernel debugging
Kernel debugging is quite complex. Typically, you can't simply do local kernel debugging - if you stop somewhere in the kernel, how do you control the debugger? The system will just freeze. So, for kernel debugging, you need 2 PCs (or virtual PCs).
During kernel mode debugging, things are complex. While you are just inside an application, a millisecond later, some interrupt occurs and does something completely different. You don't only have threads, you also need to deal with call stacks that are outside your application, you'll see CPU register content, instruction pointers etc. That's all stuff a "normal" app developer does not want to care about.
You don't only have access to everything that you implemented. You also have access to everything that Microsoft, Intel, NVidia and lots of other companies developed.
You cannot simply access all memory, because some memory that is paged out to the swap file will first generate a page fault, then involve some disk driver to fetch the data, potentially page out some other data, etc.
There is so much going on in kernel mode that, in order not to break it, you need a really thorough understanding of all those topics.
Conclusion
Most developers just want to care about their source code. So if they are writing programs (aka. applications, scripts, tools, games), they just want user mode debugging. If "their code" is driver code, of course they want kernel debugging.
And of course Security Specialists and Crackers want kernel mode debugging because they want privileges.

how to debug a pci device and linux driver

I am programming a PCI device in Verilog and also writing its driver.
I have probably introduced some bug in the hardware design, and when I load the driver with insmod the kernel just gets stuck and doesn't respond. Now I'm trying to figure out what's the last driver code line that gets my computer stuck. I have inserted printk in all relevant functions like probe and init, but none of them get printed.
What other code is run when I use insmod, before it gets to my init function? (I guess the kernel gets stuck over there.)
printks are often not useful for debugging such a problem. They are buffered sufficiently that you won't see them in time if the system hangs shortly after printk is called.
It is far more productive to selectively comment out sections of your driver and by process of elimination determine which line is the (first) problem.
Begin by commenting out the entire module's init section, leaving only return 0;. Build it and load it. Does it hang? Reboot the system, re-enable the next few lines (class_create()?) and repeat.
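A stripped-down skeleton for that first step might look something like this (the module and function names are placeholders, not your actual driver):

#include <linux/module.h>
#include <linux/init.h>

static int __init mydrv_init(void)
{
        /* everything commented out for now; re-enable a few lines per build
         * until the hang reappears */
        /* return pci_register_driver(&mydrv_pci_driver); */
        return 0;
}

static void __exit mydrv_exit(void)
{
        /* undo whatever init currently does (nothing yet) */
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");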
From what you are telling, it looks like the Linux scheduler is being deadlocked by your driver. That means interrupts from the system timer don't arrive, or don't get a chance to be handled by the kernel. There are a few possible reasons:
You hang somewhere in your driver's interrupt handler (the handler starts its work but never finishes it).
Your device creates an interrupt storm (the device generates interrupts so frequently that your system does nothing but handle them).
You explicitly disable all interrupts in your driver but never re-enable them.
In all other cases the system will either crash, oops, or panic with all the appropriate output, or tolerate the potential misbehavior of your device.
I guess that printk won't work for such an extreme scenario as a hang in kernel mode. It is quite heavyweight, and because of that an unreliable diagnostic tool for scenarios like yours.
Tracing by writing debug output directly to video memory only works in simpler environments like bootloaders or simpler kernels, where the system runs in a basic video mode and there is no need to synchronize access to the video memory. In such systems it can be a great debugging tool, and often the only one available. Linux is not such a case.
What techniques can be recommended from the software debugging point of view:
Try to review your driver code, devoting special attention to the interrupt handler and the places where you disable/enable interrupts for synchronization.
Commenting out all of the driver logic and gradually uncommenting it can help a lot with localizing the issue.
You can try to use remote kernel debugging of your driver. I advise trying a virtual machine for that purpose, but I'm not aware whether they allow passing the PCI device through to the virtual machine.
You can try the trick with in-memory tracing (sketched after this list). The idea is to preallocate a memory chunk with well-known virtual and physical addresses and zero it. Then modify your driver to write trace data into this chunk using its virtual address. (For example, assign a unique integer value to each event that you want to trace and write '1' into the appropriate index of the byte array in the preallocated memory chunk.) Then, when your system hangs, you can force a full memory dump and analyze the memory layout packed in the dump using the physical address of the chunk with the traces. I have used this technique with a VMware Workstation VM on Windows: when the system hung, I just paused the VM instance and looked at the appropriate .vmem file, which contains the raw layout of the VM instance's physical memory. I'm not sure this trick will work as easily, or at all, on Linux, but I would try it.
Finally, you can try to trace the messages on the PCI bus, but I'm not an expert in this field and not sure whether it can help in your case or not.
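A rough sketch of that in-memory tracing idea, as a hypothetical helper inside the driver (the reserved physical address and buffer size are made-up values; on a real system you would have to reserve that region, e.g. via the memmap= boot parameter, so nothing else uses it):

#include <linux/io.h>
#include <linux/errno.h>

#define TRACE_PHYS 0x20000000UL   /* assumed reserved physical address */
#define TRACE_SIZE 4096

static u8 __iomem *trace_buf;

static int trace_init(void)
{
        trace_buf = ioremap(TRACE_PHYS, TRACE_SIZE);
        if (!trace_buf)
                return -ENOMEM;
        memset_io(trace_buf, 0, TRACE_SIZE);    /* start from a clean, all-zero buffer */
        return 0;
}

/* Mark event 'id' as reached; after the hang, inspect physical address
 * TRACE_PHYS in the memory dump to see which marks were set. */
static void trace_mark(unsigned int id)
{
        if (trace_buf && id < TRACE_SIZE)
                writeb(1, trace_buf + id);
}

Call trace_mark() with a different id at each point of interest (entry to probe, before/after enabling interrupts, and so on), then read the dump to see which was the last mark written.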
In general, kernel debugging is quite a tricky task; a lot of tricks are in use and each of them works only for a specific set of cases. :(
I would put a logic analyzer on the bus lines (on an FPGA you could use ChipScope or similar). You'll then be able to tell which access is the cause (and fix the hardware). It will be useful anyway for debugging or analyzing future issues.
Another way would be to use the kernel crash dump utility, which saved me some headaches in the past. Depending on your Linux distribution it may require installing (it is available by default in RH). See http://people.redhat.com/anderson/crash_whitepaper/
There isn't really anything that is run before your init. Bus enumeration is done at boot; if that goes by without a hitch, the earliest cause for freezing should be something in your driver init, AFAIK.
You should be able to see printks as they are printed; they aren't buffered and should not get lost. That's applicable only in situations where you can directly see kernel output, such as on the text console or over a serial line. If there is some other application in the way, like displaying the kernel logs in a terminal in X11 or over ssh, it may not get a chance to read and display the logs before the computer freezes.
If for some other reason the printks still do not work for you, you can instead have your init function return early. Just test and move the return later into the init until you find the point where it crashes.
It's hard to say what is causing your freezes, but interrupts is one of those things I would look at first. Make sure the device really doesn't signal interrupts until the driver enables them (that includes clearing interrupt enables on system reset) and enable them in the driver only after all handlers are registered (also, clear interrupt status before enabling interrupts).
Second thing to look at would be bus master transfers, same thing applies: Make sure the device doesn't do anything until it's asked to and let the driver make sure that no busmaster transfers are active before enabling busmastering at the device level.
The fact that the kernel gets stuck as soon as you install your driver module makes me wonder whether some other driver (built into the kernel?) is already driving the device. I made this mistake once, which is why I am asking. I'd look for the string "Kernel driver in use" in the output of 'lspci -k' before installing the module. In any case, your printks should be visible in dmesg output.
In addition to Claudio's suggestion, a couple more debugging ideas:
1. try kgdb (https://www.kernel.org/doc/htmldocs/kgdb/EnableKGDB.html)
2. use a JTAG interface to connect debug tools (these, I think, vary between devices and vendors, so you'll have to figure out which debug tools you need for your particular hardware)

How to know that the kernel has panicked?

I want to be able to monitor kernel panics - know if and when they have happened.
Is there a way to know, after the machine has booted, that it went down due to a kernel panic (and not, for example, an ordered reboot or a power failure)?
The machine may be configured with KDUMP and/or KDB, but I prefer not to assume that either is or is not installed.
Patching the kernel is an option, though I prefer to avoid it. But even if I do it, I'm not sure what the patch could do.
I'm using kernel 2.6.18 (ancient, I know). Solutions for newer kernels may be interesting too.
Thanks.
The kernel module 'netconsole' may help you to log kernel printk messages over UDP.
You can view the log messages on a remote syslog server, even if the machine is rebooted.
Introduction:
=============
This module logs kernel printk messages over UDP allowing debugging of
problem where disk logging fails and serial consoles are impractical.
It can be used either built-in or as a module. As a built-in,
netconsole initializes immediately after NIC cards and will bring up
the specified interface as soon as possible. While this doesn't allow
capture of early kernel panics, it does capture most of the boot
process.
Check kernel document for more information: https://www.kernel.org/doc/Documentation/networking/netconsole.txt
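For example (the IP addresses and MAC are placeholders), loading the module might look like:
modprobe netconsole netconsole=6665@10.0.0.1/eth0,6666@10.0.0.2/00:11:22:33:44:55
following the src-port@src-ip/dev,tgt-port@tgt-ip/tgt-mac format from the document above. On the receiving machine you can capture the messages with a syslog daemon listening on that UDP port, or simply with netcat, e.g. netcat -u -l -p 6666 (the exact flags differ between netcat variants).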

Gnu Debugger & Linux Kernel

I have compiled my own Kernel module and now I would like to be able to load it
into the GNU Debugger GDB. I did this once, a year ago or so to have a look
at the memory layout. It worked fine then, but of course I was too silly to
write down the single steps I took to accomplish this... Can anyone enlighten
me or point me to a good tutorial?
Thank you so much
For kernels > 2.6.26 (i.e. after May 2008), the preferred way is probably to use "kgdb light" (not to be confused with its ancestor kgdb, available as a set of kernel patches).
"kgdb light" is now part of the kernel (in by default in current Ubuntu kernels, for instance), and it's capabilities are improving fast (Jason Wessel is working on it - possible google key).
Drawback: You need two machines, the one you're debugging and the development machine (host) where gdb runs. Currently, those two machines can only be linked through a serial link.
kgdb runs on the target machine, where it handles the breakpoints, stepping, etc., and the remote debugging protocol used to talk with the development machine.
gdb runs on the development machine, where it handles the user interface.
A USB-to-serial adapter works OK on the development machine, but currently you need a real UART on the target machine - and that's not so frequent anymore on recent hardware.
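For reference, the target side is typically wired up through kernel boot parameters, and gdb on the host then attaches over the serial line. Roughly (device names and baud rate are examples, not a definitive recipe):
Boot the target kernel with kgdboc=ttyS0,115200 kgdbwait on its command line, then on the host:
gdb ./vmlinux
(gdb) target remote /dev/ttyS0
(On older gdb versions you may first need set remotebaud 115200; newer ones use set serial baud 115200.)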
The (terse) kgdb documentation is in the kernel sources, in
Documentation/DocBook
I suggest you google around for "kgdb light" for the complete story.
Again, don't confuse kgdb and kgdb light; they come together in Google searches but are mostly different animals. In particular, info from linsyssoft.com relates to the "ancestor" kgdb, so try queries like:
kgdb module debugging -"linsyssoft.com" -site:linsyssoft.com
and discard articles prior to May 2008 / 2.6.26 kernel.
Finally, for module debugging, you need to manually load the module symbols in the dev machine for all the code and sections you are interested in. That's a bit too long to address here, but some clues there, there and there.
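As a rough illustration of that symbol-loading step (the module name and addresses are made up): the load addresses of a module's sections are exposed on the target under /sys/module/<name>/sections/, e.g.
cat /sys/module/mymodule/sections/.text
and you then feed them to gdb on the development machine with
(gdb) add-symbol-file mymodule.ko 0xffffffffa0002000 -s .data 0xffffffffa0004000 -s .bss 0xffffffffa0004400
using the addresses read from the target.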
Bottom line is, kgdb is a very welcome improvement but don't expect this trip to be as easy as running gdb in user mode. Yet. :)
It has been a while since I was actively developing drivers for Linux, so maybe my answer is a bit out of date. I would say you cannot use GDB - if at all, only for post-mortem debugging on dump files. To debug you should rather use a kernel debugger. Build the kernel with a kernel debugger enabled (there is an out-of-the-box debugger for 2.6, which was lacking at the time I was active). I used the KDB kernel patches from SGI (ftp://oss.sgi.com/www/projects/kdb/download/), which I was quite happy with. A user-space tool won't be of much use unless the new gdb can somehow communicate with the internal kernel debugger (which you would have to activate anyway).
I hope this gives you at least some hints, while not being a detailed answer. Better than no answer at all. Regards.
I suspect what you did was
gdb /boot/vmlinux /proc/kcore
Of course you can't actually do any debugging, but it's certainly good enough to have a poke around the kernel.
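For example, assuming /boot/vmlinux contains debug symbols (and KASLR is not in play, as on kernels of that era), you can read global kernel state:
gdb /boot/vmlinux /proc/kcore
(gdb) print init_task.comm
(gdb) print jiffies_64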
