According to my understanding, an MCE (Machine Check Exception) is triggered when the CPU detects a hardware error.
A cosmic particle hitting a register file and altering values should trigger an MCE.
Now, I tried running a Linux kernel on QEMU and connected GDB to it, then I altered the ESP register and the machine crashed.
However, when I checked the logs, I could not find any trace of an MCE.
Why is that?
I'm working on a custom Linux kernel for a RISC-V architecture. I am debugging using GDB/QEMU now that those tools are available. As I am debugging, I notice that I am not able to access memory at addresses that are virtualized. That is, once memory transitions from physical to virtual addressing in the kernel, I can no longer access those memory locations in GDB. For example, the kernel shows up like this in QEMU's info mem command.
paddr: 0x80200000 --> vaddr: 0xffffffff80000000
I think this question/problem is more an issue with QEMU, or maybe with my understanding of how to access it in QEMU correctly. As it stands, single stepping to the point in my kernel where virtual memory starts being used is fine, but single stepping beyond this causes QEMU to effectively stop -- it gives the same instruction on each step. However, if I continue, it boots in QEMU. How can I debug this via single stepping? Is there something I need to switch in GDB/QEMU?
I did try to access the address 0xffffffff8000007c, for example, and that worked; QEMU just doesn't transition to virtual memory when I single step past that point.
I'm experiencing a similar problem and have formed the following hypothesis:
I think the kernel is switching to a reduced page table when idle,
one that does not map loaded module memory. An asynchronous break by GDB has a high likelihood of interrupting the CPU while it is idle, of course.
Single stepping out of idle (e.g., after hitting a key in the Linux console) and re-attempting to set a breakpoint on the loaded module eventually succeeds.
A viable strategy is probably to break on the conclusion of the module loading code and to set relevant breakpoints at that point.
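A rough GDB sketch of that strategy, assuming you are attached to QEMU's gdbstub; do_init_module and lx-symbols exist in mainline kernels (the latter via the scripts/gdb helpers and CONFIG_GDB_SCRIPTS), and my_module_init is a placeholder for your own symbol, so adjust to your kernel:
(gdb) hbreak do_init_module      # fires once a module has been loaded and mapped
(gdb) continue
(gdb) lx-symbols                 # reload module symbols at their load addresses
(gdb) break my_module_init       # breakpoints in module code should now resolve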
I have a device which loads a small 'safe mode' Yocto image from coreboot, then selects a larger image to load and performs a kexec into that image. Typically this works, but in rare cases the target image's file system has been corrupted and the kernel panics on boot.
Since the device will eventually be deployed into locations that are difficult to access, I was hoping to find a way to recover from any kernel panic without having to physically reboot the device.
To fix this, I added an init script by passing "init=/sbin/init.sh" on the kexec command line when the new kernel is loaded, and in that init script on the 2nd file system I load a recovery kernel with "kexec --load-panic". This method successfully recovers from kernel panics that happen late in the boot process, but I encountered a file system which was trashed in a particular way such that the kernel panic occurs before the init script gets launched. Since the init script isn't executed, the panic kernel never gets loaded, and the device must be power cycled.
To fix this, I tried adding the recovery kernel load to the initial small image loaded by coreboot, but it appears to only handle kernel panics that occur before the "kexec --exec" command boots the new kernel.
I'm trying to figure out the best way to solve this. For example, I could add validation before I kexec into the new image. I currently check that the file system can be mounted and that its kernel file and the init script are present. If anyone knows which other files are necessary to get as far as the init script, I could add them to my validation.
Alternatively, is there a way to load the new kernel and kexec into it with the recovery kernel already registered via "--load-panic"?
I tried putting both the kexec --load and --load-panic on the same line, but that doesn't work.
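For reference, a rough sketch of the flow described above, with illustrative paths (the options are the standard kexec-tools ones; --load-panic also assumes the running kernel has memory reserved via crashkernel=):
# in the safe-mode image: stage the target kernel, then jump to it
kexec --load /mnt/target/boot/bzImage --initrd=/mnt/target/boot/initrd.img \
      --append="root=/dev/sda2 init=/sbin/init.sh"
kexec --exec
# in /sbin/init.sh on the 2nd file system: register the recovery kernel so a
# later panic lands in it (this registration does not yet exist for panics
# that happen before this script runs, which is exactly the gap described)
kexec --load-panic /boot/recovery.bzImage --initrd=/boot/recovery-initrd.img \
      --append="root=/dev/sda1"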
Any recommendation is greatly appreciated.
I have tried to run qemu-system-arm with a compiled Linux kernel (version 4.9)
and with an initramfs that I created containing a sample program.
This was based on an excellent post from here.
This is the command that I executed:
qemu-system-arm -M vexpress-a9 -kernel linux-4.9/arch/arm/boot/zImage -initrd initramfs -append "console=tty1"
Then QEMU shows these errors and its graphical window gets stuck:
pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
Even when I run it without the -initrd parameter, just to load the kernel, nothing happens.
When I tried running it with a vmlinuz-3.2.0-4-vexpress image as in this example, it worked for me.
Does someone have a clue what the problem may be? Is it something to do with the fact that it is a zImage? Is there a way to debug it?
Thanks!
"QEMU sits there and prints nothing" is quite a common symptom, and it almost always means "the guest kernel crashed before being able to print anything, because it wasn't configured correctly". This is pretty much the same effect you get if you try to boot a wrongly configured kernel on real hardware, and the process for debugging it is about the same:
check the obvious kernel config options are set correctly: in particular, that you have built it to support the ARM board and CPU that you're trying to run it on, and that you've enabled support for whatever devices you're trying to use for console output
give yourself the maximum chance of being able to see something, by configuring QEMU to expose a serial port, configuring the guest to send its console output to serial, and enabling any earlycon/earlyprintk options you can (serial output happens much earlier than graphics output, and the Linux kernel earlycon/earlyprintk options make the kernel start printing output earlier than it does by default)
if you have a kernel that works, and one that doesn't, look at the differences between the kernel configs to see if one is missing something
if all else fails, you have to break out the debugger to find out what's going on
Nothing about this is particularly QEMU specific -- it's the same sort of pain you have to go through if you're trying to do kernel bringup on hardware.
PS: my first guess is that the kernel is crashing because it doesn't have enough memory -- you haven't passed QEMU a '-m' option, so it is defaulting to 128MB; the vexpress-a9 board can handle up to 1GB. earlycon would probably be sufficient debug output to identify this issue. You also aren't passing a device tree blob via -dtb, which may be an issue for newer kernels (older kernels would happily boot without one).
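As a concrete starting point, something along these lines gives you more RAM, a device tree and serial console output (assuming the kernel was built with PL011 serial console and earlyprintk support; paths are the ones from the command above):
qemu-system-arm -M vexpress-a9 -m 1024 \
    -kernel linux-4.9/arch/arm/boot/zImage \
    -dtb linux-4.9/arch/arm/boot/dts/vexpress-v2p-ca9.dtb \
    -initrd initramfs \
    -append "console=ttyAMA0 earlyprintk" \
    -serial stdio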
There are lots of pages that explain this, but I can't find one that works for me. Many of the articles I find only work on El Capitan and older systems.
I cannot use fwkpfv right now as I don't have the right dongles. My client is getting me a used MacBook that will support FireWire.
My kernel extension panics my box. Quite oddly, if my coworker builds my extension, it works just fine. I remain flummoxed.
You can get "live" local kernel logs using the command
log stream --process 0
For looking at past logs, use log show instead, e.g.:
log show --predicate 'processID == 0' --last 1h | less
None of that will help you much with kernel panics, however, as the logging happens asynchronously in user space, so you won't get the very last messages before the panic.
A few more options for debugging KPs without firewire, which you're probably already aware of but I'll mention them for completeness' sake:
Ethernet-based kernel debugging (as opposed to firewire). Only the test device needs wired/thunderbolt ethernet, the Mac running the debugger can be on wifi.
You can often extract quite a lot of info from the panic log itself: in addition to symbolicating the stack (use the keepsyms=1 boot-arg so you don't have to do it retroactively), looking at the register contents and disassembly can often tell you the values of variables.
If you're missing parts of Apple's code in the stack trace, run a debug or development kernel instead of the release one. Those are built with fewer optimisations enabled, so functions are less likely to be inlined, etc.
There are a bunch of memory debugging and other diagnostic options you can turn on in the kernel via boot-args, e.g. -zp, -zc and so on (see the example after this list).
If you can repro the crash in a VM (VMWare Fusion, Parallels, VirtualBox, KVM/Qemu, whatever), you can use the VM's simulated serial port to log kprintf output. The virtual ethernet ports also tend to support kernel debugging if you set them up right.
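For example, the boot-args mentioned above can be set with nvram (pick the flags you actually need; depending on SIP settings you may have to do this from Recovery):
sudo nvram boot-args="keepsyms=1 -v -zp"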
I am programming a PCI device in Verilog and also writing its driver.
I have probably introduced some bug in the hardware design, and when I load the driver with insmod the kernel just gets stuck and doesn't respond. Now I'm trying to figure out what the last driver code line is that makes my computer get stuck. I have inserted printk calls in all the relevant functions, like probe and init, but none of them get printed.
What other code runs when I use insmod, before it gets to my init function? (I guess the kernel gets stuck there.)
printks are often not useful for debugging such a problem. They are buffered sufficiently that you won't see them in time if the system hangs shortly after printk is called.
It is far more productive to selectively comment out sections of your driver and by process of elimination determine which line is the (first) problem.
Begin by commenting out the entire module init section, leaving only return 0;. Build it and load it. Does it hang? Reboot the system, re-enable the next few lines (class_create()?) and repeat.
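A minimal sketch of that starting point (module and function names are placeholders):
#include <linux/module.h>
#include <linux/init.h>

static int __init mydrv_init(void)
{
        /* everything commented out: if even this hangs, the problem is not
         * in the driver body */
        return 0;
}

static void __exit mydrv_exit(void)
{
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");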
From what you describe, it looks like your driver is deadlocking the Linux scheduler. That means interrupts from the system timer don't arrive, or never get a chance to be handled by the kernel. There are three possible reasons:
You hang somewhere in your driver's interrupt handler (the handler starts its work but never finishes it).
Your device creates an interrupt storm (the device generates interrupts so frequently that your system does nothing but handle them).
You explicitly disable all interrupts in your driver but never re-enable them.
In all other cases the system will either crash, oops or panic with all the appropriate output, or tolerate the potential misbehavior of your device.
I guess that printk won't work for such an extreme scenario as a hang in kernel mode. It is quite heavyweight and therefore an unreliable diagnostic tool for scenarios like yours.
Tracing via debug output written directly to video memory works only in simpler environments, such as bootloaders or simpler kernels, where the system runs in a default low-end video mode and there is no need to synchronize access to the video memory. In such systems it can be great, and is often the only tool available for debugging, but Linux is not such a case.
What techniques can be recommended from the software debugging point of view:
Try to review your driver code, devoting special attention to the interrupt handler and to the places where you disable/enable interrupts for synchronization.
Commenting out all of the driver logic and then gradually uncommenting it can help a lot with localizing the issue.
You can try remote kernel debugging of your driver. I advise trying a virtual machine for that purpose, but I'm not sure whether they allow passing the PCI device through to the virtual machine.
You can try the trick of in-memory tracing (see the sketch after this list). The idea is to preallocate a memory chunk with well-known virtual and physical addresses and zero it. Then modify your driver to write trace data into this chunk using its virtual address (for example, assign a unique integer value to each event you want to trace and write '1' into the corresponding index of a byte array in the preallocated memory). When your system hangs, you can force a full memory dump and then analyze the memory in the dump using the physical address of the chunk that holds the traces. I had used this technique with a VMware Workstation VM on Windows: when the system hung, I just paused the VM instance and looked at the corresponding .vmem file, which contains the raw layout of the VM instance's physical memory. I'm not sure this trick will work as easily, or at all, on Linux, but I would try it.
Finally, you can try to trace the messages on the PCI bus, but I'm not an expert in this field and I'm not sure whether it can help in your case.
In general, kernel debugging is quite a tricky task, where a lot of tricks are in use and each of them works only for a specific set of cases. :(
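A rough sketch of the in-memory tracing idea from the list above (buffer size and event numbering are made up; the point is that you know both the virtual and the physical address of the buffer up front):
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/printk.h>

#define TRACE_EVENTS 64

static u8 *trace_buf;              /* one byte per event id */
static phys_addr_t trace_phys;     /* note this down for the dump analysis */

static int trace_setup(void)
{
        trace_buf = kzalloc(TRACE_EVENTS, GFP_KERNEL);
        if (!trace_buf)
                return -ENOMEM;
        trace_phys = virt_to_phys(trace_buf);
        pr_info("trace buffer at physical address %pa\n", &trace_phys);
        return 0;
}

/* sprinkle trace_hit(n) through the suspect code paths; after a hang, locate
 * the buffer in the raw memory dump by its physical address and see which
 * event bytes were set before the machine died */
static void trace_hit(unsigned int event)
{
        if (trace_buf && event < TRACE_EVENTS)
                trace_buf[event] = 1;
}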
I would put a logic analyzer on the bus lines (on an FPGA you could use ChipScope or similar). You'll then be able to tell which access is at fault (and fix the hardware). It will be useful anyway for debugging or analyzing future issues.
Another way would be to use the kernel crash dump utility, which has saved me some headaches in the past. Depending on your Linux distribution it may require installing (it is available by default in RHEL). See http://people.redhat.com/anderson/crash_whitepaper/
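A sketch of how that is typically wired up with kexec/kdump (paths and the reservation size are illustrative):
# boot the running kernel with memory reserved for the capture kernel
crashkernel=256M
# register the capture kernel (same mechanism as kexec --load-panic)
kexec -p /boot/vmlinuz-capture --initrd=/boot/initrd-capture.img \
      --append="root=/dev/sda1 maxcpus=1 reset_devices"
# after the panic the capture kernel boots; save /proc/vmcore and inspect it
crash vmlinux vmcore      # vmlinux must contain debug info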
There isn't really anything that runs before your init. Bus enumeration is done at boot; if that goes by without a hitch, the earliest cause of the freeze should be something in your driver's init, AFAIK.
You should be able to see printks as they are printed; they aren't buffered and should not get lost. That is only true in situations where you can directly see kernel output, such as on the text console or over a serial line. If there is some other application in the way, like a terminal in X11 or an ssh session displaying the kernel logs, it may not get a chance to read and display the logs before the computer freezes.
If for some other reason the printks still do not work for you, you can instead have your init function return early. Just test and move the return later into the init until you find the point where it crashes.
It's hard to say what is causing your freezes, but interrupts are one of the things I would look at first. Make sure the device really doesn't signal interrupts until the driver enables them (that includes clearing interrupt enables on system reset), and enable them in the driver only after all handlers are registered (also, clear the interrupt status before enabling interrupts).
The second thing to look at would be bus-master transfers; the same principle applies: make sure the device doesn't do anything until it's asked to, and have the driver verify that no bus-master transfers are active before enabling bus mastering at the device level.
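To make the ordering concrete, a sketch of a probe function along those lines; the MYDEV_* register offsets and mydev_isr() are placeholders for whatever your Verilog design exposes:
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/io.h>

static irqreturn_t mydev_isr(int irq, void *dev_id);

static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        void __iomem *bar0;
        int ret;

        ret = pci_enable_device(pdev);
        if (ret)
                return ret;
        bar0 = pci_iomap(pdev, 0, 0);
        if (!bar0)
                return -ENOMEM;

        /* quiesce the hardware first: mask its interrupt sources and clear
         * any stale status before the kernel can see an IRQ from it */
        iowrite32(0, bar0 + MYDEV_IRQ_ENABLE);
        iowrite32(~0u, bar0 + MYDEV_IRQ_STATUS);   /* assuming write-1-to-clear */

        ret = request_irq(pdev->irq, mydev_isr, IRQF_SHARED, "mydev", pdev);
        if (ret)
                return ret;                        /* error unwinding omitted */

        /* only now let the device raise interrupts ... */
        iowrite32(~0u, bar0 + MYDEV_IRQ_ENABLE);
        /* ... and enable bus mastering only once no DMA can already be pending */
        pci_set_master(pdev);
        return 0;
}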
The fact that the kernel gets stuck as soon as you install your driver module makes me wonder whether some other driver (built into the kernel?) is already driving the device. I made this mistake once, which is why I am asking. I'd look for the string "Kernel driver in use" in the output of lspci before installing the module. In any case, your printks should be visible in the dmesg output.
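For example, before running insmod:
lspci -k
and check whether your device already shows a "Kernel driver in use:" line; if it does, that driver has claimed the device.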
In addition to Claudio's suggestion, a couple more debugging ideas:
1. Try kgdb (https://www.kernel.org/doc/htmldocs/kgdb/EnableKGDB.html); see the sketch below.
2. Use JTAG interfaces to connect to debug tools (these, I think, vary between devices and vendors, so you'll have to figure out which debug tools you need for your particular hardware).
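A rough sketch of option 1 over a serial connection, assuming the target kernel was built with CONFIG_KGDB and CONFIG_KGDB_SERIAL_CONSOLE (device names and baud rate are examples):
# on the target's kernel command line: wait for the debugger at boot
kgdboc=ttyS0,115200 kgdbwait
# on the development machine
gdb ./vmlinux
(gdb) set serial baud 115200
(gdb) target remote /dev/ttyS0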