ARM GDB: break on processor mode change - debugging

I'm currently debugging the Linux ARM kernel and have always wondered whether it is possible in GDB to break when the CPU mode changes (usr, svc, abt, etc.). Currently, when I'm not sure which mode we are in, I have to inspect the PSR register over and over, but maybe there is a more effective way, such as breaking on a mode change?
I know I can put breakpoints on the exception vectors, but that only detects mode changes into the privileged modes, not the other way around. Maybe there is a command to check whether the PSR changes to 0x10 (usr mode)?
Thanks
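A crude but sometimes workable trick: GDB can set a software watchpoint on an expression involving a register, such as the CPSR mode field. Software watchpoints force GDB to single-step the target, so this is painfully slow, but over a short stretch of code something like the following can work:

    (gdb) watch $cpsr & 0x1f
    (gdb) continue

GDB then stops whenever the mode bits change, printing the old and new values, which tells you which mode was entered.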

All returns to user space go through entry-common.S. This file uses a macro named arch_ret_to_user; you can define it to be a BKPT instruction, perhaps conditional on a global flag set via /proc.
You cannot detect a switch to user mode generically using the CPU alone (you need support code), as supervisor code can change anything. ETM may offer a way if your CPU has that functionality.
There is also thread_notify.h, which has callbacks for when a user task is rescheduled. You can hook these with your own logic if you don't need the debugger, or set a breakpoint on an empty null(){} function that you call from the notifier only when the condition is met.
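A minimal sketch of that notifier idea, assuming an ARM kernel that provides asm/thread_notify.h (the hook and its registration are real ARM kernel interfaces; the empty function and the plumbing around it are illustrative):

    #include <linux/notifier.h>
    #include <asm/thread_notify.h>

    /* Empty function that exists only so a debugger breakpoint can be set on it. */
    static noinline void task_switch_hook(void) { }

    static int on_thread_event(struct notifier_block *nb,
                               unsigned long cmd, void *t)
    {
            if (cmd == THREAD_NOTIFY_SWITCH)   /* a task is being switched in */
                    task_switch_hook();        /* gdb: break task_switch_hook */
            return NOTIFY_DONE;
    }

    static struct notifier_block thread_nb = {
            .notifier_call = on_thread_event,
    };

    /* During init: thread_register_notifier(&thread_nb); */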

Related

How to set a memory breakpoint in WinDbg kernel mode?

I would like to set a memory breakpoint (break on access) in WinDbg's kernel-mode debugger.
I want the debugger to break every time a specific user-mode module is touched, while using the kernel debugger.
But I've read somewhere that it's impossible to set this directly, and that in order to get memory breakpoints I have to write a plugin.
I tried the SDbgExt plugin with the !vprotect command, but it fails to set the memory breakpoint.
If I have to write a plugin to allow memory breakpoints in kernel mode, does it have to be a driver?
I've read some chapters of the Windows Internals book, but it didn't help me at all.
I couldn't find much information on how to get started with this.
You can set breakpoints on user-mode addresses from kernel mode. The only thing you need to take care of is switching to the right process context with the ".process /i" command.
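For example (the process name and addresses are placeholders):

    kd> !process 0 0 myapp.exe     (find the process; note its EPROCESS address)
    kd> .process /i <eprocess>     (invasive switch into that process context)
    kd> g                          (the debugger breaks back in once the switch completes)
    kd> bp myapp!WinMain           (user-mode addresses and symbols now resolve)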
If it is a one-off breakpoint, that is, you are content with the process being destroyed by the debugging, zero out the entire module using the e command (edit memory). Set the whole thing to cc (which is int 3, as far as I remember); zeros will do as well. You will break as soon as you touch any of the module's code.
As the next step, note where you were (relative to the module) and set a proper breakpoint.
Hope that helps.
(Edit) Do you have full symbols? If you do, did you try bm module!* ?
It sounds like you want to set a "breakpoint on access", but specifying a range rather than a single address? I have never seen that done in WinDbg. The ba breakpoints use hardware debug registers rather than inserting INT 3 the way software breakpoints do, so this is definitely hardware-platform specific.
I have done this once on an ARM chipset using a hardware debugger; ETM on ARM allows you to set triggers on address ranges.

Kernel panic error in ARM board

I have an ARM board at a remote location. Sometimes it hits a kernel panic, and when that happens there is no way to restart the hardware, because no one is available at that location to restart it.
I want the board to restart automatically after a kernel panic. What do I need to do in the kernel?
If your hardware has a watchdog timer, compile the kernel with watchdog support and configure it. I suggest following this blog: http://www.jann.cc/2013/02/02/linux_watchdog.html
Caution: I have never tried this. If it solves the problem, please update here.
You can modify the panic() function in kernel/panic.c to call kernel_restart(char *cmd) at the point where you want the restart to happen (probably after printing the required debug information).
I am assuming you are bringing up a board, so note that you need to supply the ops for the associated functions in machine_restart() (called by kernel_restart()) for your machine type. If you are just using the board as-is, then I guess rebuilding the kernel with the kernel_restart() call should do.
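A hedged alternative to patching panic() itself is a panic notifier; a minimal sketch (the choice of emergency_restart() here is an assumption, and header locations vary between kernel versions):

    #include <linux/kernel.h>
    #include <linux/notifier.h>
    #include <linux/reboot.h>

    /* Reboot from the panic path without editing kernel/panic.c. */
    static int reboot_on_panic(struct notifier_block *nb,
                               unsigned long event, void *unused)
    {
            emergency_restart();   /* minimal restart path */
            return NOTIFY_DONE;
    }

    static struct notifier_block panic_nb = {
            .notifier_call = reboot_on_panic,
    };

    /* During init: atomic_notifier_chain_register(&panic_notifier_list, &panic_nb); */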
panic() is usually triggered by events the kernel cannot recover from. If you do not have a watchdog, look at your hardware to see whether a GPIO or similar is connected to the RESET line. If so, you can toggle this pin to reboot the CPU. Trying to alter panic() may just make things worse, depending on the root cause and the features you use.
You may hook arm_pm_restart with your custom restart functionality. You can test it with the shell command reboot, if present; with current ARM Linux versions, panic() should end up calling the same routine.
You may wish to turn off the MMU and block interrupts in this routine, which will make it more resilient when called from panic(). As you are going to reset anyway, you can copy the routine to any physical address you like.
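A sketch of such a hook, assuming an older ARM kernel where arm_pm_restart is assignable (newer kernels use register_restart_handler() instead); RESET_GPIO and its polarity are board-specific placeholders:

    #include <linux/gpio.h>
    #include <linux/reboot.h>
    #include <asm/system_misc.h>      /* arm_pm_restart on older ARM kernels */

    #define RESET_GPIO 42             /* hypothetical GPIO wired to the RESET line */

    static void gpio_restart(enum reboot_mode mode, const char *cmd)
    {
            local_irq_disable();           /* stay resilient when called from panic() */
            gpio_set_value(RESET_GPIO, 0); /* assert RESET; polarity is board-specific */
            while (1)
                    cpu_relax();           /* wait for the hardware to take us down */
    }

    /* During board init: arm_pm_restart = gpio_restart; */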
A watchdog may be better; it can catch cases where panic() is never even called. You may have a watchdog and not realize it: many Cortex-A CPUs have one built in, and it is fairly rare for hardware not to have one.
However, if you don't have a watchdog, you can use the GPIO mechanism above; hardware should usually provide some way for software to restart the device (and its peripherals). The panic() may be due to a misbehaving device trampling memory, latched-up DRAM/flash, etc. Toggling a RESET line may be better than a watchdog in such cases, if the RESET is also connected to other hardware besides the CPU.
Related: How to debug kernel freeze, How to change watchdog timer
AFAIK, a simple way to restart the board after a kernel panic is to pass a kernel parameter (usually from the bootloader):
panic=1
The board will then auto-reboot 1 second after a panic.
Search the kernel Documentation for more.
Some examples from the documentation:
...
panic=        [KNL] Kernel behaviour on panic: delay <timeout>
              timeout > 0: seconds before rebooting
              timeout = 0: wait forever
              timeout < 0: reboot immediately
              Format: <timeout>
...
oops=panic    Always panic on oopses. Default is to just kill the
              process, but there is a small probability of
              deadlocking the machine.
              This will also cause panics on machine check exceptions.
              Useful together with panic=30 to trigger a reboot.
...
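The same timeout can also be changed at runtime, which is handy for testing before baking it into the boot arguments:

    echo 5 > /proc/sys/kernel/panic    # reboot 5 seconds after a panic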
As suggested in previous comments, a watchdog timer is your friend here. If your hardware has a watchdog timer, enable it in the kernel configuration and configure it.
Another alternative is to use a Phidget, if a USB connection is available at the remote location; a Phidget controller and its software let you control the board over USB. Check for board support.

How to debug a PCI device and Linux driver

I am programming a PCI device in Verilog and also writing its driver.
I have probably introduced a bug into the hardware design, and when I load the driver with insmod the kernel just gets stuck and doesn't respond. Now I'm trying to figure out the last line of driver code that runs before my computer gets stuck. I have inserted printk calls in all the relevant functions, like probe and init, but none of them get printed.
What other code runs during insmod before it reaches my init function? (I guess the kernel gets stuck there.)
printks are often not useful for debugging such a problem. They are buffered enough that you won't see them in time if the system hangs shortly after printk is called.
It is far more productive to selectively comment out sections of your driver and by process of elimination determine which line is the (first) problem.
Begin by commenting out the module's entire init section, leaving only return 0;. Build it and load it. Does it hang? Reboot the system, re-enable the next few lines (class_create()?), and repeat.
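A sketch of that bisection with a hypothetical driver; move the #if 0 marker down one step at a time, rebooting after each hang:

    static int __init mydrv_init(void)
    {
            pr_info("mydrv: init entered\n");  /* proves the module loads at all */
    #if 0   /* re-enable the real init one step at a time */
            int ret = pci_register_driver(&mydrv_pci_driver);
            if (ret)
                    return ret;
    #endif
            return 0;
    }
    module_init(mydrv_init);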
From what you describe, it looks like the Linux scheduler is being deadlocked by your driver. That means interrupts from the system timer don't arrive, or never get a chance to be handled by the kernel. There are three possible reasons:
You hang somewhere in your driver's interrupt handler (the handler starts its work but never finishes).
Your device creates an interrupt storm (it generates interrupts so frequently that your system does nothing but handle them).
You explicitly disable all interrupts in your driver but never re-enable them.
In all other cases the system will either crash, oops, or panic with the appropriate output, or tolerate the potential misbehavior of your device.
I suspect that printk won't work in such an extreme scenario as a hang in kernel mode. It is quite heavyweight, and therefore an unreliable diagnostic tool for scenarios like yours.
Tracing via debugging output written directly to video memory works only in simpler environments, such as bootloaders or simpler kernels, where the system runs in a basic video mode and there is no need to synchronize access to the video memory. In such systems it can be a great tool, and often the only one available for debugging. Linux is not such a case.
Some techniques that can be recommended from the software debugging point of view:
Review your driver code, paying special attention to the interrupt handler and to the places where you disable/enable interrupts for synchronization.
Comment out all the driver logic and gradually uncomment it; this can help a lot with localizing the issue.
Try remote kernel debugging of your driver. I advise using a virtual machine for this purpose, but I'm not sure whether they allow passing a PCI device through to the virtual machine.
Try the in-memory tracing trick (a sketch follows this list). The idea is to preallocate a memory chunk with well-known virtual and physical addresses and zero it. Then modify your driver to write trace data into this chunk using its virtual address; for example, assign a unique integer ID to each event you want to trace and write 1 into the corresponding index of a byte array in the preallocated memory. When the system hangs, force generation of a full memory dump, then analyze the memory packed in the dump using the physical address of the chunk. I have used this technique with a VMware Workstation VM on Windows: when the system hung, I just paused the VM instance and looked at the .vmem file, which contains the raw layout of the VM instance's physical memory. I'm not sure this trick will work as easily, or at all, on Linux, but I would try it.
Finally, you can try tracing the messages on the PCI bus, but I'm not an expert in this field and not sure whether it can help in your case.
In general, kernel debugging is quite a tricky task, with many tricks in use, each of which works only for a specific set of cases. :(
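A minimal sketch of the in-memory tracing idea from point 4 (dma_alloc_coherent() is used here only because it hands back a buffer with a known bus/physical address; all names are illustrative):

    #include <linux/dma-mapping.h>

    #define TRACE_SIZE 64
    static u8 *trace;             /* virtual address the driver writes through */
    static dma_addr_t trace_pa;   /* address to look up later in the raw dump */

    /* At probe time:
     *   trace = dma_alloc_coherent(&pdev->dev, TRACE_SIZE, &trace_pa, GFP_KERNEL);
     *   pr_info("trace buffer at %pad\n", &trace_pa);
     */

    static inline void trace_event(unsigned int id)
    {
            if (trace && id < TRACE_SIZE)
                    trace[id] = 1;    /* readable at trace_pa even after a hang */
    }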
I would put a logic analyzer on the bus lines (on an FPGA you could use ChipScope or similar). You'll then be able to tell which access is at fault (and fix the hardware). It will be useful anyway for debugging or analyzing future issues.
Another way would be to use the kernel crash dump utility, which has saved me some headaches in the past. Depending on your Linux distribution it may require installing (it is available by default in RHEL). See http://people.redhat.com/anderson/crash_whitepaper/
There isn't really anything that runs before your init. Bus enumeration is done at boot; if that goes by without a hitch, the earliest cause of the freeze should be something in your driver's init, AFAIK.
You should be able to see printks as they are printed; they aren't buffered in a way that should lose them. That applies only where you can see kernel output directly, such as on the text console or over a serial line. If some other application is in the way, such as a terminal in X11 or an ssh session displaying the kernel logs, it may not get a chance to read and display them before the computer freezes.
If for some other reason the printks still do not work for you, you can instead have your init function return early. Just test and move the return later and later in the init until you find the point where it hangs.
It's hard to say what is causing your freezes, but interrupts are one of the things I would look at first. Make sure the device really doesn't signal interrupts until the driver enables them (that includes clearing interrupt enables on system reset), and enable them in the driver only after all handlers are registered (also, clear interrupt status before enabling interrupts).
The second thing to look at would be bus-master transfers. The same principle applies: make sure the device doesn't do anything until it's asked to, and have the driver verify that no bus-master transfers are active before enabling bus mastering at the device level.
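A hedged sketch of that interrupt ordering (register offsets, bit masks, and the write-1-to-clear status behavior are assumptions about a hypothetical device):

    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/pci.h>

    #define MYDEV_IRQ_ENABLE  0x10   /* hypothetical register offsets */
    #define MYDEV_IRQ_STATUS  0x14
    #define MYDEV_IRQ_WANTED  0x3    /* the interrupt sources we handle */

    static int mydev_setup_irq(struct pci_dev *pdev, void __iomem *regs,
                               irq_handler_t isr, void *ctx)
    {
            int ret;

            writel(0, regs + MYDEV_IRQ_ENABLE);    /* mask everything first */
            writel(~0u, regs + MYDEV_IRQ_STATUS);  /* clear stale status (assumed W1C) */

            ret = request_irq(pdev->irq, isr, IRQF_SHARED, "mydev", ctx);
            if (ret)
                    return ret;

            /* Only now may the device raise the interrupts we can handle. */
            writel(MYDEV_IRQ_WANTED, regs + MYDEV_IRQ_ENABLE);
            return 0;
    }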
The fact that the kernel gets stuck as soon as you install your driver module makes me wonder whether another driver (built into the kernel?) is already driving the device. I made this mistake once, which is why I am asking. I'd look for the string "kernel driver in use" in the output of lspci before installing the module. In any case, your printks should be visible in the dmesg output.
In addition to Claudio's suggestion, a couple more debugging ideas:
1. Try kgdb (https://www.kernel.org/doc/htmldocs/kgdb/EnableKGDB.html); see the example setup after this list.
2. Use JTAG interfaces to connect to debugging tools (these vary between devices and vendors, I think, so you'll have to figure out which debugging tools fit your particular hardware).
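For idea 1, the usual kgdb-over-serial recipe (device name and baud rate are examples) looks roughly like this:

    # on the target's kernel command line:
    kgdboc=ttyS0,115200 kgdbwait

    # on the host:
    $ gdb ./vmlinux
    (gdb) target remote /dev/ttyS0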

User Mode to Kernel Mode debugging in GDB

I was debugging a program in which I hit
int 0x80
I know this means a system call, which the kernel then executes. However, GDB does not let me look at the instructions the kernel runs while executing this system call; it just executes the system call and takes me to the next instruction.
Is there any way I can look into the kernel-mode code while debugging a user-mode program? If not, what are the best alternatives available to me?
Is there any way I can look into the kernel-mode code while debugging a user-mode program?
No.
(Actually, you could do that if you use UML, User-Mode Linux, but that is likely too complicated for you to set up.)

How does a debugger work?

I keep wondering: how does a debugger work? Particularly one that can be 'attached' to an already-running executable. I understand that the compiler translates code to machine language, but then how does the debugger 'know' what it is being attached to?
The details of how a debugger works will depend on what you are debugging, and what the OS is. For native debugging on Windows you can find some details on MSDN: Win32 Debugging API.
The user tells the debugger which process to attach to, either by name or by process ID. If it is a name then the debugger will look up the process ID, and initiate the debug session via a system call; under Windows this would be DebugActiveProcess.
Once attached, the debugger will enter an event loop much like for any UI, but instead of events coming from the windowing system, the OS will generate events based on what happens in the process being debugged – for example an exception occurring. See WaitForDebugEvent.
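A bare-bones sketch of that event loop against the Win32 API (error handling and most event types omitted):

    #include <windows.h>
    #include <stdio.h>

    static void debug_loop(void)
    {
            DEBUG_EVENT ev;
            for (;;) {
                    /* Block until the OS reports an event from the debuggee. */
                    if (!WaitForDebugEvent(&ev, INFINITE))
                            break;
                    if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT)
                            printf("exception 0x%lx in pid %lu\n",
                                   ev.u.Exception.ExceptionRecord.ExceptionCode,
                                   ev.dwProcessId);
                    if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
                            return;             /* target is gone; stop debugging */
                    /* Resume the debuggee until the next event. */
                    ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
            }
    }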
The debugger is able to read and write the target process' virtual memory, and even adjust its register values through APIs provided by the OS. See the list of debugging functions for Windows.
The debugger is able to use information from symbol files to translate from addresses to variable names and locations in the source code. The symbol file information is a separate set of APIs and isn't a core part of the OS as such. On Windows this is through the Debug Interface Access SDK.
If you are debugging a managed environment (.NET, Java, etc.) the process will typically look similar, but the details are different, as the virtual machine environment provides the debug API rather than the underlying OS.
As I understand it:
For software breakpoints on x86, the debugger replaces the first byte of the instruction with CC (int3). This is done with WriteProcessMemory on Windows. When the CPU reaches that instruction and executes the int3, it raises a breakpoint exception. The OS receives this exception, realizes the process is being debugged, and notifies the debugger process that the breakpoint was hit.
After the breakpoint is hit and the process is stopped, the debugger looks in its list of breakpoints and replaces the CC with the byte that was there originally. The debugger sets TF, the Trap Flag in EFLAGS (by modifying the CONTEXT), and continues the process. The Trap Flag causes the CPU to automatically generate a single-step exception (INT 1) on the next instruction.
When the process being debugged stops the next time, the debugger again replaces the first byte of the breakpoint instruction with CC, and the process continues.
I'm not sure if this is exactly how it's implemented by all debuggers, but I've written a Win32 program that manages to debug itself using this mechanism. Completely useless, but educational.
In Linux, debugging a process begins with the ptrace(2) system call. This article has a great tutorial on how to use ptrace to implement some simple debugging constructs.
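A fragment of the int3 dance from the answer above, as it might look with ptrace on x86-64 Linux (the caller supplies the address; error handling omitted):

    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Plant a software breakpoint: save the original word, patch in 0xCC. */
    static long plant_breakpoint(pid_t pid, uintptr_t addr, long *saved)
    {
            long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
            *saved = word;
            long patched = (word & ~0xffL) | 0xcc;   /* int3 in the low byte */
            return ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
    }

    /* To step past it later: restore *saved, rewind the instruction pointer
     * by one byte, single-step with PTRACE_SINGLESTEP, then re-plant 0xCC. */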
If you're on a Windows OS, a great resource for this would be "Debugging Applications for Microsoft .NET and Microsoft Windows" by John Robbins:
http://www.amazon.com/dp/0735615365
(or even the older edition: "Debugging Applications")
The book has a chapter on how a debugger works that includes code for a couple of simple (but working) debuggers.
Since I'm not familiar with the details of Unix/Linux debugging, this may not apply to other OSes. But I'd guess that, as an introduction to a very complex subject, the concepts - if not the details and APIs - should 'port' to most any OS.
I think there are two main questions to answer here:
1. How does the debugger know that an exception occurred?
When an exception occurs in a process that’s being debugged, the debugger gets notified by the OS before any user exception handlers defined in the target process are given a chance to respond to the exception. If the debugger chooses not to handle this (first-chance) exception notification, the exception dispatching sequence proceeds further and the target thread is then given a chance to handle the exception if it wants to do so. If the SEH exception is not handled by the target process, the debugger is then sent another debug event, called a second-chance notification, to inform it that an unhandled exception occurred in the target process. Source
2. How does the debugger stop on a breakpoint?
The simplified answer is: when you put a breakpoint in the program, the debugger replaces your code at that point with an int3 instruction, which is a software interrupt. As a result the program is suspended and the debugger is called.
Another valuable source for understanding debugging is the Intel CPU manual (Intel® 64 and IA-32 Architectures Software Developer's Manual). Volume 3A, chapter 16, introduces the hardware support for debugging, such as special exceptions and the hardware debug registers. The following is from that chapter:
T (trap) flag, TSS — Generates a debug exception (#DB) when an attempt is made to switch to a task with the T flag set in its TSS.
I am not sure whether Windows or Linux uses this flag, but that chapter is very interesting to read.
Hope this helps someone.
My understanding is that when you compile an application or DLL, whatever it compiles to contains symbols representing the functions and the variables.
When you have a debug build, these symbols are far more detailed than in a release build, which allows the debugger to give you more information. When you attach the debugger to a process, it looks at which functions are currently being executed and resolves all the available debugging symbols from there (since it knows what the internals of the compiled file look like, it can ascertain what might be in memory, with the contents of ints, floats, strings, etc.). As the first poster said, this information, and how these symbols work, depends greatly on the environment and the language.
