I am new to Windows driver development, so please bear with me if my question is too stupid. I am not sure why, as MSDN suggests (and as I understood it), the host computer (where the driver is developed) and the target computer (where the driver is debugged) need to be two separate machines. Why such a separation? I did try to merge the two by deploying and debugging a driver on the same computer on which I am developing it, and it seemed to work with no objection from Windows. Thanks.
PS. A source like this http://msdn.microsoft.com/en-us/library/windows/hardware/hh698272(v=vs.85).aspx got me thinking so.
Practically speaking, when you are developing and testing a driver, in many situations you will get a system crash (BSOD) and your system may no longer be bootable. In such situations your development + debugger environment is also gone/inaccessible.
Two separate machines are required for kernel debugging. You cannot debug yourself, for obvious reasons (the debugger and the debuggee would be in the same kernel, and a deadlock would appear). Of course, the target machine can be a virtual one.
When we develop a driver and test it, the system may crash and a blue screen (a BSOD - blue screen of death) will show up. This is not like a user-mode application crashing due to a memory error: your driver runs in kernel mode, so if it crashes because of an illegal memory operation, the whole system goes down with it. That is not a simple issue to resolve; you need to boot into safe mode and remove the driver from your system to recover it.
Because of this, it is preferable to use a target machine (usually a VM) on which the driver is installed, and a host machine on which we run the debugger to debug the driver.
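To make the difference concrete, here is a minimal sketch (an assumed throwaway driver, not code from the question) of the kind of kernel-mode bug described above: the same null-pointer write that would merely crash one process in user mode bugchecks the whole machine when it happens inside a driver.

    #include <ntddk.h>

    /* Assumed minimal WDM driver, used only to illustrate the point above. */
    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        UNREFERENCED_PARAMETER(RegistryPath);

        volatile ULONG *p = NULL;
        *p = 1;  /* in user mode: access violation in one process;
                    in kernel mode: immediate bugcheck (BSOD) for the whole OS */

        return STATUS_SUCCESS;
    }

If the host and the target are the same machine, every such mistake takes your development environment (and any unsaved work) down with it, which is exactly why the two roles are kept separate.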
Related
I have recently been attempting to create a GPIO driver for SBCs using an Intel chipset that run Windows 8.1, and I have begun testing it on an actual system. After loading the driver and updating the Intel chipset I am using, the system appears to hang after loading the BIOS. Unfortunately, this disables my mouse, keyboard, and video, preventing me from entering the BIOS or the boot manager.
While it is possible that the chipset update caused the system to become unbootable, it is highly unlikely, considering we use that same update for our other SBCs running the same chipset.
So my question: Is it possible for a Windows Kernel Mode driver to prevent a system from booting up past BIOS/POST?
I appreciate the help, since, clearly, I am no expert on this topic.
Yes, if your driver is loaded at boot time it can prevent the OS from booting, and it will end up in a BSOD (blue screen of death) with a related bug check.
Based on the bug check displayed by the OS, you can resolve the issues with your driver.
Or sometimes, if it is not giving any error and just hangs, you can use WinDbg to check the bug check.
It depends on the error control of the driver service. Boot-time drivers can also fail at any point; there is nothing special about the failure happening during boot. What matters in this scenario is the ErrorControl value of the driver service, which specifies how to proceed if the driver fails to load or initialize properly. A value of 3 (critical) reboots the system to the Last Known Good configuration (LKGC). The same rules apply to a Win32 service as well.
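As a rough sketch of where that ErrorControl value comes from when a driver service is installed programmatically (the service name and .sys path below are placeholders, not taken from the question):

    #include <windows.h>

    BOOL InstallBootDriver(void)
    {
        SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
        if (!scm) return FALSE;

        SC_HANDLE svc = CreateService(
            scm,
            TEXT("MyGpioDrv"),          /* placeholder service name */
            TEXT("My GPIO Driver"),
            SERVICE_ALL_ACCESS,
            SERVICE_KERNEL_DRIVER,
            SERVICE_BOOT_START,         /* loaded by the boot loader */
            SERVICE_ERROR_CRITICAL,     /* ErrorControl = 3: fall back to Last Known Good on failure */
            TEXT("\\SystemRoot\\System32\\drivers\\MyGpioDrv.sys"),
            NULL, NULL, NULL, NULL, NULL);

        if (svc) CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return svc != NULL;
    }

The same value can also be inspected or changed as the ErrorControl entry under the driver's service key in the registry, which is often the quicker way to see what a misbehaving boot-start driver is configured to do.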
I've got a driver for a custom PCI card, which builds and runs fine on XP. I'm trying to use this custom hardware on W7, and am trying to build and run my driver.
I've got the latest DDK from Microsoft, and I build my driver for XP using the Windows XP "x86 Free Build Environment". Everything installs and works fine. (Built using a DDK "build" command.)
If I use the Windows 7 "x86 Free Build Environment", everything builds fine. I run it through the PREfast and staticdv code checkers with no errors from either. (I do get a couple of warnings about "The dispatch function 'FooFnc' does not have any __drv_dispatchType annotations" - are these likely to be the issue?)
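In case it matters, this is roughly what I believe PREfast wants the declarations to look like (FooFnc stands in for my actual dispatch routine, and the IRP major codes are just examples):

    #include <ntddk.h>

    /* Annotated forward declaration: tells PREfast/SDV which IRP major
       functions this routine handles when it is registered in DriverEntry. */
    __drv_dispatchType(IRP_MJ_CREATE)
    __drv_dispatchType(IRP_MJ_CLOSE)
    DRIVER_DISPATCH FooFnc;

    NTSTATUS FooFnc(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        UNREFERENCED_PARAMETER(DeviceObject);

        Irp->IoStatus.Status = STATUS_SUCCESS;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_SUCCESS;
    }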
When I install, the installation starts OK (the standard warning about the driver not being signed), but it gets to a certain point, hangs, and then fails with a timeout error. The device then shows up in Device Manager as installed. At this point the PC won't shut down or boot, but hangs indefinitely. I'm forced to boot into Safe Mode and uninstall the driver from there.
So my question(s) are:
If there has been a change in the driver model between XP and W7, what's the best way to find it? I can't see anything on MSDN.
How would I go about debugging the driver? The box doesn't start, so it's not like I can run up WinDBG.
Any specific W7 driver gotchas that are hidden away?
I've tried to keep this as generic as possible, but if more detail would be helpful I'll provide it.
AFAIK, the biggest changes have been made in video and network drivers. Other drivers retain backward compatibility and can run on W7 even without recompiling.
Run your driver under Driver Verifier, and turn on generating crash dumps from the keyboard (very helpful in case of system hangs: you can manually generate a crash dump, analyze it, and find out what went wrong).
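As far as I know, the keyboard-triggered crash dump refers to the CrashOnCtrlScroll feature: set CrashOnCtrlScroll=1 under the i8042prt (PS/2) or kbdhid (USB keyboard) service parameters, reboot, and then hold the right Ctrl key and press Scroll Lock twice to force a bugcheck and dump. A small sketch of setting it programmatically (editing the value in regedit works just as well):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD one = 1;

        /* PS/2 keyboards; for USB keyboards use the kbdhid\Parameters key instead */
        LONG rc = RegCreateKeyEx(HKEY_LOCAL_MACHINE,
            TEXT("SYSTEM\\CurrentControlSet\\Services\\i8042prt\\Parameters"),
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
        if (rc != ERROR_SUCCESS) { printf("open failed: %ld\n", rc); return 1; }

        rc = RegSetValueEx(key, TEXT("CrashOnCtrlScroll"), 0, REG_DWORD,
                           (const BYTE *)&one, sizeof(one));
        RegCloseKey(key);

        printf(rc == ERROR_SUCCESS ? "done - reboot required\n" : "set failed\n");
        return rc == ERROR_SUCCESS ? 0 : 1;
    }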
Hope this helps!
I have a couple Freescale 68HCS08 MCUs connected in an I2C network, running different programs. When I click "debug," Codewarrior checks for a running instance of hiwave.exe to load and debug the program. I'd like to debug both simultaneously, which means having two instances running.
What is the best way to do this? Do I need two PC's? Is it better to try and manually reload the MCU's, using the Build command instead of Debug in Codewarrior?
I can run two instances of hiwave.exe manually, and then use the "File"->"Load Application" menu item to select the .abs file. It seems to run both instances fine, including code display and breakpoints, although I'm using full-chip simulation rather than a hardware debugger at the moment. I would guess that's where most of the fun is: making sure that each instance uses the correct debugger, especially if you're using two of the same USB devices.
"That's too easy", I can hear you saying. Fine, take option 2:
I do all my CodeWarrior / Hiwave stuff in "Windows XP Mode", a Virtual PC running under Windows 7, mostly because CodeWarrior's installer doesn't run on 64-bit architectures (or it didn't a few months ago, for which I yelled at them in their forums).
I'm not entirely sure of the licensing technicalities (if you have Windows 7 pro, you should get at least one free license to use the Windows XP mode), but perhaps you could do something similar - e.g. run a Virtual PC environment with one of your debuggers passed through to the virtual system (Windows Virtual PC and other virtualization environments let you pass USB devices through), and have your other debugger still attached to the 'host' system. You could then have CodeWarrior/Hiwave installed on both the virtual and host systems, with one controlling system A and the other controlling system B. USB fun-time still applies, as you'd have to make sure the 'correct' USB debugger was passed through to the virtual system.
The debugger, HIWAVE.EXE, will not work in Windows XP Mode, nor in VMs such as VMware Workstation, nor in any of the VMs available on Linux. This is due to the way the driver for the USB Multilink has been architected.
Making CodeWarrior v6.x work on Windows 7 is easy: just patch the installer.
We were not able to get the debugging pod to work for debugging live hardware, because the USB driver is implemented with Jungo WinDriver and, as per other articles, neither of the virtual machines can push it across into the virtual environment.
I wasted months trying to solve this; in the end we resurrected old XP licenses and installed XP. Safe to say that this, combined with Freescale's lack of vision in allowing people running Linux to develop for their silicon, forced me into the decision that I will no longer use their products.
However, running multiple instances of the debugger is possible. The maximum seems to be around 20.
Is there a debugger which works from a virtual machine's host?
That is, instead of using interrupts inside the machine, I expect this debugger to recognize the virtual machine's OS routines, memory locations, etc., and to recognize when the OS is launching a certain EXE. Then I want to be able to set hardware-like breakpoints per process through the host computer. To clarify: the virtualized computer and OS would never know that the breakpoint was set or hit. All debug handling would be done by the host computer which emulates the virtual computer.
This would enable a much stronger breakpoint mechanism, for example "break when certain data is read from the CD-ROM drive", or "break when a certain file on the disk contains the following byte sequence".
This approach would also, for example, defeat anti-debugger techniques which are supposed to alter the executable's behavior when running under a debugger. (OTOH it opens up a new area of anti-virtualization techniques which rely on slight differences between the emulated computer and real hardware.)
Is there such a product? Does it look like a good idea?
VMware has offered VM debugging plugins for Visual Studio and Eclipse for some time now. It is even possible to record a VM run (which logs input from all devices, allowing you to replay the execution of the VM precisely as it was recorded), then step through the recording with a debugger.
Recent versions of IDA Pro include a debugger interface which, among other setups, can inspect a Bochs virtual machine.
I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide), and someone asked me why I don't use KVM instead.
So I ask: KVM vs. VMware for kernel debugging / USB driver development.
What are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger on the other.
Edit: Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote-debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML (User-Mode Linux), which would allow you to debug locally, as with a regular user process, which is a little less trouble.
Instead of answering the direct question, I'll add another option... If the kernel in question is a Linux kernel, then depending on what part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to filesystem-related code. If you are developing/debugging modules that interact directly with hardware, it may be of much less use to you, though.
Reference links: home, other
I recently started building GNU Mach/HURD and found the combination of QEmu/KVM to work really quite well, for the following reasons:
QEmu presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
Bottom line: for kernel work I just want the minimum of functionality to boot and see the result. VMware is aimed much more at usable virtualization than at down-and-dirty work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C is.
If it is a "real hardware" device, of course, VMware will not emulate it, so you won't be able to debug the driver under it (nor under any other virtualisation software, unless you extend one to do so).
Device driver debugging can be done to some extent on a real hardware machine with a normal kernel - although there are obviously things you can't do, like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect things. Moreover, traditional printf() debugging is quite possible (printk, anyone?), and there are various features in the kernel which make debugging easier. It's possible to build the kernel with various debug options to try to detect pointer problems, memory leaks, etc.
By default, the kernel even gives a nice-ish stack trace in the log when it encounters an OOPS or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)
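A minimal sketch of the printk-style approach just described (an assumed toy module, nothing more): load it and the messages appear in the kernel log, and something like the commented-out BUG_ON is the sort of thing that produces the stack trace mentioned above.

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init demo_init(void)
    {
        printk(KERN_DEBUG "demo: loaded, state=%d\n", 42);
        /* BUG_ON(some_pointer == NULL);  -- would log a stack trace (OOPS/BUG) */
        return 0;
    }

    static void __exit demo_exit(void)
    {
        printk(KERN_DEBUG "demo: unloading\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");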