Using SetupDiSetDeviceRegistryProperty with SPDRP_HARDWAREID - windows

The docs for the SetupDiSetDeviceRegistryProperty function say,
The following values are reserved for use by the operating system and cannot be used in the Property parameter ...
SPDRP_HARDWAREID
However, there are lots of examples of code out there, including Microsoft's DevCon utility, which use this function with SPDRP_HARDWAREID, e.g.:
SetupDiSetDeviceRegistryProperty(DeviceInfoSet,
                                 &DeviceInfoData,
                                 SPDRP_HARDWAREID,
                                 (LPBYTE)hwIdList,
                                 (lstrlen(hwIdList) + 1 + 1) * sizeof(TCHAR))
They also have an article which suggests doing so:
If an installer detects a non-PnP device, the installer should select a driver for the device as follows: create a device information element (SetupDiCreateDeviceInfo), set the SPDRP_HARDWAREID property by calling SetupDiSetDeviceRegistryProperty
I'd like to (and do) use this function to set the hardware ID for my virtual device. The question is: is this a typo in the documentation, or is it some sort of unsupported behavior that could stop working at any time?

TL;DR: If you're creating a root-enumerated device node, you're free to set SPDRP_HARDWAREID/SPDRP_COMPATIBLEIDS yourself by calling SetupDiSetDeviceRegistryProperty. Otherwise you're not allowed to do so.
This was an error in the docs that was fixed at some point.
Today the docs of SetupDiSetDeviceRegistryProperty read:
SPDRP_HARDWAREID or SPDRP_COMPATIBLEIDS can only be used when DeviceInfoData represents a root-enumerated device. For other devices, the bus driver reports hardware and compatible IDs when enumerating a child device after receiving IRP_MN_QUERY_ID. [Emphasis mine]
... which is exactly what DevCon does.
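For context, here is a condensed sketch in C of that root-enumeration path, essentially the sequence DevCon's install command performs: create a device information set, create a device information element, set SPDRP_HARDWAREID, and register the devnode. The class GUID, class name, and hardware ID are assumed to be known already (DevCon derives them from the INF), and error handling is reduced to a single Boolean result.

#include <windows.h>
#include <setupapi.h>
#include <wchar.h>

#pragma comment(lib, "setupapi.lib")

/* Sketch only: creates a root-enumerated devnode and sets its hardware ID. */
BOOL CreateRootEnumeratedDevice(const GUID *classGuid, PCWSTR className, PCWSTR hwid)
{
    /* SPDRP_HARDWAREID expects a REG_MULTI_SZ-style, double-NUL-terminated list;
       the zero-initialized buffer provides the second terminator. */
    WCHAR hwIdList[LINE_LEN + 2] = { 0 };
    wcsncpy(hwIdList, hwid, LINE_LEN);

    HDEVINFO devInfoSet = SetupDiCreateDeviceInfoList(classGuid, NULL);
    if (devInfoSet == INVALID_HANDLE_VALUE)
        return FALSE;

    SP_DEVINFO_DATA devInfoData = { sizeof(SP_DEVINFO_DATA) };
    BOOL ok = SetupDiCreateDeviceInfoW(devInfoSet, className, classGuid, NULL,
                                       NULL, DICD_GENERATE_ID, &devInfoData)
           && SetupDiSetDeviceRegistryPropertyW(devInfoSet, &devInfoData,
                                                SPDRP_HARDWAREID,
                                                (const BYTE *)hwIdList,
                                                ((DWORD)wcslen(hwIdList) + 1 + 1) * sizeof(WCHAR))
           /* Registers the new devnode under the ROOT enumerator. */
           && SetupDiCallClassInstaller(DIF_REGISTERDEVICE, devInfoSet, &devInfoData);

    SetupDiDestroyDeviceInfoList(devInfoSet);
    return ok;
}

DevCon then goes on to install the actual driver (UpdateDriverForPlugAndPlayDevices with the INF); that part is omitted here.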

Related

How does GetCommState populate the DCB struct in Windows 10 when using usbser.sys CDC ACM driver

I am building an embedded device that will communicate with the outside world over a virtual COM port. I have the descriptor and all the USB callbacks set up correctly and the COM port is working, well, kind of. The problem is that when I call GetCommState for the COM port I get a semi-valid struct back, and when I fix only a couple of parameters (like setting the speed and 8N1) and try to reconfigure the port by calling SetCommState, the call fails with: 'A device attached to the system is not functioning.'
If one continues to use the port, it just works, all reads and writes, without a problem. But the issue is that most libraries try to reconfigure the port by first calling GetCommState and then SetCommState; pyserial and C# both do it this way.
My question is: where does the "default" configuration for the COM port come from?
In the USB CDC ACM standard there are (optional) class requests for SET and GET COMM FEATURE, but I can see (from a USB sniffer) that they are never issued (I tried with the CDC ACM capabilities set to 0x06, that is without SET/GET COMM, and to 0x07, with SET/GET COMM, but in neither case do I get a class request from the driver). So the driver must take the configuration from somewhere else; does anybody know from where, or how?
I am using an NXP LPC and Windows 10 with the usbser.sys driver on the other end.
What I already checked is:
compared the USB descriptor to the working one - they are the same
checked the USB traffic - the enumeration and communication looks the same
without calling GetCommState and SetCommState the COM port works without problems
I attached the contents of the DCB struct for a working sample (left) and mine (right). I do not understand where the marked values come from. Who sets them?
The settings should come from the port driver - you can view and set default values in the Windows Device Manager. In your case, it would seem that flow control with RTS/CTS is enabled (left picture), which might be something that your USB adapter uses internally. If it works, then leave those settings as they were.
I'd advise doing it like this (a sketch in C follows the list):
Always check the result of each API function you call!
Call CreateFile to get the port handle.
Optionally call GetCommTimeouts and store the result in a zero-initialized struct like COMMTIMEOUTS com_timeouts = {0};. Change members of the struct as needed, then call SetCommTimeouts.
Create an (almost) zero-initialized struct: DCB dcb = { .DCBlength = sizeof(DCB) };
Call GetCommState on this struct.
Set baudrate, parity, stop bits etc as required. Leave other members as they were.
Call SetCommState.
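A minimal sketch of that sequence, assuming the device enumerates as COM3 (adjust the port name, baud rate, and timeouts to your setup):

#include <windows.h>
#include <stdio.h>

int configure_port(void)
{
    HANDLE h = CreateFileW(L"\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    COMMTIMEOUTS timeouts = { 0 };
    DCB dcb = { .DCBlength = sizeof(DCB) };   /* (almost) zero-initialized */

    BOOL ok = GetCommTimeouts(h, &timeouts);
    if (ok) {
        timeouts.ReadIntervalTimeout = 50;    /* change only what you need */
        ok = SetCommTimeouts(h, &timeouts);
    }
    if (ok)
        ok = GetCommState(h, &dcb);           /* start from the driver's defaults */
    if (ok) {
        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;            /* 8N1 */
        ok = SetCommState(h, &dcb);
    }
    if (!ok)
        printf("comm configuration failed: %lu\n", GetLastError());

    /* ... ReadFile / WriteFile ... */
    CloseHandle(h);
    return ok ? 0 : 1;
}

Because the DCB is filled in by GetCommState first, any driver-specific defaults (flow-control bits and the like) stay as they were; only the fields you explicitly change differ.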

WinDbg not showing register values

Basically, this is the same question that was asked here.
When performing kernel debugging of a machine running Windows 7 or older, with WinDbg version 6.2 and up, the debugger doesn't show anything in the registers window. Pressing the Customize... button results in a message box that reads Registers are not yet known.
At the same time, issuing the r command results in perfectly valid register values being printed out.
What is the reason for this behaviour, and can it be fixed?
TL;DR: I wrote an extension DLL that fixes the bug. Available here.
The Problem
To understand the problem, we first need to understand that WinDbg is basically just a frontend to Microsoft's Windows Symbolic Debugger Engine, implemented inside dbgeng.dll. Other frontends include the command-line kd.exe (kernel debugger) and cdb.exe (user-mode debugger).
The engine implements everything we expect from a debugger: working with symbol files, reading and writing memory and registers, setting breakpoints, etc. The engine then exposes all of this functionality through COM-like interfaces (they implement IUnknown but are not registered components). This allows us, for instance, to write our own debugger (like this person did).
Armed with this knowledge, we can now make an educated guess as to how WinDbg obtains the values of the registers on the target machine.
The engine exposes the IDebugRegisters interface for manipulating registers. This interface declares the GetValues method for retrieving the values of multiple registers in one go. But how does WinDbg know how many registers there are? That's why we have the GetNumberRegisters method.
So, to retrieve the values of all registers on the target, we'll have to do something like this (a code sketch follows):
Call IDebugRegisters::GetNumberRegisters to get the total number of registers.
Call IDebugRegisters::GetValues with the Count parameter set to the total number of registers, the Indices parameter set to NULL, and the Start parameter set to 0.
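In code, the two calls look roughly like this; a sketch compiled as C against dbgeng.h, assuming regs is an IDebugRegisters* obtained elsewhere (for example by QueryInterface on an IDebugClient created with DebugCreate):

#include <windows.h>
#include <dbgeng.h>
#include <stdlib.h>

/* Fetch the values of every register in a single call; the caller frees *outValues. */
HRESULT GetAllRegisterValues(IDebugRegisters *regs, DEBUG_VALUE **outValues, ULONG *outCount)
{
    ULONG count = 0;
    HRESULT hr = regs->lpVtbl->GetNumberRegisters(regs, &count);
    if (FAILED(hr))
        return hr;

    DEBUG_VALUE *values = (DEBUG_VALUE *)calloc(count, sizeof(DEBUG_VALUE));
    if (values == NULL)
        return E_OUTOFMEMORY;

    /* Indices == NULL and Start == 0 requests registers 0 .. Count-1 in one go. */
    hr = regs->lpVtbl->GetValues(regs, count, NULL, 0, values);

    *outValues = values;
    *outCount  = count;
    return hr;
}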
One tiny problem, though: the second call fails with E_INVALIDARG.
Ehm, excuse me? How can it fail? Especially puzzling is the documentation for this return value:
The value of the index of one of the registers is greater than the number of registers on the target machine.
But I just asked you how many registers there are, so how can that value be out of range? Okay, let's continue reading the docs anyway, maybe something will become clear:
If the return value is not S_OK, some of the registers still might have been read. If the target was not accessible, the return type is E_UNEXPECTED and Values is unchanged; otherwise, Values will contain partial results and the registers that could not be read will have type DEBUG_VALUE_INVALID.
(Emphasis mine.)
Aha! So maybe the engine just couldn't read one of the registers! But which one? Turns out that the engine chokes on the xcr0 register. From the Intel 64 and IA-32 Architectures Software Developer’s Manual:
Extended control register XCR0 contains a state-component bitmap that specifies the user state components that software has enabled the XSAVE feature set to manage. If the bit corresponding to a state component is clear in XCR0, instructions in the XSAVE feature set will not operate on that state component, regardless of the value of the instruction mask.
Okay, so the register controls the operation of the XSAVE instruction, which saves the state of the CPU's extended features (like XMM and AVX). According to the last comment on this page, this instruction requires some support from the operating system. Although the comment states that Windows 7 (that's what the VM I was testing on was running) does support this instruction, it seems that the issue at hand is related to the OS anyway, as when the target is Windows 8 everything works fine.
Really, it's unclear whether the bug is within the debugger engine, which reports more registers than it can retrieve values for, or within WinDbg, which refuses to show any values at all if the engine fails to produce all of them.
The Solution
We could, of course, bite the bullet and just use an older version of WinDbg for debugging older Windows versions. But where's the challenge in that?
Instead, I present to you a debugger extension that solves this problem. It does so by hooking (with the help of this library) the relevant debugger engine methods and returning S_OK if the only register that failed was xcr0. Otherwise, it propagates the failure. The extension supports runtime unload, so if you experience problems you can always disable the hooks.
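The core of the workaround looks roughly like this (a sketch; the hooking mechanics are omitted and the names are illustrative, with the xcr0 index assumed to have been looked up once via IDebugRegisters::GetIndexByName):

#include <windows.h>
#include <dbgeng.h>

typedef HRESULT (STDMETHODCALLTYPE *GETVALUES_FN)(IDebugRegisters *, ULONG, PULONG,
                                                  ULONG, PDEBUG_VALUE);

static GETVALUES_FN g_OriginalGetValues;   /* saved by the hooking library */
static ULONG g_Xcr0Index;                  /* from GetIndexByName("xcr0", &g_Xcr0Index) */

/* Replacement for IDebugRegisters::GetValues: if the call failed but xcr0 is the
   only register reported as DEBUG_VALUE_INVALID, pretend it succeeded so the
   frontend keeps the values it did get. */
HRESULT STDMETHODCALLTYPE HookedGetValues(IDebugRegisters *regs, ULONG Count,
                                          PULONG Indices, ULONG Start, PDEBUG_VALUE Values)
{
    HRESULT hr = g_OriginalGetValues(regs, Count, Indices, Start, Values);
    if (hr == E_INVALIDARG && Indices == NULL) {
        for (ULONG i = 0; i < Count; ++i) {
            if (Values[i].Type == DEBUG_VALUE_INVALID && Start + i != g_Xcr0Index)
                return hr;              /* something other than xcr0 failed; propagate */
        }
        return S_OK;                    /* only xcr0 was unreadable */
    }
    return hr;
}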
That's it, have fun!

Difference between uart_register_driver and platform_driver_register?

I am studying the UART driver code in the kernel and want to know which comes into the picture first: the device_register() or the driver_register() call?
For the difference between them, follow this.
In UART probing, we call
uart_register_driver(struct uart_driver *drv)
and after successful registration,
uart_add_one_port(struct uart_driver *drv, struct uart_port *uport)
Please explain this in detail.
That's actually two questions, but I'll try to address both of them.
Which comes into the picture first, the device_register() or the driver_register() call?
As stated in Documentation/driver-model/binding.txt, it doesn't matter in which particular order you call device_register() and driver_register().
device_register() adds the device to the device list and iterates over the driver list to find a match
driver_register() adds the driver to the driver list and iterates over the device list to find a match
Once a match is found, the matched device and driver are bound and the corresponding probe function in the driver code is called.
If you are still curious which one is called first (even though it doesn't matter): usually it's device_register(), because devices are usually registered in initcalls from core_initcall to arch_initcall, while drivers are usually registered in device_initcall, which is executed later.
See also:
[1] From where platform device gets it name
[2] Who calls the probe() of driver
[3] module_init() vs. core_initcall() vs. early_initcall()
Difference between uart_register_driver and platform_driver_register?
As you noticed, there are two drivers (a platform driver and a UART driver) for one device. But don't let this confuse you: those are just two driver APIs used in what is, in fact, one driver. The explanation is simple: the UART driver API lacks some functionality we need, and that functionality is provided by the platform driver API. Here is the responsibility of each API in a regular TTY driver:
The platform driver API is used for three things:
Matching the device (described in the device tree) with the driver; this way the probe function will be executed for us by the platform driver framework
Obtaining device information (reading it from the device tree)
Handling Power Management (PM) operations (suspend/resume)
The UART driver API handles the actual UART functionality: read, write, etc.
Let's use drivers/tty/serial/omap-serial.c for driver reference and arch/arm/boot/dts/omap5.dtsi for device reference. Let's say, for example, we have the following device described in the device tree:
uart1: serial@4806a000 {
    compatible = "ti,omap4-uart";
    reg = <0x4806a000 0x100>;
    interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
    ti,hwmods = "uart1";
    clock-frequency = <48000000>;
};
It will be matched with the platform driver in omap-serial.c by the "ti,omap4-uart" string (you can find it in the driver code). Then, using that platform driver, we can read properties from the device tree node above and use them for platform-level work (setting up clocks, handling the UART interrupt, etc.).
But in order to expose our device as a standard TTY device, we need to use the UART framework (all those uart_* functions). Hence the two different APIs: platform driver and UART driver.
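To make the split concrete, here is a condensed skeleton in the shape of omap-serial.c (the names and the port setup are illustrative, not the actual OMAP code; remove/exit paths are omitted):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/serial_core.h>
#include <linux/of.h>
#include <linux/slab.h>

static struct uart_driver my_uart_driver = {    /* UART API: owns the TTY side */
    .owner       = THIS_MODULE,
    .driver_name = "my-uart",
    .dev_name    = "ttyMY",
    .nr          = 4,                           /* maximum number of ports */
};

static int my_uart_probe(struct platform_device *pdev)
{
    struct uart_port *port;

    port = devm_kzalloc(&pdev->dev, sizeof(*port), GFP_KERNEL);
    if (!port)
        return -ENOMEM;

    /* Platform API territory: read registers, IRQ, clocks from the DT node,
       then fill in port->membase, port->irq, port->uartclk, port->ops, ... */
    port->dev = &pdev->dev;

    return uart_add_one_port(&my_uart_driver, port);   /* publish the TTY port */
}

static const struct of_device_id my_uart_of_match[] = {
    { .compatible = "ti,omap4-uart" },          /* matched against the DT node */
    { }
};
MODULE_DEVICE_TABLE(of, my_uart_of_match);

static struct platform_driver my_uart_platform_driver = {
    .probe  = my_uart_probe,
    .driver = {
        .name           = "my-uart",
        .of_match_table = my_uart_of_match,
    },
};

static int __init my_uart_init(void)
{
    int ret = uart_register_driver(&my_uart_driver);          /* TTY names reserved */
    if (ret)
        return ret;
    ret = platform_driver_register(&my_uart_platform_driver); /* DT match -> probe() */
    if (ret)
        uart_unregister_driver(&my_uart_driver);
    return ret;
}
module_init(my_uart_init);

MODULE_LICENSE("GPL");

uart_register_driver() only reserves the TTY device names; it is platform_driver_register() that triggers the device tree match and therefore probe(), which finally publishes each port with uart_add_one_port().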

trying to read midi controller device name using winmm

Long-term goal: build software that implements a MIDI control surface as a user interface for industrial control applications, using a prebuilt MIDI controller instead of building and wiring a custom control panel. Short-term goal: read the name of the MIDI device plugged into the computer. Immediate problem: the compiler says 'Illegal qualifier' at szPname. I believe szPname is a member of the caps structure, but I don't understand how to get to it.
I am using winmm from FreePascal on a Windows 10 machine.
Here is my current code:
program asd;

uses mmSystem;

var
  caps: ^MIDIINCAPS;

begin
  writeln(midiInGetNumDevs());
  midiInGetDevCaps(0, caps, SizeOf(MIDIINCAPS));
  writeln(caps.szPname);
end.
The documentation says:
Error: Illegal qualifier
One of the following is happening:
You’re trying to access a field of a variable that is not a record.
You’re indexing a variable that is not an array.
You’re dereferencing a variable that is not a pointer.
In this case, caps is a pointer, so you must dereference it before you can access the record fields:
WriteLn(caps^.szPname);
(Other compilers can automatically dereference pointers to records. Apparently, FreePascal cannot.)
You also need to allocate memory for caps. (Or don't use a pointer.)

Windows HID Device Name Format

There are various ways to retrieve the Windows "Device Name" of a HID device, GetRawInputDeviceInfo with RIDI_DEVICENAME being one way to do it.
Given the example name:
\\?\HID#VID_FEED&PID_DEAD#6&3559c8ea&0&0000#{378de44c-56ef-11d1-bc8c-00a0c91405dd}
I'm wondering if there is any documentation whatsoever on what is what in this string?
\\?\HID#VID_AAAA&PID_BBBB#C&DDDDDD&E&FFFF#{GUID}
So the obvious ones are A(VID), B(PID) and the GUID on the end. What I'm wondering is what EXACTLY are C, D, E and F?
It seems that C and D are unique even if you plug in two of the exact same HID devices, which is great for my problem, but I'd feel more comfortable if I knew exactly how this is determined on a per-OS basis, or at least that it follows some known format.
I have been googling like a madman trying to figure this out; am I missing something obvious?
Thanks in advance
According to a similar MSDN post, the value represents a unique device instance ID:
the device instance ID is unique and constant for the physical location the device is plugged into, but it is also opaque and should not be parsed. that means it can be used for string comparison, but not for interpretation.
It is actually the device interface instance ID (symbolic link name). And yes, it's unique and persists across system restarts. Some details are also here.
You can use CM_Get_Device_Interface_Property or SetupDiGetDeviceInterfaceProperty on the interface instance ID with DEVPKEY_Device_InstanceId to get the device instance ID (one device can have multiple interfaces).
In your example you have a HID device; its device ID format is described here.
Info on the general USB device ID format is here.
After you have the device instance ID, you can use CM_Get_DevNode_Property or SetupDiGetDeviceProperty with DEVPKEY_NAME to get the localized friendly name of the device (which is shown in Device Manager).
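A small sketch of that two-step lookup using the cfgmgr32 route (fixed buffer sizes for brevity; real code should query the required sizes and check the returned property types):

#include <windows.h>
#include <initguid.h>
#include <devpkey.h>
#include <cfgmgr32.h>
#include <stdio.h>

#pragma comment(lib, "cfgmgr32.lib")

/* interface path -> device instance ID -> friendly name (DEVPKEY_NAME) */
void PrintDeviceNameFromInterface(PCWSTR interfacePath)
{
    DEVPROPTYPE type;
    WCHAR instanceId[MAX_DEVICE_ID_LEN];
    ULONG size = sizeof(instanceId);

    if (CM_Get_Device_Interface_PropertyW(interfacePath, &DEVPKEY_Device_InstanceId,
                                          &type, (PBYTE)instanceId, &size, 0) != CR_SUCCESS)
        return;

    DEVINST devInst;
    if (CM_Locate_DevNodeW(&devInst, instanceId, CM_LOCATE_DEVNODE_NORMAL) != CR_SUCCESS)
        return;

    WCHAR name[256];
    size = sizeof(name);
    if (CM_Get_DevNode_PropertyW(devInst, &DEVPKEY_NAME, &type,
                                 (PBYTE)name, &size, 0) == CR_SUCCESS)
        wprintf(L"%ls -> %ls\n", instanceId, name);
}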
To sum up:
\\?\HID#VID_203A&PID_FFFC&MI_01#7&2de99099&0&0000#{378de44c-56ef-11d1-bc8c-00a0c91405dd} is the device interface ID (also referred to as the "device interface path" or "device name" in the docs). This is a path in the virtual device file system.
{378de44c-56ef-11d1-bc8c-00a0c91405dd} is the device interface class GUID (GUID_DEVINTERFACE_MOUSE in this case); it determines which IOCTLs can be issued on the device (IOCTL_MOUSE_QUERY_ATTRIBUTES in this case).
HID\VID_203A&PID_FFFC&MI_01\7&2de99099&0&0000 is the device instance ID.
NOTE: the exact device interface ID format is not documented; each device interface can generate whatever name it wants. I don't recommend parsing it, as it could change in a later Windows version. Better to acquire the device instance ID, which is at least documented.
