DirectInput requires a lot of initialization boilerplate just to detect keyboard input, so what benefits does it offer over the GetAsyncKeyState() function?
Courtesy of Wikipedia...
DirectInput and XInput have benefits over normal Win32 input events:
They enable an application to retrieve data from input devices even when the application is in the background.
They provide full support for any type of input device, as well as for force feedback.
Through action mapping, applications can retrieve input data without needing to know what kind of device is being used to generate it.
Basically, DirectInput gives you more flexibility to move beyond the keyboard. If the keyboard is all you ever plan on using, there is probably no harm in using GetAsyncKeyState().
Also see Should I use DirectInput or Windows message loop?
Microsoft now seems to recommend just using Windows messages to handle input data where possible.
Related
I want to write a Windows application which accesses the joystick. It is just a very simple application which reads the values and sends them to a server, so I am not using any game programming framework. However, I am confused about which API to use.
I looked at the Multimedia Joystick API, but this is described as superseded by DirectInput. So I looked at DirectInput, but this is also deprecated in favour of XInput.
However the XInput documentation talks only about Xbox360 controllers, and says it does not support "legacy DirectInput devices".
Have Microsoft consigned the entire HID Joystick usage type to the dustbin and given up on supporting them in favour of their own proprietary controller products, or am I missing something?
The most common solution is to use a combination of XInput and DirectInput so your application can properly access both types of devices. Microsoft even provides instructions on how to do this.
Please note that DirectInput is not available for Windows Store apps so if you intend to distribute through there, that's not an option.
XInput devices like the Xbox 360 controller will also work with DirectInput, but with some limitations. Notably, the left and right triggers will be mapped to the same axis instead of being independent, and vibration effects will not be available.
There's a game that I'm trying to automate some actions on.
I've used SendInput in the past very successfully. However, with this application I can't get the mouse click to work. I've tested it using other applications and it all works as expected.
Can applications block my use of SendInput? And if so, can I get around it somehow?
Side note: I'm writing code in C# and running on Windows 7 x64. The app I'm trying to interact with is x86; I don't know if this makes a difference. I've tested my code interacting with both x64 and x86 apps.
Short answer: No. (Not the call to SendInput, but the input can be filtered. See update below.)
If you look at the parameters for SendInput there is nothing that identifies a process. The input is sent to the system, not an application. An application has no way of telling the difference between real and synthesized input.
There are a number of reasons why an application will not respond to synthesized input. As explained in the documentation for SendInput this API is subject to UIPI. An application running at a higher integrity level than an application calling SendInput will not receive this input.
Although SendInput injects input at a lower level than DirectInput runs, DirectInput is apparently more susceptible to buggy code. See Simulating Keyboard with SendInput API in DirectInput applications for reference.
Update (2016-05-01):
Besides UIPI issues preventing input from reaching an application, it is also possible for a low-level keyboard/mouse hook to identify injected input. Both the KBDLLHOOKSTRUCT (passed to the LowLevelKeyboardProc callback) and the MSLLHOOKSTRUCT (passed to the LowLevelMouseProc callback) contain a flags member that has the LLKHF_INJECTED or LLMHF_INJECTED flag set, respectively, when the input was injected.
An application can thus install a low-level keyboard/mouse hook to filter out injected messages. If that is the case, a potential workaround (short of writing a keyboard driver) is to install your own low-level hook after the application installs its hook, and prevent input from reaching the application's hook by not calling CallNextHookEx (hooks are called in the reverse order they are installed, from last to first).
Note: This workaround deliberately short-circuits installed hooks, thereby likely breaking other applications. Besides, if an application has decided to implement a low-level hook to filter out injected input, it may just as well guard against competing low-level hooks by frequently re-installing itself at the top of the hook chain, thus rendering the workaround useless.
I'd like to know if there is a way to monitor the interactions between an application and a driver? The scenario for me is that I am having an occasional problem when reading and writing to a USB printer using libusbdotnet. The normal application reads and writes to the USB printer driver directly. I would like to monitor what it is doing to see if there is something special that it is doing to control the printer. I have looked around and haven't found a good way to do this.
Thanks
As far as I know, there is no out-of-the-box tool that does this (mainly because there is a variety of driver types, and each type must comply with a different OS-defined interface). You need a software component that sits between your application and your driver and intercepts the interactions. This is usually achieved by creating a filter driver (preferably in user space, since that simplifies development and usage). See here for more details: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463453.aspx
I obviously don't think it would work as it is. It's more a question of whether Windows' internal architecture allows third-party software to integrate in between. From what I read about Compiz, I believe it creates its own window and somehow mixes graphics from the X Window System into its own, but it still has to catch events like the EXIT button and so on.
Does Windows even allow this? Can a third-party program scan for the input of another window? And moreover, can it capture a GUI's output and replace it?
It is certainly possible. See WindowBlinds for an example. Just note that Windows "officially" does not support this, applications like WindowBlinds use API hooking, subclassing etc. to perform their deeds.
Windows does not natively allow it - it has its own compositor framework built in called DWM that does much of the same internal functionality as Compiz. However, glitzy graphics that are systemwide are reserved for the OS to perform, sadly. As other people mention, doing this as a 3rd-party app is going to be really hacky and difficult.
API Hooking:
http://www.codeproject.com/KB/system/hooksys.aspx
Also, look at:
http://yodm-3d.en.uptodown.com/
A free 'Compiz' for Windows.
I am currently trying to get simple keyboard input on OS X. Right now I am doing it through the Leopard HID Manager, and that generally works, but since it is pretty low level, I am wondering if there is an API available with some extra functionality built in, such as key repeat or Unicode support (when I catch events at the HID I/O level, I think I have to write all these fancy extras from scratch). I know Carbon event handlers (NewEventHandlerUPP) are capable of that, but I am pretty sure they are deprecated, since you can't find anything about them in the current OS X reference, and I don't want to use anything deprecated. So I am wondering if there is any alternative I didn't come across during my search!
Thanks!
No.
At the Unicode level, the official API for receiving input is the NSTextInputClient protocol in Objective-C, and the official API for processing input between the keyboard and the program is the Input Method Kit.
And you can never correctly write a sufficiently fancy extra from scratch. You would need to read the user's international keyboard settings and modify the resulting keys accordingly. And you can never write an input method from scratch that turns raw key input into Chinese or Japanese...
So, I think the sane choices are either
Just get the raw ASCII data from the keyboard and don't aim for more, or
Use Cocoa at least around the key input handling, to get additional features.