I am writing a small proof of concept for detecting extra inputs across mice and keyboards on Windows. Is it possible, and how do I go about detecting input from a large number of buttons in the Windows API? From what I have read, there is only support for 5 mouse buttons, but many mice have more buttons than that. Is what I'm asking even possible with the Windows API, or within the constraints of Windows at all?
You can use the Raw Input API to receive WM_INPUT messages directly from the mouse/keyboard driver. The RAWMOUSE structure has flags for the 5 standard mouse buttons (left, middle, right, X1 and X2). Beyond the standard buttons, additional buttons come through as vendor-specific data that you would have to handle yourself; the API gives you access to the raw values, but you will have to refer to the vendor's documentation for how to interpret them. Sometimes extra buttons are actually reported as keyboard input instead of mouse input.
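As a rough sketch, registering for raw mouse input and reading the button flags from WM_INPUT looks something like this (window-creation boilerplate omitted; the usage page/usage values below are the standard HID codes for a generic mouse):

#include <windows.h>
#include <vector>

// Register the window to receive WM_INPUT messages for mouse devices.
bool RegisterRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid{};
    rid.usUsagePage = 0x01;            // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;            // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK; // deliver input even when not focused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
}

// Call this from the window procedure when it receives WM_INPUT.
void OnRawInput(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, nullptr, &size, sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buffer(size);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer.data(), &size, sizeof(RAWINPUTHEADER)) == size)
    {
        const RAWINPUT* raw = (const RAWINPUT*)buffer.data();
        if (raw->header.dwType == RIM_TYPEMOUSE)
        {
            USHORT flags = raw->data.mouse.usButtonFlags;
            if (flags & RI_MOUSE_BUTTON_4_DOWN) { /* X1 pressed */ }
            if (flags & RI_MOUSE_BUTTON_5_DOWN) { /* X2 pressed */ }
        }
    }
}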
Or, try using the DirectInput API to enumerate input devices and receive mouse and keyboard data.
Or, you could use the XInput API, the successor to DirectInput. However, XInput is more limited than DirectInput, as it is designed primarily for interacting with the Xbox 360 controller, whereas DirectInput is designed to interact with any controller. See XInput and DirectInput for more details.
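For completeness, polling an Xbox 360-style controller with XInput is only a few lines (a sketch; link against Xinput.lib):

#include <windows.h>
#include <Xinput.h>

// Poll controller 0 and check one button; XInputGetState returns
// ERROR_SUCCESS while the controller is connected.
void PollController()
{
    XINPUT_STATE state{};
    if (XInputGetState(0, &state) == ERROR_SUCCESS)
    {
        if (state.Gamepad.wButtons & XINPUT_GAMEPAD_A)
        {
            // The A button is currently held down.
        }
    }
}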
Very simple: use GetKeyState
SHORT WINAPI GetKeyState(
_In_ int nVirtKey
);
The logic is as follows:
Ask the user not to press any buttons.
Loop GetKeyState over all virtual-key codes 0-255.
Discard the keys that already report a pressed state (some virtual keys report as pressed even when nothing is held down; I don't know why).
Start a key-monitoring thread for the remaining key codes and save their state to some structure (a pause of 25 ms between polling loops is enough).
Ask the user to press a button.
The monitoring array will then show which buttons the user pressed.
DirectInput and the other APIs are more useful for other input devices. For keyboard and mouse, GetKeyState is best.
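A rough sketch of that polling approach (it uses GetAsyncKeyState in the monitoring loop so the result does not depend on the calling thread's message queue; where the pressed keys get recorded is left as a comment):

#include <windows.h>
#include <vector>

// Step 1: with nothing pressed, note which virtual keys already
// report as "down" so they can be ignored later.
std::vector<bool> BuildIgnoreList()
{
    std::vector<bool> ignore(256, false);
    for (int vk = 0; vk < 256; ++vk)
        if (GetKeyState(vk) & 0x8000)     // high bit set = key is down
            ignore[vk] = true;
    return ignore;
}

// Step 2: poll the remaining keys from a monitoring thread.
void MonitorKeys(const std::vector<bool>& ignore)
{
    for (;;)
    {
        for (int vk = 0; vk < 256; ++vk)
        {
            if (ignore[vk]) continue;
            if (GetAsyncKeyState(vk) & 0x8000)
            {
                // vk is being pressed by the user; record it.
            }
        }
        Sleep(25);   // 25 ms between polling loops, as suggested above
    }
}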
I'm working on a WinUI 3 - C++/WinRT - desktop application.
The application displays and updates in a window-sized XAML SwapChainPanel through Direct2D and needs to receive keyboard input from the user. With KeyDown and KeyUp on the SwapChainPanel, I can get raw keyboard input. However, this provides only VirtualKeys, ScanCodes, RepeatCount, etc. through the accompanying KeyRoutedEventArgs.
I can find no way to tell WinUI 3 which keyboard layout to use, nor any sort of keyboard management for things such as shift keys, etc. (the VirtualKeys are only capital ASCII letters).
What I've managed to do is build window messages from the KeyDowns and KeyUps and send them to TranslateMessage then DispatchMessage so they end up as WM_CHARs in the window's message loop.
This takes care of quite a bit of the shift, caps lock, etc. logic and produces Unicode. However, it doesn't take into account the keyboard layout which, in my case, is Canadian Multilingual with four layers of characters on most keys. I can receive some non-ASCII characters (Latin 1) but they aren't the right ones.
This must be a common situation with all the different languages in the world, but I haven't found anything in the way of a keyboard processing function that would receive raw information from the keyboard, process the control keys and output Unicode.
If the window's message pump is the only way to go (for now?), how do I get TranslateMessage to take the keyboard layout into account?
Thanks for any help with this.
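For reference, the Win32 function ToUnicodeEx performs that translation against an explicit keyboard layout. A minimal sketch, assuming the VirtualKey and ScanCode come from the KeyRoutedEventArgs (whether GetKeyboardState reflects the correct modifier state inside a WinUI 3 app is something you would need to verify):

#include <windows.h>
#include <string>

// Translate a virtual key + scan code into Unicode text using the
// keyboard layout of the current thread.
std::wstring TranslateKey(UINT virtualKey, UINT scanCode)
{
    BYTE keyState[256];
    GetKeyboardState(keyState);          // shift, caps lock, AltGr, ...
    HKL layout = GetKeyboardLayout(0);   // layout of the current thread

    WCHAR buffer[8];
    int count = ToUnicodeEx(virtualKey, scanCode, keyState,
                            buffer, 8, 0, layout);
    return count > 0 ? std::wstring(buffer, count) : std::wstring();
}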
I have a virtual keyboard and I am trying to figure out a way to detect the user swiping keys, i.e. pressing one WinAPI button control down (e.g. 'Q'), then moving the mouse/finger around over other keys to type out words. For a better description see the image below.
Currently the only solution I can think of is to detect a WM_LBUTTONDOWN on a button, then detect WM_MOUSEMOVE events over other buttons (by hit testing?) and record each key. When I next receive a WM_LBUTTONUP I know the user has finished typing. I've also tried to detect WM_TOUCH and WM_TOUCHHITTESTING events, but on a Surface Pro 3 these events are not firing, though that could be because I need to register for them first?
Is there an existing WinAPI function/methodology I could use that I am unaware of?
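For reference, the capture-and-hit-test approach described above might look roughly like this inside the parent window's WndProc, assuming the parent actually sees the button-down (e.g. the keys are drawn by the parent, or the child forwards the message). GetKeyForButton and RecordSwipedKey are hypothetical helpers (mapping a button HWND to its key and storing it); GET_X_LPARAM/GET_Y_LPARAM come from <windowsx.h>:

case WM_LBUTTONDOWN:
    SetCapture(hwnd);                 // keep receiving mouse moves
    return 0;

case WM_MOUSEMOVE:
    if (GetCapture() == hwnd)
    {
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        // Hit-test which child button control is under the cursor.
        HWND hit = RealChildWindowFromPoint(hwnd, pt);
        if (hit != NULL && hit != hwnd)
            RecordSwipedKey(GetKeyForButton(hit));
    }
    return 0;

case WM_LBUTTONUP:
    ReleaseCapture();                 // the swipe is finished
    return 0;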
I have a DLL that I am injecting into DirectX games. In the DLL, I disable XInput and raw input, and also subclass WndProc to filter a bunch of input messages such as WM_MOUSEMOVE, WM_LBUTTONDOWN, WM_INPUT, etc. I disable XInput with XInputEnable(FALSE) and unregister the raw input devices with the RIDEV_REMOVE flag.
While it works great for some games, it doesn't work for all of them. Certain games still receive mouse move/hover input and I can see the hover state on some UI elements when I move over them.
My question is: what did I miss? Could the game be capturing input in some other way?
Thank you.
I can think of these possible ways the application may still be receiving mouse input:
It re-enables Raw Input notifications
A window other than the one you subclassed is receiving the messages
It's polling GetCursorPos
Using the Windows HID API or other user-mode interface to access the mouse device
Hooking mouse events or window messages using SetWindowsHookEx
There are probably others, but these are all I can think of at the moment.
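For example, a low-level mouse hook installed with SetWindowsHookEx (the last item above) sees mouse input before it ever reaches the game's window procedure; a minimal sketch:

#include <windows.h>

static HHOOK g_mouseHook = NULL;

// Called for every mouse event system-wide; this version only
// observes the events and passes them along unchanged.
LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_MOUSEMOVE)
    {
        const MSLLHOOKSTRUCT* info = (const MSLLHOOKSTRUCT*)lParam;
        // info->pt holds the cursor position in screen coordinates.
    }
    return CallNextHookEx(g_mouseHook, nCode, wParam, lParam);
}

void InstallMouseHook()
{
    g_mouseHook = SetWindowsHookExW(WH_MOUSE_LL, LowLevelMouseProc,
                                    GetModuleHandleW(NULL), 0);
}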
How can I fire an automatic key press or mouse click event when a color appears on the screen in another application or browser?
It depends a lot on what you want. Do you want to send the keys to:
your application
another, fixed application
or do you want to simulate a global keypress?
Simulating keys globally
All of these will cause problems when targeting a specific application if the active window changes.
SendKeys sends messages to the active application. It's a high-level function taking a string which encodes a sequence of keys.
keybd_event is very low level and injects a global keypress. In most cases SendKeys is easier to use.
mouse_event simulates mouse input.
SendInput supersedes these functions. It's more flexible but a bit harder to use.
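A minimal SendInput sketch that presses and releases a single key globally (the 'A' key is just an example):

#include <windows.h>

// Simulate pressing and releasing the 'A' key.
void PressA()
{
    INPUT inputs[2] = {};

    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = 'A';                  // key down

    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = 'A';
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;  // key up

    SendInput(2, inputs, sizeof(INPUT));
}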
Sending to a specific window
When working with a fixed target window, sending it messages can work, depending on how the window processes input. Since this doesn't update all of the input state, it might not always work, but you avoid the race condition with changing window focus, which is worth a lot.
WM_CHAR sends a character in the Basic Multilingual Plane (16 bit)
WM_UNICHAR sends a character supporting the whole Unicode range
WM_KEYDOWN and WM_KEYUP send virtual keys, which will be translated to characters by the keyboard layout.
My recommendation: when targeting a specific window/application, try using messages first, and only if that fails try one of the lower-level solutions.
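As a rough sketch of the message-based approach, posting a character to a window located by its title (how you find the target window, and whether the top-level window or one of its child controls should receive the message, depends on the application):

#include <windows.h>

// Post a single WM_CHAR to a specific window, bypassing focus.
void SendCharTo(const wchar_t* windowTitle, wchar_t ch)
{
    HWND target = FindWindowW(NULL, windowTitle);
    if (target != NULL)
    {
        // Many applications expect the character in a child control
        // (e.g. an edit box), not the top-level window.
        PostMessageW(target, WM_CHAR, (WPARAM)ch, 0);
    }
}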
when a color appears on the screen in another application or browser
I made a program using OpenCV and C++ for operating the mouse with finger gestures. I used 3 colored strips for 3 mouse functions.
Yellow color for Left click
Blue color for Right click
Pink color for controlling cursor position
Whenever the camera detects one of these colors, the associated function is triggered. I used mouse_event to perform the mouse functions.
For more information you can read my code, blog, and video.
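For reference, a left click with mouse_event looks roughly like this (mouse_event is an older API and SendInput is the recommended replacement, but this is the call referred to above):

#include <windows.h>

// Perform a left click at the current cursor position.
void LeftClick()
{
    mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
    mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
}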
I'm not 100% sure what you want, but if all you are after is running the method linked to the button.Clicked event, then you can manually run the method just like any other method.
You can use the .NET SendKeys class to send keystrokes.
Emulating mouse clicks requires P/Invoke.
I don't know how to detect colors on the screen.
I am writing an input system for a game that needs to be able to handle keyboard schemes other than just QWERTY. In designing the system, I must take into consideration:
Two types of input: standard shooter controls (lots of buttons being pressed and raw samples collected) and flight sim controls (the button's label is what the user presses to toggle something)
Alternative software keyboard layouts (dvorak, azerty, etc) as supplied by the OS
Alternative hardware keyboard layouts that supply Unicode characters
My initial inclination is to sample the USB HID scancodes. I'm interested in thoughts on what I need to do to be compatible with the world's input devices, and in recommendations for input APIs on both platforms.
A simple solution is to allow customization of input. In the control-customization screen, record which key the OS tells you was pressed. In game, when you get a key press, check it against your list of bound keys and perform the appropriate action.
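A minimal sketch of that idea, using the key code the OS reports as the binding (the action names are placeholders):

#include <map>
#include <string>

// Map from the OS-reported key code to a game action.
std::map<int, std::string> g_bindings;

// Called from the control-customization screen when the user
// presses the key they want to bind.
void BindKey(int keyCode, const std::string& action)
{
    g_bindings[keyCode] = action;
}

// Called whenever the game receives a key press.
void OnKeyPressed(int keyCode)
{
    auto it = g_bindings.find(keyCode);
    if (it != g_bindings.end())
    {
        // Perform it->second, e.g. "jump" or "fire".
    }
}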
It looks like you need a cross-platform library for games. You could look at SDL:
http://www.libsdl.org/
It is quite popular in game development.
http://en.wikipedia.org/wiki/List_of_games_using_SDL
The library is quite modular; you can use only the part that handles input.
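A minimal SDL2 input loop looks roughly like this; note that SDL reports both a layout-independent scancode (useful for shooter-style controls) and a layout-dependent keycode (useful for "press the key labelled X" controls):

#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("input demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);

    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
        {
            if (e.type == SDL_QUIT)
                running = false;
            else if (e.type == SDL_KEYDOWN)
            {
                SDL_Scancode sc = e.key.keysym.scancode; // physical key location
                SDL_Keycode  kc = e.key.keysym.sym;      // meaning under current layout
                (void)sc; (void)kc;
            }
        }
    }

    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}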