WinUI 3 keyboard layout for input from SwapChainPanel

I'm working on a WinUI 3 / C++/WinRT desktop application.
The application displays and updates in a window-sized XAML SwapChainPanel through Direct2D and needs to receive keyboard input from the user. With KeyDown and KeyUp on the SwapChainPanel, I can get raw keyboard input. However, this provides only VirtualKeys, ScanCodes, RepeatCount, etc. through the accompanying KeyRoutedEventArgs.
I can find no way to tell WinUI 3 which keyboard layout to use, nor any sort of keyboard management for such things as Shift keys, etc. (VirtualKeys are only uppercase ASCII letters).
What I've managed to do is build window messages from the KeyDowns and KeyUps and send them to TranslateMessage then DispatchMessage so they end up as WM_CHARs in the window's message loop.
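In case it's useful, the forwarding looks roughly like this (a trimmed-down sketch: MainPage and Panel_KeyDown are just illustrative names, m_hwnd is the window's HWND obtained e.g. through IWindowNative, and error handling is omitted):

// Sketch of a KeyDown handler on the SwapChainPanel (inside the page class).
// Assumes m_hwnd holds the top-level window's HWND.
void MainPage::Panel_KeyDown(winrt::Windows::Foundation::IInspectable const&,
                             winrt::Microsoft::UI::Xaml::Input::KeyRoutedEventArgs const& e)
{
    auto status = e.KeyStatus();
    MSG msg{};
    msg.hwnd = m_hwnd;
    msg.message = WM_KEYDOWN;
    msg.wParam = static_cast<WPARAM>(e.Key());
    // lParam layout: bits 0-15 repeat count, bits 16-23 scan code, bit 24 extended-key flag.
    msg.lParam = static_cast<LPARAM>((status.RepeatCount & 0xFFFF)
               | ((status.ScanCode & 0xFF) << 16)
               | (status.IsExtendedKey ? (1u << 24) : 0u));
    TranslateMessage(&msg);   // generates a WM_CHAR from the current keyboard state
    DispatchMessage(&msg);
}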
This takes care of quite a bit of the shift, caps lock, etc. logic and produces Unicode. However, it doesn't take into account the keyboard layout which, in my case, is Canadian Multilingual with four layers of characters on most keys. I can receive some non-ASCII characters (Latin 1) but they aren't the right ones.
This must be a common situation with all the different languages in the world, but I haven't found anything in the way of a keyboard processing function that would receive raw information from the keyboard, process the control keys and output Unicode.
If the window's message pump is the only way to go (for now?), how do I get TranslateMessage to take the keyboard layout into account?
Thanks for any help with this.

Related

Safe KLID For 'Custom' Keyboard Layout?

I need to install several 'custom' keyboard layouts on Windows 10.
These are not MSKLC-generated layouts.
What is a 'safe' KLID to use for my layouts?
Axxxxxxx seems to be used by MSKLC.
Dxxxxxxx seems to be used by the layout Preload / Substitutes entries.
I have several keyboard layouts to install, i.e., ????0409, ????0407, ????040e, ...
Any ideas for a relatively 'safe' value for '????' ?
I am concerned about running into someone else's keyboard layout.
Thanks
KLID — a keyboard layout identifier. Traditionally pronounced "Kay-El-Eye-Dee" because some people in the USA get very uptight about certain homonyms (you can catch me slipping on this point from time to time). It's also sometimes called the input locale identifier since the name for HKL has been updated (see the HKL definition for info on why that is incorrect, since the HKL is for something different). The KLID can be retrieved for the currently selected keyboard layout in a thread through the GetKeyboardLayoutName API (note the pwszKLID parameter), though that is not true of any other selected or installed keyboard layout. Every keyboard layout on the system has one of these. Each KLID is 32 bits (thus 8 hex digits), and they can all be found in the registry as the subkeys under HKLM\SYSTEM\CurrentControlSet\Control\Keyboard Layouts\. The bottom half of the KLID is a LANGID, and the top half is something device-specific. By convention, the first hex digit is usually as follows:
0 — Most keyboard layouts
A — Keyboard layouts defined by MSKLC
B — Keyboard layouts defined by KbdEdit
D — Some non-CJK input methods that have been defined by the Text Services Framework (note: reported to me; I have never seen one of these!)
E — CJK input methods, also known as IMEs (deprecated and AFAIK not used since Windows 8)
Looks like you can use 1..9, C, or F as the first hex digit.
Source: http://archives.miloush.net/michkap/archive/2005/04/17/409032.html
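For reference, reading the current thread's KLID and the LANGID in its bottom half looks roughly like this (a minimal sketch with no error handling):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    // KL_NAMELENGTH (9) covers the 8 hex digits plus the terminating null.
    wchar_t klid[KL_NAMELENGTH];
    if (GetKeyboardLayoutNameW(klid))
    {
        // The bottom half of the 32-bit value is the LANGID.
        unsigned long value = wcstoul(klid, nullptr, 16);
        wprintf(L"KLID: %ls  LANGID: 0x%04lX\n", klid, value & 0xFFFFUL);
    }
    return 0;
}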

Support for up to eleven mouse buttons?

I am writing a small proof of concept for detecting extra inputs across mice and keyboards on Windows. Is it possible, and how do I go about detecting input from a large number of buttons in the Windows API? From what I have read, there is only support for 5 buttons, but many mice have more buttons than that. Is what I'm asking even possible with the Windows API, or at all within the constraints of Windows?
You can use the Raw Input API to receive WM_INPUT messages directly from the mouse/keyboard driver. There are structure fields for the 5 standard mouse buttons (left, middle, right, x1, and x2). Beyond the standard buttons, additional buttons are handled by vendor-specific data that you would have to code for as needed. The API can give you access to the raw values, but you will have to refer to the vendor driver documentation for how to interpret them. Sometimes extra buttons are actually reported as keyboard input instead of mouse input.
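A rough sketch of that registration and WM_INPUT handling (window-procedure plumbing and error handling are assumed; the vendor-specific data is left alone here):

#include <windows.h>
#include <vector>

// Call once after creating the window to start receiving WM_INPUT for mice.
void RegisterForRawMouseInput(HWND hwnd)
{
    RAWINPUTDEVICE rid{};
    rid.usUsagePage = 0x01;          // generic desktop controls
    rid.usUsage = 0x02;              // mouse
    rid.dwFlags = RIDEV_INPUTSINK;   // receive input even when not in the foreground
    rid.hwndTarget = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Call from the window procedure when msg == WM_INPUT.
void OnRawInput(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                    nullptr, &size, sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buffer(size);
    if (GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                        buffer.data(), &size, sizeof(RAWINPUTHEADER)) != size)
        return;

    auto* raw = reinterpret_cast<RAWINPUT*>(buffer.data());
    if (raw->header.dwType == RIM_TYPEMOUSE)
    {
        // usButtonFlags reports transitions for the five standard buttons
        // (RI_MOUSE_BUTTON_1_DOWN .. RI_MOUSE_BUTTON_5_DOWN and the matching _UP flags).
        USHORT flags = raw->data.mouse.usButtonFlags;
        (void)flags; // extra vendor buttons, if any, arrive as HID data or as keyboard input
    }
}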
Or, try using the DirectInput API to interact with DirectInput devices to receive Mouse Data and Keyboard Data.
Or, you could use the XInput API, which is the successor of DirectInput. However, XInput is more limited than DirectInput, as it is designed primarily for interacting with the Xbox 360 controller, whereas DirectInput is designed to interact with any controller. See XInput and DirectInput for more details.
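For a taste of the XInput side, polling a controller's buttons looks roughly like this (sketch only; the import library name may vary by SDK):

#include <windows.h>
#include <Xinput.h>
#pragma comment(lib, "xinput.lib")   // MSVC-specific; otherwise add the library to the linker inputs

// Polls controller 0 and reports whether the A button is currently held.
bool IsAButtonDown()
{
    XINPUT_STATE state{};
    if (XInputGetState(0, &state) != ERROR_SUCCESS)
        return false;   // no controller connected in this slot
    return (state.Gamepad.wButtons & XINPUT_GAMEPAD_A) != 0;
}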
Very simple: use GetKeyState
SHORT WINAPI GetKeyState(
_In_ int nVirtKey
);
The logic is as follows:
Ask the user not to press any buttons.
Loop over GetKeyState for all virtual-key codes 0-255.
Drop the keys that already report as pressed (some virtual keys can report as pressed even when they are not; I don't know why).
Now start a monitoring thread for the remaining key codes and save their states to a structure (a 25 ms pause between polling loops is enough).
Ask the user to press a button.
The monitoring array will show which buttons the user pressed.
DirectInput and the other APIs are more useful for other input devices. For keyboard and mouse, GetKeyState is best.
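A compact sketch of that polling idea; note it uses GetAsyncKeyState instead of GetKeyState so it also works from a thread without a message loop, and it just prints new presses rather than storing them:

#include <windows.h>
#include <array>
#include <cstdio>

int main()
{
    // Baseline pass: ignore keys that already report as "down".
    std::array<bool, 256> baseline{};
    for (int vk = 0; vk < 256; ++vk)
        baseline[vk] = (GetAsyncKeyState(vk) & 0x8000) != 0;

    std::printf("Press a button (Esc to quit)...\n");

    // Poll the remaining codes every 25 ms and report new presses.
    for (;;)
    {
        for (int vk = 0; vk < 256; ++vk)
        {
            if (baseline[vk])
                continue;
            if (GetAsyncKeyState(vk) & 0x8000)
                std::printf("Virtual key 0x%02X is down\n", vk);
        }
        if (GetAsyncKeyState(VK_ESCAPE) & 0x8000)
            break;
        Sleep(25);
    }
    return 0;
}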

What does it mean for a control to be Unicode or ANSI?

Windows controls can be Unicode or ANSI; my question is, what does it mean for a control to be in one of these two categories?
Is it the following (if I assume that the control is Unicode):
The buffer for the control is in Unicode.
WM_CHAR messages are sent as Unicode (e.g. 'A' is sent as 0x0041 and not as 0x41).
It means the window messages sent to the window procedure will be the ANSI or Unicode versions. For example a Unicode window will receive a CREATESTRUCTW in WM_CREATE, while the ANSI windows will receive a CREATESTRUCTA.
This will apply to almost every window message containing string data.
Windows will internally marshal data accordingly. For example if you call the ANSI SendMessageA(WM_SETTEXT, ...) to a Unicode window, the window will receive WM_SETTEXT with a Unicode string.
A window will be Unicode or ANSI depending on whether its class was registered with RegisterClassExA or RegisterClassExW. You can test if a window is Unicode or not by calling IsWindowUnicode.
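As a small illustration (the class name is arbitrary and most class fields are left at defaults), registering with the W function and checking the result:

#include <windows.h>

// Registering the class with RegisterClassExW makes windows of that class Unicode;
// IsWindowUnicode confirms it at run time.
HWND CreateUnicodeWindow(HINSTANCE instance)
{
    WNDCLASSEXW wc{ sizeof(wc) };
    wc.lpfnWndProc = DefWindowProcW;
    wc.hInstance = instance;
    wc.lpszClassName = L"ExampleUnicodeClass";
    RegisterClassExW(&wc);

    HWND hwnd = CreateWindowExW(0, L"ExampleUnicodeClass", L"Example",
                                WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                400, 300, nullptr, nullptr, instance, nullptr);

    BOOL unicode = IsWindowUnicode(hwnd);   // TRUE for this window
    (void)unicode;
    return hwnd;
}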
Some info about common controls, because that seems to be what you're talking about.
First of all, there is no problem with an ANSI window being the parent of a Unicode window. Also keep in mind that "controls" are just windows.
Common controls are always Unicode. The messages they receive will be in the native Unicode format. Of course you won't have visibility into this because it's all internal to the OS (exception: if you subclass a common control).
The messages you are typically dealing with will be sent to your window in the form of WM_COMMAND or WM_NOTIFY. These messages are sent from common controls to their parent window (your window). They will respect your window being Unicode or ANSI like this:
First they will send your window a WM_NOTIFYFORMAT message to ask whether you prefer to receive notifications in ANSI or Unicode (regardless of whether your window itself is Unicode or ANSI).
If you don't handle WM_NOTIFYFORMAT, the control will call IsWindowUnicode to decide.
Then the control will send the message accordingly.
So basically, you never need to know if a common control is Unicode or not, because 1) it always is, and 2) this question is only relevant as a matter of handling messages, which is internal to Windows, not your responsibility.
Your responsibility is to handle notifications coming from common controls to your own window. In this case the Unicode/ANSI-ness of your own window is all that matters.
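That said, if you ever want to answer the format query yourself, handling WM_NOTIFYFORMAT in the parent's window procedure looks roughly like this (boilerplate trimmed):

#include <windows.h>

LRESULT CALLBACK ParentWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_NOTIFYFORMAT:
        // A common control is asking whether this window wants ANSI or Unicode
        // notifications; answer explicitly instead of letting it call IsWindowUnicode.
        return NFR_UNICODE;   // or NFR_ANSI

    case WM_NOTIFY:
        // Notifications arrive here in the format negotiated above.
        break;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}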

How can I fire a key press or mouse click event without touching any input device at system level?

How can I fire an automatic key press or mouse click event when a color appears on the screen in another application or browser?
It depends a lot on what you want. Do you want to:
send the keys to your application,
send them to another, fixed application, or
simulate a global keypress?
Simulating keys globally
All of these will cause problems when targeting a specific application if the active window changes.
SendKeys sends messages to the active app. It's a high-level function taking a string which encodes a sequence of keys.
keybd_event is very low level and injects a global keypress. In most cases SendKeys is easier to use.
mouse_event simulates mouse input.
SendInput supersedes these functions. It's more flexible but a bit harder to use.
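For illustration, a minimal SendInput sketch for a single keystroke (Enter here; substitute the key you need):

#include <windows.h>

// Injects one press-and-release of the Enter key into the global input stream.
void SendEnterKey()
{
    INPUT inputs[2]{};
    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = VK_RETURN;

    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = VK_RETURN;
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;

    SendInput(2, inputs, sizeof(INPUT));
}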
Sending to a specific window
When working with a fixed target window, sending it messages can work, depending on how the window is written. Since this doesn't update all input state it might not always work, but you avoid the race condition with a changing window focus, which is worth a lot.
WM_CHAR sends a character in the Basic Multilingual Plane (16-bit).
WM_UNICHAR sends a character supporting the whole Unicode range.
WM_KEYDOWN and WM_KEYUP send keys, which will be translated to characters by the keyboard layout.
My recommendation is when targeting a specific window/application try using messages first, and only if that fails try one of the lower level solutions.
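For the message route, a bare-bones sketch with a placeholder window lookup (use your real target's class or title, which is often a child control rather than the top-level window):

#include <windows.h>

// Posts the character 'A' to a specific window without changing focus.
// "TargetWindowClass" is a placeholder for the actual window class you want to reach.
void PostCharToTarget()
{
    HWND target = FindWindowW(L"TargetWindowClass", nullptr);
    if (target != nullptr)
        PostMessageW(target, WM_CHAR, L'A', 0);
}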
when a color appears on the screen in another application or browser
I made a program using OpenCV and C++ for operating the mouse with finger gestures. I used 3 color strips for 3 mouse functions.
Yellow color for Left click
Blue color for Right click
Pink color for controlling cursor position
Whenever the camera detects one of these colors, the associated function takes place; I used mouse_event to perform the mouse actions.
For more information you may look at my code, blog, and video.
I'm not 100% sure what you want, but if all you are after is running the method linked to the button.Clicked event, then you can manually run the method just like any other method.
You can use the .NET SendKeys class to send keystrokes.
Emulating mouse clicks requires P/Invoke.
I don't know how to detect colors on the screen.

In writing games that deal with scancodes, what do I need to know to support international keyboards on Mac and PC?

I am writing an input system for a game that needs to be able to handle keyboard schemes that are not just qwerty. In designing the system, I must take into consideration:
Two types of input: standard shooter controls (lots of buttons being pressed and raw samples collected) and flight sim controls (the button's label is what the user presses to toggle something)
Alternative software keyboard layouts (dvorak, azerty, etc) as supplied by the OS
Alternative hardware keyboard layouts that supply Unicode characters
My initial inclination is to sample the USB HID scancodes. I'm interested in thoughts on what I need to do to be compatible with the world's input devices, and in recommendations for input APIs on both platforms.
A simple solution is to allow customization of input. In the control customization screen, record which key the OS tells you has been pressed. In game, when you get a key press, check it against your list of bound keys and perform the appropriate action.
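A tiny sketch of that binding table; the key-code type and the action names are placeholders for whatever your platform and game actually use:

#include <unordered_map>
#include <functional>

// Maps an OS-reported key code (scancode or virtual key, whichever you record
// during customization) to a game action.
enum class Action { Jump, Fire, Reload };

std::unordered_map<int, Action> bindings;

void BindKey(int osKeyCode, Action action) { bindings[osKeyCode] = action; }

// Called from the input handler with the same kind of code you recorded.
void OnKeyPressed(int osKeyCode, const std::function<void(Action)>& perform)
{
    auto it = bindings.find(osKeyCode);
    if (it != bindings.end())
        perform(it->second);
}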
It looks like you need a cross-platform library for games. You can look at SDL:
http://www.libsdl.org/
It is quite popular in game development.
http://en.wikipedia.org/wiki/List_of_games_using_SDL
The library is quite modular; you can use only the part that handles input.
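For instance, with SDL 2 the event loop hands you both the physical scancode and the layout-dependent keycode (a minimal sketch):

#include <SDL.h>
#include <stdio.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("input", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    bool running = true;
    while (running)
    {
        SDL_Event event;
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
                running = false;
            else if (event.type == SDL_KEYDOWN)
            {
                // Scancode: physical key position (good for WASD-style shooter controls).
                // Keycode: layout-dependent label (good for "press F for flaps" prompts).
                printf("scancode %s, keycode %s\n",
                       SDL_GetScancodeName(event.key.keysym.scancode),
                       SDL_GetKeyName(event.key.keysym.sym));
            }
        }
    }
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}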
