I would like to add certain behavior to my program that binds a function to the numpad Enter key, if it is present, or binds an alternate key if it is not.
According to Microsoft:
The scan code is the value that the keyboard hardware generates when
the user presses a key. It is a device-dependent value that identifies
the key pressed, as opposed to the character represented by the key.
An application typically ignores scan codes. Instead, it uses the
device-independent virtual-key codes to interpret keystroke messages.
(source)
I know that on my keyboard it is 0x9C (156), but this is not guaranteed to hold true for all keyboards.
I can't use MapVirtualKey() with VK_RETURN and MAPVK_VK_TO_VSC as this always returns the scan code for the primary return key in the center of the keyboard.
How can I obtain this information without any intervention on the part of the user?
My language is C/C++ and this is for Win32 only.
The scan code depends on the hardware; it might not be the same on a different system. More than one scan code can map to the same virtual key. Virtual keys are supposed to be somewhat generic and not tied to the hardware.
You can tell the difference in WM_KEYDOWN and WM_KEYUP; WPARAM is VK_RETURN and bit 24 is set in LPARAM when the Enter key on the numpad is used:
Indicates whether the key is an extended key, such as the right-hand ALT and CTRL keys that appear on an enhanced 101- or 102-key keyboard. The value is 1 if it is an extended key; otherwise, it is 0.
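As a minimal sketch of that check (my own illustration, not code from the answer; names are arbitrary):
#include <windows.h>

// Sketch: distinguishing Numpad Enter from the main Enter key by testing
// the extended-key flag (bit 24 of lParam) on WM_KEYDOWN.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_KEYDOWN:
        if (wParam == VK_RETURN)
        {
            BOOL isNumpadEnter = (lParam & (1 << 24)) != 0; // extended key?
            // dispatch to the numpad or primary Enter handler here
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}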
GetKeyboardType can tell you some information about "the keyboard" but since there can be more than one keyboard connected these days you would have to go deeper to find out if there are any keyboards that have the properties you are looking for. Perhaps the SetupAPI knows.
For Numpad Enter to be recognized as either kind of Enter, the keyboard hardware must send a scan code that maps to VK_RETURN within the current keyboard layout. Keyboard layout is determined by system settings and the window which is receiving keyboard input (or the user), not by the physical keyboard. It is quite possible that the virtual keyboard layout does not match the physical keyboard (i.e. the labels on the keys don't match their functions).
There are two strategies for allowing different physical keyboard layouts to function correctly:
Change the labels visible on the keys but keep the scan codes in the same physical positions. For the function of each key to match its label, the user must choose, install or create the correct keyboard layout in software.
Physically move keys, or assign pre-established scan codes to different physical positions. The OS doesn't know or care where any key is physically; it just maps scan codes to virtual keycodes based on the current keyboard layout.
Both Enter and Numpad Enter are mapped to VK_RETURN; there is no virtual key code reserved for Numpad Enter, so no way for it to have different scan codes on different keyboard layouts. With strategy #1, any key can be turned into Enter, but not specifically Numpad Enter. With strategy #2, Numpad Enter still has the same scan code as usual.
At the hardware level, Enter sends 0x1C while Numpad Enter sends 0xE0 0x1C (source: my own observations and a document by Andries Brouwer). Windows has different ways of reporting this: with bit 24 of WM_KEYDOWN's lParam, the LLKHF_EXTENDED flag for low level keyboard hooks, the RI_KEY_E0 flag for Raw Input, and possibly more.
In short, it is safe to assume that Numpad Enter is 0x1C plus the extended-key flag, since in any other case, it is impossible to identify.
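For the Raw Input route, a hedged sketch (assuming the window has already registered for keyboard raw input with RegisterRawInputDevices, usUsagePage 0x01 / usUsage 0x06):
#include <windows.h>

// Sketch: called with the lParam of a WM_INPUT message; returns true for
// Numpad Enter (make code 0x1C with the E0 prefix flag).
bool IsNumpadEnter(LPARAM lParam)
{
    RAWINPUT raw;
    UINT size = sizeof(raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return false;
    return raw.header.dwType == RIM_TYPEKEYBOARD
        && raw.data.keyboard.MakeCode == 0x1C        // Enter make code
        && (raw.data.keyboard.Flags & RI_KEY_E0);    // E0 prefix: numpad
}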
I know that on my keyboard it is 0x9C (156)
I assume that you received this value from DirectInput, which defines an enum constant DIK_NUMPADENTER with value 0x9C. This value is not a scan code.
Anders makes a good point - I don't know of a way to tell if that key is present on (one of) the keyboard(s) attached to any particular system. Also, don't forget about the On-Screen Keyboard and touch devices in general.
Why not simply bind your function to both keys regardless? Do you have a good reason not to do this?
As an addition to @Lexikos' great answer:
The scan code is the value that the keyboard hardware generates when
the user presses a key. It is a device-dependent value that identifies
the key pressed, as opposed to the character represented by the key.
This was true in ancient days. At least since Windows NT, the system has used PS/2 Scan Code Set 1 for all keyboard APIs, with some quirks that are deliberately preserved for backwards compatibility (for example, the NumLock and Pause scan codes are swapped; they are special).
Under all Microsoft operating systems, all keyboards actually transmit
Scan Code Set 2 values down the wire from the keyboard to the keyboard
port. These values are translated to Scan Code Set 1 by the i8042 port
chip. The rest of the operating system, and all applications that
handle scan codes expect the values to be from Scan Code Set 1. Scan
Code Set 3 is not used or required for operation of Microsoft
operating systems. (Keyboard Scan Code Specification Revision 1.3a — March 16, 2000)
Because of this, some API docs refer to scan codes as virtual scan codes.
Modern USB and Bluetooth keyboards use the HID protocol with its HID Usage IDs to report key presses (see section 10, Keyboard/Keypad Page (0x07), in the HID Usage Tables spec for a list of possible keyboard key Usage IDs).
These HID usages get converted to PS/2 Scan Code Set 1 by the kbdclass driver (the HID client mapper driver for keyboards) via a call to the HidP_TranslateUsagesToI8042ScanCodes API. It works according to the published spec, so we actually have a published list of the scan codes used in Windows. If you're interested in the history behind this scan code mess, there is a good page on it.
There is no way to detect whether a keypad Enter button is present on a particular keyboard's hardware.
When adding a system tray icon in Windows, there are two versions of the API that we can pass to Shell_NotifyIcon() via the NOTIFYICONDATA structure. There are subtle differences between the two APIs, and these are not listed anywhere on MSDN. It took me some effort to figure out some of the differences, which I am going to share now. Improvements/additions to the answer are always welcome.
PS: This question is purely for sharing what I have learnt over the last few days experimenting with Windows DPI scaling.
The uVersion member of the NOTIFYICONDATA structure can have 3 possible values, representing the version of the API being used to create the taskbar icon.
0 Use this value for applications designed for Windows versions prior to Windows 2000.
NOTIFYICON_VERSION Use the Windows 2000 behavior. Use this value for applications designed for Windows 2000 and later.
NOTIFYICON_VERSION_4 Use the current behavior. Use this value for applications designed for Windows Vista and later.
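For reference, a sketch of how an application opts into a version when adding its icon (WM_APP_TRAYICON is a hypothetical application-defined callback message, not something from the post):
#include <windows.h>
#include <shellapi.h>

#define WM_APP_TRAYICON (WM_APP + 1)   // hypothetical callback message

void AddTrayIcon(HWND hwnd, HICON hIcon)
{
    NOTIFYICONDATA nid = { sizeof(nid) };
    nid.hWnd             = hwnd;
    nid.uID              = 1;
    nid.uFlags           = NIF_ICON | NIF_MESSAGE | NIF_TIP;
    nid.uCallbackMessage = WM_APP_TRAYICON;
    nid.hIcon            = hIcon;
    lstrcpy(nid.szTip, TEXT("My app"));
    Shell_NotifyIcon(NIM_ADD, &nid);

    nid.uVersion = NOTIFYICON_VERSION;       // or NOTIFYICON_VERSION_4
    Shell_NotifyIcon(NIM_SETVERSION, &nid);  // opt into that behavior
}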
When it comes to the message handler for the tray icon, wParam and lParam differ as illustrated in the following image.
Notice that with NOTIFYICON_VERSION_4, wParam gives the X and Y coordinates of various events, but there is no provision for getting the coordinates with NOTIFYICON_VERSION. This gives rise to an interesting behaviour (which was the cause of a bug I was trying to solve): if you use NOTIFYICON_VERSION and then invoke the context menu of the tray icon, the mouse cursor, wherever it may be when you invoke the menu, gets placed right at the center of the tray icon. Even if you use the keyboard (WINDOWS+B) to invoke the icon's context menu, the mouse cursor still moves to the icon.
This may not be of particular interest to you until you look at this particular BUG I am trying to solve in Pico torrent application.
Here is the scenario.
OS : Windows 10
Application isn't per-monitor DPI aware, but is system-level DPI aware.
There is an initial value of Desktop scaling set, say 150%, when the user logs in.
Pico torrent is running.
DPI scaling value is changed to, say, 125%
Pico torrent's context menu is invoked
The context menu will not be displayed at its proper place, and will be displaced a little, showing a deviance.
See the following images to understand what's happening.
The problem is that although MSDN says GET_X_LPARAM(wParam) and GET_Y_LPARAM(wParam) should give correct values in the tray icon's handler, they don't in the presence of DPI scaling (i.e. after a change in DPI scaling without a sign-out and sign-in). On the other hand, the GetCursorPos() API returns the correct mouse cursor coordinates. Note that NOTIFYICON_VERSION_4 along with GetCursorPos() will not work, since the context menu can be invoked using the keyboard, at which point the mouse cursor can be anywhere on the screen(s).
So, how do you combine all the knowledge just learnt to display the tray icon's context menu correctly when DPI scaling is changed in the manner above, without making your application per-monitor DPI aware (for per-monitor DPI aware applications, GET_X_LPARAM(wParam) and GET_Y_LPARAM(wParam) always return correct values)?
Use NOTIFYICON_VERSION instead of NOTIFYICON_VERSION_4; this positions the mouse cursor at the tray icon when the context menu is invoked. Then use GetCursorPos() to get the mouse cursor's position, and display the context menu using TrackPopupMenu() with those coordinates, as sketched below.
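A minimal sketch of this solution (g_hContextMenu and WM_APP_TRAYICON are hypothetical names; the icon is assumed to have been added with uVersion = NOTIFYICON_VERSION as above):
#include <windows.h>
#include <shellapi.h>

extern HMENU g_hContextMenu;   // hypothetical: created with CreatePopupMenu()

// Handler for the tray callback message; with NOTIFYICON_VERSION, lParam
// carries the mouse/keyboard message directly.
void OnTrayCallback(HWND hwnd, LPARAM lParam)
{
    if (lParam == WM_RBUTTONUP || lParam == WM_CONTEXTMENU)
    {
        POINT pt;
        GetCursorPos(&pt);          // reliable even after a DPI change; the
                                    // cursor sits on the icon, including for
                                    // keyboard invocation
        SetForegroundWindow(hwnd);  // so the menu dismisses properly
        TrackPopupMenu(g_hContextMenu, TPM_RIGHTBUTTON, pt.x, pt.y, 0, hwnd, NULL);
        PostMessage(hwnd, WM_NULL, 0, 0);  // standard tray-menu cleanup
    }
}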
PS: In the example above the DPI scaling value is changed from 150% to 125%. The context menu deviance is more pronounced when DPI scaling goes from a bigger value to a smaller one and your tray icon area is at the lower right of the screen. This is because when DPI scaling changes and Windows magnifies the UI elements of applications which are not per-monitor aware (using DPI virtualization), things move rightwards and downwards. E.g. if an application window's rectangle is (0,0,100,100) in screen coordinates, then after magnification to 150% it may become (0,0,150,150). Now for the tray icon's menu, if you specify coordinates which lie beyond the bottom-right of the screen, the OS will still display it at a bottom-right position which lies inside the screen and which ensures the menu is displayed properly. E.g. if a screen is 1920x1080 and TrackPopupMenu() is given (10000,10000) for the menu, the menu will still be displayed inside the 1920x1080 screen rectangle. Thus increasing DPI scaling will not move the context menu any further if it has already reached the bottom-right-most position.
@sahil-singh you are right about that, and I agree with everyone else that your application should be DPI aware, but that is not the point here.
I had a similar issue where my application is (still) not DPI aware and GET_X_LPARAM(wParam) returns non-virtual coordinates. After passing these values to TrackPopupMenu() I get the wrong position on the screen.
The best way is to use GetMessagePos() instead of wParam. In this case, Windows gives you a new DWORD with virtual coordinates; use GET_X_LPARAM/GET_Y_LPARAM on it to get values that you can pass to TrackPopupMenu().
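A short sketch of that approach (hMenu is assumed to exist):
#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

// Sketch: use the cursor position captured at message time, in virtual-screen
// coordinates, instead of the wParam of the tray callback.
void ShowTrayMenu(HWND hwnd, HMENU hMenu)
{
    DWORD pos = GetMessagePos();
    int x = GET_X_LPARAM(pos);
    int y = GET_Y_LPARAM(pos);
    SetForegroundWindow(hwnd);
    TrackPopupMenu(hMenu, TPM_RIGHTBUTTON, x, y, 0, hwnd, NULL);
}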
This seems to (2 years later in 2019) be documented on MSDN:
NOTIFYICONDATAA structure
uCallbackMessage
Type: UINT
When the uVersion member is either 0 or NOTIFYICON_VERSION, the wParam parameter of the message contains the identifier of the taskbar icon in which the event occurred. This identifier can be 32 bits in length. The lParam parameter holds the mouse or keyboard message associated with the event. For example, when the pointer moves over a taskbar icon, lParam is set to WM_MOUSEMOVE.
When the uVersion member is NOTIFYICON_VERSION_4, applications continue to receive notification events in the form of application-defined messages through the uCallbackMessage member, but the interpretation of the lParam and wParam parameters of that message is changed as follows:
LOWORD(lParam) contains notification events, such as NIN_BALLOONSHOW, NIN_POPUPOPEN, or WM_CONTEXTMENU.
HIWORD(lParam) contains the icon ID. Icon IDs are restricted to a length of 16 bits.
GET_X_LPARAM(wParam) returns the X anchor coordinate for notification events NIN_POPUPOPEN, NIN_SELECT, NIN_KEYSELECT, and all mouse messages between WM_MOUSEFIRST and WM_MOUSELAST. If any of those messages are generated by the keyboard, wParam is set to the upper-left corner of the target icon. For all other messages, wParam is undefined.
GET_Y_LPARAM(wParam) returns the Y anchor coordinate for notification events and messages as defined for the X anchor.
I am writing a small proof of concept for detecting extra inputs from mice and keyboards on Windows. Is it possible to detect input from a large number of buttons with the Windows API, and how do I go about it? From what I have read, there is only support for 5 mouse buttons, but many mice have more than that. Is this even possible within the constraints of Windows?
You can use the Raw Input API to receive WM_INPUT messages directly from the mouse/keyboard driver. There are structure fields for the 5 standard mouse buttons (left, middle, right, x1, and x2). Beyond the standard buttons, additional buttons are handled by vendor-specific data that you would have to code for as needed. The API can give you access to the raw values, but you will have to refer to the vendor driver documentation for how to interpret them. Sometimes extra buttons are actually reported as keyboard input instead of mouse input.
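A hedged sketch of the Raw Input route for mice (standard button flags only; truly vendor-specific buttons would require interpreting the raw HID data, as noted above):
#include <windows.h>

// Register to receive WM_INPUT messages for mice.
void RegisterForRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;   // HID generic desktop page
    rid.usUsage     = 0x02;   // mouse
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Call with the lParam of a WM_INPUT message.
void HandleRawMouse(LPARAM lParam)
{
    RAWINPUT raw;
    UINT size = sizeof(raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;
    if (raw.header.dwType == RIM_TYPEMOUSE)
    {
        USHORT flags = raw.data.mouse.usButtonFlags;
        if (flags & RI_MOUSE_BUTTON_4_DOWN) { /* x1 pressed */ }
        if (flags & RI_MOUSE_BUTTON_5_DOWN) { /* x2 pressed */ }
    }
}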
Or, try using the DirectInput API to interact with DirectInput devices to receive Mouse Data and Keyboard Data.
Or, you could use the XInput API, which is the successor of DirectInput. However, XInput is more limited than DirectInput, as it is designed primarily for interacting with the Xbox 360 controller, whereas DirectInput is designed to interact with any controller. See XInput and DirectInput for more details.
Very simple: use GetKeyState
SHORT WINAPI GetKeyState(
_In_ int nVirtKey
);
The logic is as follows:
Ask the user not to press any buttons.
Loop GetKeyState over all key codes 0-255.
Discard the buttons already reported as pressed (some virtual keys report as pressed even when they are not; I don't know why).
Start a key-monitoring thread for the remaining key codes and save their states to some structure (a 25 ms pause between loop iterations is enough).
Ask the user to press a button.
The key-monitoring array will show you whichever buttons the user pressed.
DirectInput and all the others are more useful for other user input devices. For keyboard and mouse, GetKeyState is best.
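A console sketch of this polling approach (using GetAsyncKeyState so it works without a message loop; in a GUI thread GetKeyState would do):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SHORT baseline[256];
    printf("Do not press anything...\n");
    Sleep(1000);
    // Snapshot the idle state; some keys report as pressed even when idle.
    for (int vk = 0; vk < 256; ++vk)
        baseline[vk] = GetAsyncKeyState(vk);

    printf("Now press a button...\n");
    for (;;)
    {
        for (int vk = 0; vk < 256; ++vk)
        {
            SHORT state = GetAsyncKeyState(vk);
            // High bit set = currently down; skip keys that were already
            // down in the baseline snapshot.
            if ((state & 0x8000) && !(baseline[vk] & 0x8000))
                printf("virtual key 0x%02X is down\n", vk);
        }
        Sleep(25);  // the ~25 ms poll interval suggested above
    }
}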
NSEvent keyCode gives a keyboard scan code, which is a hardware-specific code representing the physical key. I want to convert the scan code to a virtual key code, which is the logical key based on the user's keyboard layout (QWERTY, AZERTY, etc.).
In Windows I can do this via MapVirtualKey. What is the OS X equivalent?
The virtual key code is precisely not based on the user's keyboard layout. It indicates which key was pressed, not what character that key would produce nor how it's labeled.
For example, kVK_ANSI_A (from Carbon/HIToolbox/Events.h, value 0x00) does not mean the key which produces the 'A' character, it means the key which is in the position that the 'A' key is in an ANSI standard keyboard. If a French keyboard layout is active, that key will produce 'Q'. If the physical keyboard is a French keyboard, that key will probably be labeled 'Q', too.
So, the virtual key code is sort of akin to a scan code, but from an idealized, standard keyboard. It is, as noted, hardware-independent. It is also independent of the keyboard layout.
To translate from the virtual key code to a character, you can use UCKeyTranslate(). You need the 'uchr' data for the current keyboard layout. You can get that using TISCopyCurrentKeyboardLayoutInputSource() and then TISGetInputSourceProperty() with kTISPropertyUnicodeKeyLayoutData as the property key.
You also need the keyboard type code. I believe it's still supported to use LMGetKbdType() to get that, even though it's no longer documented except in the legacy section. If you don't like that, you can obtain a CGEvent from the NSEvent, create a CGEventSource from that using CGEventCreateSourceFromEvent(), and then use CGEventSourceGetKeyboardType(), or call CGEventGetIntegerValueField() with kCGKeyboardEventKeyboardType, to get the keyboard type.
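Putting those pieces together, a hedged C sketch (Carbon APIs as described above; error handling omitted for brevity):
#include <Carbon/Carbon.h>

// Sketch: translate a virtual key code to the characters it would produce
// under the current layout, via UCKeyTranslate.
CFStringRef StringForVirtualKey(UInt16 virtualKey)
{
    TISInputSourceRef source = TISCopyCurrentKeyboardLayoutInputSource();
    CFDataRef layoutData = (CFDataRef)TISGetInputSourceProperty(
        source, kTISPropertyUnicodeKeyLayoutData);
    const UCKeyboardLayout *layout =
        (const UCKeyboardLayout *)CFDataGetBytePtr(layoutData);

    UInt32 deadKeyState = 0;
    UniChar chars[4];
    UniCharCount length = 0;
    UCKeyTranslate(layout, virtualKey, kUCKeyActionDown,
                   0,                              // no modifiers
                   LMGetKbdType(),                 // keyboard type code
                   kUCKeyTranslateNoDeadKeysMask,  // ignore dead-key state
                   &deadKeyState, 4, &length, chars);

    CFRelease(source);
    return CFStringCreateWithCharacters(kCFAllocatorDefault, chars, length);
}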
Of course, it's much easier to simply use -[NSEvent characters] or -[NSEvent charactersIgnoringModifiers]. Or, if you're implementing a text view, send key-down events to -[NSResponder interpretKeyEvents:] (as discussed in Cocoa Event Handling Guide: Handling Key Events) or -[NSTextInputContext handleEvent:] (as discussed in Cocoa Text Architecture Guide:Text Editing). Either of those will call back to the view with the appropriate action selector, like moveBackward:, or with -insertText: if the keystroke (in context of recent events and the input source) would produce text.
According to the NSEvent documentation, -[NSEvent keyCode] returns the hardware-independent virtual key code.
Imagine an application that displays a button: OK. Is it possible to break the execution of the program and view the disassembly using WinDbg, right after the button has received a click? How would I do that? In this scenario, the source code is not available.
So, your description is very general and not very well defined, and the exact research really depends on the application that you are trying to reverse. You will have an easier time if you have symbols, but these aren't required.
First, some (trivial) background: Windows communicates with the application through Windows Messages. The application will fetch messages from the message queue, and almost always will dispatch those messages to the appropriate Windows Procedure.
So, first - what do you mean by "right after the button has received a click"? I suspect that you actually don't care about this code, although your application could have a custom button, in which case you really would care how the button handles the WM_LBUTTONDOWN message. I'm going to assume that your application has a stock Windows button (implemented in user32.dll or comctl32.dll) and that you don't care about that.
The default implementation of a button control handling WM_LBUTTONDOWN is to send WM_COMMAND to the window that contains the button. Typically, the application that you want to investigate handles the "click" there. Now, if this is the 'OK' button, its ID would be IDOK (defined to be 1), and Windows will send you the same message when you press the Enter key.
So, we are now looking for how the application handles WM_COMMAND. What you want to find is the window procedure. Do that with Spy++: open Spy++ and find the window that contains your button. Most likely the code you are looking for is in the window procedure of that window. Spy++ will tell you the address of the window procedure.
As an example, let's look at the 'Save' button of the 'Save As' dialog in Notepad. On my machine the address is: 0x73611142, which is in ComCtl32.dll
Go to WinDbg, and take a look at the function.
0:000> u 73611142
COMCTL32!MasterSubclassProc
73611142 8bff mov edi,edi
73611144 55 push ebp
73611145 8bec mov ebp,esp
73611147 6afe push 0FFFFFFFEh
73611149 6858126173 push offset COMCTL32!Ordinal377+0x146 (73611258)
7361114e 68a1b06273 push offset COMCTL32!DllGetVersion+0x336f (7362b0a1)
73611153 64a100000000 mov eax,dword ptr fs:[00000000h]
73611159 50 push eax
This is indeed a function. Like many Windows functions, it starts with mov edi,edi, and then it sets up the frame pointer.
Put a break point, hit go, and you'll almost immediately break. Let's take a look:
0:000> bu 73611142
0:000> g
0:000> kb1
ChildEBP RetAddr Args to Child
0101f220 75d87443 00120c6a 00000046 00000000 COMCTL32!MasterSubclassProc
The first argument (00120c6a) is the handle of the window. Compare it with the value in Spy++; it should be the same. The second argument is the message. In my case it was 0x46, which is WM_WINDOWPOSCHANGING.
OK, I don't care about all those messages; I want to break only on the messages I care about. You care about WM_COMMAND, which is 0x0111 (winuser.h).
Now, enter the following (slightly complex) command:
0:000> bu 73611142 "j poi(esp+4)==00120c6a and poi(esp+8)==111 ''; 'gc'"
breakpoint 0 redefined
You set a breakpoint on the window procedure, and you tell WinDbg to break only when the first argument (that's the poi(esp+4)) is your window handle and the second argument is 111. The 'gc' tells WinDbg to continue execution when the condition is not met.
Now you can debug the disassembly. If you have symbols you'll have an easier job, but they aren't necessary. In any case, remember to download Microsoft's stripped-down symbols from the symbol server, so that if the code you are debugging calls a Windows API, you can see it.
That's about it. Modify this technique if your requirements are different (different window, different message, etc.). As a last resort, consider putting a breakpoint on PostMessage or DispatchMessage if you can't reliably find the window procedure (although you'll then have to follow that code). For heavy-lifting reversing, use IDA, which will disassemble the executable and resolve various cross-references.
Assuming you had the PDBs and they did not have the private symbols stripped, you would set the breakpoint on the button handler like so:
bp myDLL!myWindowApp::onOKBtnClicked
If you had the PDBs, you could search for a likely handler using x:
x myDLL!myWindowApp::*ok*
This presumes that you know or can guess which DLL and what the function name is; otherwise you could glean this information using Spy++, Win Spy++ or Win Detective to get the handle for the button, intercept the window messages, and from that info set the breakpoint.
Once it hits the breakpoint you can view the assembly code using u; there is an MSDN guide if you require it.