Why TTN_NEEDTEXTW but not TTN_NEEDTEXTA?

This is an old problem that I've never figured out - I wondered if someone here might happen to know the answer off the top of their head...
In some parts of our software (MFC/Win32/MBCS) my code will only receive
TTN_NEEDTEXTW
In other parts of our software, I'll receive the MBCS-correct message
TTN_NEEDTEXTA
It makes no sense to me.
I understand that our software can be compiled as Unicode or not (we are set to use the multi-byte character set). I also have a vague recollection that each window can be constructed as Unicode or not, though this is a vague memory, nothing concrete.
Does anyone know why we'd be getting the wide version message some places in our code, despite being compiled as multibyte?
NOTES:
We're definitely not sending this message - presumably the ToolTip control is.
We're definitely only receiving the (W) message in some places, and definitely only receiving the (A) message in others.
I'm certain that all compilation modules use MBCS, not Unicode, and that the build targets all specify MBCS not Unicode.
This seems to happen only for CMainFrame-hosted windows and controls; windows outside the main frame (say, in a dialog box) can use the narrow versions.

The common control sends you a WM_NOTIFYFORMAT message to ask you "Would you prefer to receive MBCS notifications or Unicode notifications?" The default is to respond based on whether the window was created via CreateWindowExW or CreateWindowExA.
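For a plain Win32 window (outside MFC), that reply can be forced in the window procedure. A minimal sketch, assuming a hypothetical MyWndProc registered for the parent window:

```cpp
#include <windows.h>
#include <commctrl.h>

LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_NOTIFYFORMAT:
        // NF_QUERY: a child common control (e.g. the tooltip) is asking
        // which character set we want its notifications in.
        if (lParam == NF_QUERY)
            return NFR_UNICODE; // or NFR_ANSI to get the (A) notifications
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```

Returning NFR_ANSI here would produce TTN_NEEDTEXTA instead; the point is that the reply, not the build's character set, decides which flavor arrives.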

With an MFC ANSI app (that handles Unicode data), I had this issue with CStatic-derived classes and tooltips, where I was getting TTN_NEEDTEXTA instead of, in my case, the desired TTN_NEEDTEXTW.
Using the accepted answer, I managed to get TTN_NEEDTEXTW:
BEGIN_MESSAGE_MAP(CStaticDerived, CStatic)
    ON_WM_NOTIFYFORMAT()
    ON_NOTIFY_EX_RANGE(TTN_NEEDTEXTW, 0, 0xFFFF, OnTTNeedText)
END_MESSAGE_MAP()

UINT CStaticDerived::OnNotifyFormat(CWnd *pWnd, UINT nCommand)
{
    if (pWnd->m_hWnd == AfxGetModuleThreadState()->m_pToolTip->m_hWnd) {
        // want TTN_NEEDTEXTW for tooltips
        return NFR_UNICODE;
    }
    return __super::OnNotifyFormat(pWnd, nCommand);
}

Related

Translate sequences of virtual keycodes into the resulting character message

My understanding is that TranslateMessage collates a sequence of key events and adds a WM_CHAR message to the queue if the sequence results in a character. So on my computer, when I press Alt+1234 (six relevant events/messages, including releasing the Alt), the single character "Ê" comes out (in certain places).
Let's say I have a sequence of virtual key codes and related keypress data generated from the LL keyboard hook. Is there some way of using the Windows OS logic to translate this sequence into a real character? For example, could I construct contrived MSG structures, call TranslateMessage on them, and then catch the ensuing WM_CHAR events? That seems well outside Windows' expectations; I haven't tried it yet, but it seems like it could cause all kinds of subtle problems.
The cleanest solution I can think of so far is just to re-implement the logic myself to figure out the characters from the virtual codes. This is unfortunate of course since Windows internals already seem to know how to do this! Is there a better way?
I am aware of the existence of MapVirtualKeyA but this does not seem to handle a sequence of virtual key codes.
I am also aware that it is possible to hook all GetMessage calls, which could be used just to grab the WM_CHAR messages from every process. However, this seems an extremely heavy solution: I would need a separate DLL (unlike for the WH_KEYBOARD_LL hook) and then some sort of IPC to send the characters back to my host process. Also, MSDN explicitly says you should avoid global hooks for anything other than debugging, and I need this to work on production machines.
I am also aware of KeysConverter in .NET (I am fine with using .NET for this), but again this does not seem to deal with sequences of virtual keys like the one given above.

Since TranslateMessage() returns nonzero unconditionally, how can I tell, either before or after the fact, that a translation has occurred?

This is a continuation from What is the correct, modern way to handle arbitrary text input in a custom control on Windows? WM_CHAR? IMM? TSF?.
So after experimenting with a non-IME layout (US English), a non-TSF IME (the Japanese FAKEIME from the Windows XP DDK), and a TSF text service (anything that comes with Windows 7), it appears that if the active input processor profile is not a TSF text service (that is, it is a TF_PROFILETYPE_KEYBOARDLAYOUT), I'll still have to handle keystrokes and WM_CHAR messages to do text input.
My problem is that my architecture needs a way to be told that it can ignore the current key message because it was translated into a text input message. It does not care whether this happens before or after the translation; it just needs to know that such a translation will or has happened. Or in pseudocode terms:
// if I can suppress WM_CHAR generation and synthesize it myself
// (including if the translation is just dead keys)
case WM_KEYDOWN:
case WM_SYSKEYDOWN:
    if (WillTranslateMessage())
        InsertChar(GenerateEquivalentChar());
    else
        HandleRawKeyEvent();
    break;

// if I can know whether a WM_CHAR was generated (or will be
// generated; for instance, in the case of dead keys)
case WM_KEYDOWN:
case WM_SYSKEYDOWN:
    if (!DidTranslateMessage())
        HandleRawKeyEvent();
    break;
case WM_CHAR:
case WM_SYSCHAR:
    InsertChar(wParam);
    break;
The standard way of handling text input, either from a keyboard or through a non-TSF IME, is to let TranslateMessage() do the WM_KEYDOWN-to-WM_CHAR translation. However, there's a problem: MSDN says
If the message is WM_KEYDOWN, WM_KEYUP, WM_SYSKEYDOWN, or WM_SYSKEYUP, the return value is nonzero, regardless of the translation.
which means that I cannot use it to determine if a translation has occurred.
After reading some Michael Kaplan blog posts, I figured I could use ToUnicode() or ToUnicodeEx() to do the conversion myself, passing in the state array from GetKeyboardState(). The wine source code seems to agree, but it has two special cases that I'm not sure if they are wine-specific or need to be done on real Windows as well:
VK_PACKET — generates a WM_CHAR directly out of the message's LPARAM
VK_PROCESSKEY — calls a function ImmTranslateMessage(), which seems to be either a wine-specific function or an undocumented imm32.dll function; I can't tell which is true
And wine also does nothing with WM_KEYUP and WM_SYSKEYUP; again, I don't know if this is true for wine only.
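The conversion described above (ToUnicode() plus the state array from GetKeyboardState()) might look like this minimal sketch; TranslateKeyDown is a hypothetical helper name, and the VK_PACKET/VK_PROCESSKEY special cases are deliberately ignored:

```cpp
#include <windows.h>

// Translate a WM_KEYDOWN/WM_SYSKEYDOWN into UTF-16 code units.
// Returns the count produced: 0 = no character for this keystroke,
// negative = a dead key was stored in the kernel-side keyboard state.
int TranslateKeyDown(const MSG &msg, WCHAR *buf, int bufLen)
{
    BYTE state[256];
    if (!GetKeyboardState(state)) // snapshot of the thread's key state
        return 0;
    UINT scanCode = (UINT)((msg.lParam >> 16) & 0xFF);
    return ToUnicode((UINT)msg.wParam, scanCode, state, buf, bufLen, 0);
}
```

Note that calling ToUnicode() can itself mutate the dead-key state, which is one reason doing this in parallel with TranslateMessage() is delicate.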
But do I even need to worry about these cases in a program that uses TSF? And if I do, what's the "official" way to do so? And even then, what would I do on WM_KEYUP/WM_SYSKEYUP; do I need to send those to ToUnicode() too? Do I even need to catch WM_KEYUPs in my windows specially if there was a WM_CHAR?
Or am I missing something that is not in any of the MSDN TSF samples that will allow me to just have TSF take care of the TF_PROFILETYPE_KEYBOARDLAYOUT processors? I thought TSF did transparent IME passthrough, but my experiment with the FAKEIME sample showed otherwise...? I see both Firefox and Chromium also check for TF_PROFILETYPE_KEYBOARDLAYOUT and even use ImmGetIMEFileName() to see if the keyboard layout is backed by an IME or not, but I don't know if they actually take care of input themselves in these cases...
My minimum version right now is Windows 7.
Thanks.
UPDATE: The original version of this question included needing to know about associated WM_KEYUPs; on second look through my equivalent code on other platforms, this won't be necessary after all, except for the details of TranslateMessage(), so I've adjusted the question accordingly. (On OS X you don't even give key-release events to the text input system; on GTK+ you do, but keypresses that insert characters don't seem to bother with releases, so they don't get handled anyway, at least for the input methods I've tried (there could be some that do...).) That being said, if I missed something, I added another sub-question.
In general, it's not a good idea to try to duplicate Windows internals. It's tedious, error-prone, and likely to change without notice.
The edit controls that I have source access to pick off arrow keys (and other specific keys) in the WM_KEYDOWN handler and pass everything else off to the default handler, which will (eventually) generate WM_CHAR or TSF input calls (if your control supports TSF, which it should).
You would still need WM_CHAR in the case where there is no TSF handler involved. However, you can always have your WM_CHAR handler call your ITextStoreACP::InsertTextAtSelection method.

What's the preferred way for notifying a parent window from a customized control?

I have custom Windows controls that superclass standard ones. I would like my custom controls to notify their parent window of certain events. What's the best practice for doing so?
Send the parent window a window message in the WM_USER or WM_APP range. This won't work since the values could collide if another child control tried the same thing.
Send the parent window WM_NOTIFY. This seems like the right thing to do, but since I'm extending a standard Windows control, how can I ensure that the notification code I use won't collide with one normally sent by the base class (now or in the future)?
Send the parent window a window message from RegisterWindowMessage. This should be sufficient to avoid unintentional collisions, but Microsoft recommends using it only for inter-process messages.
Have the control provide a mechanism for the application to specify what WM_APP message to use for notifications. This seems like the only robust approach, but it also feels a bit like overkill. (Or, instead of specifying a window message, I suppose that the application could pass down a function pointer.)
I've seen a similar question, but the sole answer there is tied to MFC and doesn't really address avoiding collisions.
What do other people usually do? Do they use one of the first three approaches and not worry about it? I'd like my controls to be suitable for broader consumption outside of my application, so I'd also prefer using standard Win32.
Edit: Tried to clarify what I'm looking for.
Since you are superclassing an existing window class and augmenting its behaviour, then you are correct to worry about collisions with existing messages. Because of that I feel that you have to use a message in the WM_APP range. You could equally well use RegisterWindowMessage but I agree that is overkill.
So I noticed that the notification code ranges defined in CommCtrl.h all look like:
#define NM_FIRST    (0U -    0U)        // generic to all controls
#define NM_LAST     (0U -   99U)
...
#define TRBN_FIRST  (0U - 1501U)        // trackbar
#define TRBN_LAST   (0U - 1519U)
So Microsoft's common controls at least have defined ranges (and are likely to always be large unsigned values). Therefore if I super- or subclass standard controls and use notification codes incrementing from 0, I think that I should be safe against current and future versions of Windows.
(If I were deriving from third-party controls, then those third-party controls would need to define their own reserved ranges. Otherwise all bets would be off.)

Finding WndProc Address

How can I find the address of a WndProc (of a window belonging to another process)? Even if I inject a DLL and try to find it with GetClassInfoEx(), GetWindowLong(), or GetWindowLongPtr(), I always get values like 0xffff08ed, which is definitely not an executable address. This matches MSDN: "... the address of the window procedure, or a handle representing the address of the window procedure."
Unfortunately that is not good enough for me; I need the actual address. Spy++ does the job right most of the time (but even that sometimes fails), so it should be possible. Thanks.
[EDIT:] Kudos to Chris Becke for providing a super fast, and correct solution to my little problem!
Perhaps you are being stymied because you are asking for the wrong version of the windowproc.
Window procs, like applications, come in two flavors: ANSI and Unicode. Windows cannot return a raw pointer to an ANSI window proc to a Unicode application, or vice versa, as the caller would attempt to invoke it with the wrong string type.
So, there is no GetWindowLongPtr function. It's a macro that resolves to the two "real" functions the Windows API provides: GetWindowLongPtrA and GetWindowLongPtrW. If the window is a Unicode window and GetWindowLongPtrA is called, Windows will return a handle instead of the raw pointer, so that it can intercept calls (made via CallWindowProc) and marshal the strings from ANSI to Unicode. The same applies to the opposite conversion.
Even if you call the correct function, you still might get a handle back: it's entirely possible that ANSI code has subclassed a Unicode window, so the window proc has been completely replaced by one of the CallWindowProc handles.
In that case - tough luck I guess.
To extend Chris Becke's answer (which solved my problem, thanks!):
So, there is no GetWindowLongPtr function. It's a macro that resolves to the two "real" functions the Windows API provides: GetWindowLongPtrA and GetWindowLongPtrW. If the window is a Unicode window and GetWindowLongPtrA is called, Windows will return a handle instead of the raw pointer, so that it can intercept calls (made via CallWindowProc) and marshal the strings from ANSI to Unicode. The same applies to the opposite conversion.
You can check whether the window in question is a Unicode or an ANSI window by calling the IsWindowUnicode function. Using this information, you can determine at runtime which GetWindowLongPtr function to call.
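Putting the two answers together, a minimal sketch (GetRawWndProc is a hypothetical helper name):

```cpp
#include <windows.h>

// Fetch the window procedure pointer for hwnd, asking for the flavor
// that matches the window itself so the system has no reason to hand
// back a marshaling handle instead of the raw pointer.
WNDPROC GetRawWndProc(HWND hwnd)
{
    if (IsWindowUnicode(hwnd))
        return (WNDPROC)GetWindowLongPtrW(hwnd, GWLP_WNDPROC);
    return (WNDPROC)GetWindowLongPtrA(hwnd, GWLP_WNDPROC);
}
```

As the answer notes, a mixed subclassing chain can still yield a handle, in which case there is no raw pointer to recover.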

Is there a way to get SendInput to work with an application using GDK?

I have an application that can successfully inject keyboard input using the SendInput API with the KEYEVENTF_UNICODE flag set. This causes WM_KEYUP and WM_KEYDOWN messages to be generated with the VK code E7 (VK_PACKET), which gets appropriately translated into the correct WM_CHAR message. This works in all the applications I have tried except for Pidgin, which uses GDK. GDK seems to look only for WM_KEYUP messages, and since the ones being generated here don't carry any indication of the input character (only the WM_CHAR does), the input is ignored. Is there a way I could get around this? I haven't had much luck using SendInput without the KEYEVENTF_UNICODE flag.
When I had a similar problem, I used the clipboard as a workaround. A better way would be to use WM_CHAR; if I find a way to send Unicode characters with WM_CHAR, I'll update my answer. Since GTK+ is open source, you could also contribute a fix upstream (I'm a beginner with C).
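As a sketch of that WM_CHAR fallback: the wParam of WM_CHAR already carries a UTF-16 code unit, so it can be posted directly to the target window. hwndTarget is a hypothetical handle obtained elsewhere (e.g. via FindWindow), and controls that derive text from key events rather than WM_CHAR may still ignore this:

```cpp
#include <windows.h>

// Deliver a single UTF-16 code unit straight to a window as WM_CHAR,
// bypassing key-down/key-up translation entirely.
void PostUnicodeChar(HWND hwndTarget, WCHAR ch)
{
    PostMessageW(hwndTarget, WM_CHAR, (WPARAM)ch, 0);
}
```

Unlike SendInput, this targets one specific window rather than whatever has focus, which is also why it sidesteps the VK_PACKET translation that GDK ignores.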
