Programmatically implement Mac long-press accent popup

The application I'm working on uses a custom text editor. The problem is that it therefore doesn't employ the Mac's now-standard long-press key accent popup, i.e., holding down 'A' will produce "aaaaaaaaa" instead of the "à á â ä..." window.
Anyone know if it's possible to programmatically call/otherwise implement that accent popup?

Handling key events so that they interact properly with the system is discussed in the Creating a Custom Text View section of the Cocoa Text Architecture Guide. You should also familiarize yourself with the Handling Key Events chapter of the Cocoa Event Handling Guide (although it's a bit outdated; in particular, it refers to the deprecated NSTextInput protocol, which has been replaced by NSTextInputClient).
The basic gist is that you should send key events through either -[NSTextInputContext handleEvent:] or -[NSResponder interpretKeyEvents:], implement action methods on your view, and also have your view class adopt and implement the NSTextInputClient protocol.
You acquire a reference to the appropriate NSTextInputContext object from the inputContext property of NSView.
Sending the key events through one of those handler methods is how that press-and-hold feature gets activated. The NSTextInputClient protocol methods are how it ultimately interacts with your text view's document model. When the user selects a character from the popover, the feature uses that protocol to actually replace the initial character with the final one.
This will also allow your text view to handle Asian input methods.
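As a rough sketch of the wiring described above (the class name is hypothetical, and a real implementation must provide the full NSTextInputClient protocol, not just the two methods shown):

```objc
#import <Cocoa/Cocoa.h>

@interface MyTextView : NSView <NSTextInputClient>
@end

@implementation MyTextView

- (void)keyDown:(NSEvent *)event {
    // Route the event through the input context; this is what lets the
    // press-and-hold accent popover (and Asian input methods) activate.
    // The context calls back via insertText:replacementRange: and
    // doCommandBySelector: as appropriate.
    if (![[self inputContext] handleEvent:event]) {
        [super keyDown:event];
    }
}

// Called by the system, e.g. when the user picks "é" from the popover.
// replacementRange identifies the characters to replace (the plain "e").
- (void)insertText:(id)string replacementRange:(NSRange)replacementRange {
    // Replace that range in your document model with `string`
    // (which may be an NSString or an NSAttributedString).
}

- (void)doCommandBySelector:(SEL)selector {
    // Action selectors like moveBackward: arrive here; dispatch them
    // to your own editing methods.
    [super doCommandBySelector:selector];
}

// The remaining NSTextInputClient methods (setMarkedText:…, selectedRange,
// firstRectForCharacterRange:…, etc.) must also be implemented.
@end
```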

Related

Setting the text and background colours of a superclassed listbox control

I am in the process of writing a superclassed version of the Windows LISTBOX common control to add extra functionality.
A standard control sends the WM_CTLCOLORLISTBOX message to its parent, which allows both its text and background colours to be specified at run time within an appropriate message handler. However, WM_CTLCOLORLISTBOX is not sent to the control itself, and therefore cannot be encapsulated and handled internally.
The scenario I am attempting to address is to change the background and text colours depending on the control's enabled/disabled state. The standard behaviour of leaving the listbox background the same shade regardless of its state looks ugly and inconsistent to me. Is there another way to set these values within the encapsulation, yet hand-off all other painting tasks to the base-class window procedure?
I wondered about using SetClassLongPtr(). However, not only would this not address the text colour but if I understand rightly it would change the background for ALL controls of that class currently in existence and not the specific control whose state has changed.
The answer should be obvious: since WM_CTLCOLORLISTBOX is sent to the parent window, you have to subclass the parent window in order to receive the message. There is no getting around that. However, some wrapper UI frameworks, like VCL, are designed to forward such messages to the control that generated them, so controls can handle their own messages. But if you are not using a wrapper framework and are working with Win32 directly, you have to handle the parent's messages yourself. Refer to MSDN for details about subclassing a given HWND:
Subclassing Controls
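A minimal sketch of that approach using the comctl32 subclassing API, here switching colours on the control's enabled/disabled state as the question asks (hListBox is an assumed handle from your own code):

```c
#include <windows.h>
#include <commctrl.h>
#pragma comment(lib, "comctl32.lib")

// Subclass procedure installed on the listbox's PARENT, since
// WM_CTLCOLORLISTBOX is only ever sent to the parent window.
static LRESULT CALLBACK ParentSubclassProc(HWND hWnd, UINT msg,
                                           WPARAM wParam, LPARAM lParam,
                                           UINT_PTR idSubclass,
                                           DWORD_PTR refData)
{
    if (msg == WM_CTLCOLORLISTBOX && (HWND)lParam == (HWND)refData) {
        HDC hdc = (HDC)wParam;
        if (!IsWindowEnabled((HWND)lParam)) {
            // Disabled: grey text on the button-face colour.
            SetTextColor(hdc, GetSysColor(COLOR_GRAYTEXT));
            SetBkColor(hdc, GetSysColor(COLOR_BTNFACE));
            return (LRESULT)GetSysColorBrush(COLOR_BTNFACE);
        }
        // Enabled: normal window colours.
        SetTextColor(hdc, GetSysColor(COLOR_WINDOWTEXT));
        SetBkColor(hdc, GetSysColor(COLOR_WINDOW));
        return (LRESULT)GetSysColorBrush(COLOR_WINDOW);
    }
    return DefSubclassProc(hWnd, msg, wParam, lParam);
}

// Installation, e.g. right after creating the listbox:
//   SetWindowSubclass(GetParent(hListBox), ParentSubclassProc, 1,
//                     (DWORD_PTR)hListBox);
```

Passing the listbox handle as the subclass reference data keeps the handler specific to the one control whose state changes, avoiding the class-wide effect of SetClassLongPtr().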

Accessibility support for a Win32 control based on RichEdit

I have implemented a custom Win32 control based on RichEdit. I insert custom OLE objects into the rich text using the IRichEditOle::InsertObject method. The custom objects just show some text and provide some additional functionality.
This control of mine is similar to Outlook's control that allows the user to enter email addresses.
I have a problem with accessibility support. I want to implement the same functionality as Outlook: screen readers (for example, Narrator or Thunder Storm) should read all the text, including the content of the inserted OLE objects.
I have tried implementing the IAccessible interface, which is returned in response to the WM_GETOBJECT message.
I return reasonable values from get_accRole and get_accName; the accessible role is 'editable text'. I also return a string representing the whole control content from get_accValue.
I tested my implementation using the Inspect.exe application from the Windows Kits. I can see the accessible role, name, and value that I provide in the IAccessible methods.
THE PROBLEM IS: screen readers do not read the whole content of the control. They read only the text typed into the control, but not the content of the inserted objects.
I suspect that screen readers do not use the IAccessible interface for the RichEdit control.
My question to the community: does anybody have experience with accessibility support for a RichEdit control with inserted OLE objects? What should I provide for the screen reader?
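For reference, a minimal sketch of the WM_GETOBJECT wiring the question describes (g_pAccessible is a hypothetical pointer to the custom IAccessible implementation, which would need to aggregate the richedit's text with each embedded object's text):

```c
#include <windows.h>
#include <oleacc.h>

static IAccessible *g_pAccessible; /* created and set elsewhere */

LRESULT CALLBACK ControlWndProc(HWND hWnd, UINT msg,
                                WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_GETOBJECT:
        if ((LONG)lParam == OBJID_CLIENT && g_pAccessible != NULL) {
            // LresultFromObject marshals the interface back to the
            // requesting client (e.g. a screen reader).
            return LresultFromObject(&IID_IAccessible, wParam,
                                     (LPUNKNOWN)g_pAccessible);
        }
        break;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```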

Map NSEvent keyCode to virtual key code

NSEvent keyCode gives a keyboard scan code, which is a hardware-specific code representing the physical key. I want to convert the scan code to a virtual key code, which is the logical key based on the user's keyboard layout (QWERTY, AZERTY, etc).
In Windows I can do this via MapVirtualKey. What is the OS X equivalent?
The virtual key code is precisely not based on the user's keyboard layout. It indicates which key was pressed, not what character that key would produce nor how it's labeled.
For example, kVK_ANSI_A (from Carbon/HIToolbox/Events.h, value 0x00) does not mean the key which produces the 'A' character, it means the key which is in the position that the 'A' key is in an ANSI standard keyboard. If a French keyboard layout is active, that key will produce 'Q'. If the physical keyboard is a French keyboard, that key will probably be labeled 'Q', too.
So, the virtual key code is sort of akin to a scan code, but from an idealized, standard keyboard. It is, as noted, hardware-independent. It is also independent of the keyboard layout.
To translate from the virtual key code to a character, you can use UCKeyTranslate(). You need the 'uchr' data for the current keyboard layout. You can get that using TISCopyCurrentKeyboardLayoutInputSource() and then TISGetInputSourceProperty() with kTISPropertyUnicodeKeyLayoutData as the property key.
You also need the keyboard type code. I believe it's still supported to use LMGetKbdType() to get that, even though it's no longer documented except in the legacy section. If you don't like that, you can obtain a CGEvent from the NSEvent, create a CGEventSource from it using CGEventCreateSourceFromEvent() and use CGEventSourceGetKeyboardType(), or call CGEventGetIntegerValueField() on the event with kCGKeyboardEventKeyboardType, to get the keyboard type.
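A sketch of that translation path (the API calls are the real Carbon/TIS functions named above; the wrapper function itself is hypothetical, and dead keys and modifiers are deliberately ignored):

```c
#include <Carbon/Carbon.h>

// Translate a virtual key code into the character the current keyboard
// layout would produce, with no modifiers held down.
static UniChar CharForVirtualKey(CGKeyCode virtualKey)
{
    TISInputSourceRef source = TISCopyCurrentKeyboardLayoutInputSource();
    CFDataRef layoutData = (CFDataRef)TISGetInputSourceProperty(
        source, kTISPropertyUnicodeKeyLayoutData);
    const UCKeyboardLayout *layout =
        (const UCKeyboardLayout *)CFDataGetBytePtr(layoutData);

    UInt32 deadKeyState = 0;
    UniChar chars[4] = {0};
    UniCharCount length = 0;
    UCKeyTranslate(layout,
                   virtualKey,
                   kUCKeyActionDown,
                   0,                          /* no modifier keys */
                   LMGetKbdType(),             /* keyboard type code */
                   kUCKeyTranslateNoDeadKeysMask,
                   &deadKeyState,
                   sizeof(chars) / sizeof(chars[0]),
                   &length,
                   chars);
    CFRelease(source);
    return length > 0 ? chars[0] : 0;
}
```

With a French layout active, passing kVK_ANSI_A here should yield 'q', illustrating the key-position vs. character distinction described above.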
Of course, it's much easier to simply use -[NSEvent characters] or -[NSEvent charactersIgnoringModifiers]. Or, if you're implementing a text view, send key-down events to -[NSResponder interpretKeyEvents:] (as discussed in Cocoa Event Handling Guide: Handling Key Events) or -[NSTextInputContext handleEvent:] (as discussed in Cocoa Text Architecture Guide: Text Editing). Either of those will call back to the view with the appropriate action selector, like moveBackward:, or with -insertText: if the keystroke (in the context of recent events and the input source) would produce text.
According to the NSEvent documentation, -[NSEvent keyCode] returns the hardware-independent virtual key code.

How do SIP auto-corrections get communicated to a TextBox?

I am trying to debug a tricky situation with auto-correction not getting correctly handled in a TextBox, but I am stuck:
I cannot find how the tapping of an auto-correction suggestion in the SIP gets communicated to the TextBox.
I have traced the KeyUp, KeyDown, TextInput, TextInputStart and TextInputUpdate events, but they do not seem to be involved in the update of the Text in the TextBox object.
Background:
When a language other than Greek is used, auto-correction works as it should for a TextBox in my app. However, when the language is set to Greek, nothing happens when tapping on the suggested word ... On the other hand, in TextBoxes in standard phone apps (e.g. adding text in the Notes section of a contact) Greek auto-correction works perfectly. So, my first guess is that there is something wrong with the TextBox rather than with the SIP. My plan is to subclass TextBox, modifying only its auto-correction handling parts.
Any help would be much appreciated,
Gerasimos
Update:
I made a few tests and this seems to be a problem in all non standard apps. Specifically, I tested the eBay and SkyMap applications and in both cases English auto-corrections work, while Greek do not.
The problem is easy to reproduce:
put a textbox in an application (with an inputScope that has auto-corrections enabled)
use a Greek keyboard layout
tap 1-2 random letters.
tap on one of the proposed auto-corrections. Only the final space is introduced, and in cases where the cursor is between two spaces (as I preferred to test it) nothing happens.
So, I believe that there is a bug somewhere in the framework part and not in the application code. Now, if we could find how this auto-correction tapping is communicated to the TextBox... :-)
You are not able to intercept the SIP directly. You can edit the content of the TextBox after the value has been entered/changed. Alternatively, you can implement something like an autosuggest if your intention is to change the content rather than the visuals.

Detect if user presses button on a Wacom tablet

I was wondering if it is possible in Cocoa/Carbon to detect whether a key combination (e.g. Ctrl+Z) comes from a Wacom button or the keyboard itself.
Thanks
best
xonic
I can only assume a Wacom tablet's driver is faking keyboard events that are bound to specific buttons. If this is the case, I don't think you'll be able to distinguish them, as -pointingDeviceID, -tabletID, and friends are only valid for mouse events (which a keyboard event - faked or real - is not).
For the "Express Keys", Wacom provides custom events with the driver version 6.1+
From the Wacom developer docs:
WacomTabletDriver version 6.1.0 provides a set of Apple Events that enable applications to take control of tablet controls. There are three types of tablet controls: ExpressKeys, TouchStrip, and TouchRing. Each control has one or more functions associated with it. Do not make assumptions about the number of controls of a specific tablet or the number of functions associated with a control. Always use the APIs to query for the information.
An application needs to do the following to override tablet controls:
Create a context for the tablet of interest.
Register with the distributed notification center to receive the overridden controls’ data from user actions.
Query for number of controls by control type (ExpressKeys, TouchStrip, or TouchRing).
Query for number of functions of each control.
Enumerate the functions to find out which are available for override.
Set override flag for a control function that’s available.
Handle the control data notifications to implement functionality that the application desires for the control function.
Must destroy the context upon the application’s termination or when the application is done with it.
To create an override context for a tablet, send to the Tablet Driver an Apple Event of class / type {kAECoreSuite, kAECreateElement} with the keyAEObjectClass Param of the Apple Event filled with a DescType of cContext, the keyAEInsertHere Param filled with an object specifier of the index of the tablet (cWTDTablet) and the keyASPrepositionFor Param filled with a DescType of pContextTypeBlank.
To destroy a context, send to the Tablet Driver an Apple Event of class / Type {kAECore, kAEDelete} with the keyDirect Apple Event Parameter filled with an object specifier of the context’s (cContext) uniqueID (formUniqueID).
Most of this only makes sense in context of the documentation page where lots of C structs and helper functions are defined for both Carbon and Cocoa. (This particular part in the docs is pretty far down.)