Detect if user presses button on a Wacom tablet - Cocoa

I was wondering if it is possible in Cocoa/Carbon to detect whether a key combination (e.g. Ctrl + Z) comes from a Wacom button or the keyboard itself.
Thanks
best
xonic

I can only assume a Wacom tablet's driver is faking keyboard events that are bound to specific buttons. If that is the case, I don't think you'll be able to distinguish them, since -pointingDeviceID, -tabletID, and friends are only valid for mouse events (which a keyboard event, faked or real, is not).

For the "Express Keys", Wacom provides custom events with the driver version 6.1+
From the Wacom developer docs:
WacomTabletDriver version 6.1.0 provides a set of Apple Events that enable applications to take control of tablet controls. There are three types of tablet controls: ExpressKeys, TouchStrip, and TouchRing. Each control has one or more functions associated with it. Do not make assumption of the number of controls of a specific tablet or the number of functions associated with a control. Always use the APIs to query for the information.
An application needs to do the following to override tablet controls:
1. Create a context for the tablet of interest.
2. Register with the distributed notification center to receive the overridden controls' data from user actions.
3. Query for the number of controls by control type (ExpressKeys, TouchStrip, or TouchRing).
4. Query for the number of functions of each control.
5. Enumerate the functions to find out which are available for override.
6. Set the override flag for a control function that's available.
7. Handle the control data notifications to implement whatever functionality the application desires for the control function.
8. Destroy the context upon the application's termination, or when the application is done with it.
To create an override context for a tablet, send the Tablet Driver an Apple Event of class/type {kAECoreSuite, kAECreateElement} with the keyAEObjectClass param filled with a DescType of cContext, the keyAEInsertHere param filled with an object specifier for the index of the tablet (cWTDTablet), and the keyASPrepositionFor param filled with a DescType of pContextTypeBlank.
To destroy a context, send the Tablet Driver an Apple Event of class/type {kAECoreSuite, kAEDelete} with the keyDirectObject parameter filled with an object specifier of the context's (cContext) uniqueID (formUniqueID).
Most of this only makes sense in the context of the documentation page, where lots of C structs and helper functions are defined for both Carbon and Cocoa. (This particular part is pretty far down in the docs.)
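To make the event construction concrete, here is a rough, untested Objective-C sketch of the context-creation call. It assumes the constants (cContext, cWTDTablet, pContextTypeBlank) come from Wacom's TabletAEDictionary.h header, and the driver's bundle identifier below is a guess; check the Wacom docs for the real target.

#import <Cocoa/Cocoa.h>
#import <Carbon/Carbon.h>
#import "TabletAEDictionary.h" // Wacom's header: cContext, cWTDTablet, pContextTypeBlank, ...

// Build an object specifier addressing the tablet at the given 1-based index.
static NSAppleEventDescriptor *TabletSpecifier(SInt32 index)
{
    NSAppleEventDescriptor *spec = [NSAppleEventDescriptor recordDescriptor];
    [spec setDescriptor:[NSAppleEventDescriptor descriptorWithTypeCode:cWTDTablet]
             forKeyword:keyAEDesiredClass];
    [spec setDescriptor:[NSAppleEventDescriptor descriptorWithEnumCode:formAbsolutePosition]
             forKeyword:keyAEKeyForm];
    [spec setDescriptor:[NSAppleEventDescriptor descriptorWithInt32:index]
             forKeyword:keyAEKeyData];
    [spec setDescriptor:[NSAppleEventDescriptor nullDescriptor]
             forKeyword:keyAEContainer];
    return [spec coerceToDescriptorType:typeObjectSpecifier];
}

static OSStatus CreateTabletContext(SInt32 tabletIndex)
{
    // Assumed bundle ID for the Wacom driver -- verify against the docs.
    NSAppleEventDescriptor *driver = [NSAppleEventDescriptor
        descriptorWithBundleIdentifier:@"com.wacom.TabletDriver"];

    NSAppleEventDescriptor *event = [NSAppleEventDescriptor
        appleEventWithEventClass:kAECoreSuite
                         eventID:kAECreateElement
                targetDescriptor:driver
                        returnID:kAutoGenerateReturnID
                   transactionID:kAnyTransactionID];

    // {kAECoreSuite, kAECreateElement}: "create a cContext for tablet N, of
    // blank type" -- the three params the docs describe.
    [event setParamDescriptor:[NSAppleEventDescriptor descriptorWithTypeCode:cContext]
                   forKeyword:keyAEObjectClass];
    [event setParamDescriptor:TabletSpecifier(tabletIndex)
                   forKeyword:keyAEInsertHere];
    [event setParamDescriptor:[NSAppleEventDescriptor descriptorWithTypeCode:pContextTypeBlank]
                   forKeyword:keyASPrepositionFor];

    AppleEvent reply = { typeNull, NULL };
    OSStatus err = AESendMessage([event aeDesc], &reply, kAEWaitReply, kAEDefaultTimeout);
    // On success the reply's direct object carries the new context's unique
    // ID; keep it for the matching {kAECoreSuite, kAEDelete} at teardown.
    AEDisposeDesc(&reply);
    return err;
}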


Programmatically implement Mac long-press accent popup

The application I'm working on uses a custom text editor. The problem is that it therefore doesn't employ the Mac's now-standard long-press key accent popup, i.e., holding down 'A' will produce "aaaaaaaaa" instead of the "à á â ä..." window.
Anyone know if it's possible to programmatically call/otherwise implement that accent popup?
Properly handling key events so that they interact properly with the system is discussed in the Creating a Custom Text View section of the Cocoa Text Architecture Guide. You should also familiarize yourself with the Handling Key Events chapter of the Cocoa Event Handling Guide (although it's a bit outdated; in particular, it refers to the deprecated NSTextInput protocol which has been replaced by NSTextInputClient).
The basic gist is that you should send key events through either -[NSTextInputContext handleEvent:] or -[NSResponder interpretKeyEvents:], implement action methods on your view, and also have your view class adopt and implement the NSTextInputClient protocol.
You acquire a reference to the appropriate NSTextInputContext object from the inputContext property of NSView.
Sending the key events through one of those handler methods is how that press-and-hold feature gets activated. The NSTextInputClient protocol methods are how it ultimately interacts with your text view's document model. When the user selects a character from the popover, the feature uses that protocol to actually replace the initial character with the final one.
This will also allow your text view to handle Asian input methods.
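As a minimal, untested sketch of that plumbing, assuming a custom NSView subclass named MyTextView (most NSTextInputClient methods are stubbed here and would need real implementations against your document model):

#import <Cocoa/Cocoa.h>

@interface MyTextView : NSView <NSTextInputClient>
@end

@implementation MyTextView

- (BOOL)acceptsFirstResponder { return YES; }

// Route raw key events through the input context. This is what lets the
// system run press-and-hold (and input methods) before your own handling.
- (void)keyDown:(NSEvent *)event
{
    if (![self.inputContext handleEvent:event]) {
        [super keyDown:event];
    }
}

// Called when the user picks a character from the accent popover: `string`
// replaces the provisional character at `replacementRange`.
- (void)insertText:(id)string replacementRange:(NSRange)replacementRange
{
    // Apply `string` (NSString or NSAttributedString) to your document model.
}

- (void)doCommandBySelector:(SEL)selector { [super doCommandBySelector:selector]; }

// Stubs -- a real text view must answer these from its document model.
- (void)setMarkedText:(id)string selectedRange:(NSRange)sel replacementRange:(NSRange)repl {}
- (void)unmarkText {}
- (NSRange)selectedRange { return NSMakeRange(NSNotFound, 0); }
- (NSRange)markedRange { return NSMakeRange(NSNotFound, 0); }
- (BOOL)hasMarkedText { return NO; }
- (NSAttributedString *)attributedSubstringForProposedRange:(NSRange)r actualRange:(NSRangePointer)a { return nil; }
- (NSArray *)validAttributesForMarkedText { return @[]; }
- (NSRect)firstRectForCharacterRange:(NSRange)r actualRange:(NSRangePointer)a { return NSZeroRect; }
- (NSUInteger)characterIndexForPoint:(NSPoint)point { return 0; }

@end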

UWP MapControl: distinction between user- and app-manipulation

Within a UWP app containing a MapControl, is there a way to distinguish between a manipulation of the map made by the user (e.g. by pinch-to-zoom) and one made by the app itself (e.g. by calling mapControl.TrySetViewAsync(...))?
It doesn't seem like there's an event handler for that, right?
I already tried several (like LoadingStatusChanged and CenterChanged), but none of them differentiate between user and app manipulation.
You should be able to register to receive the TargetCameraChanged event, which fires whenever the map view changes. The MapTargetCameraChangedEventArgs it returns contains a ChangeReason property.
The ChangeReason property will be System, UserInteraction or Programmatic.
Map movements caused by calling APIs such as TrySetViewAsync(...) cause events that have ChangeReason == Programmatic, and movements caused by user actions such as pinch to zoom should have ChangeReason == UserInteraction.
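A minimal sketch in C# (MyMap here is an assumed name for the MapControl instance declared in your XAML):

using Windows.UI.Xaml.Controls.Maps;

// Subscribe once, e.g. in the page constructor.
MyMap.TargetCameraChanged += (MapControl sender, MapTargetCameraChangedEventArgs args) =>
{
    switch (args.ChangeReason)
    {
        case MapCameraChangeReason.UserInteraction:
            // pinch-to-zoom, pan, rotate... initiated by the user
            break;
        case MapCameraChangeReason.Programmatic:
            // TrySetViewAsync(...) and other API-driven moves
            break;
        case MapCameraChangeReason.System:
            // moves initiated by the control itself
            break;
    }
};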

Support for up to eleven mouse buttons?

I am writing a small proof of concept for detecting extra inputs across mice and keyboards on Windows. Is it possible, and how do I go about detecting input from a large number of buttons in the Windows API? From what I have read, there is only support for 5 buttons, but many mice have more than that. Is this even possible with the Windows API, or at all within the constraints of Windows?
You can use the Raw Input API to receive WM_INPUT messages directly from the mouse/keyboard driver. There are structure fields for the 5 standard mouse buttons (left, middle, right, x1, and x2). Beyond those, additional buttons are reported via vendor-specific data that you would have to handle as needed. The API gives you access to the raw values, but you will have to consult the vendor's driver documentation for how to interpret them. Sometimes extra buttons are actually reported as keyboard input rather than mouse input. (See the Raw Input sketch after the alternatives below.)
Or, try using the DirectInput API to interact with DirectInput devices to receive Mouse Data and Keyboard Data.
Or, you could use the XInput API, which is the successor of DirectInput. However, XInput is more limited than DirectInput, as it is designed primarily for interacting with the Xbox 360 controller, whereas DirectInput is designed to interact with any controller. See XInput and DirectInput for more details.
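Going back to the first option, here is a minimal, untested C sketch of the Raw Input route (it assumes you already have a window and a message loop; error handling is omitted):

#include <windows.h>

// Register to receive WM_INPUT messages for mice.
void RegisterRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;            // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;            // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK; // deliver input even when unfocused
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Call from your window procedure when msg == WM_INPUT.
void OnRawInput(LPARAM lParam)
{
    RAWINPUT raw;              // fixed buffer is large enough for mouse data
    UINT size = sizeof(raw);
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                    sizeof(RAWINPUTHEADER));
    if (raw.header.dwType == RIM_TYPEMOUSE) {
        USHORT flags = raw.data.mouse.usButtonFlags;
        if (flags & RI_MOUSE_BUTTON_4_DOWN) { /* x1 pressed */ }
        if (flags & RI_MOUSE_BUTTON_5_DOWN) { /* x2 pressed */ }
        // Buttons beyond the standard five arrive as vendor-specific data
        // (or as keyboard input) and need the vendor's documentation.
    }
}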
Very simple: use GetKeyState.
SHORT WINAPI GetKeyState(
  _In_ int nVirtKey
);
The logic is as follows:
1. Ask the user not to press any buttons.
2. Loop GetKeyState over all key codes 0-255.
3. Discard the keys that already report as pressed (some virtual keys read as pressed even when they are not; I don't know why).
4. Start a key-monitoring thread for the remaining key codes and save their states to a structure (a 25 ms pause between polling loops is enough).
5. Ask the user to press a button.
6. The monitoring array will then show whichever buttons the user pressed.
DirectInput and the rest are more useful for other input devices. For keyboard and mouse, GetKeyState is best.
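A rough C sketch of that polling logic follows. It swaps in GetAsyncKeyState, which reads the physical key state directly; GetKeyState only updates as the calling thread pumps messages, so it is the wrong call for a bare polling loop like this.

#include <windows.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    bool stuck[256] = { false };

    // Steps 1-3: with the user touching nothing, record the keys that
    // already report as down so they can be ignored later.
    for (int vk = 0; vk < 256; vk++)
        stuck[vk] = (GetAsyncKeyState(vk) & 0x8000) != 0;

    // Steps 4-6: poll the remaining key codes; ~25 ms per sweep is enough.
    for (;;) {
        for (int vk = 0; vk < 256; vk++) {
            if (stuck[vk]) continue;
            if (GetAsyncKeyState(vk) & 0x8000)
                printf("virtual-key 0x%02X is down\n", vk);
        }
        Sleep(25);
    }
}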

Swipe Gesture Recogniser using VoiceOver

I have a view with a few gesture recognisers (à la Clear). Should I add buttons only for VoiceOver users instead?
I thought about using the hint to say something like "3-finger swipe right to edit. Left to delete. Up to create a new one." but it seems like Apple discourages that. Even Apple uses "Double tap to edit" on text fields and such, and I have no idea why they discourage it.
A hint does not include the name of the action or gesture; it does not tell users how to perform the action, it tells users what will happen when that action occurs. Therefore, do not create hints such as "Tap to play the song," "Tapping purchases the item," or "Swipe to delete the item."
This is especially important because VoiceOver users can use VoiceOver-specific gestures to interact with elements in your application. If you name a different gesture in a hint, it would be very confusing.
Yes you should include alternate buttons.
You're misunderstanding the Apple disclaimer. The disclaimer refers to the fact that VoiceOver takes over the touch screen; once it does, it decides how to pass gestures to your application. As it works right now, to activate a button a user highlights it and then double-taps. VoiceOver doesn't need to stick to this (though it is highly likely to for some time). However, it is not a developer's job to inform users of this; VoiceOver does so through earcons, traits, and other instructions that are dependent on the assistive technology. If a developer were to include this information in the hint, it could be invalidated by a change in the AT, and then be inconsistent across device versions or other ATs such as braille displays.
Not only would you potentially be describing gestures that VoiceOver doesn't allow (given that it captures screen gestures), but even if you were to apply the "allows direct interaction" trait, you may be describing gestures that people with disabilities cannot perform. Either way, including another method of achieving the given interaction is the better solution.
Use custom actions defined on accessible elements instead of dedicated buttons for this purpose.
Moreover, I don't think it's a good idea to add application-specific VoiceOver gestures as you suggested with your hints: try to build your app around the VoiceOver standards that users are already used to.
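A minimal Objective-C sketch of the custom-actions approach; the class name and the two action methods are hypothetical stand-ins for whatever your swipe gestures trigger:

#import <UIKit/UIKit.h>

@interface TaskRowView : UIView  // hypothetical row view
@end

@implementation TaskRowView

// VoiceOver surfaces these through the actions rotor, so users never need
// the app's bespoke swipe gestures.
- (NSArray<UIAccessibilityCustomAction *> *)accessibilityCustomActions
{
    UIAccessibilityCustomAction *edit =
        [[UIAccessibilityCustomAction alloc] initWithName:@"Edit"
                                                   target:self
                                                 selector:@selector(activateEdit:)];
    UIAccessibilityCustomAction *del =
        [[UIAccessibilityCustomAction alloc] initWithName:@"Delete"
                                                   target:self
                                                 selector:@selector(activateDelete:)];
    return @[ edit, del ];
}

// Handlers return YES when the action succeeded.
- (BOOL)activateEdit:(UIAccessibilityCustomAction *)action
{
    [self beginEditing];
    return YES;
}

- (BOOL)activateDelete:(UIAccessibilityCustomAction *)action
{
    [self deleteRow];
    return YES;
}

// Hypothetical: whatever the swipe gestures normally invoke.
- (void)beginEditing { /* ... */ }
- (void)deleteRow    { /* ... */ }

@end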

block ALL keyboard access, mouse access and keyboard shortcut events

In order to block ALL keyboard access, mouse access, and keyboard-shortcut events in one of my projects, I:
1. Created a full-screen, transparent, borderless window in front of all other windows, but invisible.
2. Handled all keyboard and mouse events with a simple return in the window itself.
3. Made the window modal ([NSApp runModalForWindow:myWindow]) in order to block keyboard shortcuts.
4. Released the window only from the touchpad's gesture events.
But this guy made it look simple in a tiny app, MACIFIER:
How did he do it?
Not really sure if this would be usable, but you could use the program HotkeyNet (generally used for gaming, though I have had success using it in other ways) and map every single key/mouse action to do nothing. I did something similar, blocking access to a specific program with it, in about 20-30 minutes.
Not sure if it will help, but it might be the solution you need.
I believe you can use Quartz Event Services. In particular, have a look at CGEventTapCreate, and note the 4th parameter, which allows you to specify what kinds of events you'd like to intercept. The available kinds of events are listed in the CGEventType enum.
If you set your tap to be an active filter, returning NULL from the callback will delete the event.
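A minimal, untested sketch of such an active-filter tap; note that tapping keyboard events requires the process to be trusted for accessibility (or run as root):

#import <ApplicationServices/ApplicationServices.h>

static CFMachPortRef gTap = NULL;

static CGEventRef SwallowEvents(CGEventTapProxy proxy, CGEventType type,
                                CGEventRef event, void *refcon)
{
    // The system disables a tap that times out; turn it back on.
    if (type == kCGEventTapDisabledByTimeout ||
        type == kCGEventTapDisabledByUserInput) {
        if (gTap) CGEventTapEnable(gTap, true);
        return event;
    }
    return NULL; // active filter: returning NULL deletes the event
}

int main(void)
{
    CGEventMask mask = CGEventMaskBit(kCGEventKeyDown)
                     | CGEventMaskBit(kCGEventKeyUp)
                     | CGEventMaskBit(kCGEventFlagsChanged)
                     | CGEventMaskBit(kCGEventLeftMouseDown)
                     | CGEventMaskBit(kCGEventLeftMouseUp)
                     | CGEventMaskBit(kCGEventRightMouseDown)
                     | CGEventMaskBit(kCGEventRightMouseUp)
                     | CGEventMaskBit(kCGEventOtherMouseDown)
                     | CGEventMaskBit(kCGEventOtherMouseUp)
                     | CGEventMaskBit(kCGEventMouseMoved)
                     | CGEventMaskBit(kCGEventScrollWheel);

    gTap = CGEventTapCreate(kCGSessionEventTap, kCGHeadInsertEventTap,
                            kCGEventTapOptionDefault, // an active tap
                            mask, SwallowEvents, NULL);
    if (!gTap) return 1; // typically: not trusted for accessibility

    CFRunLoopSourceRef src =
        CFMachPortCreateRunLoopSource(kCFAllocatorDefault, gTap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CGEventTapEnable(gTap, true);
    CFRunLoopRun();
    return 0;
}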
