Different kinds of clicks in Mac OS?

I have just bought a Wacom Bamboo touch tablet. It works fine with all applications except the Twitter client, which gets a bit confused when I click on a link.
Is there a quick bit of code I can knock up / API I can call to see what kind of mouse events are being generated by the driver (just to satisfy my curiosity)?
To clarify: I'm not writing an app here... just trying to use a product and work out why it's not working properly.

Tablet events are somewhat different than mouse events. Specifically:
A [tablet] pointer event is an NSEvent object of type NSTabletPoint or
an object representing a mouse-down, mouse-dragged, or mouse-up event
with a subtype of NSTabletPointEventSubtype.
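
If you want to see for yourself which events the Bamboo driver generates, one option is a small listen-only CGEvent tap built as a command-line tool. This is a minimal sketch, not a polished utility; it logs each mouse event's type together with its kCGMouseEventSubtype field, which is the CGEvent-level counterpart of the NSEvent subtype quoted above (0 = plain mouse, 1 = tablet point, 2 = tablet proximity). On recent versions of Mac OS you will likely need to grant the tool Accessibility access before the tap receives anything.

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

/* Log the type and subtype of every mouse-related event that passes through. */
static CGEventRef logEvent(CGEventTapProxy proxy, CGEventType type,
                           CGEventRef event, void *refcon) {
    int64_t subtype = CGEventGetIntegerValueField(event, kCGMouseEventSubtype);
    printf("type %d  subtype %lld\n", (int)type, (long long)subtype);
    return event;   /* listen-only: pass the event through unchanged */
}

int main(void) {
    CGEventMask mask = CGEventMaskBit(kCGEventLeftMouseDown)
                     | CGEventMaskBit(kCGEventLeftMouseUp)
                     | CGEventMaskBit(kCGEventLeftMouseDragged)
                     | CGEventMaskBit(kCGEventTabletPointer);
    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly, mask,
                                         logEvent, NULL);
    if (tap == NULL) return 1;   /* tap creation fails without the right permissions */
    CFRunLoopSourceRef src = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();   /* Ctrl-C to quit */
    return 0;
}

Clicking a link in the Twitter client while this runs should show whether the clicks arrive as plain mouse events or as tablet-subtyped ones, which may be exactly what the client mishandles.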

Related

How to recognize mouse wheel in MAUI view for desktop application

How do I get notified of mouse wheel interaction for MacCatalyst and Windows on the MAUI platform?
Answer 1: Scrolling.
What do you want to do based on mouse wheel interaction? If you simply want to scroll, or to know when scrolling has occurred, then you can rely on ScrollView and other views that handle scrolling themselves, e.g. the ScrollView.Scrolled event.
Answer 2: General use of mouse scroll wheel.
Input functionality for mouse or keyboard has not yet been implemented in MAUI. Nor has a specification been finalized.
Here is one mouse proposal.
You could add a comment to that proposal requesting that mouse wheel support be included.
However, this might not be in the first release of MAUI, as the current emphasis is on stabilizing the functionality that is needed on all platforms (including mobile), some of which don't have mice.
In case anyone is wondering "shouldn't this be specified in .NET 6?" (and then MAUI would simply use it):
There are interactions between what is happening on the display (views or windows) and how mouse/keyboard input should be handled, so it makes sense to put that input handling in the same code base that draws to the screen; therefore MAUI is a good place for it.
Especially given that touch is part of MAUI.
Until then, the solution is to write a DependencyService on each platform that calls the platform APIs you need.
Surprisingly, I'm not finding one that anyone has written for the mouse on Windows and Mac.
Other than "implicitly", since a mouse can be used similarly to a touch device, and text can be typed on a keyboard. The point is that there is no API specific to functionality that only makes sense if you have a physical mouse (scroll wheel) or a physical keyboard (global keyboard hooks).
TBD: I'll look into this further.
The basic approach would be to look at what WinUI 3 uses as input APIs.
In a Windows desktop app, forward to those input APIs; write an adapter for the other platforms (Mac, Linux).
I'll see if Uno Platform or Avalonia have taken this approach.

Docking wear app to watch face

Apologies for an 'open' question, but can anyone provide pointers on how to 'dock' my app to the Android Wear watch face?
Essentially, I want users of the application to be able to swipe left to right (or vice-versa) from the edge of the screen to open the application, compared to having to scroll the list of applications after tapping the watch face.
I've seen this implemented in another Wear app, but don't know the right terminology to produce meaningful results in Google. Is it a wallpaper service, a specific view type, a touch listener service, etc.?
Many thanks.
You can't receive touch events inside a WatchFaceService; touch delivery is disabled there.
I can't say for sure how the app you saw implemented the desired behavior, but it probably did so by inserting views directly into the WindowManager from a Service.
Check out this YouTube video: https://www.youtube.com/watch?v=S3vHjxonOeg
I don't know how well the Standout library does its job, but it should give you enough examples to figure out for yourself how to add views to the WindowManager.

How can I synthesize Cocoa multi-touch gesture events?

Dear Stack Overflow folks! To this day I have never seen the need to ask a question, because all of you have done a great job asking and answering nearly all of the code-related problems I've encountered. So, thank you for that!
At the moment I am working on an iOS application that is able to process raw touch events. These are then sent to an iMac over a WiFi network (the protocol I use is OSC). On the OS X side there is a server application listening for these OSC messages and converting them to mouse pointer movement / mouse button presses / multi-touch gestures. So basically I want to build a (of course much more basic) software bundle like Mobile Mouse (http://mobilemouse.com/) that I am able to adapt to the needs of our customers (by means of customizing colors / additional buttons / gestures, and so on) for small remote control projects.
Right now, everything works but the multitouch gestures (pinch, rotate, two-finger-scroll). So my question is: How can I programmatically create and post a multitouch gesture event?
I searched a lot and found some threads about it here on stackoverflow, but none of them could help me:
Is there a way to trigger gesture events on Mac OS X?
Is there a way to change rotation of gesture events?
Generate and post Multitouch-Events in OS X to control the mac using an external camera
...
Update 1:
The last thing I tried was:
CGEventSourceRef eventSource = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
CGEventRef event = CGEventCreate(eventSource);
/* NSEventTypeMagnify is an AppKit NSEventType constant, not a CGEventType,
   so this does not produce a real gesture event */
CGEventSetType(event, NSEventTypeMagnify);
CGEventPost(kCGHIDEventTap, event);
CFRelease(event);   /* the event itself also needs to be released */
CFRelease(eventSource);
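
As far as I know there is still no public CGEvent type for magnify or rotate gestures, so no variation of the snippet above will produce a real gesture event. The closest public workaround I can think of, and this is only a sketch under the assumption that smooth scroll-wheel events are an acceptable stand-in for two-finger scrolling, is CGEventCreateScrollWheelEvent; pinch and rotate have no public equivalent.

#include <ApplicationServices/ApplicationServices.h>

/* Post a pixel-unit scroll event, roughly what a two-finger trackpad scroll
   delivers to applications; deltaY is vertical, deltaX horizontal. */
void postScroll(int32_t deltaY, int32_t deltaX) {
    CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
    CGEventRef scroll = CGEventCreateScrollWheelEvent(source, kCGScrollEventUnitPixel,
                                                      2 /* wheel count */, deltaY, deltaX);
    CGEventPost(kCGHIDEventTap, scroll);
    CFRelease(scroll);
    CFRelease(source);
}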

How does one correctly identify IE10 Metro and IE10 Desktop from the server in order to send back a "finger friendly" or "mouse friendly" interface?

I've read that since the user agent is the same between both, the recommended method is to use feature detection. That is fine and good for some situations, where you may want to display a Flash video/movie/app vs. a JavaScript slideshow, but my issue is displaying the correct interface based on the user's input device.
The assumption I'm making is that if a user is in the "Metro" IE10 they are probably expecting to use their fingers instead of a mouse. That being the case, I'd like to give them an interface with large hit boxes.
My question: Is there a way to tell the difference and display an appropriate interface? Or am I stuck with making the user manually switch modes via links on my site that set a cookie?
Still, there's no way to distinguish normal IE from the crippled Metro IE, but now you can know at the server whether the user has a touch screen: http://blogs.msdn.com/b/ie/archive/2012/07/12/ie10-user-agent-string-update.aspx
That post includes other comments about how to perform detection in javascript.
If you use the msPointerPoint interfaces, your client will receive the same messages whether they're using the mouse or touch. You can also use the gestures API; there was just a blog post on the IE blog which discusses how to use gestures from the mouse.
IE exposes a unified stack for messages so you can use the same input processing and your UI will work whether you're using touch/pen or mouse.

Generate and post Multitouch-Events in OS X to control the mac using an external camera

I am currently working on a research project for my university. The goal is to control a Mac using the Microsoft Kinect camera. Another student is writing the Kinect driver (which will be mounted somewhere on the ceiling or the wall behind the Mac, and which outputs the position of all fingers on the Mac's screen).
It is my responsibility to take those finger positions and react to them. The goal is to use one single finger to control the mouse, and to react to multiple fingers the very same way as if they were on the trackpad.
I thought that this was going to be easy and straightforward, but it's not. It is actually very easy to control the mouse cursor using one finger (using CGEvent), but unfortunately there is no public API for creating and posting multi-touch gestures to the system.
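
For completeness, the single-finger case really is only a couple of CGEvent calls; something along these lines (the helper name is mine, purely illustrative):

#include <ApplicationServices/ApplicationServices.h>

/* Move the cursor to an absolute position in screen coordinates. */
void moveCursorTo(CGFloat x, CGFloat y) {
    CGEventRef move = CGEventCreateMouseEvent(NULL, kCGEventMouseMoved,
                                              CGPointMake(x, y),
                                              kCGMouseButtonLeft);   /* button is ignored for move events */
    CGEventPost(kCGHIDEventTap, move);
    CFRelease(move);
}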
I've done a lot of research, including catching all CGEvents using an event tap at the lowest possible position and trying to disassemble them, but no real progress so far.
Then I stumbled over this and realized that even the lowest position for an event tap is not deep enough:
Extending Functionality of Magic Mouse: Do I Need a kext?
If I got it right, the built-in trackpad (and the Magic Mouse and Magic Trackpad) communicates through a kernel extension (KEXT) with the private MultitouchSupport framework, which generates and posts the incoming data in some way to the OS.
So I would need to use private APIs from the MultitouchSupport.framework to do the very same thing the trackpad does, right?
Or would I need to write a KEXT myself?
And if I need to use the MultitouchSupport-framework:
How can I disassemble it to get at the private APIs? (I know about class-dump, but that only works on Objective-C frameworks, which this one is not.)
Many thanks for any response!
NexD.
"The goal is to use one single finger to control the mouse and react on multiple fingers the very same way" here if I understand what you are trying to do is you try to track fingers from Kinect. But the thing is Kinect captures only major body joints. But you can do this with other third party libraries I guess. Here is a sample project I saw. But its for windows. Just try to get the big picture there http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device
