Intercepting RadialController events at OS level - Windows

I’m trying to figure out if there is a way to intercept Surface Dial (or RadialController) events even when my application is in the background and another one has the focus. Maybe using SetWindowsHookEx somehow?
My purpose: since there is currently very little application support for the Surface Dial, I was trying to create a background service that intercepts Surface Dial events at the OS level and sends/simulates input events to other applications. I already have the simulation of input events in other applications working, and I know that I can get mouse events at the OS level using SetWindowsHookEx, but I don’t know how to obtain Surface Dial input data at the OS level.
There is very little to no information about Surface Dial programming, and even less outside UWP. I've tried basically everything I could get my hands on, but I could only find information about more standard input hardware like mouse and keyboard... nothing about the Radial Controller.
Note: I’m using Windows Forms with C# on Windows 11.
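For reference, here is a minimal Win32 C sketch of the mouse-level hook mentioned above (from C#/WinForms the same SetWindowsHookEx / CallNextHookEx calls would be reached via P/Invoke). It shows what WH_MOUSE_LL gives you system-wide even while another application has the focus; as far as I can tell there is no analogous documented hook id for RadialController / Surface Dial input, which is exactly the gap this question is about.

/* Minimal low-level mouse hook: sees mouse input system-wide, even when
   another application has the focus. There is no comparable documented
   hook id for Surface Dial / RadialController input. */
#include <windows.h>
#include <stdio.h>

static HHOOK g_hook;

static LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION) {
        const MSLLHOOKSTRUCT *m = (const MSLLHOOKSTRUCT *)lParam;
        if (wParam == WM_MOUSEWHEEL)
            printf("wheel delta %d at (%ld, %ld)\n",
                   (int)(short)HIWORD(m->mouseData), m->pt.x, m->pt.y);
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam); /* always pass the event on */
}

int main(void)
{
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc, GetModuleHandle(NULL), 0);
    if (!g_hook)
        return 1;

    /* A low-level hook needs a message loop on the thread that installed it. */
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}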

Related

Detect if a click is real or emulated

I wonder how I can detect whether a click was made by the user with a mouse vs. a click made by a bot (emulated).
Most of my research suggests that it is impossible, but I have seen many games that successfully block emulated clicks while having no impact on the real ones.
I can't seem to find any tutorials or articles on how to do this.
Based on my research, all emulated clicks will likely be made using either the SendInput or the SendMessage function. Both functions are defined in User32.dll.
So is it possible (or safe) to inspect the call stack of the event and block the event if I find User32.dll in the stack? How can I do that in Unity?
Pattern matching is good enough to filter out really simple click bots. Those typically click in precisely the same location, and the interval would be regular within 1/10th of a second. I'd start there, and then develop more advanced algorithms as you learn how cheaters are interacting with your game client. Like others said, it depends on your game.
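As a rough sketch of that pattern-matching idea (written in C here; the same check ports directly to C# inside Unity), the following flags clicks that land on exactly the same pixel with intervals that vary by less than the 1/10th of a second mentioned above. The history size and threshold are illustrative, not tuned values.

/* Flags clicks that repeat at the same coordinates with near-constant
   intervals -- the signature of a very simple click bot. */
#include <stdbool.h>

#define HISTORY 8

typedef struct { double t; int x, y; } Click;

static Click history[HISTORY];
static int   count = 0;

bool looks_automated(double time_sec, int x, int y)
{
    history[count % HISTORY] = (Click){ time_sec, x, y };
    count++;
    if (count < HISTORY)
        return false;                       /* not enough samples yet */

    bool   same_spot = true;
    double min_dt = 1e9, max_dt = 0.0;

    /* Walk the last HISTORY clicks in chronological order, pair by pair. */
    for (int i = 1; i < HISTORY; i++) {
        Click a = history[(count - i - 1) % HISTORY];
        Click b = history[(count - i) % HISTORY];
        if (a.x != b.x || a.y != b.y)
            same_spot = false;
        double dt = b.t - a.t;
        if (dt < min_dt) min_dt = dt;
        if (dt > max_dt) max_dt = dt;
    }
    return same_spot && (max_dt - min_dt) < 0.1;  /* jitter under ~1/10th s */
}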

How can I synthesize Cocoa multi-touch gesture events?

Dear Stack Overflow folks! To this day I never saw the need to ask a question, because all of you have done a great job asking and answering nearly all of the code-related problems I have encountered. So, thank you for that!
At the moment I am working on an iOS application that is able to process raw touch events. These are then sent to an iMac over a WiFi network (the protocol I use is OSC). On the OS X side there is a server application listening for these OSC messages and converting them to mouse pointer movement / mouse button presses / multi-touch gestures. So basically I want to build a (of course much more basic) software bundle like Mobile Mouse (http://mobilemouse.com/) that I am able to adapt to the needs of our customers (by means of customizing colors / additional buttons / gestures, and so on) for small remote control projects.
Right now, everything works but the multitouch gestures (pinch, rotate, two-finger-scroll). So my question is: How can I programmatically create and post a multitouch gesture event?
I searched a lot and found some threads about it here on Stack Overflow, but none of them could help me:
Is there a way to trigger gesture events on Mac OS X?
Is there a way to change rotation of gesture events?
Generate and post Multitouch-Events in OS X to control the mac using an external camera
...
Update 1:
The last thing I tried was:
CGEventSourceRef eventSource = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
CGEventRef event = CGEventCreate(eventSource);
// NSEventTypeMagnify is an AppKit NSEventType constant, not a public CGEventType,
// which is probably why this does not come out as a usable magnify gesture.
CGEventSetType(event, NSEventTypeMagnify);
CGEventPost(kCGHIDEventTap, event);
CFRelease(event);        // the event itself also has to be released
CFRelease(eventSource);

Different kinds of clicks in Mac OS?

I have just bought a Wacom Bamboo touch tablet. It works fine with all applications except the Twitter client, which gets a bit confused when I click on a link.
Is there a quick bit of code I can knock up / API I can call to see what kind of mouse events are being generated by the driver (just to satisfy my curiosity)?
To clarify: I'm not writing an app here... just trying to use a product and work out why it's not working properly.
Tablet events are somewhat different than mouse events. Specifically:
A [tablet] pointer event is an NSEvent object of type NSTabletPoint or an object representing a mouse-down, mouse-dragged, or mouse-up event with a subtype of NSTabletPointEventSubtype.
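To satisfy the "quick bit of code" part of the question, one option is a listen-only Quartz event tap that prints the CGEventType and the kCGMouseEventSubtype field of every mouse event; per the quote above, a tablet driver should show up with a tablet subtype (kCGEventMouseSubtypeTabletPoint / kCGEventMouseSubtypeTabletProximity rather than kCGEventMouseSubtypeDefault). A minimal C sketch, assuming the process has been granted the usual event-tap/accessibility permission:

/* Log the type and mouse-event subtype of incoming mouse events.
   Listen-only tap, so nothing is blocked or modified. */
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

static CGEventRef tap_callback(CGEventTapProxy proxy, CGEventType type,
                               CGEventRef event, void *refcon)
{
    int64_t subtype = CGEventGetIntegerValueField(event, kCGMouseEventSubtype);
    printf("type=%d subtype=%lld\n", (int)type, (long long)subtype);
    return event;
}

int main(void)
{
    CGEventMask mask = CGEventMaskBit(kCGEventLeftMouseDown) |
                       CGEventMaskBit(kCGEventLeftMouseUp)   |
                       CGEventMaskBit(kCGEventLeftMouseDragged) |
                       CGEventMaskBit(kCGEventMouseMoved);

    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly, mask,
                                         tap_callback, NULL);
    if (!tap)
        return 1;   /* usually means the event-tap permission is missing */

    CFRunLoopSourceRef src = CFMachPortCreateRunLoopSource(NULL, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}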

Generate and post Multitouch-Events in OS X to control the mac using an external camera

I am currently working on a research project for my university. The goal is to control a Mac using the Microsoft Kinect camera. Another student is writing the Kinect driver; the camera will be mounted somewhere on the ceiling or on the wall behind the Mac, and the driver outputs the position of all fingers on the Mac's screen.
It is my responsibility to take those finger positions and react to them. The goal is to use a single finger to control the mouse, and to react to multiple fingers in the very same way as if they were on the trackpad.
I thought this was going to be easy and straightforward, but it's not. It is actually very easy to control the mouse cursor using one finger (using CGEvent), but unfortunately there is no public API for creating and posting multi-touch gestures to the system.
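(For reference, the "easy" single-finger part the poster describes looks roughly like this; a minimal sketch where the coordinates would come from the Kinect driver.)

/* Move the cursor to (x, y) in global display coordinates. */
#include <ApplicationServices/ApplicationServices.h>

void move_cursor(double x, double y)
{
    CGEventRef move = CGEventCreateMouseEvent(NULL, kCGEventMouseMoved,
                                              CGPointMake(x, y),
                                              kCGMouseButtonLeft); /* button is ignored for moves */
    CGEventPost(kCGHIDEventTap, move);
    CFRelease(move);
}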
I've done a lot of research, including catching all CGEvents using an event tap at the lowest possible position and trying to disassemble them, but no real progress so far.
Then I stumbled over this and realized that even the lowest position for an event tap is not deep enough:
Extending Functionality of Magic Mouse: Do I Need a kext?
If I got it right, the built-in trackpad (and the Magic Mouse and the Magic Trackpad) communicates through a kernel extension (kext) with the private MultitouchSupport framework, which generates the incoming data and posts it to the OS in some way.
So I would need to use private APIs from MultitouchSupport.framework to do the very same thing the trackpad does, right?
Or would I need to write a kext myself?
And if I need to use the MultitouchSupport framework:
How can I disassemble it to get at the private APIs? (I know class-dump, but that only works on Objective-C frameworks, which this framework is not.)
Many thanks for any response!
NexD.
"The goal is to use one single finger to control the mouse and react on multiple fingers the very same way" here if I understand what you are trying to do is you try to track fingers from Kinect. But the thing is Kinect captures only major body joints. But you can do this with other third party libraries I guess. Here is a sample project I saw. But its for windows. Just try to get the big picture there http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device

OpenCV -- record browser window instead of capturing camera output?

I'm trying to get started with OpenCV by writing a simple screen recorder -- one that can perform continuous or polled capture of a GUI window on a Mac. For example, I could capture the client area of a browser window.
I'm sure this is possible, but I don't know where to start -- any pointers? Is the framegrabber to read the GUI window an OSX/Cocoa thing, or an OpenCV call?
You'll have to work with the operating system you're targeting. I've seen some software where they install a driver that emulates a camera and streams your desktop into it; that way you can use OpenCV's functions to get access to the desktop.
I think you'll have to work with Mac OS X components such as CoreVideo... and write some Objective-C bindings.
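On the Quartz side, a minimal C sketch of the polled-capture idea: CGDisplayCreateImage grabs the whole display (CGWindowListCreateImage can do the same for a single window, given its CGWindowID), and the raw pixel bytes it yields are what you would then wrap in an OpenCV image on the C++ side. The OpenCV wrapping itself is left out here.

/* Grab the main display as a CGImage and copy out its raw pixel bytes.
   The resulting raw pixel buffer is what an OpenCV matrix would wrap. */
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

int main(void)
{
    CGImageRef shot = CGDisplayCreateImage(CGMainDisplayID());
    if (!shot)
        return 1;

    size_t width  = CGImageGetWidth(shot);
    size_t height = CGImageGetHeight(shot);
    size_t stride = CGImageGetBytesPerRow(shot);

    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(shot));
    const UInt8 *pixels = CFDataGetBytePtr(data);

    printf("captured %zux%zu, %zu bytes per row, first byte %u\n",
           width, height, stride, (unsigned)pixels[0]);

    CFRelease(data);
    CGImageRelease(shot);
    return 0;
}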

Resources