Device Information from NSEvent/CGEvent

My application uses an event tap to capture keyboard events, and I'd like to know which device (i.e. which keyboard) each event comes from. Is there any sort of device-identifying information carried along with the CGEvent that a tap receives? I've looked at NSEvent's methods and the various CGEventField keys, but none of them seem to be device-unique. Any help?
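For context, here is a minimal sketch of the kind of event tap in question, in plain C against Quartz Event Services. The closest documented field, kCGKeyboardEventKeyboardType, identifies the keyboard type (ANSI, ISO, etc.), not a unique device, which is exactly the asker's problem:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdio.h>

    // Called for every key-down the tap sees.
    static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                                  CGEventRef event, void *refcon) {
        // Keyboard *type* (ANSI/ISO/JIS), not a per-device identifier.
        int64_t kbdType = CGEventGetIntegerValueField(event, kCGKeyboardEventKeyboardType);
        printf("keyboardType = %lld\n", (long long)kbdType);
        return event; // listen-only: pass the event through untouched
    }

    int main(void) {
        CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap, kCGHeadInsertEventTap,
                                             kCGEventTapOptionListenOnly,
                                             CGEventMaskBit(kCGEventKeyDown),
                                             tapCallback, NULL);
        if (!tap) return 1; // typically means the app lacks event-tap permission
        CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
        CFRunLoopRun();
        return 0;
    }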

You might want to take a look at DDHidLib, Dave Dribin's excellent framework for working with USB HID devices directly.
http://www.dribin.org/dave/blog/archives/2007/03/19/ddhidlib_10
(not just about joysticks, so read more than the first paragraph of that blog post)
Some of DDHidLib's functionality no longer works under Leopard, due to security concerns at Apple about capturing HID devices, but if you're lucky it might provide what you need.

DDHidLib is neat, and in fact I rewrote parts of it for Delicious Library 2 for Leopard's newer HID APIs, and submitted the changes back to the original author -- if you write him you can get the Leopard-only sample code.
Unfortunately, the new Leopard HID APIs have the ability to peek at keyboard events as they pass by, but NOT to intercept them, so you can't build your own application-level device handler unless it's OK that the key events also go to AppKit. (This is why there's a BONKING noise when you use a USB barcode scanner in Delicious Library 2 - I peek at the scanner and read the barcode, but the typing is still sent to the topmost window, which doesn't want it, and beeps a lot. Sigh.)
-Wil

Related

How to send raw multitouch trackpad data under Mac OS X?

The end goal is to take touch input from an iOS device, send it over a websocket, accept it on the Mac OS X side, and send it through the system so everything using the private multitouch framework to accept input sees the data as if it were normal multitouch trackpad data.
The synthesize sub-project under https://github.com/calftrail/Touch seems like a good place to start. However, it seems the developer created it with the intent of taking valid multitouch input (from a Magic Mouse, back when Mac OS X offered barely any software support for it) and piping it through as multitouch trackpad input. I need to create valid/acceptable multitouch trackpad input out of thin air (from just sequences of touch locations, not real HID data).
In deep here. Help, someone. :)
Glad you found my TouchSynthesis subproject — I think it will let you do what you need, since internally it is split up as you want it. [Please note however that this code is GPL licensed, i.e. virally open source, unlike many Mac libraries.]
You can treat TouchSynthesis.m as example code for using the TouchEvents "library" which provides support for your specific question via one "simple" function: tl_CGEventCreateFromGesture
The basic gist is that tl_CGEventCreateFromGesture takes in a dictionary of gesture+touch data and will return a CGEvent that you can inject via Quartz Event Services into the system. A gesture event is required to send what becomes NSTouch data, but IIRC could be a fairly generic "gesture" type rather than zoom/pan/etc.
This is sort of a halfway-private solution: Apple supports injecting CGEvents into the system [at least outside The Sandbox? …I've since lost interest in their platforms so haven't researched that one…] so that part is "fine", but the actual CGEvent I create is of an undocumented type, the format for which I had to figure out via hex dumps and some Darwin source code HID headers they shared. That work is what "TouchEvents.m" implements — it's how Sesamouse could "create valid/acceptable multitouch trackpad input out of thin air" — and it should already be separate from the private MultitouchSupport framework stuff that read in the Magic Mouse input.
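As a rough illustration of the injection side: the sketch below builds a CGEvent with tl_CGEventCreateFromGesture and posts it via Quartz Event Services. The argument types and exact usage here are assumptions inferred from the description above, so treat TouchSynthesis.m in the repo as the authoritative example:

    #include <ApplicationServices/ApplicationServices.h>
    #include "TouchEvents.h" // from the calftrail/Touch repository

    // Hypothetical wrapper: turn a gesture+touch description into a CGEvent
    // and inject it into the system event stream. The parameter types are
    // assumed; check TouchEvents.h for the real signature.
    static void injectSyntheticGesture(CFDictionaryRef gestureInfo, CFArrayRef touches) {
        CGEventRef event = tl_CGEventCreateFromGesture(gestureInfo, touches);
        if (event) {
            CGEventPost(kCGHIDEventTap, event); // Quartz Event Services injection
            CFRelease(event);
        }
    }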

USB device opens Applications, then types in text field

I got a letter in the mail that contained a small USB device. Here is what it looks like: http://imgur.com/a/VEtNK
When I plug it into my computer it seems to hover over the programs in my Dock and then opens one. It then types a link into a text field that is available. I originally had Skype in the dock and it defaulted to that one, strange. I removed Skype from the dock and now it opens to System Preferences.
Here's a video of what happens as I plug it in: https://www.dropbox.com/s/yuw6ggvo77rkvwh/Test1MysteryDevice.mov
Also, it does not show up on my computer the way a memory stick does. I can't seem to locate it when it's plugged in. It would be cool if I could find it somehow, and even cooler if I could program it to do something I wanted.
Thanks, and if anyone can help out that's awesome or if you could point me to a forum/anywhere that might be able to help out, that'd be great!
It probably self-identifies as a HID (Human Interface Device), specifically a keyboard. As soon as it is accepted as a keyboard by the OS, it can send any sequence of keystrokes, and the OS will assume the input comes from a human user.
Scripting such behavior is easy using AppleScript.
However, automatically running a program from a USB stick when it is inserted is supposed to be impossible on OS X, as auto-run is a security risk.
Of course, at the very least a custom USB device could be made to act like a mouse and keyboard, so even without autorun it's a risk to plug strange devices into your computer.
To get more info on the device you can go to System Profiler and look for the device on the USB bus.
If it is a custom device pretending to be a keyboard then it's probably hardwired to do what it does, and you probably won't be able to reprogram it; you'd need to find a manufacturer that will sell you customized devices.

Adobe AIR 3.1 Rendering/Input Issue with Steam Overlay (Windows)

I am in the process of porting a Flash Player-based game over to the Desktop (OSX and Windows) via Adobe AIR (3.1). The porting to AIR itself has gone rather smoothly. The one wrinkle I've been dealt is that the game will be distributed over the Steam network. In order to interact with the Steam Client, I've had to write a native extension to expose the Steam SDK APIs to AS3. The native extension support has been implemented for both platforms, and I have the application launching and communicating with Steam as desired.
The area I've run into trouble is dealing with Steam's Overlay, which renders overtop of games when it is activated. Essentially, when a game is launched, the Steam Client suspends the process in order to hook its Overlay library up to either D3D or OpenGL. Initially, the Overlay failed to appear at all as the AIR application descriptor had the default rendermode set to "auto." However, once I switched the rendermode to "gpu" the Overlay would appear as desired.
On the OSX side of things, everything works as expected; I can toggle in and out of the Overlay just fine. On the Windows end of the spectrum, I've hit a problem when I activate the Overlay. Specifically, when the Overlay is enabled (rendering overtop of the game) and I either move the mouse or generate keyboard input, both the Overlay and the game "freeze" (rendering stops) for 2-3 seconds. Additionally, I have noticed that when I open the Task Manager with the game running, the CPU usage is roughly 75-80%. The CPU usage remains the same when I first activate the Overlay (which is desired). However, when I move the mouse cursor or press a key on the keyboard, the CPU usage drops to roughly 1%. This problem has occurred on 4 of 5 Windows machines (2 XP, 3 Win 7) we've tested on. Naturally, I first contacted Valve about the issue since it only occurs when the Overlay is enabled. I've uploaded both the OSX and Windows builds for their devs to debug; however, my contact suggested I find out more about AIR's rendering/input as well.
Here is a snippet of a post from a Steam dev detailing how the overlay works:
"The requirements for the overlay on Windows are as follows:
1. Game must use D3D7, D3D8, D3D9, D3D10, D3D11, or OpenGL
2. Game must call D3D Present() or OpenGL SwapBuffers() on a fast regular basis (these calls are hooked by the overlay and give it opportunity to do work). For instance, 2D games that only call these functions when mouse movement occurs or graphics on screen actually change, rather than every frame, will not function well.
3. Game should use standard Win32 input messages, raw Win32 input messages, or DirectInput for input; the overlay will then detect hotkeys and hide/block input events from the game when active.
It sounds like your game may violate #2 and stops calling Present/SwapBuffers sometimes when the overlay is active. This may happen if you call these functions in response to user input which is now blocked due to the overlay being activated. You should guarantee you keep pumping frames and swapping at a regular interval even if input events aren't occurring."
After a little more prodding, the Valve devs profiled my application to determine if there was any specific problem occurring with the Game Overlay. Unfortunately, they were unable to find anything going on in the Overlay itself, which pretty much means that AIR on Windows doesn't like the Overlay blocking Win32 input messages. Here is the Valve dev's response:
"I got your depots and did some testing. Nothing unusual happens in the overlay. Profiling your app with xpref while the issue occurs and taking some minidumps to check callstacks it looks like the app just blocks up completely and uses zero CPU during the time it is blocked, when it happens it calls Present() only at roughly 1 second intervals until it recovers (maybe there is a 1 second timeout somewhere in the AIR code). It's hard to get much detail since I don't have any symbols for the AIR runtime libraries.
It does however look like this is somehow related to input state and AIR being unhappy with Win32 input messages stopping. If I change our overlay to not block any input at all once activated (which obviously has some pretty big usability problems, but is fine for testing purposes) then the issue does not occur. It's possible that the AIR code has some weird logic where, once it's seen some specific WM_WHATEVER message, it expects another right after and somehow blocks waiting for it.
Hopefully you can work out on your side or with Adobe as to why the application behaves badly in these situations and starts blocking and not presenting at regular intervals."
I've posted on the Adobe forums, but haven't had any such luck over there. Mainly, I'm hoping that someone has either dealt with this before or has an idea about how I could possibly get around the issue. Any suggestions, comments or thoughts would be greatly appreciated!
As it turns out, there is a bug deep in the AIR core framework that is the root cause of this issue. Adobe has confirmed the bug, and they are working on a fix for the Cyril (AIR 3.3) release. The status of the bug (#3089755) can be viewed in the Adobe AIR bug list.
In the short term, I was forced to detect the Windows messages being consumed by the Steam Overlay and pass fake messages along to prevent AIR from locking up. I accomplished this using the Windows API SetWindowsHookEx along with the WH_DEBUG and WH_GETMESSAGE hooks, as sketched below. This is definitely not a desirable approach, but it was needed in the short term until Adobe releases a fix.
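A minimal sketch of the WH_GETMESSAGE half of that workaround, in C. The specific messages to watch and the WM_NULL stand-in are assumptions for illustration; the original poster didn't publish the exact details:

    #include <windows.h>

    static HHOOK g_getMessageHook = NULL;

    // WH_GETMESSAGE hook: runs for every message this thread's message loop
    // retrieves. The idea is to notice input messages the overlay may swallow
    // and feed the runtime something harmless so it never blocks waiting
    // for a follow-up message.
    static LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code >= 0 && wParam == PM_REMOVE) {
            MSG *msg = (MSG *)lParam;
            if (msg->message == WM_MOUSEMOVE || msg->message == WM_KEYDOWN) {
                // Hypothetical stand-in: WM_NULL is a documented no-op message.
                PostMessage(msg->hwnd, WM_NULL, 0, 0);
            }
        }
        return CallNextHookEx(g_getMessageHook, code, wParam, lParam);
    }

    void InstallMessageHook(void) {
        // Thread-local hook; no DLL module needed when hooking our own thread.
        g_getMessageHook = SetWindowsHookEx(WH_GETMESSAGE, GetMsgProc,
                                            NULL, GetCurrentThreadId());
    }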

How do I tell OS X to ignore the input from one of two connected USB mice?

I have two USB mice connected to my Mac, one of which I'm using as a scanner. I need access to the Generic X and Y data, but I don't want that data to move the cursor. How, under either the Carbon or Cocoa environments, do I tell the system to ignore the mouse as a pointing device?
Edit: after some digging I've found that I can turn off mouse position updating with the CGAssociateMouseAndMouseCursorPosition() function, but this does not allow me to specify a single mouse. Can anyone explain the OS X relationship between HID mouse devices and the cursor? There has to be a binding between the hardware and software on a device-by-device basis, but I can't find it.
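For reference, the global switch mentioned in the edit looks like this; it detaches every pointing device from the cursor at once, which is exactly its limitation here:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        // Detaches the cursor from ALL mice; there is no per-device variant.
        CGAssociateMouseAndMouseCursorPosition(false);

        int32_t dx, dy;
        CGGetLastMouseDelta(&dx, &dy); // deltas still accumulate, merged across devices
        printf("delta: %d, %d\n", dx, dy);

        CGAssociateMouseAndMouseCursorPosition(true); // restore normal behavior
        return 0;
    }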
I would look into writing a basic user-space driver for the mouse.
This will allow you direct access to the mouse as a USB device. You can also take control of the device from the system for your exclusive use.
There is some documentation here:
Working With USB Device Interfaces
To get you started, the setup steps to connect to a USB device go like this (I think; my IOKit is rusty):
include <IOKit/IOKitLib.h> and <IOKit/usb/IOUSBLib.h>
find the device you are interested in using IOServiceMatching(). This lets you find the correct USB device based on its properties, including things like vendor ID, &c. (the IORegistryExplorer tool is handy for browsing these properties)
get a USB plugin instance (let's call it plugin) with IOCreatePlugInInterfaceForService()
use plugin from the previous step to get a device interface (let's call it device) using (*plugin)->QueryInterface()
device represents a connection handle to your USB device. Open it first using either (*device)->USBDeviceOpen() or (*device)->USBDeviceOpenSeize(); from there you should be able to send/receive data.
Sounds like a lot, I know, and there might be an easier way, but this is what comes to mind. There may be some benefits to having this level of control over the device. A rough sketch of these steps follows. Good luck.
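Putting those steps together, a sketch in C. The vendor/product IDs are hypothetical placeholders, error handling is trimmed, and whether seizing a mouse away from the system's HID handling behaves as hoped would need testing:

    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/IOKitLib.h>
    #include <IOKit/usb/IOUSBLib.h>

    // Hypothetical IDs for the mouse being repurposed as a scanner;
    // look yours up in System Profiler or IORegistryExplorer.
    enum { kMyVendorID = 0x046d, kMyProductID = 0xc077 };

    IOUSBDeviceInterface182 **OpenMouseExclusively(void) {
        // Step 2: build a matching dictionary for the specific USB device.
        CFMutableDictionaryRef matching = IOServiceMatching(kIOUSBDeviceClassName);
        SInt32 vid = kMyVendorID, pid = kMyProductID;
        CFNumberRef vidRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &vid);
        CFNumberRef pidRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &pid);
        CFDictionarySetValue(matching, CFSTR(kUSBVendorID), vidRef);
        CFDictionarySetValue(matching, CFSTR(kUSBProductID), pidRef);
        CFRelease(vidRef);
        CFRelease(pidRef);

        // IOServiceGetMatchingService consumes one reference to `matching`.
        io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
        if (!service) return NULL;

        // Step 3: get the plug-in interface for the service.
        IOCFPlugInInterface **plugin = NULL;
        SInt32 score = 0;
        IOCreatePlugInInterfaceForService(service, kIOUSBDeviceUserClientTypeID,
                                          kIOCFPlugInInterfaceID, &plugin, &score);
        IOObjectRelease(service);
        if (!plugin) return NULL;

        // Step 4: ask the plug-in for the device interface proper.
        IOUSBDeviceInterface182 **device = NULL;
        (*plugin)->QueryInterface(plugin,
                                  CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID182),
                                  (LPVOID *)&device);
        IODestroyPlugInInterface(plugin);
        if (!device) return NULL;

        // Step 5: seize the device for exclusive use, so (in theory) its
        // input stops reaching the system's cursor handling.
        if ((*device)->USBDeviceOpenSeize(device) != kIOReturnSuccess) {
            (*device)->Release(device);
            return NULL;
        }
        return device;
    }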

What devices are available to test WM_GESTURE and WM_TOUCH code on a desktop machine?

I'm writing some code to handle WM_GESTURE and WM_TOUCH events in Windows 7, but I can't figure out how to test it. I do my development in Boot Camp on a 17" MacBook Pro.
So far, I have determined that the Boot Camp trackpad driver in Windows 7 does not generate those events, and a generic trackpad I found on Amazon.com that claims to be 'multi-touch' works as advertised, but not by generating WM_GESTURE or WM_TOUCH events. I verified this by using Spy++ to watch for the events; nothing with the WM_GESTURE or WM_TOUCH value was reported.
What kind of hardware is supposed to generate these kinds of events? At this point, I'm assuming it's only for tablet or mobile (Windows CE) hardware, but I'd appreciate any other suggestions.
I suppose there's another way to approach this -- I want to get functionality similar to Cocoa's -[NSResponder swipeWithEvent:] and related methods, which report back swipes, rotation, and other gestures on the trackpad. WM_GESTURE appears to be the equivalent on Windows 7.
Another option, which requires only another physical mouse to work with and should get you at least 95% of the way there, is the Multi-Touch Vista project, which can emulate up to 256 touch points using physical devices - hence the extra mouse, or two, since it can be awkward to work a mouse in one hand and a trackpad with the other simultaneously.
There are several monitors out there supporting touch with Windows 7. For example: Acer T230H.
HTH
Wacom makes several touchpads that support multitouch; a particularly inexpensive version is the Bamboo Touch. This gives you touch without having to buy another monitor - although it doesn't give that direct interaction feeling.
