UVC (USB Video Device Class) control of pan/tilt on OS X

I am trying to modify an existing application that talks to a standard USB video device class webcam (a Logitech BCC950 camera) over USB on OS X.
The device (a conferencing webcam) is compliant with USB's "Video Device Class" (https://en.wikipedia.org/wiki/USB_video_device_class). I have linked to some source code that controls the picture's saturation and white balance through the webcam's hardware and the VDC specification.
I now want to control the pan/tilt function of this webcam, which the specification calls CT_PANTILT_ABSOLUTE_CONTROL. How do I do this?
The linked site has some example code for controlling the gain, exposure and a handful of other settings with OS X's IOKit.
The aim would be to make an application similar to this: https://www.youtube.com/watch?v=U10OqVzoHbw that is controllable using a web interface.
I want to send new parameters for the CT_PANTILT_ABSOLUTE_CONTROL command in order to control the camera's pan.
Additionally, the documentation lists VC_PROCESSING_UNIT as 0x05, but the source defines it as 0x02, while other sources such as the Linux UVC headers also use 0x05.
In the UVC specification this is listed under 4.2.2.1.14 PanTilt (Absolute) Control; however, I am unclear about the unit and selector codes required to issue this request.
I would love some help with the commands and code that need to be written so that this application works on OS X with IOKit.

With the help of a friend, we have found this: https://github.com/kazu/UVCCameraControl
It is a modified version of the code that I linked to in my question; however, it seems to have support for Pan & Tilt.
I have not yet tried it, but from a quick look at the code it seems to support everything that I need.
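For reference, at the USB level this appears to boil down to a class-specific SET_CUR control request against the camera terminal. Below is a minimal, untested sketch using IOKit, assuming you already have an open IOUSBInterfaceInterface for the camera's VideoControl interface (as in the linked sample code). The terminal ID 0x01 and interface number 0x00 are device-specific assumptions that should really be read from the VideoControl descriptors.

```cpp
#include <IOKit/usb/IOUSBLib.h>

// UVC class-specific request and selector values from the UVC 1.1 spec.
#define UVC_SET_CUR                  0x01
#define CT_PANTILT_ABSOLUTE_CONTROL  0x0D

// Assumptions (device-specific; read them from the VideoControl descriptors):
#define CAMERA_TERMINAL_ID       0x01  // bTerminalID of the camera (input) terminal
#define VIDEO_CONTROL_INTERFACE  0x00  // bInterfaceNumber of the VideoControl interface

// Pan and tilt are signed 32-bit values in units of 1/3600 degree (arc seconds),
// sent little-endian on the wire.
static IOReturn setPanTiltAbsolute(IOUSBInterfaceInterface **controlIf,
                                   SInt32 pan, SInt32 tilt)
{
    SInt32 data[2] = { pan, tilt };  // dwPanAbsolute, dwTiltAbsolute

    IOUSBDevRequest req;
    req.bmRequestType = USBmakebmRequestType(kUSBOut, kUSBClass, kUSBInterface);
    req.bRequest      = UVC_SET_CUR;
    req.wValue        = CT_PANTILT_ABSOLUTE_CONTROL << 8;                    // selector in the high byte
    req.wIndex        = (CAMERA_TERMINAL_ID << 8) | VIDEO_CONTROL_INTERFACE; // entity ID, interface number
    req.wLength       = sizeof(data);
    req.pData         = data;

    // Pipe 0 is the default control pipe.
    return (*controlIf)->ControlRequest(controlIf, 0, &req);
}
```

Before a SET_CUR it is worth issuing GET_MIN (0x82), GET_MAX (0x83) and GET_CUR (0x81) with the same wValue/wIndex to learn the range the camera actually accepts.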

Related

Mac OS: get full control of web-camera (USB connected)

The task:
OS: Mac OS X 10.9 +
Description:
There is a web camera connected to a Mac via USB. I need to discover a way of getting access to its brightness, pan, color temperature, focus, etc.
I also need a way to apply image filters to the camera's video stream.
I need to be able to control the camera while it is being used by other programs such as Skype, so that I can, for example, transmit a video stream with increased contrast during a Skype video call.
Reference app: https://itunes.apple.com/app/webcam-settings/id533696630?mt=12
Solution:
This is what I am asking for.
As far as I understand, I must find a custom kext (driver) in order to perform all this magic.
Could you please point me in the right direction: libraries, drivers, etc.?
You can use the OpenCV library to capture camera frames, apply filters, etc.
http://docs.opencv.org/2.4/doc/tutorials/introduction/display_image/display_image.html
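For example, a minimal capture-and-filter loop with OpenCV's C++ API might look like the sketch below (the device index 0 and the 1.5 contrast factor are arbitrary placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);            // 0 = first camera; pick the index of your webcam
    if (!cap.isOpened()) return 1;

    cv::Mat frame, filtered;
    while (cap.read(frame)) {
        // Example filter: boost contrast by 50% (alpha = 1.5) with no brightness offset.
        frame.convertTo(filtered, -1, 1.5, 0);
        cv::imshow("filtered", filtered);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}
```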
Then you can feed a virtual webcam, which in turn can feed into Skype, etc.
http://download.cnet.com/Virtual-Webcam/3000-2348_4-75754338.html
There are also many open-source virtual webcams available.
I hope this helps.

How to send raw multitouch trackpad data under Mac OS X?

The end goal is to take touch input from an iOS device, send it over a websocket, accept it on the Mac OS X side, and send it through the system so everything using the private multitouch framework to accept input sees the data as if it were normal multitouch trackpad data.
The synthesize sub-project under https://github.com/calftrail/Touch seems like a good place to start. However, it seems the developer created it with the intent of taking valid multitouch input (from a Magic Mouse, when there was arbitrarily little software support from Mac OS X) and piping it in as multitouch trackpad input. I need to create valid/acceptable multitouch trackpad data out of thin air (with just sequences of touch locations, not real HID data).
In deep here. Help, someone. :)
Glad you found my TouchSynthesis subproject — I think it will let you do what you need, since internally it is split up as you want it. [Please note however that this code is GPL licensed, i.e. virally open source, unlike many Mac libraries.]
You can treat TouchSynthesis.m as example code for using the TouchEvents "library" which provides support for your specific question via one "simple" function: tl_CGEventCreateFromGesture
The basic gist is that tl_CGEventCreateFromGesture takes in a dictionary of gesture+touch data and will return a CGEvent that you can inject via Quartz Event Services into the system. A gesture event is required to send what becomes NSTouch data, but IIRC it could be a fairly generic "gesture" type rather than zoom/pan/etc.
This is sort of a halfway-private solution: Apple supports injecting CGEvents into the system [at least outside The Sandbox? …I've since lost interest in their platforms so haven't researched that one…] so that part is "fine", but the actual CGEvent I create is of an undocumented type, the format for which I had to figure out via hex dumps and some Darwin source code HID headers they shared. It's that work that "TouchEvents.m" implements — that's how Sesamouse could "create valid/acceptable multitouch trackpad data out of thin air" — and it should already be separate from the private MultitouchSupport framework stuff that read in the Magic Mouse input.
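A rough sketch of that flow is below. The exact signature of tl_CGEventCreateFromGesture and the dictionary keys it expects live in TouchEvents.h in that repo and are only approximated here; the CGEventPost injection call is standard Quartz Event Services.

```cpp
#include <ApplicationServices/ApplicationServices.h>
#include "TouchEvents.h"   // from the calftrail/Touch repository (GPL, see above)

// gestureAndTouches: a CFDictionary of gesture+touch data built with the keys
// that TouchEvents.h defines (not reproduced here -- check the header).
static void postSyntheticTouches(CFDictionaryRef gestureAndTouches)
{
    // NOTE: call shape assumed from the description above; verify against TouchEvents.h.
    CGEventRef event = tl_CGEventCreateFromGesture(gestureAndTouches);
    if (event != NULL) {
        // Hand the synthesized event to the window server, as if it came from a real trackpad.
        CGEventPost(kCGHIDEventTap, event);
        CFRelease(event);
    }
}
```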

Selecting input mic for Mac Audio Queue Services?

I am currently using the Mac OS X Audio Queue Services API for audio recording and sound analysis. Works fine using the default mic input.
If there is more than one microphone plugged into the Mac (USB, headset jack, etc.), is there a way to programmatically enumerate and select which mic is to be used for audio input within an application? (e.g. not have to send the user to the System Preferences panel, which may affect the user's other audio applications.) If so, which APIs should be used to select the mic input?
To enumerate the available input devices, please see my answer to "AudioObjectGetPropertyData to get a list of input devices".
Once you've determined the input device you'd like to use, you can set the kAudioQueueProperty_CurrentDevice property to the device's UID.
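A condensed sketch of those two steps is below (error handling omitted; a real version would filter the list down to devices that actually have input streams and let the user choose, and queue is assumed to be an already-created recording AudioQueueRef):

```cpp
#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

// Illustration only: grabs the UID of the first device in the system list.
static CFStringRef copyFirstDeviceUID(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);

    UInt32 count = size / sizeof(AudioDeviceID);
    if (count == 0) return NULL;

    AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

    AudioObjectPropertyAddress uidAddr = {
        kAudioDevicePropertyDeviceUID,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    CFStringRef uid = NULL;
    UInt32 uidSize = sizeof(uid);
    AudioObjectGetPropertyData(devices[0], &uidAddr, 0, NULL, &uidSize, &uid);

    free(devices);
    return uid;  // caller releases with CFRelease()
}

// Point an existing recording queue at the chosen device.
// Set this while the queue is not running.
static OSStatus useDeviceForQueue(AudioQueueRef queue, CFStringRef deviceUID)
{
    return AudioQueueSetProperty(queue, kAudioQueueProperty_CurrentDevice,
                                 &deviceUID, sizeof(deviceUID));
}
```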
I fear, no, because AQ is hard-coded to use the default input (to the best of my knowledge). AQ is fairly limited, and only iOS gives more control via Audio Sessions. However, you can use AUHAL to record from an arbitrary device:
http://developer.apple.com/library/mac/#technotes/tn2091/_index.html
You won't need listing 4 from above because you'll use the AudioDeviceID for the device you have chosen (presumably by getting the list of devices using AudioObjectGetPropertyDataSize and picking the one you want).
FWIW: if you decide that's too much, you can presumably still use AudioHardwareSetProperty to set kAudioHardwarePropertyDefaultInputDevice from your code - not what you wanted but certainly less work...
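For that last fallback, a sketch might look like this (AudioHardwareSetProperty is the older, now-deprecated name; the AudioObject-based equivalent is shown):

```cpp
#include <CoreAudio/CoreAudio.h>

// Make the chosen device the system-wide default input. Note this does affect
// other applications, which is exactly the trade-off described above.
static OSStatus setDefaultInputDevice(AudioDeviceID deviceID)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    return AudioObjectSetPropertyData(kAudioObjectSystemObject, &addr,
                                      0, NULL, sizeof(deviceID), &deviceID);
}
```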
If you set up the Audio Queue to read from the default input device, then it will read from the mic that is selected as default in the System Preferences->Sound->Input tab.

Device driver to act as a virtual web camera

I'm looking into writing a virtual camera driver. Does anybody have any ideas?
Any book or link that would be helpful?
Adding more details:
I have developed a device driver which saves the image to disk, and the display uses the device driver to display the image. The performance does not seem good.
The functions that I have used are:
//to capture
GetDesktopWindow()
CreateCompatibleBitmap()
Save()
//to display
WM_MOUSEMOVE
calling capture and display on every message
but the display is not continuous and appears only after the window goes out of focus and comes back into focus.
Should I use some other technique to record or display the images? What would give fruitful results? Please help.
Thanks,
-mitesh
What do you mean by virtual camera driver?
It is possible to write a virtual capture device using DirectShow. Such a virtual capture device can then be used by applications such as Skype, etc. If that suffices for your needs, you can download vcam from http://tmhare.mvps.org/downloads.htm under the "Capture Source Filter" link.
Edit:
In order to use the capture device in the link I posted, you need to download the Windows SDK. The Windows SDK has a tool called "GraphEdit". If you search online, I'm sure you can find a quick GraphEdit tutorial. Basically, GraphEdit allows you to construct a multimedia pipeline by connecting a bunch of filters. (This is what happens in the background, for instance, when you play a movie on your computer.) This could be something like
web cam -> renderer
or
file source -> some decoder -> renderer
and would result in you seeing the video captured by the web cam or the content of the file. The example download shows how you can construct a virtual capture device, i.e. it looks like media is coming from a 'real' capture device, but actually you can generate any video you want if you adapt the code to your specific needs, e.g. take a screen grab and output that. Applications like Skype can pick up your virtual capture device if it is registered correctly.
The easiest way to find out if this is sufficient for your needs is to download the capture source filter, register it with the regsvr32 command, and then to use GraphEdit to insert the capture source into a graph, connect the source to a video renderer and hit the play button. A lot of the above mentioned concepts/keywords might seem new to you, but you can do some reading on each topic, and perhaps this will give you a point to get started.
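If you later want to build the same graph from code instead of GraphEdit, the rough shape is sketched below. CLSID_MyVirtualCam is a placeholder for whatever CLSID the capture source filter actually registers; the rest is standard DirectShow boilerplate.

```cpp
#include <windows.h>
#include <dshow.h>
#include <stdio.h>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")

// Placeholder GUID: substitute the CLSID that the capture source filter registers.
static const GUID CLSID_MyVirtualCam =
    { 0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };

int main()
{
    CoInitialize(NULL);

    IGraphBuilder *graph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&graph);

    // Add the registered virtual capture source to the graph...
    IBaseFilter *source = NULL;
    CoCreateInstance(CLSID_MyVirtualCam, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&source);
    graph->AddFilter(source, L"Virtual Cam");

    // ...and render its first pin (a robust version would check the pin direction);
    // the graph manager inserts a video renderer automatically, giving the
    // "source -> renderer" graph described above.
    IEnumPins *pins = NULL;
    IPin *outPin = NULL;
    source->EnumPins(&pins);
    pins->Next(1, &outPin, NULL);
    graph->Render(outPin);

    IMediaControl *control = NULL;
    graph->QueryInterface(IID_IMediaControl, (void **)&control);
    control->Run();
    puts("Graph running - press Enter to stop");
    getchar();
    control->Stop();

    // COM object release omitted for brevity.
    CoUninitialize();
    return 0;
}
```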
Edit 2:
Is the capture source filter approach not sufficient for your requirements?
1) AFAIR you stated in your (now deleted) answer that you would like to take a screen grab and use that as a virtual camera device for use in applications such as Skype.
If that is all you require, you do NOT have to write a device driver. DirectShow can do that perfectly well by means of the capture source filter. You would then need to
learn some basic DirectShow
modify the source code of the capture filter to take screen grabs etc.
As far as books on writing a device driver to accomplish the same are concerned, I have no idea. The point I'm trying to make is that you need to determine whether you actually need to write a device driver or whether simply modifying the open-source capture filter is sufficient.

How do I tell OS X to ignore the input from one of two connected USB mice?

I have two USB mice connected to my Mac, one of which I'm using as a scanner. I need access to the Generic X and Y data but I don't want that data to move the cursor. How, under either carbon or cocoa environments, do I tell the system to ignore the mouse as a pointing device?
Edit: after some digging I've found that I can turn off mouse position updating with the CGAssociateMouseAndMouseCursorPosition() function, but this does not allow me to specify a single mouse. Can anyone explain the OS X relationship between HID mouse devices and the cursor? There has to be a binding between the hardware and software on a device-by-device basis, but I can't find it.
I would look into writing a basic user-space driver for the mouse.
This will allow you direct access to the mouse as a USB device. You can also take control of the device from the system for your exclusive use.
There is some documentation here:
Working With USB Device Interfaces
To get you started, the setup steps to connect to a USB device go like this (I think; my IOKit is rusty):
include <IOKit/IOKitLib.h> and <IOKit/usb/IOUSBLib.h>
find the device you are interested in using IOServiceMatching(). This lets you find the correct USB device based on its properties, including things like vendor ID, etc. (the IORegistryExplorer tool is handy for inspecting these)
get a USB plugin instance (let's call it plugin) with IOCreatePlugInInterfaceForService()
use plugin from the previous step to get a device interface (let's call it device) using (*plugin)->QueryInterface()
device represents a connection handle to your USB device--open it first using either (*device)->USBDeviceOpen() or (*device)->USBDeviceOpenSeize(). From there you should be able to send/receive data.
Sounds like a lot, I know, and there might be an easier way, but this is what comes to mind. There may be some benefits to having this level of control of the device, not sure. Good luck.
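Putting those steps together, a rough, untested sketch might look like this (the vendor and product IDs are placeholders you would look up for your second mouse; error handling is abbreviated):

```cpp
#include <IOKit/IOKitLib.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/usb/IOUSBLib.h>

// Placeholders: look these up for your mouse in IORegistryExplorer or system_profiler.
#define MOUSE_VENDOR_ID   0x1234
#define MOUSE_PRODUCT_ID  0x5678

static IOUSBDeviceInterface **openMouseDevice(void)
{
    // Match the device by vendor/product ID.
    CFMutableDictionaryRef matching = IOServiceMatching(kIOUSBDeviceClassName);
    SInt32 vid = MOUSE_VENDOR_ID, pid = MOUSE_PRODUCT_ID;
    CFNumberRef vidRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &vid);
    CFNumberRef pidRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pid);
    CFDictionarySetValue(matching, CFSTR(kUSBVendorID), vidRef);
    CFDictionarySetValue(matching, CFSTR(kUSBProductID), pidRef);
    CFRelease(vidRef);
    CFRelease(pidRef);

    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
    if (service == IO_OBJECT_NULL) return NULL;

    // Get the plug-in interface for the service.
    IOCFPlugInInterface **plugin = NULL;
    SInt32 score = 0;
    IOCreatePlugInInterfaceForService(service, kIOUSBDeviceUserClientTypeID,
                                      kIOCFPlugInInterfaceID, &plugin, &score);
    IOObjectRelease(service);
    if (plugin == NULL) return NULL;

    // Get the device interface from the plug-in.
    IOUSBDeviceInterface **device = NULL;
    (*plugin)->QueryInterface(plugin, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                              (LPVOID *)&device);
    IODestroyPlugInInterface(plugin);
    if (device == NULL) return NULL;

    // Open it. USBDeviceOpenSeize() requests exclusive access, which is the
    // "take control of the device from the system" part described above.
    if ((*device)->USBDeviceOpenSeize(device) != kIOReturnSuccess) {
        (*device)->Release(device);
        return NULL;
    }
    return device;
}
```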
