Mac OS: get full control of a web camera (USB connected)

The task:
OS: Mac OS X 10.9 +
Description:
There is a web camera connected to a Mac via USB. I need to find a way to get access to its brightness, pan, color temperature, focus, etc.
I also need a way to apply image filters to the camera's video stream.
I need to be able to control the camera while it is being used by other programs like Skype, so that I can, for example, transmit a video stream with increased contrast during a Skype video call.
Reference app: https://itunes.apple.com/app/webcam-settings/id533696630?mt=12
Solution:
This is the question.
As far as I understand, I have to find a custom kext (driver) in order to perform all this magic.
Could you please point me in the right direction: libraries, drivers, etc.?

You can use the OpenCV library to capture camera frames, apply filters, etc.
http://docs.opencv.org/2.4/doc/tutorials/introduction/display_image/display_image.html
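For example, a minimal OpenCV sketch (the camera index and the contrast/brightness values are arbitrary, and this covers only capture and filtering, not the virtual-camera step) could look like this:

// Minimal sketch: grab frames from the first camera with OpenCV and apply
// a simple contrast/brightness adjustment. Values here are arbitrary examples.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // open the first camera found
    if (!cap.isOpened()) return 1;

    cv::Mat frame, filtered;
    while (true) {
        if (!cap.read(frame)) break;         // grab the next frame
        // new_pixel = alpha * pixel + beta  (alpha > 1 increases contrast)
        frame.convertTo(filtered, -1, 1.5, 10.0);
        cv::imshow("filtered", filtered);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}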
Then you can feed the filtered frames into a virtual webcam, which in turn can feed into Skype, etc.
http://download.cnet.com/Virtual-Webcam/3000-2348_4-75754338.html
There are also many open-source virtual webcams available.
I hope this helps.

Related

Capturing 15MP images from a Logitech c920 webcam

I am trying to take 15 Megapixel images with a Logitech c920.
The camera supports full-HD (2.1 MP) video, and 15MP static images.
Using the supplied Logitech Software, I can indeed save 15MP images. But if I access the camera directly via DirectShow, I can at most get the full-HD resolution. It seems like the camera is stuck in "video" mode.
I tried various examples for accessing webcams, namely RobotEyez, VLC, NirSoft WebCamImageSave.
I am beginning to suspect that the default interfaces for webcams under Windows are only designed for video, and that there is no official way to tell the camera to take still images at a higher resolution than the video stream.
How can I access such a device under Windows in order to capture the full 15MP resolution supported for still images?
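One thing that might be worth trying (purely a sketch, and I have not verified that the c920 driver exposes this) is to check whether the DirectShow capture filter offers a dedicated still-image pin (PIN_CATEGORY_STILL) and to trigger it through IAMVideoControl, since some cameras report higher resolutions on the still pin than on the capture pin:

// Hedged sketch: look for a still-image pin on an already-created capture
// filter (pCaptureFilter) and fire a software trigger on it. Whether the
// c920 driver exposes such a pin, and at what resolution, is unverified.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

HRESULT TriggerStillPin(ICaptureGraphBuilder2 *pBuilder, IBaseFilter *pCaptureFilter)
{
    IPin *pStillPin = NULL;
    // Ask the capture graph builder for a still-image output pin, if any.
    HRESULT hr = pBuilder->FindPin(pCaptureFilter, PINDIR_OUTPUT,
                                   &PIN_CATEGORY_STILL, &MEDIATYPE_Video,
                                   FALSE, 0, &pStillPin);
    if (FAILED(hr)) return hr;               // no still pin exposed by the driver

    IAMVideoControl *pVideoControl = NULL;
    hr = pCaptureFilter->QueryInterface(IID_IAMVideoControl, (void **)&pVideoControl);
    if (SUCCEEDED(hr)) {
        // Software trigger: the next frame delivered on the still pin is the photo.
        hr = pVideoControl->SetMode(pStillPin, VideoControlFlag_Trigger);
        pVideoControl->Release();
    }
    pStillPin->Release();
    return hr;
}

The still pin would also have to be connected to something like a Sample Grabber, and its format selected via IAMStreamConfig, before the trigger yields a usable image; the sketch only shows the pin lookup and the trigger.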

UVC (USB Video Device Class) control of pan/tilt on OS X

I am trying to modify an existing application that talks to a standard USB video device class webcam (a Logitech BCC950 camera) over USB on OS X.
The device (a conferencing webcam) is compliant with USB's "Video Device Class" (https://en.wikipedia.org/wiki/USB_video_device_class). I have provided a link to some source code that allows controlling saturation and white balance of the picture using the webcam's hardware and the VDC specification.
I now want to control the pan/tilt function of this webcam. This is called "CT_PANTILT_ABSOLUTE_CONTROL" in the specification. How do I do this?
This site has some example code for controlling the gain, exposure and a handful of other settings with OS X's IOKit.
The aim would be to make an application similar to this: https://www.youtube.com/watch?v=U10OqVzoHbw that is controllable using a web interface.
I want to send new parameters for the CT_PANTILT_ABSOLUTE_CONTROL command, to control the pan of the camera.
Additionally, in the documentation, VC_PROCESSING_UNIT is listed as 0x05, but in the source, it's listed as 0x02. Also, other sources such as the Linux UVC headers define it as 0x05.
In the UVC specification, this is listed under 4.2.2.1.14, PanTilt (Absolute) Control; however, I am unclear about the unit and selector codes that are required to issue this request.
I would love to get some help for the commands & code that needs to be written so that this application will work in OS X with IOKit.
With the help of a friend, we have found this: https://github.com/kazu/UVCCameraControl
It is a modified version of the code that I linked to in my question; however, it seems to have support for pan and tilt.
I have not tried it yet, but quickly looking at the code, it seems to support everything that I need.
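For reference, CT_PANTILT_ABSOLUTE_CONTROL is sent as an ordinary class-specific control transfer to the VideoControl interface. A rough sketch, assuming you already have an open IOUSBInterfaceInterface for that interface (as in the UVCCameraControl code); the camera terminal ID and interface number below are device-specific placeholders that have to be read from the camera's descriptors:

// Hedged sketch: send CT_PANTILT_ABSOLUTE_CONTROL (selector 0x0D) as a UVC
// SET_CUR request via IOKit. "intf" is assumed to be an already opened
// IOUSBInterfaceInterface** for the camera's VideoControl interface;
// kCameraTerminalID and kVCInterfaceNumber are placeholders (often 1 and 0,
// but not guaranteed) that must match the device's descriptors.
#include <IOKit/usb/IOUSBLib.h>
#include <cstring>

static const UInt8 kSetCur            = 0x01;   // UVC SET_CUR request
static const UInt8 kPanTiltAbsolute   = 0x0D;   // CT_PANTILT_ABSOLUTE_CONTROL selector
static const UInt8 kCameraTerminalID  = 0x01;   // placeholder: check descriptors
static const UInt8 kVCInterfaceNumber = 0x00;   // placeholder: check descriptors

IOReturn SetPanTilt(IOUSBInterfaceInterface **intf,
                    SInt32 panArcSeconds, SInt32 tiltArcSeconds)
{
    // Payload: dwPanAbsolute and dwTiltAbsolute, signed 32-bit little-endian
    // (memcpy is fine here because Intel Macs are little-endian).
    UInt8 data[8];
    memcpy(data,     &panArcSeconds,  4);
    memcpy(data + 4, &tiltArcSeconds, 4);

    IOUSBDevRequest req;
    req.bmRequestType = USBmakebmRequestType(kUSBOut, kUSBClass, kUSBInterface); // 0x21
    req.bRequest      = kSetCur;
    req.wValue        = kPanTiltAbsolute << 8;                    // selector in the high byte
    req.wIndex        = (kCameraTerminalID << 8) | kVCInterfaceNumber;
    req.wLength       = sizeof(data);
    req.pData         = data;

    return (*intf)->ControlRequest(intf, 0, &req);
}

Before the first SET_CUR it is usually worth issuing GET_MIN, GET_MAX and GET_RES (bRequest 0x82, 0x83, 0x84) for the same selector to learn the range and step size the camera accepts.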

capture video from screen for stream

I am looking for different solutions to capture a video stream from the monitor screen and send it to a video-streaming server for broadcasting on the web. It must happen live.
I'd rather not use external services like Procaster for the broadcast.
OS: Windows.
It would be great to hear the ideas and experience people have with accomplishing this.
Thanks all.
Recently I built a Go project called ScreenStreamer. It is a tool that streams the currently active window or the full screen (on Linux or Windows) to another device, such as a phone or another PC, as MJPEG over HTTP or FLV over RTMP. It is close to real time (delay < 100 ms) and works on Windows and Linux.
After building it, you can run it as:
# enter the project root directory
cd ./src/ScreenStreamer
# run it
./mjpeg or .\mjpeg.exe
# use a web browser or other video player, open http://host:port/mjpeg
./rtmp or .\rtmp.exe
# use a video player, open rtmp://host:port/live/screen
Windows SDK includes Push Source Filters Sample, which in turn contains CPushSourceDesktop filter/class.
CPushSourceDesktop: Copy of current desktop image (GDI only)
It captures the desktop image and pushes it into a DirectShow pipeline. From there you can process it with a video compression codec and stream it to a remote location. A decent screen-image compression codec is included with the Windows Media subsystem; network streaming will have to be a custom or third-party component. Alternatively, it is possible to make the capture class a virtual camera and have Windows Media Encoder broadcast it (or it may already have a similar feature built in).
Another option is to check the VNC source code (or one of its clones) and see how it hooks windows and captures image updates, then compresses them and makes them available to remote applications.
Note that you will have to specifically capture non-GDI images (such as those coming from video/gaming applications, which use hardware acceleration and non-RGB surfaces).
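If you only want the raw capture step that CPushSourceDesktop performs before handing frames to DirectShow, a bare GDI grab of the primary screen looks roughly like this (a sketch; error handling and the hand-off to an encoder or filter are omitted):

// Minimal GDI screen grab: copy the primary desktop into a memory bitmap.
// Pushing the bitmap into a DirectShow pipeline or an encoder is a separate step.
#include <windows.h>

HBITMAP CaptureDesktop()
{
    int width  = GetSystemMetrics(SM_CXSCREEN);
    int height = GetSystemMetrics(SM_CYSCREEN);

    HDC hScreenDC = GetDC(NULL);                    // DC for the whole screen
    HDC hMemDC    = CreateCompatibleDC(hScreenDC);  // memory DC to copy into
    HBITMAP hBmp  = CreateCompatibleBitmap(hScreenDC, width, height);

    HGDIOBJ old = SelectObject(hMemDC, hBmp);
    // Copy the screen contents; CAPTUREBLT also grabs layered windows.
    BitBlt(hMemDC, 0, 0, width, height, hScreenDC, 0, 0, SRCCOPY | CAPTUREBLT);
    SelectObject(hMemDC, old);

    DeleteDC(hMemDC);
    ReleaseDC(NULL, hScreenDC);
    return hBmp;    // caller owns the bitmap (DeleteObject when done)
}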

Device driver to act as a virtual web camera

I'm looking into writing virtual camera drivers. Does anybody have an idea?
Any book or link that would be helpful?
Adding more details:
I have developed a device driver which saves the image to disk, and the display side uses the device driver to show the image. The performance does not seem good.
The functions that I have used are:
// to capture
GetDesktopWindow()          // get a handle to the desktop window
CreateCompatibleBitmap()    // create a bitmap to hold the grabbed image
Save()                      // write the bitmap to disk
// to display
WM_MOUSEMOVE                // message handler that triggers capture and display
calling capture and display every time the message arrives,
but the display is not continuous and appears only after the window goes out of focus and comes back into focus again.
Should I use some other technique to record or display the images? What would give fruitful results? Please help.
Thanks,
-mitesh
What do you mean by virtual camera driver?
It is possible to write a virtual capture device using DirectShow. Such a virtual capture device can then be used by applications such as Skype. If that suffices for your needs, you can download vcam from http://tmhare.mvps.org/downloads.htm under the "Capture Source Filter" link.
Edit:
In order to use the capture device in the link I posted, you need to download the Windows SDK. The Windows SDK has a tool called GraphEdit. If you search online, I'm sure you can find a quick GraphEdit tutorial. Basically, GraphEdit allows you to construct a multimedia pipeline by connecting a bunch of filters. (This is what happens in the background, for instance, when you play a movie on your computer.) This could be something like
web cam -> renderer
or
file source -> some decoder -> renderer
and would result in you seeing the video captured by the web cam or the content of the file. The example download shows how you can construct a virtual capture device, i.e. it looks like media is coming from a "real" capture device, but actually you can generate any video you want if you adapt the code to your specific needs, e.g. take a screen grab and output that. Applications like Skype can pick up your virtual capture device if it is registered correctly.
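To make the "web cam -> renderer" idea concrete, the same graph that GraphEdit builds interactively can also be built in code with the capture graph builder. A rough sketch that previews the first video input device on the system (error handling and cleanup trimmed):

// Sketch: build "first video capture device -> video renderer" in code,
// the programmatic equivalent of wiring the same graph in GraphEdit.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    CoInitialize(NULL);

    IGraphBuilder *pGraph = NULL;
    ICaptureGraphBuilder2 *pBuild = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&pGraph);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void **)&pBuild);
    pBuild->SetFiltergraph(pGraph);

    // Find the first video capture device (a real webcam, or a registered
    // virtual capture source such as the sample Capture Source Filter).
    ICreateDevEnum *pDevEnum = NULL;
    IEnumMoniker *pEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&pDevEnum);
    pDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnum, 0);

    IMoniker *pMoniker = NULL;
    IBaseFilter *pCap = NULL;
    if (pEnum && pEnum->Next(1, &pMoniker, NULL) == S_OK) {
        pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void **)&pCap);
        pGraph->AddFilter(pCap, L"Capture");
        // Connect the capture pin to a default video renderer.
        pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                             pCap, NULL, NULL);
    }

    IMediaControl *pControl = NULL;
    pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
    pControl->Run();
    Sleep(10000);   // preview for ten seconds, then exit (cleanup omitted)

    CoUninitialize();
    return 0;
}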
The easiest way to find out if this is sufficient for your needs is to download the capture source filter, register it with the regsvr32 command, and then to use GraphEdit to insert the capture source into a graph, connect the source to a video renderer and hit the play button. A lot of the above mentioned concepts/keywords might seem new to you, but you can do some reading on each topic, and perhaps this will give you a point to get started.
Edit 2:
Is the capture source filter approach not sufficient for your requirements?
1) AFAIR you stated in your (now deleted) answer that you would like to take a screen grab, and use that as a virtual camera device for use in applications such as skype.
If that is all you require, you do NOT have to write a device driver. DirectShow can do that perfectly well by means of the capture source filter. You would then need to
learn some basic DirectShow
modify the source code of the capture filter to take screen grabs etc.
As far as books on writing a device driver to accomplish the same are concerned, I have no idea. The point I'm trying to make is that you need to determine whether you actually need to write a device driver or whether simply modifying the open-source capture filter is sufficient.

How do I tell OS X to ignore the input from one of two connected USB mice?

I have two USB mice connected to my Mac, one of which I'm using as a scanner. I need access to the Generic X and Y data but I don't want that data to move the cursor. How, under either carbon or cocoa environments, do I tell the system to ignore the mouse as a pointing device?
Edit: after some digging I've found that I can turn off mouse position updating with the CGAssociateMouseAndMouseCursorPosition() function, but this does not allow me to specify a single mouse. Can anyone explain the OS X relationship between HID mouse devices and the cursor? There has to be a binding between the hardware and software on a device by device basis but I can't find it.
I would look into writing a basic user-space driver for the mouse.
This will allow you direct access to the mouse as a USB device. You can also take control of the device from the system for your exclusive use.
There is some documentation here:
Working With USB Device Interfaces
To get you started, the setup steps to connect to a USB device go like this (I think; my IOKit is rusty):
include <IOKit/IOKitLib.h> and <IOKit/usb/IOUSBLib.h>
find the device you are interested in using IOServiceMatching(). This lets you find the correct USB device based on its properties, including things like vendor ID, etc. (see the IORegistryExplorer tool)
get a USB plugin instance (let's call it plugin) with IOCreatePlugInInterfaceForService()
use plugin from the previous step to get a device interface (let's call it device) using (*plugin)->QueryInterface()
device represents a connection handle to your USB device. Open it first using either (*device)->USBDeviceOpen() or (*device)->USBDeviceOpenSeize(); from there you should be able to send/receive data.
Sounds like a lot, I know, and there might be an easier way, but this is what comes to mind. There may be some benefits to having this level of control over the device; I'm not sure. Good luck.
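To give those steps a concrete shape, the matching/open sequence might look like the sketch below; the vendor and product IDs are placeholders for whatever IORegistryExplorer shows for your mouse:

// Sketch of the user-space IOKit sequence described above: match a USB device
// by vendor/product ID, get a device interface, and open it for exclusive use.
// The IDs below are placeholders, not real values.
#include <IOKit/IOKitLib.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/usb/IOUSBLib.h>
#include <CoreFoundation/CoreFoundation.h>

int main()
{
    SInt32 vendorID = 0x1234, productID = 0x5678;   // placeholders

    // Step 1: build a matching dictionary for the USB device.
    CFMutableDictionaryRef matching = IOServiceMatching(kIOUSBDeviceClassName);
    CFNumberRef vid = CFNumberCreate(NULL, kCFNumberSInt32Type, &vendorID);
    CFNumberRef pid = CFNumberCreate(NULL, kCFNumberSInt32Type, &productID);
    CFDictionarySetValue(matching, CFSTR(kUSBVendorID), vid);
    CFDictionarySetValue(matching, CFSTR(kUSBProductID), pid);
    CFRelease(vid);
    CFRelease(pid);

    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
    if (!service) return 1;

    // Step 2: get the IOCFPlugIn interface for the service.
    IOCFPlugInInterface **plugin = NULL;
    SInt32 score = 0;
    IOCreatePlugInInterfaceForService(service, kIOUSBDeviceUserClientTypeID,
                                      kIOCFPlugInInterfaceID, &plugin, &score);

    // Step 3: ask the plug-in for the device interface.
    IOUSBDeviceInterface **device = NULL;
    (*plugin)->QueryInterface(plugin, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                              (LPVOID *)&device);
    IODestroyPlugInInterface(plugin);

    // Step 4: open the device; USBDeviceOpenSeize claims it away from other clients.
    (*device)->USBDeviceOpenSeize(device);

    // ... talk to the device here, then:
    (*device)->USBDeviceClose(device);
    (*device)->Release(device);
    IOObjectRelease(service);
    return 0;
}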
