Exposure Lock in iSight - cocoa

I am creating an object-detection program on the Mac.
I want to run the iSight in manual exposure mode to improve detection quality.
I tried iGlasses & QTKit Capture to do that, and it worked, but the program runs very slowly and is unstable.
So I want to try another solution.
In Photo Booth.app the iSight seems to run in a fixed exposure mode, so there might be a way to do this.
I read the QTKit Capture documentation and the OpenCV documentation, but I couldn't find the answer.
If you have any ideas, please tell me.
Thank you.

QTKit Capture, as easy as it is to use, lacks the ability to set manual camera parameters like gain, brightness, focus, etc. If you were using a Firewire camera, I'd suggest looking into the libdc1394 library, which gives you control over all these values and more if you're using an IIDC Firewire camera (like the old external iSight). I use this library for video capture from, and control of, CCD cameras on a robotics platform.
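For reference, locking exposure through libdc1394 looks roughly like the sketch below. This only applies to IIDC FireWire cameras such as the external iSight, and the feature values are placeholders; ask the camera for its real ranges.
```cpp
// Minimal libdc1394 v2 sketch: put the exposure and shutter features of an
// IIDC FireWire camera into manual mode and pin them to fixed values.
#include <dc1394/dc1394.h>
#include <cstdio>

int main() {
    dc1394_t* ctx = dc1394_new();
    dc1394camera_list_t* list = nullptr;
    if (dc1394_camera_enumerate(ctx, &list) != DC1394_SUCCESS || list->num == 0) {
        std::fprintf(stderr, "no IIDC camera found\n");
        return 1;
    }
    dc1394camera_t* cam = dc1394_camera_new(ctx, list->ids[0].guid);
    dc1394_camera_free_list(list);

    // Disable the automatic modes, then set fixed values. The numbers are
    // placeholders; dc1394_feature_get_boundaries() reports the valid range
    // for your particular camera.
    dc1394_feature_set_mode(cam, DC1394_FEATURE_EXPOSURE, DC1394_FEATURE_MODE_MANUAL);
    dc1394_feature_set_value(cam, DC1394_FEATURE_EXPOSURE, 200);
    dc1394_feature_set_mode(cam, DC1394_FEATURE_SHUTTER, DC1394_FEATURE_MODE_MANUAL);
    dc1394_feature_set_value(cam, DC1394_FEATURE_SHUTTER, 500);

    dc1394_camera_free(cam);
    dc1394_free(ctx);
    return 0;
}
```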
However, I'm guessing that you're interested in the internal iSight camera, which is USB. Wil Shipley briefly mentions control of parameters on internal USB iSights in his post "Frozen in Carbonite", but most of the Carbon code he lays out controls those values in IIDC Firewire cameras.
Unfortunately, according to this message on the QuickTime mailing list by Brad Ford, it sounds like you can't programmatically control anything but saturation and sharpness on built-in iSights through the exposed interfaces. He speculates that iGlasses is post-processing the image in software, which is something you could do using Core Image filters.
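Core Image is one option for that post-processing; since you're already looking at OpenCV, a software-only adjustment could be as simple as the rough sketch below. It is not a true exposure lock, just a fixed gain/offset applied to each captured frame, and the alpha/beta values are arbitrary.
```cpp
// Software-only "exposure" correction: out = alpha * frame + beta, same bit depth.
// This post-processes the already-captured image; it does not change what the
// sensor actually does. alpha (gain) and beta (offset) are placeholder values.
#include <opencv2/core.hpp>

cv::Mat adjustExposure(const cv::Mat& frame, double alpha = 1.5, double beta = -20.0) {
    cv::Mat out;
    frame.convertTo(out, -1, alpha, beta);  // rtype = -1 keeps the input depth
    return out;
}
```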

I finally managed to lock my iSight's auto-exposure/auto-white-balance from my Cocoa app.
Check out www.paranoid-media.de/blog for more info.

Hmmm,
I have tried and googled a lot these past few days, but I couldn't find a good solution.
I think OpenCV + Cocoa + iGlasses is the fastest one, but it is still unstable.
If you have a good idea, please reply.
Thank you.

The UVC Camera Control for Mac OS X by phoboslab uses basic USB commands and documented USB interfaces to access the webcam controls. The paranoid-media.de/blog post listed above links to PhobosLab and provides a few additional tweaks to that method for the iSight. (Those tweaks can now also be found in the comments at phoboslab.)
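For context, that method boils down to sending standard UVC class requests to the camera's video-control interface. The phoboslab code does this through IOKit's USB interfaces; the sketch below uses libusb instead, purely to show the shape of the request, and the product ID, interface number, and camera-terminal unit ID are placeholders you would have to read from your own camera's descriptors.
```cpp
// Sketch of a UVC SET_CUR class request that switches the camera terminal to
// manual auto-exposure mode (UVC spec: CT_AE_MODE_CONTROL, value 1 = manual).
// The real phoboslab/paranoid-media code uses IOKit's IOUSBInterfaceInterface;
// libusb is used here only for brevity. VID/PID, interface and unit ID below
// are placeholders.
#include <libusb-1.0/libusb.h>
#include <cstdio>

int main() {
    const uint16_t kVendorId  = 0x05ac;  // Apple's vendor ID
    const uint16_t kProductId = 0x8507;  // placeholder: your camera's product ID
    const uint16_t kInterface = 0x00;    // video-control interface number (placeholder)
    const uint16_t kUnitId    = 0x01;    // camera terminal ID from the descriptors (placeholder)

    libusb_context* ctx = nullptr;
    libusb_init(&ctx);
    libusb_device_handle* dev = libusb_open_device_with_vid_pid(ctx, kVendorId, kProductId);
    if (!dev) { std::fprintf(stderr, "camera not found\n"); return 1; }
    libusb_claim_interface(dev, kInterface);  // may fail if a system driver owns it

    // bmRequestType 0x21 = host-to-device, class request, recipient interface
    // bRequest      0x01 = SET_CUR
    // wValue        control selector << 8 (0x02 = CT_AE_MODE_CONTROL)
    // wIndex        (unit ID << 8) | interface number
    uint8_t aeMode = 0x01;  // 1 = manual exposure mode per the UVC spec
    int rc = libusb_control_transfer(dev, 0x21, 0x01,
                                     0x02 << 8, (kUnitId << 8) | kInterface,
                                     &aeMode, sizeof(aeMode), 1000);
    std::printf("SET_CUR returned %d\n", rc);

    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}
```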

Related

Where can I find an example of creating a FaceTime-comparable camera on macOS

Many of us are working from home more, and I envy the Windows folks who have a virtual webcam plugin for OBS (Open Broadcaster Software). OBS and the Windows plugin are open-source projects. As a competent software engineer I should be able to create a plugin that works on macOS, but I am not a hardened macOS dev.
I am sure I am not googling for the correct APIs and subsystems. If someone could help me with the Apple concept map for this obscure topic, I would be grateful for a set of crumbs that leads to the macOS API call(s) to create a camera. I know it can be done, since SnapCam does it, but that is a closed-source app.
I am aware of the workaround for OBS that
1) uses code injection and requires disabling security features,
2) doesn't even work in current versions of macOS, and
3) requires yet another app running with video previews etc.
I like the challenge of creating this plugin. I am also wise enough to ask for a road map if one is available.
Someone beat me to it: https://github.com/johnboiles/obs-mac-virtualcam
I thought I would search github.com directly with the query "virtual camera macos site:github.com". Constraining the search to just GitHub was quite useful.

How to expose a virtual camera on macOS?

I want to write my own camera filters for videochat, and ideally apply them in any/all of the popular videochat applications (Zoom, Hangouts, Skype, etc.). The way I imagine this working is to write a macOS application that reads the camera feed, applies my filters, and exposes an additional virtual camera. This virtual camera could then be selected in whichever videochat application.
I've spent many hours researching how to do this and I'm still not clear if it's even possible with modern macOS APIs. There are a few similar questions on StackOverflow (e.g. here, here), but they are either unanswered or very old. I'm hoping this question will collect advice/links/ideas in the right direction for how to do this as of 2020.
Here's what I got so far:
There's a popular tool in the live streaming community called OBS Studio. It captures input from different sources (camera, desktop, etc.), has a plugin system for applying effects, and then streams the output to popular services (e.g. Twitch). However, there is no functionality to expose the stream as a virtual camera on macOS. In discussions about this (thread, thread), folks talk about a tool called Syphon and a tool called CamTwist.
Unfortunately, Syphon doesn't expose a virtual camera anymore: "SyphonInject NO LONGER WORKS IN macOS 10.14 (Mojave). Apple closed up the loophole that allows scripting additions in global directories to load into any process. Trying to inject into any process will silently fail. It will work if SIP is disabled, but that's a terrible idea and I'm not going to suggest or help anyone do that."
Fortunately, CamTwist works. I got it running on macOS Catalina, applied some of its built-in effects to my camera stream, and saw it show up as a new camera in my Hangouts settings (after restarting Chrome). This was encouraging.
Unfortunately, CamTwist is rather old and not well maintained. It uses Quartz Composer for implementing effects, but Quartz Composer was deprecated by Apple and it's probably living its last days in Catalina.
The macOS SDK used to have an API called CoreMediaIO, which might have been the way to expose a virtual camera, but this API was also deprecated. It's not clear if/what is a modern alternative.
I guess another way of asking this whole question is: how is CamTwist implemented, how come it still works in macOS Catalina, and how would you implement the same thing in 2020?
Anything that sheds some light on all of this would be highly appreciated!
I also want to create my own camera filter, like Snap Camera.
So I researched CoreMediaIO and Syphon.
Did you check this GitHub project?
https://github.com/lvsti/CoreMediaIO-DAL-Example
This repository started off as a fork of the official CoreMediaIO sample code by Apple.
The original code hasn't aged well, since it was last updated in 2012, so the owner of the repository changed it to compile on modern systems.
You can see that the code works on macOS 10.14 (Mojave) in the following issue:
https://github.com/lvsti/CoreMediaIO-DAL-Example/issues/4
I actually have not created the camera filter yet, because I don't know how to send images to a virtual camera built with CoreMediaIO.
I would like to know more about this; if you know, please tell me.
CamTwist uses CoreMediaIO. What makes you think that's deprecated? Looking at the headers in the 10.15 SDK, I see no indication that it's deprecated, and there were updates as recently as 10.14.
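For what it's worth, you can verify from user space that the CoreMediaIO DAL is still alive: the plain C API (the same layer the lvsti sample plugs into) still enumerates capture devices on current systems. A minimal sketch, assuming the HAL-style property getters that mirror Core Audio's:
```cpp
// Ask the CoreMediaIO system object for its device list and report how many
// devices the DAL (including any installed virtual-camera plugins) exposes.
// Build: clang++ cmio_list.cpp -framework CoreMediaIO -framework CoreFoundation
#include <CoreMediaIO/CMIOHardware.h>
#include <cstdio>
#include <vector>

int main() {
    CMIOObjectPropertyAddress addr = {
        kCMIOHardwarePropertyDevices,
        kCMIOObjectPropertyScopeGlobal,
        kCMIOObjectPropertyElementMaster
    };

    UInt32 dataSize = 0;
    OSStatus err = CMIOObjectGetPropertyDataSize(kCMIOObjectSystemObject, &addr,
                                                 0, nullptr, &dataSize);
    if (err != 0) { std::fprintf(stderr, "error %d\n", (int)err); return 1; }

    std::vector<CMIODeviceID> devices(dataSize / sizeof(CMIODeviceID));
    UInt32 dataUsed = 0;
    err = CMIOObjectGetPropertyData(kCMIOObjectSystemObject, &addr,
                                    0, nullptr, dataSize, &dataUsed, devices.data());
    if (err != 0) { std::fprintf(stderr, "error %d\n", (int)err); return 1; }

    std::printf("DAL reports %zu video device(s)\n", devices.size());
    return 0;
}
```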

How to programmatically create custom input/output devices?

How do I create a custom media input/output device, like a speaker or microphone, that I can select from a program like Skype? For example, I could make a greyscale webcam that reads the real webcam and converts the image to greyscale, or a custom beep speaker that takes anything a program sends to the speaker and adds a beep after 3 seconds, etc. An example would be this:
http://www.videohelp.com/tools/UScreenCapture
I just need help with how to create the actual (virtual?) device, not with how to make it greyscale etc.; I can figure that out later.
Where do I even begin to search for tutorials/reading on this? As per the tags, I prefer Qt/C++, but it doesn't necessarily have to be that. Just a nudge in the right direction to get me started would be fine.
You need to create a device driver. What that entails depends entirely on the platform and the type of device you want to emulate.
Start with your operating system's documentation and look up the references as if you were developing a new hardware device; you'll simply skip any actual hardware interfaces.
Nevertheless, this is likely to require kernel programming, so Qt is likely to be inappropriate.

How can I access a laptop's built-in camera?

Is it possible to access the camera in a MacBook with REALbasic? I'd like to allow a user to capture an image from the camera.
Found 3 possible solutions:
A free plugin: CamCapture.
This should work for anyone needing an easy method to capture images from the built-in iSight camera. It should also work for other QuickTime-capable camera sources. There is an example project but no documentation. FYI, the site and the example are in French.
A commercial option, which I wasn't able to try, is QTKit from MonkeyBread Software. This option is not free, but it is documented and supported, unlike the free option.
realcapture is a free, unsupported RB canvas. It uses declares to access the camera.

How do I read a video camera in a win32 C program

I have this garden-variety USB video camera, and it came with two mini-apps: one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image-processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows is going to present the API. Not knowing what function names Windows might use for a video-device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities I have at my disposal.
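To make that concrete, the Linux pattern I have in mind is roughly the V4L2 sketch below (device path and format are just placeholders, and error handling is omitted); what I'm after is the Windows equivalent.
```cpp
// The Linux model: open the device node, configure it with ioctl(), then
// read() frames, assuming the driver supports the simple read() I/O method.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <vector>
#include <cstdio>

int main() {
    int fd = open("/dev/video0", O_RDWR);               // device path is a placeholder
    if (fd < 0) { std::perror("open"); return 1; }

    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;         // ask for raw YUV 4:2:2
    ioctl(fd, VIDIOC_S_FMT, &fmt);                       // configure the device

    std::vector<unsigned char> frame(fmt.fmt.pix.sizeimage);
    ssize_t n = read(fd, frame.data(), frame.size());    // grab one frame
    std::printf("read %zd bytes\n", n);

    close(fd);
    return 0;
}
```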
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check whether Windows supports your device.
If that is the case, you have three possible Windows APIs for capture:
DirectShow
VfW (Video for Windows). Has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available in Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image-processing library, so you can also use it to process the captured frames. It has an image-capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
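For the OpenCV route, the capture side is only a few lines. A minimal sketch, assuming camera index 0 is the device Windows exposes:
```cpp
// Minimal OpenCV capture loop: grab frames from the first camera the OS
// exposes and hand them to your own processing code as 2D pixel arrays.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // 0 = first video device
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {                // frame is a BGR cv::Mat (rows x cols x 3)
        // ... run your image-processing experiments on frame here ...
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}
```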
The standard Windows API is the way to go.
A good indication of whether the camera requires a bespoke API is to check whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know which compression codec has been used for the AVI file, you could unpack it.
Ideally you would capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.
