Capture video from FireWire cameras - Cocoa

I am working on a project where I need to capture video from two FireWire cameras and do some processing on it. The one caveat is that I need to control the camera features (gain, gamma, white balance, etc.).
After searching around I found two projects that seem to work. One is a Cocoa project that interfaces with OpenCV, but as far as I can tell, I can't access the camera features through that code. The other is an openFrameworks project that does let me access the camera's features but doesn't currently interface with OpenCV.
I'm thinking about combining the two projects, using the openFrameworks one to set the camera features and the Cocoa one to capture the images and do the processing. Is that a feasible idea, or is there a better way to do this?

openFrameworks interfaces with OpenCV. There's an addon called ofxOpenCv, and there are a couple of examples using it in openFrameworks/apps/addonsExamples.
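As a rough illustration (method signatures vary between openFrameworks releases, so treat this as a sketch rather than exact code), a minimal ofxOpenCv app that grabs camera frames and converts them for OpenCV processing looks something like this:

```cpp
// Minimal openFrameworks app using the ofxOpenCv addon.
// Assumes an older openFrameworks API where getPixels() returns unsigned char*;
// newer versions return an ofPixels reference instead.
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber      grabber;   // camera input
    ofxCvColorImage     colorImg;  // wraps an IplImage, so the raw OpenCV image is available
    ofxCvGrayscaleImage grayImg;

    void setup() {
        grabber.initGrabber(640, 480);
        colorImg.allocate(640, 480);
        grayImg.allocate(640, 480);
    }

    void update() {
        grabber.update();
        if (grabber.isFrameNew()) {
            colorImg.setFromPixels(grabber.getPixels(), 640, 480);
            grayImg = colorImg;   // colour -> grayscale conversion
            // grayImg.getCvImage() exposes the IplImage* for further OpenCV calls
        }
    }

    void draw() {
        grayImg.draw(0, 0);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
    return 0;
}
```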

Related

Mac app with Unity3D

I need to create a Mac app that shows 3D models, and those models should support multi-touch events. The app will run on a Mac, and the touch screen will be a Mac OS X touch screen. I will be using Unity3D to create the 3D models in a scene. I want to know how I can integrate these Unity models/scenes into my Mac app.
One option is to embed the Unity Web Player in my Mac app, but the issue is that the Web Player doesn't seem to recognize multi-touch events like pinch-to-zoom.
The other option is to integrate/embed the Unity3D models into my app directly, but I have no clue how to do that. In the Unity build options I can build a Mac standalone, but that creates a .app file directly, so I don't think I can use it to integrate with my Mac app.
Any help on this is really appreciated. Thanks.
Why bother making a second app and integrating the Unity app into the first?
Why not just make the whole thing in Unity?
Forgot to mention: there are a couple of free Unity demos that illustrate how to create or modify meshes procedurally (if you are trying to make anything more complex than basic primitive shapes).
What kind of 3D models are you trying to make? Unity3D can only make primitives and really isn't designed for 3D modeling. So if you are trying to make more than just boxes, spheres, etc., you will need another program. Blender is free and works well with Unity. There is a plugin that will allow you to make 3D models in Unity: http://gamedraw.mixeddimensions.com/ I haven't tried it myself, but they have a free version you could try before buying the full plugin to see if it fits your needs. I agree with the previous reply: if you are going to use Unity, you might as well make the whole app in Unity.

How can I access a laptop built-in camera?

Is it possible to access the camera on a MacBook with REALbasic? I'd like to allow a user to capture an image from the camera.
I found three possible solutions:
A free plugin: CamCapture. This should work for anyone needing an easy way to capture images from the built-in iSight camera. It should also work for other QuickTime-capable camera sources. There is an example project but no documentation. FYI: the site and example are in French.
A commercial option, which I wasn't able to try, is QTKit from MonkeyBread Software. This option is not free, but it is documented and supported, unlike the free option.
realcapture is a free, unsupported RB canvas. It uses declares to access the camera.

Suitable technologies for a Windows video tool

A few years ago, DirectShow was around and let you manage video on DirectDraw surfaces. But since then I think both technologies have been replaced. What's currently the best solution for making a Windows app that can composite/blend/mix videos and music together? Does one still need to go the DirectX route with surfaces/textures, or is the functionality available in the core Windows APIs?
Examples might be to overlay an image on a playing video, overlay two videos on top of each other with a transition effect, etc.
Apart from the core technologies for handling video/audio, are there good third-party libraries? Or maybe the core APIs have enough functionality on their own?
If you're talking about managed code:
Microsoft.DirectX.AudioVideoPlayback
Short tutorial here:
http://forum.codecall.net/csharp-tutorials/20436-tutorial-playing-video-files-managed-directx.html

How do I read a video camera in a Win32 C program

I have this garden variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that's not how Windows presents the API. Not knowing what function names Windows might use for a video device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case, you have three possible Windows APIs for capture:
DirectShow
VfW (Video for Windows). Has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available on Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
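To give a sense of what that involves, here is a bare-bones sketch of a DirectShow preview graph for the first video capture device (error handling and Release() calls are omitted; to pull raw frames into your own buffers you would additionally insert a SampleGrabber filter):

```cpp
// Sketch of a minimal DirectShow capture/preview graph (C++, COM).
#include <windows.h>
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")

int main() {
    CoInitialize(NULL);

    IGraphBuilder *graph = NULL;
    ICaptureGraphBuilder2 *builder = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&graph);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void**)&builder);
    builder->SetFiltergraph(graph);

    // Enumerate video input devices and take the first one.
    ICreateDevEnum *devEnum = NULL;
    IEnumMoniker *monikers = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&devEnum);
    devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &monikers, 0);

    IMoniker *moniker = NULL;
    IBaseFilter *capture = NULL;
    if (monikers && monikers->Next(1, &moniker, NULL) == S_OK) {
        moniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&capture);
        graph->AddFilter(capture, L"Capture");
        // Connect the device's preview pin to a default video renderer.
        builder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                              capture, NULL, NULL);
    }

    IMediaControl *control = NULL;
    graph->QueryInterface(IID_IMediaControl, (void**)&control);
    control->Run();
    Sleep(5000);   // preview for five seconds
    control->Stop();

    CoUninitialize();
    return 0;
}
```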
Another possibility is to use OpenCV. OpenCV is an image processing library, which you could also use for the processing itself. OpenCV has an image capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
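For comparison, and since the question is about a plain C program, OpenCV's legacy C capture interface is about as small as it gets. A sketch, assuming OpenCV 1.x/2.x with highgui available:

```cpp
/* Sketch of frame capture with OpenCV's legacy C interface.
   Compiles as C or C++. cvQueryFrame returns a pointer to an internal
   buffer, so do not free the frame yourself. */
#include <opencv/highgui.h>   /* or "highgui.h" on OpenCV 1.x installs */

int main(void) {
    CvCapture *capture = cvCaptureFromCAM(0);    /* first camera found by the OS */
    if (!capture) return 1;

    for (int i = 0; i < 100; i++) {
        IplImage *frame = cvQueryFrame(capture); /* decoded BGR frame */
        if (!frame) break;

        /* frame->imageData is the raw pixel buffer: widthStep bytes per row,
           3 channels (BGR), 8 bits per channel by default -- effectively the
           "2D array of pixel information" to hand to your own algorithms. */
    }

    cvReleaseCapture(&capture);
    return 0;
}
```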
The API is the way to go.
A good indication of whether the camera requires a bespoke API is whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know which compression codec was used for the AVI file, you could unpack it.
Ideally you would capture the video in a native format (YUV, RGB15, or similar), as then you can work on compression as well as manipulation.

Exposure Lock in iSight

I am creating an object-detection program on the Mac.
I want to run the iSight in manual exposure mode to improve detection quality.
I tried iGlasses & QTKit Capture to do that and it worked, but the program runs very slowly and is unstable.
So I want to try another solution.
In PhotoBooth.app, the iSight seems to run in fixed exposure mode, so there should be a way to do it.
I read the QTKit Capture documents and the OpenCV documents but couldn't find the answer.
If you have any ideas, please tell me.
Thank you.
QTKit Capture, as easy as it is to use, lacks the ability to set manual camera parameters like gain, brightness, focus, etc. If you were using a Firewire camera, I'd suggest looking into the libdc1394 library, which gives you control over all these values and more if you're using an IIDC Firewire camera (like the old external iSight). I use this library for video capture from, and control of, CCD cameras on a robotics platform.
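For reference, with libdc1394 version 2 the feature control looks roughly like this (the feature values are placeholders you would tune for your camera, and error checking is omitted):

```cpp
/* Sketch: manual exposure/gain/white balance on an IIDC FireWire camera
   with libdc1394 v2. Feature values below are placeholders. */
#include <dc1394/dc1394.h>

int main(void) {
    dc1394_t *ctx = dc1394_new();
    dc1394camera_list_t *list = NULL;
    dc1394_camera_enumerate(ctx, &list);
    if (!list || list->num == 0) return 1;

    dc1394camera_t *cam = dc1394_camera_new(ctx, list->ids[0].guid);
    dc1394_camera_free_list(list);

    /* Switch features from auto to manual, then set explicit values. */
    dc1394_feature_set_mode(cam, DC1394_FEATURE_EXPOSURE, DC1394_FEATURE_MODE_MANUAL);
    dc1394_feature_set_value(cam, DC1394_FEATURE_EXPOSURE, 200);

    dc1394_feature_set_mode(cam, DC1394_FEATURE_GAIN, DC1394_FEATURE_MODE_MANUAL);
    dc1394_feature_set_value(cam, DC1394_FEATURE_GAIN, 64);

    dc1394_feature_set_mode(cam, DC1394_FEATURE_WHITE_BALANCE, DC1394_FEATURE_MODE_MANUAL);
    dc1394_feature_whitebalance_set_value(cam, 500 /* blue/U */, 600 /* red/V */);

    dc1394_camera_free(cam);
    dc1394_free(ctx);
    return 0;
}
```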
However, I'm guessing that you're interested in the internal iSight camera, which is USB. Wil Shipley briefly mentions control of parameters on internal USB iSights in his post "Frozen in Carbonite", but most of the Carbon code he lays out controls those values in IIDC Firewire cameras.
Unfortunately, according to this message in the QuickTime mailing list by Brad Ford, it sounds like you can't programmatically control anything but saturation and sharpness on builtin iSights through the exposed interfaces. He speculates that iGlasses is post-processing the image in software, which is something you could do using Core Image filters.
I finally managed to lock my iSight's autoexposure/autowhitebalance from my Cocoa App.
Check out www.paranoid-media.de/blog for more info.
Hmmm, I tried and googled a lot these days, but I couldn't find a good solution.
I think OpenCV + Cocoa + iGlasses is the fastest combination, but it is still unstable.
If you have a good idea, please reply.
Thank you.
The UVC Camera Control for Mac OS X by PhobosLab uses basic USB commands and documented USB interfaces to access the webcam controls. The paranoid-media.de/blog post listed above links to PhobosLab and provides a few additional tweaks to that method for the iSight. (Those tweaks can now also be found in the comments at PhobosLab.)
