How can I capture frames from a Bodelin ProScope HR using AVCaptureSession on Mac OS X Lion? - cocoa

I am attempting to grab frames and preview the video from a Bodelin ProScope HR USB microscope. I have a simple Cocoa app using an AVCaptureSession with an AVCaptureDeviceInput for the ProScope HR and an AVCaptureVideoPreviewLayer displaying the output.
All of this works fine with the built-in iSight camera, but the output from the ProScope HR is garbled beyond recognition.
Using the bundled ProScope software, I sometimes see the same garbling at the higher resolutions. My suspicion is that the hardware is rather under-spec'd, which is bolstered by the frame rates: at the lowest resolution (320x200) the bundled software grabs at 30 fps, but as you bump up the resolution the frame rate drops dramatically, down to 15 fps at 640x480 and all the way down to 3.75 fps at the maximum resolution of 1600x1200.
EDIT: I originally thought that perhaps the frame rate being attempted by the AVCaptureSession was too high, but I have since confirmed that (at least in theory) the capture session is requesting the frame rate advertised by the AVCaptureDevice.
I should note that I have already tried all of the standard AVCaptureSessionPreset* constant presets defined in the headers, and none of them improved the results from the Proscope HR. (They did however appear to affect the built-in iSight in approximately the expected manner.)
Here is a screen capture showing the garbled output from the ProScope HR:
And, just for comparison, the output from a generic webcam:

According to the documentation, you should configure the AVCaptureDevice rather than the AVCaptureSession.
EDIT:
AVFoundation is built on top of IOKit and assumes the hardware itself is behaving correctly. In your case the root of the problem looks hardware-related, so you should consider using IOKit directly.

Related

Screenshot of the specific window (HWND, HW accelerated)

I need to capture a snapshots/screenshots of the specific window (HWND) that is using HW acceleration and record them to a video stream.
Using BitBlt or PrintWindow I can capture image data only if the window is not HW accelerated; otherwise I get a black texture.
Tried using User32.dll's undocumented DwmGetDxSharedSurface to get the DirectX surface handle, but it fails with an error:
ERROR_GRAPHICS_PRESENT_REDIRECTION_DISABLED - desktop windowing management subsystem is off
(Edit: it fails for certain applications, e.g. "calculator.exe".)
Tried using Dwmapi.dll's undocumented functions DwmpDxUpdateWindowSharedSurface and DwmpDxGetWindowSharedSurface. I've managed to retrieve what looks like a valid DirectX surface handle (its d3dFormat, width, and height were valid). D3D11's OpenSharedResource did not complain and created a valid ID3D11Texture2D. The problem is that all the bytes are zeros (I get a black texture). I might be doing something wrong here, or these undocumented DWM functions no longer work on Windows 10.
Edit: I'm able to get image data for some applications such as Windows Explorer and Paint, but for others, e.g. Slack, I get an all-black image.
Edit: When capturing e.g. VLC, I get this:
Question:
Is there any other way to capture image data of the HW accelerated window?
Note: I don't want to capture the entire desktop.
You can use PrintWindow with nFlags=2 (PW_RENDERFULLCONTENT)
Or use Magnification API (exclude windows)
Or try to hack dwm.exe.
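To illustrate the first suggestion, here is a minimal Win32 sketch (my own, not from the answer) using PrintWindow with PW_RENDERFULLCONTENT, which asks DWM to render the composited content and therefore also works for many DirectX-backed windows; it assumes Windows 8.1 or later and trims error handling for brevity:

```cpp
// Sketch: capture a (possibly HW-accelerated) window with PrintWindow
// and the PW_RENDERFULLCONTENT flag (value 2). Assumes Windows 8.1+.
#include <windows.h>

#ifndef PW_RENDERFULLCONTENT
#define PW_RENDERFULLCONTENT 0x00000002
#endif

HBITMAP CaptureWindow(HWND hwnd)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    HDC hdcWindow = GetDC(hwnd);
    HDC hdcMem    = CreateCompatibleDC(hdcWindow);
    HBITMAP hbm   = CreateCompatibleBitmap(hdcWindow, w, h);
    HGDIOBJ old   = SelectObject(hdcMem, hbm);

    // PW_RENDERFULLCONTENT asks DWM to render the full composited
    // content, including many DirectX-backed windows.
    if (!PrintWindow(hwnd, hdcMem, PW_RENDERFULLCONTENT)) {
        // Fall back to a plain BitBlt for non-accelerated windows.
        BitBlt(hdcMem, 0, 0, w, h, hdcWindow, 0, 0, SRCCOPY);
    }

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hwnd, hdcWindow);
    return hbm;   // caller owns the bitmap (DeleteObject when done)
}
```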

How can I capture the screen with Haskell on Mac OS X?

How can I capture the screen with Haskell on Mac OS X?
I've read Screen capture in Haskell?, but I'm working on a Mac Mini, so the Windows solution is not applicable, and the GTK solution does not work because on Macs it only captures a black screen.
How can I capture the screen with … and OpenGL?
Only with some luck. OpenGL is primarily a drawing API, and the contents of the main framebuffer are undefined unless they are drawn to by OpenGL functions themselves. The reason OpenGL could be abused this way lies in how graphics systems used to manage their on-screen windows' framebuffers: after a window without a predefined background color/brush was created, its initial framebuffer content was simply whatever was on the screen right before the window's creation. If an OpenGL context was created on top of this, the framebuffer could be read out using glReadPixels, thereby creating a screenshot.
Today window compositing has become the norm, which makes abusing OpenGL for taking screenshots nearly impossible. With compositing, each window has its own off-screen framebuffer, and the screen's contents are composited only at the end. If you use the method outlined above, which relies on uninitialized memory containing the desired content, on a compositing window system, the results vary wildly, ranging from a solid clear color through wildly distorted junk fragments to pure noise.
Since taking a screenshot reliably must account for many idiosyncrasies of the system it runs on, it is virtually impossible to write a truly portable screenshot program.
And OpenGL is definitely the wrong tool for it, no matter that people (including myself) were able to abuse it for such in the past.
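For completeness, the glReadPixels idiom the answer refers to looks like this. This is only a sketch: it assumes a current OpenGL context of the given size (the header is `<OpenGL/gl.h>` on macOS), and on a compositing desktop it returns the context's own framebuffer, not the screen behind it.

```cpp
// Sketch: read back the current OpenGL framebuffer with glReadPixels.
// Assumes a current GL context of size width x height.
#include <GL/gl.h>   // <OpenGL/gl.h> on macOS
#include <vector>

std::vector<unsigned char> ReadFramebuffer(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadBuffer(GL_FRONT);                // the on-screen buffer
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                 pixels.data());
    return pixels;                         // bottom-up row order
}
```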
I wrote this C code to capture the screen on Macs and show it in an OpenGL window via glDrawPixels:
opengl-capture.c
http://pastebin.com/pMH2rDNH
Coding the FFI for Haskell is quite trivial. I'll do it soon.
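A minimal version of the C side that such an FFI binding could call is sketched below (my own sketch, not the pastebin code): it captures the main display with CoreGraphics. It assumes macOS 10.6+ and linking against the ApplicationServices framework.

```cpp
// Sketch of a C-callable helper a Haskell FFI binding could import:
// capture the main display via CoreGraphics. Assumes macOS 10.6+;
// link with: -framework ApplicationServices
#include <ApplicationServices/ApplicationServices.h>

extern "C" CGImageRef capture_main_display(size_t *out_w, size_t *out_h)
{
    CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
    if (image != NULL) {
        *out_w = CGImageGetWidth(image);
        *out_h = CGImageGetHeight(image);
    }
    return image;   // caller must CGImageRelease() it
}
```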
This might be useful to find the solution in C:
NeHe Productions - Using gluUnProject
http://nehe.gamedev.net/article/using_gluunproject/16013/
Apple Mailing Lists - Re: Screen snapshot example code posted
http://lists.apple.com/archives/cocoa-dev/2005/Aug/msg00901.html
Compiling OpenGL programs on Windows, Linux and OS X
http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2012/compiling.html
Grab Mac OS Screen using GL_RGB format

Quartz Composer using video capture output as GLSL shader environment

L.S.,
A year ago I made a very simple screen saver using Quartz Composer on Snow Leopard (SL).
The screen saver captures the input of the built-in camera with the "Video Capture" patch and uses the images as input for the environment parameter of the "GLSL Shader" patch, as found in the stock GLSL Environment Map.qtz example. The shader in turn maps the video capture onto the famous 3D teapot, creating the illusion of a chrome teapot mirroring the person in front of the iMac or MacBook. You can find the screen saver here: Compressed QC source
Under Mountain Lion (ML), the output of the video capture fails to function as input for the environment of the GLSL shader patch.
The video capture itself still works, though: you can still use it as the input for the image parameter of the teapot patch.
Furthermore, it doesn't matter whether I run the screen saver as a screen saver or in the QC runner.
Anybody any idea what's happening? The question boils down to: why, under ML, is it not possible to use the video capture output as the environment for the GLSL shader patch?
The screen saver, as simple as it is, is quite popular, and it would be a shame if people couldn't enjoy it any more.
I'm eagerly looking forward to a solution!

Controlling the aspect ratio in DirectShow (full screen mode)

I'm using DirectShow with a simple approach (IGraphBuilder::RenderFile) and try to control everything else by querying supplemental interfaces.
The option in question is the aspect ratio. I thought it was maintained by default, but in fact the same program behaves differently on different machines (maybe different versions of DirectX). This is not a huge problem for video in a window, because I can maintain the aspect ratio of my window myself (based on the video size), but for full-screen mode I cannot see how to control it.
I found that there are at least two complex options: for VMR video, and by adding an overlay mixer. But is there a known way of doing this for IGraphBuilder::RenderFile video?
When you do IGraphBuilder::RenderFile, it internally adds a video renderer filter to the graph. It is typically a VMR-7 Video Renderer Filter:
In Windows XP and later, the Video Mixing Renderer 7 (VMR-7) is the
default video renderer. It is called the VMR-7 because internally it
uses DirectDraw 7.
At this point you can enumerate the graph's filters, locate the VMR-7, and use its interfaces, such as IVMRAspectRatioControl, to specify the mode of interest.
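That enumeration might look like the following sketch, which assumes the graph has already been built with RenderFile and abbreviates COM error handling; it walks the filters, queries each for IVMRAspectRatioControl (exposed by the VMR-7 renderer), and enables letterboxing:

```cpp
// Sketch: after IGraphBuilder::RenderFile, find the filter exposing
// IVMRAspectRatioControl (the VMR-7 renderer) and enable letterboxing.
#include <dshow.h>

HRESULT SetLetterbox(IGraphBuilder *pGraph)
{
    IEnumFilters *pEnum = NULL;
    HRESULT hr = pGraph->EnumFilters(&pEnum);
    if (FAILED(hr)) return hr;

    IBaseFilter *pFilter = NULL;
    hr = E_NOINTERFACE;   // stays set if no filter exposes the interface
    while (pEnum->Next(1, &pFilter, NULL) == S_OK) {
        IVMRAspectRatioControl *pArc = NULL;
        if (SUCCEEDED(pFilter->QueryInterface(IID_IVMRAspectRatioControl,
                                              (void **)&pArc))) {
            hr = pArc->SetAspectRatioMode(VMR_ARMODE_LETTER_BOX);
            pArc->Release();
            pFilter->Release();
            break;
        }
        pFilter->Release();
    }
    pEnum->Release();
    return hr;
}
```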

How does software like GotoMeeting capture an image of the desktop?

I was wondering how software like GoToMeeting captures the desktop. I can do a full-screen (or block-by-block) capture using GDI, but that just seems too wasteful to me. I have also looked into mirror devices, but I was wondering if there's a simpler technique or a library out there that does this.
I need fast and efficient desktop screen capture (10-15 fps) that I will eventually convert into a video file and integrate with my application to send the captured feed over the network.
Thanks!
Yes, taking a screen capture and computing the diff against the previous capture is a good way to reduce transmission bandwidth, since you send only the changes. This is similar to video encoding techniques, which do it block by block.
It still means you need to do a capture plus extra processing to compute the difference, i.e. to encode it.
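The block-by-block diff described above can be sketched portably. This is an illustrative helper (the names are mine, not from any capture API): it compares two same-sized grayscale frames tile by tile and returns the tiles that changed, which is what you would re-encode and send.

```cpp
// Illustrative sketch of block-by-block frame diffing: split two
// same-sized grayscale frames into 16x16 tiles and report which tiles
// differ. Only the changed tiles would then be encoded and transmitted.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Tile { int x, y; };   // top-left corner of a changed tile

std::vector<Tile> DiffFrames(const std::vector<uint8_t> &prev,
                             const std::vector<uint8_t> &cur,
                             int width, int height, int tile = 16)
{
    std::vector<Tile> changed;
    for (int ty = 0; ty < height; ty += tile) {
        for (int tx = 0; tx < width; tx += tile) {
            bool differs = false;
            // Scan the tile, clamping at the frame edges.
            for (int y = ty; y < ty + tile && y < height && !differs; ++y)
                for (int x = tx; x < tx + tile && x < width; ++x)
                    if (prev[static_cast<size_t>(y) * width + x] !=
                        cur [static_cast<size_t>(y) * width + x]) {
                        differs = true;
                        break;
                    }
            if (differs) changed.push_back({tx, ty});
        }
    }
    return changed;
}
```

In a real capturer you would feed this with successive GDI (or mirror-driver) captures and pass only the dirty tiles to the encoder.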
by using the Mirror devices you can get both the updated Rectangle that are change and also pointer to the Screen. updated Rectangle pointer point to all the rectangle that are change , these rectangle are the change rectangle that are frequently changing. filter out some of the rectangle because in one second you can get 1000 of rectangles.
I would either do full-screen captures and then perform image processing to isolate the parts of the screen that have changed (to save bandwidth), or use a program like CamStudio.
I get 20 to 30 frames per second using a mirror driver and display them in my picture box, but on full-screen updates the frames get buffered, because the picture box is slow. I changed to my own component, which is somewhat faster, but still not good at full screen: on average I display 10 fps full screen. My problem is rendering frames: I can capture 20 to 30 frames per second, but I can only render 8 to 10 full-screen frames per second. If anyone has achieved full-screen frame rendering, please reply.
What language?
.NET provides Graphics.CopyFromScreen.
