ISampleGrabber deprecated? - winapi

I have an old computer vision experiment that uses Video for Windows to grab frames from a camera connected to the PC. It's a hack: it uses VfW to create a preview window, then does a GetDIBits from the window DC.
I'm finally ready to port this to DirectShow. My understanding was that I could grab frames from a video capture graph by using ISampleGrabber, but now I read that ISampleGrabber is deprecated.
What's the non-deprecated way to grab frames from a video feed? Do I have to implement my own DirectShow filter that does essentially what ISampleGrabber does?

DirectShow is not deprecated; only DirectShow Editing Services (DES) is. I would strongly recommend using DirectShow because of the much wider level of support, unless there are specific features of MF that you need.
There's been no development of DES for some years, but the sample grabber is a widely-used filter that is largely independent of DES. I would be happy to recommend that you use it. If it became an issue in a future version of Windows, it would be no more than a day or two's work to replace the filter.
G

I think Windows Media Foundation would be your best bet if you are only targeting Vista/Win7; otherwise you can still use the DirectShow/Sample Grabber approach, and I doubt it will be removed any time soon. Related question here.
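For reference, a condensed sketch of the classic DirectShow/Sample Grabber approach both answers mention. It assumes COM is initialized and that pCamera is the capture device's IBaseFilter; device enumeration, error handling and cleanup are omitted, and GrabFrames is just an illustrative name:

    // Condensed sketch of the DirectShow Sample Grabber approach (illustrative only).
    // ISampleGrabber is declared in qedit.h in older Platform SDKs.
    #include <dshow.h>
    #include <qedit.h>

    void GrabFrames(IBaseFilter *pCamera)
    {
        IGraphBuilder *pGraph = NULL;
        ICaptureGraphBuilder2 *pBuild = NULL;
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void**)&pGraph);
        CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICaptureGraphBuilder2, (void**)&pBuild);
        pBuild->SetFiltergraph(pGraph);

        // Create the Sample Grabber and ask it for 24-bit RGB frames.
        IBaseFilter *pGrabberF = NULL;
        ISampleGrabber *pGrabber = NULL;
        CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, (void**)&pGrabberF);
        pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber);

        AM_MEDIA_TYPE mt = {};
        mt.majortype = MEDIATYPE_Video;
        mt.subtype   = MEDIASUBTYPE_RGB24;
        pGrabber->SetMediaType(&mt);
        pGrabber->SetBufferSamples(TRUE);      // keep a copy of the latest frame

        pGraph->AddFilter(pCamera, L"Camera");
        pGraph->AddFilter(pGrabberF, L"Grabber");
        pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                             pCamera, pGrabberF, NULL);

        IMediaControl *pControl = NULL;
        pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
        pControl->Run();

        // Once the graph is running, pull the most recent frame on demand.
        long cb = 0;
        pGrabber->GetCurrentBuffer(&cb, NULL);           // query required size
        BYTE *pPixels = new BYTE[cb];
        pGrabber->GetCurrentBuffer(&cb, (long*)pPixels); // copy out the RGB24 DIB bits
        // ... process pPixels, then clean up / Release() everything ...
    }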

Related

How to use Windows Media Foundation instead of DirectShow Editing Services?

I am developing a non-linear video editor. I need timeline support, mixing of audio streams, transitions between videos, and so on. All of these features are in DirectShow Editing Services, but DES is no longer supported in new versions of Windows; Microsoft suggests using Media Foundation instead. Is it possible to implement the same functionality in MF, or should I use another SDK, for example GStreamer? Maybe someone can recommend an SDK for video editing built on top of MF?
With Media Foundation you have to implement it all yourself. For instance, video trimming could be implemented with a Source Reader feeding a Sink Writer, manipulating the samples manually and comparing their timestamps against the required range. Trimming is already implemented in the MFCopy Media Foundation example. MFCopy uses the Source Reader/Sink Writer approach because it gives the app more control over the timestamps.
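To illustrate, here is a minimal sketch of that Source Reader to Sink Writer trimming loop, assuming MFStartup has been called and the reader/writer are already created and configured; TrimRange and the output stream index are placeholders rather than anything taken from MFCopy:

    // Minimal sketch of the trimming loop described above (video stream only).
    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>

    HRESULT TrimRange(IMFSourceReader *pReader, IMFSinkWriter *pWriter,
                      LONGLONG startHns, LONGLONG endHns)   // times in 100 ns units
    {
        for (;;)
        {
            DWORD streamIndex = 0, flags = 0;
            LONGLONG timestamp = 0;
            IMFSample *pSample = NULL;

            HRESULT hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                             &streamIndex, &flags, &timestamp, &pSample);
            if (FAILED(hr)) return hr;
            if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
            if (!pSample) continue;                    // gap/tick with no data

            if (timestamp >= endHns) { pSample->Release(); break; }
            if (timestamp >= startHns)
            {
                pSample->SetSampleTime(timestamp - startHns);  // rebase so output starts at 0
                hr = pWriter->WriteSample(0, pSample);
            }
            pSample->Release();
            if (FAILED(hr)) return hr;
        }
        return pWriter->Finalize();
    }

In a real editor the same loop would have to be repeated for the audio stream as well, which is where the manual timestamp handling starts to matter.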
For a Windows 10 UWP App you can use the Windows.Media.Editing.MediaComposition class.

Suitable technologies for a Windows video tool

A few years ago, DirectShow was around and let you manage video on DirectDraw surfaces, but since then I think both technologies have been replaced. What's currently the best solution for building a Windows app that can composite/blend/mix videos and music together? Does one still need to go the DirectX route with surfaces/textures, or is the functionality found in the core Windows APIs?
Examples might be to overlay an image on a playing video, overlay two videos on top of each other with a transition effect, etc.
Apart from the core technologies for handling video/audio, are there good 3rd-party libraries? Or do the core APIs have enough functionality on their own?
If you're talking about managed code:
Microsoft.DirectX.AudioVideoPlayback
Short tutorial here:
http://forum.codecall.net/csharp-tutorials/20436-tutorial-playing-video-files-managed-directx.html

How do I read a video camera in a win32 C program

I have this garden variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows presents the API. Not knowing what function names Windows might use for a video device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VfW (Video for Windows). Has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available on Vista.
From the three DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image processing library that you can also use to process the captured frames, and it has an image capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
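As a rough illustration of how little code that takes, a capture loop with OpenCV (assuming OpenCV is installed; on Windows it wraps one of the native capture APIs for you):

    // Minimal sketch: grab frames from the first camera with OpenCV's capture API.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);             // device index 0 = first camera
        if (!cap.isOpened()) return 1;

        cv::Mat frame;
        while (cap.read(frame))              // frame is a 2D array of BGR pixels
        {
            // ... run your image processing on frame here ...
            cv::imshow("camera", frame);
            if (cv::waitKey(1) == 27) break; // Esc quits
        }
        return 0;
    }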
Using an API is the way to go.
A good indication of whether the camera requires a bespoke API is to see whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know what compression codec was used for the AVI file, you could unpack it.
Ideally it would be good if you could capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.

Capturing the desktop with Windows Media Format (WMF)

I am using Windows Media Format SDK to capture the desktop in real time and save it in a WMV file (actually this is an oversimplification of my project, but this is the relevant part). For encoding, I am using the Windows Media Video 9 Screen codec because it is very efficient for screen captures and because it is available to practically everybody without the need to install anything, as the codec is included with Windows Media Player 9 runtime (included in Windows XP SP1).
I am making BITMAP screenshots using the GDI functions and feeding those BITMAPs to the encoder. As you can guess, taking screenshots with GDI is slow, and I don't get the screen cursor, which I have to add to the BITMAPs manually. The BITMAPs I get initially are DDBs, and I need to convert those to DIBs for the encoder to understand them (RGB input), and this takes more time.
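A minimal sketch of this kind of GDI capture path, for reference; capturing straight into a DIB section yields top-down RGB pixels without a separate DDB-to-DIB step (names and error handling are illustrative only):

    // BitBlt the screen into a DIB section so the pixels are already 32-bit RGB.
    #include <windows.h>

    HBITMAP CaptureScreenToDib(void **ppPixels, int *pWidth, int *pHeight)
    {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC hScreen = GetDC(NULL);
        HDC hMem    = CreateCompatibleDC(hScreen);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = w;
        bmi.bmiHeader.biHeight      = -h;       // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        HBITMAP hDib = CreateDIBSection(hScreen, &bmi, DIB_RGB_COLORS, ppPixels, NULL, 0);
        HGDIOBJ old  = SelectObject(hMem, hDib);
        BitBlt(hMem, 0, 0, w, h, hScreen, 0, 0, SRCCOPY);   // the slow part

        SelectObject(hMem, old);
        DeleteDC(hMem);
        ReleaseDC(NULL, hScreen);

        *pWidth = w; *pHeight = h;
        return hDib;             // *ppPixels stays valid while hDib is alive
    }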
Firing a profiler shows that about 50% of the time is spent in WMVCORE.DLL, the encoder. This is to be expected, of course as the encoding is CPU intensive.
The thing is, there is something called Windows Media Encoder that comes with an SDK and can do screen capture with the desired codec in a simpler, more CPU-friendly way.
WME is based on WMF. It's a higher-level library and also has .NET bindings. I can't use it in my project because it brings in unwanted dependencies that I have to avoid.
I am asking about the method WME uses to feed sample data to the WMV encoder. The encoding takes place in WME exactly as it does in my application that uses WMF. WME is more efficient than my application because it has a much more efficient way of feeding video data to the encoder: it doesn't rely on slow GDI functions and DDB-to-DIB conversions.
How is it done?
The source to CamStudio, a GPL'd screencasting app that's been around for years (commercial at first, open-sourced later), might be useful?
http://sourceforge.net/project/showfiles.php?group_id=131922
I'd suggest looking at the guts of VNC clients too, though they're probably very simplistic (I think they just grab screenshots and then JPEG the tiles that have changed since the last capture).
You might want to consider not using WMV9 as the encoder for on-the-fly encoding if it is too CPU-heavy. Maybe use an older, less efficient compressor (like MS RLE), as used by HyperCam, and then compress to WMV afterwards. MS RLE has been a default install since at least Windows 2000, I believe:
http://wiki.multimedia.cx/index.php?title=Microsoft_RLE
CamStudio's Lossless codec is GPL (same link as above) and offers pretty good compression (though you'd need to bundle the DLL in your installer). It can be used on the fly and works well with high compression on all modern systems.
It's been ages since I've done any Win32 coding, but AFAIK, WMF as a format is basically a list of GDI commands and their parameters, which would explain why it is much more efficient to encode...
You'd probably need to hook into the top level GDI context (just as Remote Desktop does, I guess) and capture the GDI commands as they are called. I seem to remember there being some way of creating a WMF output GDI context which means you may be able to just delegate calls to it in some way.
I'm guessing here, but you may be able to find example code for the above in the TightVNC/QuickVNC for Windows projects as they would have to do something like that to capture changes on screen in an efficient way.
Have you checked out the BB FlashBack library?
I am on a similar hunt, and I have just started evaluating the BB FlashBack library.
I am not sure about the external dependencies or install footprint. It appears to have a proprietary codec that has to be installed, but the installation of the codec can be handled by the exposed BB FlashBack API.
Beware, there are licensing restrictions (Runtime setting of license keys, ...)
I can send you the CHM from the SDK via e-mail if you want to evaluate the API before committing to a licensed download.
Things I am in the midst of evaluating:
Proper captures of WPF views
Mouse cursor tracking
Size of stored movie
How to display stored movie without proprietary codec (i.e. SWF export)
--Batgar

Sound processing: Should I use DirectSound or directly Win32 APIs?

I'm making an application where I will:
Record from the microphone and do some realtime processing on the input
Play an MP3 file (a regular song), manipulating the output in realtime
Every now and then I'll need to play additional sounds over this song too, but I guess I can do that by simply adding the buffers.
In short, I need to have circular buffers for both recording and playing, and I need to be "feeding" the output buffer every 20 ms or so with the new data that is just about to be played.
I've been looking at DirectSound, and it doesn't seem to help a lot. Reading and writing the buffers seems very similar to the plain Win32 APIs; the only place it seems it would help is in playing the "additional sounds" over the main song.
Should I use DirectSound, or should I go straight to raw Windows APIs?
Is DirectSound going to do anything for me?
Thanks in Advance!
The DirectSound APIs give you better realtime control, and they are also the supported way to use sound in Windows. I was under the impression that the Win32 APIs were deprecated, but I could be wrong on this.
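To make that concrete, a rough sketch of the streaming pattern the question describes, done with DirectSound: a looping secondary buffer that you refill every ~20 ms behind the play cursor. The format values and the mixing step are placeholders, and error handling is omitted:

    #include <windows.h>
    #include <dsound.h>

    void StreamLoop(HWND hwnd)
    {
        LPDIRECTSOUND8 pDS = NULL;
        DirectSoundCreate8(NULL, &pDS, NULL);
        pDS->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);

        WAVEFORMATEX wfx = {};
        wfx.wFormatTag = WAVE_FORMAT_PCM;
        wfx.nChannels = 2; wfx.nSamplesPerSec = 44100; wfx.wBitsPerSample = 16;
        wfx.nBlockAlign = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        DSBUFFERDESC desc = {};
        desc.dwSize = sizeof(desc);
        desc.dwFlags = DSBCAPS_GETCURRENTPOSITION2 | DSBCAPS_GLOBALFOCUS;
        desc.dwBufferBytes = wfx.nAvgBytesPerSec / 2;     // half a second of audio
        desc.lpwfxFormat = &wfx;

        LPDIRECTSOUNDBUFFER pBuf = NULL;
        pDS->CreateSoundBuffer(&desc, &pBuf, NULL);
        pBuf->Play(0, 0, DSBPLAY_LOOPING);

        DWORD writePos = 0;
        for (;;)
        {
            DWORD play = 0, write = 0;
            pBuf->GetCurrentPosition(&play, &write);

            // Fill from our last write position up to the play cursor; Lock()
            // returns two pointers when the region wraps around the buffer end.
            DWORD bytes = (play + desc.dwBufferBytes - writePos) % desc.dwBufferBytes;
            if (bytes == 0) { Sleep(20); continue; }

            void *p1 = NULL, *p2 = NULL; DWORD b1 = 0, b2 = 0;
            pBuf->Lock(writePos, bytes, &p1, &b1, &p2, &b2, 0);
            // ... mix the song plus any extra sounds into p1/b1 and p2/b2 ...
            pBuf->Unlock(p1, b1, p2, b2);

            writePos = (writePos + bytes) % desc.dwBufferBytes;
            Sleep(20);                                    // refill roughly every 20 ms
        }
    }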
This question is close to yours
https://stackoverflow.com/questions/314522/what-is-the-best-c-sound-api-for-windows
also
Is DirectSound the best audio abstraction layer for Windows?
Last but not least, this is what Microsoft has to say: http://msdn.microsoft.com/en-us/library/dd370784(VS.85).aspx
Neither? :)
The story is that DirectSound is the replacement for waveOut, but DirectSound joined DirectInput as a deprecated API in Vista and is replaced by WASAPI. DirectSound and waveOut are implemented on top of the user-space WASAPI in Vista; on XP, waveOut and DirectSound feed into the same kernel-level mixer API.
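For comparison, a bare-bones sketch of the same refill-every-20-ms loop on the Vista WASAPI path (shared mode, default render device; format negotiation, error handling and the actual mixing are omitted):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    void RenderLoop()
    {
        CoInitialize(NULL);

        IMMDeviceEnumerator *pEnum = NULL;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

        IMMDevice *pDevice = NULL;
        pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

        IAudioClient *pClient = NULL;
        pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pClient);

        WAVEFORMATEX *pFormat = NULL;
        pClient->GetMixFormat(&pFormat);
        pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                            20 * 10000 /* ask for ~20 ms; WASAPI may round up */,
                            0, pFormat, NULL);

        IAudioRenderClient *pRender = NULL;
        pClient->GetService(__uuidof(IAudioRenderClient), (void**)&pRender);

        UINT32 bufferFrames = 0;
        pClient->GetBufferSize(&bufferFrames);
        pClient->Start();

        for (;;)
        {
            UINT32 padding = 0;
            pClient->GetCurrentPadding(&padding);        // frames still queued
            UINT32 framesFree = bufferFrames - padding;
            if (framesFree == 0) { Sleep(5); continue; }

            BYTE *pData = NULL;
            pRender->GetBuffer(framesFree, &pData);
            // ... write framesFree frames of mixed audio into pData ...
            pRender->ReleaseBuffer(framesFree, 0);

            Sleep(10);                                   // wake up a couple of times per buffer
        }
    }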
To consolidate all of these interfaces, take a look at something like OpenAL; it's a well-supported audio standard along the same lines as OpenGL.
It sounds like you're going to be quite sensitive to latency. It might pay to look at ASIO.
I found Harmony Central - Audio Programming. Also read the Wikipedia article on DirectSound, which says:
Windows Vista features a completely re-written audio stack based on the Universal Audio Architecture. Because of the architectural changes in the redesigned audio stack, a direct path from DirectSound to the audio drivers does not exist.

Because of Xbox 360 and Microsoft Windows integration, Microsoft is actively pushing developers to migrate new applications to equivalent Xbox audio APIs such as XAudio and XACT.
OpenAL looks promising.

Resources