I have read this question here: webcam access in c++
However, I'm wondering if there are any updated answers since 2009.
I would like to plug a webcam into a Windows-based system and have software that monitors and processes the webcam feed in real time (e.g. scanning a barcode). I'm wondering what solutions are out there.
Thanks
Video capture APIs are still the same:
DirectShow
Media Foundation
Video for Windows (deprecated)
and then a multitude of wrappers over those, especially over DirectShow and VfW.
You want to initialize a video capture session with a camera, capture video, and process it, e.g. to detect barcodes from the incoming video feed.
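If you go the Media Foundation route, a minimal sketch of opening the first available camera with the Source Reader and pulling frames looks roughly like this (error handling trimmed; the barcode step is just a placeholder comment):

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    // Enumerate video capture devices (webcams).
    IMFAttributes* attr = nullptr;
    MFCreateAttributes(&attr, 1);
    attr->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                  MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
    IMFActivate** devices = nullptr;
    UINT32 count = 0;
    MFEnumDeviceSources(attr, &devices, &count);
    if (count == 0) return 1;

    // Activate the first camera and wrap it in a Source Reader.
    IMFMediaSource* source = nullptr;
    devices[0]->ActivateObject(IID_PPV_ARGS(&source));
    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromMediaSource(source, nullptr, &reader);

    // Pull frames synchronously; each IMFSample holds one video frame.
    for (int i = 0; i < 100; ++i)
    {
        DWORD stream = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* sample = nullptr;
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                           0, &stream, &flags, &timestamp, &sample);
        if (sample)
        {
            // ...lock the IMFMediaBuffer and run barcode detection on the pixels here...
            sample->Release();
        }
    }

    reader->Release();
    source->Release();
    for (UINT32 i = 0; i < count; ++i) devices[i]->Release();
    CoTaskMemFree(devices);
    attr->Release();
    MFShutdown();
    CoUninitialize();
}
```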
Related
I am developing a non-linear video editor. I need support for a timeline, mixing of audio streams, transitions between videos, etc. All of these features exist in DirectShow Editing Services, but it is no longer supported in newer versions of Windows; Microsoft suggests using Media Foundation instead. Is it possible to implement the same functionality in MF, or should I use another SDK, for example GStreamer? Perhaps someone can recommend an SDK for video editing based on MF?
With Media Foundation you have to implement it all by yourself. For instance, video trimming could be implemented by feeding a Source Reader into a Sink Writer, and you have to handle the samples manually, comparing their timestamps with the required range, etc. Trimming has already been implemented in the MFCopy Media Foundation example. MFCopy uses the Source Reader/Sink Writer approach because that way the app has more control over the timestamps.
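For illustration, a condensed sketch of that trimming loop, assuming the reader and writer are already created and the writer's streams and media types are configured (as MFCopy does); the shared stream index and the keyframe handling a real implementation needs are simplifications:

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Read samples, keep only those whose timestamps fall in [start, end],
// rebase their timestamps and forward them to the writer.
HRESULT TrimStream(IMFSourceReader* reader, IMFSinkWriter* writer,
                   DWORD streamIndex, LONGLONG start100ns, LONGLONG end100ns)
{
    writer->BeginWriting();
    for (;;)
    {
        DWORD actualStream = 0, flags = 0;
        LONGLONG pts = 0;
        IMFSample* sample = nullptr;
        HRESULT hr = reader->ReadSample(streamIndex, 0, &actualStream, &flags, &pts, &sample);
        if (FAILED(hr) || (flags & MF_SOURCE_READERF_ENDOFSTREAM)) break;
        if (!sample) continue;

        if (pts >= start100ns && pts <= end100ns)
        {
            // Rebase so the output starts at 0, then hand the sample to the writer
            // (assumes the writer uses the same stream index; for video, real code
            // must also start on a keyframe).
            sample->SetSampleTime(pts - start100ns);
            writer->WriteSample(streamIndex, sample);
        }
        sample->Release();
        if (pts > end100ns) break;   // past the range: stop reading
    }
    return writer->Finalize();
}
```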
For a Windows 10 UWP App you can use the Windows.Media.Editing.MediaComposition class.
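A rough C++/WinRT sketch of that approach might look like the following; the trim values are arbitrary, and the surrounding app setup (apartment initialization, obtaining the StorageFile objects) is omitted:

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Media.Editing.h>
#include <winrt/Windows.Storage.h>

using namespace winrt;
using namespace Windows::Media::Editing;
using namespace Windows::Storage;

Windows::Foundation::IAsyncAction TrimAndRenderAsync(StorageFile inputFile, StorageFile outputFile)
{
    // Build a composition with a single trimmed clip.
    MediaClip clip = co_await MediaClip::CreateFromFileAsync(inputFile);
    clip.TrimTimeFromStart(std::chrono::seconds(5));   // drop the first 5 s
    clip.TrimTimeFromEnd(std::chrono::seconds(10));    // drop the last 10 s

    MediaComposition composition;
    composition.Clips().Append(clip);

    // Render the composition to the output file with default settings.
    co_await composition.RenderToFileAsync(outputFile);
}
```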
I would like to take video from a webcam, render some text on the frames and do some motion tracking and pass it on to a virtual webcam so it can be streamed easily.
I found some answers on Stack Overflow suggesting that I should use DirectShow. According to the DirectShow documentation, the DirectShow SDK is part of the Windows SDK, so I installed the latest Windows SDK, but it seems that it doesn't include DirectShow, because there are no DirectShow samples under C:\Program Files (x86)\Microsoft SDKs\Windows. (The Stack Overflow answers are also pretty old, dated around 2010.)
Can you suggest a way to make DirectShow work (including samples working in Visual Studio 2015), or some other alternative to DirectShow that would help me create a virtual webcam?
A virtual webcam is typically a software-only implementation that applications discover as if it were a device with a physical representation. Applications use APIs to work with webcams, and extending those APIs to add your own video source is the way to create a virtual webcam.
In Windows there are a few APIs for consuming video sources: Video for Windows, DirectShow, and Media Foundation (in chronological order).
Video for Windows is not really extensible and is limited in capabilities overall. It will only see a virtual device if you provide a kernel-mode driver for the virtual camera.
DirectShow is the API used by most video-capture-enabled Windows applications, and it is present in all Windows versions including Windows 10 (except Windows RT). It is perfectly extensible, and in most cases the term "virtual webcam" refers to a DirectShow virtual webcam. The methods to create a DirectShow virtual webcam discussed in many Stack Overflow questions remain perfectly valid for Windows 10, for applications that implement video capture using DirectShow (a registration sketch follows the links below):
Virtual webcam input as byte stream
Simulate a DirectShow Webcam
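The key piece that makes DirectShow applications discover such a filter as a camera is registering it under CLSID_VideoInputDeviceCategory, typically from the filter DLL's DllRegisterServer. A hedged sketch (the CLSID and friendly name are hypothetical, and the filter itself, a CSource/CSourceStream implementation that generates frames, still has to be written separately):

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Hypothetical CLSID of your virtual camera filter.
static const GUID CLSID_MyVirtualCam =
{ 0x0, 0x0, 0x0, { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0 } };

HRESULT RegisterVirtualCam()
{
    IFilterMapper2* mapper = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IFilterMapper2, (void**)&mapper);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit = MERIT_DO_NOT_USE;   // apps find it by category, not by merit

    // Publish the filter in the "Video Capture Devices" category so that
    // DirectShow-based applications enumerate it next to real webcams.
    hr = mapper->RegisterFilter(CLSID_MyVirtualCam, L"My Virtual Camera", nullptr,
                                &CLSID_VideoInputDeviceCategory, nullptr, &rf2);
    mapper->Release();
    return hr;
}
```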
DirectShow samples were removed from Windows SDK but you can still find them in older releases:
Getting DirectShow Samples on Windows 8
If you provide a kernel-mode driver for a video camera device (i.e. your virtual webcam implemented as a custom kernel driver), DirectShow will also see it, just like the other video APIs.
Media Foundation is the supposed successor of DirectShow, but its video capture extensibility simply does not exist1. Microsoft decided not to allow custom video sources that applications could discover the same way as webcams. Due to Media Foundation's complexity, overhead, and overall unfriendliness, it is used by a modest number of applications. To implement a virtual webcam for a Media Foundation application you again, as with Video for Windows, have to implement a kernel-mode driver.
1 Starting with Windows build 22000 (Windows 11), there is a new API, MFCreateVirtualCamera, which offers virtual camera creation. A developer implements a video source which the API connects to the so-called Windows Camera Frame Server service, which in turn distributes the generated video as a source alongside the regular cameras. Applications see this software implementation the same way as if it were, for example, a real webcam.
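A hedged sketch of what registering such a virtual camera might look like; the CLSID string is a placeholder for your own separately registered IMFMediaSource COM class:

```cpp
#include <mfapi.h>
#include <mfvirtualcamera.h>
#pragma comment(lib, "mfsensorgroup.lib")
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "ole32.lib")

int wmain()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    // Placeholder: CLSID of your custom media source implementation.
    const wchar_t* sourceClsid = L"{00000000-0000-0000-0000-000000000000}";

    IMFVirtualCamera* vcam = nullptr;
    HRESULT hr = MFCreateVirtualCamera(
        MFVirtualCameraType_SoftwareCameraSource,
        MFVirtualCameraLifetime_Session,        // removed when the process exits
        MFVirtualCameraAccess_CurrentUser,
        L"My Virtual Camera",                   // friendly name shown to apps
        sourceClsid,                            // CLSID of the media source
        nullptr, 0,                             // no extra device categories
        &vcam);

    if (SUCCEEDED(hr))
    {
        hr = vcam->Start(nullptr);              // publish the camera via the Frame Server
        // ...keep the process alive while applications use the camera...
        vcam->Shutdown();
        vcam->Release();
    }

    MFShutdown();
    CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}
```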
This is an ancient question internet-wise but I thought I could contribute:
I was looking into this about a year ago and almost abandoned my project altogether until I found Microsoft's SimpleMediaSource driver sample on their Github. It is documented here but it is a tough read if you haven't written drivers before - which was the case for me. Fortunately, documentation seems to have been updated and improved since I used it.
To get it working, I had to manually delete and copy-paste the DLL into C:\System32 after each compilation with Visual Studio. I also had to side-download and install the now-removed (from what I can tell) devcon utility to add and remove drivers with the devcon dp_add/dp_remove commands. You also need the Windows Driver Kit (WDK).
You need to enable unsigned driver loading within Windows, so it may not be a great route if you want to distribute it. Anti-cheat and DRM software may also not appreciate it :)
There are two projects being compiled:
MediaSource - COM DLL project for the custom media source
SimpleMediaSourceDriver - UMDF driver install package
Just install OBS Studio.
In newer versions it automatically installs an easy-to-use virtual webcam that mirrors the OBS scene.
I have googled and searched a number of forums and developer websites without any success; I believe this is a specific question that needs direct expertise or knowledge, so please read on!
BACKGROUND:
I have an audio enhancement algorithm that is implemented as a system Audio Processing Object (sAPO), developed and tested successfully on Windows 7. As an APO, it applies processing to every audio stream going through an endpoint device, including audio originating from Skype.
QUESTION:
Is it true that this no longer applies to Windows 8.x (8.1 or greater)? More specifically, does sAPO processing still work for Skype, or does Skype disable any and all APO processing on its stream?
WHAT HAS BEEN TRIED SO FAR:
(1) I have succeeded in following the standard Windows 7 procedure for loading an unsigned APO on Windows 8.
(2) I have tested this with the Skype audio stream and that works as well.
HOWEVER:
(1) above fails in the Windows 8.1 Developer Preview. As a result I have not been able to test (2).
Please note that I am specifically asking about Windows 8.1, in a laptop or desktop. This is not for mobile phones or tablets. Any information or links regarding this is much appreciated!
I am also trying to update an APO that was developed for Windows 7/8 to the new format introduced by Windows 8.1; however, it doesn't seem like much documentation has been released yet.
What I have found so far is that Windows 8.1 requires some new methods for discovery and control of effects to be implemented in the APO. This means that Skype is able to discover certain APO effects and disable them if it needs to.
The new interface IAudioSystemEffects2:
link
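For illustration, a hedged sketch of how an APO might implement the discovery method on that interface (the class name and effect GUID are hypothetical):

```cpp
#include <audioenginebaseapo.h>

// Hypothetical GUID identifying this APO's enhancement effect.
static const GUID MY_ENHANCEMENT_EFFECT =
{ 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };

// Part of a hypothetical APO class that implements IAudioSystemEffects2.
STDMETHODIMP CMyApo::GetEffectsList(LPGUID* ppEffectsIds, UINT* pcEffects, HANDLE /*Event*/)
{
    // Return the list of effects this APO currently applies; the caller frees the array.
    *ppEffectsIds = static_cast<LPGUID>(CoTaskMemAlloc(sizeof(GUID)));
    if (*ppEffectsIds == nullptr)
        return E_OUTOFMEMORY;

    (*ppEffectsIds)[0] = MY_ENHANCEMENT_EFFECT;
    *pcEffects = 1;
    return S_OK;
}
```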
Some updated code can be found in the new SwapAPO example:
link
Not much, but hopefully it can help you get going in the right direction.
Can anybody give me a link to a working example of playing background live-streaming audio on Windows Phone 7 (or 7.1)? I have seen a lot of examples (on microsoft.com too) and none of them works correctly for playing background live-streaming audio.
FYI, here's a URL of live-streaming audio: http://radiozetmp3-02.eurozet.pl:8400/
Background audio is not supported on 7.0, only 7.1 (and above).
If you want to play streaming audio in a format/codec which is not natively supported by the phone you must do it with an AudioStreamingAgent. If it is a supported codec, you can use an AudioPlayerAgent (see sample here).
Using an AudioStreamingAgent is a non-trivial task and requires a deep understanding of the codec you need to play so you can convert it to something the phone understands. I know of one person who did this, for an H.264 stream, and it took a long time and much hair-pulling to get it working. And before anyone asks: no, they are not able to share code from that project.
If you really must go down this route, the ManagedMediaHelpers (previously here) are a good place to start, but they don't cover all codecs, and this is potentially very complicated and not something well documented on the web.
I am trying to build what I think is a basic app. Well, two apps: one for Windows and one for OS X. I would like to capture the audio signal that is playing (i.e. if the user is playing music out of their speakers), then take that signal and stream it out so another computer can "listen". The other computer would be Windows or OS X.
Any ideas on how to get the audio signal?
What's the most efficient way to stream out audio without a 3rd party plugin? If there is an open-source solution out there, I would be interested.
Thanks!
Chris
On Windows XP this isn't trivial at all, because there's no way of intercepting the output signal without writing an audio filter driver (which is not something for the faint of heart).
On Windows Vista and above, you can capture the output of the audio engine by using the WASAPI APIs (built into Windows so they're free) and initializing an audio client with the AUDCLNT_STREAMFLAGS_LOOPBACK flag. This will give you a capture stream that's hooked to the output of the audio engine.
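A trimmed sketch of that loopback setup (error handling omitted):

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    // Loopback capture opens the *render* endpoint, not a capture endpoint.
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

    WAVEFORMATEX* format = nullptr;
    client->GetMixFormat(&format);
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10000000 /* 1 s buffer, in 100-ns units */, 0, format, nullptr);

    IAudioCaptureClient* capture = nullptr;
    client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
    client->Start();

    for (;;)   // pull loop; a real app would pace this and send the data onward
    {
        UINT32 packetFrames = 0;
        capture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE* data = nullptr;
            UINT32 frames = 0;
            DWORD flags = 0;
            capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
            // ...packetize 'frames' frames of mix-format audio and stream them out...
            capture->ReleaseBuffer(frames);
            capture->GetNextPacketSize(&packetFrames);
        }
        Sleep(10);
    }
}
```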
You can then package up that audio and send it to the other machine and render it with whatever audio rendering API you want.
I don't know how to do the equivalent on OS X though :(.