I am currently developing an application that uses DirectShow.
The application should support many different webcams that probably have a lot of different output pin ColorSpace configurations (RGB, YUV, I420, etc).
When encoding, I will always be using the same encoding filter, mux, and file writer; however, I won't know in advance how to connect the output pin on the source filter to the input pin on the encoding filter, because that will depend on the source filter's output color space.
Examples:
Source1 (RGB24) -> Color Space Converter -> RGB2YUV -> Encoder -> Mux -> File Writer.
Source2 (YUV) -> Encoder -> Mux -> File Writer.
Source3 (MJPG) -> MJPEG Decompressor -> Color Space Converter -> RGB2YUV -> Encoder -> Mux -> File Writer.
And so on, meaning there could be many different filter configurations up to the encoder.
My question now is: is it perfectly okay to use Intelligent Connect (IGraphBuilder::Connect) instead of ConnectDirect() to connect the source filter to the encoder filter?
Or would I have to check the media type of the source output pin each time, and manually build up the graph depending on the color space (RGB, YUV, ...) of the source output pin?
Is there an easy way to do that which I might not know about? It seems like there could be an endless number of ways to connect the source filter to the encoder.
Thanks for your help.
Using Connect and Intelligent Connect is fine. Basically, it means that you request pins to be connected "somehow", in "the best way possible".
However, because there can be many options, environments, hardware, and configurations, in many situations you want to connect in a predictable way that you know (especially when it comes to encoding, as opposed to decoding).
A good strategy is to Connect (or ConnectDirect) the individual filters you are aware of and are sure you want exactly, and to leave Intelligent Connect for the connections where you are okay with system-supplied filter chains, especially when it comes to decoding, where Windows is supposed to pick an available decoder for you.
Also, when Intelligent Connect is in question, it rarely makes a difference whether you Connect or ConnectDirect. Either way the filters connect using a media type, and sometimes they may renegotiate the media type on the go. More important is whether you connect to a known filter, or whether you let Intelligent Connect supply a filter required for the connection. An incorrectly picked filter, or a filter with a bad registration that crashes the process instead of connecting the pipeline, is more often the real headache.
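To illustrate that strategy, here is a minimal sketch (pin lookup and pin releasing are simplified, and the GetPin helper below is just an illustration, not a DirectShow API): Intelligent Connect handles the variable source-to-encoder hop, ConnectDirect handles the fixed encoder-to-mux-to-writer tail.

```cpp
// Sketch only: build Source -> (Intelligent Connect) -> Encoder -> Mux -> File Writer.
// Error handling and pin releasing are abbreviated.
#include <dshow.h>

// Illustrative helper (not a DirectShow API): first unconnected pin of a given direction.
static IPin* GetPin(IBaseFilter* pFilter, PIN_DIRECTION dir)
{
    IEnumPins* pEnum = NULL;
    if (FAILED(pFilter->EnumPins(&pEnum))) return NULL;
    IPin* pPin = NULL;
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION pinDir;
        pPin->QueryDirection(&pinDir);
        IPin* pPeer = NULL;
        if (pinDir == dir && pPin->ConnectedTo(&pPeer) == VFW_E_NOT_CONNECTED)
        {
            pEnum->Release();
            return pPin;                     // caller is responsible for Release()
        }
        if (pPeer) pPeer->Release();
        pPin->Release();
    }
    pEnum->Release();
    return NULL;
}

static HRESULT BuildEncodingGraph(IGraphBuilder* pGraph, IBaseFilter* pSource,
                                  IBaseFilter* pEncoder, IBaseFilter* pMux,
                                  IBaseFilter* pWriter)
{
    pGraph->AddFilter(pSource,  L"Source");
    pGraph->AddFilter(pEncoder, L"Encoder");
    pGraph->AddFilter(pMux,     L"Mux");
    pGraph->AddFilter(pWriter,  L"File Writer");

    // Variable part: Intelligent Connect may insert color space converters,
    // MJPEG decompressors, etc., depending on the camera's output type.
    HRESULT hr = pGraph->Connect(GetPin(pSource, PINDIR_OUTPUT),
                                 GetPin(pEncoder, PINDIR_INPUT));
    if (FAILED(hr)) return hr;

    // Fixed part you fully control: direct connections, no extra filters allowed.
    hr = pGraph->ConnectDirect(GetPin(pEncoder, PINDIR_OUTPUT),
                               GetPin(pMux, PINDIR_INPUT), NULL);
    if (FAILED(hr)) return hr;

    return pGraph->ConnectDirect(GetPin(pMux, PINDIR_OUTPUT),
                                 GetPin(pWriter, PINDIR_INPUT), NULL);
}
```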
I am trying to write a small DirectShow application using C++. My main issue is grabbing the raw data from the microphone and saving it as BYTES.
Is this possible using a DirectShow filter? What interface am I supposed to use in order to get the data?
Right now, I am able to achieve writing a recorded AVI file using the following Graph Filter:
Microphone->Avi Mux->File writer
This graph works fine.
I have tried to use the SampleGrabber (which is deprecated by Microsoft), but I lack the knowledge of what to do with this BaseFilter type.
By design, a DirectShow topology needs to be complete, starting with a source (the microphone) and terminating with a renderer filter, and data exchange in DirectShow pipelines is private to the connected filters, without exposing the data to the controlling application.
This is confusing because you apparently want to export content from the pipeline into the outside world. That is not exactly the way DirectShow is designed to work.
The "intended", "DirectShow way" is to develop a custom renderer filter which would connect to the microphone filter and receive its data. More often than not, developers prefer not to take this path, since developing a custom filter is somewhat complicated.
The popular solution is to build a pipeline Microphone --> Sample Grabber --> Null Renderer. The Sample Grabber is a filter that exposes the data passing through it via its SampleCB callback. Even though it is getting harder with time, you can still find tons of code that does the job. Most developers prefer this path: build the pipeline from ready-to-use blocks and forget about the DirectShow API.
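For illustration, here is a minimal sketch of such a callback, assuming the deprecated qedit.h declarations of ISampleGrabber/ISampleGrabberCB are still available (on newer SDKs you typically have to copy those declarations into your project); reference counting is deliberately simplified:

```cpp
// Sketch: Microphone -> Sample Grabber -> Null Renderer, receiving the raw bytes in SampleCB.
#include <dshow.h>
#include <qedit.h>   // deprecated header; may need to be self-declared on newer SDKs

class AudioGrabberCB : public ISampleGrabberCB
{
public:
    // Minimal IUnknown; fine as long as the object outlives the graph.
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) { *ppv = this; return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    // Called on the streaming thread for every media sample passing through the grabber.
    STDMETHODIMP SampleCB(double /*sampleTime*/, IMediaSample* pSample)
    {
        BYTE* pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cb = pSample->GetActualDataLength();
            // ... consume cb raw PCM bytes at pData: write to a file, ring buffer, etc. ...
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }
};

// Hooking it up, given the Sample Grabber's IBaseFilter already added to the graph:
//   ISampleGrabber* pGrabber = NULL;
//   pGrabberFilter->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber);
//   pGrabber->SetCallback(&g_grabberCallback, 0);   // 0 selects SampleCB
```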
And then another option would be to not use DirectShow at all. Given its current state, this API is an unlucky choice; you should rather be looking at WASAPI capture instead.
We are trying to use a full-duplex stream with two different devices on two different clock domains. PortAudio periodically reports underflow/overflow flags, which I've put down to the interfaces being on different clock domains.
I have read and understood the behaviour documented in http://www.portaudio.com/docs/proposals/001-UnderflowOverflowHandling.html, and it is consistent with my observations. These flags are of course accompanied by clicks. I can reproduce this behaviour easily in the pa_fuzz example, so I believe it to be expected behaviour.
What I'm unsure of is exactly what I should do with this information. The flags tell me that an overflow/underflow has already happened; how could I feasibly implement any resampling based on that?
Am I going about this completely wrong? What is the typical usage pattern for PortAudio full-duplex using two different devices?
I'm using Core Audio. Thanks!
I don't have a proper answer for handling it at the PortAudio level, but there is a platform-specific solution that you could consider.
On OS X, you can create an aggregate device that combines the two devices, and enable drift correction to have CoreAudio perform the resampling. The resulting aggregate will have the inputs and outputs for both devices, so it may take additional logic to select the input and output channels of interest.
You could try it first by creating an aggregate device using Audio MIDI Setup.app. Apple does have some support notes on it, such as "Create an Aggregate Device to combine multiple audio interfaces" and "Set aggregate device settings in Audio MIDI Setup on Mac" (which may have conflicting guidance on selecting drift correction).
I'm not sure it's formally documented anywhere other than headers like AudioHardware.h, but it is possible to create an aggregate device programmatically as well. I have only experimented with it myself, and recall having some intermittent issues configuring the device (perhaps an asynchronous behavior with some operations?).
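For reference, here is a rough sketch of that programmatic route using AudioHardwareCreateAggregateDevice from AudioHardware.h. The device UIDs and the aggregate's name/UID below are placeholders, and setting drift compensation through the sub-device dictionary key reflects my reading of the header rather than formal documentation:

```cpp
// Hedged sketch: programmatically create an aggregate device from two sub-devices and
// request drift compensation on the second one. "Device1_UID"/"Device2_UID" are
// placeholders; real UIDs come from each device's kAudioDevicePropertyDeviceUID.
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>

AudioObjectID CreateAggregateDevice()
{
    // Sub-device descriptions (dictionaries keyed by the device UID).
    CFMutableDictionaryRef sub1 = CFDictionaryCreateMutable(NULL, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(sub1, CFSTR(kAudioSubDeviceUIDKey), CFSTR("Device1_UID"));

    CFMutableDictionaryRef sub2 = CFDictionaryCreateMutable(NULL, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(sub2, CFSTR(kAudioSubDeviceUIDKey), CFSTR("Device2_UID"));
    int one = 1;   // resample this sub-device against the master's clock
    CFNumberRef drift = CFNumberCreate(NULL, kCFNumberIntType, &one);
    CFDictionarySetValue(sub2, CFSTR(kAudioSubDeviceDriftCompensationKey), drift);

    const void* subs[] = { sub1, sub2 };
    CFArrayRef subList = CFArrayCreate(NULL, subs, 2, &kCFTypeArrayCallBacks);

    // Top-level description of the aggregate device itself.
    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(NULL, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("My Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("com.example.my-aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subList);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceMasterSubDeviceKey), CFSTR("Device1_UID"));

    AudioObjectID aggregate = kAudioObjectUnknown;
    AudioHardwareCreateAggregateDevice(desc, &aggregate);  // check the OSStatus in real code

    CFRelease(desc); CFRelease(subList); CFRelease(drift);
    CFRelease(sub2); CFRelease(sub1);
    return aggregate;
}
```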
Of course, if you're using PortAudio and trying to find a portable solution, this won't solve the issue generally.
Simply put, I need to take results from a DAQ and display them visually in a UI (no interaction needed) that gets its information updated in real time. The DAQ I am using has a "utility" to plug into LabVIEW, so it seems that the easiest way is to grab this data from LabVIEW and then transmit it to some UI using one of these methods.
I am using Windows 10 (although I could boot into Ubuntu); I'm just not sure what UI application would be best / easiest to use.
You can use this National Instruments tool for DAQ UI visualization. As it is native, it should be quite straightforward to use.
You may want to use the DAQExpress VI in LabVIEW as @MateoRandwolf suggested. The neat thing about it is that it almost creates your first program automatically, apart from the configuration of your NI modules.
There are just two things missing:
a waveform chart, and
a write to a TDMS file
Here is a snippet of a simple program doing this (the stop button is important to actually close the TDMS file before aborting the program).
If you really want to stream the data to a different device, I suggest using TCP/IP. There are good examples in the documentation from which you can start (Help > Find Examples... > Search tab). If you cannot accept the roughly 40 ms of buffering that TCP/IP introduces (because of handshaking, etc.), have a look at UDP.
You can use Dewesoft's DAQ systems, which have a dual-mode capability. They use two data buses (EtherCAT and USB): USB for high-speed buffered data storage to the PC's SSD, and the EtherCAT bus for a low-latency real-time stream to any 3rd-party EtherCAT master.
The DAQ systems are also capable of visualising data in real time on the display using various pre-built visual displays like recorders, XY graphs, 3D graphs, oscilloscopes, FFTs, GPS, video, and numerous others.
I'm having problems writing a custom DS source filter to play a TS stream that was dumped to multiple files. [ed: The point being to re-play a continuous stream from these separate files]
First I tried modifying the Async file sample: no go - the data 'pull' model seems to put all the controlling logic in the splitter filter so I couldn't trick it into believing I have a 'continuous' stream.
So then I tried modifying the PushSource desktop sample: it appears we have to babysit the MPEG demuxer this way to create its output pin, parsing the data myself to get the IDs, etc. I managed to get GraphStudio to auto-wire something up (using a strange DTV-DVD decoder), but it doesn't play anything despite the source filter pushing the right data downstream.
Does anyone have experience in this area to help/suggest anything?
Have you found a solution to your problem by now?
I am writing a similar DirectShow filter, currently for playing only one file, but I think that modifying it for playing several files should not be a problem.
I built this filter starting from the "Push Source Bitmap" filter, but I had to make a lot of changes to it.
I also had to build the graph using an application that I wrote (so not using GraphEdit): connect the "Mpeg-2 Demultiplexer" to the new filter, add one PSI output (mapped to PID 0 = PAT), and connect the "MPEG-2 Sections and Tables Filter" to this PSI output.
After that, I used the "MPEG-2 Sections and Tables Filter" to read the PAT table and the PMT PIDs defined inside it. Next, I also mapped all the PMT PIDs to the same "MPEG-2 Sections and Tables Filter" and parsed the PMT tables to learn the elementary stream PIDs and media types, and then I created one video output and one audio output based on this information (there may be more than one audio and video stream, but at the current stage I retain only the first of each). Note that this requires running the partial graph temporarily in order to parse the tables, and then stopping it in order to create the video and audio output pins (with the proper media types) and connect the decoders and renderers.
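If it helps, here is a hedged sketch of the demultiplexer part of that procedure, creating the PSI output pin and mapping PID 0 onto it; the exact headers and media type details may differ slightly between SDK versions, and error handling is abbreviated:

```cpp
// Hedged sketch: create the PSI output pin on the MPEG-2 Demultiplexer and map PID 0
// (the PAT) onto it, ready for the "MPEG-2 Sections and Tables" filter.
#include <dshow.h>      // IMpeg2Demultiplexer (declared in mpeg2structs.h on some SDKs)
#include <bdaiface.h>   // IMPEG2PIDMap
#include <bdamedia.h>   // KSDATAFORMAT_TYPE_MPEG2_SECTIONS

HRESULT AddPatPin(IBaseFilter* pDemuxFilter, IPin** ppPsiPin)
{
    IMpeg2Demultiplexer* pDemux = NULL;
    HRESULT hr = pDemuxFilter->QueryInterface(IID_PPV_ARGS(&pDemux));
    if (FAILED(hr)) return hr;

    // Media type for PSI sections, as consumed by the Sections and Tables filter.
    AM_MEDIA_TYPE mt = {};
    mt.majortype = KSDATAFORMAT_TYPE_MPEG2_SECTIONS;
    mt.subtype   = MEDIASUBTYPE_None;

    WCHAR pinName[] = L"PSI";
    hr = pDemux->CreateOutputPin(&mt, pinName, ppPsiPin);
    pDemux->Release();
    if (FAILED(hr)) return hr;

    // Map PID 0 (the PAT) onto the new pin; PMT PIDs can be mapped the same way later.
    IMPEG2PIDMap* pPidMap = NULL;
    hr = (*ppPsiPin)->QueryInterface(IID_PPV_ARGS(&pPidMap));
    if (FAILED(hr)) return hr;

    ULONG pid = 0;
    hr = pPidMap->MapPID(1, &pid, MEDIA_MPEG2_PSI);
    pPidMap->Release();
    return hr;
}
```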
In addition to that, here is a piece of information you could find interesting: it appears that when connected, the "Mpeg-2 Demultiplexer" searches the graph for a filter exposing the "IBDA_NetworkProvider" interface and, if one is found, registers itself with it using the IBDA_NetworkProvider::RegisterDeviceFilter method.
I think that you could use this to detect the insertion of the "Mpeg-2 Demultiplexer" into the graph (by exposing the "IBDA_NetworkProvider" interface from your filter) and perform the above operations from your source filter, thus allowing your filter to be used inside GraphEdit, with the "Mpeg-2 Demultiplexer" baby-sat by your filter, without having to build an application around it for doing these operations.
Gingko
I created a TS source filter which reads a network stream, so that is continuous too, but a difference compared to reading from a file is that a network stream automatically gives me the correct speed. Still, I think you should be able to use a similar approach.
I based my filter on the FBall example from the DirectX SDK.
The filter derives from CSource, IFileSourceFilter, IAMFilterMiscFlags and ISpecifyPropertyPages. The output pin derives just from CSourceStream.
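For what it's worth, a rough sketch of that class layout (using the DirectShow base-class library; method bodies omitted, and the class and pin names are placeholders) looks like this:

```cpp
// Rough sketch of the described class layout; implementation bodies omitted.
#include <streams.h>   // CSource, CSourceStream from the base classes (strmbase)

class CTsOutputPin : public CSourceStream
{
public:
    CTsOutputPin(HRESULT* phr, CSource* pFilter)
        : CSourceStream(NAME("TS Output"), phr, pFilter, L"Out") {}

    HRESULT FillBuffer(IMediaSample* pSample);      // push the next TS chunk downstream
    HRESULT GetMediaType(CMediaType* pMediaType);   // advertise the transport-stream type
    HRESULT DecideBufferSize(IMemAllocator* pAlloc, ALLOCATOR_PROPERTIES* pRequest);
};

class CTsSourceFilter
    : public CSource
    , public IFileSourceFilter
    , public IAMFilterMiscFlags
    , public ISpecifyPropertyPages
{
public:
    DECLARE_IUNKNOWN
    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void** ppv);

    // IFileSourceFilter: remember which file/URL to stream from
    STDMETHODIMP Load(LPCOLESTR pszFileName, const AM_MEDIA_TYPE* pmt);
    STDMETHODIMP GetCurFile(LPOLESTR* ppszFileName, AM_MEDIA_TYPE* pmt);

    // IAMFilterMiscFlags: report ourselves as a live source
    STDMETHODIMP_(ULONG) GetMiscFlags() { return AM_FILTER_MISC_FLAGS_IS_SOURCE; }

    // ISpecifyPropertyPages
    STDMETHODIMP GetPages(CAUUID* pPages);

private:
    CTsSourceFilter(IUnknown* pUnk, HRESULT* phr);
    CTsOutputPin* m_pPin;
};
```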
If you have problems decoding the audio/video, maybe first try a simple MPEG-2 stream, for example from a DVB source. Also make sure you have a decoder installed and that it accepts that format (for example, ffdshow has MPEG-2 decoding turned off by default).
I am looking for some source code for DirectShow which implements this feature:
Implement one image-processing filter with two input video source pins, and render the result.
For example, open two video files, process each frame from the two videos, then composite those two frames into a single output frame.
Are there any existing filter implementations or framework source code?
Thanks
Just implement 2 input pins for the connections. Take a sample from the DirectX SDK and change the number of input pins to 2 if it has only one.
I also found some documentation and a sample for you here.
You can use the stock VMR filter to perform alpha-blending without any special code, as long as you are only going to render the output. Just feed the two videos into separate pins on the same VMR instance.
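As a minimal sketch of that setup with the VMR-9 (file names are placeholders, error handling abbreviated): add the VMR to the graph, switch it to mixing mode with two streams before connecting anything, then let RenderFile route each source to one of its input pins.

```cpp
// Minimal sketch: stock VMR-9 mixing two files; link against strmiids.lib.
#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

HRESULT BuildMixingGraph(IGraphBuilder* pGraph)
{
    IBaseFilter* pVmr = NULL;
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, NULL,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pVmr));
    if (FAILED(hr)) return hr;
    pGraph->AddFilter(pVmr, L"VMR-9");

    // Put the VMR into mixing mode with two input pins *before* connecting anything.
    IVMRFilterConfig9* pConfig = NULL;
    hr = pVmr->QueryInterface(IID_PPV_ARGS(&pConfig));
    if (SUCCEEDED(hr))
    {
        pConfig->SetNumberOfStreams(2);
        pConfig->Release();
    }

    // RenderFile prefers filters already in the graph, so each source lands on a VMR pin.
    pGraph->RenderFile(L"video1.avi", NULL);
    pGraph->RenderFile(L"video2.avi", NULL);

    pVmr->Release();
    return hr;
}
```

Per-stream alpha and positioning can then be adjusted through IVMRMixerControl9 (SetAlpha, SetOutputRect).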
If you want to save the mixed output, you'll need to do the mixing yourself (or write a custom allocator-presenter plugin for the VMR filter).
G