Using NAudio I am capturing microphone samples in order to play them back on a remote machine. In some cases I'd like to apply a low-pass filter. How can this be done with NAudio?
Ideally I would like to apply the filter based on end user input on the playback machine (maybe via an "enable/disable low-pass filter" button).
It appears as though I need to use...
var myFilter = BiQuadFilter.LowPassFilter(x, y, z);
myFilter.Transform(inBuffer, outBuffer);
However, it isn't clear how/where I insert this transformation prior to playback.
I see a few filters in the NAudio library that appear to be what I'd need; unfortunately, there don't appear to be any samples that make use of them. Even a simple example in NAudioDemo (Network Chat w/ Filters?) would be great!
I am trying to programmatically get the maximum display refresh rate that Windows allows (i.e., Display settings > Advanced display settings > Refresh rate > max value). Chances are there's no such query and I instead need to obtain all the possible options. How do I do that?
I've already obtained the monitor names and current refresh rates using the CCD API, by obtaining DISPLAYCONFIG_PATH_INFOs and by using DisplayConfigGetDeviceInfo. But I can't seem to find a way to obtain the refresh rate options associated with a monitor. A CCD API based solution would be perfect, but an alternative is fine; it just means I'll have to somehow reconcile the information obtained via the CCD API with that obtained from the other API.
Also, I'm trying to do this in the context of a plain Windows executable that doesn't use a specific graphics backend library (e.g., DX12) or game-making framework.
Thanks!
Using the CCD API, you can call DisplayConfigGetDeviceInfo with DISPLAYCONFIG_DEVICE_INFO_TYPE::DISPLAYCONFIG_DEVICE_INFO_GET_SOURCE_NAME to get the GDI device name, usually something like \\.\DISPLAY1, \\.\DISPLAY2, etc.
Once you have that device name, you can use the EnumDisplaySettingsW function to enumerate all DEVMODE structures for the device. This gives you every combination of mode (resolution, frequency, etc.) that the device supports, which can easily be hundreds of modes.
Once you have those, you just need to group them by the DEVMODE dmDisplayFrequency field (and sort).
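As an illustration, a minimal sketch, assuming the GDI device name has already been obtained from the CCD API (the hard-coded \\.\DISPLAY1 below is just a placeholder):

#include <windows.h>
#include <cstdio>
#include <set>

int main()
{
    // Placeholder: substitute the name returned by DisplayConfigGetDeviceInfo.
    const wchar_t* deviceName = L"\\\\.\\DISPLAY1";

    DEVMODEW mode = {};
    mode.dmSize = sizeof(mode);

    // Walk every display mode the device reports; iModeNum counts up from 0.
    std::set<DWORD> frequencies;
    for (DWORD i = 0; EnumDisplaySettingsW(deviceName, i, &mode); ++i)
        frequencies.insert(mode.dmDisplayFrequency);

    // Note: 0 or 1 means "hardware default" rather than an actual frequency.
    // std::set iterates in ascending order, so the last entry is the maximum.
    for (DWORD f : frequencies)
        wprintf(L"%lu Hz\n", f);
}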
I am trying to write a small DirectShow application using C++. My main issue is grabbing the raw data from the microphone and saving it as BYTES.
Is this possible using a DirectShow filter? What interface am I supposed to use in order to get the data?
Right now, I am able to write a recorded AVI file using the following filter graph:
Microphone -> AVI Mux -> File Writer
This graph works fine.
I have tried to use the SampleGrabber (which is deprecated by Microsoft), but I don't know what to do with this BaseFilter type.
By design, a DirectShow topology needs to be complete, starting with a source (the microphone) and terminating with a renderer filter, and data exchange in a DirectShow pipeline is private to the connected filters, with no data exposed to the controlling application.
This is confusing here because you apparently want to export content from the pipeline to the outside world, which is not exactly the way DirectShow is designed to work.
The "intended", "DirectShow way" is to develop a custom renderer filter which would connect to the microphone filter and receive its data. More often than not, developers prefer not to take this path, since developing a custom filter is somewhat complicated.
The popular solution is to build a pipeline Microphone -> Sample Grabber -> Null Renderer. Sample Grabber is a filter which exposes the data passing through it via the SampleCB callback. Even though it's getting harder with time, you can still find tons of code that does the job. Most developers prefer this path: build the pipeline from ready-to-use blocks and forget about the DirectShow API.
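To make this concrete, here is a rough sketch of the grabber side of that pipeline. ISampleGrabber lives in qedit.h, which recent Windows SDKs no longer ship, so you may need to pull the declarations from an older SDK; error handling and the actual pin connections are omitted.

#include <dshow.h>
#include <qedit.h>  // ISampleGrabber, ISampleGrabberCB (from an older SDK)

class GrabberCB : public ISampleGrabberCB
{
public:
    // Called synchronously for every media sample passing through the grabber.
    STDMETHODIMP SampleCB(double /*sampleTime*/, IMediaSample* sample)
    {
        BYTE* data = nullptr;
        sample->GetPointer(&data);
        long size = sample->GetActualDataLength();
        // 'data' now points to 'size' bytes of raw PCM audio; copy them out here.
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    // Lifetime is managed by the owning scope, so no real reference counting.
    STDMETHODIMP_(ULONG) AddRef() { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
};

void InstallGrabber(IGraphBuilder* graph, GrabberCB* cb)
{
    IBaseFilter* grabberFilter = nullptr;
    CoCreateInstance(CLSID_SampleGrabber, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&grabberFilter);
    graph->AddFilter(grabberFilter, L"Grabber");

    ISampleGrabber* grabber = nullptr;
    grabberFilter->QueryInterface(IID_ISampleGrabber, (void**)&grabber);

    AM_MEDIA_TYPE mt = {};          // restrict the grabber to raw PCM audio
    mt.majortype = MEDIATYPE_Audio;
    mt.subtype = MEDIASUBTYPE_PCM;
    grabber->SetMediaType(&mt);
    grabber->SetCallback(cb, 0);    // 0 selects the SampleCB callback

    IBaseFilter* nullRenderer = nullptr;
    CoCreateInstance(CLSID_NullRenderer, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&nullRenderer);
    graph->AddFilter(nullRenderer, L"Null Renderer");
    // Then connect: microphone out -> grabber in, grabber out -> null renderer in.
}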
And then another option is to not use DirectShow at all. Given its state, this API is an unlucky choice; you should rather be looking at WASAPI capture instead.
I want to use the Medialooks multisource filter in my application. It has an entry in
HKEY_CURRENT_USER\SOFTWARE\Classes\CLSID\
but I still have to add the filter manually using its CLSID and the AddFilter function.
Is there any way to make DirectShow's RenderFile function automatically build the graph by enumerating the filters from the registry?
I checked in the GraphEdit tool: if I manually insert and connect the filters, I can play the videos properly. Otherwise, it won't render automatically when building the graph.
The ability to connect filters and obtain a topology of interest is one thing; having this connection take place during Intelligent Connect is another. For Intelligent Connect and RenderFile, the filters of interest must be registered, with accurate DirectShow registration details: merit and media types. Quite often filters lack this registration (and at other times they "over-register" themselves so that they are picked up even when they are an obvious misfit).
Even though you can re-register a filter yourself (see IFilterMapper2::RegisterFilter) with alternate registration details, you typically do not do that; it's the filter developer's business to register accurately. The better alternative is to build the graph using AddFilter calls, where you have fine control over graph construction. Or you might use that as a fallback construction method when RenderFile fails in the first place.
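A sketch of that fallback logic (error handling trimmed; clsidSource is whatever CLSID your filter registers, and the filter is assumed to support IFileSourceFilter):

#include <dshow.h>

HRESULT BuildWithFallback(IGraphBuilder* graph, const wchar_t* path, REFCLSID clsidSource)
{
    // Try Intelligent Connect first.
    HRESULT hr = graph->RenderFile(path, nullptr);
    if (SUCCEEDED(hr))
        return hr;

    // Fallback: explicit construction under our control.
    IBaseFilter* source = nullptr;
    hr = CoCreateInstance(clsidSource, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, (void**)&source);
    if (FAILED(hr))
        return hr;
    graph->AddFilter(source, L"Multisource");

    IFileSourceFilter* fileSource = nullptr;
    if (SUCCEEDED(source->QueryInterface(IID_IFileSourceFilter, (void**)&fileSource)))
    {
        fileSource->Load(path, nullptr);  // point the source at the file
        fileSource->Release();
    }

    // Render every output pin; Intelligent Connect completes each chain.
    IEnumPins* pins = nullptr;
    source->EnumPins(&pins);
    IPin* pin = nullptr;
    while (pins->Next(1, &pin, nullptr) == S_OK)
    {
        PIN_DIRECTION dir;
        pin->QueryDirection(&dir);
        if (dir == PINDIR_OUTPUT)
            graph->Render(pin);
        pin->Release();
    }
    pins->Release();
    source->Release();
    return S_OK;
}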
I'm having problems writing a custom DS source filter to play a TS stream that was dumped to multiple files. [ed: The point being to re-play a continuous stream from these separate files]
First I tried modifying the Async file sample: no go - the data 'pull' model seems to put all the controlling logic in the splitter filter so I couldn't trick it into believing I have a 'continuous' stream.
So then I tried modifying the PushSource desktop sample: it appears I have to babysit the MPEG demuxer this way to create its output pin, parsing the data myself to get the IDs, etc. I managed to get GraphStudio to auto-wire something up (using a strange DTV-DVD decoder), but it doesn't play anything despite the source filter pushing the right data downstream.
Does anyone have experience in this area to help/suggest anything?
Have you found a solution to your problem yet?
I am writing a similar DirectShow filter, currently for playing only one file, but I think that modifying it for playing several files should not be a problem.
I built this filter starting from the "Push Source Bitmap" filter, but I had to make a lot of changes to it.
I also had to build the graph using an application that I wrote (so not using GraphEdit), connect the "MPEG-2 Demultiplexer" to the new filter, add one PSI output (mapped to PID 0 = PAT), and connect the "MPEG-2 Sections and Tables" filter to this PSI output.
After that, I used the "MPEG-2 Sections and Tables" filter to read the PAT table and the PMT PIDs defined inside it. Next, I also mapped all the PMT PIDs to the same "MPEG-2 Sections and Tables" filter, parsed the PMT tables to learn the elementary stream PIDs and media types, and then created one video output and one audio output based on this information (there may be more than one audio/video stream, but at the current step I keep only the first one). Note that this requires running the partial graph temporarily in order to parse the tables, then stopping it in order to create the video and audio output pins (with the proper media types) and connect the decoders and renderers.
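As a hedged illustration of the first step (error handling omitted; verify the exact media type and header locations in your SDK, since the "MPEG-2 Sections and Tables" filter is picky about what it accepts):

#include <dshow.h>
#include <bdaiface.h>  // IMpeg2Demultiplexer, IMPEG2PIDMap (location varies by SDK)

void CreatePatPin(IBaseFilter* demuxFilter)
{
    IMpeg2Demultiplexer* demux = nullptr;
    demuxFilter->QueryInterface(IID_IMpeg2Demultiplexer, (void**)&demux);

    // Media type for a PSI sections output pin.
    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_MPEG2_SECTIONS;

    WCHAR pinName[] = L"PSI";
    IPin* psiPin = nullptr;
    demux->CreateOutputPin(&mt, pinName, &psiPin);

    // Map PID 0 (the PAT) onto the new pin as PSI content.
    IMPEG2PIDMap* pidMap = nullptr;
    psiPin->QueryInterface(IID_IMPEG2PIDMap, (void**)&pidMap);
    ULONG pid = 0;
    pidMap->MapPID(1, &pid, MEDIA_MPEG2_PSI);

    pidMap->Release();
    psiPin->Release();
    demux->Release();
}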
In addition to that, I have a piece of information that you could find interesting: it appears that, when connected, the "MPEG-2 Demultiplexer" searches the graph for a filter exposing the IBDA_NetworkProvider interface, and if it finds one, it registers itself with it using the IBDA_NetworkProvider::RegisterDeviceFilter method.
I think you could use this to detect the insertion of the "MPEG-2 Demultiplexer" into the graph (by exposing the IBDA_NetworkProvider interface from your filter), and try to perform the above operations from your source filter. That would allow your filter to be used inside GraphEdit, with the "MPEG-2 Demultiplexer" babysat by the filter itself, without having to build an application around it to do these operations.
Gingko
I created a TS source filter which reads a network stream, so that is continuous too; but one difference from reading a file is that a network stream automatically arrives at the correct speed. So I think you should be able to use a similar approach.
I based my filter on the FBall example from the DirectX SDK.
The filter derives from CSource, IFileSourceFilter, IAMFilterMiscFlags and ISpecifyPropertyPages. The output pin derives just from CSourceStream.
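A bare-bones skeleton of that layout, using the DirectShow base classes (streams.h). The CLSID is a placeholder, and COM registration, the IFileSourceFilter plumbing, seeking, and timestamping are all omitted:

#include <streams.h>

// Placeholder GUID: generate your own before registering the filter.
static const GUID CLSID_TsSource =
{ 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };

class CTsOutputPin : public CSourceStream
{
public:
    CTsOutputPin(HRESULT* phr, CSource* pFilter)
        : CSourceStream(NAME("TS Out"), phr, pFilter, L"Out") {}

    // Called for every downstream buffer: fill it with the next TS packets.
    HRESULT FillBuffer(IMediaSample* pSample)
    {
        BYTE* pData = nullptr;
        pSample->GetPointer(&pData);
        // ... copy the next chunk of the stream into pData here ...
        // pSample->SetActualDataLength(bytesWritten);
        return S_OK;
    }

    HRESULT GetMediaType(CMediaType* pmt)
    {
        pmt->SetType(&MEDIATYPE_Stream);
        pmt->SetSubtype(&MEDIASUBTYPE_MPEG2_TRANSPORT);
        return S_OK;
    }

    HRESULT DecideBufferSize(IMemAllocator* pAlloc, ALLOCATOR_PROPERTIES* pRequest)
    {
        pRequest->cBuffers = 4;
        pRequest->cbBuffer = 188 * 256;  // a few hundred TS packets per buffer
        ALLOCATOR_PROPERTIES actual;
        return pAlloc->SetProperties(pRequest, &actual);
    }
};

class CTsSourceFilter : public CSource
{
public:
    CTsSourceFilter(LPUNKNOWN pUnk, HRESULT* phr)
        : CSource(NAME("TS Source"), pUnk, CLSID_TsSource)
    {
        new CTsOutputPin(phr, this);  // the pin registers itself with the filter
    }
};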
If you have problems decoding the audio/video, maybe first try a simple MPEG-2 stream, for example from a DVB source. And make sure you have a decoder installed and that it accepts the format (for example, ffdshow has MPEG-2 decoding turned off by default).
I am looking for some DirectShow source code that implements this feature:
Implement an image-processing filter with two input video source pins, and render the result.
For example, open two video files, process each frame from the two videos, then composite the two frames into a single output frame.
Are there any existing filter implementations or framework source code?
Thanks
Just implement two pins for the input connections. Take a sample from the DirectX SDK and change the number of input pins to two if it only has one.
I also found some documentation and a sample for you here.
You can use the stock VMR filter to perform alpha-blending without any special code, as long as you are only going to render the output. Just feed the two videos into separate pins on the same VMR instance.
If you want to save the mixed output, you'll need to do the mixing yourself (or write a custom allocator-presenter plugin for the VMR filter).
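For the render-only case, a minimal sketch of the two-streams-into-one-VMR setup (VMR-9 flavor; the file paths are placeholders and error handling is omitted):

#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

void RenderTwoFiles(IGraphBuilder* graph)
{
    IBaseFilter* vmr = nullptr;
    CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&vmr);
    graph->AddFilter(vmr, L"VMR9");

    // Mixing mode must be enabled before the inputs are connected.
    IVMRFilterConfig9* config = nullptr;
    vmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&config);
    config->SetNumberOfStreams(2);

    // Each RenderFile call connects one video into its own VMR input pin.
    graph->RenderFile(L"C:\\video1.avi", nullptr);
    graph->RenderFile(L"C:\\video2.avi", nullptr);

    config->Release();
    vmr->Release();
}

Per-stream blending can then be adjusted through IVMRMixerControl9 (for example, SetAlpha) once the pins are connected.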
G