I am reading about Audio Units on OSX, but it's not totally clear to me what an Audio Unit is.
I would like to insert custom audio processing in any stream that is being captured from a microphone or played by any application.
Is it possible to implement the custom audio processing as an Audio Unit which is automatically inserted into any capture or render streams on the machine?
If so, are there any good examples in the public domain that I can take a look at?
Related
I have an MFC-based project that decodes some data and generates 16-bit, 48000 Hz raw audio data.
The program continuously generates this audio data in real time.
Are there any functions in MFC that will let me play back the audio data through the sound card? I have been googling around for a while, and the consensus seems to be that MFC doesn't have this feature. I have also found a tutorial that shows how to play a wav file using the PlaySound() function, but it only works on wav files: even when it plays audio data from memory, that data has to be prepared as a complete wav file with all the header information, while I need to play back raw audio data generated in real time.
I have also seen people suggest using DirectX, but I feel like something like this should be possible using the basic Windows libraries without having to pull in any extra ones. I also found a tutorial for creating and reading wav files in an MFC-based project, but it's not really clear how to use it to play raw audio data in memory. That tutorial uses the waveOutOpen() function to play back the wav file, and it looks like this is probably what I need, but I cannot find a simple tutorial that shows how to use it.
How do I play back raw audio from memory in an MFC dialog-based project? I am looking for something where I can specify a pointer to the audio data, the number of samples, the bit depth, and the sampling frequency, and the function would play the data back for me. A basic working example, such as generating a sine wave and playing it back, would be appreciated. If DirectX is the only way to do this, then that's fine as well.
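Since the question asks for one, here is a minimal sine-wave example of the waveOutOpen() route it mentions. This is only an illustrative sketch, not code from the original project: the function name PlayTestTone and the 440 Hz tone are made up for the example, and most error handling is omitted. It could be called from, say, a button handler in the dialog.

// Illustrative sketch only: plays raw 16-bit, 48000 Hz mono PCM from memory
// with waveOutOpen()/waveOutWrite(). Here the buffer is a generated 440 Hz
// sine wave; in the real project it would be the decoder's output.
// Link against winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <cmath>
#include <vector>
#pragma comment(lib, "winmm.lib")

void PlayTestTone()
{
    const int sampleRate = 48000;
    const int numSamples = sampleRate;                 // one second of audio
    std::vector<short> samples(numSamples);
    for (int i = 0; i < numSamples; ++i)
        samples[i] = (short)(32767.0 * sin(2.0 * 3.14159265 * 440.0 * i / sampleRate));

    WAVEFORMATEX wfx = {0};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = sampleRate;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    HWAVEOUT hWaveOut = NULL;
    if (waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return;

    WAVEHDR hdr = {0};
    hdr.lpData         = (LPSTR)&samples[0];
    hdr.dwBufferLength = numSamples * sizeof(short);
    waveOutPrepareHeader(hWaveOut, &hdr, sizeof(hdr));
    waveOutWrite(hWaveOut, &hdr, sizeof(hdr));

    // Crude wait for completion; real-time playback would queue several
    // smaller buffers and refill each one as its WHDR_DONE flag is set.
    while (!(hdr.dwFlags & WHDR_DONE))
        Sleep(10);

    waveOutUnprepareHeader(hWaveOut, &hdr, sizeof(hdr));
    waveOutClose(hWaveOut);
}

For continuous real-time output, the usual pattern is to queue two or more WAVEHDR buffers and refill each one as the device marks it WHDR_DONE (or signals a callback), so the decoder always stays at least one buffer ahead of playback.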
I'm trying to use various audio sources in DirectShow and I have these capture devices in my system which I think are quite common (provided by chipset drivers):
Realtek HD Audio Line input
Realtek HD Audio Stereo input
Realtek HD Audio Mic input
They look like capture sources, expose an analog input and 24-bit PCM output, and can connect that output to other filters (a renderer, etc.).
But IMediaFilter::Run on the capture filter returns ERROR_BAD_COMMAND, which does not say much. I tried it in my program and also in GraphStudioNext, which did not reveal any extra information.
Is it possible to use these for capture and how?
Update
For instance, I tried this graph with the mic input (actually connected and working). In this setup the graph does not start (ERROR_BAD_COMMAND), but with the other source it does.
This is the same device but different drivers. The one that works is from the category "Audio capture sources"; the one that does not is from "WDM Streaming Capture Devices".
The easiest way to check the device with GraphStudioNext is to build a recording graph with the PCM audio input device itself, the AVI Mux filter, and the File Writer filter connected like this (with default media types):
You hit Run, and the recording graph produces a non-empty file via the File Writer in the location you were prompted for while building the graph.
--
So now I realize your question is a bit different. You see filters corresponding to your audio input device under both
Audio Capture Sources -- CLSID_AudioInputDeviceCategory
WDM Streaming Capture Devices -- AM_KSCATEGORY_CAPTURE
And the question is why the first filter works and the other does not.
The similar filter from AM_KSCATEGORY_CAPTURE seems to connect into the topology, but an attempt to run it triggers ERROR_BAD_COMMAND.
First of all, these are indeed different filters. Even though the underlying hardware might be the same, the "frontend" filters are different. The wrapper that works is the Audio Capture Filter backed by the WDM device. In the other case it is a generic WDM Filter Proxy whose behavior is, generally speaking, undefined. That filter is not documented and, I am guessing, it does not receive sufficient initialization or does not otherwise implement the required behavior, so this proxy is not, and is not supposed to be, interchangeable with the Audio Capture Filter proxy.
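For completeness, here is a small sketch (illustrative only, not part of the original answer) of enumerating the Audio Capture Sources category with ICreateDevEnum. Binding a filter from CLSID_AudioInputDeviceCategory, rather than from AM_KSCATEGORY_CAPTURE, is what gets you the Audio Capture Filter wrapper described above.

// Lists the friendly names of filters in the Audio Capture Sources category.
// A real graph would pick one of these monikers and BindToObject() it into
// an IBaseFilter before adding it to the filter graph.
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int main()
{
    CoInitialize(NULL);

    ICreateDevEnum *pDevEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&pDevEnum);

    IEnumMoniker *pEnum = NULL;
    // CLSID_AudioInputDeviceCategory yields the "Audio Capture Sources";
    // AM_KSCATEGORY_CAPTURE would yield the generic WDM proxies instead.
    if (pDevEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory, &pEnum, 0) == S_OK)
    {
        IMoniker *pMoniker = NULL;
        while (pEnum->Next(1, &pMoniker, NULL) == S_OK)
        {
            IPropertyBag *pBag = NULL;
            if (SUCCEEDED(pMoniker->BindToStorage(NULL, NULL, IID_IPropertyBag, (void**)&pBag)))
            {
                VARIANT var;
                VariantInit(&var);
                if (SUCCEEDED(pBag->Read(L"FriendlyName", &var, NULL)))
                    wprintf(L"%s\n", var.bstrVal);
                VariantClear(&var);
                pBag->Release();
            }
            pMoniker->Release();
        }
        pEnum->Release();
    }
    pDevEnum->Release();
    CoUninitialize();
    return 0;
}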
I am looking for a way to modify the output stream from the microphone.
The idea is to modify the output stream by merging two audio streams into a single one.
My use case is the following: when a person makes a Skype call, a background song is added to the output stream.
Is there any way to do this on Windows?
If you are talking about manipulating the input that other programs see, this would be fairly difficult to implement: you would have to create a virtual audio device and then have the target program use that. There are existing packages that already provide this functionality, however; a search for "virtual audio cable" or "virtual mixer" should come up with something that would work.
I listed the devices with waveInGetDevCaps and it only shows me the microphone. However, I need to record the speaker audio. Is it possible to record the devices listed by waveOutGetDevCaps? All the examples I find use waveIn.
I am trying to record the system's audio, not the microphone's audio.
I have two goals: one is to record the sound and then do music recognition on it, and the second is to record the screen and system audio together. Do the DirectShow APIs record audio as well?
Edit: So I started on the DirectShow approach and am able to list the devices in CLSID_AudioInputDeviceCategory, but I can't find an example of how to record system audio. Does anyone know of one, or can you provide one?
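For what it's worth, one common approach (not mentioned in the question) is WASAPI loopback capture rather than DirectShow or waveIn: opening the default render endpoint with AUDCLNT_STREAMFLAGS_LOOPBACK yields the mix of everything the system is playing. The following is only a rough sketch of that idea, with error handling omitted; the captured frames are left for you to write to a file or feed to a recognizer.

// Captures roughly ten seconds of whatever the default output device is
// rendering, using a shared-mode loopback stream (Vista and later).
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *pEnum = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

    // The render endpoint (speakers), opened in loopback mode, produces the
    // mix of everything the system is playing.
    IMMDevice *pDevice = NULL;
    pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

    IAudioClient *pClient = NULL;
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pClient);

    WAVEFORMATEX *pwfx = NULL;
    pClient->GetMixFormat(&pwfx);
    pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                        10000000, 0, pwfx, NULL);     // 1-second buffer

    IAudioCaptureClient *pCapture = NULL;
    pClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCapture);
    pClient->Start();

    for (int i = 0; i < 1000; ++i)                    // ~10 seconds of polling
    {
        UINT32 packetFrames = 0;
        pCapture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE  *pData = NULL;
            UINT32 frames = 0;
            DWORD  flags = 0;
            pCapture->GetBuffer(&pData, &frames, &flags, NULL, NULL);
            // pData now holds 'frames' frames in the mix format (*pwfx);
            // write them to a file or feed them to a recognizer here.
            pCapture->ReleaseBuffer(frames);
            pCapture->GetNextPacketSize(&packetFrames);
        }
        Sleep(10);
    }

    pClient->Stop();
    CoTaskMemFree(pwfx);
    pCapture->Release();
    pClient->Release();
    pDevice->Release();
    pEnum->Release();
    CoUninitialize();
    return 0;
}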
I'm currently reading the MSDN documentation on rendering a stream to an audio renderer,
or in other words, on playing back the data I captured from the microphone.
http://msdn.microsoft.com/en-us/library/dd316756%28v=vs.85%29.aspx
This page provides an example.
My problem is that I can't really follow the flow of the example project.
I currently have a separate class storing the parameters below, which I obtained from the capture process.
These parameters are continuously overwritten as the program captures streaming audio data from the microphone.
BYTE *data;               // pointer to the captured audio frames
UINT32 bufferframecount;  // number of frames currently captured
DWORD flag;               // buffer flags returned by the capture client
WAVEFORMATEX *pwfx;       // format of the captured data
My questions are:
How does the loadData() function actually work?
Is it supposed to grab the parameters I am writing from the capture process?
How does the program send the data to the audio renderer and play it through my speakers?
The loadData() function fills the buffer pointed to by pData with audio. The example abstracts the audio source, so this could be anything from a .wav file to the microphone audio that you already captured.
So, if you are trying to build from that example, I would implement the MyAudioSource class and have it just read PCM or float samples from a file whenever loadData() is called. Then, if you run that program, it should play the audio from the file out of the speaker.
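As an illustration only (not code from the MSDN page or from the original poster), here is one way MyAudioSource could be fleshed out. The SetFormat()/LoadData() names follow the skeleton in that MSDN example; this version just synthesizes a sine wave in 32-bit float, which is the typical shared-mode mix format, so you can hear output before wiring in real captured data.

// Sketch of an audio source for the MSDN render loop. The render loop calls
// SetFormat() once with the mix format, then repeatedly calls LoadData() to
// fill the buffer obtained from IAudioRenderClient::GetBuffer().
#include <windows.h>
#include <audioclient.h>
#include <cmath>

class MyAudioSource
{
    WAVEFORMATEX format;
    double phase = 0.0;

public:
    // Called once with the mix format returned by IAudioClient::GetMixFormat.
    HRESULT SetFormat(WAVEFORMATEX *pwfx)
    {
        format = *pwfx;
        return S_OK;
    }

    // Called every pass of the render loop: fill bufferFrameCount frames
    // starting at pData, or set *flags to AUDCLNT_BUFFERFLAGS_SILENT when
    // there is no more data.
    HRESULT LoadData(UINT32 bufferFrameCount, BYTE *pData, DWORD *flags)
    {
        // Assumes the mix format is 32-bit float; real code should check
        // wFormatTag / the WAVEFORMATEXTENSIBLE SubFormat.
        float *out = (float*)pData;
        const double step = 2.0 * 3.14159265358979 * 440.0 / format.nSamplesPerSec;

        for (UINT32 frame = 0; frame < bufferFrameCount; ++frame)
        {
            float sample = (float)(0.2 * sin(phase));
            phase += step;
            for (WORD ch = 0; ch < format.nChannels; ++ch)
                *out++ = sample;      // same sample on every channel
        }
        *flags = 0;                   // 0 = buffer contains valid data
        return S_OK;
    }
};

To play the captured microphone data instead, LoadData() would copy bufferframecount frames out of the capture buffer each time it is called (converting between the capture and render formats if they differ), which is exactly where the parameters listed in the question would come in.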