I'm currently reading the MSDN documentation on rendering a stream to an audio renderer,
or in other words, playing back the data I captured from the microphone.
http://msdn.microsoft.com/en-us/library/dd316756%28v=vs.85%29.aspx
That page provides an example.
My problem is that I can't really understand the program flow.
I currently have a separate class that stores the parameters below, which I obtain from the capture process.
These parameters are continuously overwritten as the program captures streaming audio data from the microphone.
BYTE *data;              // pointer to the captured audio packet
UINT32 bufferframecount; // number of frames in the packet
DWORD flag;              // buffer-status flags from the capture client
WAVEFORMATEX *pwfx;      // format of the captured stream
My questions are:
How does the loadData() function actually work?
Is it supposed to grab the parameters I'm writing from the capture process?
How does the program send the data to the audio renderer and play it through my speakers?
The loadData() function fills the buffer pointed to by pData with audio. The example abstracts the audio source, so this could be anything from a .wav file to the microphone audio that you already captured.
So, if you are trying to build from that example, I would implement the MyAudioSource class and have it just read PCM or float samples from a file whenever loadData() is called. Then, if you run the program, it should play the audio from the file out of the speaker.
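As a rough illustration, here is a minimal sketch of such a MyAudioSource class, assuming the SetFormat()/LoadData() interface used by the MSDN rendering example and a raw PCM file ("samples.raw", a placeholder name) that is already in the device's mix format:

#include <windows.h>
#include <Audioclient.h>
#include <cstdio>
#include <cstring>

class MyAudioSource
{
    FILE        *m_file = nullptr;
    WAVEFORMATEX m_format{};

public:
    // Called once by the render code with the mix format of the output device.
    HRESULT SetFormat(WAVEFORMATEX *pwfx)
    {
        m_format = *pwfx;
        m_file = fopen("samples.raw", "rb");   // raw PCM already in the mix format
        return m_file ? S_OK : E_FAIL;
    }

    // Called repeatedly by the render loop; must fill pData with bufferFrameCount frames.
    HRESULT LoadData(UINT32 bufferFrameCount, BYTE *pData, DWORD *flags)
    {
        const UINT32 bytesWanted = bufferFrameCount * m_format.nBlockAlign;
        const size_t bytesRead   = fread(pData, 1, bytesWanted, m_file);

        if (bytesRead == 0)
        {
            *flags = AUDCLNT_BUFFERFLAGS_SILENT;   // no more data: render silence / stop
            return S_FALSE;
        }

        if (bytesRead < bytesWanted)               // pad a short read with silence
            memset(pData + bytesRead, 0, bytesWanted - bytesRead);

        *flags = 0;
        return S_OK;
    }
};

To play the microphone data you already captured, LoadData() would copy from your shared buffer (the data/bufferframecount members above) instead of calling fread(), taking care that capture and render use compatible formats and rates.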
Related
I am reading about Audio Units on OSX, but it's not totally clear to me what an Audio Unit is.
I would like to insert custom audio processing in any stream that is being captured from a microphone or played by any application.
Is it possible to implement the custom audio processing as an Audio Unit which is automatically inserted into any capture or render streams on the machine?
If so, are there any good examples in the public domain that I can take a look at?
I have an MFC-based project that decodes some data and generates 16-bit, 48000 Hz raw audio data.
The program continuously generates this audio data in real time.
Are there any functions in MFC that will let me play back the audio data through the sound card? I have been googling around for a while, and the consensus seems to be that MFC doesn't have this feature. I have also found this tutorial that shows how to play back a WAV file using the PlaySound() function, but it looks like it is only for WAV files; even if it can play audio data in memory, that data has to be prepared in the form of a complete WAV file with all the header information, whereas I need to play back raw audio data generated in real time.
I have also seen people suggest using DirectX, but I feel like something like this should be possible using basic Windows library functions, without having to pull in any extra libraries. I also found this tutorial for creating and reading WAV files in an MFC-based project, but it's not really clear how to use it to play raw audio data in memory. That tutorial uses the waveOutOpen() function to play back the WAV file, and it looks like this is probably what I need, but I cannot find a simple tutorial that shows how to use it.
How do I play back raw audio in memory in an MFC dialog-based project? I am looking for something where I can specify a pointer to the audio data, the number of samples, the bit depth, and the sampling frequency, and the function plays the data back for me. A basic working example, such as generating a sine wave and playing it back, would be appreciated. If DirectX is the only way to do this, then that's fine as well.
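For reference, here is a minimal sketch of the waveOutOpen()/waveOutWrite() approach mentioned above: it plays one second of a generated 440 Hz sine wave at 16-bit, 48000 Hz, mono. The single pre-filled buffer and the polling wait are simplifications; real-time generation would queue several WAVEHDR buffers and refill each one as it completes.

#include <windows.h>
#include <mmsystem.h>
#include <cmath>
#include <vector>
#pragma comment(lib, "winmm.lib")

int main()
{
    const int sampleRate = 48000;

    // Generate one second of a 440 Hz sine wave as 16-bit mono samples.
    std::vector<short> samples(sampleRate);
    for (size_t i = 0; i < samples.size(); ++i)
        samples[i] = static_cast<short>(30000 * sin(2.0 * 3.14159265 * 440.0 * i / sampleRate));

    // Describe the format of the raw data.
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = sampleRate;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    HWAVEOUT hWaveOut = nullptr;
    waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);

    // Hand the raw buffer to the device.
    WAVEHDR hdr = {};
    hdr.lpData         = reinterpret_cast<LPSTR>(samples.data());
    hdr.dwBufferLength = static_cast<DWORD>(samples.size() * sizeof(short));
    waveOutPrepareHeader(hWaveOut, &hdr, sizeof(hdr));
    waveOutWrite(hWaveOut, &hdr, sizeof(hdr));

    // Crude wait until playback finishes (a real program would use a callback or event).
    while (!(hdr.dwFlags & WHDR_DONE))
        Sleep(10);

    waveOutUnprepareHeader(hWaveOut, &hdr, sizeof(hdr));
    waveOutClose(hWaveOut);
    return 0;
}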
I am looking for a way to modify the output stream coming from the microphone.
The idea is to modify that stream by merging two audio streams into a single one.
My use case is the following: when a person makes a Skype call, a background song is mixed into the outgoing stream.
Is there any way to do this on Windows?
If you are talking about manipulating the input that other programs see, this would be fairly difficult to implement: you would have to create a virtual audio device and then have the target program use that. There are existing packages that already provide this functionality, however; a search for "virtual audio cable" or "virtual mixer" should turn up something that would work.
I am downloading various sound files (e.g. MP3s, AIFFs, etc.) with my own C++ HTTP client. Now I want to parse them using Core Audio's AudioToolbox to get linear PCM data for playback with, for example, OpenAL. According to this document: https://developer.apple.com/library/mac/#documentation/MusicAudio/Conceptual/CoreAudioOverview/ARoadmaptoCommonTasks/ARoadmaptoCommonTasks.html , it should also be possible to create an audio file from memory. Unfortunately, I didn't find any way of doing this when browsing the API, so what is the common way to do this? Please don't say that I should save the file to my hard drive first.
Thank you!
I have done this using an input memory buffer, avoiding any files. In my case I started with AAC audio data and used Apple's API AudioConverterFillComplexBuffer to do the hardware decompression into LPCM. The trick is that you have to define a callback function to supply each packet of input data; that API call does the format conversion on a per-packet basis. In my case I had to write code to parse the compressed AAC data to identify the packet starts (0xfff), then use the callback to spoon-feed each packet into the API call. I am also using OpenAL for audio rendering, which has its own challenges when avoiding input files.
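To give a rough idea of what that looks like, here is a hedged sketch of the input callback, assuming the compressed packets have already been located in memory; the MemorySource struct and its fields are placeholders, not a complete AAC parser:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical bookkeeping for compressed data already downloaded into memory.
struct MemorySource {
    const UInt8                 *packetData;   // start of the next packet
    UInt32                       packetBytes;  // size of that packet (0 = no more data)
    AudioStreamPacketDescription packetDesc;   // filled in by your own packet parser
};

// Callback invoked by AudioConverterFillComplexBuffer whenever it needs more input.
static OSStatus InputDataProc(AudioConverterRef              inConverter,
                              UInt32                        *ioNumberDataPackets,
                              AudioBufferList               *ioData,
                              AudioStreamPacketDescription **outDataPacketDescription,
                              void                          *inUserData)
{
    MemorySource *src = static_cast<MemorySource *>(inUserData);

    if (src->packetBytes == 0) {                 // end of input: return no packets
        *ioNumberDataPackets = 0;
        return noErr;
    }

    // Hand the converter exactly one packet of compressed data from memory.
    *ioNumberDataPackets                = 1;
    ioData->mBuffers[0].mData           = const_cast<UInt8 *>(src->packetData);
    ioData->mBuffers[0].mDataByteSize   = src->packetBytes;
    ioData->mBuffers[0].mNumberChannels = 1;     // assumed source channel count

    if (outDataPacketDescription)
        *outDataPacketDescription = &src->packetDesc;

    src->packetBytes = 0;   // your parsing code would advance to the next packet here
    return noErr;
}

The converter itself comes from AudioConverterNew() with the source and destination AudioStreamBasicDescriptions; each call to AudioConverterFillComplexBuffer() then pulls packets through this callback and writes LPCM into the output AudioBufferList you supply.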
I'm making a MIDI application with Adobe AIR, and in order to play dynamic, real-time(ish) output I would like either:
a program which accepts raw MIDI data on standard input and plays it,
or
an API which I can use to play raw MIDI byte data.
I've looked at the Java Sound API and others, but they don't seem to have functionality for playing a raw stream of bytes. If anyone knows of anything similar which accepts input in a format other than MIDI, I'd like to hear about it, because I can easily translate my MIDI data if needed! Thanks in advance.