Processed audio very noisy when Superpowered Reverb used with Audio Graph - macOS

I'm using the Superpowered Reverb effect with an Audio Graph on OS X.
I do this by calling reverb->process in the render callback of an output audio unit (tested with kAudioUnitSubType_SystemOutput and kAudioUnitSubType_DefaultOutput).
The reverb effect works, but the resulting audio is very noisy. I've tried different things (adjusting the sample rate, using extra zeroed buffers, etc.) but nothing seems to help. Is there any way to solve this? Thanks.
Simplified code:
SuperpoweredReverb* reverb;

OSStatus callback(void * inComponentStorage,
                  AudioUnitRenderActionFlags * __nullable flags,
                  const AudioTimeStamp * inTimeStamp,
                  UInt32 busNumber,
                  UInt32 framesCount,
                  AudioBufferList * ioData)
{
    for (int i = 0; i < ioData->mNumberBuffers; ++i)
    {
        if (ioData->mBuffers[i].mData)
            reverb->process(static_cast<float*>(ioData->mBuffers[i].mData),
                            static_cast<float*>(ioData->mBuffers[i].mData),
                            framesCount);
    }
    return noErr;
}

void setupReverb(unsigned int sampleRate, AudioUnit unit)
{
    reverb = new SuperpoweredReverb(sampleRate);
    reverb->enable(true);
    reverb->setMix(0.5);
    AudioUnitAddRenderNotify(unit, callback, nullptr);
}

It turns out that in the audio graph the callback gets called multiple times per render cycle, even for the same channel. I made the following changes (using an integer to track the current channel) and it works well now. (Below is again simplified code.)
SuperpoweredReverb* reverbUnit;
int spliter = 0;

OSStatus callback(void * inComponentStorage,
                  AudioUnitRenderActionFlags * __nullable flags,
                  const AudioTimeStamp * inTimeStamp,
                  UInt32 busNumber,
                  UInt32 framesCount,
                  AudioBufferList * ioData)
{
    spliter++;
    for (int i = 0; i < ioData->mNumberBuffers; ++i)
    {
        if (ioData->mBuffers[i].mData) {
            if (!(spliter % ioData->mBuffers[i].mNumberChannels))
                reverbUnit->process(static_cast<float*>(ioData->mBuffers[i].mData),
                                    static_cast<float*>(ioData->mBuffers[i].mData),
                                    framesCount);
        }
    }
    return noErr;
}

void setupReverb(unsigned int sampleRate, AudioUnit unit)
{
    reverbUnit = new SuperpoweredReverb(sampleRate);
    reverbUnit->enable(true);
    reverbUnit->setWet(0.7);
    AudioUnitAddRenderNotify(unit, callback, nullptr);
}

Related

Custom AVAudioUnit with AVAudioEngine crashes on set provider block

I have the following code block that should create an AVAudioNode that produces a sine wave; however, it crashes on the marked line with -[AUAudioUnitV2Bridge setOutputProvider:]: unrecognized selector sent to instance.
Any ideas what might be wrong?
// TEST
AudioComponentDescription mixerDesc;
mixerDesc.componentType = kAudioUnitType_Generator;
mixerDesc.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
mixerDesc.componentFlags = 0;
mixerDesc.componentFlagsMask = 0;

[AVAudioUnit instantiateWithComponentDescription:mixerDesc options:kAudioComponentInstantiation_LoadInProcess completionHandler:^(__kindof AVAudioUnit * _Nullable audioUnit, NSError * _Nullable error) {
    NSLog(@"here");
    // Crashes here
    audioUnit.AUAudioUnit.outputProvider = ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags, const AudioTimeStamp *timestamp, AUAudioFrameCount frameCount, NSInteger inputBusNumber, AudioBufferList *inputData)
    {
        const double amplitude = 0.2;
        static double theta = 0.0;
        double theta_increment = 2.0 * M_PI * 880.0 / 44100.0;
        const int channel = 0;
        Float32 *buffer = (Float32 *)inputData->mBuffers[channel].mData;
        memset(inputData->mBuffers[channel].mData, 0, inputData->mBuffers[channel].mDataByteSize);
        memset(inputData->mBuffers[1].mData, 0, inputData->mBuffers[1].mDataByteSize);
        // Generate the samples
        for (UInt32 frame = 0; frame < frameCount; frame++)
        {
            buffer[frame] = sin(theta) * amplitude;
            theta += theta_increment;
            if (theta >= 2.0 * M_PI)
            {
                theta -= 2.0 * M_PI;
            }
        }
        return noErr;
    };
}];
UPDATE
It feels like something like this would be the right approach, but alas, status returns errors and the callback is never called:
AVAudioUnitGenerator *unit = [[AVAudioUnitGenerator alloc] init];
// TEST
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Generator;
desc.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;

[AVAudioUnit instantiateWithComponentDescription:desc options:kAudioComponentInstantiation_LoadInProcess completionHandler:^(__kindof AVAudioUnit * _Nullable audioUnit, NSError * _Nullable error) {
    NSLog(@"here");
    self.audioNode = audioUnit;

    OSStatus status;
    AudioUnit au = audioUnit.audioUnit; // the audioUnit property is an AudioUnit, not an AudioUnit*

    UInt32 numBuses = 1;
    status = AudioUnitSetProperty(au, kAudioUnitProperty_ElementCount, kAudioUnitScope_Output, 0, &numBuses, sizeof(UInt32));
    NSLog(@"status: %d", (int)status);

    AURenderCallbackStruct rcbs;
    rcbs.inputProc = callback2;
    rcbs.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &rcbs, sizeof(rcbs));
    NSLog(@"status: %d", (int)status);
}];
Exactly what the error message says. After instantiating your requested type of audioUnit and making sure the AUAudioUnit isn't nil, this code makes it explicit:
if (![audioUnit.AUAudioUnit respondsToSelector:@selector(outputProvider)]) {
    NSLog(@"selector method not found in this instance of an AUAudioUnit");
}
The header file does not claim that this method is absolutely required to be implemented in all audio units.
Added later: If you just want to have your code synthesize samples for an AVAudioUnit, and don't care about the specific API or code structure, then:
Get the AudioUnit property of your AVAudioUnit instance; create an AURenderCallback function; fill in an AURenderCallbackStruct with this C function; then do an AudioUnitSetProperty for kAudioUnitProperty_SetRenderCallback on the AudioUnit. This does not seem to crash even when no outputProvider method is available; a sketch of this approach follows.
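A minimal sketch of that approach, assuming the underlying AudioUnit has already been obtained from the AVAudioUnit's audioUnit property inside the instantiation completion handler. The renderTone callback, the 440 Hz tone, and the 44.1 kHz deinterleaved float format are illustrative assumptions, not code from the original answer:

#include <AudioUnit/AudioUnit.h>
#include <math.h>

// Hypothetical render callback: fills each output buffer with a sine tone.
static OSStatus renderTone(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    static double theta = 0.0;
    const double increment = 2.0 * M_PI * 440.0 / 44100.0; // assumes a 44.1 kHz float stream
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        float sample = (float)(0.2 * sin(theta));
        theta += increment;
        if (theta >= 2.0 * M_PI) theta -= 2.0 * M_PI;
        for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++)
            ((float *)ioData->mBuffers[buf].mData)[frame] = sample;
    }
    return noErr;
}

// au is the underlying AudioUnit obtained from the AVAudioUnit's audioUnit property.
// Scope and bus shown here are the usual choice for feeding data into a unit;
// adjust them for your particular graph.
AURenderCallbackStruct rcbs;
rcbs.inputProc = renderTone;
rcbs.inputProcRefCon = NULL;
AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0, &rcbs, sizeof(rcbs));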

How can I use glReadPixels without hWnd?

I succeeded in reading pixels from the DC that OpenGL rendered to.
Here is a part of my code.
void COpenGLWnd::OffscreenRender
    (/* IN parameters  */ int transitionID, int counts, int directionID,
     /* OUT parameters */ BYTE* pPixelArray)
{
    HDC hDC = ::GetDC(m_hWnd);
    SetDCPixelFormat(hDC);
    HGLRC hRC = wglCreateContext(hDC);
    VERIFY(wglMakeCurrent(hDC, hRC));
    for (int i = 0; i < counts; i++)
    {
        GLFadeinRender((GLfloat)(i+1) / (GLfloat)counts);
        BYTE* data = (BYTE*) malloc(m_BmpSize);
        glReadBuffer(GL_BACK);
        glReadPixels(0, 0, m_BmpWidth, m_BmpHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
        memcpy(&pPixelArray[i * m_BmpSize], data, m_BmpSize);
        free(data);
    }
    wglMakeCurrent(hDC, NULL);
    wglDeleteContext(hRC);
    ::ReleaseDC(m_hWnd, hDC);
}
It works very well, but I want to use glReadPixels without an hWnd. I heard that a PBO or FBO may help in this situation, but I cannot make it work. Here is the code where I tried to render with a PBO.
void COpenGLWnd::OffscreenRender
    (/* IN parameters  */ int transitionID, int counts, int directionID,
     /* OUT parameters */ BYTE* pPixelArray)
{
    GLuint pbo;
    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, m_BmpSize, NULL, GL_STREAM_READ_ARB);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
    for (int i = 0; i < counts; i++)
    {
        glReadBuffer(GL_BACK);
        glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
        GLFadeinRender((GLfloat)(i+1) / (GLfloat)counts);
        glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
        //glReadBuffer(GL_BACK);
        glReadPixels(0, 0, m_BmpWidth, m_BmpHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, 0);
        BYTE* data = (BYTE*) glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
        memcpy(&pPixelArray[i * m_BmpSize], data, m_BmpSize);
        if (data)
        {
            glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
        }
        glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
    }
    glDeleteBuffers(1, &pbo);
}
But nothing happens at glReadPixels; no pixels are read.
So, can I render off-screen and read pixels back without using an hWnd?

Muxing with libav

I have a program which is supposed to demux an input MPEG-TS, transcode the MPEG-2 video into H.264, and then mux the audio alongside the transcoded video. When I open the resulting muxed file with VLC, I get neither audio nor video. Here is the relevant code.
My main worker loop is as follows:
void *
writer_thread(void *thread_ctx) {
    struct transcoder_ctx_t *ctx = (struct transcoder_ctx_t *) thread_ctx;
    AVStream *video_stream = NULL, *audio_stream = NULL;
    AVFormatContext *output_context = init_output_context(ctx, &video_stream, &audio_stream);
    struct mux_state_t mux_state = {0};

    //from omxtx
    mux_state.pts_offset = av_rescale_q(ctx->input_context->start_time, AV_TIME_BASE_Q, output_context->streams[ctx->video_stream_index]->time_base);

    //write stream header if any
    avformat_write_header(output_context, NULL);

    //do not start doing anything until we get an encoded packet
    pthread_mutex_lock(&ctx->pipeline.video_encode.is_running_mutex);
    while (!ctx->pipeline.video_encode.is_running) {
        pthread_cond_wait(&ctx->pipeline.video_encode.is_running_cv, &ctx->pipeline.video_encode.is_running_mutex);
    }

    while (!ctx->pipeline.video_encode.eos || !ctx->processed_audio_queue->queue_finished) {
        //FIXME a memory barrier is required here so that we don't race
        //on above variables
        //fill a buffer with video data
        OERR(OMX_FillThisBuffer(ctx->pipeline.video_encode.h, omx_get_next_output_buffer(&ctx->pipeline.video_encode)));
        write_audio_frame(output_context, audio_stream, ctx); //write full audio frame
        //FIXME no guarantee that we have a full frame per packet?
        write_video_frame(output_context, video_stream, ctx, &mux_state); //write full video frame
        //encoded_video_queue is being filled by the previous command
    }

    av_write_trailer(output_context);

    //free all the resources
    avcodec_close(video_stream->codec);
    avcodec_close(audio_stream->codec);
    /* Free the streams. */
    for (int i = 0; i < output_context->nb_streams; i++) {
        av_freep(&output_context->streams[i]->codec);
        av_freep(&output_context->streams[i]);
    }
    if (!(output_context->oformat->flags & AVFMT_NOFILE)) {
        /* Close the output file. */
        avio_close(output_context->pb);
    }
    /* free the stream */
    av_free(output_context);
    free(mux_state.pps);
    free(mux_state.sps);
}
The code for initialising libav output context is this:
static AVFormatContext *
init_output_context(const struct transcoder_ctx_t *ctx, AVStream **video_stream, AVStream **audio_stream) {
    AVFormatContext *oc;
    AVOutputFormat *fmt;
    AVStream *input_stream, *output_stream;
    AVCodec *c;
    AVCodecContext *cc;
    int audio_copied = 0; //copy just 1 stream

    fmt = av_guess_format("mpegts", NULL, NULL);
    if (!fmt) {
        fprintf(stderr, "[DEBUG] Error guessing format, dying\n");
        exit(199);
    }

    oc = avformat_alloc_context();
    if (!oc) {
        fprintf(stderr, "[DEBUG] Error allocating context, dying\n");
        exit(200);
    }

    oc->oformat = fmt;
    snprintf(oc->filename, sizeof(oc->filename), "%s", ctx->output_filename);
    oc->debug = 1;
    oc->start_time_realtime = ctx->input_context->start_time;
    oc->start_time = ctx->input_context->start_time;
    oc->duration = 0;
    oc->bit_rate = 0;

    for (int i = 0; i < ctx->input_context->nb_streams; i++) {
        input_stream = ctx->input_context->streams[i];
        output_stream = NULL;

        if (input_stream->index == ctx->video_stream_index) {
            //copy stuff from input video index
            c = avcodec_find_encoder(CODEC_ID_H264);
            output_stream = avformat_new_stream(oc, c);
            *video_stream = output_stream;
            cc = output_stream->codec;
            cc->width = input_stream->codec->width;
            cc->height = input_stream->codec->height;
            cc->codec_id = CODEC_ID_H264;
            cc->codec_type = AVMEDIA_TYPE_VIDEO;
            cc->bit_rate = ENCODED_BITRATE;
            cc->time_base = input_stream->codec->time_base;

            output_stream->avg_frame_rate = input_stream->avg_frame_rate;
            output_stream->r_frame_rate = input_stream->r_frame_rate;
            output_stream->start_time = AV_NOPTS_VALUE;
        } else if ((input_stream->codec->codec_type == AVMEDIA_TYPE_AUDIO) && !audio_copied) {
            /* i care only about audio */
            c = avcodec_find_encoder(input_stream->codec->codec_id);
            output_stream = avformat_new_stream(oc, c);
            *audio_stream = output_stream;
            avcodec_copy_context(output_stream->codec, input_stream->codec);
            /* Apparently fixes a crash on .mkvs with attachments: */
            av_dict_copy(&output_stream->metadata, input_stream->metadata, 0);
            /* Reset the codec tag so as not to cause problems with output format */
            output_stream->codec->codec_tag = 0;
            audio_copied = 1;
        }
    }

    for (int i = 0; i < oc->nb_streams; i++) {
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            oc->streams[i]->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        if (oc->streams[i]->codec->sample_rate == 0)
            oc->streams[i]->codec->sample_rate = 48000; /* ish */
    }

    if (!(fmt->flags & AVFMT_NOFILE)) {
        fprintf(stderr, "[DEBUG] AVFMT_NOFILE set, allocating output container\n");
        if (avio_open(&oc->pb, ctx->output_filename, AVIO_FLAG_WRITE) < 0) {
            fprintf(stderr, "[DEBUG] error creating the output context\n");
            exit(1);
        }
    }

    return oc;
}
Finally this is the code for writing audio:
static void
write_audio_frame(AVFormatContext *oc, AVStream *st, struct transcoder_ctx_t *ctx) {
    AVPacket pkt = {0}; // data and size must be 0;
    struct packet_t *source_audio;
    av_init_packet(&pkt);

    if (!(source_audio = packet_queue_get_next_item_asynch(ctx->processed_audio_queue))) {
        return;
    }

    pkt.stream_index = st->index;
    pkt.size = source_audio->data_length;
    pkt.data = source_audio->data;
    pkt.pts = source_audio->PTS;
    pkt.dts = source_audio->DTS;
    pkt.duration = source_audio->duration;
    pkt.destruct = avpacket_destruct;

    /* Write the compressed frame to the media file. */
    if (av_interleaved_write_frame(oc, &pkt) != 0) {
        fprintf(stderr, "[DEBUG] Error while writing audio frame\n");
    }

    packet_queue_free_packet(source_audio, 0);
}
A resulting mpeg4 file can be obtained from here: http://87.120.131.41/dl/mpeg4.h264
I have omitted the write_video_frame code since it is a lot more complicated, and I might be doing something wrong there as I'm doing timebase conversion etc. For audio, however, I'm doing a 1:1 copy. Each packet_t packet contains data from av_read_frame on the input MPEG-TS container. In the worst case I'd expect my audio to work and not my video, but I cannot get either of them to work. The documentation seems rather vague on doing things like this - I've tried both the libav and ffmpeg IRC channels to no avail. Any information on how I can debug the issue will be greatly appreciated.
When different containers yield different results in libav it is almost always a timebase issue. All containers have a time_base that they like, and some will accept custom values... sometimes.
You must rescale the time base before putting it in the container. Generally tinkering with the mux state struct isn't something you want to do and I think what you did there doesn't do what you think. Try printing out all of the timebases to find out what they are.
Each frame you must recalculate the PTS at least. If you do it before you call encode, the encoder will produce the proper DTS. Do the same for the audio, but generally set the DTS to AV_NOPTS_VALUE, and sometimes you can get away with setting the audio PTS to that as well. To rescale easily, use the av_rescale(...) functions.
Be careful assuming that you have MPEG-2 data in a MPEG-TS container, that is not always true.
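As a hedged illustration of the rescaling being described (the helper name and the source time base are assumptions about your pipeline, not code from the post):

#include <libavformat/avformat.h>

/* Illustrative helper (not from the post): rescale a packet's timestamps from
 * the time base they were produced in (src_tb, e.g. the encoder's time base)
 * into the output stream's time base right before muxing. */
static void rescale_packet_times(AVPacket *pkt, AVRational src_tb, AVStream *out_stream)
{
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q(pkt->pts, src_tb, out_stream->time_base);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q(pkt->dts, src_tb, out_stream->time_base);
    if (pkt->duration > 0)
        pkt->duration = av_rescale_q(pkt->duration, src_tb, out_stream->time_base);

    /* To see what you're working with, print the time bases, e.g.:
       fprintf(stderr, "stream tb = %d/%d\n",
               out_stream->time_base.num, out_stream->time_base.den); */
}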

iOS Core Audio Recording Buffers Issue

I'm trying to write a bunch of samples to a TPCircularBuffer, provided by Michael Tyson at http://atastypixel.com/blog/a-simple-fast-circular-buffer-implementation-for-audio-processing/comment-page-1/#comment-4988
I am able to play back these recorded samples in real time, something like a monitor.
However, I wish to keep the samples in the TPCircularBuffer for later playback, so I implemented two flags, rio->recording and rio->playing.
My idea was that I would set rio->recording to YES using a button, record for a while, and then stop recording by setting the flag back to NO. Theoretically, the TPCircularBuffer would then hold my recorded audio.
However, when I set rio->playing to YES in the playback callback, I simply hear jittery sound that bears no resemblance to what I recorded.
Am I using the buffer correctly? Or is this usually done another way?
Thanks.
Pier.
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    RIO *rio = (RIO*)inRefCon;
    AudioUnit rioUnit = rio->theAudioUnit;
    //ExtAudioFileRef eaf = rio->outEAF;
    AudioBufferList abl = rio->audioBufferList;

    SInt32 samples[NUMBER_OF_SAMPLES]; // A large enough size to not have to worry about buffer overrun

    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mData = &samples;
    abl.mBuffers[0].mNumberChannels = 1;
    abl.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);

    OSStatus result;
    result = AudioUnitRender(rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &abl);
    if (noErr != result) { NSLog(@"Obtain recorded samples error"); }

    // React to a recording flag, if recording, save the abl into own buffer, else ignore
    if (rio->recording)
    {
        TPCircularBufferProduceBytes(&rio->buffer, abl.mBuffers[0].mData, inNumberFrames * sizeof(SInt16));
        NSLog(@"Recording!");
    }
    else
    {
        NSLog(@"Not Recording!");
    }

    // once stop recording save the circular buffer to a temp circular buffer
    return noErr;
}
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    RIO *rio = (RIO*)inRefCon;
    int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *targetBuffer = (SInt16*)ioData->mBuffers[0].mData;

    // Pull audio from playthrough buffer
    int32_t availableBytes;

    if (rio->playing)
    {
        SInt16 *tempbuffer = TPCircularBufferTail(&rio->buffer, &availableBytes);
        memcpy(targetBuffer, tempbuffer, MIN(bytesToCopy, availableBytes));
        TPCircularBufferConsume(&rio->buffer, MIN(bytesToCopy, availableBytes));
        NSLog(@"Playing!");
    }
    else
    {
        NSLog(@"Playing silence!");
        for (int i = 0; i < ioData->mNumberBuffers; i++) {
            //get the buffer to be filled
            AudioBuffer buffer = ioData->mBuffers[i];
            UInt32 *frameBuffer = buffer.mData;
            //loop through the buffer and fill the frames
            for (int j = 0; j < inNumberFrames; j++) {
                frameBuffer[j] = 0;
            }
        }
    }
    return noErr;
}
I will answer this question myself.
Basically, the rubbish sounds were due to the TPCircularBuffer not being large enough to hold the audio. The playback callback was simply playing rubbish because the buffer no longer contained any valid audio data.
Making the TPCircularBuffer larger solved my problem. (duh!)
Pier.
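For reference, a minimal sketch of what "larger" can mean in practice. The 60-second figure and the 16-bit mono 44.1 kHz format are illustrative assumptions, not values from the original post:

#include "TPCircularBuffer.h"

// Illustrative sizing only: room for ~60 seconds of 16-bit mono audio at 44.1 kHz,
// instead of a buffer sized for just one render cycle.
int32_t seconds      = 60;
int32_t bufferLength = 44100 * sizeof(SInt16) * seconds;   // ~5.3 MB
TPCircularBufferInit(&rio->buffer, bufferLength);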

Playing an arbitrary sound on Windows?

How do I, say, play a sound with a given amplitude and a given frequency makeup (say, consisting of frequencies 2 kHz and 3 kHz) natively on Windows (32-bit and 64-bit, up to Windows 7)?
(By natively I mean without using an external library.)
I believe this needs the waveOutWrite method but I have no idea how it works.
I got something working...
#define _USE_MATH_DEFINES 1
#include <math.h>
#include <stdio.h>
#include <tchar.h>
#include <windows.h>
#include <mmreg.h>
#include <complex>
#pragma comment(lib, "Winmm.lib")
MMRESULT play(float nSeconds,
              float signal(float timeInSeconds, unsigned short channel, void *context),
              void *context = NULL, unsigned long samplesPerSecond = 48000)
{
    UINT timePeriod = 1;
    MMRESULT mmresult = MMSYSERR_NOERROR;
    WAVEFORMATEX waveFormat = {0};
    waveFormat.cbSize = 0;
    waveFormat.wFormatTag = WAVE_FORMAT_IEEE_FLOAT;
    waveFormat.nChannels = 2;
    waveFormat.nSamplesPerSec = samplesPerSecond;
    const size_t nBuffer =
        (size_t)(nSeconds * waveFormat.nChannels * waveFormat.nSamplesPerSec);
    float *buffer;
    waveFormat.wBitsPerSample = CHAR_BIT * sizeof(buffer[0]);
    waveFormat.nBlockAlign =
        waveFormat.nChannels * waveFormat.wBitsPerSample / CHAR_BIT;
    waveFormat.nAvgBytesPerSec =
        waveFormat.nSamplesPerSec * waveFormat.nBlockAlign;
    buffer = (float *)calloc(nBuffer, sizeof(*buffer));
    __try
    {
        for (size_t i = 0; i < nBuffer; i += waveFormat.nChannels)
            for (unsigned short j = 0; j < waveFormat.nChannels; j++)
                buffer[i+j] = signal((i+j) * nSeconds / nBuffer, j, context);
        HWAVEOUT hWavOut = NULL;
        mmresult = waveOutOpen(&hWavOut, WAVE_MAPPER,
                               &waveFormat, NULL, NULL, CALLBACK_NULL);
        if (mmresult == MMSYSERR_NOERROR)
        {
            __try
            {
                timeBeginPeriod(timePeriod);
                __try
                {
                    WAVEHDR hdr = {0};
                    hdr.dwBufferLength =
                        (ULONG)(nBuffer * sizeof(buffer[0]));
                    hdr.lpData = (LPSTR)&buffer[0];
                    mmresult = waveOutPrepareHeader(hWavOut,
                                                    &hdr, sizeof(hdr));
                    if (mmresult == MMSYSERR_NOERROR)
                    {
                        __try
                        {
                            ULONG start = GetTickCount();
                            mmresult =
                                waveOutWrite(hWavOut, &hdr, sizeof(hdr));
                            Sleep((ULONG)(1000 * nSeconds
                                          - (GetTickCount() - start)));
                        }
                        __finally
                        { waveOutUnprepareHeader(hWavOut, &hdr, sizeof(hdr)); }
                    }
                }
                __finally { timeEndPeriod(timePeriod); }
            }
            __finally { waveOutClose(hWavOut); }
        }
    }
    __finally { free(buffer); }
    return mmresult;
}

// Triangle wave generator
float triangle(float timeInSeconds, unsigned short channel, void *context)
{
    const float frequency = *(const float *)context;
    const float angle = (float)(frequency * 2 * M_PI * timeInSeconds);
    switch (channel)
    {
        case 0:  return (float)asin(sin(angle + 0 * M_PI / 2));
        default: return (float)asin(sin(angle + 1 * M_PI / 2));
    }
}

// Pure tone generator
float pure(float timeInSeconds, unsigned short channel, void *context)
{
    const float frequency = *(const float *)context;
    const float angle = (float)(frequency * 2 * M_PI * timeInSeconds);
    switch (channel)
    {
        case 0:  return (float)sin(angle + 0 * M_PI / 2);
        default: return (float)sin(angle + 1 * M_PI / 2);
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    float frequency = 2 * 261.626F;
    play(1, pure, &frequency);
    return 0;
}
Beep
BOOL WINAPI Beep(
    __in DWORD dwFreq,
    __in DWORD dwDuration
);
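For example (dwFreq is the frequency in hertz and dwDuration the duration in milliseconds; Beep plays a single fixed-frequency tone, so it cannot mix the 2 kHz and 3 kHz components asked about):

Beep(2000, 500);   // play a 2 kHz tone for half a second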
The waveOut functions deal with raw sound waveform data (the PCM sample data that a WAV file contains, if I recall correctly).
While this is targeted at WPF applications, the following link should prove helpful for any desktop application:
Sound Generation in WPF Applications
Beep via the PC speaker, or use DirectSound.
I can offer some snippets if you need them.
