Load .wav file for OpenAL in Cocoa

I need to load sound files to a Cocoa-based OpenAL app.
Progress:
The OpenAL utility function alutLoadWAVFile has been deprecated, and the alut header is no longer included in the Mac OS X SDKs. According to the TechNotes, the actual code is still there for binary compatibility. However, if I add a declaration for the function myself, the code compiles but the linker aborts, complaining that the symbol for alutLoadWAVFile could not be found. (I am linking against OpenAL.framework.)
Yet Apple's OpenAL sample code still uses this symbol. When I clean the sample code project, it compiles and links just fine, even though there is no declaration of the function anywhere to be found. (Side question: how can it build and link, then?)
So I found some code by George Warner at Apple containing replacement functions for alutCreateBufferFromFile and alutLoadMemoryFromFile. Although capable of creating an OpenAL buffer directly from almost any kind of audio file, the code appears to support only 8-bit mono sound files. 16-bit stereo or mono 44 kHz files result in nasty hissing and clipping. (The files are fine; QuickTime plays them without problems.)
Thus, my question: can someone please point me to some .wav loading code/help for Cocoa/Carbon suitable for use with an OpenAL buffer? Thank you.

I'm sure you've solved this already, but for people who find this via Google, here's some barely tested WAV loading code. It works, but you'd better double-check for memory leaks and the like before using it for anything real.
static bool LoadWAVFile(const char* filename, ALenum* format, ALvoid** data, ALsizei* size, ALsizei* freq, Float64* estimatedDurationOut)
{
CFStringRef filenameStr = CFStringCreateWithCString( NULL, filename, kCFStringEncodingUTF8 );
CFURLRef url = CFURLCreateWithFileSystemPath( NULL, filenameStr, kCFURLPOSIXPathStyle, false );
CFRelease( filenameStr );
AudioFileID audioFile;
OSStatus error = AudioFileOpenURL( url, kAudioFileReadPermission, kAudioFileWAVEType, &audioFile );
CFRelease( url );
if ( error != noErr )
{
fprintf( stderr, "Error opening audio file. %d\n", error );
return false;
}
AudioStreamBasicDescription basicDescription;
UInt32 propertySize = sizeof(basicDescription);
error = AudioFileGetProperty( audioFile, kAudioFilePropertyDataFormat, &propertySize, &basicDescription );
if ( error != noErr )
{
fprintf( stderr, "Error reading audio file basic description. %d\n", error );
AudioFileClose( audioFile );
return false;
}
if ( basicDescription.mFormatID != kAudioFormatLinearPCM )
{
// Need PCM for Open AL. WAVs are (I believe) by definition PCM, so this check isn't necessary. It's just here
// in case I ever use this with another audio format.
fprintf( stderr, "Audio file is not linear-PCM. %d\n", basicDescription.mFormatID );
AudioFileClose( audioFile );
return false;
}
UInt64 audioDataByteCount = 0;
propertySize = sizeof(audioDataByteCount);
error = AudioFileGetProperty( audioFile, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount );
if ( error != noErr )
{
fprintf( stderr, "Error reading audio file byte count. %d\n", error );
AudioFileClose( audioFile );
return false;
}
Float64 estimatedDuration = 0;
propertySize = sizeof(estimatedDuration);
error = AudioFileGetProperty( audioFile, kAudioFilePropertyEstimatedDuration, &propertySize, &estimatedDuration );
if ( error != noErr )
{
fprintf( stderr, "Error reading estimated duration of audio file. %d\n", error );
AudioFileClose( audioFile );
return false;
}
ALenum alFormat = 0;
if ( basicDescription.mChannelsPerFrame == 1 )
{
if ( basicDescription.mBitsPerChannel == 8 )
alFormat = AL_FORMAT_MONO8;
else if ( basicDescription.mBitsPerChannel == 16 )
alFormat = AL_FORMAT_MONO16;
else
{
fprintf( stderr, "Expected 8 or 16 bits for the mono channel but got %d\n", basicDescription.mBitsPerChannel );
AudioFileClose( audioFile );
return false;
}
}
else if ( basicDescription.mChannelsPerFrame == 2 )
{
if ( basicDescription.mBitsPerChannel == 8 )
alFormat = AL_FORMAT_STEREO8;
else if ( basicDescription.mBitsPerChannel == 16 )
alFormat = AL_FORMAT_STEREO16;
else
{
fprintf( stderr, "Expected 8 or 16 bits per channel but got %d\n", basicDescription.mBitsPerChannel );
AudioFileClose( audioFile );
return false;
}
}
else
{
fprintf( stderr, "Expected 1 or 2 channels in audio file but got %d\n", basicDescription.mChannelsPerFrame );
AudioFileClose( audioFile );
return false;
}
UInt32 numBytesToRead = audioDataByteCount;
void* buffer = malloc( numBytesToRead );
if ( buffer == NULL )
{
fprintf( stderr, "Error allocating buffer for audio data of size %u\n", numBytesToRead );
AudioFileClose( audioFile );
return false;
}
error = AudioFileReadBytes( audioFile, false, 0, &numBytesToRead, buffer );
AudioFileClose( audioFile );
if ( error != noErr )
{
fprintf( stderr, "Error reading audio bytes. %d\n", error );
free(buffer);
return false;
}
if ( numBytesToRead != audioDataByteCount )
{
fprintf( stderr, "Tried to read %lld bytes from the audio file but only got %d bytes\n", audioDataByteCount, numBytesToRead );
free(buffer);
return false;
}
*freq = basicDescription.mSampleRate;
*size = audioDataByteCount;
*format = alFormat;
*data = buffer;
*estimatedDurationOut = estimatedDuration;
return true;
}
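For reference, handing the result to OpenAL could look roughly like this (an untested sketch with error handling omitted; alBufferData copies the samples, so the malloc'd block returned by LoadWAVFile is freed afterwards):

ALenum format;
ALvoid *data;
ALsizei size;
ALsizei freq;
Float64 duration;
if ( LoadWAVFile( "sound.wav", &format, &data, &size, &freq, &duration ) )
{
    ALuint buffer;
    alGenBuffers( 1, &buffer );
    // alBufferData copies the samples into OpenAL's own storage,
    // so the malloc'd block can be released right away.
    alBufferData( buffer, format, data, size, freq );
    free( data );
    // ...attach the buffer to a source, e.g. alSourcei( source, AL_BUFFER, buffer )...
}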

Use the AudioFileReadBytes function from Audio File Services (part of the AudioToolbox framework). Examples can be found in the Finch sound engine; see the Sound+IO category.

This may be an obvious suggestion, but since you didn't mention it: have you tried the library at http://www.openal.org/ as suggested in Apple's technote?
As for how the sample code links and builds, it's not finding a prototype (if you turn on -Wall, you'll get an implicit function declaration warning), but OpenAL.framework--at least in the SDK they are using in the sample project--does in fact export _alutLoadWAVFile, which you can check with nm. What's the exact link error you get, and what SDK are you using?

Related

VoiceProcessingIO Audio Unit adds an unexpected input stream to Built-in output device (macOS)

I work on a VoIP app on macOS and use the VoiceProcessingIO Audio Unit for audio processing such as echo cancellation and automatic gain control.
The problem is that when I initialize the audio unit, the list of Core Audio devices changes: not only is a new aggregate device added (which the VP audio unit uses for its own needs), but the built-in output device (e.g. "Built-in MacBook Pro Speakers") now also appears as an input device, i.e. it has an unexpected input stream in addition to its output streams.
This is a list of INPUT devices (aka "microphones") I get from Core Audio before initialising my VP AU:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
This is the same list when my VP AU is initialised:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
DEVICE: INPUT 86 BuiltInSpeakerDevice /// WHY?
DEVICE: INPUT 98 VPAUAggregateAudioDevice-0x101046040
This is very frustrating because I need to display a list of devices in the app, and even though I can filter aggregate devices out of the device list outright (they are not usable with the VP AU anyway), I cannot exclude the built-in MacBook speaker device.
Maybe someone has already been through this and has a clue what's going on and whether it can be fixed, perhaps some kAudioObjectPropertyXX I need to watch in order to exclude the device from the input list. Of course this might be a bug/feature on Apple's side and I simply have to hack my way around it.
The VP AU works well, and the problem reproduces regardless of the devices used (I tried built-in as well as external/USB/Bluetooth devices alike). It reproduces on every macOS version I could test, from 10.13 through 11.0, on different Macs and with different sets of audio devices connected. I find it curious that there is next to zero information available on this problem, which makes me think I did something wrong.
One more strange thing: while the VP AU is working, the HALLab app indicates something different: the built-in input gains two more input streams (fine, I could live with it if that were all!), but it does not show the built-in output gaining input streams, as it does in my app.
Here is extract from cpp code on how I setup VP Audio Unit:
#define MAX_FRAMES_PER_CALLBACK 1024
AudioComponentInstance AvHwVoIP::getComponentInstance(OSType type, OSType subType) {
AudioComponentDescription desc = {0};
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentSubType = subType;
desc.componentType = type;
AudioComponent ioComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance unit;
OSStatus status = AudioComponentInstanceNew(ioComponent, &unit);
if (status != noErr) {
printf("Error: %d\n", status);
}
return unit;
}
void AvHwVoIP::enableIO(uint32_t enableIO, AudioUnit auDev) {
UInt32 no = 0;
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1,
&enableIO,
sizeof(enableIO));
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0,
&enableIO,
sizeof(enableIO));
}
void AvHwVoIP::setDeviceAsCurrent(AudioUnit auDev, AudioUnitElement element, AudioObjectID devId) {
//Set the Current Device to the AUHAL.
//this should be done only after IO has been enabled on the AUHAL.
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_CurrentDevice,
element == 0 ? kAudioUnitScope_Output : kAudioUnitScope_Input,
element,
&devId,
sizeof(AudioDeviceID));
}
void AvHwVoIP::setAudioUnitProperty(AudioUnit auDev,
AudioUnitPropertyID inID,
AudioUnitScope inScope,
AudioUnitElement inElement,
const void* __nullable inData,
uint32_t inDataSize) {
OSStatus status = AudioUnitSetProperty(auDev, inID, inScope, inElement, inData, inDataSize);
if (noErr != status) {
std::cout << "****** ::setAudioUnitProperty failed" << std::endl;
}
}
void AvHwVoIP::start() {
m_auVoiceProcesing = getComponentInstance(kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO);
enableIO(1, m_auVoiceProcesing);
m_format_description = SetAudioUnitStreamFormatFloat(m_auVoiceProcesing);
SetAudioUnitCallbacks(m_auVoiceProcesing);
setDeviceAsCurrent(m_auVoiceProcesing, 0, m_renderDeviceID);//output device AudioDeviceID here
setDeviceAsCurrent(m_auVoiceProcesing, 1, m_capDeviceID);//input device AudioDeviceID here
setInputLevelListener();
setVPEnabled(true);
setAGCEnabled(true);
UInt32 maximumFramesPerSlice = 0;
UInt32 size = sizeof(maximumFramesPerSlice);
OSStatus s1 = AudioUnitGetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, &size);
printf("max frames per callback: %d\n", maximumFramesPerSlice);
maximumFramesPerSlice = MAX_FRAMES_PER_CALLBACK;
s1 = AudioUnitSetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, size);
OSStatus status = AudioUnitInitialize(m_auVoiceProcesing);
if (noErr != status) {
printf("*** error AU initialize: %d", status);
}
status = AudioOutputUnitStart(m_auVoiceProcesing);
if (noErr != status) {
printf("*** AU start error: %d", status);
}
}
And Here is how I get my list of devices:
//does this device have input/output streams?
bool hasStreamsForCategory(AudioObjectID devId, bool input)
{
const AudioObjectPropertyScope scope = (input == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
AudioObjectPropertyAddress propertyAddress{kAudioDevicePropertyStreams, scope, kAudioObjectPropertyElementWildcard};
uint32_t dataSize = 0;
OSStatus status = AudioObjectGetPropertyDataSize(devId,
&propertyAddress,
0,
NULL,
&dataSize);
if (noErr != status)
printf("%s: Error in AudioObjectGetPropertyDataSize: %d \n", __FUNCTION__, status);
return (dataSize / sizeof(AudioStreamID)) > 0;
}
std::set<AudioDeviceID> scanCoreAudioDeviceUIDs(bool isInput)
{
std::set<AudioDeviceID> deviceIDs{};
// find out how many audio devices there are
AudioObjectPropertyAddress propertyAddress = {kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster};
uint32_t dataSize{0};
OSStatus err = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
if ( err != noErr )
{
printf("%s: AudioObjectGetPropertyDataSize: %d\n", __FUNCTION__, dataSize);
return deviceIDs;//empty
}
// calculate the number of devices available
uint32_t devicesAvailable = dataSize / sizeof(AudioObjectID);
if ( devicesAvailable < 1 )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
AudioObjectID devices[devicesAvailable];//devices to get
err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, devices);
if ( err != noErr )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
const AudioObjectPropertyScope scope = (isInput == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
for (uint32_t i = 0; i < devicesAvailable; ++i)
{
const bool hasCorrespondingStreams = hasStreamsForCategory(devices[i], isInput);
if (!hasCorrespondingStreams) {
continue;
}
printf("DEVICE: \t %s \t %d \t %s\n", isInput ? "INPUT" : "OUTPUT", devices[i], deviceUIDFromAudioDeviceID(devices[i]).c_str());
deviceIDs.insert(devices[i]);
}//end for
return deviceIDs;
}
Well, answering my own question four months later, now that Apple Feedback Assistant has responded to my request:
"There are two things you were noticing, both of which are expected and considered as implementation details of AUVP:
The speaker device has input stream - this is the reference tap stream for echo cancellation.
There is additional input stream under the built-in mic device - this is the raw mic streams enabled by AUVP.
For #1, We'd advise you to treat built-in speaker and (on certain Macs) headphone with special caution when determining whether it’s input/output device based on its input/output streams.
For #2, We'd advise you to ignore the extra streams on the device."
So they suggest doing exactly what I ended up doing: determine the built-in output device before starting the AU and simply memorize it, and ignore any extra streams that appear on built-in devices while the VP AU is operating.
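A minimal sketch of that workaround, assuming the built-in speaker (or headphone) is the system default output at the moment the check runs; the name rememberedBuiltInOutput is mine, not from the original code:

#include <CoreAudio/CoreAudio.h>

static AudioObjectID rememberedBuiltInOutput = kAudioObjectUnknown;

// Call this once, before the VP AU is initialized, so the device can later be
// excluded from the input list even if it temporarily reports input streams.
static void rememberDefaultOutputDevice(void)
{
    AudioObjectPropertyAddress addr = {kAudioHardwarePropertyDefaultOutputDevice,
                                       kAudioObjectPropertyScopeGlobal,
                                       kAudioObjectPropertyElementMaster};
    UInt32 size = sizeof(rememberedBuiltInOutput);
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                              0, NULL, &size, &rememberedBuiltInOutput);
    if (noErr != err)
        printf("%s: AudioObjectGetPropertyData failed: %d\n", __FUNCTION__, err);
}

// Later, inside scanCoreAudioDeviceUIDs(true):
//     if (devices[i] == rememberedBuiltInOutput) continue;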

x264 encoded frames into a mp4 container with ffmpeg API

I'm struggling to understand what is and what is not needed to get my already-encoded x264 frames into a video container file using FFmpeg's libavformat API.
My current program gets the x264 frames like this:
while( x264_encoder_delayed_frames( h ) )
{
printf("Writing delayed frame %u\n", delayed_frame_counter++);
i_frame_size = x264_encoder_encode( h, &nal, &i_nal, NULL, &pic_out );
if( i_frame_size < 0 ) {
printf("Failed to encode a delayed x264 frame.\n");
return ERROR;
}
else if( i_frame_size )
{
if( !fwrite(nal->p_payload, i_frame_size, 1, video_file_ptr) ) {
printf("Failed to write a delayed x264 frame.\n");
return ERROR;
}
}
}
If I use the ffmpeg command-line binary, I can put these frames into a container using:
ffmpeg -i "raw_frames.h264" -c:v copy -f mp4 "video.mp4"
I would like to code this functionality into my program using the libavformat API, though. I'm a little stuck on the concepts and the order in which each FFmpeg function needs to be called.
So far I have written:
mAVOutputFormat = av_guess_format("gen_vid.mp4", NULL, NULL);
printf("Guessed format\n");
int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
printf("Created context = %d\n", ret);
printf("Format = %s\n", mAVFormatContext->oformat->name);
mAVStream = avformat_new_stream(mAVFormatContext, 0);
if (!mAVStream) {
printf("Failed allocating output stream\n");
} else {
printf("Allocated stream.\n");
}
mAVCodecParameters = mAVStream->codecpar;
if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
printf("Invalid codec?\n");
}
if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
if (ret < 0) {
printf("Could not open output file '%s'", "gen_vid.mp4");
}
}
ret = avformat_write_header(mAVFormatContext, NULL);
if (ret < 0) {
printf("Error occurred when opening output file\n");
}
This will print out:
Guessed format
Created context = 0
Format = mp4
Allocated stream.
Invalid codec?
[mp4 @ 0x55ffcea2a2c0] Could not find tag for codec none in stream #0, codec not currently supported in container
Error occurred when opening output file
How can I make sure the codec type is set correctly for my video?
Next I need to somehow point my mAVStream at my x264 frames; advice would be great.
Update 1:
So I've tried to set up the H.264 codec so that the codec's metadata is available. I now seem to hit two new issues:
1) It cannot find a valid device and therefore cannot configure the encoder.
2) I get a "dimensions not set" error.
mAVOutputFormat = av_guess_format("gen_vid.mp4", NULL, NULL);
printf("Guessed format\n");
// MUST allocate the media file format context.
int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
printf("Created context = %d\n", ret);
printf("Format = %s\n", mAVFormatContext->oformat->name);
// Even though we already have encoded the H264 frames using x264,
// we still need the codec's meta-data.
const AVCodec *mAVCodec;
mAVCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!mAVCodec) {
fprintf(stderr, "Codec '%s' not found\n", "H264");
exit(1);
}
mAVCodecContext = avcodec_alloc_context3(mAVCodec);
if (!mAVCodecContext) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(1);
}
printf("Codec context allocated with defaults.\n");
/* put sample parameters */
mAVCodecContext->bit_rate = 400000;
mAVCodecContext->width = width;
mAVCodecContext->height = height;
mAVCodecContext->time_base = (AVRational){1, 30};
mAVCodecContext->framerate = (AVRational){30, 1};
mAVCodecContext->gop_size = 10;
mAVCodecContext->level = 31;
mAVCodecContext->max_b_frames = 1;
mAVCodecContext->pix_fmt = AV_PIX_FMT_NV12;
av_opt_set(mAVCodecContext->priv_data, "preset", "slow", 0);
printf("Set codec parameters.\n");
// Initialize the AVCodecContext to use the given AVCodec.
avcodec_open2(mAVCodecContext, mAVCodec, NULL);
// Add a new stream to a media file. Must be called before
// calling avformat_write_header().
mAVStream = avformat_new_stream(mAVFormatContext, mAVCodec);
if (!mAVStream) {
printf("Failed allocating output stream\n");
} else {
printf("Allocated stream.\n");
}
// TODO How should codecpar be set?
mAVCodecParameters = mAVStream->codecpar;
if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
printf("Invalid codec?\n");
}
if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
if (ret < 0) {
printf("Could not open output file '%s'", "gen_vid.mp4");
}
}
printf("Called avio_open()\n");
// MUST write a header.
ret = avformat_write_header(mAVFormatContext, NULL);
if (ret < 0) {
printf("Error occurred when opening output file (writing header).\n");
}
Now I am getting this output:
Guessed format
Created context = 0
Format = mp4
Codec context allocated with defaults.
Set codec parameters.
[h264_v4l2m2m @ 0x556460344b40] Could not find a valid device
[h264_v4l2m2m @ 0x556460344b40] can't configure encoder
Allocated stream.
Invalid codec?
Called avio_open()
[mp4 @ 0x5564603442c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[mp4 @ 0x5564603442c0] dimensions not set
Error occurred when opening output file (writing header).

FFT of samples from portAudio stream

Beginner here (OS X 10.9.5, Xcode 6).
I have a PortAudio stream that outputs noise. Now I'd like to take the random values generated in the callback and run them through an FFTW plan. As far as I know, FFTW needs to be executed in main, so how can I get the numbers from the callback over to main? I have a feeling it has something to do with pointers, but that's a very uneducated guess...
I'm having some difficulty joining the two libraries. A little help would be greatly appreciated, thank you!
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "portaudio.h"
#include "fftw3.h"
#define NUM_SECONDS (1)
#define SAMPLE_RATE (44100)
typedef struct
{
float left_phase;
float right_phase;
}
paTestData;
static int patestCallback( const void *inputBuffer, void *outputBuffer,
unsigned long framesPerBuffer,
const PaStreamCallbackTimeInfo* timeInfo,
PaStreamCallbackFlags statusFlags,
void *userData )
{
/* Cast data passed through stream to our structure. */
paTestData *data = (paTestData*)userData;
float *out = (float*)outputBuffer;
unsigned int i;
(void) inputBuffer; /* Prevent unused variable warning. */
for( i=0; i<framesPerBuffer; i++ )
{
*out++ = data->left_phase; /* left */
*out++ = data->right_phase; /* right */
/* Generate random value that ranges between -1.0 and 1.0. */
data->left_phase = (((float)rand()/(float)(RAND_MAX)) * 2) - 1 ;
data->right_phase = (((float)rand()/(float)(RAND_MAX)) * 2) - 1 ;
printf("%f, %f\n", data->left_phase, data->right_phase);
}
return 0;
}
/*******************************************************************/
static paTestData data;
int main(void);
int main(void)
{
PaStream *stream;
PaError err;
printf("PortAudio Test: output noise.\n");
/* Initialize our data for use by callback. */
data.left_phase = data.right_phase = 0.0;
/* Initialize library before making any other calls. */
err = Pa_Initialize();
if( err != paNoError ) goto error;
/* Open an audio I/O stream. */
err = Pa_OpenDefaultStream( &stream,
0, /* no input channels */
2, /* stereo output */
paFloat32, /* 32 bit floating point output */
SAMPLE_RATE,
512, /* frames per buffer */
patestCallback,
&data );
if( err != paNoError ) goto error;
err = Pa_StartStream( stream );
if( err != paNoError ) goto error;
/* Sleep for several seconds. */
Pa_Sleep(NUM_SECONDS*1000);
err = Pa_StopStream( stream );
if( err != paNoError ) goto error;
err = Pa_CloseStream( stream );
if( err != paNoError ) goto error;
Pa_Terminate();
printf("Test finished.\n");
return err;
error:
Pa_Terminate();
fprintf( stderr, "An error occured while using the portaudio stream\n" );
fprintf( stderr, "Error number: %d\n", err );
fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) );
return err;
}
You could try running the stream in "blocking write" mode instead of using the callback. To use this mode, you pass NULL for the streamCallback parameter of Pa_OpenDefaultStream and then you continually call Pa_WriteStream in a loop. The call will block as necessary. Something like this pseudo code:
Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, SAMPLE_RATE, 512, NULL, NULL);
Pa_StartStream(stream);
float interleavedSamples[2*512];
for (int i = 0 ; i < SAMPLE_RATE/512 ; i++) // approx 1 second
{
GenerateNoise(interleavedSamples, 2, 512, &data);
RunFft(interleavedSamples, ...);
Pa_WriteStream(stream, interleavedSamples, 512); // blocks as needed
}
}
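To make the idea concrete, here is a self-contained sketch of the blocking-write approach that also pushes each buffer through FFTW. This is my own untested illustration: it inlines the noise generation from the question's callback, analyses only the left channel, and assumes the double-precision fftw_* API.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "portaudio.h"
#include "fftw3.h"

#define SAMPLE_RATE (44100)
#define FRAMES_PER_BUFFER (512)

int main(void)
{
    PaStream *stream = NULL;
    float interleaved[2 * FRAMES_PER_BUFFER];   /* L/R interleaved output buffer */
    double fftIn[FRAMES_PER_BUFFER];            /* left channel only, FFT input  */
    fftw_complex *fftOut = fftw_malloc(sizeof(fftw_complex) * (FRAMES_PER_BUFFER / 2 + 1));
    fftw_plan plan = fftw_plan_dft_r2c_1d(FRAMES_PER_BUFFER, fftIn, fftOut, FFTW_ESTIMATE);

    if (Pa_Initialize() != paNoError) return 1;
    /* NULL callback => blocking read/write mode */
    if (Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, SAMPLE_RATE,
                             FRAMES_PER_BUFFER, NULL, NULL) != paNoError) return 1;
    Pa_StartStream(stream);

    for (int i = 0; i < SAMPLE_RATE / FRAMES_PER_BUFFER; i++) /* approx 1 second */
    {
        /* Generate the noise here in main instead of in a callback. */
        for (int n = 0; n < FRAMES_PER_BUFFER; n++)
        {
            float left  = (((float)rand() / (float)RAND_MAX) * 2) - 1;
            float right = (((float)rand() / (float)RAND_MAX) * 2) - 1;
            interleaved[2 * n]     = left;
            interleaved[2 * n + 1] = right;
            fftIn[n] = left;                    /* analyse the left channel */
        }

        fftw_execute(plan);                     /* fftOut now holds the spectrum */
        printf("bin 1 magnitude: %f\n",
               sqrt(fftOut[1][0] * fftOut[1][0] + fftOut[1][1] * fftOut[1][1]));

        Pa_WriteStream(stream, interleaved, FRAMES_PER_BUFFER); /* blocks as needed */
    }

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    fftw_destroy_plan(plan);
    fftw_free(fftOut);
    return 0;
}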

Not able to capture image Opencv

I am not able to capture an image: my webcam will not open on this laptop, although the same code works on another laptop. The output is "ERROR: capture is NULL".
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
// A Simple Camera Capture Framework
int main() {
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
// Create a window in which the captured images will be presented
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
// Show the image captured from the camera in the window and repeat
while ( 1 ) {
// Get one frame
IplImage* frame = cvQueryFrame( capture );
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
cvShowImage( "mywindow", frame );
// Do not release the frame!
//If ESC key pressed, Key=0x10001B under OpenCV 0.9.7(linux version),
//remove higher bits using AND operator
if ( (cvWaitKey(10) & 255) == 27 ) break;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
cvDestroyWindow( "mywindow" );
return 0;
}
You may try different numbers in place of CV_CAP_ANY.
It is also possible that your OpenCV is not installed properly; in that case you should reinstall it with libv4l, as suggested here.
There is also a slight chance that your camera is simply not compatible with OpenCV.
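As a quick check, a loop like the following (my own sketch, using the same legacy C API as the question) reports which camera indices, if any, can be opened:

#include <stdio.h>
#include "highgui.h"

int main(void) {
    // Probe a handful of camera indices; -1 and CV_CAP_ANY (0) both mean "autodetect".
    for (int index = -1; index < 4; index++) {
        CvCapture *capture = cvCaptureFromCAM(index);
        if (capture) {
            printf("Camera index %d opened successfully.\n", index);
            cvReleaseCapture(&capture);
        } else {
            printf("Camera index %d could not be opened.\n", index);
        }
    }
    return 0;
}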
Try passing -1 instead of CV_CAP_ANY.

Audio Unit and Writing to file

I'm creating a real-time audio sequencer app on OS X.
The real-time synth part is implemented using an AURenderCallback.
Now I'm writing a function to save the rendered result to a WAV file (44100 Hz, 16-bit, stereo).
The format used by the render-callback function is 44100 Hz, 32-bit float, stereo, interleaved.
I'm using ExtAudioFileWrite to write to the file.
But ExtAudioFileWrite returns error code 1768846202.
I searched for 1768846202 but couldn't find any information.
Would you give me some hints?
Thank you.
Here is the code.
outFileFormat.mSampleRate = 44100;
outFileFormat.mFormatID = kAudioFormatLinearPCM;
outFileFormat.mFormatFlags =
kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFileFormat.mBitsPerChannel = 16;
outFileFormat.mChannelsPerFrame = 2;
outFileFormat.mFramesPerPacket = 1;
outFileFormat.mBytesPerFrame =
outFileFormat.mBitsPerChannel / 8 * outFileFormat.mChannelsPerFrame;
outFileFormat.mBytesPerPacket =
outFileFormat.mBytesPerFrame * outFileFormat.mFramesPerPacket;
AudioBufferList *ioList;
ioList = (AudioBufferList*)calloc(1, sizeof(AudioBufferList)
+ 2 * sizeof(AudioBuffer));
ioList->mNumberBuffers = 2;
ioList->mBuffers[0].mNumberChannels = 1;
ioList->mBuffers[0].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[0].mData = ioDataL;
ioList->mBuffers[1].mNumberChannels = 1;
ioList->mBuffers[1].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[1].mData = ioDataR;
...
while (1) {
//Fill buffer by using render callback func.
RenderCallback(self, nil, nil, 0, frames, ioList);
//i want to create one sec file.
if (renderedFrames >= 44100) break;
err = ExtAudioFileWrite(outAudioFileRef, frames , ioList);
if (err != noErr){
NSLog(#"ERROR AT WRITING TO FILE");
goto errorExit;
}
}
Some of the error codes are actually four-character codes. The Core Audio book provides a nice function for handling errors.
static void CheckError(OSStatus error, const char *operation)
{
if (error == noErr) return;
char str[20];
// see if it appears to be a 4-char-code
*(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
str[0] = str[5] = '\'';
str[6] = '\0';
} else
// no, format it as an integer
sprintf(str, "%d", (int)error);
fprintf(stderr, "Error: %s (%s)\n", operation, str);
exit(1);
}
Use it like this:
CheckError(ExtAudioFileSetProperty(outputFile,
kExtAudioFileProperty_CodecManufacturer,
sizeof(codec),
&codec), "Setting codec.");
Before you can do any sort of debugging, you probably need to figure out what that error code actually means. Have you tried passing the status code to GetMacOSStatusErrorString() or GetMacOSStatusCommentString()? They aren't documented especially well, but they are declared in CoreServices/CarbonCore/Debugging.h.
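For example (a rough sketch reusing the question's variables; these are legacy Carbon routines and may be flagged as deprecated in newer SDKs):

#include <CoreServices/CoreServices.h>

OSStatus err = ExtAudioFileWrite(outAudioFileRef, frames, ioList);
if (err != noErr) {
    fprintf(stderr, "ExtAudioFileWrite failed: %s (%s)\n",
            GetMacOSStatusErrorString(err),
            GetMacOSStatusCommentString(err));
}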
