CoreAudio AudioQueue stop issue (macOS)

I'm writing a CoreAudio-based FLAC player and ran into a nasty issue with AudioQueues.
I'm initializing my stuff like this (variables beginning with an underscore are instance variables):
_flacDecoder = FLAC__stream_decoder_new();

AudioStreamBasicDescription asbd = {
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mSampleRate = 44100,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel = 16,
    .mBytesPerPacket = 4,
    .mFramesPerPacket = 1,
    .mBytesPerFrame = 4,
    .mReserved = 0
};

AudioQueueNewOutput(&asbd, HandleOutputBuffer, (__bridge void *)(self), CFRunLoopGetCurrent(), kCFRunLoopDefaultMode, 0, &_audioQueue);

for (int i = 0; i < kNumberBuffers; ++i) {
    AudioQueueAllocateBuffer(_audioQueue, 0x10000, &_audioQueueBuffers[i]);
}

AudioQueueSetParameter(_audioQueue, kAudioQueueParam_Volume, 1.0);
16-bit stereo PCM at 44.1 kHz, a pretty basic setup. kNumberBuffers is 3, and each buffer is 0x10000 bytes.
I populate the buffers with these callbacks:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    FLACPlayer *self = (__bridge FLACPlayer *)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample / 8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    while (inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes) {
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if (FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM) {
            AudioQueueStop(self->_audioQueue, false);
            break;
        }
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static FLAC__StreamDecoderWriteStatus flacDecoderWriteCallback(const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 *const buffer[], void *client_data) {
    FLACPlayer *self = (__bridge FLACPlayer *)client_data;
    assert(frame->header.bits_per_sample == 16); // TODO
    int16_t *bufferWritePosition = (int16_t *)((uint8_t *)self->_buffer->mAudioData + self->_buffer->mAudioDataByteSize);
    for (int i = 0; i < frame->header.blocksize; i++) {
        for (int j = 0; j < frame->header.channels; j++) {
            *bufferWritePosition = (int16_t)buffer[j][i];
            bufferWritePosition++;
        }
    }
    int totalFramePayloadInBytes = frame->header.channels * frame->header.blocksize * frame->header.bits_per_sample / 8;
    self->_buffer->mAudioDataByteSize += totalFramePayloadInBytes;
    return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}

static void flacDecoderMetadataCallback(const FLAC__StreamDecoder *decoder, const FLAC__StreamMetadata *metadata, void *client_data) {
    FLACPlayer *self = (__bridge FLACPlayer *)client_data;
    if (metadata->type == FLAC__METADATA_TYPE_STREAMINFO) {
        self->_currentStreamInfo = metadata->data.stream_info;
    }
}
Basically, when the queue requests a new buffer, I fill the buffer from the FLAC__stream_decoder and then enqueue it, just like everyone else would. When libFLAC tells me I've reached the end of the file, I tell the AudioQueue to stop asynchronously, so that it finishes playing the buffers' contents first. However, instead of playing through to the end, playback stops a tiny bit before it should. If I remove this line:
AudioQueueStop(self->_audioQueue, false);
everything works fine; the audio plays end-to-end, although my queue keeps running till the end of time. If I change that line to this:
AudioQueueStop(self->_audioQueue, true);
then the playback stops immediately/synchronously, as you'd expect from Apple's documentation:
If you pass true, stopping occurs immediately (that is, synchronously). If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
My questions are:
- am I doing anything wrong?
- how can I play my audio until the end, and shut down the queue appropriately?

Of course, after struggling with this stuff for hours, I found the solution minutes after posting this question...
The problem was that the AudioQueue ignores buffers enqueued after you call AudioQueueStop(..., false). So now I'm feeding the queue like this, and everything works like a charm:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    FLACPlayer *self = (__bridge FLACPlayer *)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample / 8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    bool shouldStop = false;
    while (inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes) {
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if (FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM) {
            shouldStop = true;
            break;
        }
    }
    // Enqueue the final (partial) buffer first, *then* request the asynchronous
    // stop, so the queue still counts this buffer as part of playback.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    if (shouldStop) {
        AudioQueueStop(self->_audioQueue, false);
    }
}
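A follow-up note: if you also need to know when the asynchronous stop has actually completed (for instance, to dispose of the queue), you can observe the queue's running state. This is a minimal sketch using kAudioQueueProperty_IsRunning; the tear-down shown in the comment is just one hypothetical choice:
static void HandleIsRunningChanged(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (!isRunning) {
        // The queue has drained and stopped; it is now safe to tear down,
        // e.g. AudioQueueDispose(inAQ, false);
    }
}

// Registered once, right after AudioQueueNewOutput:
AudioQueueAddPropertyListener(_audioQueue, kAudioQueueProperty_IsRunning, HandleIsRunningChanged, (__bridge void *)(self));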

Related

How to know when MIDI file has finished with OS X MusicPlayer

I want to be able to play MIDI files that are included as resources in my app. I have a very simple function to do this, given the name of the resource (minus the .MID file extension):
MusicPlayer musicPlayer;
MusicSequence sequence;
int MusicPlaying = 0;
char TmpPath[256]; // declared here for completeness; not shown in the original

void PlayMusic(char *fname)
{
    OSStatus res = noErr;
    res = NewMusicPlayer(&musicPlayer);
    res = NewMusicSequence(&sequence);
    strcpy(TmpPath, "MUSIC/");
    strcat(TmpPath, fname);
    strcat(TmpPath, ".MID");
    NSString *iName = [NSString stringWithUTF8String:TmpPath];
    NSURL *url = [[NSBundle mainBundle] URLForResource:iName withExtension:nil];
    res = MusicSequenceFileLoad(sequence, (__bridge CFURLRef _Nonnull)(url), 0, kMusicSequenceLoadSMF_ChannelsToTracks);
    res = MusicPlayerSetSequence(musicPlayer, sequence);
    res = MusicPlayerStart(musicPlayer);
    if (res == noErr) MusicPlaying = 1;
}
This all works fine and dandy and takes very little code... the problem is that I can't figure out how to know when the MIDI file has finished playing. I've tried MusicPlayerIsPlaying() (it ALWAYS returns true, long after the file has finished). I've tried checking MusicPlayerGetTime(), but the time count keeps going after the MIDI finishes. I can't find any way to get a notification, or any other way to determine that the actual MIDI data has finished playing.
Any ideas?
Apple's PlaySequence example shows how to do this:
You have to determine the length of the sequence by getting the length of each track and keeping the longest:
UInt32 ntracks;
MusicTimeStamp sequenceLength = 0;
MusicSequenceGetTrackCount(sequence, &ntracks);
for (UInt32 i = 0; i < ntracks; ++i) {
    MusicTrack track;
    MusicTimeStamp trackLength;
    UInt32 propsize = sizeof(trackLength);
    result = MusicSequenceGetIndTrack(sequence, i, &track);
    result = MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength,
                                   &trackLength, &propsize);
    if (trackLength > sequenceLength)
        sequenceLength = trackLength;
}
Then wait until you have reached that time:
while (1) {
    usleep(2 * 1000 * 1000); // poll every two seconds
    result = MusicPlayerGetTime(player, &time);
    if (time >= sequenceLength)
        break;
}
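Note that both the track lengths and MusicPlayerGetTime() are measured in beats, so the comparison above is consistent. If you need the length in wall-clock seconds instead (say, to schedule a single timer rather than polling), the sequence can convert it; a small sketch, assuming the sequence and sequenceLength variables from the code above:
Float64 lengthInSeconds = 0;
// Converts a beat position to seconds using the sequence's tempo track.
MusicSequenceGetSecondsForBeats(sequence, sequenceLength, &lengthInSeconds);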

From Xcode 5 to Xcode 6 beta 6, sounds are now distorted on iOS 7

I have a RemoteIO application that loads and plays samples on iOS. It works fine when built with Xcode 5; I use iOS 7 as the deployment target.
My application was originally built using the AudioUnitSampleType audio format and the kAudioFormatFlagsCanonical format flags. My sample files are 16-bit/44100 Hz/mono CAF files.
Now I want to run it on iOS 8.
Building my app with its original code in Xcode 6, the app runs fine on an iOS 7 device but produces no sound on an iOS 8 device.
As AudioUnitSampleType and kAudioFormatFlagsCanonical are deprecated in iOS 8, I replaced them, after some research, with float and kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved.
Now my app runs fine on iOS 8, but the sounds are saturated on iOS 7.
Has anyone experienced this? Any help? Thanks, I am stuck here.
Pascal
P.S.: here is my sample-loading method:
#define AUDIO_SAMPLE_TYPE float
#define AUDIO_FORMAT_FLAGS kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved

- (void)load:(NSURL *)fileNameURL {
    if (frameCount > 0) {
        if (leftChannel != NULL) {
            free(leftChannel);
            leftChannel = 0;
        }
        if (rightChannel != NULL) {
            free(rightChannel);
            rightChannel = 0;
        }
    }
    soundFileURLRef = (__bridge CFURLRef)fileNameURL;

    //----------------------------------------------
    // 1. [OPEN AUDIO FILE] and associate it with the extended audio file object.
    //----------------------------------------------
    ExtAudioFileRef audioFileExtendedObject = 0;
    log_if_err(ExtAudioFileOpenURL((CFURLRef)soundFileURLRef,
                                   &audioFileExtendedObject),
               @"ExtAudioFileOpenURL failed");

    //----------------------------------------------
    // 2. [AUDIO FILE LENGTH] Get the audio file's length in frames.
    //----------------------------------------------
    UInt64 totalFramesInFile = 0;
    UInt32 frameLengthPropertySize = sizeof(totalFramesInFile);
    log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_FileLengthFrames,
                                       &frameLengthPropertySize,
                                       &totalFramesInFile),
               @"ExtAudioFileGetProperty (audio file length in frames) failed");
    frameCount = totalFramesInFile;

    //----------------------------------------------
    // 3. [AUDIO FILE FORMAT] Get the audio file's number of channels. Normally CAF.
    //----------------------------------------------
    AudioStreamBasicDescription fileAudioFormat = {0};
    UInt32 formatPropertySize = sizeof(fileAudioFormat);
    log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_FileDataFormat,
                                       &formatPropertySize,
                                       &fileAudioFormat),
               @"ExtAudioFileGetProperty (file audio format) failed");

    //----------------------------------------------
    // 4. [ALLOCATE AUDIO FILE MEMORY] Allocate memory in the soundFiles instance
    //    variable to hold the left channel, or mono, audio data
    //----------------------------------------------
    UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
    // DLog(@"fileNameURL=%@ | channelCount=%d", fileNameURL, (int)channelCount);
    if (leftChannel != NULL) {
        free(leftChannel);
        leftChannel = 0;
    }
    leftChannel = (AUDIO_SAMPLE_TYPE *)calloc(totalFramesInFile, sizeof(AUDIO_SAMPLE_TYPE));

    AudioStreamBasicDescription importFormat = {0};
    if (2 == channelCount) {
        isStereo = YES;
        if (rightChannel != NULL) {
            free(rightChannel);
            rightChannel = 0;
        }
        rightChannel = (AUDIO_SAMPLE_TYPE *)calloc(totalFramesInFile, sizeof(AUDIO_SAMPLE_TYPE));
        importFormat = stereoStreamFormat;
    } else if (1 == channelCount) {
        isStereo = NO;
        importFormat = monoStreamFormat;
    } else {
        ExtAudioFileDispose(audioFileExtendedObject);
        return;
    }

    //----------------------------------------------
    // 5. [ASSIGN THE MIXER INPUT BUS STREAM DATA FORMAT TO THE AUDIO FILE]
    //    Assign the appropriate mixer input bus stream data format to the extended audio
    //    file object. This is the format used for the audio data placed into the audio
    //    buffer in the SoundStruct data structure, which is in turn used in the
    //    inputRenderCallback callback function.
    //----------------------------------------------
    UInt32 importFormatPropertySize = (UInt32)sizeof(importFormat);
    log_if_err(ExtAudioFileSetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_ClientDataFormat,
                                       importFormatPropertySize,
                                       &importFormat),
               @"ExtAudioFileSetProperty (client data format) failed");

    //----------------------------------------------
    // 6. [SET THE AUDIOBUFFER LIST STRUCT] which has two roles:
    //
    //    1. It gives the ExtAudioFileRead function the configuration it
    //       needs to correctly provide the data to the buffer.
    //
    //    2. It points to the soundFiles[soundFile].leftChannel buffer, so
    //       that audio data obtained from disk using the ExtAudioFileRead function
    //       goes to that buffer
    //
    //    Allocate memory for the buffer list struct according to the number of
    //    channels it represents.
    //----------------------------------------------
    AudioBufferList *bufferList;
    bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
    if (NULL == bufferList) {
        NSLog(@"*** malloc failure for allocating bufferList memory");
        return;
    }

    //----------------------------------------------
    // 7. Initialize the mNumberBuffers member.
    //----------------------------------------------
    bufferList->mNumberBuffers = channelCount;

    //----------------------------------------------
    // 8. Initialize the mBuffers member to 0.
    //----------------------------------------------
    AudioBuffer emptyBuffer = {0};
    size_t arrayIndex;
    for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
        bufferList->mBuffers[arrayIndex] = emptyBuffer;
    }

    //----------------------------------------------
    // 9. Set up the AudioBuffer structs in the buffer list.
    //----------------------------------------------
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof(AUDIO_SAMPLE_TYPE);
    bufferList->mBuffers[0].mData = leftChannel;
    if (channelCount == 2) {
        bufferList->mBuffers[1].mNumberChannels = 1;
        bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof(AUDIO_SAMPLE_TYPE);
        bufferList->mBuffers[1].mData = rightChannel;
    }

    //----------------------------------------------
    // 10. Perform a synchronous, sequential read of the audio data out of the file and
    //     into the "soundFiles[soundFile].leftChannel" and (if stereo) ".rightChannel" members.
    //----------------------------------------------
    UInt32 numberOfPacketsToRead = (UInt32)totalFramesInFile;
    OSStatus result = ExtAudioFileRead(audioFileExtendedObject,
                                       &numberOfPacketsToRead,
                                       bufferList);
    free(bufferList);
    if (noErr != result) {
        log_if_err(result, @"ExtAudioFileRead failure");
        //
        // If reading from the file failed, then free the memory for the sound buffer.
        //
        free(leftChannel);
        leftChannel = 0;
        if (2 == channelCount) {
            free(rightChannel);
            rightChannel = 0;
        }
        frameCount = 0;
    }

    //----------------------------------------------
    // Dispose of the extended audio file object, which also
    // closes the associated file.
    //----------------------------------------------
    ExtAudioFileDispose(audioFileExtendedObject);
    return;
}
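For reference, stereoStreamFormat and monoStreamFormat are used above but never shown. Here is a sketch of how a non-interleaved float client format of this kind is typically filled in, assuming a 44100 Hz sample rate (the variable name mirrors the one above; for non-interleaved data, the per-frame and per-packet byte counts cover a single channel):
AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags      = AUDIO_FORMAT_FLAGS; // float, packed, non-interleaved
stereoStreamFormat.mSampleRate       = 44100;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel   = 8 * sizeof(AUDIO_SAMPLE_TYPE);
stereoStreamFormat.mFramesPerPacket  = 1;
stereoStreamFormat.mBytesPerFrame    = sizeof(AUDIO_SAMPLE_TYPE); // one channel per buffer
stereoStreamFormat.mBytesPerPacket   = stereoStreamFormat.mBytesPerFrame;
// monoStreamFormat would be identical except mChannelsPerFrame = 1.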

Cannot get OpenAL to play sound

I've searched the net and I've searched here. I've found code that I could compile and that works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding in multiple voices), and it still won't work, even though the exact same code for a single voice works fine. I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would allow me to use the original methodology. The original system used paired bytes for music (sound effects - only 2 - were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence).
The music is a generated wave form (I've double-checked the math, and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.
I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Blending the 3 voices together (when they all run with different timings) is a mess as well. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't allow me to tweak it appropriately.
This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole(); // Get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1; // Start logging
    UINT_PTR tim = NULL;
    SDL_Event event;
    InitVideo(false); // Set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;
    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(cxt);
    if (!cxt) {
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};
    if (curmusic != oldsong) {
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++)
            alSourceStop(Source[voice]);
        mtime[voice] = 0;
        return;
    }
    if (curmusic == 0) return;
    // Only 3 voices for music, but have
    for (voice = 0; voice < 3; voice++) { // 4 set aside for eventual sound effects
        if (mtime[voice] == 0) { // Is the note finished?
            alSourceStop(Source[voice]); // It is, so stop the channel (source)
            mtime[voice] = music[ofset++]; // Get the next duration
            if (mtime[voice] == 0) { oldsong = 0; return; } // Zero marks the end, so restart
            note = music[ofset++]; // Get the next note
            if (note > 127) { // The old HW data could only
                if (note == 255) note = 127; // use values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); // Freq of note
                len = (ALuint)(srate / freq); // A single cycle of that freq.
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold; // Multiply till 1 interrupt cycle
                while (len > _BUFFSIZE_) len -= hold; // Don't overload buffer
                if (len == 0) len = _BUFFSIZE_; // Just to be safe
                for (i = 0; i < len; i++) // Calculate sine wave and put in buffer
                    buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(openAL.Source[i], AL_LOOPING, AL_TRUE);
                alSourcei(Source[i], AL_BUFFER, Buffer[i]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were 3 problems with my code. First, you have to fill the AL-generated buffer with the built wave data before you link the buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Also note that in the above example, I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)), since alBufferData expects a size in bytes.
The final problem was that you need to unlink a buffer from the source before you change the buffer's data:
alSourcei(source, AL_BUFFER, NULL);
The music would play, but not correctly, until I added that line to the note-change code.
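Putting the three fixes together, the note-change path inside the player loop ends up looking something like this sketch (names follow the question's code; len is a sample count, so the byte size passed to alBufferData is len * sizeof(short)):
// On a note change: stop, detach, refill, reattach, replay.
alSourceStop(Source[voice]);
alSourcei(Source[voice], AL_BUFFER, 0);               // detach before touching the buffer
alBufferData(Buffer[voice], AL_FORMAT_MONO16,
             buf[voice], len * sizeof(short), srate); // size is in bytes, not samples
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);   // attach the now-filled buffer
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcePlay(Source[voice]);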

Images saved with D3DXSaveSurfaceToFile will open in Paint, not Photoshop

I'm using D3DXSaveSurfaceToFile to save windowed Direct3D 9 surfaces to PNG, BMP and JPG files. There are no errors returned from the D3DXSaveSurfaceToFile call, and all files open fine in Windows Photo Viewer and Paint. But they will not open in higher-end image editors such as Paint Shop Pro or Photoshop; the error messages from these programs basically say that the file is corrupted. If I open the files in Paint and then save them in the same file format under a different file name, they'll open fine in the other programs.
This leads me to believe that D3DXSaveSurfaceToFile is writing out non-standard versions of these file formats. Is there some way to get this function to write out files that can be opened in programs like Photoshop without the intermediate step of resaving them in Paint? Or is there another function I should be using that does a better job of saving Direct3D surfaces to images?
Take a look at the file in an image metadata viewer. What does it tell you?
Unfortunately D3DXSaveSurfaceToFile() isn't the most stable (it's also exceptionally slow). Personally I do something like the code below. It works even on anti-aliased displays by doing an offscreen render to take the screenshot, then getting it into a buffer. It also supports only the most common pixel formats. Sorry for any errors in it; I pulled it out of an app I used to work on.
You can then, in your code (and probably in another thread), convert said 'bitmap' to anything you like using a variety of different code.
void HandleScreenshot(IDirect3DDevice9* device)
{
    DWORD tcHandleScreenshot = GetTickCount();
    LPDIRECT3DSURFACE9 pd3dsBack = NULL;
    LPDIRECT3DSURFACE9 pd3dsTemp = NULL;

    // Grab the back buffer into a surface
    if (SUCCEEDED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pd3dsBack)))
    {
        D3DSURFACE_DESC desc;
        pd3dsBack->GetDesc(&desc);

        // If the back buffer is multisampled, resolve it into a plain render target first.
        LPDIRECT3DSURFACE9 pd3dsCopy = NULL;
        if (desc.MultiSampleType != D3DMULTISAMPLE_NONE)
        {
            if (SUCCEEDED(device->CreateRenderTarget(desc.Width, desc.Height, desc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &pd3dsCopy, NULL)))
            {
                if (SUCCEEDED(device->StretchRect(pd3dsBack, NULL, pd3dsCopy, NULL, D3DTEXF_NONE)))
                {
                    pd3dsBack->Release();
                    pd3dsBack = pd3dsCopy;
                }
                else
                {
                    pd3dsCopy->Release();
                }
            }
        }

        if (SUCCEEDED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &pd3dsTemp, NULL)))
        {
            DWORD tmpTimeGRTD = GetTickCount();
            if (SUCCEEDED(device->GetRenderTargetData(pd3dsBack, pd3dsTemp)))
            {
                D3DLOCKED_RECT lockedSrcRect;
                if (SUCCEEDED(pd3dsTemp->LockRect(&lockedSrcRect, NULL, D3DLOCK_READONLY | D3DLOCK_NOSYSLOCK | D3DLOCK_NO_DIRTY_UPDATE)))
                {
                    // 24-bit RGB pixel buffer (note: rows here are not padded to the
                    // 4-byte alignment a strictly conforming BMP expects when width*3 % 4 != 0).
                    int nSize = desc.Width * desc.Height * 3;
                    BYTE* pixels = new BYTE[nSize + 1];
                    int iSrcPitch = lockedSrcRect.Pitch;
                    BYTE* pSrcRow = (BYTE*)lockedSrcRect.pBits;
                    LPBYTE lpDest = pixels;
                    LPDWORD lpSrc;
                    switch (desc.Format)
                    {
                    case D3DFMT_A8R8G8B8:
                    case D3DFMT_X8R8G8B8:
                        // BMPs are stored bottom-up, so walk the source rows in reverse.
                        for (int y = desc.Height - 1; y >= 0; y--)
                        {
                            lpSrc = reinterpret_cast<LPDWORD>(lockedSrcRect.pBits) + y * desc.Width;
                            for (unsigned int x = 0; x < desc.Width; x++)
                            {
                                *reinterpret_cast<LPDWORD>(lpDest) = *lpSrc;
                                lpSrc++;     // increment source pointer by 1 DWORD
                                lpDest += 3; // increment destination pointer by 3 bytes
                            }
                        }
                        break;
                    default:
                        ZeroMemory(pixels, nSize);
                    }
                    pd3dsTemp->UnlockRect();

                    BITMAPINFOHEADER header;
                    header.biWidth = desc.Width;
                    header.biHeight = desc.Height;
                    header.biSizeImage = nSize;
                    header.biSize = sizeof(BITMAPINFOHEADER);
                    header.biPlanes = 1;
                    header.biBitCount = 3 * 8; // RGB
                    header.biCompression = 0;
                    header.biXPelsPerMeter = 0;
                    header.biYPelsPerMeter = 0;
                    header.biClrUsed = 0;
                    header.biClrImportant = 0;

                    BITMAPFILEHEADER bfh = {0};
                    bfh.bfType = 0x4d42; // 'BM'
                    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
                    bfh.bfSize = bfh.bfOffBits + nSize;

                    unsigned int rough_size = sizeof(BITMAPINFOHEADER) + sizeof(BITMAPFILEHEADER) + nSize;
                    unsigned char* p = new unsigned char[rough_size];
                    unsigned char* cursor = p; // keep p pointing at the start of the file image
                    memcpy(cursor, &bfh, sizeof(BITMAPFILEHEADER));
                    cursor += sizeof(BITMAPFILEHEADER);
                    memcpy(cursor, &header, sizeof(BITMAPINFOHEADER));
                    cursor += sizeof(BITMAPINFOHEADER);
                    memcpy(cursor, pixels, nSize);
                    delete [] pixels;
                    /**********************************************/
                    // p now points at a full BMP file of rough_size bytes; write it out here
                    // (and delete [] p when done).
                }
            }
            pd3dsTemp->Release();
        }
        pd3dsBack->Release();
    }
}
Turns out it was a combination of a bug in my code and Paint being more forgiving than Photoshop when it comes to reading files. The bug caused the files to be saved with the wrong extension (i.e. Image.bmp was actually saved using D3DXIFF_JPG). When opening a file that contained JPG data but had a BMP extension, Photoshop simply rejected it, while Paint evidently ignored the extension and just decoded the file contents.
Looking at the file in an image metadata viewer helped me to see the problem.
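If you don't have a metadata viewer handy, the real container format is easy to check from the file's first few bytes; this standalone sketch relies only on the standard BMP, JPEG, and PNG signatures:
#include <stdio.h>

// Prints the actual format of a saved image, based on its magic bytes.
void IdentifyImageFile(const char* path)
{
    unsigned char magic[8] = {0};
    FILE* f = fopen(path, "rb");
    if (!f) return;
    fread(magic, 1, sizeof(magic), f);
    fclose(f);

    if (magic[0] == 'B' && magic[1] == 'M')
        printf("%s: BMP\n", path);
    else if (magic[0] == 0xFF && magic[1] == 0xD8)
        printf("%s: JPEG\n", path);
    else if (magic[0] == 0x89 && magic[1] == 'P' && magic[2] == 'N' && magic[3] == 'G')
        printf("%s: PNG\n", path);
    else
        printf("%s: unknown format\n", path);
}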

Audio Unit and Writing to file

I'm creating a real-time audio sequencer app on OS X.
The real-time synth part is implemented using AURenderCallback.
Now I'm writing a function that saves the rendered result to a wave file (44100 Hz, 16-bit, stereo).
The format for the render-callback function is 44100 Hz, 32-bit float, stereo, interleaved.
I'm using ExtAudioFileWrite to write to the file, but the ExtAudioFileWrite function returns error code 1768846202. I searched for 1768846202 but couldn't find any information about it.
Would you give me some hints?
Thank you.
Here is the code.
outFileFormat.mSampleRate = 44100;
outFileFormat.mFormatID = kAudioFormatLinearPCM;
outFileFormat.mFormatFlags =
    kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFileFormat.mBitsPerChannel = 16;
outFileFormat.mChannelsPerFrame = 2;
outFileFormat.mFramesPerPacket = 1;
outFileFormat.mBytesPerFrame =
    outFileFormat.mBitsPerChannel / 8 * outFileFormat.mChannelsPerFrame;
outFileFormat.mBytesPerPacket =
    outFileFormat.mBytesPerFrame * outFileFormat.mFramesPerPacket;

AudioBufferList *ioList;
ioList = (AudioBufferList *)calloc(1, sizeof(AudioBufferList)
                                      + 2 * sizeof(AudioBuffer));
ioList->mNumberBuffers = 2;
ioList->mBuffers[0].mNumberChannels = 1;
ioList->mBuffers[0].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[0].mData = ioDataL;
ioList->mBuffers[1].mNumberChannels = 1;
ioList->mBuffers[1].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[1].mData = ioDataR;

...

while (1) {
    // Fill the buffer by using the render callback func.
    RenderCallback(self, nil, nil, 0, frames, ioList);
    // I want to create a one-second file.
    if (renderedFrames >= 44100) break;
    err = ExtAudioFileWrite(outAudioFileRef, frames, ioList);
    if (err != noErr) {
        NSLog(@"ERROR AT WRITING TO FILE");
        goto errorExit;
    }
}
Some of the error codes are actually four-character strings. The Core Audio book provides a nice function to handle errors:
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    char str[20];
    // see if it appears to be a 4-char-code
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else {
        // no, format it as an integer
        sprintf(str, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}
Use it like this:
CheckError(ExtAudioFileSetProperty(outputFile,
                                   kExtAudioFileProperty_CodecManufacturer,
                                   sizeof(codec),
                                   &codec), "Setting codec.");
Before you can do any sort of debugging, you probably need to figure out what that error code actually means. Have you tried passing the status code to GetMacOSStatusErrorString() or GetMacOSStatusCommentString()? They aren't documented all that well, but they are declared in CoreServices/CarbonCore/Debugging.h.
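As a worked example, decoding the status code from the question is just a byte swap: 1768846202 is 0x696E737A, which prints as the four-char code 'insz' (what a given code actually means is still something you have to chase through the framework headers). A minimal sketch:
#include <stdio.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>

int main(void) {
    OSStatus err = 1768846202;           // the status code from the question
    UInt32 be = CFSwapInt32HostToBig((UInt32)err);
    char code[5] = {0};
    memcpy(code, &be, sizeof(be));
    printf("'%s'\n", code);              // prints 'insz' (0x69 0x6E 0x73 0x7A)
    return 0;
}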
