I am trying to set the runtime depth frame rate, but it is not successful. No error is returned, but the depth frame rate remains unchanged. I used the following code to set the depth frame rate.
TangoErrorType SetRuntimeDepthFrameRate(uint32_t frameRate)
{
TangoConfig runtimeConfig = TangoService_getConfig(TANGO_CONFIG_RUNTIME);
if (runtimeConfig == nullptr) {
LOGE("failed to get runtime config");
return TANGO_ERROR;
}
TangoErrorType err = TangoConfig_setInt32(runtimeConfig, "config_runtime_depth_framerate", frameRate);
if (err != TANGO_SUCCESS) {
LOGE("failed to set runtime depth framerate to %d", frameRate);
return err;
}
err = TangoService_setRuntimeConfig(runtimeConfig);
if (err != TANGO_SUCCESS)
LOGE("ailed to set runtime config");
LOGI("the runtime depth framerate is set to %d", GetRuntimeDepthFrameRate());
return err;
}
I used the following code to query the runtime depth frame rate.
int GetRuntimeDepthFrameRate() const {
TangoConfig runtimeConfig = TangoService_getConfig(TANGO_CONFIG_RUNTIME);
if (runtimeConfig == nullptr) {
LOGE("failed to get runtime config");
return -1;
}
int32_t depthFrameRate;
TangoErrorType err = TangoConfig_getInt32(runtimeConfig, "config_runtime_depth_framerate", &depthFrameRate);
if (err != TANGO_SUCCESS) {
LOGE("failed to get runtime depth framerate");
return -1;
}
return depthFrameRate;
}
The runtime depth frame rate never changes; it is always 5. My program keeps invoking the depth callbacks even when I try to set the depth rate to 0, which means the rate is not being set successfully.
Is there anything wrong with what I am doing?
Thank you in advance for any answers.
This is a bit old, but maybe it will help someone: after some experiments, I found it's still not possible to turn depth perception on/off via the API at runtime using ENABLE_DEPTH_PERCEPTION_BOOL (I'm using the Unity SDK). The only way to turn it on and off as needed (to save battery and CPU) is to start with depth ON and, after OnTangoServiceConnected(), set the framerate to 0 using RUNTIME_DEPTH_FRAMERATE. Enabling it later by setting the framerate back to 5 works fine.
I'm also not able to set the frame rate to anything other than 5, neither in C++ nor in Java.
Tested on both Yellowstone tablet and Lenovo Phab2 Pro, Tango SDK version Wasat (Version 1.44, September 2016).
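For reference, here is a minimal sketch of that start-with-depth-on, toggle-via-framerate workaround, written against the SetRuntimeDepthFrameRate() helper from the question above (illustrative only; the 0 and 5 values are simply the ones that worked for me):
// Sketch: depth stays enabled in the startup config and is paused/resumed at runtime
// purely through the runtime depth framerate key. 0 stops the depth callbacks, 5 restarts them.
void SetDepthRunning(bool running)
{
    SetRuntimeDepthFrameRate(running ? 5u : 0u);
}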
When capturing video from the camera on an Intel Mac and using VideoToolbox to hardware-encode the raw pixel buffers into H.264 slices, I found that the I-frames VideoToolbox encodes are not clear, so the video appears to blur every several seconds. Below are the properties I set:
self.bitrate = 1000000;
self.frameRate = 20;
int interval_second = 2;
NSDictionary *compressionProperties = @{
(id)kVTCompressionPropertyKey_ProfileLevel: (id)kVTProfileLevel_H264_High_AutoLevel,
(id)kVTCompressionPropertyKey_RealTime: @YES,
(id)kVTCompressionPropertyKey_AllowFrameReordering: @NO,
(id)kVTCompressionPropertyKey_H264EntropyMode: (id)kVTH264EntropyMode_CABAC,
(id)kVTCompressionPropertyKey_PixelTransferProperties: @{
(id)kVTPixelTransferPropertyKey_ScalingMode: (id)kVTScalingMode_Trim,
},
(id)kVTCompressionPropertyKey_AverageBitRate: @(self.bitrate),
(id)kVTCompressionPropertyKey_ExpectedFrameRate: @(self.frameRate),
(id)kVTCompressionPropertyKey_MaxKeyFrameInterval: @(self.frameRate * interval_second),
(id)kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration: @(interval_second),
(id)kVTCompressionPropertyKey_DataRateLimits: @[@(self.bitrate / 8), @1.0],
};
result = VTSessionSetProperties(self.compressionSession, (CFDictionaryRef)compressionProperties);
if (result != noErr) {
NSLog(#"VTSessionSetProperties failed: %d", (int)result);
return;
} else {
NSLog(#"VTSessionSetProperties succeeded");
}
These are very strange compression settings. Do you really need a short GOP and such strict data rate limits?
I very much suspect you just copied some code off the internet without having any idea what it does. If that's the case, just set interval_second = 300 and remove kVTCompressionPropertyKey_DataRateLimits completely.
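If it helps, here is a rough sketch of those adjustments using the plain C VideoToolbox calls (callable from Objective-C or C++); the helper name and the 300-second value are only illustrations of the suggestion above, not a canonical configuration:
#include <VideoToolbox/VideoToolbox.h>

// Apply only the relaxed settings: average bit rate, expected frame rate, and a long
// keyframe interval. Deliberately no kVTCompressionPropertyKey_DataRateLimits.
static OSStatus ApplyRelaxedSettings(VTCompressionSessionRef session,
                                     int32_t bitrate, int32_t frameRate)
{
    int32_t intervalSeconds = 300; // long GOP, as suggested above
    CFNumberRef avgBitrate = CFNumberCreate(NULL, kCFNumberSInt32Type, &bitrate);
    CFNumberRef fps = CFNumberCreate(NULL, kCFNumberSInt32Type, &frameRate);
    CFNumberRef keyDur = CFNumberCreate(NULL, kCFNumberSInt32Type, &intervalSeconds);

    OSStatus status = VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, avgBitrate);
    if (status == noErr)
        status = VTSessionSetProperty(session, kVTCompressionPropertyKey_ExpectedFrameRate, fps);
    if (status == noErr)
        status = VTSessionSetProperty(session, kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration, keyDur);

    CFRelease(avgBitrate);
    CFRelease(fps);
    CFRelease(keyDur);
    return status;
}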
I made an NDIS 6 network filter driver and am reading packets.
When I use an Intel I350 NIC, MmGetMdlByteCount returns 9014 bytes. This value is the same as the MTU size, so I can read the data in one pass.
However, when using the X540 NIC, MmGetMdlByteCount returns 2048 bytes, so I have to read the MDLs over and over again. Why is this happening?
Is there a way to read the data at once on the X540 NIC? I want to reduce the repetition because I think it will take longer if I have to fetch the data in several passes.
Below is a part of my source code.
PVOID vpByTmpData = NULL;
for( pNbMdl = NET_BUFFER_CURRENT_MDL( pNetBuffer );
pNbMdl != NULL && ulDataLength > 0;
pNbMdl = NDIS_MDL_LINKAGE( pNbMdl ) )
{
ulBytesToCopy = MmGetMdlByteCount( pNbMdl );
if( ulBytesToCopy == 0 )
continue;
vpByTmpData = MmGetSystemAddressForMdlSafe( pNbMdl, NormalPagePriority );
if( !vpByTmpData )
{
bRet = FALSE;
__leave;
}
if( ulBytesToCopy > ulDataLength )
ulBytesToCopy = ulDataLength;
NdisMoveMemory( &baImage[ulMemIdxOffset], (PBYTE)(vpByTmpData), ulBytesToCopy);
ulMemIdxOffset += ulBytesToCopy;
}
Please help me.
What you're seeing is a result of how the NIC hardware physically works. Different hardware will use different buffer layout strategies. NDIS does not attempt to force every NIC to use the same strategy, since that would reduce performance on some NICs. Unfortunately for you, that means the complexity of dealing with different buffers gets pushed upwards into NDIS filter & protocol drivers.
You can use NdisGetDataBuffer to do some of this work for you. Internally, NdisGetDataBuffer works like this:
if MmGetSystemAddressForMdl fails:
return NULL;
else if the payload is already contiguous in memory:
return a pointer to that directly;
else if you provided your own buffer:
copy the payload into your buffer
return a pointer to your buffer;
else:
return NULL;
So you can use NdisGetDataBuffer to obtain a contiguous view of the payload. The simplest way to use it is this:
UCHAR ScratchBuffer[MAX_MTU_SIZE];
UCHAR *Payload = NdisGetDataBuffer(NetBuffer, NetBuffer->DataLength, ScratchBuffer, 1, 0);
if (!Payload) {
return NDIS_STATUS_RESOURCES; // very unlikely: MmGetSystemAddressForMdl failed
}
memcpy(baImage, Payload, NetBuffer->DataLength);
But this can have a double-copy in some cases. (Exercise to test your understanding: when would there be a double-copy?) For slightly better performance, you can avoid the double-copy with this trick:
UCHAR *Payload = NdisGetDataBuffer(NetBuffer, NetBuffer->DataLength, baImage, 1, 0);
if (!Payload) {
return NDIS_STATUS_RESOURCES; // very unlikely: MmGetSystemAddressForMdl failed
}
// Did NdisGetDataBuffer already copy the payload into my flat buffer?
if (Payload != baImage) {
// If not, copy from the MDL to my flat buffer now.
memcpy(baImage, Payload, NetBuffer->DataLength);
}
You haven't included a complete code sample, but I suspect there may be some bugs in your code. I don't see any attempt to handle NetBuffer->CurrentMdlOffset. While this is usually zero, it's not always zero, so your code would not always be correct.
Similarly, it looks like the copy isn't correctly constrained by ulDataLength. You would need a ulDataLength -= ulBytesToCopy in there somewhere to fix this.
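To make those two points concrete, here is an untested sketch of the copy loop with the current MDL offset and the length decrement added (same variable names as the question's code, so treat it as a reference rather than drop-in code):
// Copy ulDataLength bytes of payload, starting at the NET_BUFFER's current MDL offset,
// into the flat buffer baImage.
ULONG ulMdlOffset = NET_BUFFER_CURRENT_MDL_OFFSET(pNetBuffer);
for (pNbMdl = NET_BUFFER_CURRENT_MDL(pNetBuffer);
     pNbMdl != NULL && ulDataLength > 0;
     pNbMdl = NDIS_MDL_LINKAGE(pNbMdl))
{
    ULONG ulBytesInMdl = MmGetMdlByteCount(pNbMdl);
    if (ulBytesInMdl <= ulMdlOffset) {
        ulMdlOffset -= ulBytesInMdl;   // this MDL lies entirely before the payload
        continue;
    }
    PUCHAR pVa = (PUCHAR)MmGetSystemAddressForMdlSafe(pNbMdl, NormalPagePriority);
    if (pVa == NULL) { bRet = FALSE; __leave; }
    ULONG ulBytesToCopy = ulBytesInMdl - ulMdlOffset;
    if (ulBytesToCopy > ulDataLength)
        ulBytesToCopy = ulDataLength;  // never copy past the packet's data length
    NdisMoveMemory(&baImage[ulMemIdxOffset], pVa + ulMdlOffset, ulBytesToCopy);
    ulMemIdxOffset += ulBytesToCopy;
    ulDataLength -= ulBytesToCopy;     // the decrement mentioned above
    ulMdlOffset = 0;                   // only the first MDL carries the initial offset
}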
I'm very sympathetic to how tricky it is to navigate NBLs, NBs, and MDLs -- my first NIC driver included a nasty bug in calculating MDL offsets. I have some MDL-handling code internally -- I will try to clean it up a bit and publish it at https://github.com/microsoft/ndis-driver-library/ in the next few days. I'll update this post if I get it published. I think there's clearly a need for some nice, reusable and well-documented sample code to just copy (subsets of) an MDL chain into a flat buffer, or vice-versa.
Update: Refer to MdlCopyMdlChainAtOffsetToFlatBuffer in mdl.h
When I run encoder->ProcessInput(stream_id, sample.Get(), 0) I get an E_FAIL ("Unspecified error"), which isn't very helpful.
I am trying to (1) figure out what the real error is and/or (2) get past this unspecified error.
Ultimately, my goal is achieving this: http://alax.info/blog/1716
Here's the gist of what I am doing:
(Error occurs in this block)
void encode_frame(ComPtr<ID3D11Texture2D> texture) {
_com_error error = NULL;
IMFTransform *encoder = nullptr;
encoder = get_encoder();
if (!encoder) {
cout << "Did not get a valid encoder to utilize\n";
return;
}
cout << "Making it Direct3D aware...\n";
setup_D3_aware_mft(encoder);
cout << "Setting up input/output media types...\n";
setup_media_types(encoder);
error = encoder->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL); // flush all stored data
error = encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL);
error = encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, NULL); // first sample is about to be processed, req for async
cout << "Encoding image...\n";
IMFMediaEventGenerator *event_generator = nullptr;
error = encoder->QueryInterface(&event_generator);
while (true) {
IMFMediaEvent *event = nullptr;
MediaEventType type;
error = event_generator->GetEvent(0, &event);
error = event->GetType(&type);
uint32_t stream_id = get_stream_id(encoder); // Likely just going to be 0
uint32_t frame = 1;
uint64_t sample_duration = 0;
ComPtr<IMFSample> sample = nullptr;
IMFMediaBuffer *mbuffer = nullptr;
DWORD length = 0;
uint32_t img_size = 0;
MFCalculateImageSize(desktop_info.input_sub_type, desktop_info.width, desktop_info.height, &img_size);
switch (type) {
case METransformNeedInput:
ThrowIfFailed(MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), texture.Get(), 0, false, &mbuffer),
mbuffer, "Failed to generate a media buffer");
ThrowIfFailed(MFCreateSample(&sample), sample.Get(), "Couldn't create sample buffer");
ThrowIfFailed(sample->AddBuffer(mbuffer), sample.Get(), "Couldn't add buffer");
// Test (delete this) - fake buffer
/*byte *buffer_data;
MFCreateMemoryBuffer(img_size, &mbuffer);
mbuffer->Lock(&buffer_data, NULL, NULL);
mbuffer->GetCurrentLength(&length);
memset(buffer_data, 0, img_size);
mbuffer->Unlock();
mbuffer->SetCurrentLength(img_size);
sample->AddBuffer(mbuffer);*/
MFFrameRateToAverageTimePerFrame(desktop_info.fps, 1, &sample_duration);
sample->SetSampleDuration(sample_duration);
// ERROR
ThrowIfFailed(encoder->ProcessInput(stream_id, sample.Get(), 0), sample.Get(), "ProcessInput failed.");
I set up my media types like this:
void setup_media_types(IMFTransform *encoder) {
IMFMediaType *output_type = nullptr;
IMFMediaType *input_type = nullptr;
ThrowIfFailed(MFCreateMediaType(&output_type), output_type, "Failed to create output type");
ThrowIfFailed(MFCreateMediaType(&input_type), input_type, "Failed to create input type");
/*
List of all MF types:
https://learn.microsoft.com/en-us/windows/desktop/medfound/alphabetical-list-of-media-foundation-attributes
*/
_com_error error = NULL;
int stream_id = get_stream_id(encoder);
error = output_type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
error = output_type->SetGUID(MF_MT_SUBTYPE, desktop_info.output_sub_type);
error = output_type->SetUINT32(MF_MT_AVG_BITRATE, desktop_info.bitrate);
error = MFSetAttributeSize(output_type, MF_MT_FRAME_SIZE, desktop_info.width, desktop_info.height);
error = MFSetAttributeRatio(output_type, MF_MT_FRAME_RATE, desktop_info.fps, 1);
error = output_type->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive); // motion will be smoother, fewer artifacts
error = output_type->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_High);
error = output_type->SetUINT32(MF_MT_MPEG2_LEVEL, eAVEncH264VLevel3_1);
error = output_type->SetUINT32(CODECAPI_AVEncCommonRateControlMode, eAVEncCommonRateControlMode_CBR); // probably will change this
ThrowIfFailed(encoder->SetOutputType(stream_id, output_type, 0), output_type, "Couldn't set output type");
error = input_type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
error = input_type->SetGUID(MF_MT_SUBTYPE, desktop_info.input_sub_type);
error = input_type->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
error = MFSetAttributeSize(input_type, MF_MT_FRAME_SIZE, desktop_info.width, desktop_info.height);
error = MFSetAttributeRatio(input_type, MF_MT_FRAME_RATE, desktop_info.fps, 1);
error = MFSetAttributeRatio(input_type, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
ThrowIfFailed(encoder->SetInputType(stream_id, input_type, 0), input_type, "Couldn't set input type");
}
My desktop_info struct is:
struct desktop_info {
int fps = 30;
int width = 2560;
int height = 1440;
uint32_t bitrate = 10 * 1000000; // 10Mb
GUID input_sub_type = MFVideoFormat_ARGB32;
GUID output_sub_type = MFVideoFormat_H264;
} desktop_info;
Output of my program prior to reaching ProcessInput:
Hello World!
Number of devices: 3
Device #0
Adapter: Intel(R) HD Graphics 630
Got some information about the device:
\\.\DISPLAY2
Attached to desktop : 1
Got some information about the device:
\\.\DISPLAY1
Attached to desktop : 1
Did not find another adapter. Index higher than the # of outputs.
Successfully duplicated output from IDXGIOutput1
Accumulated frames: 0
Created a 2D texture...
Number of encoders/processors available: 1
Encoder name: Intel® Quick Sync Video H.264 Encoder MFT
Making it Direct3D aware...
Setting up input/output media types...
If you're curious what my Locals were right before ProcessInput: http://prntscr.com/mx1i9t
This may be an "unpopular" answer since it doesn't provide a solution for MFT specifically, but after 8 months of working heavily on this stuff, I would highly recommend not using MFT and instead implementing the encoders directly.
My solution was implementing a HW encoder like NVENC/QSV, falling back on a software encoder like x264 if the client doesn't have HW acceleration available.
The reason for this is that MFT is far more opaque and not well documented or supported by Microsoft. I think you'll also find you want more control over the encoder's settings and parameter tuning, and each encoder implementation is subtly different.
We have seen this error coming from the Intel graphics driver. (The H.264 encoder MFT uses the Intel GPU to encode the video into H.264 format.)
In our case, I think the bug was triggered by configuring the encoder to a very high bit rate and then configuring to a low bit rate. In your sample code, it does not look like you are changing the bit rate, so I am not sure if it is the same bug.
Intel just released a new driver about two weeks ago that is supposed to have the fix for the bug we were seeing. So you may want to give that new driver a try -- hopefully it will fix the problem you are having.
The new driver is version 25.20.100.6519. You can get it from the Intel web site: https://downloadcenter.intel.com/download/28566/Intel-Graphics-Driver-for-Windows-10
If the new driver does not fix the problem, you could try running your program on a different PC that uses a NVidia or AMD graphics card, to see if the problem only happens on PCs that have Intel graphics.
I need to play raw PCM data (16-bit signed) using CoreAudio on OS X. I receive it over the network using a UDP socket (on the sender side the data is captured from a microphone).
The problem is that all I hear now is a short crackling noise and then only silence.
I'm trying to play the data using an AudioQueue. I set it up like this:
// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;
OSStatus err = noErr;
// create the audio queue
err = AudioQueueNewOutput(&streamFormat, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->audioQueue);
if (err)
{ PRINTERROR("AudioQueueNewOutput"); myData->failed = true; result = false;}
// allocate audio queue buffers
for (unsigned int i = 0; i < kNumAQBufs; ++i) {
err = AudioQueueAllocateBuffer(myData->audioQueue, kAQBufSize, &myData->audioQueueBuffer[i]);
if (err)
{ PRINTERROR("AudioQueueAllocateBuffer"); myData->failed = true; break; result = false;}
}
// listen for kAudioQueueProperty_IsRunning
err = AudioQueueAddPropertyListener(myData->audioQueue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
if (err)
{ PRINTERROR("AudioQueueAddPropertyListener"); myData->failed = true; result = false;}
MyAudioQueueOutputCallback is:
void MyAudioQueueOutputCallback(void* inClientData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer)
{
// this is called by the audio queue when it has finished decoding our data.
// The buffer is now free to be reused.
MyData* myData = (MyData*)inClientData;
unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);
// signal waiting thread that the buffer is free.
pthread_mutex_lock(&myData->mutex);
myData->inuse[bufIndex] = false;
pthread_cond_signal(&myData->cond);
pthread_mutex_unlock(&myData->mutex);
}
MyAudioQueueIsRunningCallback is:
void MyAudioQueueIsRunningCallback(void* inClientData,
AudioQueueRef inAQ,
AudioQueuePropertyID inID)
{
MyData* myData = (MyData*)inClientData;
UInt32 running;
UInt32 size = sizeof(running); // ioDataSize must be initialized to the size of the out buffer
OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
if (err) { PRINTERROR("get kAudioQueueProperty_IsRunning"); return; }
if (!running) {
pthread_mutex_lock(&myData->mutex);
pthread_cond_signal(&myData->done);
pthread_mutex_unlock(&myData->mutex);
}
}
and MyData is:
struct MyData
{
AudioQueueRef audioQueue; // the audio queue
AudioQueueBufferRef audioQueueBuffer[kNumAQBufs]; // audio queue buffers
AudioStreamPacketDescription packetDescs[kAQMaxPacketDescs]; // packet descriptions for enqueuing audio
unsigned int fillBufferIndex; // the index of the audioQueueBuffer that is being filled
size_t bytesFilled; // how many bytes have been filled
size_t packetsFilled; // how many packets have been filled
bool inuse[kNumAQBufs]; // flags to indicate that a buffer is still in use
bool started; // flag to indicate that the queue has been started
bool failed; // flag to indicate an error occurred
bool finished; // flag to indicate that termination is requested
pthread_mutex_t mutex; // a mutex to protect the inuse flags
pthread_mutex_t mutex2; // a mutex to protect the AudioQueue buffer
pthread_cond_t cond; // a condition variable for handling the inuse flags
pthread_cond_t done; // a condition variable signaled when the queue stops running
};
I'm sorry if I posted too much code - I hope it helps you understand exactly what I'm doing.
My code is mostly based on this code, which is a version of the AudioFileStreamExample from the Mac Developer Library adapted to work with CBR data.
I also looked at this post and tried the AudioStreamBasicDescription described there, and I tried changing my flags to little or big endian. It didn't work.
I looked at some other posts here and in other resources about similar problems; I checked the byte order of my PCM data, for example. (I just can't post more than two links.)
Please, can anyone help me understand what I'm doing wrong? Maybe I should abandon this approach and use Audio Units right away? I'm very much a newbie in CoreAudio and hoped that this mid-level part of CoreAudio would help me solve the problem.
P.S. Sorry for my English, I tried as well as I could.
I hope you've solved this one on your own already, but for the benefit of other people who are having this problem, I'll post an answer.
The problem is most likely that once an Audio Queue is started, time continues moving forward even if you stop enqueueing buffers. But when you enqueue a buffer, it is enqueued with a timestamp that is right after the previously enqueued buffer. This means that if you don't stay ahead of where the audio queue is playing, you will end up enqueueing buffers with timestamps in the past, so the audio queue will go silent while the isRunning property is still true.
To work around this, you have a couple of options. The simplest in theory would be to never fall behind on submitting buffers. But since you are using UDP, there is no guarantee that you will always have data to submit.
Another option is to keep track of which sample you should be playing and submit an empty buffer of silence whenever you need to fill a gap. This option works well if your source data has timestamps that you can use to calculate how much silence you need. But ideally, you wouldn't need to do this.
Instead, you should calculate the timestamp for the buffer using the system time. Instead of AudioQueueEnqueueBuffer, you'll need to use AudioQueueEnqueueBufferWithParameters. You just need to make sure the timestamp is ahead of where the queue currently is. You'll also have to keep track of what the system time was when you started the queue, so you can calculate the correct timestamp for each buffer you submit. If you have timestamp values on your source data, you should be able to use them to calculate the buffer timestamps as well.
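To make that last suggestion more concrete, here is a rough sketch using AudioQueueEnqueueBufferWithParameters; framesSinceQueueStart is a value you would maintain yourself (derived from the system time since AudioQueueStart, or from timestamps in your network data), so treat this as an illustration rather than finished code:
#include <AudioToolbox/AudioToolbox.h>

// Enqueue a buffer of CBR linear PCM at an explicit queue-relative sample time, so
// that gaps in the incoming data don't leave you enqueueing buffers in the past.
OSStatus EnqueueAtTime(AudioQueueRef queue, AudioQueueBufferRef buffer, Float64 framesSinceQueueStart)
{
    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = framesSinceQueueStart; // must be at or ahead of the queue's current time

    return AudioQueueEnqueueBufferWithParameters(queue, buffer,
                                                 0, NULL,  // no packet descriptions (CBR PCM)
                                                 0, 0,     // no trimming
                                                 0, NULL,  // no parameter events
                                                 &startTime,
                                                 NULL);    // actual start time not needed here
}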
I've searched the net, and I've searched here. I've found code that I could compile and that works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding in multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would allow me to use the original methodology. The original system used paired bytes for music (the sound effects - there were only 2 - were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until the time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence). The music is a generated wave form (I've double-checked the math and even examined the generated data in debug mode), and it looks good. The sequence looks good, but it doesn't actually produce sound.
I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Also, blending the 3 voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't allow me to tweak it appropriately. This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
FreeConsole(); //Get rid of the DOS console, don't need it
if (InitLog() < 0) return -1; //Start logging
UINT_PTR tim = NULL;
SDL_Event event;
InitVideo(false); //Set to window for now, will put options in later
curmusic = 5;
InitAudio();
SetTimer(NULL,tim,_FREQ_,TimerProc);
SDL_PollEvent(&event);
while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
SDL_Quit();
return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
RenderOutput();
PlayMusic();
//UpdateTimer();
//RotateGate();
return;
}
void InitAudio(void)
{
ALCdevice *dev;
ALCcontext *cxt;
Log("Initializing OpenAL Audio\r\n");
dev = alcOpenDevice(NULL);
if (!dev) {
Log("Failed to open an audio device\r\n");
exit(-1);
}
cxt = alcCreateContext(dev, NULL);
alcMakeContextCurrent(cxt);
if(!cxt) {
Log("Failed to create audio context\r\n");
exit(-1);
}
alGenBuffers(4,Buffer);
if (alGetError() != AL_NO_ERROR) {
Log("Error during buffer creation\r\n");
exit(-1);
}
alGenSources(4, Source);
if (alGetError() != AL_NO_ERROR) {
Log("Error during source creation\r\n");
exit(-1);
}
return;
}
void PlayMusic()
{
static int oldsong, ofset, mtime[4];
double freq;
ALuint srate = 44100;
ALuint voice, i, note, len, hold;
short buf[4][_BUFFSIZE_];
bool test[4] = {false, false, false, false};
if (curmusic != oldsong) {
oldsong = (int)curmusic;
if (curmusic > 0)
ofset = moffset[(curmusic - 1)];
for (voice = 1; voice < 4; voice++)
alSourceStop(Source[voice]);
mtime[voice] = 0;
return;
}
if (curmusic == 0) return;
//Only 3 voices for music, but have
for (voice = 0; voice < 3; voice ++) { // 4 set aside for eventual sound effects
if (mtime[voice] == 0) { //is note finished
alSourceStop(Source[voice]); //It is, so stop the channel (source)
mtime[voice] = music[ofset++]; //Get the next duration
if (mtime[voice] == 0) {oldsong = 0; return;} //zero marks end, so restart
note = music[ofset++]; //Get the next note
if (note > 127) { //The old HW the data was designed for could only
if (note == 255) note = 127; //use values 128 - 255 (255 = 127)
freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); //freq of note
len = (ALuint)(srate / freq); //A single cycle of that freq.
hold = len;
while (len < (srate / (1000 / _FREQ_))) len += hold; //Multiply till 1 interrupt cycle
while (len > _BUFFSIZE_) len -= hold; //Don't overload buffer
if (len == 0) len = _BUFFSIZE_; //Just to be safe
for (i = 0; i < len; i++) //calculate sine wave and put in buffer
buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);
alSourcePlay(Source[voice]);
}
} else --mtime[voice];
}
}
Well, it turns out there were 3 problems with my code. First, you have to copy the built wave data into the AL-generated buffer "before" you link that buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source,AL_BUFFER,buffer);
Also, in the above example, I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)).
The final problem was that you need to un-link the buffer from the source before you change the buffer data:
alSourcei(source,AL_BUFFER,NULL);
The music would play, but not correctly until I added that line to the note change code.
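Putting the three fixes together, the corrected note-change sequence (using the same names as the PlayMusic() code above, shown here only as a sketch) looks roughly like this:
alSourceStop(Source[voice]);
alSourcei(Source[voice], AL_BUFFER, 0);               // un-link the buffer before changing its data
alBufferData(Buffer[voice], AL_FORMAT_MONO16,
             buf[voice], len * sizeof(short), srate); // size in bytes, not samples
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);   // link the filled buffer to the source
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcePlay(Source[voice]);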