I've searched the net and I've searched here. I've found code that I could compile, and it works fine, but for some reason my own code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated waveforms. I've pretty much copied and pasted the working code (only adding multiple voices), and it still won't work, even though the exact same code for a single voice works fine. I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes: I was looking for something that would let me use the original methodology. The original system used paired bytes for music (the two sound effects were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until the time reached zero. This was done by patching into the interrupt vector. Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing: the timer fires, updates the display, and then runs the music sequence. I set this up with a defined period so that I only have one place to adjust the timing (to get it as close as possible to the original sequence).
The music is a generated waveform. I've double-checked the math, and even examined the generated data in debug mode, and it looks good. The sequence looks good too, but doesn't actually produce sound.
I tried SDL2 first, but its method of only playing one sound doesn't work for me. Unless I make the sample duration extremely short (and the sound produced that way is awful), I can't match the timing: it plays the entire sample through its own callback without letting me make adjustments. Also, blending the three voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way: they want to use their own callback and won't let me tweak it appropriately. This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
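To make the data format concrete, here is a hypothetical snippet of what the paired-byte music data looks like (the values are made up; only the duration/note pairing and the zero terminator come from the format described above):

unsigned char music[] = {
    12, 200,   // play note 200 for 12 timer ticks
     6, 190,   // then note 190 for 6 ticks
     0         // a zero duration marks the end of the song
};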
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole();                //Get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1; //Start logging
    UINT_PTR tim = 0;
    SDL_Event event;
    InitVideo(false);             //Set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;
    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    if (!cxt) {                   //Check the context before making it current
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alcMakeContextCurrent(cxt);
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};
    if (curmusic != oldsong) {                //Song changed, so restart
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++) {
            alSourceStop(Source[voice]);
            mtime[voice] = 0;
        }
        return;
    }
    if (curmusic == 0) return;
    for (voice = 0; voice < 3; voice++) {     //Only 3 voices for music; the 4th is
                                              //set aside for eventual sound effects
        if (mtime[voice] == 0) {              //Is the note finished?
            alSourceStop(Source[voice]);      //It is, so stop the channel (source)
            mtime[voice] = music[ofset++];    //Get the next duration
            if (mtime[voice] == 0) {oldsong = 0; return;} //Zero marks the end, so restart
            note = music[ofset++];            //Get the next note
            if (note > 127) {                 //The old HW this data was designed for
                if (note == 255) note = 127;  //could only use values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); //Freq of note
                len = (ALuint)(srate / freq); //A single cycle of that freq.
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold; //Multiply till 1 interrupt cycle
                while (len > _BUFFSIZE_) len -= hold;                //Don't overload buffer
                if (len == 0) len = _BUFFSIZE_;                      //Just to be safe
                for (i = 0; i < len; i++)     //Calculate sine wave and put in buffer
                    buf[voice][i] = (short)(32760 * sin((2 * M_PI * i * freq) / srate));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
                alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were three problems with my code. First, you have to link the built wave buffer to the AL-generated buffer "before" you link that buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source,AL_BUFFER,buffer);
Second (also shown in the above example), the size passed to alBufferData is in bytes, so I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)).
The final problem was that you need to unlink the buffer from the source before you change the buffer data:
alSourcei(source,AL_BUFFER,NULL);
The music would play, but not correctly, until I added that line to the note-change code.
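Putting the three fixes together, the note-change section ends up looking roughly like this (a sketch using the names from the question's code, not the literal final version):

alSourceStop(Source[voice]);                          //Stop before touching the buffer
alSourcei(Source[voice], AL_BUFFER, NULL);            //Unlink the old buffer first
alBufferData(Buffer[voice], AL_FORMAT_MONO16,
             buf[voice], len * sizeof(short), srate); //Size in bytes, not samples
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);   //Link the buffer after filling it
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcePlay(Source[voice]);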
Related
I am trying to learn Xcode Core Audio and stumbled upon this example:
https://developer.apple.com/library/mac/samplecode/CAPlayThrough/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004443
My intention is to capture the raw audio. Every time I hit a breakpoint, I lose the audio, since the example uses CARingBuffer.
How would you remove the time factor? I don't need real-time audio.
Since it is using CARingBuffer, it should keep writing to the same memory locations, shouldn't it? So why don't I hear the audio when I have a breakpoint set?
I am reading the Learning Core Audio book, but so far I cannot figure out this part of the following code:
CARingBufferError CARingBuffer::Store(const AudioBufferList *abl, UInt32 framesToWrite, SampleTime startWrite)
{
    if (framesToWrite == 0)
        return kCARingBufferError_OK;
    if (framesToWrite > mCapacityFrames)
        return kCARingBufferError_TooMuch; // too big!

    SampleTime endWrite = startWrite + framesToWrite;
    if (startWrite < EndTime()) {
        // going backwards, throw everything out
        SetTimeBounds(startWrite, startWrite);
    } else if (endWrite - StartTime() <= mCapacityFrames) {
        // the buffer has not yet wrapped and will not need to
    } else {
        // advance the start time past the region we are about to overwrite
        SampleTime newStart = endWrite - mCapacityFrames; // one buffer of time behind where we're writing
        SampleTime newEnd = std::max(newStart, EndTime());
        SetTimeBounds(newStart, newEnd);
    }

    // write the new frames
    Byte **buffers = mBuffers;
    int nchannels = mNumberChannels;
    int offset0, offset1, nbytes;
    SampleTime curEnd = EndTime();

    if (startWrite > curEnd) {
        // we are skipping some samples, so zero the range we are skipping
        offset0 = FrameOffset(curEnd);
        offset1 = FrameOffset(startWrite);
        if (offset0 < offset1)
            ZeroRange(buffers, nchannels, offset0, offset1 - offset0);
        else {
            ZeroRange(buffers, nchannels, offset0, mCapacityBytes - offset0);
            ZeroRange(buffers, nchannels, 0, offset1);
        }
        offset0 = offset1;
    } else {
        offset0 = FrameOffset(startWrite);
    }

    offset1 = FrameOffset(endWrite);
    if (offset0 < offset1)
        StoreABL(buffers, offset0, abl, 0, offset1 - offset0);
    else {
        nbytes = mCapacityBytes - offset0;
        StoreABL(buffers, offset0, abl, 0, nbytes);
        StoreABL(buffers, 0, abl, nbytes, offset1);
    }

    // now update the end time
    SetTimeBounds(StartTime(), endWrite);
    return kCARingBufferError_OK; // success
}
Thanks!
If I understood the question well, the signal is lost while the input unit (producer) is halted on a breakpoint. I presume this may be the expected behavior. Core Audio is a pull-model engine running off a real-time thread. This means that when your producer hits a breakpoint, the ring buffer empties while the output unit (consumer) keeps running but gets nothing from the buffer, because the playthrough chain is interrupted; hence the silence.
Perhaps this code from the example is not really the simplest one: AFAICT it also zeroes the audio buffers when the ring buffer gets overrun or underrun.
The term "raw audio" in the question is also not self-explanatory, I'm not sure what does it mean. I would suggest trying to learn async i/o using simpler circular buffers. There are few of them (without obligatory time values) on GitHub.
Please also be so kind as to format the source code for easier reading.
I want to bounce a MIDI file offline, and as the PlaySequence example does exactly this, I am trying to understand it.
I keep reading everywhere that you need a callback function to do anything in Core Audio, yet I cannot see any in this project.
I'm pasting the loop containing the AudioUnitRender call. Thanks for your help!
CAStreamBasicDescription clientFormat = CAStreamBasicDescription();
size = sizeof(clientFormat);
FailIf ((result = AudioUnitGetProperty (outputUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Output, 0,
                                        &clientFormat, &size)), fail, "AudioUnitGetProperty: kAudioUnitProperty_StreamFormat");
size = sizeof(clientFormat);
FailIf ((result = ExtAudioFileSetProperty(outfile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat)), fail, "ExtAudioFileSetProperty: kExtAudioFileProperty_ClientDataFormat");

{
    MusicTimeStamp currentTime;
    AUOutputBL outputBuffer (clientFormat, numFrames);
    AudioTimeStamp tStamp;
    memset (&tStamp, 0, sizeof(AudioTimeStamp));
    tStamp.mFlags = kAudioTimeStampSampleTimeValid;
    int i = 0;
    int numTimesFor10Secs = (int)(10. / (numFrames / srate));
    do {
        outputBuffer.Prepare();
        AudioUnitRenderActionFlags actionFlags = 0;
        FailIf ((result = AudioUnitRender (outputUnit, &actionFlags, &tStamp, 0, numFrames, outputBuffer.ABL())), fail, "AudioUnitRender");

        tStamp.mSampleTime += numFrames;

        FailIf ((result = ExtAudioFileWrite(outfile, numFrames, outputBuffer.ABL())), fail, "ExtAudioFileWrite");

        FailIf ((result = MusicPlayerGetTime (player, &currentTime)), fail, "MusicPlayerGetTime");
        if (shouldPrint && (++i % numTimesFor10Secs == 0))
            printf ("current time: %6.2f beats\n", currentTime);
    } while (currentTime < sequenceLength);
}
I had a look at the project and you're right, it does not have a render callback. Render callbacks have their place within Core Audio AudioUnit effect processing. The call to set up a render callback looks like this:
inline OSStatus SetInputCallback (CAAudioUnit &inUnit, AURenderCallbackStruct &inInputCallback)
{
    return inUnit.SetProperty (kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input,
                               0,
                               &inInputCallback,
                               sizeof(inInputCallback));
}
However, this code is just one big main() which sets up the sequence, the AUGraph, and the music player, and then calls the file writer WriteOutputFile in case there's an output file to bounce to.
I recommend you set some breakpoints at key methods and walk through the code, watching what it does and looking at variables.
EDIT: Note that when setting up your render callback on iOS on the RemoteIO unit (which doubles as the input and output unit), getting the correct stream format on the correct scope and bus element numbers in your SetProperty calls can be tricky. Refer to this from the Apple docs.
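In short, RemoteIO has a mic side (element 1) and a speaker side (element 0), and the client format goes on the inner-facing scope of each (a sketch from memory, not from the PlaySequence project):

AudioStreamBasicDescription fmt;   // fill in your client format first
// The mic data that comes out of element 1, toward your code:
AudioUnitSetProperty(remoteIO, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
// The data your code feeds into element 0, toward the speaker:
AudioUnitSetProperty(remoteIO, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));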
I am creating a multi-monitor full-screen DXGI/D3D application. I am enumerating through the available outputs and adapters in preparation for creating their swap chains.
When creating my swap chain using DXGI's IDXGIFactory::CreateSwapChain method, I need to provide a swap chain description, which includes a buffer description of type DXGI_MODE_DESC detailing the width, height, refresh rate, and so on. How can I find out what display mode the output is currently set to? I don't want to change the user's resolution or refresh rate when I go to full screen with this swap chain.
After looking around some more I stumbled upon the EnumDisplaySettings legacy GDI function, which gives me access to the current resolution and refresh rate. Combining this with the IDXGIOutput::FindClosestMatchingMode function, I can get pretty close to the current display mode:
void getClosestDisplayModeToCurrent(IDXGIOutput* output, DXGI_MODE_DESC* outCurrentDisplayMode)
{
    DXGI_OUTPUT_DESC outputDesc;
    output->GetDesc(&outputDesc);
    HMONITOR hMonitor = outputDesc.Monitor;
    MONITORINFOEX monitorInfo;
    monitorInfo.cbSize = sizeof(MONITORINFOEX);
    GetMonitorInfo(hMonitor, &monitorInfo);
    DEVMODE devMode;
    devMode.dmSize = sizeof(DEVMODE);
    devMode.dmDriverExtra = 0;
    EnumDisplaySettings(monitorInfo.szDevice, ENUM_CURRENT_SETTINGS, &devMode);

    DXGI_MODE_DESC current;
    current.Width = devMode.dmPelsWidth;
    current.Height = devMode.dmPelsHeight;
    bool useDefaultRefreshRate = 1 == devMode.dmDisplayFrequency || 0 == devMode.dmDisplayFrequency;
    current.RefreshRate.Numerator = useDefaultRefreshRate ? 0 : devMode.dmDisplayFrequency;
    current.RefreshRate.Denominator = useDefaultRefreshRate ? 0 : 1;
    current.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    current.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    current.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;

    output->FindClosestMatchingMode(&current, outCurrentDisplayMode, NULL);
}
...But I don't think this is really the correct answer, because I'm forced to use legacy functions. Is there any way to get the exact current display mode with DXGI alone, rather than using this method?
I saw a solution here:
http://www.rastertek.com/dx11tut03.html
In the following part:
// Now go through all the display modes and find the one that matches the screen width and height.
// When a match is found, store the numerator and denominator of the refresh rate for that monitor.
for (i = 0; i < numModes; i++)
{
    if (displayModeList[i].Width == (unsigned int)screenWidth)
    {
        if (displayModeList[i].Height == (unsigned int)screenHeight)
        {
            numerator = displayModeList[i].RefreshRate.Numerator;
            denominator = displayModeList[i].RefreshRate.Denominator;
        }
    }
}
Is my understanding correct that the available resolutions are in displayModeList?
This might be what you are looking for:
// Get the display mode list
std::vector<DXGI_MODE_DESC> modeList = GetDisplayModeList(*outputItor);
for (std::vector<DXGI_MODE_DESC>::iterator modeItor = modeList.begin(); modeItor != modeList.end(); ++modeItor)
{
    // PrintDisplayModeInfo(&*modeItor);
}

std::vector<DXGI_MODE_DESC> GetDisplayModeList(IDXGIOutput* output)
{
    UINT num = 0;
    DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
    UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;

    // First call gets the number of display modes
    output->GetDisplayModeList(format, flags, &num, 0);

    // Second call fills in the descriptions; a vector owns the storage,
    // so nothing leaks (the original new[]'d array was never deleted)
    std::vector<DXGI_MODE_DESC> modeList(num);
    output->GetDisplayModeList(format, flags, &num, num ? &modeList[0] : 0);

    return modeList;
}
I am using a toolkit to do some Elliptic Curve Cryptography on an ATmega2560. When trying to use the print functions in the toolkit, I get an empty string. I know the print functions work, because the x86 build prints the variables without a problem. I am not experienced with the ATmega and would love any help on this matter. The print code is included below.
The code to print a big number (it in turn calls util_print):
void bn_print(bn_t a) {
    int i;

    if (a->sign == BN_NEG) {
        util_print("-");
    }
    if (a->used == 0) {
        util_print("0\n");
    } else {
#if WORD == 64
        util_print("%lX", (unsigned long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*lX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long int)a->dp[i]);
        }
#else
        util_print("%llX", (unsigned long long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*llX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long long int)a->dp[i]);
        }
#endif
        util_print("\n");
    }
}
The code to actually print a big number variable:
static char buffer[64 + 1];
void util_printf(char *format, ...) {
#ifndef QUIET
#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    vsnprintf(pointer, 128, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
#elif ARCH == MSP
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    va_end(list);
#else
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    fflush(stdout);
    va_end(list);
#endif
#endif
}
Edit: I do have the UART initialized and can output printf statements to a console.
I'm one of the authors of the RELIC toolkit. The current util_printf() function is used to print inside the Avrora simulator, for debugging purposes. I'm glad that you could adapt the code to your purposes. As a side note, the buffer size problem was already fixed in more recent releases of the toolkit.
Let me know if you have further problems with the library. You can either contact me personally or write directly to the discussion group.
Thank you!
vsnprintf stores its output in the given buffer (which in this case is the address pointed to by the pointer variable). In order for it to show on the console (through the UART), you must send the buffer out yourself with printf (try adding printf("%s", pointer) after the vsnprintf call). If you're using avr-libc, don't forget to initialize the standard streams before making any call to a printf-family function.
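Initializing the stream on avr-libc looks roughly like this (a sketch; uart_init and uart_transmit stand in for whatever UART routines you already have):

#include <stdio.h>

void uart_init(void);          /* your existing UART setup */
void uart_transmit(char c);    /* your existing send-one-byte routine */

static int uart_putchar(char c, FILE *stream)
{
    if (c == '\n')
        uart_putchar('\r', stream);  /* console-friendly line endings */
    uart_transmit(c);
    return 0;
}

static FILE uart_stdout = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

int main(void)
{
    uart_init();
    stdout = &uart_stdout;  /* from here on, printf() goes out over the UART */
    printf("hello\n");
    return 0;
}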
Oh, by the way, your code is vulnerable to a buffer overflow: buffer[64 + 1] means your buffer is only 65 bytes, while vsnprintf(pointer, 128, format, list); tells vsnprintf it may write up to 128 bytes. Change the limit to 64 bytes or fewer to avoid the overflow.
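In other words, tie the limit to the real buffer instead of a magic number:

vsnprintf(pointer, sizeof(buffer) - 1, format, list);  /* 64 bytes: the buffer minus the tag byte at buffer[0] */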
Alright, so I found a workaround to print the bn numbers to stdout on an ATmega2560. The toolkit comes with a function that writes a variable to a string (bn_write_str), so I implemented my own print function as such:
void print_bn(bn_t a)
{
    char print[BN_SIZE];             // max precision of a bn number
    int bi = bn_bits(a);             // get the number of bits of the number
    bn_write_str(print, bi, a, 16);  // 16 indicates the radix (hexadecimal)
    printf("%s\n", print);
}
This function will print a bn number in hexadecimal format.
Hope this helps anyone using the RELIC toolkit with an AVR.
This skips the util_print calls.
In my app I receive audio data over a socket in Linear PCM format, at a roughly uniform interval of approximately 50 ms.
I am using an AudioQueue to play it. I referred mostly to the code from the AudioQueue SpeakHere example; the only difference is that I need to run it on Mac OS.
Following is the relevant piece of code.
Set up the AudioStreamBasicDescription format:
FillOutASBDForLPCM (sRecordFormat,
                    16000,   // sample rate
                    1,       // channels per frame
                    16,      // valid bits per channel
                    16,      // total bits per channel
                    false,   // not float
                    false    // not big-endian
                    );
Allocate buffers to hold and play the data:
for (int i = 0; i < kNumberBuffersPLyer; ++i) {
    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                  "AudioQueueAllocateBuffer failed");
}
where bufferByteSize is 640 and the number of buffers is 3.
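As a sanity check on those numbers (using the 16 kHz, mono, 16-bit format above): 640 bytes / 2 bytes per sample = 320 samples, and 320 / 16000 Hz = 20 ms per buffer, so the three buffers together hold 60 ms, slightly more than one 50 ms network delivery.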
To start the queue:
OSStatus errorCode = AudioQueueStart(mQueue,NULL);
Now, the thing is, I was expecting it to hit the callback automatically when it finished playing a buffer, but that wasn't happening.
So, as and when I get a buffer, I enqueue it. This is the code:
void AudioStream::startQueueIfNeeded() {
    SetLooping(true);
    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffersPLyer; ++i)
    {
        AQBufferCallback(this, mQueue, mBuffers[i]);
        //enQueueBuffer(this, mQueue, mBuffers[i]);
    }
    // AudioSessionSetActive( true );
    OSStatus errorCode = AudioQueueStart(mQueue, NULL);
    mIsDone = false;
    mIsStarted = true;
}
I feel the buffer data is proper, but I can't hear any sound. Can anyone tell me what I am doing wrong?
Thanks in advance.
Thanks for having a look at it. The problem was that I was running the AudioQueue inside a C++-based thread class, and that thread was being terminated; when I took it out of the thread class, it worked fine.
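For anyone hitting the same thing: one way to avoid tying the queue to your own thread's lifetime is to pass NULL as the run loop when creating it, so the queue invokes callbacks on its own internal thread (a sketch using the names from the question):

AudioQueueNewOutput(&sRecordFormat,    // the ASBD filled in earlier
                    AQBufferCallback,  // the buffer callback
                    this,              // user data passed to the callback
                    NULL,              // run loop: NULL = use the queue's internal thread
                    kCFRunLoopCommonModes,
                    0,
                    &mQueue);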