I am trying to loop through an array of Buffers each containing a sound sample read from disk, but I am having problems getting the SynthDef to reset its pointer to the buffers.
I did the following:
Assume I have a folder of sound files and I have read them all into an array of Buffers called "~buffers"
I just want to go through the array in order, playing the samples back to back and stopping after the last one.
I define a simple SynthDef, and then put the Synth that calls it into a Routine:
(
SynthDef(\playBuffer,{arg out = 0, buf;
var sig;
sig = PlayBuf.ar(2, buf, doneAction: Done.freeSelf);
Out.ar(out, sig);
}).add;
~routine = Routine({
~buffers.do({
arg item;
var synth;
synth = Synth(\playBuffer, [\buf, item]);
item.duration.wait;
synth.free;
});
});
~routine.play;
)
It does not work as expected---the synths always play the same sound, the first one, although for the durations corresponding to the different samples.
I think the problem could be that the function inside my \playBuffer SynthDef is (at least according to the Help files) not re-evaluated with a different bufnum argument inside the loop.
In fact I can loop through the buffers if I use Buffer.play, which creates SynthDefs and Synths on the fly. Replacing my Routine with this code works:
(
~routine2 = Routine({
~buffers.do({
arg item;
item.play;
item.duration.wait;
});
});
~routine2.play;
)
BUT: it is very crude, as now I cannot manipulate the buffer output except for changing the amplitude through the mul parameter of Buffer.play.
What I would like to do is to replicate Buffer.play's behavior---creating SynthDefs and Synths on the fly---in my own code. But I'm having no luck with it. In fact I am not sure where to start, possibly because I don't fully grasp how SuperCollider's server handles functions. Should I make a Synth-making function and use that inside the routine's loop? Or should I move the definition of the SynthDef inside the loop (which seems equivalent)? I tried the latter, but still got the same sound playing.
Perhaps I am going at this the wrong way---I'm very new to SuperCollider.
The code in your first example is correct. If I fill my buffers like this:
(
s.makeBundle(nil, {
~buffers = [1, 2, 3, 4, 5].collect {
|i|
var b;
b = Buffer.alloc(s, 44100, 1);
b.sine3([100, 150, 175] * i, 0.25);
};
})
)
and then play them with your code example:
(
SynthDef(\playBuffer,{arg out = 0, buf;
var sig;
sig = PlayBuf.ar(1, buf, doneAction: Done.freeSelf);
Out.ar(out, sig);
}).add;
~routine = Routine({
~buffers.do({
arg item;
Synth(\playBuffer, [\buf, item]);
item.duration.wait;
});
});
~routine.play;
)
This works fine: I hear ascending tones. (I changed your example to use single-channel buffers, and removed the .free since you were already using Done.freeSelf.) If you're hearing the same sound playing each time, the problem is likely in the code where you load your buffers, not in the playback.
One gotcha: the duration property of a buffer is not available immediately after you load it - reading an audio file is asynchronous, and SC doesn't know the duration until the file is loaded. If you're doing Buffer.read immediately before playing, there's a chance your duration might be e.g. 0 or nil, which would cause unexpected results.
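For example, here is a minimal sketch (the folder path is a made-up placeholder) that guarantees the durations are valid before the routine reads them:
(
fork {
    var folder = PathName("~/sounds".standardizePath); // hypothetical folder
    ~buffers = folder.files.collect { |f| Buffer.read(s, f.fullPath) };
    s.sync; // block this routine until all asynchronous reads complete
    ~buffers.do { |b| [b.path, b.duration].postln }; // durations are now valid
};
)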
I tried it in a Task, but I think it behaves exactly like what you are doing in the Routine.
You have to put the buffers in an array,
like buffer = [1, 2, 3, 4, 5],
but it is better to code it this way:
~buffer = [a.bufnum, b.bufnum, c.bufnum, d.bufnum];
and set that buffer variable as the second argument of PlayBuf in the SynthDef.
This is because you might load other buffers on your server, and if you hardcode raw buffer numbers in the array, it will often play a buffer you did not want to play.
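A minimal sketch of that advice (a through d stand for Buffers you read earlier):
~buffer = [a.bufnum, b.bufnum, c.bufnum, d.bufnum]; // or store the Buffer objects themselves
Synth(\playBuffer, [\buf, ~buffer[0]]); // \buf ends up as the second argument of PlayBuf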
I wrote a command-line C tool generating a sine wave and playing it using Core Audio on the default audio output. I am initializing an
AURenderCallbackStruct and initializing an AudioUnit using AudioUnitInitialize (as already discussed in this forum). All this is working as intended, but when it comes to closing the program I am not able to stop the AudioUnit, neither with AudioOutputUnitStop(player.outputUnit); nor AudioUnitUninitialize(player.outputUnit); nor
AudioComponentInstanceDispose(player.outputUnit);
The order of appearance of these calls in the code does not change the behavior.
The program is compiled without error messages, but the sine is still audible as long as the rest of the program is running.
Here is the code I'm using for initializing the AudioUnit:
void CreateAndConnectOutputUnit (ToneGenerator *player) {
AudioComponentDescription outputcd = {0};
outputcd.componentType = kAudioUnitType_Output;
outputcd.componentSubType = kAudioUnitSubType_DefaultOutput;
outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent comp = AudioComponentFindNext (NULL, &outputcd);
if (comp == NULL) {
printf ("can't get output unit");
exit (-1);
}
AudioComponentInstanceNew(comp, &player->outputUnit);
// register render callback
AURenderCallbackStruct input;
input.inputProc = SineWaveRenderCallback;
input.inputProcRefCon = player;
AudioUnitSetProperty(player->outputUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Output,
0,
&input,
sizeof(input));
// initialize unit
AudioUnitInitialize(player->outputUnit);
}
In my main program I'm starting the AudioUnit and the sine wave.
int main(void) {
// code for doing various things
ToneGenerator player = {0}; // create a sound object
CreateAndConnectOutputUnit (&player);
AudioOutputUnitStart(player.outputUnit);
// waiting to listen to the sine wave
sleep(3);
// attempt to stop the sound output
AudioComponentInstanceDispose(player.outputUnit);
AudioUnitUninitialize(player.outputUnit);
AudioOutputUnitStop(player.outputUnit);
//additional code that should be executed without sine wave being audible
}
As I'm new to both this forum and programming in Xcode, I hope that I explained this issue in a way that allows you to help me out, and I hope that I didn't miss the answer somewhere in the forum while searching for a solution.
Thank you in advance for your time and input,
Stefan
You should manage and unmanage your audio unit in a logical order. It doesn't make sense to stop playback on an already uninitialized audio unit which had, in fact, been disposed of in the middle of playback. Instead, try the following order:
AudioOutputUnitStop(player.outputUnit); //first stops playback
AudioUnitUninitialize(player.outputUnit); //then deallocates unit's resources
AudioComponentInstanceDispose(player.outputUnit); //finally disposes of the AU itself
The sine wave command-line app you're after is a well-elaborated lesson in this textbook. Please read it step by step.
Last but not least, your question has nothing to do with C++: Core Audio is a plain-C API, so the C++ in both your title and tag is wrong and misleading.
An Audio Unit runs in an asynchronous thread that may not actually stop immediately when you call AudioOutputUnitStop. Thus, it may work better to wait a fraction of a second (at least a couple audio callback buffer durations in time) before calling AudioUnitUninitialize and AudioComponentInstanceDispose on a potentially still running audio unit.
Also, check to make sure your player.outputUnit value is a valid unit (and not an uninitialized or trashed variable) at the time you stop the unit.
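For example, a minimal teardown sketch (the 100 ms figure is an assumption; scale it to your callback buffer duration):
#include <unistd.h> // usleep
AudioOutputUnitStop(player.outputUnit);            // stop pulling the render callback
usleep(100 * 1000);                                // give the render thread time to drain
AudioUnitUninitialize(player.outputUnit);          // then release the unit's resources
AudioComponentInstanceDispose(player.outputUnit);  // finally dispose of the instance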
I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header plus BGR codes. I have the textures loaded and running, but I get a gray box instead of the texture. I don't even know how to DEBUG this.
Textures are loaded in a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: how-to-manage-memory-with-texture-in-opengl ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mipmapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mipmaps the texture is incomplete and you will get a white result.
So either set the texture to no mip or generate the mipmaps. One way to do the latter is to call glGenerateMipmap after the tex img call, as sketched below.
(see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml).
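A minimal LWJGL sketch of both options (GL30 assumes an OpenGL 3.0 context; width, height, byteBuffer as in the question):
// Option A: no mipmaps - sample only the base level
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
// Option B: keep the mipmapped default and generate the mip chain after upload
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0,
        GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);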
It's a very common gl pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good gl debugging tools in for example xcode but they will not tell you about this case.
Debugging GPU code is always a hassle. I would bet my money on a big industry progress in this area as more companies discover the power of GPU. Until then; I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
#include <stdio.h>  /* printf */
#include <GL/glu.h> /* gluErrorString */

int printOglError(const char *file, int line)
{
/* Returns 1 if an OpenGL error occurred, 0 otherwise. */
GLenum glErr;
int retCode = 0;
glErr = glGetError();
while (glErr != GL_NO_ERROR) {
printf("glError in file %s # line %d: %s\n", file, line, gluErrorString(glErr));
retCode = 1;
glErr = glGetError();
}
return retCode;
}
#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts if you make some invalid operations (which might just be your case) but you usually have to find where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend using the open-source lib DevIL - it's quite competent at loading various image formats.
Thanks to Felix, by not calling glTexSubImage2D (leaving the memory valid, but uninitialized) I noticed a remnant pattern left by the default memory. This indicated that the texture is being displayed, but the load is most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is must be ... at least ...
I thought that OpenGL needed space to return data, so I had doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong; what is needed is the flip() function (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer in read mode. Since the buffer is only semi-full, the OpenGL buffer check gives an error. The correct thing to do is not to double the buffer size; it is to use buffer.position(buffer.capacity()) to mark the buffer as fully written before doing a flip().
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // mark the buffer as completely written
buffer.flip(); // flip buffer to read mode
To figure this out, it is helpful to hardcode the memory of the buffer to make sure the OpenGL calls are working, isolating the load problem. Then when the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
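A minimal sketch of that hardcoding step (the stripe pattern is arbitrary):
// Fill the ByteBuffer with black/white stripes so a correct upload is unmistakable
for (int i = 0; i < buffer.capacity(); i++) {
    buffer.put(i, (byte) (((i / 3) % 16 < 8) ? 0xFF : 0x00)); // 8-pixel stripes
}
buffer.position(buffer.capacity()); // mark as fully written
buffer.flip();                      // read mode, limit == capacity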
Some ideas which might cause the issue:
Your texture is disposed somewhere. I don't know the whole code but I guess somewhere there is a glDeleteTextures and this could cause some issues if called at the wrong time.
Are the texture width and height powers of two? If not this might be an issue depending on your hardware. Old hardware sometimes won't support non-power of two images.
The texture parameters changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter; see the sketch after this list).
There could be a loading issue when loading the next image ( edit: or even the first image ). Check if the first image is displayed without loading the next images. If so it must be one of the cases above.
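A minimal sketch of that parameter check (LWJGL, matching the question's code):
int minFilter = GL11.glGetTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER);
System.out.println("MIN_FILTER = 0x" + Integer.toHexString(minFilter)); // 0x2601 = GL_LINEAR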
I'm attempting to port an old open-source FMOD 3 game (Candy Crisis) to the latest version of FMOD Ex 4 on OS X. Its sound needs are very simple: it plays WAVs, sometimes changing their frequency or speaker mix, and also plays MOD tracker music, sometimes changing the speed. I'm finding that the game works fine at first, but over the course of a few minutes, it starts truncating sounds early, then the music loses channels and eventually stops, then over time all sound ceases. I can cause the problem to reproduce more quickly if I lower the number of channels available to FMOD.
I can get the truncated/missing sounds issue to occur even if I never play a music file, but music definitely seems to make things worse. I have also tried commenting out the code which adjusts the sound frequency and speaker mix, and that was not the issue.
I am calling update() every frame.
Here's the entirety of my interactions with FMOD to play WAVs:
void InitSound( void )
{
FMOD_RESULT result = FMOD::System_Create(&g_fmod);
FMOD_ERRCHECK(result);
unsigned int version;
result = g_fmod->getVersion(&version);
FMOD_ERRCHECK(result);
if (version < FMOD_VERSION)
{
printf("Error! You are using an old version of FMOD %08x. This program requires %08x\n", version, FMOD_VERSION);
abort();
}
result = g_fmod->init(8 /* was originally 64, but 8 repros the issue faster */, FMOD_INIT_NORMAL, 0);
FMOD_ERRCHECK(result);
for (int index=0; index<kNumSounds; index++)
{
result = g_fmod->createSound(QuickResourceName("snd", index+128, ".wav"), FMOD_DEFAULT, 0, &s_sound[index]);
FMOD_ERRCHECK(result);
}
}
void PlayMono( short which )
{
if (soundOn)
{
FMOD_RESULT result = g_fmod->playSound(FMOD_CHANNEL_FREE, s_sound[which], false, NULL);
FMOD_ERRCHECK(result);
}
}
void PlayStereoFrequency( short player, short which, short freq )
{
if (soundOn)
{
FMOD::Channel* channel = NULL;
FMOD_RESULT result = g_fmod->playSound(FMOD_CHANNEL_FREE, s_sound[which], true, &channel);
FMOD_ERRCHECK(result);
result = channel->setSpeakerMix(player, 1.0f - player, 0, 0, 0, 0, 0, 0);
FMOD_ERRCHECK(result);
float channelFrequency;
result = s_sound[which]->getDefaults(&channelFrequency, NULL, NULL, NULL);
FMOD_ERRCHECK(result);
result = channel->setFrequency((channelFrequency * (16 + freq)) / 16);
FMOD_ERRCHECK(result);
result = channel->setPaused(false);
FMOD_ERRCHECK(result);
}
}
void UpdateSound()
{
g_fmod->update();
}
And here's how I play MODs.
void ChooseMusic( short which )
{
if( musicSelection >= 0 && musicSelection <= k_songs )
{
s_musicChannel->stop();
s_musicChannel = NULL;
s_musicModule->release();
s_musicModule = NULL;
musicSelection = -1;
}
if (which >= 0 && which <= k_songs)
{
FMOD_RESULT result = g_fmod->createSound(QuickResourceName("mod", which+128, ""), FMOD_DEFAULT, 0, &s_musicModule);
FMOD_ERRCHECK(result);
result = g_fmod->playSound(FMOD_CHANNEL_FREE, s_musicModule, true, &s_musicChannel);
FMOD_ERRCHECK(result);
EnableMusic(musicOn);
s_musicModule->setLoopCount(-1);
s_musicChannel->setPaused(false);
musicSelection = which;
s_musicPaused = 0;
}
}
If someone wants to experiment with this, let me know and I'll upload the project somewhere. My gut feeling is that FMOD is busted but I'd love to be proven wrong.
Sounds like your music needs to be set as higher priority than your other sounds. Remember, lower numbers are more important. I think you can just set the priority on the channel.
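For example, a minimal sketch against the FMOD Ex 4 API, spliced into ChooseMusic before unpausing (0 is the highest priority; the default is 128):
result = g_fmod->playSound(FMOD_CHANNEL_FREE, s_musicModule, true, &s_musicChannel);
FMOD_ERRCHECK(result);
s_musicChannel->setPriority(0); // 0 = most important, 256 = least
s_musicChannel->setPaused(false);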
Every time I play the following WAV, FMOD loses one channel permanently. I am able to reproduce this channel-losing behavior in the "playsound" example if I replace the existing jaguar.wav with my file.
https://drive.google.com/file/d/0B1eDRY8sV_a9SXMyNktXbWZOYWs/view?usp=sharing
I contacted Firelight and got this response. Apparently WAVs can include a looping command! I had no idea.
Hello John,
I've taken a look at the two files you have provided. Both files end
with a 2 sample infinite loop region.
FMOD 4 (and FMOD 5 for that matter) will see the loop region in the
file and automatically enable FMOD_LOOP_NORMAL if you haven't
specified any loop mode. Assuming you want one-shot behavior just pass
in FMOD_LOOP_OFF when you create the sound.
Kind regards, Mathew Block | Senior Platform Engineer
Technically this behavior contradicts the documented behavior of FMOD_DEFAULT (which is specified to imply FMOD_LOOP_OFF) so they are planning to improve the documentation here.
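A minimal sketch of the fix in InitSound (only the mode flags change):
result = g_fmod->createSound(QuickResourceName("snd", index+128, ".wav"),
                             FMOD_LOOP_OFF, 0, &s_sound[index]); // force one-shot playback
FMOD_ERRCHECK(result);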
Based on the wave sample you supplied, FMOD is behaving correctly as it appears you've figured out. The sample has a loop that is honored by FMOD and the last samples are simply repeated forever. While useless, this is correct and the variance in the samples is so slight as to not be audible. While not part of the original spec for wave format, extended information was added later to support meta data such as author, title, comments and multiple loop points.
Your best bet is to examine all your source assets for those that contain loop information. Simply playing all sounds without loop information is probably not the best workaround. Some loops may be intentional; those that are will have code that stops them. Typically, in a game, the entire waveform is looped when looping is desired. You can then write or use a tool that will strip the loop information. If you do write your own tool, I'd recommend resampling the audio to the native output sampling rate of the hardware. You'd need to ensure your resampler was sample-accurate (no time shift) and did not introduce noise.
Historically, some game systems had a section at the end of the sound with silence and a loop point set on this region. The short reason for this was to reduce popping that might occur at the end of a sound in a hardware audio channel.
Curiously, the last 16 samples of your .wav look like garbage, and I'm wondering if the .wav assets you're using were converted from a source meant for a game console and that's where the bogus loop information came from as well.
This would have been a comment but my lowly rep does not allow it.
Currently I am attempting to use PBOs to get video data to textures. I'm not sure if what I'm trying to do is even possible, or a good way to do it if it IS possible... I have 3 textures with the GL_RED format (one for each channel, not using Alpha currently). All three of these will be filled out in a single call to an external library.
Here's binding the buffer, etc:
void LockTexture(const TextureID& id, void ** ppbData)
{
Texture& tex = textures.getArray()[id];
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, tex.glBufID);
glBufferData(GL_PIXEL_UNPACK_BUFFER, tex.width * tex.height, NULL, GL_STREAM_DRAW);
*ppbData = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
}
This is done for the 3 textures, the buffers are then filled by the external library. Then I attempt to push them to the texture, like so:
void UnlockTexture(const TextureID& id)
{
Texture& tex = textures.getArray()[id];
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(tex.glTarget, tex.glTexID);
glCheckForErrors(); // <--- NO ERROR
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex.width, tex.height, GL_RED, GL_UNSIGNED_BYTE, 0);
glCheckForErrors(); // <--- ERROR
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(tex.glTarget, 0);
}
Going through the list of reasons the error could be generated, this is what I know:
texture array has been defined
type is correct
data param (offset) is good at 0
not executed between glBegin/glEnd
This one I'm not sure about:
error is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
This one seems like it could be an issue, but I'd have no idea how else to handle this:
error is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
Am I correct in saying that this glUnmapBuffer is unmapping the last-mapped buffer, so the correct buffer is still mapped?
GL version is 3.2
I would greatly appreciate any help on this one, thanks!
glUnmapBuffer(target) will unmap the buffer which is currently bound to target. From the code you posted, it is unclear whether the binding is still the same as at the time of the map call. Your wording suggests that you map all 3 buffers right after each other, and when you try to unmap them, you only actually unmap the last one mapped, because you forgot to rebind the other ones. That would lead to this error for the first two of your textures.
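For example, a minimal sketch of UnlockTexture with the rebind added (types and field names follow the question's code):
void UnlockTexture(const TextureID& id)
{
    Texture& tex = textures.getArray()[id];
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, tex.glBufID); // rebind THIS texture's PBO first
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);             // now unmaps the right data store
    glBindTexture(tex.glTarget, tex.glTexID);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex.width, tex.height,
                    GL_RED, GL_UNSIGNED_BYTE, 0);      // sources the bound PBO at offset 0
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glBindTexture(tex.glTarget, 0);
}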
I am developing an application in Atmel Studio 6 using the xMega32a4u. I'm using the TWI libraries provided by Atmel. Everything is going well for the most part.
Here is my issue: In order to update an OLED display I am using (SSD1306 controller, 128x32), the entire contents of the display RAM must be written immediately following the I2C START command, slave address, and control byte so the display knows to enter the data into the display RAM. If the control byte does not immediately precede the display RAM package, nothing works.
I am using a Saleae logic analyzer to verify that the bus is doing what it should.
Here is the function I am using to write the display:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
uint8_t data_array[513];
data_array[0] = SSD1306_DATA_BYTE;
for (int i=0;i<512;++i){
data_array[i+1] = buffer[i];
}
OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
OLED_command(SSD1306_SETSTARTLINE | 0x00);
twi_package_t buffer_send = {
.chip = OLED_BUS_ADDRESS,
.buffer = data_array,
.length = 513
};
twi_master_write(&TWIC, &buffer_send);
}
Clearly, this is very inefficient, as each call to this function recreates the entire array "buffer" in a new array "data_array", one element at a time. The point of this is to insert the control byte (SSD1306_DATA_BYTE = 0x40) into the array so that the entire "package" is sent at once, with the control byte in the right place. I could make the original "buffer" array one element larger and add the control byte as the first element to skip this process, but that makes the size 513 rather than 512, and it might mess with some of the text/graphics functions that manipulate this array and depend on it being the correct size.
Now, I thought I could write the code like this:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
uint8_t data_byte = SSD1306_DATA_BYTE;
OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
OLED_command(SSD1306_SETSTARTLINE | 0x00);
twi_package_t data_control_byte = {
.chip = OLED_BUS_ADDRESS,
.buffer = &data_byte,
.length = 1
};
twi_master_write(&TWIC, &data_control_byte);
twi_package_t buffer_send = {
.chip = OLED_BUS_ADDRESS,
.buffer = buffer,
.length = 512
};
twi_master_write(&TWIC, &buffer_send);
}
That doesn't work. The first "twi_master_write" command sends a START, address, control, STOP. Then the next such command sends a START, address, data buffer, STOP. Because the control byte is missing from the latter transaction, this does not work. All I need is to insert a 0x40 byte between the address byte and the buffer array when it is sent over the I2C bus. twi_master_write is a function that is provided in the Atmel TWI libraries. I've tried to examine the libraries to figure out its inner workings, but I can't make sense of it.
Surely, instead of figuring out how to recreate a twi_write function to work the way I need, there is an easier way to add this preceding control byte? Ideally one that is not so wasteful of clock cycles as my first code example? Realistically the display still updates very fast, more than enough for my needs, but that does not change the fact this is inefficient code.
I appreciate any advice you all may have. Thanks in advance!
How about having buffer and data_array point to the same uint8_t[513] array, with buffer starting at its second element? Then you can continue to use buffer as you do today, but also use data_array directly, without first having to copy all the elements from buffer.
uint8_t data_array[513];
uint8_t *buffer = &data_array[1];