Currently I am attempting to use PBOs to get video data into textures. I'm not sure whether what I'm trying to do is even possible, or a good way to do it if it is possible... I have three textures in the GL_RED format (one for each channel; not using alpha currently). All three of these will be filled in a single call to an external library.
Here's binding the buffer, etc:
void LockTexture(const TextureID& id, void ** ppbData)
{
    Texture& tex = textures.getArray()[id];
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, tex.glBufID);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, tex.width * tex.height, NULL, GL_STREAM_DRAW);
    *ppbData = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
}
This is done for all three textures, and the buffers are then filled by the external library. Then I attempt to push the data to the textures, like so:
void UnlockTexture(const TextureID& id)
{
    Texture& tex = textures.getArray()[id];
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(tex.glTarget, tex.glTexID);
    glCheckForErrors(); // <--- NO ERROR
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex.width, tex.height, GL_RED, GL_UNSIGNED_BYTE, 0);
    glCheckForErrors(); // <--- ERROR
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glBindTexture(tex.glTarget, 0);
}
Going through the list of reasons the error could be generated, this is what I know:
texture array has been defined
type is correct
data param (offset) is good at 0
not executed between glBegin/glEnd
This one I'm not sure about:
error is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
This one seems like it could be an issue, but I'd have no idea how else to handle this:
error is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
Am I correct in thinking that this glUnmapBuffer is unmapping the last-mapped buffer, so the buffer I actually want to unmap is still mapped?
GL version is 3.2
I would greatly appreciate any help on this one, thanks!
glUnmapBuffer(target) will unmap whichever buffer is currently bound to target. From the code you posted, it is unclear whether the binding at unmap time is still the same one that was in place when you called glMapBuffer. Your wording suggests that you map all three buffers one right after another; when you then try to unmap them, you only ever unmap the last one mapped because you forget to rebind the others. That would produce this error for the first two of your textures.
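A minimal sketch of UnlockTexture with the missing rebind added (reusing the names from the question; untested):

void UnlockTexture(const TextureID& id)
{
    Texture& tex = textures.getArray()[id];

    // Rebind this texture's own PBO so the unmap (and the following unpack) use the right buffer.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, tex.glBufID);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    glBindTexture(tex.glTarget, tex.glTexID);
    // With a PBO bound, the last parameter is a byte offset into the buffer, not a pointer.
    glTexSubImage2D(tex.glTarget, 0, 0, 0, tex.width, tex.height, GL_RED, GL_UNSIGNED_BYTE, 0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glBindTexture(tex.glTarget, 0);
}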
I am trying to loop through an array of Buffers, each containing a sound sample read from disk, but I am having problems getting the SynthDef to reset its pointer to the buffers.
I did the following:
Assume I have a folder of sound files and I have read them all into an array of Buffers called "~buffers"
I just want to go through the array in order, playing the samples back to back and stopping after the last one.
I define a simple SynthDef, and then put the Synth that calls it into a Routine:
(
SynthDef(\playBuffer, { arg out = 0, buf;
    var sig;
    sig = PlayBuf.ar(2, buf, doneAction: Done.freeSelf);
    Out.ar(out, sig);
}).add;

~routine = Routine({
    ~buffers.do({
        arg item;
        var synth;
        synth = Synth(\playBuffer, [\buf, item]);
        item.duration.wait;
        synth.free;
    });
});

~routine.play;
)
It does not work as expected: the synths always play the same sound (the first one), although for the durations corresponding to the different samples.
I think the problem could be that the function inside my \playBuffer SynthDef (at least according to the Help files) is not re-evaluated with a different bufnum argument inside the loop.
In fact I can loop through the buffers if I use Buffer.play, which creates SynthDefs and Synths on the fly. Replacing my Routine with this code works:
(
~routine2 = Routine({
    ~buffers.do({
        arg item;
        item.play;
        item.duration.wait;
    });
});

~routine2.play;
)
BUT: it is very crude, as now I cannot manipulate the buffer output except for changing the amplitude through the mul parameter of Buffer.play.
What I would like to do is replicate Buffer.play's behavior of creating SynthDefs and Synths on the fly in my own code, but I'm having no luck with it. In fact I am not sure where to start, possibly because I don't fully grasp how SuperCollider's server handles functions. Should I make a Synth-making function and use that inside the routine's loop? Or should I move the definition of the SynthDef inside the loop (which seems equivalent)? I tried the latter, but still got the same sound playing.
Perhaps I am going about this the wrong way; I'm very new to SuperCollider.
The code in your first example is correct. If I fill my buffers like this:
(
s.makeBundle(nil, {
    ~buffers = [1, 2, 3, 4, 5].collect {
        |i|
        var b;
        b = Buffer.alloc(s, 44100, 1);
        b.sine3([100, 150, 175] * i, 0.25);
    };
})
)
and then play them with your code example:
(
SynthDef(\playBuffer, { arg out = 0, buf;
    var sig;
    sig = PlayBuf.ar(1, buf, doneAction: Done.freeSelf);
    Out.ar(out, sig);
}).add;

~routine = Routine({
    ~buffers.do({
        arg item;
        Synth(\playBuffer, [\buf, item]);
        item.duration.wait;
    });
});

~routine.play;
)
This works fine; I hear ascending tones. (I changed your example to use single-channel buffers, and removed the .free since you were already using doneAction: Done.freeSelf.) If you're hearing the same sound each time, the problem is likely in the code where you load your buffers, not in the playback.
One gotcha: the duration property of a buffer is not available immediately after you load it; reading an audio file is asynchronous, and SC doesn't know the duration until the load has completed. If you're doing Buffer.read immediately before playing, there's a chance your duration might be e.g. 0 or nil, which would cause unexpected results.
I tried it in a Task, but I think that is essentially the same as what you are doing in a Routine.
You have to put the buffers in an array,
like buffer = [1, 2, 3, 4, 5],
but it is better to code it this way:
buffer = [a.bufnum, b.bufnum, c.bufnum, d.bufnum]
and pass that buffer variable as the second argument of PlayBuf in the SynthDef.
If you hard-code buffer numbers into the array, you might later load other buffers on the server, and it will usually end up playing a buffer you did not want to play.
I'm seeing an issue where I cannot re-select the original bitmap into a DC, causing a memory leak. The pointer to the original bitmap stays the same throughout the program, but the data (from CBitmap::GetBitmap) changes from monochrome to something else. I don't know when the bitmap actually changes, but something in the system is causing it.
CBitmap* cMyClass::mpOldBitmap;
CDC      cMyClass::mCanvasDc;
CBitmap  cMyClass::mCanvasBmp;

void cMyClass::Init()
{
    // One-time initialization
    CDC* pDc = GetDC();
    mCanvasDc.CreateCompatibleDC(pDc);
    mCanvasBmp.CreateCompatibleBitmap(pDc, 10, 10);
    mpOldBitmap = mCanvasDc.SelectObject(&mCanvasBmp);
    ReleaseDC(pDc);

    BITMAP bitmap;
    mpOldBitmap->GetBitmap(&bitmap); // A monochrome bitmap, as expected.
}
void cMyClass::Recreate(int newW, int newH)
{
    // 1. Delete existing bitmap:
    if (mpOldBitmap)
    {
        BITMAP bitmap;
        mpOldBitmap->GetBitmap(&bitmap); // This is no longer the monochrome bitmap. It is 8bpp, with random size.
        CBitmap* pCurrBmp = mCanvasDc.SelectObject(mpOldBitmap); // This fails (NULL). I can't de-select my bitmap.
        mCanvasBmp.DeleteObject(); // This fails too, causing a memory leak. Actually, it fails in CE6 but not in Win32. Regardless, both platforms will have a memory leak.
    }

    // 2. Recreate the bitmap with new size:
    {
        CDC* pDc = GetDC();
        mCanvasBmp.CreateCompatibleBitmap(pDc, newW, newH);
        ReleaseDC(pDc);
    }

    // 3. Finalize
    mpOldBitmap = mCanvasDc.SelectObject(&mCanvasBmp);
}
Any known scenarios where this can happen?
Any debugging tips to break when the bitmap data changes?
Note: in the code I mention "this fails"; I removed the asserts on the returned values to keep the code readable.
Edit: The solution I am using to fix it is CDC::SaveDC and CDC::RestoreDC instead of stashing the pointer. The memory leak went away, and every GDI call succeeds. But I am still curious why the original code leaked; the pointer to the default bitmap should, as far as I know, be a default monochrome bitmap that is probably global in the GDI world.
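For reference, a minimal sketch of that SaveDC/RestoreDC approach (the int member mSavedDcState is hypothetical and not part of the original code; the other names follow the code above):

int cMyClass::mSavedDcState; // hypothetical member: holds the value returned by SaveDC

void cMyClass::Init()
{
    CDC* pDc = GetDC();
    mCanvasDc.CreateCompatibleDC(pDc);
    mCanvasBmp.CreateCompatibleBitmap(pDc, 10, 10);
    ReleaseDC(pDc);

    // Remember the DC state (including its currently selected bitmap)
    // instead of stashing the CBitmap* returned by SelectObject.
    mSavedDcState = mCanvasDc.SaveDC();
    mCanvasDc.SelectObject(&mCanvasBmp);
}

void cMyClass::Recreate(int newW, int newH)
{
    // Restore the saved state so mCanvasBmp is no longer selected, then delete it.
    mCanvasDc.RestoreDC(mSavedDcState);
    mCanvasBmp.DeleteObject();

    CDC* pDc = GetDC();
    mCanvasBmp.CreateCompatibleBitmap(pDc, newW, newH);
    ReleaseDC(pDc);

    mSavedDcState = mCanvasDc.SaveDC();
    mCanvasDc.SelectObject(&mCanvasBmp);
}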
Let's look at the code from the OP:
mpOldBitmap = mCanvasDc.SelectObject(mCanvasBmp);
Because mCanvasBmp is a CBitmap object (not a pointer to CBitmap), the HGDIOBJ conversion operator is invoked first, and then CDC::SelectObject(HGDIOBJ) is called, which returns an HGDIOBJ rather than a CBitmap*. This should give a conversion compiler error, and casting the returned value to CBitmap* would also be wrong.
The correct way to get rid of the problem is to pass a pointer:
mpOldBitmap = mCanvasDc.SelectObject(&mCanvasBmp);
In this case CDC::SelectObject(CBitmap* pBitmap) is called, which returns a CBitmap*.
I hope it's pretty clear. :)
I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header followed by BGR values. I have the textures loaded and running, but I get a gray box instead of the texture. I don't even know how to DEBUG this.
Textures are loaded into a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: [how-to-manage-memory-with-texture-in-opengl][1] ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mipmapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mipmaps you will get a white result when the texture is read.
So either set the texture to a non-mipmapped filter or generate the mipmaps. One way to do the latter is to call glGenerateMipmap after the glTexImage2D call (see the sketch below).
(see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml).
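A minimal sketch of the two options in plain C-style GL (assuming the texture is already bound to GL_TEXTURE_2D; glGenerateMipmap needs GL 3.0+):

// Option 1: turn off mipmapping for this texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Option 2: keep the mipmapped default and actually generate the levels
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);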
It's a very common gl pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good GL debugging tools in, for example, Xcode, but they will not tell you about this case.
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
int printOglError(const char *file, int line)
{
    /* Returns 1 if an OpenGL error occurred, 0 otherwise. */
    GLenum glErr;
    int retCode = 0;

    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s # line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}

#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you when you perform invalid operations (which might just be your case), but you usually still have to find where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend using the open-source library DevIL; it's quite competent at loading various image formats.
Thanks to Felix: by not calling glTexSubImage2D (leaving the texture memory valid but uninitialized) I noticed a remnant pattern left by the default memory. This indicated that the texture was being displayed and that the load was most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled in by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought that OpenGL needed space to return data, so I doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong; what is needed is the flip() method (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer into read mode. Since the buffer is only partially filled, the OpenGL buffer check gives an error. The correct fix is not to double the buffer size; instead, set buffer.position(buffer.capacity()) to mark the buffer as full before calling flip():
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it is helpful to hard-code the contents of the buffer to make sure the OpenGL calls are working, isolating the load problem. Then, once the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
Some ideas about what might be causing the issue:
Your texture is disposed somewhere. I don't know the whole code, but I guess there is a glDeleteTextures somewhere, and that could cause issues if it is called at the wrong time.
Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two textures.
The texture parameters were changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter; see the sketch after this list).
There could be a loading issue when loading the next image (edit: or even the first image). Check whether the first image is displayed without loading the next images. If so, it must be one of the cases above.
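A minimal sketch of that parameter check in C-style GL (assuming the texture in question is currently bound to GL_TEXTURE_2D; the expected values are the GL_NEAREST filters set in the question's setup code):

GLint minFilter = 0, magFilter = 0;
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, &magFilter);

// The setup code in the question used GL_NEAREST for both filters.
if (minFilter != GL_NEAREST || magFilter != GL_NEAREST) {
    printf("texture parameters changed: min=%d mag=%d\n", minFilter, magFilter);
}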
I am developing an application in Atmel Studio 6 using the xMega32a4u. I'm using the TWI libraries provided by Atmel. Everything is going well for the most part.
Here is my issue: In order to update an OLED display I am using (SSD1306 controller, 128x32), the entire contents of the display RAM must be written immediately following the I2C START command, slave address, and control byte so the display knows to enter the data into the display RAM. If the control byte does not immediately precede the display RAM package, nothing works.
I am using a Saleae logic analyzer to verify that the bus is doing what it should.
Here is the function I am using to write the display:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_array[513];

    data_array[0] = SSD1306_DATA_BYTE;
    for (int i = 0; i < 512; ++i){
        data_array[i+1] = buffer[i];
    }

    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);

    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array,
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}
Clearly, this is very inefficient, as each call to this function copies the entire "buffer" array into a new "data_array" array, one element at a time. The point of this is to insert the control byte (SSD1306_DATA_BYTE = 0x40) into the array so that the entire package is sent at once and the control byte is in the right place. I could make the original "buffer" array one element larger and add the control byte as the first element to skip this copy, but that makes the size 513 rather than 512 and might mess with some of the text/graphics functions that manipulate this array and depend on it being the correct size.
Now, I thought I could write the code like this:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_byte = SSD1306_DATA_BYTE;

    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);

    twi_package_t data_control_byte = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = &data_byte, // .buffer expects a pointer
        .length = 1
    };
    twi_master_write(&TWIC, &data_control_byte);

    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = buffer,
        .length = 512
    };
    twi_master_write(&TWIC, &buffer_send);
}
That doesn't work. The first twi_master_write call sends START, address, control byte, STOP. Then the next one sends START, address, data buffer, STOP. Because the control byte is missing from the latter transaction, the display ignores the data. All I need is to insert a 0x40 byte between the address byte and the buffer array when it is sent over the I2C bus. twi_master_write is a function provided in the Atmel TWI libraries; I've tried to examine the libraries to figure out its inner workings, but I can't make sense of them.
Surely, instead of figuring out how to rewrite a twi_write function to work the way I need, there is an easier way to prepend this control byte, ideally one that is not as wasteful of clock cycles as my first code example? Realistically the display still updates very fast, more than enough for my needs, but that does not change the fact that this is inefficient code.
I appreciate any advice you all may have. Thanks in advance!
How about having buffer and data_array point to the same uint8_t[513] array, but with buffer starting at its second element? Then you can continue to use buffer as you do today, and also use data_array directly without first having to copy all the elements from buffer.
uint8_t data_array[513];
uint8_t *buffer = &data_array[1];
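A minimal sketch of how OLED_buffer might look with that aliasing in place (untested; the rest of the code keeps writing its 512 bytes through buffer as before):

uint8_t data_array[513];
uint8_t *buffer = &data_array[1]; // existing code keeps using buffer[0..511]

void OLED_buffer(){ // Used to write contents of display buffer to OLED
    data_array[0] = SSD1306_DATA_BYTE; // control byte sits right before the pixel data

    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);

    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array, // control byte + 512 bytes of display RAM in one transaction
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}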
I'm getting a CVImageBufferRef from my AVCaptureSession, and I'd like to take that image buffer and upload it over the network. To save space and time, I would like to do this without rendering the image into a CIImage and NSBitmapImage, which is the solution I've seen everywhere (like here: How can I obtain raw data from a CVImageBuffer object).
This is because my impression is that the CVImageBuffer might be compressed, which is awesome for me, and if I render it, I have to uncompress it into a full bitmap and then upload the whole bitmap. I would like to take the compressed data (realizing that a single compressed frame might be unrenderable later by itself) just as it sits within the CVImageBuffer. I think this means I want the CVImageBuffer's base data pointer and its length, but it doesn't appear there's a way to get that within the API. Anybody have any ideas?
CVImageBuffer itself is an abstract type. Your image should be an instance of either CVPixelBuffer, CVOpenGLBuffer, or CVOpenGLTexture. The documentation for those types lists the functions you can use for accessing the data.
To tell which type you have, use the GetTypeID functions:
CVImageBufferRef image = …;
CFTypeID imageType = CFGetTypeID(image);

if (imageType == CVPixelBufferGetTypeID()) {
    // Pixel Data
}
else if (imageType == CVOpenGLBufferGetTypeID()) {
    // OpenGL pbuffer
}
else if (imageType == CVOpenGLTextureGetTypeID()) {
    // OpenGL Texture
}
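For the common CVPixelBuffer case, a minimal sketch of getting at the raw bytes (note this is the uncompressed pixel data, and the buffer must stay locked while you read it):

if (imageType == CVPixelBufferGetTypeID()) {
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)image;

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    void  *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dataSize    = CVPixelBufferGetDataSize(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // ... copy or send baseAddress / dataSize here (watch out for row padding) ...

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}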