OpenGL texture loading issue - debugging

This is a fairly vague problem, so please feel free to ask me to clarify anything about this project.
I'm working on a very large application, and recently a very perplexing bug has cropped up regarding the texturing. Some of the textures we load appear to load fine - I've stepped through the code, and it runs - but all OpenGL renders for those textures is a weird pink/white striped pattern.
What would you suggest to even begin debugging this situation?
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
Some textures are being loaded, some are not. They're all loaded in the exact same way.
I've made sure that all the textures exist.
The "pink/white" textures are definitely loaded in memory - they become visible shortly after I load any other texture into OpenGL.
I'm perplexed, and have no idea what else can be wrong. Is there an OpenGL command that can be called after glTexImage that would force the texture to become useable?
Edit:
It's not that the commands are failing; it's mainly a timing issue. The pink/white textures show up for a while, until more textures are loaded. It's almost as if the textures are queued up and the queue just stalls at some point.
Next Edit: I got the GLIntercept log working correctly, and this is what it output (before the entire program crashed):
http://freetexthost.com/1kdkksabdg
Next Edit: I know for a fact the textures are loaded in OpenGL memory, but for some reason they're not rendering in the program themselves.

If your texture is colored incorrectly, most likely you're loading the RGB channels in the wrong order. Make sure the glTexImage2D call uses the right enums for your image format: the number of components must be correct, and the format argument must match the actual order of the color channels in your pixel data.
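As a minimal sketch (not the asker's code; it assumes the decoded image is 32-bit BGRA, as many Windows loaders produce, and that width, height and pixels come from your own loader), an upload where the enums match the data would look like this:
GLuint uploadTextureBGRA(const unsigned char *pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          // rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA8,                          // internal format on the GPU
                 width, height, 0,
                 GL_BGRA,                           // channel order of the data passed in
                                                    // (GL_BGRA needs GL 1.2+, or glext.h on Windows)
                 GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
If the format enum doesn't match the real in-memory layout, the colors come out swapped or garbled, so this is worth checking even if it may not be the whole story here.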
Although probably not related to your textures showing up wrong: OpenGL doesn't support multithreaded draws, so make sure you're not doing any drawing work on a different thread than the one that owns the context.
Edit: Do you have a reference renderer so you can verify that the image pixels are being loaded as expected? I would strongly recommend writing a small routine that loads and then immediately saves the pixels to a file, so you can be sure you're getting the right texture data.
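For instance, a tiny helper along these lines (hypothetical name, assuming tightly packed RGB8 data) lets you open the decoded pixels in any image viewer before they ever reach OpenGL:
#include <cstdio>

// Writes the raw pixels you are about to hand to glTexImage2D as a binary
// PPM, so the decode step can be verified independently of OpenGL.
void dumpPPM(const char *path, const unsigned char *rgb, int width, int height)
{
    FILE *f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);
    std::fwrite(rgb, 3, (size_t)width * height, f);
    std::fclose(f);
}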

Check your texture coordinates. If they are set wrong, you can see just one or two texels mapped onto entire primitives. Remember, OpenGL is a state machine; check whether you're changing the texture coordinate state at the wrong time. You may be changing the texture coordinates later in your code, so that by the time you get back to redrawing these elements the state is no longer what you expect for mapping the texture onto the geometry.
If it is merely a timing issue where the texture-loading OpenGL calls aren't executed in time, and your threading code is correct, try adding a call to glFlush() after loading the textures. glFlush() forces all pending OpenGL commands to start executing (glFinish() goes further and blocks until they have completed).
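A sketch of where those calls would go in the loading thread (loadAllTextures() is a hypothetical stand-in for your own loading routine):
loadAllTextures();   // every glTexImage2D / glTexParameteri call for this batch
glFlush();           // ask the driver to start executing the queued commands
// glFinish();       // stricter alternative: block until the uploads have completed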

When you say:
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
This doesn't sound like strong enough protection to me: remember that OpenGL is a state machine with a large amount of internal state. You need to make sure the OpenGL state is what you expect it to be when you make your calls, and that certain sequences of calls don't get interleaved with calls from other threads.
I'm no expert on OpenGL thread-safety, but this seems to me where your problem might lie.
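To illustrate the hazard (a purely hypothetical interleaving; lock(), unlock() and drawTexturedQuad() stand in for your own mutex and draw code), every individual call below is protected, but the sequence is not:
// Thread A:
lock();  glBindTexture(GL_TEXTURE_2D, texA);  unlock();

// Thread B happens to run here:
lock();  glBindTexture(GL_TEXTURE_2D, texB);  unlock();

// Thread A again:
lock();  drawTexturedQuad();  unlock();   // draws with texB bound, not texA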

Check the size and compression of the images you use as textures. I think OpenGL texture sizes have to be a power of 2, at least on older implementations without the ARB_texture_non_power_of_two extension ...
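A quick sanity check you could run on each image before uploading it (hypothetical helper; only relevant on implementations that lack ARB_texture_non_power_of_two):
// Returns true if n is a power of two (1, 2, 4, 8, ...).
bool isPowerOfTwo(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}

// e.g. assert(isPowerOfTwo(width) && isPowerOfTwo(height));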

You can't load a texture in one thread and use it in a different thread, or you will see a beautiful white texture. To make this possible, you must move the OpenGL context between the threads (make it current in the loading thread) before using any OpenGL function there.

If you are using GLIntercept to check your code, ensure to enable:
ThreadChecking = True;
in the gliConfig.ini file.
Viewing the log, it seems that quite a few OpenGL calls are being made outside the main context.

It is possible to load textures in another thread without getting a white texture. The problem is that, once you have initialized the OpenGL window, the OpenGL context is "bound" to this thread. You have to deactivate the context in the main thread while you're loading textures, and before you start loading them, you have to activate the context in the loading thread.
You can use this class:
Context.h:
#ifndef CONTEXT_H
#define CONTEXT_H

#include <windows.h>   // HGLRC, HDC, wglMakeCurrent
#include <GL/glut.h>

class Context
{
public:
    static Context* getInstance();

    // Make the context current on the calling thread / release it again.
    void bind();
    void unbind();

private:
    Context();
    Context(const Context&);   // not copyable
    ~Context();

    static Context *instance;

    HGLRC hglrc;
    HDC hdc;

    // Deletes the singleton at program exit.
    class Guard
    {
    public:
        ~Guard()
        {
            if (Context::instance != 0) {
                delete Context::instance;
            }
        }
    };
    friend class Guard;
};

#endif
Context.cpp:
#include "Context.h"

Context* Context::getInstance()
{
    static Guard guard;
    if (Context::instance == 0) {
        Context::instance = new Context();
    }
    return Context::instance;
}

// Makes the stored context current on the calling thread.
void Context::bind()
{
    wglMakeCurrent(this->hdc, this->hglrc);
}

// Releases the context so another thread may bind it.
void Context::unbind()
{
    wglMakeCurrent(NULL, NULL);
}

// Captures whatever context and DC are current when the singleton is created.
Context::Context()
{
    this->hglrc = wglGetCurrentContext();
    this->hdc = wglGetCurrentDC();
}

Context::~Context()
{
}

Context *Context::instance = 0;
And that's what you have to do:
int state = 0;   // shared with the loader thread; in real code guard it (e.g. with an atomic)

void loop();
void run();

int main(int argc, char **argv)
{
    glutInit(&argc, argv);

    // Create the window; this also creates the OpenGL context on this thread.
    glutCreateWindow(TITLE);

    // Set your loop function.
    glutDisplayFunc(&loop);

    // Initialize the singleton for the first time (it captures the current context).
    Context::getInstance()->bind();

    glutMainLoop();
    return 0;
}

void loop()
{
    if (state == 0) {
        // Hand the context over to the loader thread.
        Context::getInstance()->unbind();
        startThread(&run);   // start your worker thread (placeholder for your own threading API)
        state = -1;          // fall through to the no-texture branch until loading finishes
    } else if (state == 1) {
        // Rebind the context to the main thread (just once).
        Context::getInstance()->bind();
        state = 2;
    } else if (state == 2) {
        // Draw your textures, lines, etc.
    } else {
        // Draw something (but no textures).
    }
}

void run()
{
    Context::getInstance()->bind();
    // Load textures...
    Context::getInstance()->unbind();
    state = 1;
}

Related

Graphics card getting activated during video? A test using a Java (Processing) program

I have an application I was running that I created with Processing, where I was drawing a lot of objects to the screen. Normally the sketch runs at 60 fps, and predictably, when a lot of stuff is drawn to the screen, the framerate starts to drop. I wanted to see what changing Processing's renderer would do, as there is a P3D option when you set the size. P3D is a '3D graphics renderer that makes use of OpenGL-compatible graphics hardware.'
I noticed that the performance improved when I used this, in that I could draw more objects to the screen before the framerate dropped, without really having to change anything in the code. Then I noticed something odd.
I started up the computer the next day and ran the program again, and noticed that suddenly the framerate was lower, around 50 fps. There didn't seem to be anything wrong with my computer, as it wasn't doing anything else. Then I thought it probably had something to do with the graphics card. I opened a YouTube video and it seemed to be fine. Then I ran the sketch again and it went back up to 60 fps.
I just want to know what might be going on here hardware-wise. I'm using an NVIDIA GTX 970 (I think it's the OC edition). It seems to me that watching the video sort of jump-started the card and made it perform properly on the Processing sketch. Why didn't the sketch itself make that happen?
As an example:
Vector<DrawableObject> objects;

void setup()
{
    size(400, 400, P3D); // here is the thing to change; P3D is an option
    objects = new Vector<DrawableObject>();
    for (int i = 0; i < 400; i++)
    {
        objects.add(new DrawableObject());
    }
}

void draw()
{
    for (int i = 0; i < objects.size(); i++)
    {
        DrawableObject o = objects.get(i);
        o.run();
    }
}

Is the cairo context lifetime limited?

Unfortunately I have to ask because the documentation doesn't specify this very well.
Are the cairo_t and the cairo_surface_t lifetimes limited?
In many examples or samples found around the web, the surface and the context are almost always recreated (more often the context) for each repaint operation.
Actually it seems to work almost fine if I only create the surface and the context once, lazily, like here, when an X11 window is resized:
void updateWindowSize()
{
    if (!display || !_win)
        return;

    int w = cast(uint) lround(width);
    int h = cast(uint) lround(height);

    if (!_osSetSizePos)
        XResizeWindow(display, _win, w, h);

    if (!cairoSurface)
        cairoSurface = cairo_xlib_surface_create(display, _win, _visual, w, h);
    cairo_xlib_surface_set_size(cairoSurface, w, h);

    if (!_cr)
        _cr = cairo_create(cairoSurface);

    _cv.setContext(_cr); // _cv = canvas
}
However, the context has to be passed to the canvas each time (_cv.setContext(_cr);), otherwise the settings (color, pen width, ...) are never applied, which is odd since the context itself never changes.
This completely goes against what I've seen before, including the answers to this question.
The underlying problem is that if the context is recreated for each redraw, then operations such as cairo_set_source_rgba, cairo_set_source, cairo_set_line_width, etc. have to be repeated for each redraw too, which can be seen as a performance issue.
No, the lifetimes aren't limited (at least by cairo). You can use both for as long as you want. You don't even need to recreate the surface for window resizes, because there is cairo_xlib_surface_set_size(). (There even is cairo_xlib_surface_set_drawable() which can change the drawable, but personally I don't like that function.)
However, libraries like GTK might add requirements of their own. For example, GTK does double buffering, and there the contexts can't be cached.
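A minimal sketch of that create-once pattern with the cairo C API (the X11 event loop around it is assumed, and the drawing calls are just placeholders):
#include <X11/Xlib.h>
#include <cairo/cairo-xlib.h>

static cairo_surface_t *surf;
static cairo_t *cr;

// Called once, after the X11 window exists.
void initCairo(Display *dpy, Window win, Visual *visual, int w, int h)
{
    surf = cairo_xlib_surface_create(dpy, win, visual, w, h);
    cr = cairo_create(surf);
}

// Called on ConfigureNotify: neither object needs to be recreated.
void onResize(int w, int h)
{
    cairo_xlib_surface_set_size(surf, w, h);
}

// Called on Expose: settings made on cr (source, line width, ...) persist between frames.
void onExpose()
{
    cairo_set_line_width(cr, 2.0);
    cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
    cairo_rectangle(cr, 10, 10, 100, 80);
    cairo_stroke(cr);
    cairo_surface_flush(surf);
}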

Can images in a loaded SWF be smoothed?

I'm trying to load an SWF into another in AS3/Haxe. The loaded SWF contains some images, but only inside some Shape.graphics elements (like graphics.beginBitmapFill(); ...).
These images are not smoothed and look jaggy.
Can these images be smoothed somehow at runtime?
Any hack is of interest! :)
Thanks in advance!
Tom
Update: Sorry, but I forgot to mention that I'm loading several AS2 SWFs (AVM1) into one AS3 SWF (AVM2) with AVM2Loader, which hacks the loaded bytes and converts the AVM1 SWFs into AVM2 - it works very well. :)
So, in these SWFs I need to find the images/bitmaps, but I only found the Shapes whose graphics elements hold the 'images'. If I clear these graphics, all the images are gone, so I think the images live in some graphics.beginBitmapFill(...) call, without smoothing. I want to reach them and switch smoothing on at runtime, if possible.
(Sorry, if the first time I was not enough clear.)
Edit (Jan 23 '14): I found a solution. It is not fast and requires Flash Player 11.6. Every MovieClip's graphics property has a new readGraphicsData function, which returns all the graphics commands (a Vector of IGraphicsData) used to draw the whole MovieClip. If I iterate over these commands, change every bitmapFill command's smooth parameter to true, and redraw the MovieClip, it comes out smoothed and nice.
That's it. Not fast, but working.
function onLoad(event):Void
{
    pic.forceSmoothing = true;
}
Smoothing is a property of bitmaps that's off by default.
var image = new Bitmap(bitmapData);
image.smoothing = true;
Typically, your bitmapData will be in loader.content.bitmapData when loading externally, but it's up to you where you've stored it.
Update:
If you want to smooth all images in a loaded SWF without any knowledge of the structure of that SWF, then you'll have to recursively dig through the hierarchy of the SWF and, depending on whether or not the object is a Bitmap, turn on smoothing.
function recursivelySmooth(obj:*):void {
    for (var i:int = 0; i < obj.numChildren; i++) {
        var item:* = obj.getChildAt(i);
        if (item is Bitmap) {
            item.smoothing = true;
        } else if (item.hasOwnProperty("numChildren") == true) {
            recursivelySmooth(item);
        }
    }
}
This was written freehand, so you may have to double-check that everything is correct, but that's the basic idea. Just call recursivelySmooth() on your SWF, and it'll dig through all objects that can have child elements and smooth them.

GL_INVALID_VALUE from glFramebufferTexture2DEXT only after delete/realloc texture

I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and texparams so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
    if (texture != null)
    {
        var size = (texture.Width << 16) | texture.Height;
        FrameBufferInfo info;
        if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
        {
            info = new FrameBufferInfo();
            info.Width = texture.Width;
            info.Height = texture.Height;

            GL.GenFramebuffersEXT(1, _buffer);
            info.FrameBuffer = _buffer[0];
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);

            GL.GenRenderbuffersEXT(1, _buffer);
            info.DepthBuffer = _buffer[0];
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
            GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, 0);
            GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);

            _mapSizeToFrameBufferInfo.Add(size, info);
        }
        else
        {
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
        }
        GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
    }
    else
    {
        GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
        GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
    }
    ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFrameBufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore this too, I can see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: If, when reallocating textures, I call glGenTextures() before glDeleteTextures() - to avoid getting the same ID back - the problem goes away. I don't want to do this as it's a stupid kluge and increases my chances of out-of-memory errors. I'm theorising it's because GL was/is using the texture in a recent FBO and has now decided that texture ID is in use, or is no longer valid in some way and so isn't acceptable? Maybe?
After the problem occurs, gDEBugger shows that both FBOs (the original one with the smaller depth buffer and previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the frame buffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture sizes via glGetTexLevelParameter() before I use the texture; it does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.
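For reference, a sketch of the workaround the asker describes (names like oldTex, newWidth and newHeight are placeholders), which works precisely because glGenTextures only reserves a name:
// Reserve the replacement name before freeing the old one, so the driver
// cannot hand the same ID straight back.
GLuint newTex = 0;
glGenTextures(1, &newTex);        // reserves a name only; no storage is allocated yet
glDeleteTextures(1, &oldTex);     // the old name is now free, but newTex != oldTex
glBindTexture(GL_TEXTURE_2D, newTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // storage is allocated here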

How to toggle fullscreen on Mac OSX using SDL 1.3 so that it actually works?

There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. OK, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even tried deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks; oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes MacOS X so slow in catching up with OpenGL development.
