I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and texparams so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
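For concreteness, the resize path boils down to something like this (a plain C-style sketch of what the C# wrapper does; the GL_RGBA8 internal format and the function shape are my assumptions, the call sequence follows the description above):

// Sketch of the resize path: free the old texture, then create a new one.
// GL tends to hand back the name that was just freed.
GLuint ResizeRenderTargetTexture(GLuint oldTex, int newWidth, int newHeight)
{
    glDeleteTextures(1, &oldTex);              // free the old texture name
    GLuint newTex = 0;
    glGenTextures(1, &newTex);                 // often returns the same name just freed
    glBindTexture(GL_TEXTURE_2D, newTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return newTex;
}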
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
    if (texture != null)
    {
        var size = (texture.Width << 16) | texture.Height;
        FrameBufferInfo info;
        if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
        {
            info = new FrameBufferInfo();
            info.Width = texture.Width;
            info.Height = texture.Height;

            GL.GenFramebuffersEXT(1, _buffer);
            info.FrameBuffer = _buffer[0];
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);

            GL.GenRenderbuffersEXT(1, _buffer);
            info.DepthBuffer = _buffer[0];
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
            GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
            GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, 0);
            GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);

            _mapSizeToFrameBufferInfo.Add(size, info);
        }
        else
        {
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
        }
        GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
    }
    else
    {
        GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
        GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
    }

    ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFramebufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore that too, I can see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: if, when reallocating textures, I call glGenTextures() before glDeleteTextures() - to avoid getting the same ID back - the problem goes away. I don't want to do this, as it's a stupid kluge and increases my chances of out-of-memory errors. I'm theorising that GL was/is using the texture in a recent FBO and has now decided that texture ID is in use, or is no longer valid in some way, and so isn't acceptable? Maybe?
After the problem gDEBugger shows that both FBOs (the original one with the smaller depth buffer and previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the framebuffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change, but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture's size via glGetTexLevelParameter() before I use it; it does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.
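To make the distinction concrete (a minimal sketch; the GL_RGBA8 format is arbitrary): the name and the storage come from two different calls, and only the second one costs memory.

GLuint name = 0;
glGenTextures(1, &name);             // reserves a texture name only; no storage yet
glBindTexture(GL_TEXTURE_2D, name);  // still no storage
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // storage is allocated here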
Is there a limitation on how much Direct2D tile size can grow?
In D2D1_RENDERING_CONTROLS I can set the tiling of the Direct2D buffering.
D2D1_RENDERING_CONTROLS r4c = {};
m_d2dContext->GetRenderingControls(&r4c);
if (r4c.tileSize.width < (UINT32)wi || r4c.tileSize.height < (UINT32)he)
{
    r4c.tileSize.width = wi;
    r4c.tileSize.height = he;
    m_d2dContext->SetRenderingControls(r4c);
    m_d2dContext->GetRenderingControls(&r4c);
    nop();
}
Up to 4K resolution, the rendering of an effect gets the whole texture in one pass.
When I go up to 8K, there is tiling, indicating that Direct2D pushes the texture through in more than one pass.
This is a killer for my custom HLSL effects that require the whole texture to operate, such as a special lighting effect.
Is this an intended feature? A bug? An undetected issue? Would I be forced to go to OpenGL rendering eventually?
This is not related to my graphics card - I tested with an RTX 2070 and the same error occurs.
Edit: here is a full VS 2019 solution with a simple swirl effect.
// When you try plain 1920x1080 or 3840x2160, ok
int wi = 7680;
int he = 4320;
// With the above, the swirling is tiled x4.
Edit: On EndDraw(), I have this in debug output:
Exception thrown at 0x00007FFBE6324B89 in d2dbug.exe: Microsoft C++ exception: _com_error at memory location 0x000000EBC4D5CE58.
Exception thrown at 0x00007FFBE6324B89 in d2dbug.exe: Microsoft C++ exception: _com_error at memory location 0x000000EBC4D5CE58.
Exception thrown at 0x00007FFBE6324B89 in d2dbug.exe: Microsoft C++ exception: _com_error at memory location 0x000000EBC4D5CE58.
There is an answer from Microsoft.
I would recommend using a bitmap as the input to the effect rather than the image source. Simply drawing the image source into that bitmap should work. They could also create the bitmap from a WIC bitmap source using the D2D API CreateBitmapFromWicBitmap.
This works, although the output of an effect is an ID2D1Image, which then has to be converted to an ID2D1Bitmap by the code here.
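For anyone hitting the same thing, the conversion is essentially "create a bitmap with D2D1_BITMAP_OPTIONS_TARGET and draw the image into it". A rough sketch (error handling omitted; m_d2dContext, wi and he come from the code above, and effectOutput stands for the ID2D1Image returned by ID2D1Effect::GetOutput):

// Sketch: render an ID2D1Image (an effect's output) into an ID2D1Bitmap1
// so it can be used as the input of the next effect.
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
m_d2dContext->CreateBitmap(D2D1::SizeU((UINT32)wi, (UINT32)he), nullptr, 0, &props, &bitmap);

Microsoft::WRL::ComPtr<ID2D1Image> oldTarget;
m_d2dContext->GetTarget(&oldTarget);          // remember the current target
m_d2dContext->SetTarget(bitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->Clear();
m_d2dContext->DrawImage(effectOutput.Get());  // effectOutput is the ID2D1Image to convert
m_d2dContext->EndDraw();
m_d2dContext->SetTarget(oldTarget.Get());     // restore the original target
// "bitmap" can now be passed to SetInput() of the next effect.

The cost is one extra intermediate render per effect, which is where the performance concern below comes from.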
I'm still discussing with them the performance issues that might occur when there are too many effects in the chain and each of them has to convert manually to an ID2D1Bitmap.
I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for writing this application to be cross-platform. The intention is to write in platform-agnostic C++ code with SDL and a rendering engine which can be swapped at run-time. Behind this interface is where all Apple-specific code will live. However, I strongly suspect that part of what's going wrong is something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);
// Create a metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;
CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;
_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;
[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal. Though, in OpenGL, the framework creates a depth buffer for me quite automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                       width:drawableSize.width
                                                      height:drawableSize.height
                                                   mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = @"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is differences in the coordinate systems used for Normalized Device Coordinates in Metal versus OpenGL. In Metal, NDC are in the space [-1,+1]x[-1,+1]x[0,1]. In OpenGL, NDC are [-1,+1]x[-1,+1]x[-1,+1]. If I simply take the projection matrix produced by glm::perspective and shove it through Metal then results will not be as expected. In order to compensate for the NDC space differences when rendering with Metal, that projection matrix must be left-multiplied by a scaling matrix with (1, 1, 0.5, 1) on the diagonal.
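For reference, the scale-and-offset form of this correction, which maps the full OpenGL z range [-1, +1] onto Metal's [0, 1] (i.e. z' = 0.5·z + 0.5·w), can be written as a matrix that left-multiplies the GL projection matrix:

M_{\text{Metal}} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \tfrac{1}{2} & \tfrac{1}{2} \\
0 & 0 & 0 & 1
\end{pmatrix}
M_{\text{OpenGL}}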
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one, and replaced the solution with a better one.
So I've set up CVPixelBuffers and tied them to OpenGL FBOs successfully on iOS. But now trying to do the same on OSX has me snagged.
The textures returned by CVOpenGLTextureCacheCreateTextureFromImage come back with a GL_TEXTURE_RECTANGLE target instead of GL_TEXTURE_2D.
I've found the kCVOpenGLBufferTarget key, but it seems like it is supposed to be used with CVOpenGLBufferCreate, not CVPixelBufferCreate.
Is it even possible to get GL_TEXTURE_2D targeted textures on OSX with CVPixelBufferCreate, and if so how?
FWIW a listing of the CV PBO setup:
NSDictionary *bufferAttributes = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                    (__bridge NSString *)kCVPixelBufferWidthKey : @(size.width),
                                    (__bridge NSString *)kCVPixelBufferHeightKey : @(size.height),
                                    (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ } };
if (pool)
{
    error = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderTarget);
}
else
{
    error = CVPixelBufferCreate(kCFAllocatorDefault, (NSUInteger)size.width, (NSUInteger)size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)bufferAttributes, &renderTarget);
}
ZAssert(!error, @"Couldn't create pixel buffer");

error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, [[NSOpenGLContext context] CGLContextObj], [[NSOpenGLContext format] CGLPixelFormatObj], NULL, &textureCache);
ZAssert(!error, @"Could not create texture cache.");

error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, &renderTexture);
ZAssert(!error, @"Couldn't create a texture from cache.");
GLuint reference = CVOpenGLTextureGetName(renderTexture);
GLenum target = CVOpenGLTextureGetTarget(renderTexture);
UPDATE: I've been able to successfully use the resulting GL_TEXTURE_RECTANGLE textures. However, this will cause a lot of problems with the shaders for compatibility between iOS and OSX. And anyway I'd rather continue to use normalised texture coordinates.
If it isn't possible to get GL_TEXTURE_2D textures directly from a CVPixelBuffer in this manner, would it be possible to create a CVOpenGLBuffer and have a CVPixelBuffer attached to it to pull the pixel data?
Just came across this, and I'm going to answer it even though it's old, in case others encounter it.
iOS uses OpenGL ES (originally 2.0, then 3.0). OS X uses regular old (non-ES) OpenGL, with a choice of the Core profile (3.2+, no legacy features) or the legacy/Compatibility profile (which stops at 2.1).
The difference here is that OpenGL (non-ES) was designed a long time ago, when there were many restrictions on texture sizes. As cards lifted those restrictions, extensions were added, including GL_TEXTURE_RECTANGLE. Now it's no big deal for any GPU to support any size texture, but for API compatibility reasons they can't really fix OpenGL. Since OpenGL ES is technically a parallel, but separate, API, which was designed much more recently, they were able to correct the problem from the beginning (i.e. they never had to worry about breaking old stuff). So for OpenGL ES they never defined a GL_TEXTURE_RECTANGLE, they just defined that GL_TEXTURE_2D has no size restrictions.
Short answer - OS X uses Desktop OpenGL, which for legacy compatibility reasons still treats rectangle textures separately, while iOS uses OpenGL ES, which places no size restrictions on GL_TEXTURE_2D, and so never offered a GL_TEXTURE_RECTANGLE at all. Thus, on OS X, CoreVideo produces GL_TEXTURE_RECTANGLE objects, because GL_TEXTURE_2D would waste a lot of memory, while on iOS, it produces GL_TEXTURE_2D objects because GL_TEXTURE_RECTANGLE doesn't exist, nor is it necessary.
It's an unfortunate incompatibility between OpenGL and OpenGL ES, but it is what it is and there's nothing to be done but code around it. Or, now, you can (and probably should consider) moving on to Metal.
As this appears to have been left dangling and is something I recently dealt with: no, GL_TEXTURE_RECTANGLE appears to be the only option. To get to a GL_TEXTURE_2D you're going to have to render to texture.
FWIW, as of 2023, the modern CGLTexImageIOSurface2D is much faster than CVOpenGLTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture. Just ensure your CVPixelBuffers are IOSurface-backed by including (id)kCVPixelBufferIOSurfacePropertiesKey: @{} in the attributes you pass to [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:].
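Roughly, the call looks like this (a sketch; cglContext is the CGLContextObj of your NSOpenGLContext and pixelBuffer is an IOSurface-backed CVPixelBufferRef, the variable names are mine):

// Sketch: wrap an IOSurface-backed CVPixelBuffer in a rectangle texture.
// Needs <OpenGL/CGLIOSurface.h> and <CoreVideo/CoreVideo.h>.
IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
GLsizei w = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
GLsizei h = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
CGLTexImageIOSurface2D(cglContext, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                       w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                       surface, 0 /* plane */);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);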
You will still be getting a GL_TEXTURE_RECTANGLE, but you'll be getting it way faster. I made a little shader so that I can render the GL_TEXTURE_RECTANGLE into a GL_TEXTURE_2D bound to a frame buffer.
There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. Ok, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside of a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks? Oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes MacOS X so slow at catching up with OpenGL development.
This is a very vague problem, so please feel free to clarify anything about this project.
I'm working on a very large application, and recently a very perplexing bug has cropped up regarding the texturing. Some of the textures we load appear to load correctly - I've stepped through the code, and it runs - but all OpenGL renders for those textures is a weird pink/white striped texture.
What would you suggest to even begin debugging this situation?
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
Some textures are being loaded, some are not. They're all loaded in the exact same way.
I've made sure that all textures exist
The "pink/white" textures are definitely loaded in memory - they become visible shortly after I load any other texture into OpenGL.
I'm perplexed, and have no idea what else could be wrong. Is there an OpenGL command that can be called after glTexImage that would force the texture to become usable?
Edit:
It's not about the commands failing, it's mainly a timing issue. The pink/white textures show up for a while, until more textures are loaded. It's almost as if the textures are queued up, and the queue just pauses at some time.
Next Edit: I got the glIntercept log working correctly, and this is what it outputted (before the entire program crashed)
http://freetexthost.com/1kdkksabdg
Next Edit: I know for a fact the textures are loaded in OpenGL memory, but for some reason they're not rendering in the program themselves.
If your texture is colored incorrectly, most likely you're loading the color channels in the wrong order. Make sure in your glTexImage2D call you're using the right enums for your image format: the number of components must be correct, and the format argument must match the order of your pixel data (e.g. GL_RGBA vs. GL_BGRA).
Although probably not related to your textures showing up wrong, OpenGL doesn't support multithreaded draws so make sure you're not doing any drawing work on a different thread than the one that owns the context.
Edit: Do you have a reference renderer so you can verify the image pixels are being loaded as expected? I would strongly recommend writing a small routine to load then immediately save the pixels to a file so you can be sure that you're getting the right texture results.
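Something along these lines is usually enough for that check (a sketch; desktop GL only, since it relies on glGetTexImage, and it assumes the texture you want to inspect is currently bound to GL_TEXTURE_2D):

#include <cstdio>
#include <vector>

// Debug helper: read back the currently bound GL_TEXTURE_2D and dump it as a
// binary PPM so the pixel content and channel order can be checked by eye.
void DumpBoundTexture(const char *path, int width, int height)
{
    std::vector<unsigned char> rgba(width * height * 4);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

    FILE *f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    for (size_t i = 0; i < rgba.size(); i += 4)
        fwrite(&rgba[i], 1, 3, f);  // drop alpha, keep RGB
    fclose(f);
}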
Check your texture coordinates. If they are set wrong, you can see just one or two texels mapped across entire primitives. Remember, OpenGL is a state machine. Check whether you're changing the texture coordinate state at the wrong time: you may be setting the texture coordinates at a later point in your code, so by the time you get back to redrawing these elements the state is no longer what you expect for mapping your texture.
If it is merely a timing issue where the texture loading OpenGL calls aren't executed in time, and your threading code is correct, try adding a call to glFlush() after loading the textures. glFlush() causes all pending OpenGL commands to execute.
When you say:
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
This doesn't sound like strong enough protection to me: remember that OpenGL is a state machine with a large amount of internal state. You need to make sure the OpenGL state is what you expect it to be when you are making your calls, and that certain sequences of calls don't get interrupted by calls from other threads.
I'm no expert on OpenGL thread-safety, but this seems to me where your problem might lie.
Check the size and compression of the images you use as textures. I think OpenGL texture sizes have to be a power of two (at least on older hardware and GL versions).
You can't load a texture in one thread and use it in a different thread, or you will see a beautiful white texture. To make it possible you must move the OpenGL context between the threads before using any OpenGL function.
If you are using GLIntercept to check your code, ensure to enable:
ThreadChecking = True;
in the gliConfig.ini file.
Viewing the log, it seems that quite a few OpenGL calls are being made outside the main context.
It is possible to load textures in another thread without getting a white texture. The problem is that, once you have initialized the OpenGL window, the OpenGL context is "bound" to that thread. You have to deactivate the context in the main thread while you're loading textures, and before you start loading them you have to activate the context in the loading thread.
You can use this class:
Context.h:
#ifndef CONTEXT_H
#define CONTEXT_H

#include <windows.h>   // HGLRC, HDC, wglMakeCurrent
#include <GL/glut.h>

class Context
{
public:
    static Context* getInstance();

    void bind();
    void unbind();

private:
    Context();
    Context(const Context&);
    ~Context();

    static Context *instance;

    HGLRC hglrc;
    HDC hdc;

    class Guard
    {
    public:
        ~Guard()
        {
            if (Context::instance != 0) {
                delete Context::instance;
            }
        }
    };
    friend class Guard;
};

#endif
Context.cpp:
#include "Context.h"
Context* Context::getInstance()
{
static Guard guard;
if(Context::instance == 0) {
Context::instance = new Context();
}
return Context::instance;
}
void Context::bind()
{
wglMakeCurrent(this->hdc, this->hglrc);
}
void Context::unbind()
{
wglMakeCurrent(NULL, NULL);
}
Context::Context()
{
this->hglrc = wglGetCurrentContext();
this->hdc = wglGetCurrentDC();
}
Context::~Context()
{
}
Context *Context::instance = 0;
And that's what you have to do:
int state = 0;

void main()
{
    // Create the window.
    glutCreateWindow(TITLE);
    // Set your loop function.
    glutDisplayFunc(&loop);
    // Initialize the singleton for the 1st time.
    Context::getInstance()->bind();
    glutMainLoop();
}

void loop()
{
    if (state == 0) {
        Context::getInstance()->unbind();
        startThread(&run);
        state = -1; // wait for the loader thread; don't start it again next frame
    } else if (state == 1) {
        // Rebind the context to the main thread (just once).
        Context::getInstance()->bind();
        state = 2;
    } else if (state == 2) {
        // Draw your textures, lines, etc.
    } else {
        // Draw something (but no textures).
    }
}

void run()
{
    Context::getInstance()->bind();
    // Load textures...
    Context::getInstance()->unbind();
    state = 1;
}