Is the cairo context lifetime limited? - X11

Unfortunately I have to ask because the documentation doesn't specify this very well.
Are the cairo_t and the cairo_surface_t lifetime-limited?
In many examples and samples found on the web, the surface and the context are almost always recreated (more often the context) for each repaint operation.
Actually it seems to work almost fine if I only create the surface and the context once, lazily, like here, when an X11 window is resized:
void updateWindowSize()
{
    if (!display || !_win)
        return;

    int w = cast(uint) lround(width);
    int h = cast(uint) lround(height);

    if (!_osSetSizePos)
        XResizeWindow(display, _win, w, h);

    if (!cairoSurface)
        cairoSurface = cairo_xlib_surface_create(display, _win, _visual, w, h);
    cairo_xlib_surface_set_size(cairoSurface, w, h);

    if (!_cr)
        _cr = cairo_create(cairoSurface);

    _cv.setContext(_cr); // _cv = canvas
}
However, the context has to be passed to the canvas each time (_cv.setContext(_cr);), otherwise the settings (color, pen width, ...) are never applied, which is strange since the context itself never changes.
This completely goes against what I've seen before, including the answers to this question.
The underlying problem is that if the context is recreated for each redraw, then operations such as cairo_set_source_rgba, cairo_set_source, cairo_set_line_width, etc. have to be redone for each redraw too, which can be seen as a performance issue.

No, the lifetimes aren't limited (at least not by cairo). You can use both for as long as you want. You don't even need to recreate the surface on window resizes, because there is cairo_xlib_surface_set_size(). (There is even cairo_xlib_surface_set_drawable(), which can change the drawable, but personally I don't like that function.)
However, libraries like GTK might add their own requirements. For example, GTK does double buffering, and in that case contexts can't be cached.
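As an illustration of that long-lived setup, here is a minimal C++-style sketch (my own condensation of the code above, not the asker's D code; handle names are illustrative): the surface and context are created lazily once and only resized afterwards.
// Sketch: keep one cairo surface/context per window; resize instead of recreating.
#include <cairo-xlib.h>

struct CairoBackend
{
    cairo_surface_t *surface = nullptr;
    cairo_t *cr = nullptr;

    void ensure(Display *display, Drawable win, Visual *visual, int w, int h)
    {
        if (!surface)
            surface = cairo_xlib_surface_create(display, win, visual, w, h);
        else
            cairo_xlib_surface_set_size(surface, w, h);   // no need to recreate on resize

        if (!cr)
            cr = cairo_create(surface);                   // valid for as long as we keep it
    }

    ~CairoBackend()
    {
        if (cr) cairo_destroy(cr);
        if (surface) cairo_surface_destroy(surface);
    }
};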

Related

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for writing this application to be cross-platform. The intention is to write in platform-agnostic C++ code with SDL and a rendering engine which can be swapped at run-time. Behind this interface is where all Apple-specific code will live. However, I strongly suspect that part of what's going wrong is something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);

// Create a Metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;

CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;

_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
                                      layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;

[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal, whereas in OpenGL the framework creates a depth buffer for me automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                       width:drawableSize.width
                                                      height:drawableSize.height
                                                   mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = @"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is differences in the coordinate systems used for Normalized Device Coordinates in Metal versus OpenGL. In Metal, NDC are in the space [-1,+1]x[-1,+1]x[0,1]. In OpenGL, NDC are [-1,+1]x[-1,+1]x[-1,+1]. If I simply take the projection matrix produced by glm::perspective and shove it through Metal then results will not be as expected. In order to compensate for the NDC space differences when rendering with Metal, that projection matrix must be left-multiplied by a scaling matrix with (1, 1, 0.5, 1) on the diagonal.
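For what it's worth, here is a minimal sketch of that correction using glm (my own illustration, assuming glm's default -1..1 clip-space depth convention; the common form of the remap scales clip z by 0.5 and also offsets it by 0.5 * w so that the near plane lands at 0):
// Sketch: build an OpenGL-style projection, then remap clip z from [-1, 1] to [0, 1] for Metal.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 metalPerspective(float fovY, float aspect, float zNear, float zFar)
{
    // Standard OpenGL-style projection (NDC z in [-1, 1]).
    glm::mat4 proj = glm::perspective(fovY, aspect, zNear, zFar);

    // z' = 0.5 * z + 0.5 * w, so NDC z ends up in [0, 1].
    glm::mat4 remapZ(1.0f);
    remapZ[2][2] = 0.5f;   // scale z
    remapZ[3][2] = 0.5f;   // offset z by 0.5 * w
    return remapZ * proj;  // left-multiply the correction
}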
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one, and the solution with a better one.

Why might the scaling of SetWindowExtEx be just wrong?

I am trying to scale images/text etc. using MM_ANISOTROPIC, and what I've done is the following (by the way, if the syntax looks a little weird, it's originally from Delphi, so treat the following as pseudocode).
I would expect the following code to produce a rectangle that is 70% of the width of the PaintBox and 30% of its height, yet it doesn't; instead it is noticeably too small.
SetMapMode(hdc,MM_ANISOTROPIC);
SetWindowExtEx(hdc,100,100,0);
SetViewportExtEx(hdc,70,30,0);
Rectangle(hdc, 0,0,PaintBox.width-1,PaintBox.Height-1);
If, on the other hand, I change the code so that SetWindowExtEx has 91 instead of 100 as its parameters (as shown below), then it works, which makes no sense to me at all...
SetMapMode(hdc,MM_ANISOTROPIC);
SetWindowExtEx(hdc,91,91,0);
SetViewportExtEx(hdc,70,30,0);
Rectangle(hdc, 0,0,PaintBox.width-1,PaintBox.Height-1);
My sanity test case was to add the following pseudocode
SetMapMode(hdc,MM_TEXT);
DrawLine(hdc,Round(PaintBox.width*0.7),0,Round(PaintBox.width*0.7),PaintBox.Height-1);
DrawLine(hdc,0,Round(PaintBox.height*0.3),PaintBox.width-1,Round(PaintBox.height*0.3));
I would have expected this to overwrite the right and bottom edges of my original Rectangle, but it does not unless I use that 91,91 SetWindowExtEx.
Can anyone duplicate this?
FURTHER EDIT: Here is my exact original code. I had given pseudocode before to make the question more accessible to non-Delphi users, but one of my commenters wanted the full code to see whether my contention that it was a Delphi quirk was true or not.
The entire project consists of a VCL form with a rectangular paintbox dropped on it, centered so there is space all around it; its OnPaint event was set to the code below, resulting in this image:
procedure TForm11.PaintBox2Paint(Sender: TObject);
var
  hdc: THandle;
  res: TPoint;

  procedure SetupMapMode;
  begin
    SetMapMode(hdc, MM_ANISOTROPIC);
    SetWindowExtEx(hdc, 100, 100, 0);
    SetViewportExtEx(hdc, 70, 30, 0);
    // These lines are required when we're painting to a TPaintBox but can't be used
    // if we're painting to a TPanel; they were NOT in my original question but only
    // got added as part of the answer
    // SetViewportOrgEx(hdc, PaintBox2.Left, PaintBox2.Top, @ZeroPoint);
    // SetWindowOrgEx(hdc, 0, 0, @ZeroPoint);
  end;

begin
  // draw a rectangle to frame the PaintBox surface
  PaintBox2.Canvas.Pen.Style := psSolid;
  PaintBox2.Canvas.Pen.Width := 2;
  PaintBox2.Canvas.Pen.Color := clGreen;
  PaintBox2.Canvas.Brush.Style := bsClear;
  PaintBox2.Canvas.Rectangle(0, 0, PaintBox2.Width - 1, PaintBox2.Height - 1);
  PaintBox2.Canvas.Brush.Style := bsSolid;

  // initialize convenience variable
  hdc := PaintBox2.Canvas.Handle;
  SetTextAlign(hdc, TA_LEFT);

  // as doing things to the PaintBox2.Canvas via Delphi's interface tends to reset
  // everything, I'm ensuring the map mode gets set **immediately** before
  // each drawing call
  SetupMapMode;

  // Draw text at the bottom of the PaintBox2.Canvas (though as it's mapped it
  // SHOULD be 1/3 of the way down and much smaller instead)
  TextOut(hdc, 200, PaintBox2.Height - PaintBox2.Canvas.TextHeight('Ap'), 'Hello, World!', 13);

  PaintBox2.Canvas.Pen.Color := clBlue;
  PaintBox2.Canvas.Brush.Style := bsClear;

  // ensure it's set before doing the rectangle
  SetupMapMode;

  // Redraw the same rectangle as before but in the mapped mode
  Rectangle(hdc, 0, 0, PaintBox2.Width - 1, PaintBox2.Height - 1);
  PaintBox2.Canvas.Brush.Style := bsSolid;

  // reset the map mode to normal
  SetMapMode(hdc, MM_TEXT);

  // draw text at the "same" position as before but unmapped...
  TextOut(hdc, 200, PaintBox2.Height - PaintBox2.Canvas.TextHeight('Ap'), 'Goodbye, World!', 15);

  // Draw lines exactly at 70% of the way across and 30% of the way down;
  // if this works as expected they should overwrite the right and bottom
  // borders of the rectangle drawn in the mapped mode
  PaintBox2.Canvas.Pen.Color := RGB(0, 255, 255);
  PaintBox2.Canvas.MoveTo(Round(PaintBox2.Width * 0.7), 0);
  PaintBox2.Canvas.LineTo(Round(PaintBox2.Width * 0.7), PaintBox2.Height);
  PaintBox2.Canvas.MoveTo(0, Round(PaintBox2.Height * 0.3));
  PaintBox2.Canvas.LineTo(PaintBox2.Width, Round(PaintBox2.Height * 0.3));
end;
Okay, I don't know WHY the following is necessary -- it may be a Delphi quirk, the fact that I'm using a TPaintBox, which is a TGraphicControl rather than a windowed control, or I may be missing some fundamental concept of how this whole mapping mode works, BUT if I add the following code:
ZeroPoint := TPoint.Zero;
SetViewportOrgEx(hdc, PaintBox1.Left, PaintBox1.Top, @ZeroPoint);
SetWindowOrgEx(hdc, 0, 0, @ZeroPoint);
Then it all displays as expected. Anyone have any explanations as to why this is necessary?
EDIT: Okay, I've got a PARTIAL explanation. It has to do with the control I was painting on being a TPaintBox, which is a TGraphicControl rather than a TWinControl. To wit:
TGraphicControl is the base class for all lightweight controls.
TGraphicControl supports simple lightweight controls that do not need the ability to accept keyboard input or contain other controls. Since lightweight controls do not wrap Windows screen objects, they are faster and use fewer resources than controls based on TWinControl.
As such, although they APPEAR to have a separate canvas, I have a sneaking feeling that they are really sharing the form's canvas, which is why, when I switched to a TWinControl descendant, which DOES own its own Windows DC, the display worked as expected without setting the viewport origin.
So it was a Delphi quirk after all...!
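To restate the fix in plain Win32/GDI terms, here is a hedged C++ sketch (the extents and the child-offset parameters are illustrative, not the exact Delphi code): when the DC really belongs to the parent window, the viewport origin has to be moved to the child control's top-left corner before the anisotropic extents behave as expected.
// Sketch: scale logical coordinates by 70/100 horizontally and 30/100 vertically,
// drawing into a child area located at (childLeft, childTop) inside the parent's DC.
#include <windows.h>

void setupAnisotropicMapping(HDC hdc, int childLeft, int childTop)
{
    SetMapMode(hdc, MM_ANISOTROPIC);
    SetWindowExtEx(hdc, 100, 100, nullptr);    // logical extent: 100 x 100 units
    SetViewportExtEx(hdc, 70, 30, nullptr);    // device extent: 70 x 30 pixels per 100 logical units
    // If the DC is the parent's (as with a lightweight control), shift the viewport
    // origin to the child's position so logical (0,0) lands on the child's corner.
    SetViewportOrgEx(hdc, childLeft, childTop, nullptr);
    SetWindowOrgEx(hdc, 0, 0, nullptr);
}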

glfwSwapBuffers() and vertical refresh on Windows

I want to do something that is very trivial with OpenGL and GLFW: I want to scroll a 100x100 white filled rectangle from left to right and back again. The rectangle should be moved by 1 pixel per frame and the scrolling should be perfectly smooth. This is my code:
int main(void)
{
    GLFWwindow *window;
    int i = 0, mode = 0;

    if (!glfwInit())
        return -1;

    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 0, 480, -1, 1);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glColor3f(1.0, 1.0, 1.0);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glRecti(i, 190, i + 100, 290);
        glfwSwapBuffers(window);
        glfwPollEvents();

        if (!mode) {
            i++;
            if (i >= 539) mode = 1;
        } else {
            i--;
            if (i <= 0) mode = 0;
        }
    }

    glfwTerminate();
    return 0;
}
On Mac OS X and Linux this code is working really fine. The scrolling is perfectly sync'ed with the vertical refresh and you cannot see any stuttering or flickering. It is really perfectly smooth.
On Windows, things are more difficult. By default, glfwSwapInterval(1) doesn't have any effect on Windows when desktop compositing is enabled. GLFW docs say that this has been done because enabling the swap interval with DWM compositing enabled can lead to severe jitter. This behaviour can be changed by compiling GLFW with GLFW_USE_DWM_SWAP_INTERVAL defined. In that case, the code above works really fine on Windows as well. The scrolling is perfectly sync'ed and there is no jitter. I tested it on a variety of different machines running XP, Vista, 7, and 8.
However, there has to be a very good reason that made the GLFW authors disable the swap interval on Windows by default, so I suppose there are many configurations where it does indeed lead to severe jitter. Maybe I was just lucky that none of my machines showed it. So defining GLFW_USE_DWM_SWAP_INTERVAL is not really a solution I can live with, because there has to be a reason why it is disabled by default. It somewhat escapes me that the GLFW team didn't come up with a nicer solution, because as it stands, GLFW programs aren't really portable. Take the code above as an example: it will be perfectly sync'ed on Linux and OS X, but on Windows it will run at lightning speed. This somewhat defies GLFW's concept of portability in my eyes.
Given the situation that GLFW_USE_DWM_SWAP_INTERVAL cannot be used on Windows because the GLFW team explicitly warns about its use, I'm wondering what else I should do. The natural solution is of course a timer which measures the time and makes sure that glfwSwapBuffers() is not called more often than the monitor's vertical refresh rate.
However, this also is not as simple as it sounds, since I cannot use Sleep() - it would be much too imprecise. Hence, I'd have to use a polling loop with QueryPerformanceCounter(). I tried this approach and it pretty much works, but CPU usage is of course up to 100% now because of the polling loop. When using GLFW_USE_DWM_SWAP_INTERVAL, on the other hand, CPU usage is a mere 1%.
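For reference, a minimal sketch of the busy-wait limiter described above (my own illustration; the 60 Hz target is an assumption):
// Sketch: cap the frame rate by spinning on QueryPerformanceCounter.
// This matches the polling approach described above - it works, but burns a CPU core.
#include <windows.h>

void waitForNextFrame(LARGE_INTEGER &lastSwap, double targetSeconds /* e.g. 1.0 / 60.0 */)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    do {
        QueryPerformanceCounter(&now);
    } while ((double)(now.QuadPart - lastSwap.QuadPart) / (double)freq.QuadPart < targetSeconds);
    lastSwap = now;
}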
An alternative would be to set up a timer that fires at regular intervals but AFAIK the precision of CreateTimerQueueTimer() is not very satisfying and probably doesn't yield perfectly sync'ed results.
To cut a long story short: I'd like to ask what is the recommended way of dealing with this problem? The example code above is of course just for illustration purposes. My general question is that I'm looking for a clean way to make glfwSwapBuffers() swap buffers in sync with the monitor's vertical refresh on Windows. On Linux and Mac OS X this is already working fine but on Windows there is the problem with severe jitter that the GLFW docs talk about (but which I don't see here).
I'm still somewhat puzzled that GLFW doesn't provide an inbuilt solution to this problem and pretty much leaves it up to the programmer to workaround this. I'm still a newbie to OpenGL but from my naive point of view, I think that having a function that swaps buffers in sync with vertical refresh is a feature of fundamental importance so it escapes me why GLFW doesn't have it on Windows.
So once again my question is: How can I workaround the problem that glfwSwapInterval() doesn't work correctly on Windows? What is the suggested approach to solve this problem? Is there a nicer way than using a poll timer that will hog the CPU?
I think your issue has solved itself by a strange coincidence in timing. This commit was added to GLFW's master branch just a few days ago; it removes GLFW_USE_DWM_SWAP_INTERVAL because GLFW now uses DWM's DwmFlush() API to do the syncing when DWM is in use. The changelog for this commit includes:
[Win32] Removed GLFW_USE_DWM_SWAP_INTERVAL compile-time option
[Win32] Bugfix: Swap interval was ignored when DWM was enabled
So probably grabbing the newest git HEAD for GLFW is all you need to do.
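For context, here is a rough sketch of what that fix amounts to on Windows (my own illustration, not GLFW's actual source): when DWM composition is active, wait on the compositor with DwmFlush() after swapping instead of relying on the WGL swap interval alone.
// Sketch: keep swaps in step with the DWM compositor.
// Requires linking against dwmapi.lib; error handling kept minimal.
#include <windows.h>
#include <dwmapi.h>

static bool dwmCompositionEnabled()
{
    BOOL enabled = FALSE;
    return SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled;
}

static void swapBuffersSynced(HDC hdc)
{
    SwapBuffers(hdc);
    if (dwmCompositionEnabled())
        DwmFlush();   // blocks until the compositor has presented a frame
}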

Can images in a loaded SWF be smoothed?

I'm trying to load one SWF into another in AS3/Haxe. The loaded SWF contains some images - but only on some Shape.graphics elements (like graphics.beginBitmapFill(); ...).
These images are not smoothed and look jaggy.
Can these images be smoothed somehow at runtime?
Any hack is welcome! :)
Thanks in advance!
Tom
Update: Sorry, I forgot to mention that I'm loading several AS2 SWFs (AVM1) into one AS3 SWF (AVM2) with AVM2Loader, which hacks the loaded bytes and converts the AVM1 SWFs into AVM2 - it works very well. :)
So, in these SWFs I need to find the images/bitmaps, but I only found the Shapes whose graphics elements hold the 'images'. If I clear these graphics, all the images are gone, so I think the images live in some graphics.beginBitmapFill(...) calls without smoothing. I want to reach them and switch smoothing on at runtime, if possible.
(Sorry if I wasn't clear enough the first time.)
Edit (Jan 23 '14): I found a solution. It is not fast and requires Flash Player 11.6: every MovieClip's graphics property has a new readGraphicsData function, which returns all the graphics commands (a Vector of IGraphicsData) used to draw the whole MC. Iterating over these commands, changing the smooth parameter of every bitmapFill command to true and redrawing the MC makes it smooth and nice.
That's it. Not fast, but working.
function onLoad(event):Void
{
    pic.forceSmoothing = true;
}
Smoothing is a property of bitmaps that's off by default.
var image = new Bitmap(bitmapData);
image.smoothing = true;
Typically, your bitmapData will be in loader.content.bitmapData when loading externally, but it's up to you where you've stored it.
Update:
If you want to smooth all images in a loaded SWF without any knowledge of the structure of that SWF, then you'll have to recursively dig through its display hierarchy and, wherever an object is a Bitmap, turn on smoothing.
function recursivelySmooth(obj:DisplayObjectContainer):void {
    for (var i:int = 0; i < obj.numChildren; i++) {
        var item:* = obj.getChildAt(i);
        if (item is Bitmap) {
            item.smoothing = true;
        } else if (item is DisplayObjectContainer) {
            recursivelySmooth(item);
        }
    }
}
This was written freehand, so you may have to double-check that everything is correct, but that's the basic idea. Just call recursivelySmooth() on your SWF, and it'll dig through all objects that can have child elements and smooth them.

OpenGL texture loading issue

This is a very vague problem, so please feel free to clarify anything about this project.
I'm working on a very large application, and recently a very perplexing bug has cropped up regarding the texturing. Some of the textures that we are loading are being loaded - I've stepped through the code, and it runs - but all OpenGL renders for those textures is a weird Pink/White striped texture.
What would you suggest to even begin debugging this situation?
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
Some textures are being loaded, some are not. They're all loaded in the exact same way.
I've made sure that all textures exist
The "pink/white" textures are definitely loaded in memory - they become visible shortly after I load any other texture into OpenGL.
I'm perplexed, and have no idea what else could be wrong. Is there an OpenGL command that can be called after glTexImage that would force the texture to become usable?
Edit:
It's not about the commands failing, it's mainly a timing issue. The pink/white textures show up for a while, until more textures are loaded. It's almost as if the textures are queued up, and the queue just pauses at some time.
Next Edit: I got the glIntercept log working correctly, and this is what it outputted (before the entire program crashed)
http://freetexthost.com/1kdkksabdg
Next Edit: I know for a fact the textures are loaded into OpenGL memory, but for some reason they're not being rendered in the program itself.
If your texture is colored incorrectly, most likely you're loading the RGB channels in the wrong order. Make sure that in your glTexImage2D call you're using the right enums for your image format: check that the number of components is correct and that the format argument matches the byte order of your pixel data.
Although probably not related to your textures showing up wrong, OpenGL doesn't support multithreaded draws so make sure you're not doing any drawing work on a different thread than the one that owns the context.
Edit: Do you have a reference renderer so you can verify the image pixels are being loaded as expected? I would strongly recommend writing a small routine to load then immediately save the pixels to a file so you can be sure that you're getting the right texture results.
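As an aside, here is a hedged sketch of the kind of upload code worth double-checking against the loader (function and variable names are mine, not from the project); the format/type arguments and the unpack alignment are the usual suspects, and a mipmapped minification filter without mipmaps is another classic cause of broken-looking textures.
// Sketch: upload a tightly packed RGBA8 image with explicit format and filtering.
#include <GL/gl.h>

GLuint uploadTexture(const unsigned char *pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Rows are tightly packed; the default alignment of 4 corrupts odd-width images.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    // 'format' and 'type' must match how the pixels are laid out in memory.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Without mipmaps, the default (mipmapped) MIN filter leaves the texture incomplete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindTexture(GL_TEXTURE_2D, 0);
    return tex;
}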
Check your texture coordinates. If they are set wrong, you can see just one or two texels mapped to entire primitives. Remember, OpenGL is a state machine. Check whether you're changing the texture coordinate state at the wrong time: you may be setting the texture coordinates at a later point in your code, and by the time you get back to redrawing these elements the state happens to be acceptable for mapping your texture to the geometry.
If it is merely a timing issue where the texture loading OpenGL calls aren't executed in time, and your threading code is correct, try adding a call to glFlush() after loading the textures. glFlush() causes all pending OpenGL commands to execute.
When you say:
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
This doesn't sound like strong enough protection to me: remember that OpenGL is a state machine with a large amount of internal state. You need to make sure the OpenGL state is what you expect it to be when you are making your calls, and that certain sequences of calls don't get interrupted by calls from other threads.
I'm no expert on OpenGL thread-safety, but this seems to me where your problem might lie.
Check the size and compression of the images you use as textures. I think OpenGL texture sizes have to be a power of 2...
You can't load a texture in one thread and use it in a different thread, or you will see a beautiful white texture. To make this possible, you must move the OpenGL context between the threads (make it current in the loading thread) before using any OpenGL function there.
If you are using GLIntercept to check your code, ensure to enable:
ThreadChecking = True;
in the gliConfig.ini file.
Viewing the log, it seems that quite a few OpenGL calls are being made outside the main context.
It is possible to load textures in another thread without getting a white texture. The problem is that, once you have initialized the OpenGL window, the OpenGL context is "bound" to this thread. You have to deactivate the context in the main thread while you're loading textures, and before you start loading them you have to activate the context in the loading thread.
You can use this class:
Context.h:
#ifndef CONTEXT_H
#define CONTEXT_H

#include <windows.h>   // HGLRC, HDC, wglMakeCurrent, wglGetCurrentContext
#include <GL/glut.h>

class Context
{
public:
    static Context* getInstance();
    void bind();
    void unbind();

private:
    Context();
    Context(const Context&);
    ~Context();

    static Context *instance;
    HGLRC hglrc;
    HDC hdc;

    class Guard
    {
    public:
        ~Guard()
        {
            if (Context::instance != 0) {
                delete Context::instance;
            }
        }
    };
    friend class Guard;
};

#endif
Context.cpp:
#include "Context.h"
Context* Context::getInstance()
{
static Guard guard;
if(Context::instance == 0) {
Context::instance = new Context();
}
return Context::instance;
}
void Context::bind()
{
wglMakeCurrent(this->hdc, this->hglrc);
}
void Context::unbind()
{
wglMakeCurrent(NULL, NULL);
}
Context::Context()
{
this->hglrc = wglGetCurrentContext();
this->hdc = wglGetCurrentDC();
}
Context::~Context()
{
}
Context *Context::instance = 0;
And that's what you have to do:
int state = 0;

void loop();
void run();

int main(int argc, char **argv)
{
    // Initialize GLUT and create the window.
    glutInit(&argc, argv);
    glutCreateWindow(TITLE);
    // Set your loop function.
    glutDisplayFunc(&loop);
    // Initialize the singleton for the 1st time (captures the current context).
    Context::getInstance()->bind();
    glutMainLoop();
    return 0;
}

void loop()
{
    if (state == 0) {
        // Hand the context over to the loader thread; draw without textures meanwhile.
        state = 3;
        Context::getInstance()->unbind();
        startThread(&run);   // startThread stands in for your threading API
    } else if (state == 1) {
        // Rebind the context to the main thread (just once).
        Context::getInstance()->bind();
        state = 2;
    } else if (state == 2) {
        // Draw your textures, lines, etc.
    } else {
        // Draw something (but no textures).
    }
}

void run()
{
    Context::getInstance()->bind();
    // Load textures...
    Context::getInstance()->unbind();
    state = 1;
}
