Failure when uploading image data with glTexImage2D - Windows

I'm developing an OpenGL application and everything works fine under Linux (both x86_32 and x86_64), but I've hit a wall while porting the app to Windows. My application uses very basic OpenGL 1.0, the great glfw 2.7.6 and libpng 1.5.7. Before porting the entire program, I tried writing the simplest code possible to test whether those libraries work properly under Windows, and everything seemed to work just fine until I started using textures!
When using textures with glTexImage2D(..), my program gets an Access Violation with the following error:
First-chance exception at 0x69E8F858 (atioglxx.dll) in sidescroll.exe: 0xC0000005: Access violation reading location 0x007C1000.
Unhandled exception at 0x69E8F858 (atioglxx.dll) in sidescroll.exe: 0xC0000005: Access violation reading location 0x007C1000.
I've done some research and found out that it's probably a GPU driver bug. Sadly, I have a Toshiba L650-1NU notebook with an AMD Radeon HD5650, for which no drivers are provided other than obsolete vendor-distributed ones. The author of the linked post suggests using glBindBuffer, but since I use OpenGL 1.0 I don't have access to that function.
Do you have any idea how to work around this issue without using a newer OpenGL? Nevertheless, if that is the only solution, could you point me to a tutorial or code snippet on how to use OpenGL 2.1 with glfw?
Here's the piece of my code, which is causing the error:
img = img_loadPNG(BACKGROUND);
if (img) {
    printf("%p %d %d %d %d", img->p_ubaData, img->uiWidth, img->uiHeight, img->usiBpp, img->iGlFormat);
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->uiWidth, img->uiHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->p_ubaData); //SEGFAULT HERE ON WINDOWS!
    //img_free(img); //it may cause errors on windows
} else printf("Error: loading texture '%s' failed!\n", BACKGROUND);

The error you're experiencing happens because the buffer you pass to glTexImage2D is shorter than what glTexImage2D tries to read, which it deduces from the parameters. The reason it crashes under Windows but not under Linux is that under Linux memory allocations tend to be a bit larger than what you request, while under Windows you get very tight constraints.
Most likely the PNG is read as RGB. You, however, tell glTexImage2D that you are passing GL_RGBA, which will of course read out of bounds.
I see that the img structure you receive has an element iGlFormat. I bet this is exactly the format to pass to glTexImage2D. So try this:
glTexImage2D(
    GL_TEXTURE_2D, 0,
    img->iGlFormat,   // as internal format this is not ideal, but OpenGL will choose whatever suits
    img->uiWidth, img->uiHeight, 0,
    img->iGlFormat, GL_UNSIGNED_BYTE,
    img->p_ubaData );
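If iGlFormat should ever turn out to be unset, a fallback sketch (assuming usiBpp holds bits per pixel, which is a guess based on the field name) could derive the format from the loader's metadata instead:

GLenum fmt = (img->usiBpp == 32) ? GL_RGBA :
             (img->usiBpp == 24) ? GL_RGB  :
                                   GL_LUMINANCE; /* 8-bit grayscale */

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* PNG rows are tightly packed */
glTexImage2D(GL_TEXTURE_2D, 0, fmt,
             img->uiWidth, img->uiHeight, 0,
             fmt, GL_UNSIGNED_BYTE, img->p_ubaData);

The glPixelStorei call matters for RGB images whose row size is not a multiple of 4 bytes; the default unpack alignment of 4 would otherwise make OpenGL read past the end of each row.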

Related

Will `wglCreateContext` fail safely if a DirectX context already exists for the given `HDC` (device context)

I'm attempting to contribute to a cross-platform, memory-safe API for creating and using OpenGL contexts in Rust called glutin. There is an effort to redesign the API in a way that will allow users to create a GL context for a pre-existing window.
One of the concerns raised was that this might not be memory safe if a user attempts to create a GL context for a window that has a pre-existing DirectX context.
The documentation for wglCreateContext suggests that it will return NULL upon failure; however, it does not go into detail about what conditions might cause this.
Will wglCreateContext fail safely (by returning NULL) if a DirectX context already exists for the given HDC (device context)? Or is the behaviour in this situation undefined?
I do not have access to a Windows machine with OpenGL support and am unable to test this directly myself.
The real issue I see here is that wglCreateContext may fail for any reason, and you have to be able to deal with that.
That being said, the way you formulated your question reeks of a fundamental misunderstanding of the relationship between OpenGL contexts, device contexts and windows. To put it in three simple words: There is none!
Okay, that begs for clarification. What's the deal here? Why is there an HDC parameter to wglCreateContext if they are not related?
It all boils down to pixel formats (or framebuffer configurations). A window, as you see it on the screen, is a direct 1:1 representation of a block of memory. This block of memory has a certain pixel format. However, as long as only abstract drawing methods are used (as with the GDI), the precise pixel format doesn't matter, and the graphics system may silently switch the pixel format as it sees fit. In times long bygone, when graphics memory was scarce, this could mean huge savings.
However, OpenGL assumes it operates on framebuffers with a specific, unchanging pixel format. So to support that, a new API was added that allows nailing down the internal pixel format of a given window. And since the only part of the graphics system actually concerned with the framebuffer format is the part that draws stuff, i.e. the GDI, the framebuffer configuration happens through that part. HWNDs are about passing around messages, associating input events with applications and all that jazz, but it's HDCs that relate to everything graphics. So that's why you set the pixel format through an HDC.
When creating an OpenGL context, that context has to be configured for the graphics device it's intended to be used on, and this again goes through the GDI and data structures that are addressed through HDC handles. But once an OpenGL context has been created, it can be used with any HDC that refers to a framebuffer whose pixel format is compatible with the HDC the context was originally created with. It can be a different HDC of the same window, or an HDC of an entirely different window altogether. And ever since OpenGL 3.3 core, an OpenGL context can even be made current with no HDC at all, being completely self-contained and operating on self-managed framebuffers. Last but not least, that binding can be changed at any time.
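As a hedged illustration of that last paragraph (hwndA and hwndB are made-up window handles, and both windows are assumed to already have the same pixel format set via SetPixelFormat):

HDC hdcA = GetDC(hwndA);
HDC hdcB = GetDC(hwndB);

HGLRC ctx = wglCreateContext(hdcA);  // created against hdcA's framebuffer configuration

wglMakeCurrent(hdcA, ctx);           // current on the window it was created from
// ... draw ...
wglMakeCurrent(hdcB, ctx);           // later: current on a different, compatible HDC
// ... draw ...
wglMakeCurrent(NULL, NULL);          // and unbound again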
Every time people who have no clear understanding of this implement some OpenGL binding or abstraction wrapper, they tend to get this part wrong and create unnecessarily tight straitjackets, which other people, like me, then have to fight their way out of, because the way the abstraction works is ill-conceived. The Qt guys made that mistake, the GTK+ guys did so, and now, apparently, so do you. There is this code snippet on your GitHub project page:
let events_loop = glutin::EventsLoop::new();
let window = glutin::WindowBuilder::new()
    .with_title("Hello, world!".to_string())
    .with_dimensions(1024, 768)
    .with_vsync()
    .build(&events_loop)
    .unwrap();

unsafe {
    window.make_current()
}.unwrap();

unsafe {
    gl::load_with(|symbol| window.get_proc_address(symbol) as *const _);
    gl::ClearColor(0.0, 1.0, 0.0, 1.0);
}
Arrrrggggh. Why the heck are the methods make_current and get_proc_address associated with the window? Why?! Who came up with this? Don't do this shit; it makes life miserable and painful for the people who have to use it.
Do you want to know where this leads? Horrible, messy, unsafe code that does disgusting and dirty things to forcefully and bluntly disable some of the safeguards present, just so that it can get to work. Like this horrible thing I had to do to get Qt4's ill-conceived assumptions about how OpenGL works out of the way.
#ifndef _WIN32
#if OCTPROCESSOR_CREATE_GLWIDGET_IN_THREAD
    QGLFormat glformat(
        QGL::DirectRendering |
        QGL::DoubleBuffer |
        QGL::Rgba |
        QGL::AlphaChannel |
        QGL::DepthBuffer,
        0 );
    glformat.setProfile(QGLFormat::CompatibilityProfile);

    gl_hidden_widget = new QGLWidget(glformat);
    if( !gl_hidden_widget ) {
        qDebug() << "creating worker context failed";
        goto fail_init_glcontext;
    }
    gl_hidden_widget->moveToThread( QThread::currentThread() );
#endif
    if( !gl_hidden_widget->isValid() ) {
        qDebug() << "worker context invalid";
        goto fail_glcontext_valid;
    }
    gl_hidden_widget->makeCurrent();
#else
    if( wglu_create_pbuffer_with_glrc(
            3, 3, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
            &m_hpbuffer,
            &m_hdc,
            &m_hglrc,
            &m_share_hglrc)
    ){
        qDebug() << "failed to create worker PBuffer and OpenGL context";
        goto fail_init_glcontext;
    }
    qDebug()
        << "m_hdc" << m_hdc
        << "m_hglrc" << m_hglrc;

    if( !wglMakeCurrent(m_hdc, m_hglrc) ){
        qDebug() << "failed making OpenGL context current on PBuffer HDC";
        goto fail_glcontext_valid;
    }
#endif

LoadLibrary() fails with error 8 (ERROR_NOT_ENOUGH_MEMORY)

Later edit: After more investigation, the Windows Updates and the OpenGL DLL were red herrings. The cause of these symptoms was a LoadLibrary() call failing with GetLastError() == ERROR_NOT_ENOUGH_MEMORY. See my answer for how to solve such issues. Below is the original question for historical interest. /edit
A map viewer I wrote in Python/wxPython for Windows with a C++ backend suddenly stopped working, without any code changes or even recompiling. The very same executables had been working for weeks before (same Python, same DLLs, ...). Now, when querying Windows for a pixel format to use with OpenGL (via ChoosePixelFormat()), I get a MessageBox saying:
LoadLibrary failed with error 8:
Not enough storage is available to process this command
The error message is displayed when executing the following code fragment:
void DevContext::SetPixelFormat() {
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;

    int pf = ChoosePixelFormat(m_hdc, &pfd); // <-- ERROR OCCURS IN HERE
    if (pf == 0) {
        throw std::runtime_error("No suitable pixel format.");
    }
    if (::SetPixelFormat(m_hdc, pf, &pfd) == FALSE) {
        throw std::runtime_error("Cannot set pixel format.");
    }
}
It's actually an ATI GL driver DLL showing the message box. The relevant part of the call stack is this:
... More MessageBox stuff
0027e860 770cfcf1 USER32!MessageBoxTimeoutA+0x76
0027e880 770cfd36 USER32!MessageBoxExA+0x1b
*** ERROR: Symbol file not found. Defaulted to export symbols for C:\Windows\SysWOW64\atiglpxx.dll -
0027e89c 58471df1 USER32!MessageBoxA+0x18
0027e9d4 58472065 atiglpxx+0x1df1
0027e9dc 57acaf0b atiglpxx!DrvValidateVersion+0x13
0027ea00 57acb0f3 OPENGL32!wglSwapMultipleBuffers+0xc5e
0027edf0 57acb1a9 OPENGL32!wglSwapMultipleBuffers+0xe46
0027edf8 57acc6a4 OPENGL32!wglSwapMultipleBuffers+0xefc
0027ee0c 57ad5658 OPENGL32!wglGetProcAddress+0x45f
0027ee28 57ad5dd4 OPENGL32!wglGetPixelFormat+0x70
0027eec8 57ad6559 OPENGL32!wglDescribePixelFormat+0xa2
0027ef48 751c5ac7 OPENGL32!wglChoosePixelFormat+0x3e
0027ef60 57c78491 GDI32!ChoosePixelFormat+0x28
0027f0b0 57c7867a OutdoorMapper!DevContext::SetPixelFormat+0x71 [winwrap.cpp # 42]
0027f1a0 57ce3120 OutdoorMapper!OGLContext::OGLContext+0x6a [winwrap.cpp # 61]
0027f224 1e0acdf2 maplib_sip!func_CreateOGLDisplay+0xc0 [maps.sip # 96]
0027f240 1e0fac79 python33!PyCFunction_Call+0x52
... More Python stuff
I did a Windows Update two weeks ago and noticed some glitches (e.g. when resizing the window), but my program still worked mostly OK. Just now I rebooted, Windows installed one more update, and I don't get past ChoosePixelFormat() any more. However, the last installed update was KB2998527, a Russia timezone update?!
Things that I already checked:
Recompiling doesn't make it work.
Rebooting and running without other programs running doesn't work.
Memory consumption of my program is only 67 MB, I'm not out of memory.
Plenty of diskspace free (~50 GB).
The HDC m_hdc is obtained from the display panel's HWND and seems to be valid.
Changing my linker commandline doesn't work.
Should I update my graphics drivers or roll back the updates? Any other ideas?
System data dump: Windows 7 Ultimate SP1 x64, 4GB RAM; HP EliteBook 8470p; Python 3.3, wxPython 3.0.1.dev76673 msw (phoenix); access to C++ data structures via SIP 4.15.4; C++ code compiled with Visual Studio 2010 Express, Debug build with /MDd.
I was running out of virtual address space.
By default, LibTIFF reads TIF images by memory-mapping them (mmap() or CreateFileMapping()). This is fine for pictures of your wife, but it turns out it's a bad idea for gigabytes worth of topographic raster-maps of the Alps.
This was difficult to diagnose, because LibTIFF silently fell back to read() if the memory mapping failed, so there never was an explicit error before. Furthermore, mapped memory is not accounted as working memory by Windows, so the Task Manager was showing 67 MB when in fact nearly all of the virtual address space was used up.
This blew up now because I added more TIF images to my database recently. LoadLibrary() started failing because it couldn't find any address space to put the new library. GetLastError() returned 8, which is ERROR_NOT_ENOUGH_MEMORY. That this happened within ATI's OpenGL library was just coincidence.
The solution was to pass "m" as a flag to TIFFOpen() to disable memory-mapped I/O.
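For reference, a minimal sketch of what that looks like (the file name is made up):

#include <tiffio.h>

/* "r" opens for reading; the extra "m" disables memory-mapped I/O, so large
   rasters are streamed with read() instead of eating virtual address space. */
TIFF *tif = TIFFOpen("alps_topo.tif", "rm");
if (tif) {
    /* ... read scanlines/tiles as usual ... */
    TIFFClose(tif);
}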
Diagnosing this is easy with the Windows Sysinternals tool VMMap, which shows you how much of the virtual address space of a process is taken up by code/heap/stack/mapped files/shareable data/etc.
This should be the first thing to check if LoadLibrary() or CreateFileMapping() fails with ERROR_NOT_ENOUGH_MEMORY.

DirectX texture interface to existing memory

I'm writing a rendering app that communicates with an image processor as a sort of virtual camera, and I'm trying to figure out the fastest way to write the texture data from one process to the awaiting image buffer in the other.
Theoretically I think it should be possible with one DirectX copy from VRAM directly to the area of memory I want it in, but I can't figure out how to specify a region of memory for a texture to occupy, and thus I must perform an additional memcpy. DX9 or DX11 solutions would be welcome.
So far, the docs here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx have held the most promise.
"In Windows Vista CreateTexture can create a texture from a system memory pointer allowing the application more flexibility over the use, allocation and deletion of the system memory"
I'm running on Windows 7 with the June 2010 DirectX SDK. However, whenever I try to use the function in the way the documentation specifies, it fails with an invalid-arguments error code. Here is the call I tried as a test:
static char s_TextureBuffer[640*480*4]; //larger than needed
void* p = (void*)s_TextureBuffer;
HRESULT res = g_D3D9Device->CreateTexture(640,480,1,0, D3DFORMAT::D3DFMT_L8, D3DPOOL::D3DPOOL_SYSTEMMEM, &g_ReadTexture, (void**)p);
I tried with several different texture formats, but with no luck. I've begun looking into DX11 solutions, but it's going slowly since I'm used to DX9. Thanks!
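A hedged, unverified sketch of how that MSDN note is usually interpreted: the system-memory pointer travels in through the pSharedHandle parameter, i.e. you pass the address of a variable holding your buffer pointer, together with D3DPOOL_SYSTEMMEM and exactly one mip level:

static char s_TextureBuffer[640*480*4];

IDirect3DTexture9 *readTexture = nullptr;
HANDLE sysMem = (HANDLE)s_TextureBuffer;   // the buffer itself, wrapped in a HANDLE

HRESULT res = g_D3D9Device->CreateTexture(
    640, 480, 1, 0,
    D3DFMT_L8,
    D3DPOOL_SYSTEMMEM,
    &readTexture,
    &sysMem);                              // note &sysMem, not the buffer cast to void**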

OpenCL-GL Interop memory not in sync

I'm having trouble with OpenCL-GL shared memory.
I have an application that works in both Linux and Windows. The CL-GL sharing works in Linux, but not in Windows.
The Windows driver says that it supports sharing, and the examples from AMD work, so it should work. My code for creating the context in Windows is:
cl_context_properties properties[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_(),
    CL_WGL_HDC_KHR,      (intptr_t) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (intptr_t) wglGetCurrentContext(),
    0
};

platform_.getDevices(CL_DEVICE_TYPE_GPU, &devices_);
context_ = cl::Context(devices_, properties, &CL::cl_error_callback, nullptr, &err);

err = clGetGLContextInfoKHR(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR,
                            sizeof(device_id), &device_id, NULL);
context_device_ = cl::Device(device_id);
queue_ = cl::CommandQueue(context_, context_device_, 0, &err);
My problem is that the CL and GL memory in a shared buffer are not the same. I print them out (by memory mapping) and notice that they differ. Changing the data in the memory works in both CL and GL, but only changes that memory, not both (that is, both buffers seem intact, but not shared).
Also, clGetGLObjectInfo on the CL buffer returns the correct GL buffer.
Update: I have found that if I create the OpenCL context on the CPU, it works. This seems weird, as I'm not using integrated graphics, and I don't believe the CPU is handling OpenGL. I'm using SDL to create the window; could that have something to do with this?
I have now confirmed that the OpenGL context is running on the GPU, so the problem lies elsewhere.
Update 2: OK, so this is weird. I tried again today, and suddenly it works. As far as I know I didn't install any new drivers before I shut down the computer yesterday, so I don't know what could have brought this about.
Update 3: Right, I noticed that changing the number of particles affects this. When I allocate so many particles that the shared buffer is slightly above one MB, it suddenly starts to work.
I solved the problem.
The OpenGL buffer object must be created after the OpenCL context has been created.
If it is created before, the OpenGL data can't be shared.
I use a Radeon HD 5670 with ATI Catalyst 12.10.
This may be an ATI driver problem, because the NVIDIA Computing SDK samples don't depend on this order.
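A rough sketch of that ordering, using the plain C interop calls on top of the objects from the question (Particle and num_particles are placeholders; context_() and queue_() return the underlying cl_context and cl_command_queue):

// 1. Create the CL context with the WGL sharing properties shown above.
// 2. Only then create the GL buffer object.
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, num_particles * sizeof(Particle),
             nullptr, GL_DYNAMIC_DRAW);

// 3. Wrap the buffer for CL and acquire/release it around kernel use.
cl_int err;
cl_mem shared = clCreateFromGLBuffer(context_(), CL_MEM_READ_WRITE, vbo, &err);

glFinish();                                                  // GL must be done with the buffer
clEnqueueAcquireGLObjects(queue_(), 1, &shared, 0, NULL, NULL);
// ... enqueue kernels that read or write `shared` ...
clEnqueueReleaseGLObjects(queue_(), 1, &shared, 0, NULL, NULL);
clFinish(queue_());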

OpenGL suppresses exceptions in MFC dialog-based application

I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click handler with code like this:
int* i = 0;
*i = 3;
I'm running the debug version of the program, and when I click the button, Visual Studio grabs focus and reports an "Access violation writing location" exception; the program cannot recover from the error and all I can do is stop debugging. And this is the right behavior.
Now I add some OpenGL initialization code in the OnInitDialog() method:
HDC DC = GetDC(GetSafeHwnd());

static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
    1,                              // version number
    PFD_DRAW_TO_WINDOW |            // support window
    PFD_SUPPORT_OPENGL |            // support OpenGL
    PFD_DOUBLEBUFFER,               // double buffered
    PFD_TYPE_RGBA,                  // RGBA type
    24,                             // 24-bit color depth
    0, 0, 0, 0, 0, 0,               // color bits ignored
    0,                              // no alpha buffer
    0,                              // shift bit ignored
    0,                              // no accumulation buffer
    0, 0, 0, 0,                     // accum bits ignored
    32,                             // 32-bit z-buffer
    0,                              // no stencil buffer
    0,                              // no auxiliary buffer
    PFD_MAIN_PLANE,                 // main layer
    0,                              // reserved
    0, 0, 0                         // layer masks ignored
};

int pixelformat = ChoosePixelFormat(DC, &pfd);
SetPixelFormat(DC, pixelformat, &pfd);

HGLRC hrc = wglCreateContext(DC);
ASSERT(hrc != NULL);
wglMakeCurrent(DC, hrc);
Of course this is not exactly what I do; it is a simplified version of my code. Well, now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at *i = 3; and press F11 on it, the handler function halts immediately and focus is returned to the application, which continues to work well. I can click the button again and the same thing will happen.
It seems as if someone handled the access violation exception and silently returned execution to the application's main message loop.
If I comment out the line wglMakeCurrent(DC, hrc);, everything works as before: the exception is thrown, Visual Studio catches it and shows a window with the error message, and the program must be terminated afterwards.
I experience this problem under Windows 7 64-bit with an NVIDIA GeForce 8800 and the latest drivers (from 11.01.2010) available on the website installed. My colleague has Windows Vista 32-bit and has no such problem: the exception is thrown and the application crashes in both cases.
Well, hope good guys will help me :)
PS: The problem was originally posted under this topic.
OK, I found out some more information about this. In my case it's Windows 7 that installs KiUserCallbackExceptionHandler as an exception handler before calling my WndProc and handing execution control to me. This is done by ntdll!KiUserCallbackDispatcher. I suspect this is a security measure taken by Microsoft to prevent hacking into SEH.
The solution is to wrap your WndProc (or hook proc) with a try/except frame so you can catch the exception before Windows does.
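A minimal sketch of that wrapping, assuming an ordinary Win32 window procedure (RealWndProc stands in for your actual handler):

LRESULT CALLBACK GuardedWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    __try {
        return RealWndProc(hWnd, msg, wParam, lParam);
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        // Report or crash here, before the kernel-callback frame
        // gets a chance to swallow the access violation silently.
        TerminateProcess(GetCurrentProcess(), GetExceptionCode());
    }
    return 0;
}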
Thanks to Skywing at http://www.nynaeve.net/
We've contacted nVidia about this issue, but they say it's not their bug, but rather Microsoft's. Could you please tell how you located the exception handler? And do you have some additional information, e.g. some feedback from Microsoft?
I used the "!exchain" command in WinDbg to get this information.
Rather than wrapping the WndProc or hooking all WndProcs, you could use Vectored Exception Handling:
http://msdn.microsoft.com/en-us/library/ms679274.aspx
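A sketch of that approach (the handler below only reacts to access violations and otherwise lets the normal search continue):

LONG CALLBACK AccessViolationLogger(PEXCEPTION_POINTERS info)
{
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
        // Break, log, or terminate here; vectored handlers run before any
        // frame-based (SEH) handler, including the one Windows installs
        // around the kernel-to-user callback.
        DebugBreak();
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Install it once at startup; a non-zero first argument puts it at the
// head of the handler list.
AddVectoredExceptionHandler(1, AccessViolationLogger);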
First, both behaviors are correct. Dereferencing a null pointer is "undefined behavior", not a guaranteed access violation.
First, find out whether this is related to exception throwing or only to accessing memory location zero (try a different exception).
If you configure Visual Studio to stop on first-chance access violations, does it break?
Call VirtualQuery(NULL, ...) before and after wglMakeCurrent and compare. Maybe the nVidia OpenGL driver does a VirtualAlloc on page zero (a bad idea, but not impossible or illegal).
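A quick, purely illustrative sketch of that check:

MEMORY_BASIC_INFORMATION before = {0}, after = {0};
VirtualQuery(NULL, &before, sizeof(before));

wglMakeCurrent(DC, hrc);

VirtualQuery(NULL, &after, sizeof(after));
// If the driver committed something at address zero, State/Protect change.
printf("page 0: state 0x%lx -> 0x%lx, protect 0x%lx -> 0x%lx\n",
       before.State, after.State, before.Protect, after.Protect);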
I found this question when I was looking at a similar problem. Our problem turned out to be silent consumption of exceptions when running a 32-bit application on 64-bit Windows.
http://connect.microsoft.com/VisualStudio/feedback/details/550944/hardware-exceptions-on-x64-machines-are-silently-caught-in-wndproc-messages
There’s a fix available from Microsoft, though deploying it is somewhat challenging if you have multiple target platforms:
http://support.microsoft.com/kb/976038
Here's an article on the subject describing the behavior:
http://blog.paulbetts.org/index.php/2010/07/20/the-case-of-the-disappearing-onload-exception-user-mode-callback-exceptions-in-x64/
This thread on stack overflow also describes the problem I was experiencing:
Exceptions silently caught by Windows, how to handle manually?
