GLFW fails to load all GL ES 2.0 functions - opengl-es

I am trying to set up GLFW to create a window and an OpenGL ES 2.0 context, as I need something out of the box to manage input callbacks, etc.
The problem is this: if I use the following setup:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_CREATION_API, GLFW_NATIVE_CONTEXT_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
glfwindow = glfwCreateWindow(width, height, "Window Title", NULL, NULL);
if (!glfwindow)
{
    glfwTerminate();
    exit(1);
}
glfwMakeContextCurrent(glfwindow);
Using GLFW_NATIVE_CONTEXT_API for the GLFW_CONTEXT_CREATION_API hint gives me the following vendor info:
GL version: OpenGL ES 3.2 NVIDIA 368.69
GL vendor: NVIDIA Corporation
GL renderer: GeForce GTX 960M/PCIe/SSE2
GLSL version: OpenGL ES GLSL ES 3.20
Then it fails even to create a shader: glCreateShader() returns 0.
But if instead of that hint flag I use GLFW_EGL_CONTEXT_API, I manage to get through most of the GL routines: shader loading, program compile and link, GL VBO setup, etc. But then it fails on glDrawElements().
And if I print the vendor info with this setup I see this:
GL version: 4.5.0 NVIDIA 368.69
GL vendor: NVIDIA Corporation
GL renderer: GeForce GTX 960M/PCIe/SSE2
GLSL version: 4.50 NVIDIA
So it is quite weird to me that when EGL is supposedly the underlying API for context creation, I get a desktop OpenGL context.
Also, if I try to retrieve the function pointer for glDrawElements manually, it returns null.
PFNGLDRAWELEMENTSPROC func =
    reinterpret_cast<PFNGLDRAWELEMENTSPROC>(eglGetProcAddress("glDrawElements"));
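As a side note (my addition, not part of the original question): on some EGL implementations eglGetProcAddress is only guaranteed to return extension entry points, so a common sanity check is to resolve the pointer through GLFW's own loader instead. A minimal sketch:

PFNGLDRAWELEMENTSPROC drawElements =
    reinterpret_cast<PFNGLDRAWELEMENTSPROC>(glfwGetProcAddress("glDrawElements"));
// If this also returns null, the entry point is being resolved against the wrong
// client library (see the explanation below about opengl32.lib).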
I would like to understand what the problem might be and how to use GLFW for GL ES context creation the right way.

As no one answered my own question, and because I figured out the root of the problem a long time ago, here is the explanation:
GLFW works just fine for GLES context initialization. The problem is with the libraries linked: one must not link against opengl32.lib when emulating GLES on Windows. In my case I use the PowerVR SDK and its ES2 and ES3 libs. Therefore, the following libs are mandatory:
glfw3.lib
libEGL.lib
libGLESv2.lib
In my linker settings, because by default I was using desktop OpenGL, I was also linking against opengl32.lib, which probably took precedence over the other GL libs. Make sure you exclude it when running GLES on a desktop platform.
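For MSVC users, one way to make the dependencies explicit per translation unit is with #pragma comment directives; a small sketch (library names as in the PowerVR SDK setup above, deliberately with no line for opengl32.lib):

#pragma comment(lib, "glfw3.lib")
#pragma comment(lib, "libEGL.lib")
#pragma comment(lib, "libGLESv2.lib")
// Do not add #pragma comment(lib, "opengl32.lib") when emulating GL ES on Windows.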

Related

How OpenGL ICD on Windows Loads OpenGL 1.0 and 1.1 Functions?

Recently I've been doing research on how OpenGL graphics drivers (the ICD) implement the OpenGL functions called by the OpenGL runtime (opengl32.dll) on Windows.
I understand that I can use GetProcAddress to get the function pointers of the OpenGL 1.0 and 1.1 functions exposed by opengl32.dll, and that I have to call wglGetProcAddress (itself obtained through GetProcAddress) to get the context-dependent function pointers for OpenGL 1.2+ functions.
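For illustration only (the helper name is mine, not from the question), the two-step lookup described above typically looks like this on Windows:

#include <windows.h>

// wglGetProcAddress covers GL 1.2+ and extension functions; the opengl32.dll
// exports cover GL 1.0/1.1. Some drivers return 1, 2, 3 or -1 on failure, so
// those values are treated as null here (a commonly recommended precaution).
void* GetAnyGLProc(const char* name)
{
    void* p = reinterpret_cast<void*>(wglGetProcAddress(name));
    if (p == nullptr || p == reinterpret_cast<void*>(1) || p == reinterpret_cast<void*>(2) ||
        p == reinterpret_cast<void*>(3) || p == reinterpret_cast<void*>(-1))
    {
        HMODULE module = GetModuleHandleA("opengl32.dll");
        p = module ? reinterpret_cast<void*>(GetProcAddress(module, name)) : nullptr;
    }
    return p;
}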
To see whether the ICD executes the OpenGL functions, I wrote wrapper programs for opengl32.dll and the ICD and let them print the function name whenever a function gets called. For OpenGL 1.2+ functions, I saw my test program first call the function pointer returned by wglGetProcAddress from opengl32.dll, then call the function pointer returned by DrvGetProcAddress from the ICD (I wrote hook functions for some of these functions). So that proved these functions are implemented by the graphics driver. But for the OpenGL 1.0 and 1.1 functions, which have API symbols exported by opengl32.dll, both wglGetProcAddress of opengl32.dll and DrvGetProcAddress of the ICD return NULL when I try to load them by their function names.
I also understand that both GetProcAddress functions are used only to get the function pointers of OpenGL extension (OpenGL 1.2 or above) functions, but I'm not sure whether the graphics driver implements the OpenGL 1.0 and 1.1 functions as well. If it does, how do these functions in the ICD get called by opengl32.dll?
The ICD provides a dispatch table for all OpenGL 1.0 and 1.1 functions, and the functions exported from opengl32.dll just jump to the corresponding entry in that table. Here is how ReactOS does this.
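Conceptually, and as a purely hypothetical sketch (not the actual Microsoft or ReactOS source), the forwarding looks something like this:

// opengl32.dll keeps a per-thread table of entry points that the ICD fills in
// when a context is made current; the public exports simply jump into it.
struct GlDispatchTable {
    void (*Clear)(unsigned int mask);
    void (*DrawElements)(unsigned int mode, int count, unsigned int type, const void* indices);
    // ... one slot per OpenGL 1.0/1.1 entry point ...
};

static thread_local GlDispatchTable* g_icdTable; // installed by the ICD when the context becomes current

// Exported stub: no work of its own, just a jump into the driver.
void glClear(unsigned int mask)
{
    g_icdTable->Clear(mask);
}

This is consistent with what you observed: the 1.0/1.1 entry points are reached through the dispatch table rather than through the GetProcAddress path, so both lookup functions return NULL for those names.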

How to use OpenGL ES with GLFW on Windows?

Since the NVIDIA DRIVE product supports the OpenGL ES 2 and 3 specifications, I want to run OpenGL ES code on Windows 10 with a GTX 2070, which would eliminate a…
Also, GLFW supports configuration like glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API). Is it possible to use GLFW to run OpenGL ES code on Windows 10?
First of all, make sure you have downloaded glad with the GL ES API selected.
https://glad.dav1d.de/
For the GLFW part, you need to set the GLFW_CLIENT_API window hint.
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
And also choose which version you want, for example:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
Then specify the kind of context. In the case of OpenGL ES, according to the documentation, it must be GLFW_OPENGL_ANY_PROFILE.
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
GLFW_OPENGL_PROFILE indicates the OpenGL profile used by the context. This is GLFW_OPENGL_CORE_PROFILE or GLFW_OPENGL_COMPAT_PROFILE if the context uses a known profile, or GLFW_OPENGL_ANY_PROFILE if the OpenGL profile is unknown or the context is an OpenGL ES context. Note that the returned profile may not match the profile bits of the context flags, as GLFW will try other means of detecting the profile when no bits are set.
However, GLFW_OPENGL_ANY_PROFILE is already the default value, so you don't really need to set it.
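Putting the pieces together, a minimal sketch might look like the following. It assumes a glad loader generated from https://glad.dav1d.de/ with the GL ES API selected; the exact loader function name (gladLoadGLES2Loader here) depends on the glad version and generator options you picked.

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE); // default, shown for clarity

    GLFWwindow* window = glfwCreateWindow(640, 480, "GL ES on Windows", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    // Resolve GL ES entry points through GLFW's loader.
    if (!gladLoadGLES2Loader((GLADloadproc) glfwGetProcAddress)) {
        glfwTerminate();
        return 1;
    }

    std::printf("GL_VERSION: %s\n", (const char*) glGetString(GL_VERSION));

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}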

OpenCL-GL Interop memory not in sync

I'm having troubles with OpenCL-GL shared memory.
I have an application that works on both Linux and Windows. The CL-GL sharing works on Linux, but not on Windows.
The Windows driver says that it supports sharing, and the examples from AMD work, so it should work. My code for creating the context on Windows is:
cl_context_properties properties[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_(),
    CL_WGL_HDC_KHR,      (intptr_t) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (intptr_t) wglGetCurrentContext(),
    0
};
platform_.getDevices(CL_DEVICE_TYPE_GPU, &devices_);
context_ = cl::Context(devices_, properties, &CL::cl_error_callback, nullptr, &err);
err = clGetGLContextInfoKHR(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR, sizeof(device_id), &device_id, NULL);
context_device_ = cl::Device(device_id);
queue_ = cl::CommandQueue(context_, context_device_, 0, &err);
My problem is that the CL and GL memory in a shared buffer are not the same. I print them out (by memory mapping) and I notice that they differ. Changing the data works from both CL and GL, but only changes that side's copy, not both (that is, both buffers seem intact, but not shared).
Also, clGetGLObjectInfo on the cl-buffer returns the correct gl buffer.
Update: I have found that if I create the OpenCL context on the CPU it works. This seems weird, as I'm not using integrated graphics, and I don't believe the CPU is handling OpenGL. I'm using SDL to create the window; could that have something to do with this?
I have now confirmed that the opengl context is running on the gpu, so the problem lies elsewhere.
Update 2: Ok, so this is weird. I tried again today, and suddenly it works. As far as I know I didn't install any new drivers before I shut down the computer yesterday, so I don't know what could have brought this about.
Update 3: Right, I noticed that changing the number of particles caused this to work. When I allocate so many particles that the shared buffer is slightly above one MB, it suddenly starts to work.
I solved the problem.
The OpenGL buffer object must be created after the OpenCL context has been created.
If it is created before, the OpenGL data cannot be shared.
I use a Radeon HD 5670 with ATI Catalyst 12.10.
Maybe it is an ATI driver problem, because the NVIDIA Computing SDK samples don't depend on the order.
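To illustrate the ordering that made sharing work, here is a hedged sketch using the plain C API rather than the C++ wrapper from the question; platform, device and queue are assumed to exist from the setup shown earlier, and error checking is omitted:

cl_int err = CL_SUCCESS;

// 1. The GL context is already current (created via SDL, GLFW, etc.).
cl_context_properties props[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform,
    CL_WGL_HDC_KHR,      (cl_context_properties) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (cl_context_properties) wglGetCurrentContext(),
    0
};
cl_context ctx = clCreateContext(props, 1, &device, NULL, NULL, &err);

// 2. Only now create the GL buffer that will be shared...
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 1024 * sizeof(float), NULL, GL_DYNAMIC_DRAW);

// 3. ...wrap it for OpenCL...
cl_mem clVbo = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);

// 4. ...and acquire/release around CL work so both sides observe the same data.
clEnqueueAcquireGLObjects(queue, 1, &clVbo, 0, NULL, NULL);
// ... enqueue kernels that read/write clVbo ...
clEnqueueReleaseGLObjects(queue, 1, &clVbo, 0, NULL, NULL);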

Failure when uploading image data with glTexImage2D

I'm developing an OpenGL application and everything works fine under Linux (both x86_32 and x86_64), but I've hit a wall porting the app to Windows. My application uses the very basic OpenGL 1.0, the great GLFW 2.7.6 and libpng 1.5.7. Before porting the entire program, I tried writing the simplest code possible to test whether those libraries work properly under Windows, and everything seemed to work just fine until I started using textures!
Using textures with glTexImage2D(..), my program gets an Access Violation with the following error:
First-chance exception at 0x69E8F858 (atioglxx.dll) in sidescroll.exe: 0xC0000005: Access violation reading location 0x007C1000.
Unhandled exception at 0x69E8F858 (atioglxx.dll) in sidescroll.exe: 0xC0000005: Access violation reading location 0x007C1000.
I've done some research and found out that it's probably a GPU driver bug. Sadly, I have a Toshiba L650-1NU notebook with an AMD Radeon HD5650, for which no drivers are provided other than the obsolete vendor-distributed ones. The author of the given post suggests using glBindBuffer, but since I use OpenGL 1.0 I don't have access to this method.
Do you have any ideas on how to bypass this issue without using a newer OpenGL? Nevertheless, if that is the only solution, could I be pointed to a tutorial or code snippet on how to use OpenGL 2.1 with GLFW?
Here's the piece of my code, which is causing the error:
img = img_loadPNG(BACKGROUND);
if (img) {
    printf("%p %d %d %d %d", img->p_ubaData, img->uiWidth, img->uiHeight, img->usiBpp, img->iGlFormat);
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->uiWidth, img->uiHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->p_ubaData); // SEGFAULT HERE ON WINDOWS!
    //img_free(img); // it may cause errors on windows
} else printf("Error: loading texture '%s' failed!\n", BACKGROUND);
The error you're experiencing is because the buffer you pass to glTexImage2D is shorter than what glTexImage2D tries to read, as deduced from the parameters. That it crashes under Windows but not under Linux is because under Linux memory allocations tend to be a bit larger than what you request, while under Windows you get very tight constraints.
Most likely the PNG is read as RGB. You, however, tell glTexImage2D that you pass GL_RGBA, which will of course read out of bounds.
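A quick back-of-the-envelope check makes the overrun visible (field names follow the img struct from the question; this assumes tightly packed pixel rows):

size_t bytesGLReads   = (size_t) img->uiWidth * img->uiHeight * 4; // GL_RGBA, GL_UNSIGNED_BYTE
size_t bytesRGBPngHas = (size_t) img->uiWidth * img->uiHeight * 3; // a tightly packed RGB image
// If the PNG was decoded as RGB, glTexImage2D reads bytesGLReads bytes from a
// buffer that only holds bytesRGBPngHas bytes -> an out-of-bounds read, hence
// the access violation on Windows.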
I see that the img structure you receive has an element iGlFormat. I bet that this is exactly the format to pass to glTexImage2D. So try this:
glTexImage2D(
    GL_TEXTURE_2D, 0,
    img->iGlFormat, // as internal format this is not ideal; OpenGL will choose whatever suits
    img->uiWidth, img->uiHeight, 0,
    img->iGlFormat, GL_UNSIGNED_BYTE,
    img->p_ubaData);

Do GLSL geometry shaders work on the GMA X3100 under OSX

I am trying to use a trivial geometry shader, but when run in Shader Builder on a laptop with a GMA X3100 it falls back to the software renderer. According to this document the GMA X3100 does support EXT_geometry_shader4.
The input is POINTS and the output is LINE_STRIP.
What would be required to get it to run on the GPU (if possible)?
uniform vec2 offset;
void main()
{
    gl_Position = gl_PositionIn[0];
    EmitVertex();
    gl_Position = gl_PositionIn[0] + vec4(offset.x, offset.y, 0, 0);
    EmitVertex();
    EndPrimitive();
}
From the docs you link to it certainly appears it should be supported.
You could try
int hasGEOM = isExtensionSupported("GL_EXT_geometry_shader4");
If it returns in the affirmative, you may have another problem stopping it from working.
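isExtensionSupported is presumably a local helper; for reference, a minimal sketch of such a check on a pre-3.0 context could look like this (a naive substring search; a robust version should match whole tokens):

#include <cstring>

bool isExtensionSupported(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, name) != nullptr;
}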
Also, according to the GLSL Spec (1.20.8): "Any extended behavior must first be enabled. Directives to control the behavior of the compiler with respect to extensions are declared with the #extension directive."
I didn't see you use this directive in your code, so I suggest adding
#extension GL_EXT_geometry_shader4 : enable
At the top of your shader code block.
I've found the OpenGL Extensions Viewer tool really helpful in tracking down these sorts of issues. It will certainly allow you to confirm Apple's claims. That said, Wikipedia states that official GLSL support for geometry shaders is technically an OpenGL 3.2 feature.
Does anyone know if the EXT_geometry_shader4 implementation supports the GLSL syntax, or does it require some hardware or driver specific format?
Interestingly enough, I've heard that Intel's compatibility claims regarding these integrated GPUs are sometimes overstated or just false. Apparently the X3100 only supports OpenGL 1.4 and below (or so I've heard; take this with a grain of salt, as I can't confirm it).
On my HP laptop with an Intel X3100, using Windows 7 x64 drivers (v8.15.10.1930, 9-23-2009) directly from Intel's website, the extension "EXT_geometry_shader4" (or any variation of it) is NOT supported. I've confirmed this programmatically and with the tool "GPU Caps Viewer" (which lists detected supported extensions, amongst other useful things). Since Windows tends to be the primary target of driver development for any vendor, it's unlikely the OSX driver is any better, and it may in fact support even fewer extensions.
