How to use OpenGL ES with GLFW on Windows?

Since the NVIDIA DRIVE platform supports the OpenGL ES 2 and 3 specifications, I want to run OpenGL ES code on Windows 10 with a GTX 2070, which would eliminate a…
Also, GLFW supports configuration hints such as glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API). Is it possible to use GLFW to run OpenGL ES code on Windows 10?

First of all, make sure you have generated a glad loader with GL ES support:
https://glad.dav1d.de/
For the GLFW part, you need to set the GLFW_CLIENT_API window hint.
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
Also choose which version you want, for example:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
Then specify the kind of context. In the case of OpenGL ES, according to the documentation, it must be GLFW_OPENGL_ANY_PROFILE.
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
GLFW_OPENGL_PROFILE indicates the OpenGL profile used by the context.
This is GLFW_OPENGL_CORE_PROFILE or GLFW_OPENGL_COMPAT_PROFILE if the
context uses a known profile, or GLFW_OPENGL_ANY_PROFILE if the OpenGL
profile is unknown or the context is an OpenGL ES context. Note that
the returned profile may not match the profile bits of the context
flags, as GLFW will try other means of detecting the profile when no
bits are set.
However, GLFW_OPENGL_ANY_PROFILE is already the default value, so you don't really need to set it.
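Putting it together, here is a minimal sketch (assuming glad was generated for OpenGL ES, so its loader entry point is gladLoadGLES2Loader; adjust if your loader differs):
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    // Ask GLFW for an OpenGL ES 3.2 context.
    glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE); // the default, shown for clarity

    GLFWwindow *window = glfwCreateWindow(640, 480, "GL ES on Windows", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    // Load the GL ES entry points through GLFW's loader.
    if (!gladLoadGLES2Loader((GLADloadproc) glfwGetProcAddress))
    {
        glfwTerminate();
        return 1;
    }

    std::printf("GL_VERSION: %s\n", (const char *) glGetString(GL_VERSION));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
On an NVIDIA desktop driver this typically reports something like "OpenGL ES 3.2 NVIDIA ...", since the driver exposes ES contexts natively (see the related question below).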

Related

GLFW fails to load all GL ES 2.0 functions

I am trying to set up GLFW to create a window and an OpenGL ES 2.0 context, as I need something out of the box to manage input callbacks etc.
The problem is this: if I use the following setup:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_CREATION_API, GLFW_NATIVE_CONTEXT_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
glfwindow = glfwCreateWindow(width, height, "Window Title", NULL, NULL);
if (!glfwindow)
{
    glfwTerminate();
    exit(1);
}
glfwMakeContextCurrent(glfwindow);
Using GLFW_NATIVE_CONTEXT_API for the GLFW_CONTEXT_CREATION_API hint gives me the following vendor info:
GL version: OpenGL ES 3.2 NVIDIA 368.69
GL vendor: NVIDIA Corporation
GL renderer: GeForce GTX 960M/PCIe/SSE2
GLSL version: OpenGL ES GLSL ES 3.20
Then it fails even to create a shader: glCreateShader() returns 0.
But if I use GLFW_EGL_CONTEXT_API for that hint instead, I get through most of the GL routines (shader loading, program compilation and linking, VBO setup, etc.), but then it fails on glDrawElements().
And if I print the vendor info with this setup, I see this:
GL version: 4.5.0 NVIDIA 368.69
GL vendor: NVIDIA Corporation
GL renderer: GeForce GTX 960M/PCIe/SSE2
GLSL version: 4.50 NVIDIA
So it is quite weird to me that when EGL is supposedly the underlying API for context creation, I get a desktop OpenGL context.
Also, if I try to retrieve the function pointer for glDrawElements manually, it returns null.
PFNGLDRAWELEMENTSPROC func =
    reinterpret_cast<PFNGLDRAWELEMENTSPROC>(eglGetProcAddress("glDrawElements"));
I would like to understand what the problem could be and how to use GLFW for GL ES context creation the right way.
As no one answered my own question, and because I figured out the root of the problem a long time ago, here is the explanation:
GLFW works just fine for GLES context initialization. The problem is with the libraries linked. One must not link with opengl32.lib when emulating GLES on Windows. In my case I use the PowerVR SDK and its ES2 and ES3 libs. Therefore, the following libs are mandatory:
glfw3.lib
libEGL.lib
libGLESv2.lib
In my linker settings, because by default I was using desktop OpenGL, I was also linking against opengl32.lib, which probably took precedence over the other GL libs. Make sure you exclude it when running GLES on a desktop platform.
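For MSVC builds, one way to express this directly in source (a sketch; the library names follow the list above and the SDK's lib directory must be on the linker search path) is with #pragma comment directives:
// MSVC-specific: pull in the GLES emulation libs instead of the desktop GL library.
#pragma comment(lib, "glfw3.lib")
#pragma comment(lib, "libEGL.lib")
#pragma comment(lib, "libGLESv2.lib")
// Deliberately absent: #pragma comment(lib, "opengl32.lib")
The same effect can be achieved by removing opengl32.lib from the project's linker input list.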

Zero OpenGL 3.2 pixel format matches found?

Today I finally found out what has been stalling my development process: even though no error code is set, the function wglChoosePixelFormatARB returns 0 pixel formats.
I am trying to set up an OpenGL context in my C++ application and I have managed to retrieve the function pointers for the extensions.
glGetIntegerv(GL_MAJOR_VERSION, &maj)
returns 4, so, naturally, I assumed it would be possible to create an OpenGL 3.2 context. However, after finding out there were no matches, I started to comment out some of the requirements in the attribList parameter. There were no matches whatsoever.
Only when I, just to be certain, commented out
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
I finally got matches. Out of the 8 pixel formats that meet the other requirements, not ONE of them seems to support version 3 of OpenGL.
Has anyone ever run into this? I have tried updating/reinstalling my video drivers, but nothing has changed. I am running this on Windows 7, MS Visual Studio 2008, and my graphics card is one from the AMD Radeon HD 7700 Series.
The WGL_CONTEXT_MAJOR_VERSION_ARB, WGL_CONTEXT_MINOR_VERSION_ARB and related attributes are not attributes of the Windows pixel format.
You must not use them with wglChoosePixelFormatARB().
Those options belong into the attribute list of wglCreateContextAttribsARB as defined by the WGL_ARB_create_context extension.
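A sketch of that split (assuming the two extension entry points were already fetched with wglGetProcAddress and a valid HDC is at hand; the attribute values are illustrative):
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h> // WGL_* tokens and function-pointer typedefs

// Pixel-format attributes go to wglChoosePixelFormatARB; the version/profile
// attributes go to wglCreateContextAttribsARB.
HGLRC createGL32Context(HDC hdc,
                        PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB,
                        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB)
{
    const int pixelAttribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,     32,
        WGL_DEPTH_BITS_ARB,     24,
        0 // no WGL_CONTEXT_*_VERSION_ARB entries in this list
    };

    int pixelFormat = 0;
    UINT numFormats = 0;
    if (!wglChoosePixelFormatARB(hdc, pixelAttribs, NULL, 1, &pixelFormat, &numFormats) || numFormats == 0)
        return NULL;

    PIXELFORMATDESCRIPTOR pfd = {};
    DescribePixelFormat(hdc, pixelFormat, sizeof(pfd), &pfd);
    SetPixelFormat(hdc, pixelFormat, &pfd);

    const int contextAttribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3, // the version request belongs here
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, contextAttribs);
}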

OpenCL half precision extension support on Apple OS X

Does anybody know the state of half precision floating point support in OpenCL as implemented by Apple?
According to the OpenCL 1.1 spec, the following statement should enable half2:
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
but when I come to build the kernel, the compiler throws a message such as
error: variable has incomplete type 'half4' (aka 'struct __Reserved_Name__Do_
The following thread asks a similar question: OpenCL half4 type Apple OS X
But that thread is old. Can anyone please tell me whether half precision is now supported by Apple?
When you want to know whether an extension is supported by a specific implementation (regardless of whether it's Apple's or another), just use the function
cl_int clGetPlatformInfo(cl_platform_id platform,
                         cl_platform_info param_name,
                         size_t param_value_size,
                         void *param_value,
                         size_t *param_value_size_ret)
passing the value CL_PLATFORM_EXTENSIONS for the param_name argument. It will return a space-separated list of extension names.
Note that this list must return the extensions "supported by all devices associated with this platform".
So even if the platform supports the cl_khr_fp16 extension, it won't appear in the list if your device doesn't support it.
To see the extensions available on your device, use
clGetDeviceInfo(...)
with the value CL_DEVICE_EXTENSIONS for the param_name argument.
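A minimal sketch of the device-side query (assuming the first platform and device are the ones of interest; on macOS the header is <OpenCL/opencl.h>):
#include <CL/cl.h> // <OpenCL/opencl.h> on macOS
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS)
        return 1;

    // Query the size first, then fetch the space-separated extension list.
    size_t size = 0;
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, NULL, &size);
    std::vector<char> extensions(size);
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, size, extensions.data(), NULL);

    const bool hasFp16 = std::strstr(extensions.data(), "cl_khr_fp16") != NULL;
    std::printf("cl_khr_fp16 %s\n", hasFp16 ? "supported" : "not supported");
    return 0;
}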
For a generic answer to OpenCL extension querying see CaptainObvious' answer above (https://stackoverflow.com/a/17425167/5394228).
I asked Apple Developer Support about this and they say that half support is available in Metal and there are no plans to add new functionality to OpenCL now. (they answered Nov 2017)

OpenCL-GL Interop memory not in sync

I'm having trouble with OpenCL-GL shared memory.
I have an application that works on both Linux and Windows. The CL-GL sharing works on Linux, but not on Windows.
The Windows driver says that it supports sharing, and the examples from AMD work, so it should work. My code for creating the context on Windows is:
cl_context_properties properties[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_(),
    CL_WGL_HDC_KHR,      (intptr_t) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (intptr_t) wglGetCurrentContext(),
    0
};
platform_.getDevices(CL_DEVICE_TYPE_GPU, &devices_);
context_ = cl::Context(devices_, properties, &CL::cl_error_callback, nullptr, &err);
err = clGetGLContextInfoKHR(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR, sizeof(device_id), &device_id, NULL);
context_device_ = cl::Device(device_id);
queue_ = cl::CommandQueue(context_, context_device_, 0, &err);
My problem is that the CL and GL memory in a shared buffer is not the same. I print them out (by memory mapping) and notice that they differ. Changing the data in the memory works in both CL and GL, but only changes that memory, not both (that is, both buffers seem intact, but not shared).
Also, clGetGLObjectInfo on the cl-buffer returns the correct gl buffer.
Update: I have found that if I create the OpenCL context on the CPU, it works. This seems weird, as I'm not using integrated graphics, and I don't believe the CPU is handling OpenGL. I'm using SDL to create the window; could that have something to do with this?
I have now confirmed that the OpenGL context is running on the GPU, so the problem lies elsewhere.
Update 2: OK, so this is weird. I tried again today, and suddenly it works. As far as I know I didn't install any new drivers before I shut down the computer yesterday, so I don't know what could have brought this about.
Update 3: Right, I noticed that changing the number of particles caused this to work. When I allocate so many particles that the shared buffer is slightly above one MB, it suddenly starts to work.
I solved the problem.
The OpenGL buffer object must be created after the OpenCL context has been created.
If it is created before, the OpenGL data cannot be shared.
I use a Radeon HD 5670 with ATI Catalyst 12.10.
It is probably an ATI driver problem, because the NVIDIA Computing SDK samples don't depend on the order.
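A sketch of that ordering on Windows (WGL), assuming an OpenGL context is current, the platform and device have already been selected for it, and a GL loader such as GLEW or glad provides the buffer entry points:
#include <windows.h>
#include <GL/glew.h> // or any loader that declares glGenBuffers/glBufferData
#include <CL/cl.h>
#include <CL/cl_gl.h>

cl_mem createSharedBuffer(cl_platform_id platform, cl_device_id device, size_t bytes)
{
    // 1) Create the OpenCL context that shares with the current GL context first...
    cl_context_properties props[] = {
        CL_CONTEXT_PLATFORM, (cl_context_properties) platform,
        CL_WGL_HDC_KHR,      (cl_context_properties) wglGetCurrentDC(),
        CL_GL_CONTEXT_KHR,   (cl_context_properties) wglGetCurrentContext(),
        0
    };
    cl_int err = CL_SUCCESS;
    cl_context context = clCreateContext(props, 1, &device, NULL, NULL, &err);

    // 2) ...then create the OpenGL buffer object...
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // 3) ...and only then wrap it for OpenCL.
    return clCreateFromGLBuffer(context, CL_MEM_READ_WRITE, vbo, &err);
}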

Do GLSL geometry shaders work on the GMA X3100 under OSX

I am trying to use a trivial geometry shader, but when run in Shader Builder on a laptop with a GMA X3100 it falls back and uses the software renderer. According to this document the GMA X3100 does support EXT_geometry_shader4.
The input is POINTS and the output is LINE_STRIP.
What would be required to get it to run on the GPU (if possible)?
uniform vec2 offset;

void main()
{
    gl_Position = gl_PositionIn[0];
    EmitVertex();
    gl_Position = gl_PositionIn[0] + vec4(offset.x, offset.y, 0, 0);
    EmitVertex();
    EndPrimitive();
}
From the docs you link to it certainly appears it should be supported.
You could try
int hasGEOM = isExtensionSupported("EXT_geometry_shader4");
If it returns in the affirmative you may have another problem stopping it from working.
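For reference, isExtensionSupported is not a standard GL call; a minimal sketch of such a helper (valid on pre-3.0 contexts like this one, where the GL_EXTENSIONS string is available) could look like:
#include <OpenGL/gl.h> // macOS; <GL/gl.h> elsewhere
#include <cstring>

// Returns 1 if `name` appears as a whole token in the space-separated GL_EXTENSIONS string.
int isExtensionSupported(const char *name)
{
    const char *extensions = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    if (!extensions || !name || !*name)
        return 0;
    const size_t len = std::strlen(name);
    for (const char *p = extensions; (p = std::strstr(p, name)) != NULL; p += len)
    {
        const char after = p[len];
        if ((p == extensions || p[-1] == ' ') && (after == ' ' || after == '\0'))
            return 1;
    }
    return 0;
}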
Also, according to the GLSL spec (1.20.8): "Any extended behavior must first be enabled. Directives to control the behavior of the compiler with respect to extensions are declared with the #extension directive."
I didn't see this directive in your code, so I suggest adding
#extension GL_EXT_geometry_shader4 : enable
at the top of your shader code block.
I've found this OpenGL Extensions Viewer tool really helpful in tracking down these sorts of issues. It will certainly allow you to confirm Apple's claims. That said, Wikipedia states that official GLSL support for geometry shaders is technically an OpenGL 3.2 feature.
Does anyone know if the EXT_geometry_shader4 implementation supports the GLSL syntax, or does it require some hardware or driver specific format?
Interestingly enough, I've heard that the compatibility claims of Intel regarding these integrated GPUs are sometimes overstated or just false. Apparently the X3100 only supports OpenGL 1.4 and below (or so I've heard, take this with a grain of salt, as I can't confirm this).
On my HP laptop with an Intel X3100, using Windows 7 x64 drivers (v8.15.10.1930, 9-23-2009) directly from Intel's website, the extension "EXT_geometry_shader4" (or any variation of it) is NOT supported. I've confirmed this programmatically and using the tool "GPU Caps Viewer" (which lists detected supported extensions, amongst other useful things). Since Windows tends to be the primary target of driver development from any vendor, it's unlikely the OSX driver is any better, and it may in fact have even fewer supported extensions.
