Problems with glext.h - Windows

I was just looking through the OpenGL updates on OS X Lion when I found something that now has me scared to use glext.h.
So, here's the bug. Lion's OpenGL.framework has a glext.h with the following definition.
typedef void *GLhandleARB;
But the glext.h from the OpenGL registry has the following instead.
typedef unsigned int GLhandleARB;
Now, the trouble is that when building for x86_64 on Lion we have sizeof(void*)==8, but sizeof(unsigned int)==4. So which do you trust? Lion's header, or the OpenGL registry's header? Well, of course you trust the system headers, since they presumably reflect the actual ABI: on 64-bit Lion, GLhandleARB is a 64-bit type.
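To make the mismatch concrete, here is a minimal check (a sketch; on Lion you'd include Apple's <OpenGL/gl.h> and <OpenGL/glext.h>, elsewhere the registry's <GL/glext.h>):

#include <stdio.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

int main(void) {
    /* Prints 8 with Lion's void* typedef, 4 with the registry's unsigned int. */
    printf("sizeof(GLhandleARB) = %zu\n", sizeof(GLhandleARB));
    return 0;
}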
Now, this raises a few issues in my mind about various platforms:
If you must use Apple's glext.h, but Apple's glext.h doesn't provide access to anything later than OpenGL 2.1, then how do you get at 3.0+ features on newer cards?
Is it unsafe to use the OpenGL registry's glext.h on Linux? Or must you use the system's glext.h there as well? In that case, question #1 applies here as well.
How the heck do you handle things on Windows, where there is never a glext.h on the system? You clearly can't use a driver vendor's glext.h, because different vendors may disagree on the sizes of various types. (Or is that not true?) What's the deal here?

I see no problem.
Just use the OS/driver-provided headers.
Or better, use a multi-platform OpenGL extension loader that will do the trick for you (e.g. GLEW).
On the other hand, in code you will only ever use GLhandleARB, nothing else, so on Mac it will be a void* - no problem; on Linux - something different - no problem; on Linux with AMD's header - something entirely different - no problem.
Source code is portable across different platforms; binaries are not. So I see no problem here.
1) You can't get a newer OpenGL than the version Apple ships. So currently the maximum on 10.7 is the OpenGL 3.2 core profile. (I've heard that Nvidia bypassed this on some GPUs with its own headers exposing OpenGL 3.3, but I have no way to check that myself.)
2) It depends. If you target OpenGL 2.1 and below, the open-source drivers support it, but higher versions are supported only by the proprietary drivers, so you should use their headers.
But in code you just write your #include and then link against the appropriate .so library.
3) I don't know how things stand on Windows, but vendors probably use the glext.h from the OpenGL registry.
But all of this rests on a wrong assumption: you DO NOT have to know the answers to these questions. Just use software that already knows how to handle this burden (e.g. GLEW).
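For example, a typical GLEW setup is just a few lines (a sketch; init_gl_extensions is a made-up name, and glewInit must be called after a GL context is current):

#include <stdio.h>
#include <GL/glew.h>

int init_gl_extensions(void) {
    GLenum err = glewInit();  /* requires a current OpenGL context */
    if (err != GLEW_OK) {
        fprintf(stderr, "glewInit: %s\n", (const char *)glewGetErrorString(err));
        return 0;
    }
    /* GLEW exposes extension availability as boolean variables. */
    if (GLEW_ARB_shader_objects) {
        /* GLhandleARB-based calls are safe here, with whatever size
           the platform headers chose for the type. */
    }
    return 1;
}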

You should use the official OpenGL query to find out which extensions the OpenGL implementation you are actually running against supports: glGetString(GL_EXTENSIONS)
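For example (a sketch; has_extension is a made-up helper, and a bare strstr can false-positive when one extension name is a prefix of another):

#include <string.h>
#include <GL/gl.h>

/* Requires a current OpenGL context. */
int has_extension(const char *name) {
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;
}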
As for which type you should use, I think this has already been answered Apple's mailing lists: http://lists.apple.com/archives/mac-opengl/2005/Nov/msg00182.html
Both; the spec doesn't make any claims about what a GLhandleARB is, other than that it's at least 32 bits wide. Note that in the OpenGL 2.0 shading language API there is no GLhandle type; it uses GLuint like textures. Also note that GLuint is not an unsigned int on Mac OS X, it's an unsigned long, so you're still screwed :)

Related

How to use the cl_khr_3d_image_writes extension to OpenCL in macOS Big Sur

Currently, I'm trying to enable the cl_khr_3d_image_writes extension for OpenCL on my M1 Mac; however, the cl_kernel.h file is read-only and can't be written to. I've disabled SIP, but the problem persists. What am I doing wrong?
It's not clear to me what editing the header file would achieve. Editing system headers is almost always a bad idea, and when you find yourself wanting to do that, it's usually a good idea to take a step back and think about what you really are trying to achieve. There's almost always a better way than editing a system header.
So, you want to use 3D image writes.
Does your device report support for this extension? This is the first thing to check, and you should always check this on the end user's system too before trying to use an extension or you'll have more difficult error handling to deal with down the line.
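A host-side check might look like this (a sketch; device_supports is a made-up helper, you're assumed to already hold a cl_device_id, and the fixed-size buffer is a simplification):

#include <string.h>
#include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */

int device_supports(cl_device_id dev, const char *ext) {
    char exts[8192];
    if (clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS,
                        sizeof exts, exts, NULL) != CL_SUCCESS)
        return 0;
    return strstr(exts, ext) != NULL;
}

/* e.g. device_supports(dev, "cl_khr_3d_image_writes") */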
macOS supports OpenCL 1.2, which has direct support for the 3D image write functions. When creating your context, make sure you create a version 1.2 compatible context, not version 1.0/1.1.
In OpenCL 1.2, all you should need to do, if the device supports the extension, is to enable it and call the built-in functions to perform the writes.
To enable use of the extension in your kernel, use #pragma OPENCL EXTENSION cl_khr_3d_image_writes : enable
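In the kernel source that looks something like this (a sketch of a trivial 3D write; fill3d and its argument are made up for illustration):

#pragma OPENCL EXTENSION cl_khr_3d_image_writes : enable

__kernel void fill3d(__write_only image3d_t img) {
    int4 p = (int4)(get_global_id(0), get_global_id(1), get_global_id(2), 0);
    write_imagef(img, p, (float4)(1.0f, 0.0f, 0.0f, 1.0f));
}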
I should point out that OpenCL is deprecated on macOS, and it is being replaced by Metal compute shaders. When developing new software, it's recommended that you use those instead.

Any downsides of moving from GDI+ to OpenGL?

I recently moved the rendering part of a program of mine from GDI+ to OpenGL.
Now I'm wondering: are there any downsides to doing so?
For example, are there any versions of Windows (XP or later) that support GDI+ but not OpenGL?
Or, for example, is it possible for a lack of drivers (or poor drivers), or a lack of a graphics card, etc. to make OpenGL rendering impossible on a system on which GDI+ works fine?
(I understand that OpenGL might need to resort to software rendering on less capable systems, but aside from slowness, I'm wondering whether it would ever simply not work correctly in a situation in which GDI+ would.)
It depends on the OpenGL version/profile you're using. Up to and including Windows XP, OpenGL-1.1 is available by default without additional drivers. Since Windows Vista the minimum available version is OpenGL-1.4.
However if you need anything more than that, you're relying on the user installing the drivers that come from the GPU vendor; the drivers installed by default in a standard Windows installation don't cover OpenGL (for not perfectly sane reasons).
Programs and libraries that strongly depend on OpenGL-ES have resorted to incorporating fallbacks like ANGLE.
There are some idiosyncrasies. For example, you cannot create a transparent OpenGL window if desktop transparency is disabled (which under XP means not at all). Otherwise, as datenwolf notes, there's ANGLE, but even that does not always work. Another option might be Mesa3D compiled for the Windows target with software rendering enabled. This option might be the safest one, and faster than the software OpenGL 1.1 implementation from Microsoft.
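If you want to detect at run time whether you landed on Microsoft's software implementation, a common heuristic is to inspect the renderer string (a sketch; is_software_opengl is a made-up name, and a current context is required):

#include <string.h>
#include <GL/gl.h>

/* Microsoft's software fallback identifies itself as "GDI Generic". */
int is_software_opengl(void) {
    const char *r = (const char *)glGetString(GL_RENDERER);
    return r && strstr(r, "GDI Generic") != NULL;
}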

how to write cross-version/platform Linux kernel modules?

I'm new to programming Linux kernel modules, and many getting-started guides on the topic say little about how to build a kernel module that will run on many Linux versions and CPU platforms. Most of the guides I've seen simply state things like, "Linux doesn't ensure any ABI/API compatibility between versions." However, other OSes do provide these guarantees for major versions, and the guides mostly target 2.6 (which is a bit old now).
I was wondering if there is any kind of ABI/API compatibility now, or if there are any standard ways to deal with versioning other than isolating the kernel-dependent bits of my code into files with a ton of preprocessor directives. (Also, are there any standard preprocessor symbols I should be using in the second case?)
There isn't a stable ABI for the kernel, and most likely there never will be, because it'd make Linux suck. The reasons for not having one are all pretty much documented in the kernel tree (see Documentation/stable_api_nonsense.txt).
The best way to deal with this is to get your driver merged upstream where it'll be maintained by other kernel developers.
As to being cross-platform, that pretty much comes free with the Linux kernel as long as you only use the standard, platform-independent functions provided in the API.
Linux, the yin and the yang. tangrs' answer is good; it answers your question. However, there are the Linux compat projects; see the backports wiki. Basically, these are libraries that provide shim functionality for newer Linux APIs, which you can link your code against. The KERNEL_VERSION macro that Eugene notes is inspected in a compat.h, and the appropriate compat-2.6.38.h, etc. are included, where each version provides either macros and/or library functions that implement a forward API.
This lets the Linux wifi group write code for the bleeding-edge kernel while still making it possible to compile on older kernel versions.
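The same KERNEL_VERSION gate works for hand-rolled compatibility code too, e.g. (a sketch):

#include <linux/version.h>

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 38)
/* call the newer API here */
#else
/* fall back to the older API here */
#endif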
I guess this answers the question, "are there any standard ways to deal with versioning?" The compat library is not a panacea, but at least it is there and under development.
Open source - There are many mutations. They all have a different plan.

How can I debug an OpenCL kernel in Xcode 4.1?

I have some OpenCL kernels that aren't doing what they should be, and I would love to debug them in Xcode. Is this possible?
If not, is there any way I can use printf() in my CPU-based kernels? When I use printf() in my kernels the OpenCL compiler always gives me a whole bunch of errors.
Casting the format string to const char * appears to fix this problem.
This works for me on Lion:
printf((char const *)"%d %d\n", dl, dll);
This has the error described above:
printf("%d %d\n", dl, dll);
Have you tried adding this pragma to enable printf?
#pragma OPENCL EXTENSION cl_amd_printf : enable
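Combining the pragma with the cast workaround above, a debug kernel might look like this (a sketch; cl_amd_printf is only available where the implementation exposes it, and debug_dump is a made-up name):

#pragma OPENCL EXTENSION cl_amd_printf : enable

__kernel void debug_dump(__global const float *data) {
    int i = (int)get_global_id(0);
    printf((char const *)"data[%d] = %f\n", i, data[i]);
}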
You might also want to try using Quartz Composer to test out your kernels. If you have access to the WWDC 2010 videos, I believe they show how to use Quartz Composer for rapid prototyping of OpenCL kernels in Sessions 416: "Harnessing OpenCL in Your Application" or 418: "Maximizing OpenCL Performance". There were also some good sessions on this during WWDC 2009 and 2008 that might also be available via ADC on iTunes.
Using Quartz Composer, you can quickly set up inputs and outputs for a kernel, then monitor the results in realtime. You can avoid the change-compile-test cycle because everything is compiled as you type. Syntax errors and the like will pop up as you change code, which makes it fairly easy to identify those.
I've used this tool to develop and test out OpenGL shaders, which have many things in common with OpenCL kernels.
Have you given gDEBugger a try already? I think it's currently the only choice you have for OpenCL debugging on the Mac.
Intel offers a printf in their new OpenCL 1.1 SDK, but that's only for Linux and Windows. Lion has OpenCL 1.1, but at least my Core 2 Duo does not support the printf extension.
AMD is still developing its OpenCL tools, and the Nvidia debugging tools are only for CUDA, as far as I understand.

How to do OpenGL 3 programming on OS X with a GeForce 9400

I have a MacBook Pro with a GeForce 9400 graphics card. Wikipedia said this card supports OpenGL 3.
But the header and library shipped with OS X 10.6 seem to be OpenGL 2 only (I checked the files in /usr/X11/include/).
I need to do some OpenGL 3 programming. Can I do it with my current hardware and OS? What do I need to get and install?
Sadly, I don't think you can yet, as detailed here.
I believe Lion will upgrade OS X to OpenGL 3.2, though (which is still short of the more useful 3.3, unfortunately).
NB: I do not own a Mac; this is purely from trying to learn modern OpenGL on the Windows side and digging around to understand how portable it would be.
Edit: this thread on the official OpenGL forums has more detail. Although (see comments below this answer) it may not be completely clear why vendors cannot provide OpenGL 3+ compliant drivers, it seems pretty clear that there is no way to use fully OpenGL 3.3 compliant code and shaders in OS X. Some workarounds are provided in that thread however, as well as in my first link.
The best place to check OpenGL support on the various OSX and Mac combinations is:
http://developer.apple.com/graphicsimaging/opengl/capabilities/
See the "Core" subpage for 10.7+
OpenGL 3.2 with GLSL 1.5 on 10.7.2 isn't too bad.
Your current hardware can support OpenGL 3, but not the OS. Mac OS X 10.7 (Lion) should support OpenGL 3, which is a solution only if you can wait many months.
Your only option right now is to switch to a different OS such as Windows or Linux. You'll have to boot from this other operating system, because the virtual machine systems present a virtual video card to the guest operating systems, and none have OpenGL 3 compatible virtual video cards.
(Disclaimer: This information is based on taking Windows OpenGL and replacing wgl with glX. But I did verify that the corresponding extensions exist in GLX land)
You won't find OpenGL 3 support in any header files. Rather you need the GLX_ARB_create_context extension.
The other answers are probably correct about missing support in OSX, but even when support comes, you'll have to use glXGetProcAddress and load the extension. (Can't video card manufacturers add support for these extensions through their driver? Why does it require "OS support"?)
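On GLX, the shape of it is roughly this (a sketch; create_gl3_context is a made-up helper, and the Display/GLXFBConfig setup and error handling are omitted):

#include <GL/glx.h>
#include <GL/glxext.h>

GLXContext create_gl3_context(Display *dpy, GLXFBConfig fbc) {
    /* Load the entry point at run time, then ask for a 3.2 context. */
    PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddress(
            (const GLubyte *)"glXCreateContextAttribsARB");
    if (!glXCreateContextAttribsARB)
        return NULL;  /* extension not available */
    int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
    };
    return glXCreateContextAttribsARB(dpy, fbc, NULL, True, attribs);
}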
Windows OpenGL developer here. On Windows 7 only OpenGL 1.4 is officially supported, but everyone gets around this limitation by querying which functions are available at run-time.
On OSX I expect you can do the same thing. The easiest way to do this is with The OpenGL Extension Wrangler Library: http://www.opengl.org/sdk/libs/GLEW/
