Every tutorial on OpenGL in Win32 that I read instructs me to use the wgl functions wglCreateContext() and wglMakeCurrent().
Is this the only way of doing OpenGL in a Windows environment?
Are these functions part of the OpenGL API or the Window API?
Is this the only way of doing OpenGL in a Windows environment?
Yes, aside from getting a library like GLFW to do it for you (and take care of the cross-platform issues).
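To give a feel for that route, here is a minimal sketch using GLFW 3 (my example, not from the question); GLFW does the wgl/glX/EGL work behind one call:

    #include <GLFW/glfw3.h>   /* pulls in an OpenGL header by default */

    int main(void) {
        if (!glfwInit())
            return -1;

        /* GLFW performs the platform-specific context creation here. */
        GLFWwindow *window = glfwCreateWindow(640, 480, "demo", NULL, NULL);
        if (!window) {
            glfwTerminate();
            return -1;
        }
        glfwMakeContextCurrent(window);   /* the portable "make current" */

        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT); /* ordinary OpenGL from here on */
            glfwSwapBuffers(window);
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }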
Are these functions part of the OpenGL API or the Window API?
The Windows API.
OpenGL specifies some semantics of OpenGL contexts (that they can be created, made current on a thread, and are required for any OpenGL commands to work), but does not specify an API for them, so context creation is system dependent. Windows has wglCreateContext(), and Linux (under X11) has glXCreateContext().
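For comparison, a minimal sketch of the raw wgl route, assuming hwnd is a window you have already created with CreateWindowEx (error handling omitted):

    #include <windows.h>
    #include <GL/gl.h>        /* link against opengl32.lib */

    void setup_gl(HWND hwnd) {
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;

        HDC hdc = GetDC(hwnd);
        SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

        HGLRC ctx = wglCreateContext(hdc);  /* Windows API: create the context */
        wglMakeCurrent(hdc, ctx);           /* Windows API: make it current    */

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f); /* OpenGL proper starts here */
    }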
Related
I'm writing an app that uses Vulkan/OpenGL for rendering and GLFW for windowing. By default this does not support the precision gestures that modern Windows (8 and later, I guess) supports, as GLFW has no support for them.
Going through the Win32 API docs, it seems that to add precision gesture manipulation to Win32 apps, one needs to use the Direct Manipulation API. Does this require using a DirectX rendering backend? I've gone through the example app provided here, but that uses DirectX by default. I could copy over the framebuffers and whatnot, but I was wondering whether using DirectX is a requirement.
I'm building an application with OpenGL ES 2.0 and SDL2 for Android. Does SDL_GL_GetProcAddress work with OpenGL ES 2.0 on Android? Also, I know OpenGL ES 2.0 is a subset of OpenGL, so with this method can it run on desktop systems too?
From a quick browse of the SDL repository it should be.
SDL_video.c defines the implementation of SDL_GL_GetProcAddress simply to check that you've started OpenGL and then to call _this->GL_GetProcAddress, where _this is a global instance of the video driver.
SDL_androidvideo.c sets its GL_GetProcAddress to be Android_GLES_GetProcAddress, which is a preprocessor substitution for SDL_EGL_GetProcAddress.
So, so far: if you call SDL_GL_GetProcAddress, you'll get through to SDL_EGL_GetProcAddress.
SDL_egl.c implements SDL_EGL_GetProcAddress but declines to call eglGetProcAddress on Android. This looks like it's probably an error — the reason given is this bug but the status for that bug switched to 'Released' in June 2013, which I believe means that this has been fixed in Android for more than three years.
That aside, the fallback is to use SDL_LoadFunction, first with the direct function name, then with it preceded by an underscore, provided it's short enough to fit into the statically declared buffer. Which this one is.
(so, caveat: SDL_GL_GetProcAddress is definitely not thread-safe, even if you've taken appropriate share group steps to use multiple GL contexts, but if you're writing an SDL program you probably don't care)
Android should be using the dlopen version of SDL_sysloadso, so it looks like SDL_LoadFunction is implemented directly as a call to dlsym, which has no issues that I'm aware of under Android.
So, in summary: yes, that call should work. It'll use the platform-specific dynamic library loader rather than the EGL call, though it probably doesn't need to, but that's just an implementation detail.
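For completeness, a minimal usage sketch; the glDiscardFramebufferEXT entry point is just an illustrative extension function, not something your app necessarily needs, and you should check the result for NULL before calling it:

    #include <SDL.h>
    #include <SDL_opengles2.h>

    /* Function-pointer type for the GL_EXT_discard_framebuffer entry point,
     * used here purely as an example of a function looked up at runtime. */
    typedef void (*DiscardFramebufferFn)(GLenum target,
                                         GLsizei numAttachments,
                                         const GLenum *attachments);

    /* Call this only after SDL_GL_CreateContext() has succeeded. */
    static DiscardFramebufferFn load_discard_framebuffer(void) {
        /* Resolved via eglGetProcAddress or dlsym, as traced above. */
        return (DiscardFramebufferFn)SDL_GL_GetProcAddress("glDiscardFramebufferEXT");
    }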
I have a small cross-platform engine that runs my OpenGL ES 2.0 games on Android and on Windows. To run it on Windows I am using the PowerVR emulator (just libraries linked to the project). It all works well.
Now I would like to debug it and inspect it in an OpenGL debugger. I tried Intel GPA, AMD CodeXL, gDEBugger, and glslDevil, but none of them were able to do it. Intel GPA did not find the running game; the others started the game but failed to pause it or do anything afterwards.
I do not know whether it is because it is OpenGL ES instead of OpenGL, but the PowerVR emulation must work by translating OpenGL ES to OpenGL, I think?
My questions are:
Is there any (utility) way to debug OpenGL ES 2.0 programs on Windows?
Or is there any better emulation library than PowerVR that will make the app look like an OpenGL app to other tools (instead of OpenGL ES)?
I am doing all this because none of the debuggers work for me on the Android device. I am developing with a Samsung Galaxy Tab (which has a Tegra GPU), but Nvidia's PerfHUD ES does not currently support it (and I also do not meet the Android 4.0 or higher requirement, having only 3.1).
Is there any way to debug OpenGL ES on an Android device that runs Android 3.1 and is a Samsung Galaxy Tab?
Thanks
You're correct - PVRVFrame translates OpenGL ES calls into host OpenGL calls. This is why the likes of gDEBugger will capture the OpenGL API calls made by the emulator rather than the calls you actually submitted.
The PowerVR SDK includes an OpenGL ES/EGL API recording tool called PVRTrace that has all of the functionality you're looking for.
The PVRTrace recording libraries can be used to record applications using PVRVFrame on Windows and Linux. The SDK also includes recording libraries for Android and Linux devices.
PVRTraceGUI (the analysis tool for Windows, OS X & Linux) can be used to review and inspect the data you've recorded. It also has an Image Analysis widget that allows you to step through the draw calls in your recording, and some other handy features, such as a Pixel Analysis pie chart that highlights the most costly fragment shaders in your render so you know where to focus shader optimisation.
There's also a PVRTrace standalone playback tool that allows you to replay your recordings on any of the supported OS's (inc. Windows & Android).
You can find an overview of the tool on the Imagination website here, and you can download PVRTrace through the PowerVR SDK installer, available here.
I routinely debug OpenGL ES on Windows using the PowerVR VFrame translator, which converts OpenGL ES calls to OpenGL, as you said. I think it's the best solution. VFrame has some step and tracing features, but mostly I am using the debugging features of MSVC++.
If you are using GLSurfaceView on Android, it has an OpenGL ES tracing feature too. I also recommend using an x86 AVD rather than ARM or trusting the drivers on any one device. This article explains it in detail:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
As with toolkits such as Qt, wxWidgets and the like, how does an API designer provide an API that is the same everywhere, even though it calls totally different system calls to do so? For example, on Windows you have to mess around with a whole lot of functions in GDI. On Linux you have to mess around with a whole lot of functions in Xlib and whatever other layers the distribution has on top of it. So how can you design a widget kit that unifies all that functionality, so that, say, CreateWindow() will create a window on any platform? I don't comprehend how this can be done.
Instead of using Xlib or GDI, you could use something that is more universal. For example, you could use OpenGL, which is supported everywhere. I think that is what Blender's UI does.
Some toolkits can be modified to use some kind of backend for each platform they support. This is basically what Qt does. On Mac OS X, Qt apps use Cocoa as a backend. Qt for OS X was made specifically for that OS. However, there are other Qt implementations on other platforms, so that's what makes Qt work on more than one platform. SWT for Java works the same way (using the OS's native toolkit as a backend).
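As a rough illustration of that per-platform backend idea (a hypothetical sketch with printf stubs, not Qt's actual code):

    #include <stdio.h>

    /* The toolkit-internal backend interface; each platform build supplies
     * one concrete implementation (Win32/GDI, Xlib, Cocoa, ...). */
    typedef struct {
        void (*create_window)(const char *title, int w, int h);
    } Backend;

    /* Stub "Win32" backend: a real toolkit would call CreateWindowEx/GDI here. */
    static void win32_create_window(const char *t, int w, int h) {
        printf("[win32] CreateWindowEx for '%s' (%dx%d)\n", t, w, h);
    }

    /* Stub "X11" backend: a real toolkit would call XCreateWindow here. */
    static void x11_create_window(const char *t, int w, int h) {
        printf("[x11] XCreateWindow for '%s' (%dx%d)\n", t, w, h);
    }

    /* The one public API the application sees on every platform. */
    static const Backend *active;
    void CreateWindowPortable(const char *title, int w, int h) {
        active->create_window(title, w, h);
    }

    int main(void) {
        static const Backend win32_backend = { win32_create_window };
        static const Backend x11_backend   = { x11_create_window };
    #ifdef _WIN32
        active = &win32_backend;
        (void)x11_backend;
    #else
        active = &x11_backend;
        (void)win32_backend;
    #endif
        CreateWindowPortable("demo", 640, 480);
        return 0;
    }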
Other toolkits can use some kind of high-level layer to render. For example, Swing for Java is rendered purely using Java APIs, and is not tied to any platform at all.
I am trying to figure out the relationship between CGL and OpenGL on the Mac platform.
More specifically, about the context: do they share a context? If yes, how? Please give me a link to some related examples.
If no, then are there two contexts working in Core Animation applications which make use of OpenGL?
I am very confused by the way OpenGL is used on the Mac. Can somebody clarify?
CGL sets up device-specific contexts suitable for OpenGL to render to. Compare to wgl and glX on Windows and X respectively. CGL understands how to query the graphics hardware for its pixel format, and then how to set up and configure a context (e.g. double-buffered or single-buffered, what depth, stencil, and accumulation buffer sizes, etc.). But it doesn't provide functions to draw in that context. Once you have created the context with CGL, you make it current, and then you can call OpenGL to render in that context.
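A minimal CGL sketch of that split (a headless context with no drawable attached; in a real app you'd render into an NSOpenGLView or an FBO, and error handling is omitted):

    #include <OpenGL/OpenGL.h>   /* CGL: context and pixel-format management */
    #include <OpenGL/gl.h>       /* OpenGL: the actual drawing calls         */

    int main(void) {
        /* CGL side: describe and pick a pixel format... */
        CGLPixelFormatAttribute attrs[] = {
            kCGLPFADoubleBuffer,
            kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
            (CGLPixelFormatAttribute)0
        };
        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        CGLChoosePixelFormat(attrs, &pix, &npix);

        /* ...then create the context and make it current. */
        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
        CGLSetCurrentContext(ctx);

        /* OpenGL side: ordinary GL calls now target that context. */
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        CGLSetCurrentContext(NULL);
        CGLDestroyContext(ctx);
        return 0;
    }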
In Core Graphics (do not confuse it with CGL), both context initialization and drawing into the context are handled by the same framework. But because OpenGL is an open standard and designed to be cross-platform, the rendering functionality and the device context functionality have been abstracted into separate frameworks.
CGL is the low-level interface to OpenGL on a Mac. You probably don't want to be using it if you are writing an OpenGL Mac app. I am currently in the process of creating an intuitive OpenGL Mac application template for Xcode 4, but in the meantime you can look at https://github.com/mk12/Pong-Ultimate, a pong clone I made using OpenGL. It uses NSOpenGL, a higher-level Cocoa interface to OpenGL.
You may also find the Apple docs helpful: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_intro/opengl_intro.html.