Profiling OpenGL ES in Windows

I'm trying to do some profiling on my OpenGL ES code. Something in my GPU pipeline (a shader, I believe) is causing a huge delay. Which is the best profiler I can use? Is this one a good option? Is there one I can use directly within Visual Studio?

If you have a GPU performance issue on iOS, the best approach is to use the Xcode tools to profile it directly on the device: run the app from Xcode, then do a frame capture to look at the timings for each draw call and the number of cycles used by each shader (more info here).
You can also profile on Windows if you are able to simulate your graphics pipeline in classic OpenGL in your Windows version, but this may not be a good idea, as the iPhone's GPU is very different from a classic desktop GPU, so the bottleneck might not be the same on Windows as on iOS.
To profile on Windows I would suggest using either NVIDIA PerfKit (if you have an NVIDIA card) or AMD's GPU PerfStudio (if you have an AMD card).
There is also RenderDoc, which is a nice tool, but I am not sure it provides much profiling information (it is more for debugging graphics issues than for profiling).
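Whichever tool you end up with, a cheap way to narrow a slow pass down is a GPU timer query. A minimal sketch, assuming a desktop GL 3.3+ context and a loader such as GLEW; on OpenGL ES 2.0 the equivalent is the EXT_disjoint_timer_query extension, where the driver exposes it. drawSuspectPass is a hypothetical placeholder for your own rendering code:

```cpp
#include <cstdio>
// Assumes a GL loader (GLEW/glad) providing the GL 3.3 entry points
// and a current context.

extern void drawSuspectPass();   // hypothetical: the pass you suspect is slow

void timeSuspectPass() {
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    drawSuspectPass();
    glEndQuery(GL_TIME_ELAPSED);

    // Blocks until the GPU has finished; fine for a one-off measurement.
    // In a real frame loop, poll GL_QUERY_RESULT_AVAILABLE instead.
    GLuint64 ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
    std::printf("pass took %.3f ms\n", ns / 1e6);

    glDeleteQueries(1, &query);
}
```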

Related

Application Compilation using Graphics Card

During the Microsoft Windows 10 Devices event, Panos Panay, whilst talking about the Surface Book's graphics, said the following:
It's for that coder, using the latest Visual Studio where they can compile using the GPU and CPU at the same time and not lose a minute (Video)
This could just be a throwaway comment, but given that it is possible to do CPU-type work on the GPU (CUDA?), I wondered if he was actually talking about a genuine way to make Visual Studio use both the CPU and the GPU to perform application compilation.
Looking online, I can't see an obvious answer. Is this possible?
If they use something like C++ AMP underneath, then that is exactly what it is designed to do: use the CPU and GPU together (heterogeneous computing).
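For context, C++ AMP ships with Visual Studio 2012+ and lets plain C++ offload data-parallel loops to the GPU, falling back to a WARP software accelerator when no suitable GPU is available. A minimal sketch of the idea (not of compilation itself); the vector contents and size are arbitrary:

```cpp
#include <amp.h>
#include <vector>
#include <cstdio>
using namespace concurrency;

int main() {
    std::vector<float> v(1024, 2.0f);

    // Wrap the host data so the runtime can move it to the accelerator.
    array_view<float, 1> av(static_cast<int>(v.size()), v);

    // restrict(amp) marks the lambda as compilable for the GPU.
    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] = av[i] * av[i];
    });

    av.synchronize();   // copy results back into the host vector
    std::printf("v[0] = %f\n", v[0]);
}
```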

Deploying Qt5 on Windows without Hardware Acceleration

Qt5 can use either the OpenGL driver or the DirectX driver by using ANGLE. As we cannot depend on an installed OpenGL driver, we need to use the ANGLE backend. Unfortunately, this doesn't solve all deployment problems, especially on Windows virtual machines without hardware acceleration. On these systems, we get an error message saying that the creation of an OpenGL context failed.
Screenshot: Failed to create OpenGL context for format QSurfaceFormat
We're deploying all required libraries (libEGL.dll, libGLESv2.dll, libeay32.dll, msvcp110.dll, msvcr110.dll, d3dcompiler_46.dll), but we're still getting this error message.
How do you deploy a QML application that needs to run on end user machines without OpenGL driver and on (virtual) machines without Direct3D Acceleration?
There is a page on the Qt wiki mentioning this problem, but that's not very helpful for solving it.
Update for Qt 5.4.0:
My findings so far are (the same switches can also be set from code; see the sketch after this list):
Setting QT_ANGLE_PLATFORM=warp -> creates a window without content.
Setting QT_ANGLE_PLATFORM=d3d9 -> same error dialog, as expected.
Setting QT_ANGLE_PLATFORM=d3d11 -> same error dialog, as expected.
Setting QT_OPENGL=desktop -> same as QT_ANGLE_PLATFORM=warp.
Setting QT_OPENGL=angle -> same error dialog, as expected.
Setting QT_OPENGL=software + opengl32sw.dll (mesa for windows) -> unpredictable: May run, may crash, may show the error dialog.
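For reference, the same switches can be forced from code rather than the environment; a minimal sketch, assuming a Qt 5.4+ dynamic OpenGL build, where the attribute must be set before the QGuiApplication is constructed:

```cpp
#include <QGuiApplication>
#include <QtGlobal>

int main(int argc, char *argv[])
{
    // Equivalent to QT_OPENGL=software (loads opengl32sw.dll if present).
    QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL);
    // Alternatives:
    //   QCoreApplication::setAttribute(Qt::AA_UseOpenGLES);      // ANGLE
    //   QCoreApplication::setAttribute(Qt::AA_UseDesktopOpenGL); // desktop GL
    // The ANGLE sub-backend is still chosen via the environment:
    qputenv("QT_ANGLE_PLATFORM", "warp");   // or "d3d9" / "d3d11"

    QGuiApplication app(argc, argv);
    // ... load your QML scene here ...
    return app.exec();
}
```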
Update for Qt Quick 2D Renderer
Although Mesa seems to be a partial solution, this configuration crashes very often in Qt 5.4.0.
Another fallback could be the Qt Quick 2D Renderer, but unfortunately this crashes too.
Copying softwarecontext.dll into /scenegraph + Setting QMLSCENE_DEVICE=softwarecontext -> crash
Update after some user experience:
Angle:
Has render bugs on some systems
Does not work reliably on all systems
Angle with Warp:
Not reliable
Desktop OpenGL:
The default opengl32.dll implements OpenGL 1.1, which is too old.
Not reliable, even if the OpenGL version is OK.
Has render bugs when used by Qt.
QtQuick2dRenderer:
Has some major render issues
Crashes, freezes
Works on systems without HW acceleration
Mesa OpenGL Backend:
Seems to be quite reliable at the moment
Quite slow in general, very slow on some systems.
Heavy deployment weight.
Conclusion: there is still no real solution for these systems
Update for Qt 5.5
Anno 2015: broken graphics drivers are still broken.
My conclusion for the moment is:
Use QtQuick2dRenderer if possible.
Use Mesa backend otherwise.
Skip Angle, skip Desktop OpenGL, skip Warp.
Qt 5 has huge compatibility issues with OpenGL on some hardware configurations.
The combination of an Intel HD 3000 driver and an Nvidia/ATI card won't work on Windows 10.
https://bugreports.qt.io/browse/QTBUG-42240
Intel has dropped support for this card, but their drivers have a bug that leads to a crash.
You cannot rely on hardware OpenGL if you want to support customers with HD 3000 graphics.
Under Windows, opengl32.dll provides the default OpenGL implementation. It implements OpenGL 1.1 (a really old version).
ANGLE has a baseline of OpenGL ES 2.0 and needs DirectX 9/11 installed to map the calls onto.
So if you have a video card with no OpenGL driver installed, an OpenGL driver older than 2.0, and/or DirectX 9/11 not installed, your app is not going to work.
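A quick way to see which of these cases a given machine hits is to create a throwaway context and print what you actually got. A minimal diagnostic sketch using Qt's own classes (Qt 5.x):

```cpp
#include <QGuiApplication>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QOffscreenSurface>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QOpenGLContext ctx;
    if (!ctx.create()) {        // the same failure the error dialog reports
        qWarning() << "OpenGL context creation failed";
        return 1;
    }

    QOffscreenSurface surface;
    surface.setFormat(ctx.format());
    surface.create();
    ctx.makeCurrent(&surface);

    qDebug() << "Version:" << ctx.format().majorVersion()
             << "." << ctx.format().minorVersion()
             << "OpenGL ES:" << ctx.isOpenGLES()
             << "Renderer:" << reinterpret_cast<const char *>(
                    ctx.functions()->glGetString(GL_RENDERER));
    return 0;
}
```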
In regards to virtualization and 3D acceleration, these may be worth a read:
Why does Qt Creator 3.0.0 Welcome Mode not work in VM?
https://bugreports.qt.io/browse/QTBUG-34964
Also, if you run a multi-monitor Windows environment under VirtualBox, 3D acceleration will be disabled.
I re-checked this to see if these problems have been fixed by the latest release, Qt 5.12.2, but they have not. The approach described in the Qt wiki entry the OP referenced
(https://wiki.qt.io/Qt_5_on_Windows_ANGLE_and_OpenGL) sounds good, but in practice it simply doesn't work.
My conclusion: avoid OpenGL on Qt in any form. It's just too unreliable.

Debugging OpenGL ES 2.0 game that runs in Windows through PowerVR emulation

I have a small cross-platform engine that runs my OpenGL ES 2.0 games on Android and on Windows. To run it on Windows I am using the PowerVR emulator (just libraries linked to the project). It all works well.
Now I would like to debug and inspect it in an OpenGL debugger. I tried Intel GPA, AMD CodeXL, gDEBugger, and glslDevil, but none of them were able to do it. Intel GPA did not find the running game; the others started the game but failed to pause it or do anything afterwards.
I do not know whether that is because it is OpenGL ES instead of OpenGL, but the PowerVR emulation must work by translating OpenGL ES to OpenGL, I think?
My questions are:
Is there any (utility) way to debug OpenGL ES 2.0 programs on Windows?
Or is there any better emulation library than PowerVR that will make the app look like OpenGL to other tools (instead of OpenGL ES)?
I am doing all this because none of the debuggers work for me on an Android device. I am developing with a Samsung Galaxy Tab (which has a Tegra GPU), but Nvidia's PerfHUD ES does not currently support it (and I also do not meet its Android 4.0 requirement, having only 3.1).
Is there any way to debug OpenGL ES on an Android 3.1 device such as the Samsung Galaxy Tab?
Thanks
You're correct - PVRVFrame translates OpenGL ES calls into host OpenGL calls. This is why the likes of gDEBugger will capture the OpenGL API calls made by the emulator rather than the calls you actually submitted.
The PowerVR SDK includes an OpenGL ES/EGL API recording tool called PVRTrace that has all of the functionality you're looking for.
The PVRTrace recording libraries can be used to record applications using PVRVFrame on Windows and Linux. The SDK also includes recording libraries for Android and Linux devices.
PVRTraceGUI (the analysis tool for Windows, OS X & Linux) can be used to review and inspect the data you've recorded. It also has an Image Analysis widget that allows you to step through the draw calls in your recording, and some other handy features, such as a Pixel Analysis pie chart that highlights the most costly fragment shaders in your render so you know where to focus shader optimisation.
There's also a standalone PVRTrace playback tool that allows you to replay your recordings on any of the supported OSes (including Windows and Android).
You can find an overview of the tool on the Imagination website here, and you can download PVRTrace through the PowerVR SDK installer, available here.
I routinely debug OpenGL ES on Windows using the PowerVR VFrame translator, which converts OpenGL ES calls to OpenGL, as you said. I think it's the best solution. VFrame has some step and tracing features, but mostly I am using the debugging features of MSVC++.
If you are using GLSurfaceView on Android, it has an OpenGL ES tracing feature too. I also recommend using an x86 AVD rather than ARM or trusting the drivers on any one device. This article explains in detail:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1

Emulate OpenGL on machine with standard VGA graphics

So, we've got a little graphical doohickey that needs to run in a server environment without a real video card. All it really needs is framebuffer objects and maybe some vector/font anti-aliasing. It will be slow, I know. It just needs to output single frames.
I see this post about how to force software rendering mode, but it seems to apply to machines that already have OpenGL-enabled cards (like NVIDIA).
So, for fear of trying to install OpenGL on a machine three time zones away with a bunch of live production sites on it: has anybody tried this and/or does anybody know how to "emulate" an OpenGL environment? Unfortunately our dev server HAS a video card, so I can't really show "what I've tried".
The relevant code is all in Cinder, but I think our actual OpenGL utilization is lightweight for this purpose.
This would run on Windows Server 2008 Standard.
I see MS has a software implementation of OpenGL 1.1, but I can't seem to find one for 2.0.
Build or find some Mesa DLLs: Mesa's software rasterizer implements OpenGL 2.0+ entirely on the CPU, so you can drop its opengl32.dll next to your executable.
It will be slow.
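With Mesa's software opengl32.dll in place, the offscreen single-frame path the question describes works the usual way. A minimal sketch, assuming a current OpenGL 2.0+ context with framebuffer object support and a loader such as GLEW are already set up:

```cpp
#include <vector>
// Assumes a current GL context and a loader (GLEW/glad) for the FBO entry points.

std::vector<unsigned char> renderSingleFrame(int w, int h)
{
    GLuint fbo = 0, tex = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // Color attachment the frame will be rendered into.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    glViewport(0, 0, w, h);
    // ... issue the actual draw calls here ...

    // Read the finished frame back to system memory.
    std::vector<unsigned char> pixels(static_cast<size_t>(w) * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glDeleteTextures(1, &tex);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}
```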

How to disable Macbook Pro from switching to a high performance graphics card in Cocoa?

All 2010 MacBook Pros come with two graphics cards (a low-performance built-in Intel HD one and a high-performance discrete NVIDIA one), and the OS switches between them on the fly depending on the needs of the running applications.
I have a simple Cocoa application that consists of just a menu bar item with an NSTextField in it. All I do is update the text field with an NSAttributedString from time to time. The trouble is that my application switches my MacBook Pro to the high-performance NVIDIA card (I used the gfxCardStatus tool to confirm this).
What could possibly need the high-performance card? Is there a known list of reasons why applications require the high-performance graphics card? Is there a way to force the computer to use the integrated graphics card?
There is a good article about GPU switching in the newer MacBook Pros at Ars Technica.
I noticed that OS X switches to the dedicated GPU if you:
Start an application that links against OpenGL
Connect a second display
The code of gfxCardStatus is open source, and it seems that the relevant part is located in switcher.m. You can take a closer look here.
In Mac OS X 10.7 you can specify a setting in the app's Info.plist to stop it switching to discrete graphics:
https://developer.apple.com/library/mac/qa/qa1734/_index.html
Needs to be a 2011+ MacBook Pro.
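For reference, the setting QA1734 describes is the NSSupportsAutomaticGraphicsSwitching key in the app's Info.plist; a minimal fragment:

```xml
<!-- Info.plist fragment: opt out of automatic switching to the
     discrete GPU (honored on 2011+ MacBook Pros, OS X 10.7+). -->
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```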
