Using a Raspberry Pi 2, I'd like to use SDL 2 to create hardware-accelerated OpenGL ES 2 programs in windowed mode. I'm currently unable to do this. I'd also like the ability to toggle between full screen and windowed mode in my programs if possible.
I believe my problem is related to the build configuration I am using from SDL2 sources.
I followed this guide to get SDL2 working with OpenGL ES from sources on my Raspberry, and it works for creating full screen SDL2 programs with an OpenGL ES context:
https://solarianprogrammer.com/2015/01/22/raspberry-pi-raspbian-getting-started-sdl-2/
The guide author's configure options are:
../configure --host=armv7l-raspberry-linux-gnueabihf --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-x11 --disable-video-opengl
In his guide, the author states: "The above options will make sure, SDL 2 is built with the OpenGL ES backend and that any SDL application will run as a full screen application."
I would really like to modify the build configuration to allow for windowed mode. What options would I need to change in this configure command to allow OpenGL ES 2 in windowed mode, which can be toggled to full screen?
https://wiki.libsdl.org/SDL_SetWindowFullscreen
That is, I want to create a windowed SDL2 OpenGL ES program at first, with the option to toggle between full screen and windowed from within my program. How can I build SDL2 for Raspbian to allow this kind of behaviour?
My system details:
Raspberry Pi 2
Raspbian OS
LXDE desktop
Thanks
You need to remove the --disable-video-x11 option you're passing to the ./configure script. X11 is the windowing system and is responsible for creating your windows.
It appears this is typically not recommended, since it can cause somewhat buggy behaviour.
For copy-paste lovers:
./configure --host=armv7l-raspberry-linux-gnueabihf --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-opengl
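With X11 enabled, creating a windowed GLES2 context and toggling fullscreen via SDL_SetWindowFullscreen looks roughly like this (a minimal sketch; the window title, size, and F-key binding are arbitrary choices, and error checking is omitted):

#include <SDL2/SDL.h>

int main(int argc, char *argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    // Request an OpenGL ES 2.0 context
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    // Start windowed: no fullscreen flag here
    SDL_Window *win = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    bool fullscreen = false;
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT)
                running = false;
            // Toggle between fullscreen and windowed on the F key
            if (e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_f) {
                fullscreen = !fullscreen;
                SDL_SetWindowFullscreen(win,
                    fullscreen ? SDL_WINDOW_FULLSCREEN : 0);
            }
        }
        // ... GLES2 drawing goes here ...
        SDL_GL_SwapWindow(win);
    }
    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}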
Related
I am using gitlab-ci-runner to automatically test a desktop Qt/OpenGL application on every commit. On Windows 8, the application is executed by a system service installed with gitlab-ci-runner, hence it is run "hidden", i.e. on no visible desktop. All the UI modules initialize and run anyway, except the OpenGL module, which never gets an "expose" event; if I try to draw into the OpenGL context without the window being exposed I get the error:
QOpenGLContext::swapBuffers() called with non-exposed window, behavior is undefined
I have found out that it is rather difficult and not recommended to execute a Windows GUI application from a service on a running desktop session (see How can a Windows service execute a GUI application?).
Now, I don't need the user to see the application, I just need the OpenGL part to work correctly. Is there a way I can "pretend" to expose a window somehow, or is there any other way to get this to run correctly from a system service?
Is there a way I can "pretend" to expose a window somehow, or is there any other way to get this to run correctly from a system service?
Two problems you're running into:
If a window is not exposed, all pixels rendered to will fail the pixel ownership test and rendering turns into a no-op for all the pixels. The workaround for that is to use either a PBuffer or (recommended) a framebuffer object; see the sketch after this list. Neither will immediately work with an on-screen window, so you'll have to change some code.
On Windows, processes started as a service usually don't get access to the graphics hardware, so you're limited to the capabilities of the software GDI OpenGL fallback (OpenGL 1.1, hence no support for FBOs or PBuffers).
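A minimal FBO sketch for the first point, assuming a current context with framebuffer object support (the 512x512 size and RGBA format are placeholders):

#include <vector>

// Render into a texture instead of the non-exposed window
GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Attach the texture to a framebuffer object and draw there
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, 512, 512);
    // ... draw as usual; these pixels pass the ownership test ...
    // Read the result back instead of swapping buffers
    std::vector<unsigned char> pixels(512 * 512 * 4);
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
}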
If you need GPU acceleration you can either invest in some grid-computing GPU hardware (well, actually every GPU could do it, but the drivers disallow it for consumer-grade cards) to get a working OpenGL-accelerated context, or you can migrate to Linux, use a GPU supported by KMS/DRI/DRM and completely circumvent any graphics system. There's no official guide on how to do that, but writing such a tutorial is on my (lengthy) ToDo list.
If you can live without GPU acceleration, drop a Windows build of Mesa with the softpipe renderer beside your program's .exe; the Windows Mesa softpipe build comes as an opengl32.dll fully API and ABI compatible with the standard opengl32.dll, but independent of any graphics driver. It gives you OpenGL 3.3 support including FBOs and PBuffers.
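To check which implementation a process actually picked up (the vendor driver or the Mesa softpipe opengl32.dll), query the renderer string once a context is current; a quick sketch:

#include <cstdio>
#include <GL/gl.h>

// Call after making a context current
void printGlInfo() {
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));
    // Mesa's software rasterizers typically report "softpipe" or "llvmpipe" here
}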
So, keep the stock driver, but add some other driver plus Windows registry configuration that tells Windows how to do brightness at the hardware level?
Is that even possible within Windows? Or does it need to be built into the graphics driver itself?
(I'm specifically asking about Intel's "Legacy Backlight Brightness (LBB) I/O Register". Which works on a lot of Intel GPUs.)
For reference, I'm not really grokking all the jargon: http://msdn.microsoft.com/en-us/library/windows/hardware/ff569755(v=vs.85).aspx
I've got multiple NVIDIA GPU cards (Q2000) on a Windows 7 system, without SLI and with only one monitor.
Now what I'm trying to do is make a Direct3D 9 device run on a specific GPU.
I can use the Adapter parameter of IDirect3D9::CreateDevice to choose a GPU, but unless I connect a second monitor to that GPU card it will not work (if I've only got one desktop on Windows).
If I click the "Detect" button in the Resolution Control Panel, it can make a "fake" desktop beside my primary desktop, and CreateDevice(1, ...) works well - but this is not what I want.
For OpenGL it's easy because of WGL_NV_gpu_affinity: it can make an OpenGL device run on the second GPU with only one monitor connected and one desktop on Windows.
I wonder if there is any API for Direct3D 9 that works like "WGL_NV_gpu_affinity".
Any hint will be very appreciated. Thanks in advance!
IDirect3D9::CreateDevice takes "Adapter" as its first parameter, but that refers to a display adapter (a monitor output), not a GPU.
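To see what the Adapter index actually enumerates, you can list the adapters; each entry corresponds to an attached display output, not to a physical GPU. A minimal sketch:

#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
    // One entry per attached display output, not per physical GPU
    UINT count = d3d->GetAdapterCount();
    for (UINT i = 0; i < count; ++i) {
        D3DADAPTER_IDENTIFIER9 id = {};
        d3d->GetAdapterIdentifier(i, 0, &id);
        printf("Adapter %u: %s (%s)\n", i, id.Description, id.DeviceName);
    }
    d3d->Release();
    return 0;
}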
I have a virtual machine running Ubuntu on my Windows 7 PC. The machine has an Intel i3-2120 processor, so I suppose it has support for the OpenGL APIs, as the processor has a built-in Intel HD Graphics 2000 GPU.
I am using the OpenGL ES 2.0 Emulator from ARM to build and run 3D applications. I am new to OpenGL ES. I built the cube application that comes as an example with the emulator itself, just to test whether the setup is ready to run 3D applications.
The application does not run; it fails when compiling the shader in the steps below:
GL_CHECK(glCompileShader(*pShader));                            // compile the shader source
GL_CHECK(glGetShaderiv(*pShader, GL_COMPILE_STATUS, &iStatus)); // query the compile result
Is this issue somehow related to hardware? Could someone please help me figure out what is wrong with the setup?
Thanks!!
If you don't have any errors in the shader code, it should be due to virtualisation. Check if you have 3D acceleration support on your Ubuntu.
Execute this in terminal: glxinfo | grep rendering
If you get "direct rendering: No", there is your problem. Check if your virtualisation application supports 3D acceleration and how to enable it.
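If direct rendering is available, it's also worth dumping the shader info log to rule out an actual error in the shader source; a minimal sketch, reusing pShader from the question:

GLint status = GL_FALSE;
glGetShaderiv(*pShader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    // The info log usually names the offending line in the shader source
    char log[1024];
    GLsizei len = 0;
    glGetShaderInfoLog(*pShader, sizeof(log), &len, log);
    fprintf(stderr, "Shader compile failed: %.*s\n", (int)len, log);
}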
I am new to Android and wish to play around with the emulator.
What I want to do is to create my own piece of virtual hardware that can collect OpenGL commands and produce OpenGL graphics.
I have been told that in order to do this I will need to write a Linux kernel driver to enable communication with the hardware. Additionally, I will need to write an Android user-space library to call the kernel driver.
To start with, I plan on making a very simple piece of hardware that only handles, say, 1 or 2 commands.
Has anyone here done something like this? If so, do you have any tips or possible links to extra information?
Any feedback would be appreciated.
Writing a hardware emulation is a tricky task and by no means easy. So if you really want to do this, I'd not start from scratch. In your case I'd first start with something simpler (because many of the libraries are already in place on the guest and the host side): implementing an OpenGL passthrough for ordinary Linux through qemu. What does it take?
First you add a virtual GPU to qemu, which also involves adding a new graphics output module that uses OpenGL (so far qemu uses SDL). Next you create DRI/DRM drivers in the Linux kernel that will run on the guest (Android uses its own graphics system, but for learning, DRI/DRM are fine), as well as in Mesa. On the host side you must translate what comes from qemu into OpenGL calls. Since the host-side GPU is doing all the hard work, your DRI/DRM part will be quite minimal and just builds a bridge.
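The core of that bridge is replaying serialized GL calls on the host. Purely as an illustration of the idea (the packet layout, GlOp names, and ringBufferWrite transport below are all made up, not a real qemu or DRI interface):

#include <cstddef>
#include <cstdint>

// Provided by the (hypothetical) guest/host transport layer
void ringBufferWrite(const void *data, size_t size);

// Hypothetical command packet: the guest-side driver marshals each GL call
// into one of these; the host side decodes it and calls the real driver.
enum class GlOp : uint32_t { Clear, DrawArrays /* ... */ };

struct GlCommand {
    GlOp     op;       // which GL entry point to replay on the host
    uint32_t argc;     // number of 32-bit arguments that follow
    uint32_t args[8];  // marshalled arguments (enums, counts, offsets)
};

// Guest side, marshalling glDrawArrays(mode, first, count)
void emitDrawArrays(uint32_t mode, uint32_t first, uint32_t count) {
    GlCommand cmd{GlOp::DrawArrays, 3, {mode, first, count}};
    ringBufferWrite(&cmd, sizeof cmd);
}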
The emulator that comes with Android SDK 23 already runs OpenGL; you can try this out with the official MoreTeapots example: https://github.com/googlesamples/android-ndk/tree/a5fdebebdb27ea29cb8a96e08e1ed8c796fa52db/MoreTeapots
I am pretty sure that it is hardware accelerated, since all those polygons are rendering at 60 FPS.
The AVD creation GUI from Studio has a hardware acceleration option, which should control options like:
==> config.ini <==
hw.gpu.enabled=yes
hw.gpu.mode=auto
==> hardware-qemu.ini <==
hw.gpu.enabled = true
hw.gpu.mode = host
hw.gpu.blacklisted = no
in ~/.android/avd/Nexus_One_API_24.a/.
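To confirm from native code which path an emulator image actually uses, you can log the renderer string (a sketch; with hw.gpu.mode = host the string typically mentions a translator backed by the host GPU, though the exact wording varies by emulator version):

#include <GLES2/gl2.h>
#include <android/log.h>

// Call this with a current EGL context, e.g. from the render thread
void logRenderer() {
    __android_log_print(ANDROID_LOG_INFO, "gl", "GL_RENDERER: %s",
                        (const char *)glGetString(GL_RENDERER));
    __android_log_print(ANDROID_LOG_INFO, "gl", "GL_VERSION: %s",
                        (const char *)glGetString(GL_VERSION));
}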