3D application fails to run on Intel i3-2120 - opengl-es

I have a virtual machine running Ubuntu on my Windows 7 PC. The machine has an Intel i3-2120 processor, so I assume it supports the OpenGL APIs, since the processor has a built-in Intel HD Graphics 2000 GPU.
I am using the OpenGL ES 2.0 Emulator from ARM to build and run a 3D application. I am new to OpenGL ES. I built the cube application that ships as an example with the emulator, just to test whether the setup is ready to run 3D applications.
The application does not run; it fails when compiling the shader at the following calls:
GL_CHECK(glCompileShader(*pShader));
GL_CHECK(glGetShaderiv(*pShader, GL_COMPILE_STATUS, &iStatus));
Is this issue somehow related to the hardware? Could someone please help figure out what is wrong with the setup here?
Thanks!!

If you don't have any errors in your shader code, it is probably due to virtualisation. Check whether you have 3D acceleration support in your Ubuntu guest.
Execute this in a terminal: glxinfo | grep rendering
If you get "direct rendering: No", there is your problem. Check whether your virtualisation application supports 3D acceleration and how to enable it.

Related

Got Android Studio installation error

I am kinda new to Android Studio & stuff. So today I was installing Android Studio with the SDK Manager. Everything was going smoothly until an error came up which says:
Unable to install Intel HAXM
Your CPU does not support required features (VT-x or SVM).
Unfortunately, your computer does not support hardware accelerated virtualization.
Here are some of your options:
Use a physical device for testing
Develop on a Windows/OSX computer with an Intel processor that supports VT-x and NX
Develop on a Linux computer that supports VT-x or SVM
Use an Android Virtual Device based on an ARM system image (this is 10x slower than hardware accelerated virtualization)
I've attached a pic of my system specs. Can someone please throw some light on this issue?
Thanks
It is because you have not enabled virtualization technology on your device. You need to enter the boot options (BIOS/UEFI setup) before Windows starts and enable VT-x from there.
Where the option for enabling virtualization technology is located differs depending on the device manufacturer.
Edit: the Android Studio emulator won't run on Windows with an AMD processor. The error message is somewhat misleading, as it suggests the problem is with your CPU, but the hint is within the troubleshooting message: "Windows/OSX computer with an Intel processor". Basically, that means it is not going to work properly in your current setup. Either try installing Linux and running Android Studio on that (which might come with its own issues), use a physical device for testing, or use the slow ARM images.
You are using an AMD processor. SVM is AMD's technology and VT-x is Intel's. So you won't be able to get VT-x to run, but SVM might be possible.
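
If you want to check what the CPU itself advertises, a small CPUID probe can distinguish VT-x from SVM. This is a sketch assuming GCC or Clang on an x86 machine; note that a set bit only means the CPU supports the feature, not that the firmware has it enabled:

/* Check whether the CPU advertises VT-x (Intel) or SVM (AMD). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1, ECX bit 5: Intel VMX (VT-x) */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x supported by this CPU\n");

    /* CPUID leaf 0x80000001, ECX bit 2: AMD SVM */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD SVM supported by this CPU\n");

    return 0;
}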
As another poster suggested, virtualization may have been disabled in the BIOS; there may be an option there to enable it. It does, however, sometimes happen that virtualization is enabled in the BIOS and Android Studio does not recognize that. I have not figured out how to fix that either.
You could use the emulator with an ARM image, which will be very slow. Alternatively, you could use another emulator that is not integrated into Android Studio.

SDL2 OpenGL in Windowed Mode on Raspbian

Using a Raspberry Pi 2, I'd like to use SDL 2 to create hardware accelerated OpenGL ES 2 programs in windowed mode. I'm currently unable to do this. I'd also like the ability to toggle between full screen and windowed mode in my programs if possible.
I believe my problem is related to the build configuration I am using from SDL2 sources.
I followed this guide to get SDL2 working with OpenGL ES from sources on my Raspberry, and it works for creating full screen SDL2 programs with an OpenGL ES context:
https://solarianprogrammer.com/2015/01/22/raspberry-pi-raspbian-getting-started-sdl-2/
The guide maker's configure options are:
../configure --host=armv7l-raspberry-linux-gnueabihf --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-x11 --disable-video-opengl
In his guide, the creator states: "The above options will make sure SDL 2 is built with the OpenGL ES backend and that any SDL application will run as a full screen application."
I would really like to modify the build configuration to allow for windowed mode. What options would I need to change in his configure line to allow for OpenGL ES 2 in windowed mode, with the ability to toggle to full screen?
https://wiki.libsdl.org/SDL_SetWindowFullscreen
That is, I want to create a windowed SDL2 OpenGL ES program at first, with the option to toggle between full screen and windowed from within my program. How can I build SDL2 for Raspbian to allow for this kind of behaviour?
My system details:
Raspberry Pi 2
Raspbian OS
LXDE desktop
Thanks
You need to remove the --disable-video-x11 option you're passing to the ./configure script. X11 is the windowing system and is responsible for creating your windows.
It appears this is typically not recommended since it can cause somewhat buggy behaviour.
For copy-paste lovers:
./configure --host=armv7l-raspberry-linux-gnueabihf --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-opengl
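
Once SDL2 is rebuilt with X11 enabled, the toggle itself is done with SDL_SetWindowFullscreen from the link above. Here is a minimal sketch of a windowed OpenGL ES 2 program with an F11 toggle; the window title, size, and key binding are my own choices:

#include <SDL2/SDL.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    /* Request an OpenGL ES 2.0 context. */
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

    SDL_Window *window = SDL_CreateWindow("GLES2 window", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480,
                                          SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);

    int running = 1;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT) {
                running = 0;
            } else if (e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_F11) {
                /* Toggle between windowed and full screen. */
                Uint32 flags = SDL_GetWindowFlags(window);
                SDL_SetWindowFullscreen(window,
                    (flags & SDL_WINDOW_FULLSCREEN) ? 0 : SDL_WINDOW_FULLSCREEN);
            }
        }
        SDL_GL_SwapWindow(window);
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}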

How to Render With OpenGL on a Tesla-Equipped Windows-Based Host

I used to think that Tesla cards would not support the OpenGL API, but I recently learned that Tesla products can also be used for visualization via OpenGL.
I have a workstation with 2 Intel E5 CPUs and 1 Tesla C2050. According to https://developer.nvidia.com/opengl-driver, the Tesla C2050 should support at least OpenGL version 3.
Now, I'd like to run a render service program using OpenGL 3.3 on that workstation, but without success.
The following is what I tried.
If I log in through RDP remote desktop, the supported OpenGL version is 1.1 due to the virtual graphics adapter. I used the tscon command to reconnect to the physical console; as a result, the RDP connection was lost. When I reconnected, all the windows had been resized to 800x600 and the detected OpenGL support was still 1.1.
If I log in with a monitor plugged into some kind of "integrated graphics adapter", the supported OpenGL version is still 1.1, maybe because the program was started on the screen attached to the basic adapter. But the Tesla GPU does not have a graphics output port.
I wonder how I should configure the host to enable the use of the Tesla GPU for OpenGL-based rendering.
I have solved this problem.
First, the Tesla C2050 is in fact a dedicated video card and has one DVI display port.
More importantly, the BIOS on the motherboard was set to start up on the integrated GPU. Changing this setting to the PCI-E card solved the problem of being unable to access the Tesla card.
Next, about graphics API support.
The official driver on the NVIDIA site offers support for OpenGL 4.4.
And the Tesla card can be used to render via OpenGL just the same as a Quadro or GeForce card. There is no notable difference and no special configuration is necessary.
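
To confirm which adapter and OpenGL version a program actually gets, a quick probe like the one below can help; freeglut is used here purely for convenience and is my own choice, not part of the original setup. If GL_RENDERER reports "GDI Generic", you are on Windows' OpenGL 1.1 software fallback:

#include <GL/freeglut.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("version check");  /* creates a real GL context */
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    return 0;
}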

Nvidia Nsight 2.2 OpenGL shader debugger - not working?

I've got NVIDIA's Parallel Nsight 2.2 configured on my two computers. The target has a GeForce GTS 450 with driver version 301.42 and the host a Quadro 1000M with the same driver version. Loading the simplest OpenGL 3.0 program (displaying a colored triangle using shaders) runs fine, but I can't seem to get the Nsight shader debugger to work.
Everything seems to work: I can open the Nsight->Windows->Shaders List window, double-click a shader, have the source code open, select a line, and set a breakpoint. A big fat red dot shows up to indicate the breakpoint is set, but the breakpoint is NEVER hit, so I'm stuck.
Has anyone ever got the OpenGL shader debugger working with Parallel Nsight 2.2?
By the way, the Nsight->New Analysis Activity works great. I can create a trace of all the OpenGL calls and view it with no problems.
The OpenGL shader debugger requires a driver that has not been released yet. You will need a driver more recent than 306.37 to get a good debugging experience.
-s

Create virtual hardware, kernel, qemu for Android Emulator in order to produce OpenGL graphics

I am new to android and wish to play around with the emulator.
What I want to do is to create my own piece of virtual hardware that can collect OpenGL commands and produce OpenGL graphics.
I have been told that in order to do this I will need to write a Linux kernel driver to enable communication with the hardware. Additionally, I will need to write an Android user-space library to call the kernel driver.
To start with, I plan on making a very simple piece of hardware that only handles, say, 1 or 2 commands.
Has anyone here done something like this? If so, do you have any tips or possible links to extra information?
Any feedback would be appreciated.
Writing a hardware emulation is a tricky task and by no means easy. So if you really want to do this, I'd not start from scratch. In your case I'd first start with something simpler (because many of the libraries are already in place on the guest and the host side): implementing an OpenGL passthrough for ordinary Linux through qemu. What does it take:
First you add a virtual GPU to qemu, which also involves adding a new graphics output module that uses OpenGL (so far qemu uses SDL). Next you create DRI/DRM drivers in the Linux kernel that will run on the guest (Android uses its own graphics system, but for learning, DRI/DRM are fine), as well as in Mesa. On the host side you must translate what comes from qemu into OpenGL calls. Since the host-side GPU is doing all the hard work, your DRI/DRM part will be quite minimal and will just build a bridge.
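
To make the passthrough idea concrete, here is a toy sketch of the guest/host split: the guest serializes a GL call into a command buffer, and the host decodes it and would dispatch it to the real driver. The opcode and function names are invented for illustration; real implementations differ considerably:

#include <stdio.h>
#include <string.h>

enum { CMD_CLEAR_COLOR = 1 };  /* hypothetical opcode */

/* Guest side: encode a glClearColor call into the command buffer. */
static size_t encode_clear_color(unsigned char *buf, float r, float g, float b, float a)
{
    unsigned char op = CMD_CLEAR_COLOR;
    float args[4] = { r, g, b, a };
    memcpy(buf, &op, 1);
    memcpy(buf + 1, args, sizeof args);
    return 1 + sizeof args;
}

/* Host side: decode the buffer and dispatch to the real GL driver. */
static void dispatch(const unsigned char *buf)
{
    if (buf[0] == CMD_CLEAR_COLOR) {
        float args[4];
        memcpy(args, buf + 1, sizeof args);
        /* A real host would call glClearColor(args[0], ...) here. */
        printf("glClearColor(%.2f, %.2f, %.2f, %.2f)\n",
               args[0], args[1], args[2], args[3]);
    }
}

int main(void)
{
    unsigned char ring[64];
    encode_clear_color(ring, 0.0f, 0.0f, 0.0f, 1.0f);  /* guest writes */
    dispatch(ring);                                    /* host reads  */
    return 0;
}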
The emulator that comes with Android SDK 23 already runs OpenGL; you can try this out with the official MoreTeapots example: https://github.com/googlesamples/android-ndk/tree/a5fdebebdb27ea29cb8a96e08e1ed8c796fa52db/MoreTeapots
I am pretty sure it is hardware accelerated, since all those polygons render at 60 FPS.
The AVD creation GUI in Android Studio has a hardware acceleration option, which should control options like:
==> config.ini <==
hw.gpu.enabled=yes
hw.gpu.mode=auto
==> hardware-qemu.ini <==
hw.gpu.enabled = true
hw.gpu.mode = host
hw.gpu.blacklisted = no
in ~/.android/avd/Nexus_One_API_24.a/.
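These settings correspond to the emulator's -gpu command line switch; if your emulator version supports it, you can also force the GPU mode at launch, e.g.: emulator -avd Nexus_One_API_24 -gpu host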
