I am developing an OpenGL application and I am seeing some strange things happen. The machine I am testing with is equipped with an NVidia Quadro FX 4600 and it is running RHEL WS 4.3 x86_64 (kernel 2.6.9-34.ELsmp).
I've stepped through the application with a debugger and noticed that it hangs on OpenGL calls that query information back from the API, e.g. glGetError, glIsEnabled, etc. Each time it hangs, the system is unresponsive for 3-4 seconds.
Another interesting thing is that if this same code is run on RHEL 4.5 (kernel 2.6.9-67.ELsmp), it runs completely fine. The same code also runs perfectly on Windows XP. All machines use the exact same hardware:
PNY nVidia Quadro FX 4600 768 MB PCI Express
Dual Intel Xeon DP Quad Core E5345 2.33 GHz
4096 MB 667 MHz Fully Buffered DDR2
Super Micro X7DAL-E Intel 5000X Chipset Dual Xeon Motherboard
Enermax Liberty 620 watt Power Supply
I have upgraded to the latest 64-bit drivers (version 177.82, release date Nov 12, 2008) and the result is exactly the same.
Does anyone have any idea what could be causing the system to hang on these OpenGL calls?
It appears that this is an issue with less-than-perfect NVidia drivers for Linux. Upgrading to a newer kernel appears to help. If I am forced to use this dated kernel, there are some things that I've tried that seem to help.
Setting the __GL_YIELD environment variable to "NOTHING" prior to starting X seems to increase stability with this older kernel.
http://us.download.nvidia.com/XFree86/Linux-x86_64/177.82/README/chapter-11.html
I've also tried disabling Triple Buffering and Flipping.
I've also found that these forums are very helpful for Linux/NVidia problems. Just do a search for "linux crash"
You may be able to dig deeper by using a system profiler like Sysprof or OProfile. Do other OpenGL applications using these calls exhibit similar behavior?
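If you want to confirm that it really is these state-query calls that block (and for how long), a minimal timing harness can help before reaching for a full profiler. Here is a rough sketch; it assumes GLFW is available and simply wraps the suspect calls in a timer, so adapt it to however your application creates its context:

    // Sketch: time the GL state-query calls the question reports as hanging.
    // Assumes GLFW for context creation; any windowing toolkit would do.
    #include <GLFW/glfw3.h>
    #include <chrono>
    #include <cstdio>

    template <typename F>
    void timeCall(const char* name, F call) {
        auto start = std::chrono::steady_clock::now();
        call();
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - start).count();
        if (ms > 5.0)  // report anything that stalls noticeably
            std::printf("%s blocked for %.1f ms\n", name, ms);
    }

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* win = glfwCreateWindow(640, 480, "gl-stall-test", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);

        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);
            timeCall("glGetError",  []{ glGetError(); });
            timeCall("glIsEnabled", []{ glIsEnabled(GL_DEPTH_TEST); });
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }

If the stalls show up even in a trivial loop like this, that points at the driver/kernel combination rather than anything in your application.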
I have a Windows 11 machine with an RTX 3050 graphics card. It's a Dell G15 laptop. I cannot find a (good) solution to the graphical glitches that appear in an Android Emulator.
The only "solution" I found was to change the hw.gpu.mode in the config.ini file from auto to guest. That fixes the glitches, but causes really bad performance issues and one app I developed with Flutter for my company straight up doesn't load. (loads when hw.gpu.mode=auto).
I'd appreciate it if you could point me in the right direction to solve this. Let me know if you need any other details about my machine:
OS: Windows 11 Home Single Language [64-bit]
Kernel: 10.0.22000.0
CPU: 11th Gen Intel(R) Core(TM) i5-11260H @ 2.60GHz
GPU: NVIDIA GeForce RTX 3050 Laptop GPU
Nvidia driver version: 516.94 (Downloaded the "Game ready driver" from GeForce Experience)
This problem only occurs on emulators running Android 12+; personally, I installed one device with Android 11 and a second device with Android 13 for testing.
It seems like dedicated NVIDIA GPUs are causing the problem. I have a 3060 laptop with the same issue, and when I set it to guest it seems to work. My guess is that the setting switches it from using the GPU to the CPU. I would recommend you try setting Android Studio to use the integrated graphics instead of the dedicated GPU. Since I have an 8-core CPU compared to your 4-core CPU, I'm guessing that's the reason I don't get performance as bad as yours.
Curious if anyone out there is doing Android Studio development on a dual Xeon machine.
I would like to know if the additional CPU gave a dramatic or visible (50% or more) boost in build performance.
You probably found out already, but for others wondering: chances are, it won't.
I did some testing with two relatively quick Xeon E5-2650 v4s on a largish Java + Kotlin project, and the Xeons were considerably slower than lower-core-count, higher-clock CPUs.
Check out the benchmarks here:
https://superuser.com/questions/1115206/will-dual-xeons-improve-android-studio-build-times/
I have tried to measure the speed of Android Studio 3.1.4 on the same hardware: MacBook Pro 2011, 4 GB RAM, 240 GB Samsung SSD, Core i5 2.4 GHz.
I installed 3 different OSes on this machine: Windows 10, macOS High Sierra 10.13, and Ubuntu 18.04.
Average build times (running gradlew clean build and gradlew clean assembleRelease) were around 30% faster on macOS/Ubuntu than on Windows.
On another machine I work on (Core i5-7400 3.0 GHz, 16 GB RAM, 250 GB SSD), the build takes 4.34 min on Windows 10.
The same project on a slightly slower processor, but with the same RAM and SSD, running Ubuntu 16.04, builds about twice as fast!
Well, I was shocked by the results, but I still chose Windows as my development machine, because for me the keyboard handling and software are much more comfortable and usable than on Unix-like systems. And if I had to choose between macOS and Ubuntu: the Mac makes it much easier to set everything up, while Ubuntu is too complex for ordinary users. The choice is up to you.
No, this isn't another rant question about NVidia vs AMD; I'm genuinely interested in having my demo running well with both vendors. I've tested my code with four configurations:
1. MacBook Pro (NVidia GT650M) - fine
2. Desktop with CentOS 6.5 (NVidia Quadro FX) - fine
3. Desktop with Windows 7 64-bit (AMD HD7950 with Catalyst 14.4) - slow
4. Desktop with Fedora 19 (AMD HD7950 with Catalyst 14.4) - slow
Configurations 3 and 4 are actually the same machine. The code is not highly optimized, but it's not doing anything too complex either: I have a grid (which I render using GL_POINTS), a line that represents the path found by A*, and a moving agent. The grid has about 10k elements; if I remove it, the demo runs better, but still not perfectly.
I guess it's a driver issue, as on 3 and 4 it seems to be falling back to software rendering; I profiled the code on Windows with CodeXL, and a frame takes ~400 ms, spending most of its time on the CPU rather than the GPU.
As final information, I'm using GLEW and GLFW for cross-platform development. The full code is available here: https://bitbucket.org/theWatchmen/behaviour-trees
Let me know if you need any further information.
Original answer posted here: http://www.gamedev.net/topic/659012-issue-with-opengl-demo-using-catalyst-drivers-linux-and-windows/#entry5168395
It seems that for this particular card GL_POINTS are emulated in software and that's causing the demo to slow down. I will change the grid to triangles to make sure it runs smoothly on all cards.
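For anyone else hitting this, the change is essentially to submit the grid as indexed triangles rather than one point per cell. A rough sketch of the index generation (names and vertex layout are my own assumptions, not taken from the linked repo):

    // Sketch: build an index buffer that draws each grid cell as two triangles,
    // so the grid can be drawn with glDrawElements(GL_TRIANGLES, ...) instead of
    // GL_POINTS. Assumes a row-major vertex layout of (cols + 1) x (rows + 1).
    #include <cstdint>
    #include <vector>

    std::vector<std::uint32_t> buildGridIndices(int cols, int rows) {
        std::vector<std::uint32_t> indices;
        indices.reserve(static_cast<std::size_t>(cols) * rows * 6);
        for (int y = 0; y < rows; ++y) {
            for (int x = 0; x < cols; ++x) {
                std::uint32_t i0 = y * (cols + 1) + x;   // top-left corner of the cell
                std::uint32_t i1 = i0 + 1;               // top-right
                std::uint32_t i2 = i0 + (cols + 1);      // bottom-left
                std::uint32_t i3 = i2 + 1;               // bottom-right
                // Two triangles covering the cell.
                indices.insert(indices.end(), {i0, i2, i1, i1, i2, i3});
            }
        }
        return indices;
    }

    // Then, instead of glDrawArrays(GL_POINTS, 0, vertexCount):
    //   glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);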
I've got a 2007 Mac Pro, 8 GB RAM, 2 x NVIDIA GeForce 7300 GT (256 MB). I tried to look at a couple of Google's WebGL demos, for example this one, but am unable to do so because my system is not WebGL compatible.
I'm running Lion and the latest version of Chrome - what else do I need to do? Or is my 'bleeding-edge' workstation now a relic of the past?
You need a compatible browser and halfway-decent hardware (which you have).
See http://get.webgl.org for better instructions.
[EDIT!]
Actually, after looking through get.webgl.org a bit more, they explicitly state that your card is incompatible:
If you have the following graphics cards, WebGL is unsupported and is disabled by default:
Mac:
ATI Radeon HD2400
ATI Radeon 2600 series
ATI Radeon X1900
GeForce 7300 GT
This is probably because of driver bugs that they've found affect the stability of the browser. (Most vendors have lousy OpenGL support, even on systems like the Mac!)
You still may be able to force WebGL on by navigating to about:flags in Chrome and seeing if it has an Enable WebGL option.
Pretty much as the title says, using bootcamp.
WDDM 1.1 compliance and GPU recognition are confirmed by the WP7 emulator running with EnableFrameRateCounters showing.
I'm considering a MacBook Air as a compromise: I need access to the iPhone dev tools and want to upgrade my Win7 mobile capability to something reasonably performant, all with one device.
My current laptop barely runs Win7 and borders on unusable for the WP7 tooling, hence the interest in trying to solve two problems with one device - if that's realistic.
I assume if the device can run WP7 tools satisfactorily, it would be capable of anything else I might want to do when booted under Win7.
The new MacBook Airs do not have very powerful processors. The 11" maxes out at 1.66 GHz, while the 13" maxes out at 2.13 GHz. However, they do have the same GPU as the current 13" MacBook Pro. Also, since they use solid-state drives, data access is significantly faster. Overall, it will not be the fastest computer you've used, but it should be enough to work with.
I've bought one, but since it's going to the wife, I won't be able to test it in depth.
Instead, the MacBook Pro 13" from '09 works fine (MonoTouch + iOS dev, and Boot Camp for Visual Studio + WP7 dev). I upgraded to 4 GB of memory and that helped; the disk is slower than I'd like, though. It responds like a mid-grade desktop, imo.
The problem I see is that the processor in the Airs is a ULV part with a really slow clock; also, the SSD in the base version is only 64 GB, which is going to be cramped, I think.
Consider this: many Mac gamers install Windows with Boot Camp just to have a better gameplay experience.
That's because Windows has native access to the GPU through Boot Camp.
http://www.mth.kcl.ac.uk/~shaww/web_page/grid/macgpu.htm (2009 article)
http://www.gpgpu.org/forums/viewtopic.php?t=3766&highlight=bootcamp (2007 article)
So the answer is yes.