No, this isn't another NVidia vs AMD rant; I'm genuinely interested in getting my demo to run well on both vendors' hardware. I've tested my code with four configurations:
1. MacBook Pro (NVidia GT 650M) - fine
2. Desktop with CentOS 6.5 (NVidia Quadro FX) - fine
3. Desktop with Windows 7 64-bit (AMD HD 7950 with Catalyst 14.4) - slow
4. Desktop with Fedora 19 (AMD HD 7950 with Catalyst 14.4) - slow
3 and 4 are actually the same machine. The code is not highly optimized, but it's not doing anything too complex either: I have a grid (which I render using GL_POINTS), a line representing the path found by A*, and a moving agent. The grid has about 10k elements; if I remove it the demo runs better, but still not perfectly.
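The grid drawing boils down to something like this (a simplified sketch, not the exact code from the repo; names are illustrative):

```cpp
// Sketch of the current approach: the grid as one GL_POINTS vertex per cell.
// Simplified for illustration; not the exact code from the repo.
#include <GL/glew.h>

void drawGridAsPoints(GLuint gridVbo, GLsizei gridVertexCount) {
    glBindBuffer(GL_ARRAY_BUFFER, gridVbo);       // ~10k vertices, uploaded once at startup
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), nullptr);
    glDrawArrays(GL_POINTS, 0, gridVertexCount);  // one draw call per frame
}
```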
I suspect it's a driver issue, as on 3 and 4 it seems to be falling back to software rendering; I profiled the code on Windows with CodeXL, and a frame takes ~400 ms and spends most of its time on the CPU rather than the GPU.
As a final piece of information, I'm using GLEW and GLFW for cross-platform development. The full code is available here: https://bitbucket.org/theWatchmen/behaviour-trees
Let me know if you need any further information.
Original answer posted here: http://www.gamedev.net/topic/659012-issue-with-opengl-demo-using-catalyst-drivers-linux-and-windows/#entry5168395
It seems that on this particular card GL_POINTS are emulated in software, and that's what is slowing the demo down. I will change the grid to triangles to make sure it runs smoothly on all cards.
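For reference, here is a minimal sketch of the change I have in mind (names and vertex layout are illustrative, not taken from the repo): each cell becomes two triangles, and the whole grid still goes out in a single draw call, now with GL_TRIANGLES:

```cpp
// Sketch: build the grid as two triangles per cell instead of one point per cell.
// Names and layout are illustrative, not taken from the actual repo.
#include <vector>
#include <GL/glew.h>

struct Vertex { float x, y; };

std::vector<Vertex> buildGridTriangles(int cols, int rows, float cellSize) {
    std::vector<Vertex> verts;
    verts.reserve(static_cast<size_t>(cols) * rows * 6); // 6 vertices = 2 triangles per cell
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            float x0 = c * cellSize, y0 = r * cellSize;
            float x1 = x0 + cellSize, y1 = y0 + cellSize;
            // First triangle: bottom-left, bottom-right, top-left
            verts.push_back({x0, y0}); verts.push_back({x1, y0}); verts.push_back({x0, y1});
            // Second triangle: bottom-right, top-right, top-left
            verts.push_back({x1, y0}); verts.push_back({x1, y1}); verts.push_back({x0, y1});
        }
    }
    return verts;
}

// Upload once with glBufferData, then replace glDrawArrays(GL_POINTS, ...) with:
//   glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
```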
I bought an Intel NUC Hades Canyon VR (NUC8i7HVK) PC and added 8 GB of Kingston ValueRAM and a 120 GB Kingston UV500 SSD, running Windows 10, together with an Intel RealSense depth camera.
I have tried installing the RealSense SDK, but the RealSense Viewer and the RealSense Depth Quality Tool aren't working.
They simply fail to open, without showing any error messages.
I have reinstalled it numerous times.
I intend to use it with the Nuitrack library and Unity for skeleton tracking, and have successfully done so on my regular computer.
However, now nothing seems to work anymore.
I have no idea how to troubleshoot this, since I don't know where to look. Any ideas?
Windows Phone 7 Emulator runs in slow mode... even though my system supports VT
I just updated my Sony Vaio FW21E's BIOS; VT is now enabled, but the emulator still runs in the same old slow mode.
How can I run the emulator in VT mode?
Please advise.
Make sure your system meets the requirements laid out here.
Setup and System Requirements for Windows Phone Emulator
In particular, verify that your GPU is being recognised by the emulator by checking that the frame rate counters are visible.
This will not happen if your display driver is not WDDM 1.1 compliant with at least DirectX 10 support.
I also recommend trying a Win7 install on a spare hard disk if you're running Vista. This consistently produces positive results when problems of this nature are reported on hardware compliant systems.
I had this issue on my Mac running bootcamp. I read in some forum what appeared to be the weirdest solution ever.
If I had Netflix open, streaming a movie, my emulator would work perfectly. When I did not, it would just be the slowest thing.
I read somewhere that it could be related to drivers and hardware acceleration: Windows Phone alone was not 'hardcore' enough to trigger the video card's hardware acceleration, but with streaming running the card was already active, making the emulator fast.
You might try that out... I know it sounds dumb but it worked for me.
The HD 3450 should be OK, as it's a DirectX 10 card, I believe.
As said above, the card needs to be WDDM 1.1 compliant.
You can check this by running 'dxdiag' from the Run or search box in Vista. Go to the 'Display 1' (or just 'Display') tab; on the right there will be a DDI Version, which should be 10, and a Driver Model, which should be WDDM 1.1.
If it's not compliant with WDDM 1.1/DX10, it will still work OK, but you won't get things like animations on page transitions etc.
Pretty much as the title says, using bootcamp.
WDDM 1.1 compliance and GPU recognition are confirmed by the WP7 emulator running with EnableFrameRateCounters showing.
I'm considering a MacBook Air as a compromise: I need access to the iPhone dev tools and want to upgrade my Windows Phone 7 capability to something reasonably performant, all with one device.
My current laptop barely runs Win7 and borders on unusable for the WP7 tooling, hence the interest in solving two problems with one device - if that's realistic.
I assume that if the device can run the WP7 tools satisfactorily, it will be capable of anything else I might want to do when booted into Win7.
The new MacBook Airs do not have very powerful processors. The 11" maxes out at 1.66 GHz, while the 13" maxes out at 2.13 GHz. However, they do have the same GPU as the current 13" MacBook Pro. Also, since they use solid-state drives, data access is significantly faster. Overall, it will not be the fastest computer you've used, but it should be enough to work with.
I've bought one, but since it's going to the wife, I won't be able to test it in depth.
Instead, the 13" MacBook Pro from '09 works fine (MonoTouch + iOS dev, and Boot Camp into VStudio + WP7 dev). I upgraded to 4 GB of memory and that helped; the disk is also slower than I'd like. It responds like a mid-grade desktop, IMO.
The problem I see is that the processor in the Airs is ULV with a really slow clock; also, the SSD in the base version is only 64 GB, which is going to be cramped, I think.
Consider this: many Mac gamers install Windows with Boot Camp just to get a better gameplay experience.
That's because Windows has native access to the GPU under Boot Camp.
http://www.mth.kcl.ac.uk/~shaww/web_page/grid/macgpu.htm (2009 article)
http://www.gpgpu.org/forums/viewtopic.php?t=3766&highlight=bootcamp (2007 article)
So the answer is yes.
I have used the ATI Stream SDK on Windows XP SP3 and implemented an algorithm on the GPU. Now I am interested in scaling this algorithm across multiple GPUs on multiple machines, so I switched to Ubuntu in order to use MPI (to send messages).
I googled this, but I only found references for installation on SLES and RHEL; I am looking for Ubuntu 9.04.
Thanks
GG
AMD is switching to an OpenCL-based API soon. It may be worthwhile holding your horses till the OpenCL API stabilizes. CUDA is far ahead of the curve in terms of GPU usability; there is a nice project called MAGMA which is bringing the LAPACK library to joint CPU-GPU usage.
I know of people who are using the ATI Stream SDK and ACML-GPU on Ubuntu without any special problems -- that is, no problems that they wouldn't have on any other Linux distro.
If you can get the Catalyst drivers installed correctly (which in this case will probably mean compiling your kernel modules) and X configured correctly (especially the DRI module; note there are also security issues if you want Stream to work with remote access), it should work.
I'm tempted to ask/comment how you plan to share GPUs between multiple MPI processes, but that's probably wandering off-topic.
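For what it's worth, the usual pattern is one GPU per MPI rank. Here is a minimal sketch, assuming OpenCL (the API AMD is moving to) rather than the Stream/CAL API, with ranks mapped to devices round-robin; error handling is trimmed for brevity:

```cpp
// Sketch: assign one GPU per MPI rank via OpenCL device enumeration.
// Assumes MPI and an OpenCL implementation are installed; error handling trimmed.
#include <mpi.h>
#include <CL/cl.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_uint numDevices = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices);
    if (numDevices == 0) {
        fprintf(stderr, "rank %d: no GPU found\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    cl_device_id devices[16];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, numDevices, devices, nullptr);

    // Map this rank to a GPU; ranks beyond the device count share devices round-robin.
    cl_device_id mine = devices[rank % numDevices];

    cl_context ctx = clCreateContext(nullptr, 1, &mine, nullptr, nullptr, nullptr);
    // ... build kernels and enqueue this rank's slice of the problem ...

    clReleaseContext(ctx);
    MPI_Finalize();
    return 0;
}
```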
I am developing an OpenGL application and I am seeing some strange things happen. The machine I am testing with is equipped with an NVidia Quadro FX 4600 and it is running RHEL WS 4.3 x86_64 (kernel 2.6.9-34.ELsmp).
I've stepped through the application with a debugger and noticed that it hangs on OpenGL calls that retrieve information from the OpenGL API, e.g. glGetError, glIsEnabled, etc. Each time it hangs, the system is unresponsive for 3-4 seconds.
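A minimal sketch of how the stalls can be timed directly, to confirm where the time is going (glGetError here stands in for any of the affected query calls):

```cpp
// Sketch: time a suspect OpenGL query call to confirm where the stall is.
#include <GL/glew.h>
#include <chrono>
#include <cstdio>

void timedGetError() {
    auto start = std::chrono::steady_clock::now();
    GLenum err = glGetError();
    auto end = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    if (ms > 100.0) // anything well above normal indicates a driver-side stall
        fprintf(stderr, "glGetError (-> 0x%x) stalled for %.1f ms\n", err, ms);
}
```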
Another thing that is interesting is that if this same code is run on RHEL 4.5 (Kernel 2.6.9-67.ELsmp), it runs completely fine. The same code also runs perfectly on Windows XP. All machines are using the exact same hardware:
PNY NVidia Quadro FX 4600 768 MB PCI Express
Dual Intel Xeon DP Quad Core E5345 2.33 GHz
4096 MB 667 MHz Fully Buffered DDR2
Super Micro X7DAL-E Intel 5000X Chipset Dual Xeon Motherboard
Enermax Liberty 620 watt Power Supply
I have upgraded to the latest 64-bit drivers (version 177.82, release date Nov 12, 2008), and the result is exactly the same.
Does anyone have any idea what could be causing the system to hang on these OpenGL calls?
It appears that this is an issue with less-than-perfect NVidia drivers for Linux. Upgrading to a newer kernel appears to help. If I am forced to use this dated kernel, there are some things that I've tried that seem to help.
Defining the __GL_YIELD environment variable to "NOTHING" prior to starting X seems to increase stability with this older kernel.
http://us.download.nvidia.com/XFree86/Linux-x86_64/177.82/README/chapter-11.html
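If restarting X isn't convenient, the variable can in principle also be set from the application itself before the GL context is created. This is only a sketch: it assumes the NVIDIA client library reads __GL_YIELD at initialization time, which is my reading of the README above; exporting it before starting X remains the documented route.

```cpp
// Sketch: set __GL_YIELD for this process before any GL context is created.
// Assumption: the NVIDIA client library reads the variable at initialization;
// the documented approach is exporting it before starting X.
#include <cstdlib>

int main(int argc, char** argv) {
    setenv("__GL_YIELD", "NOTHING", 1); // must happen before context creation
    // ... create the window / GL context and run the application as usual ...
    return 0;
}
```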
I've also tried disabling Triple Buffering and Flipping.
I've also found that these forums are very helpful for Linux/NVidia problems. Just do a search for "linux crash"
You may be able to dig deeper by using a system profiler like Sysprof or OProfile. Do other OpenGL applications using these calls exhibit similar behavior?