What do I need to do to get WebGL up and running on my Mac? - macos

I've got a 2007 Mac Pro, 8 GB RAM, 2 x NVIDIA GeForce 7300 GT (256 MB). I tried to look at a couple of Google's WebGL demos, for example this one, but am unable to do so because
my system is reported as not WebGL compatible.
I'm running Lion and the latest version of Chrome - what else do I need to do? Or is my 'bleeding-edge' workstation now a relic of the past?

You need a compatible browser and halfway decent hardware (which you have).
See http://get.webgl.org for better instructions.
[EDIT!]
Actually, after looking through get.webgl.org a bit more, they explicitly state that your card is incompatible:
If you have the following graphics cards, WebGL is unsupported and is disabled by default:
Mac:
ATI Radeon HD2400
ATI Radeon 2600 series
ATI Radeon X1900
GeForce 7300 GT
This is probably because of driver bugs that they've found affect the stability of the browser. (Most vendors have lousy OpenGL support, even on systems like the Mac!)
You may still be able to force WebGL on by navigating to about:flags in Chrome and seeing if it has an Enable WebGL option.
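If about:flags doesn't expose the option, Chrome of this era could also be launched with the GPU blacklist disabled from the command line. A sketch for macOS (expect instability, since the card is blacklisted for a reason):

```shell
# Quit Chrome first, then relaunch it ignoring the GPU blacklist so
# WebGL is attempted even on a blacklisted card like the GeForce 7300 GT.
open -a "Google Chrome" --args --ignore-gpu-blacklist
```

If the driver really is as buggy as the blacklist suggests, browser crashes or rendering corruption are likely.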

Related

Android Emulator, graphical glitches on Windows 11

I have a Windows 11 machine with an RTX 3050 graphics card. It's a Dell G15 laptop. I cannot find a (good) solution to the graphical glitches that appear on an Android Emulator.
The only "solution" I found was to change the hw.gpu.mode in the config.ini file from auto to guest. That fixes the glitches, but causes really bad performance issues, and one app I developed with Flutter for my company straight up doesn't load (it does load when hw.gpu.mode=auto).
I'd appreciate it if you could point me in the right direction to solving this. Let me know if you need any other details about my machine:
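For reference, the change described above lives in the AVD's config.ini (the path and AVD name here are illustrative placeholders):

```ini
; ~/.android/avd/<avd-name>.avd/config.ini
; guest  = software rendering: avoids the glitches but is much slower
; auto/host = hardware rendering on the GPU: fast but glitchy here
hw.gpu.enabled=yes
hw.gpu.mode=guest
```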
OS: Windows 11 Home Single Language [64-bit]
Kernel: 10.0.22000.0
CPU: 11th Gen Intel(R) Core(TM) i5-11260H @ 2.60GHz
GPU: NVIDIA GeForce RTX 3050 Laptop GPU
Nvidia driver version: 516.94 (Downloaded the "Game ready driver" from GeForce Experience)
This problem occurs only on emulators with Android 12+; personally I installed one device with Android 11 and a second device with Android 13 for tests.
It seems like dedicated Nvidia GPUs are causing the problem. I have a 3060 laptop, I see the same issue, and when I set it to guest it seems to work. My guess is that this setting changes the emulator from using the GPU to the CPU. I would recommend you try setting Android Studio to use integrated graphics instead of the dedicated GPU. Since I have an 8-core CPU compared to your 4-core one, I'm guessing that's the reason I don't get performance that is as bad.

How to Render With OpenGL on a Tesla-equipped Windows-based Host

I used to think that Tesla cards would not support the OpenGL API, but I recently learned that Tesla products can also be used for visualization via OpenGL.
I have a workstation, in which there are 2 Intel E5 CPUs, and 1 Tesla C2050. According to https://developer.nvidia.com/opengl-driver, Tesla C2050 should support at least OpenGL version 3.
Now, I'd like to run a render service program using OpenGL 3.3 on that workstation, but without success.
The following is what I tried.
If I log in through RDP remote desktop, the supported OpenGL version is 1.1 due to the virtual graphics adapter. Here, I used the tscon command to reconnect to the physical console. As a result, the RDP connection was lost. When I reconnected, I saw all the windows resized to 800x600 and the detected OpenGL support was still 1.1.
If I log in with a monitor plugged into some kind of "integrated graphics adapter", the supported OpenGL version is still 1.1, maybe because the program was started on the screen attached to the basic adapter. But the Tesla GPU does not have a graphics output port.
I wonder how I should configure the host to enable the use of the Tesla GPU for OpenGL-based rendering.
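As an aside, the tscon step from the first attempt is typically invoked like this (run from an elevated command prompt inside the RDP session; %sessionname% is expanded by cmd.exe to the current session name):

```bat
:: Hand the current RDP session back to the physical console so the
:: NVIDIA OpenGL driver is used instead of the RDP virtual adapter.
tscon %sessionname% /dest:console
```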
I have solved this problem.
First, in fact the Tesla C2050 is a dedicated video card and has one DVI display port.
What is more important is that the BIOS on the motherboard was set to start up on the integrated GPU. Changing this setting to the PCI-E card solves the problem of being unable to access the Tesla card.
Next, about graphics API support.
The official driver on the NVIDIA site offers support for OpenGL 4.4.
Also, the Tesla card can be used to render via OpenGL just the same as a Quadro or GeForce card. There's no notable difference, and no special configuration is necessary.

Does Kinect for windows v2 work with Parallels?

Does Kinect for Windows v2 work on a Mac Pro using Windows 8.1 running on top of Parallels?
Considering Kinect v2's minimum hardware requirements below (copied from this MSDN blog), it is not possible for Windows 8/8.1 running on top of Parallels to recognize and run Kinect v2. The latest version of Parallels, v10 as of the time of this answer, only supports DirectX 10, which is below the minimum requirement. I have tried it myself, but had no success even with Parallels Gaming Mode. Moreover, in order for Kinect to be recognized you need the full USB 3.0 bandwidth.
An alternative solution, as discussed in this MSDN blog, is to use Windows To Go or to install Windows using Boot Camp.
Kinect v2's minimum required capabilities:
64 bit (x64) processor
4 GB Memory (or more)
i7 3.1 GHz (or higher)
Built-in USB 3.0 host controller (Intel or Renesas chipset).
If you’re adding USB 3.0 functionality to your existing PC through an adapter, please ensure that it is a Windows 8 compliant device and that it supports Gen-2. See the troubleshooting section of Getting Started for more information.
DX11 capable graphics adapter (see list of known good adapters below)
Intel HD 4400 integrated display adapter
ATI Radeon HD 5400 series
ATI Radeon HD 6570
ATI Radeon HD 7800 (256bit GDDR5 2GB/1000Mhz)
NVidia Quadro 600
NVidia GeForce GT 640
NVidia GeForce GTX 660
NVidia Quadro K1000M
A Kinect v2 sensor, which includes a power hub and USB cabling.

Mirror Drivers not working on Windows 7 64 bit computer

I am trying to develop an application that uses a mirror driver, but I am having trouble getting any mirror driver to work properly on my computer. I always seem to get the same issue no matter which driver I use. I have tried the mirror driver in UltraVNC and also the DemoForge Mirage driver that is included in TightVNC.
These are the errors I seem to receive - this is the output from DemoForge Mirage. The errors from the other drivers are essentially the same, just maybe worded slightly differently:
Could not create device driver context!
Unable to map memory for mirror driver!
Considering this is happening with all mirror drivers, I am thinking maybe it is an issue with my graphics card or Intel HD Graphics.
My display adapters are:
Nvidia GeForce GT525M
Intel HD Graphics 3000
Can anyone tell me what the problem could be and how to fix it? I have thought about just developing on another computer but it doesn't change the fact that I am still having an issue and others will too.

glGetError hangs for several seconds

I am developing an OpenGL application and I am seeing some strange things happen. The machine I am testing with is equipped with an NVidia Quadro FX 4600 and it is running RHEL WS 4.3 x86_64 (kernel 2.6.9-34.ELsmp).
I've stepped through the application with a debugger and I've noticed that it is hanging on OpenGL calls that are receiving information from the OpenGL API: i.e. - glGetError, glIsEnabled, etc. Each time it hangs up, the system is unresponsive for 3-4 seconds.
Another thing that is interesting is that if this same code is run on RHEL 4.5 (Kernel 2.6.9-67.ELsmp), it runs completely fine. The same code also runs perfectly on Windows XP. All machines are using the exact same hardware:
PNY nVidia Quadro FX4600 768MB PCI Express
Dual Intel Xeon DP Quad Core E5345 2.33GHz
4096 MB 667 MHz Fully Buffered DDR2
Super Micro X7DAL-E Intel 5000X Chipset Dual Xeon Motherboard
Enermax Liberty 620 watt Power Supply
I have upgraded to the latest 64-bit drivers: Version 177.82, Release Date: Nov 12, 2008, and the result is exactly the same.
Does anyone have any idea what could be causing the system to hang on these OpenGL calls?
It appears that this is an issue with less-than-perfect NVidia drivers for Linux. Upgrading to a newer kernel appears to help. If I am forced to use this dated kernel, there are some things that I've tried that seem to help.
Setting the __GL_YIELD environment variable to "NOTHING" prior to starting X seems to increase stability with this older kernel.
http://us.download.nvidia.com/XFree86/Linux-x86_64/177.82/README/chapter-11.html
I've also tried disabling Triple Buffering and Flipping.
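Concretely, those workarounds look something like this (the environment variable and the TripleBuffer option are from the NVIDIA README for this driver generation; treat it as a sketch):

```shell
# Set before X starts, e.g. in the script that launches X:
export __GL_YIELD="NOTHING"  # driver busy-waits instead of yielding/sleeping

# Triple buffering is an X config option rather than an environment
# variable; in the Device section of /etc/X11/xorg.conf:
#   Option "TripleBuffer" "false"
# Flipping can be toggled off under OpenGL Settings in nvidia-settings.
```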
I've also found that these forums are very helpful for Linux/NVidia problems. Just do a search for "linux crash".
You may be able to dig deeper by using a system profiler like Sysprof or OProfile. Do other OpenGL applications using these calls exhibit similar behavior?
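With the legacy OProfile that shipped with RHEL 4, a first profiling pass might look like this (requires root; these are the old opcontrol-interface commands, shown as a sketch):

```shell
opcontrol --no-vmlinux   # userspace-only profiling, no kernel image needed
opcontrol --start
# ... reproduce the 3-4 second glGetError hang in the application ...
opcontrol --shutdown
opreport --long-filenames   # see whether time is spent in the app or in libGL
```

If most samples land in the NVIDIA libGL rather than your code, that would support the driver-bug theory.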
