How to enable CUDA only for computing purposes, not for display - GPGPU

I am using an NVIDIA GT 440 GPU. It is used for both display and computational purposes, which reduces performance during computation. Can I enable it only for computational purposes? If so, how can I stop it from being used for display?

It depends -- are you working on Windows or Linux? Do you have any other display adapters (graphics cards) in the machine?
If you're on Linux, you can run without the X Windows Server (i.e., from a terminal) and SSH into the box (or attach your display to another adapter).
If you're on Windows, you need to have a second display adapter. As long as your display is connected to your GeForce GT 440, there's no way to use it only for computational purposes. That also rules out Remote Desktop, which won't work at all unless you have a Tesla card, because of the way the WDDM (Windows Display Driver Model) was designed (it can't be accessed from within Session 0, which is where the RDP service runs).

I'm using Intel integrated graphics for display and the GPU for compute on Linux.
You'll need to set up the BIOS to use the motherboard's integrated graphics for output, which leaves your GPU free. Whether this works depends on the hardware you have available. =)
How much does it affect performance? I checked before; on Windows the display did take up some memory (less than 10 MB).
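If you want to measure this yourself, here is a minimal sketch (the nvcc build line and the assumption that the GT 440 is device 0 are mine, not from the answers above) that reports how much device memory is currently in use, desktop included:

    // check_mem.cpp -- build with: nvcc check_mem.cpp -o check_mem
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        size_t freeBytes = 0, totalBytes = 0;
        cudaSetDevice(0);  // assumption: the GT 440 is device 0
        if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess) {
            std::fprintf(stderr, "cudaMemGetInfo failed\n");
            return 1;
        }
        // Whatever is not free (minus your own allocations) is held by the
        // driver, the desktop, and other applications.
        std::printf("total: %zu MiB, free: %zu MiB, in use: %zu MiB\n",
                    totalBytes >> 20, freeBytes >> 20,
                    (totalBytes - freeBytes) >> 20);
        return 0;
    }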

Check that you have write permission on the /dev/nvidia* devices. The CUDA C Getting Started Guide for Linux contains a script that automatically sets the correct permissions at startup.
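As a quick sanity check (this is not the guide's script, just a hypothetical snippet assuming at least one /dev/nvidia* node exists), you can test from code whether the current user can actually write to the first device node:

    // perm_check.cpp -- build with: g++ perm_check.cpp -o perm_check
    #include <cstdio>
    #include <unistd.h>

    int main() {
        const char* node = "/dev/nvidia0";  // assumption: first NVIDIA device node
        if (access(node, W_OK) == 0)
            std::printf("%s is writable by the current user\n", node);
        else
            std::perror(node);  // typically EACCES (permissions) or ENOENT (driver not loaded)
        return 0;
    }

The permanent fix is the startup script described in the Getting Started Guide, not a manual chmod after every boot.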

Related

Is it possible to add display panel brightness support to an existing Windows display driver?

So, keep the stock driver, but add some other driver plus Windows registry configuration that tells Windows how to do brightness at the hardware level?
Is that even possible within Windows? Or does it need to be built into the graphics driver itself?
(I'm specifically asking about Intel's "Legacy Backlight Brightness (LBB) I/O Register", which works on a lot of Intel GPUs.)
For reference, I'm not really grokking all the jargon: http://msdn.microsoft.com/en-us/library/windows/hardware/ff569755(v=vs.85).aspx

How to use DirectX 9 with multiple GPUs on Windows 7

I've got multiple NVIDIA GPU cards (Q2000) in a Windows 7 system, without SLI and with only one monitor.
Now what I'm trying to do is make a Direct3D9 device run on a specific GPU.
I can use the [Adapter] parameter in IDirect3D9::CreateDevice to choose a GPU, but unless I connect a second monitor to that GPU card, it will not work (if I've only got one desktop on Windows).
If I click the "Detect" button in the Screen Resolution control panel, it can create a "fake" desktop next to my primary desktop, and CreateDevice(1, ...) works fine - but this is not what I want.
For OpenGL, it's easy because of WGL_NV_gpu_affinity: it can make an OpenGL context run on the second GPU with only one monitor connected and one desktop on Windows.
I wonder if there is any API for DirectX 9 that works like WGL_NV_gpu_affinity.
Any hint would be much appreciated. Thanks in advance!
IDirect3D9::CreateDevice takes "Adapter" as its first parameter, which is not a GPU but a display adapter - that is, an output with a desktop attached to it.
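To make the adapter/GPU distinction concrete, here is a hedged sketch (the window handle and adapter index 1 are placeholders, not from the question) that lists the adapters D3D9 can see and then tries to create a device on adapter 1, which, as described above, only succeeds once a desktop is extended onto that output:

    // adapters.cpp -- build against the Windows/DirectX 9 SDK, link d3d9.lib
    #include <d3d9.h>
    #include <cstdio>
    #pragma comment(lib, "d3d9.lib")

    int main() {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        // Each "adapter" here is an output the desktop knows about, not a physical GPU.
        UINT count = d3d->GetAdapterCount();
        for (UINT i = 0; i < count; ++i) {
            D3DADAPTER_IDENTIFIER9 id = {};
            d3d->GetAdapterIdentifier(i, 0, &id);
            std::printf("Adapter %u: %s\n", i, id.Description);
        }

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed = TRUE;
        pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;
        pp.hDeviceWindow = GetDesktopWindow();  // placeholder; use your own HWND in real code

        IDirect3DDevice9* device = nullptr;
        // With only one desktop this fails, because adapter 1 does not exist yet.
        HRESULT hr = d3d->CreateDevice(1, D3DDEVTYPE_HAL, pp.hDeviceWindow,
                                       D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                       &pp, &device);
        std::printf("CreateDevice on adapter 1: 0x%08lX\n", static_cast<unsigned long>(hr));
        if (device) device->Release();
        d3d->Release();
        return 0;
    }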

How can I override the CUDA kernel execution time limit on Windows with a secondary GPU?

NVIDIA's website explains the time-out problem:
Q: What is the maximum kernel execution time? On Windows, individual GPU program launches have a maximum run time of around 5 seconds. Exceeding this time limit usually will cause a launch failure reported through the CUDA driver or the CUDA runtime, but in some cases can hang the entire machine, requiring a hard reset. This is caused by the Windows "watchdog" timer that causes programs using the primary graphics adapter to time out if they run longer than the maximum allowed time.
For this reason it is recommended that CUDA is run on a GPU that is NOT attached to a display and does not have the Windows desktop extended onto it. In this case, the system must contain at least one NVIDIA GPU that serves as the primary graphics adapter.
Source: https://developer.nvidia.com/cuda-faq
So it seems that NVIDIA believes, or at least strongly implies, that having multiple (NVIDIA) GPUs with the proper configuration can prevent this from happening?
But how? So far I have tried lots of ways, but there is still the annoying time-out on a GK110 GPU that is: (1) plugged into the secondary PCIe 16x slot; (2) not connected to any monitor; (3) set as a dedicated PhysX card in the driver control panel (as recommended by some others). The time-out is still there.
If your GK110 is a Tesla K20c GPU, then you should switch the device from WDDM mode to TCC mode. This can be done with the nvidia-smi.exe tool that gets installed with the driver. Use the Windows search function to find this file (nvidia-smi.exe), then use the command-line help (`nvidia-smi --help`) to discover the commands necessary to switch a GPU from WDDM to TCC mode.
Once you have done this, the Windows watchdog mechanism will no longer pay attention to your GK110 device.
If, on the other hand, it is a GeForce GPU, there is no way to switch it to TCC mode. Your only option is to modify the registry settings, which is somewhat fiddly. Your mileage may vary, as the exact structure of the registry keys varies by OS.
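For reference, the timeout-detection (TDR) values that Microsoft documents live under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers. The sketch below is only an illustration of that registry edit (the 30-second value is an arbitrary example, and the program must run elevated); it raises TdrDelay rather than disabling the watchdog outright:

    // tdr_delay.cpp -- Win32 sketch; run as Administrator and reboot afterwards
    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "advapi32.lib")

    int main() {
        HKEY key = nullptr;
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SYSTEM\\CurrentControlSet\\Control\\GraphicsDrivers",
                                0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            std::fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);  // usually access denied
            return 1;
        }
        DWORD tdrDelay = 30;  // example value: seconds before the watchdog fires
        rc = RegSetValueExA(key, "TdrDelay", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&tdrDelay), sizeof(tdrDelay));
        if (rc != ERROR_SUCCESS)
            std::fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
        RegCloseKey(key);
        return 0;  // the new timeout only takes effect after a reboot
    }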
If a GPU is in WDDM mode, it is subject to the watchdog timer.
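An easy way to verify which mode a device ended up in is to ask the CUDA runtime; a minimal sketch (assuming the CUDA toolkit is installed) follows - kernelExecTimeoutEnabled is set exactly for the devices the watchdog watches:

    // watchdog_check.cpp -- build with: nvcc watchdog_check.cpp -o watchdog_check
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::fprintf(stderr, "no CUDA devices visible\n");
            return 1;
        }
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop{};
            cudaGetDeviceProperties(&prop, dev);
            std::printf("device %d (%s): run-time limit %s\n", dev, prop.name,
                        prop.kernelExecTimeoutEnabled
                            ? "ENABLED (WDDM watchdog applies)"
                            : "disabled (TCC mode or no display attached)");
        }
        return 0;
    }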

Enable/Disable multiple monitors via Win32 API or NVidia API?

I'm trying to write a small utility that will enable/disable monitors under Windows 7 with my nVidia graphics card (i.e. "Extend the desktop onto this monitor", etc.).
The reason is that my nVidia Geforce GTX 480 has three outputs (2x DVI, 1x Mini-HDMI) but only allows two to be active at any given time so I need to enable/disable monitors when I want to switch to my TV (HDMI) display.
The Win32 API function EnumDisplayDevices isn't working because it doesn't show disabled monitors.
nVidia provides an API (NVAPI) with functions to enumerate all monitors (even disabled ones), and you can enable a monitor, but you can't disable one. (I'm referring to NvAPI_CreateDisplayFromUnAttachedDisplay.)
UltraMon seems to have figured out how to do this, but I can't find any information on how.
I think that if two out of the three displays are already connected, the third one will not be detected; the card stops listening for new hardware. You have to manually unplug one cable and then insert a new one into a different port, unless there is a way to "eject" the connection, similar to a USB storage device.
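For the "disable" half of the problem, one commonly suggested Win32 approach (not confirmed to be what UltraMon does, and the device index below is a placeholder) is to stage a zero-sized mode for the monitor with CDS_NORESET and then apply all staged changes at once:

    // detach_monitor.cpp -- Win32 sketch, link user32.lib
    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "user32.lib")

    int main() {
        DISPLAY_DEVICEA dd = {};
        dd.cb = sizeof(dd);

        // List the adapters/outputs Windows currently knows about.
        for (DWORD i = 0; EnumDisplayDevicesA(nullptr, i, &dd, 0); ++i) {
            std::printf("%lu: %s (%s)\n", i, dd.DeviceName, dd.DeviceString);
            dd.cb = sizeof(dd);
        }

        // Placeholder: detach the second output (index 1).
        dd.cb = sizeof(dd);
        if (!EnumDisplayDevicesA(nullptr, 1, &dd, 0)) return 1;

        DEVMODEA dm = {};
        dm.dmSize = sizeof(dm);
        dm.dmFields = DM_POSITION | DM_PELSWIDTH | DM_PELSHEIGHT;  // zero size = detach
        ChangeDisplaySettingsExA(dd.DeviceName, &dm, nullptr,
                                 CDS_UPDATEREGISTRY | CDS_NORESET, nullptr);

        // Apply every change staged with CDS_NORESET in a single modeset.
        ChangeDisplaySettingsExA(nullptr, nullptr, nullptr, 0, nullptr);
        return 0;
    }

Re-attaching works the same way, but with a non-zero position and resolution in the DEVMODE.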

Create virtual hardware, kernel, qemu for Android Emulator in order to produce OpenGL graphics

I am new to android and wish to play around with the emulator.
What I want to do is to create my own piece of virtual hardware that can collect OpenGL commands and produce OpenGL graphics.
I have been told that in order to do this I will need to write a Linux kernel driver to enable communication with the hardware. Additionally, I will need to write an Android user-space library to call the kernel driver.
To start with, I plan on making a very simple piece of hardware that only handles, say, one or two commands.
Has anyone here done something like this? If so, do you have any tips or possible links to extra information?
Any feedback would be appreciated.
Writing a hardware emulation is a tricky task and by no means easy. So if you really want to do this, I'd not start from scratch. In your case I'd first start with something simpler (because many of the libraries are already in place on the guest and the host side): implementing an OpenGL passthrough for ordinary Linux through qemu. What does it take?
First you add a virtual GPU to qemu, which also involves adding a new graphics output module that uses OpenGL (so far qemu uses SDL). Next you create DRI/DRM drivers in the Linux kernel that will run on the guest (Android uses its own graphics system, but for learning, DRI/DRM is fine), as well as the matching driver in Mesa. On the host side you must translate what comes from qemu into OpenGL calls. Since the host-side GPU is doing all the hard work, your DRI/DRM part will be quite minimal and just build a bridge.
The emulator that comes with Android SDK 23 already runs OpenGL, you can try this out with the official MoreTeapots example: https://github.com/googlesamples/android-ndk/tree/a5fdebebdb27ea29cb8a96e08e1ed8c796fa52db/MoreTeapots
I am pretty sure that it is hardware accelerated, since all those polygons are rendering at 60 FPS.
The AVD creation GUI in Android Studio has a hardware acceleration option, which should control settings like:
==> config.ini <==
hw.gpu.enabled=yes
hw.gpu.mode=auto
==> hardware-qemu.ini <==
hw.gpu.enabled = true
hw.gpu.mode = host
hw.gpu.blacklisted = no
in ~/.android/avd/Nexus_One_API_24.a/.
