How can I use physical computer's GPU on VMware Workstation? - ffmpeg

I am Ching.
I want to use the GPU to accelerate video transcoding with ffmpeg in a virtual machine on VMware Workstation.
I ran the command lspci | grep VGA, and the output is
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
It seems that I can only use VMware's own virtual graphics adapter, right?
How can I use my physical computer's GPU, an NVIDIA Quadro K2000?
Or is there another solution to the problem mentioned in the title?

I assume you are using some flavor of Linux as your host? Open the VM's settings and navigate to Display; there, check "Accelerate 3D graphics" and select the maximum amount of graphics memory. Other than that, you can't pass your GPU through directly to a VM in VMware Workstation.
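Since Workstation offers no passthrough, one alternative is to run the NVENC-accelerated transcode on the host itself rather than inside the VM. A minimal sketch, assuming an ffmpeg build compiled with NVENC support (the Quadro K2000 is a Kepler card, which should have an NVENC engine); input.mp4 and output.mp4 are placeholder file names:
# check whether this ffmpeg build ships the NVENC encoder at all
ffmpeg -encoders | grep nvenc
# transcode the video on the host GPU, copying the audio stream untouched
ffmpeg -i input.mp4 -c:v h264_nvenc -c:a copy output.mp4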

Related

Is vmware virtualization faster on same SSD as the images or on separate SSD?

I use VMware Player with two SSDs: the main Windows 10 OS is on the first SSD, and I will store the Linux virtual machine images on the second SSD.
Will the virtual machines run faster if the VMware Player program is installed on the same SSD as the main OS, or on the same drive as the virtual images?
Either way is fine. If Windows 10 and VMware are installed on the same drive there may be some I/O contention while loading, but the effect is very small, so don't worry about it.

How do I increase the video memory in VMWare Workstation for OS X 10.12?

I'm using VMware Workstation 12 to run OS X 10.12 from Windows 10 Home. I have an NVIDIA GeForce GTX 960M with 2 GB of dedicated video memory. I don't see a setting in the virtual machine settings that allows me to change the amount of video memory allotted to the guest. If I remember correctly, I maxed it out at 128 MB during setup. Can someone help me figure out how to give the guest more video memory?
Apparently this isn't an option in Workstation.
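One tweak that is sometimes suggested (no guarantee it helps an OS X guest, whose SVGA driver support is limited) is to set the video memory directly in the VM's .vmx file while the VM is powered off; the value is in bytes, so 134217728 is 128 MB:
svga.vramSize = "134217728"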

Can I use my GPU from a docker container on a MacBook Pro ? (AMD Radeon GPU)

I would like to run a GPU enabled app (Gazebo) inside a docker container on my MacBook Pro.
It seemed to me, through my research, that about a year ago Docker released a native Docker app for macOS.
Before that, Docker used to spawn an entire Linux VM and run the container on top of it.
Now it apparently uses a native hypervisor framework, making it more optimized and closer to the hardware, entirely changing Docker's approach to containerization on a Mac.
All this is not very clear to me, and I am not sure of everything I stated.
Is it now possible to use my MacBook Pro's GPU from a Docker container, and if yes, how?
The command line I'm using right now, which works for regular X11 apps but not for GPU-enabled apps like Gazebo, is:
xhost +
docker run -it -e DISPLAY=$ip:0 -v /tmp/.X11-unix:/tmp/.X11-unix image_name bash
There's still a virtual machine involved.
Docker for Mac uses a virtualization layer called xhyve. It's a lot thinner and more lightweight than VirtualBox and the like (it emulates fewer peripherals), but it's still virtualization.
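You can see the VM for yourself: a container started by Docker for Mac reports a Linux kernel rather than the host's Darwin kernel, because it is actually running inside that xhyve guest.
# the kernel string printed is the Linux VM's, not macOS's
docker run --rm alpine uname -a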
PCI passthrough is (theoretically) possible, but you can't pass through your laptop's main GPU and still use it.
Hardware with an IOMMU (and yes, your MacBook Pro has an Intel chipset with such support) can allow a virtualized environment direct access to PCI hardware.
However, you can't cede control of a piece of hardware to a VM and still use that hardware from the host. (Some high-end server network cards work around this by having multiple PCI endpoints, so the host and each guest gets a different endpoint to talk to).
So you could get an external Thunderbolt-attached GPU, and it might work... in the future.
The underlying support in xhyve isn't there yet (as of this mid-2017 writing), and even on KVM (used by a lot of the folks doing pioneering work here), there are only limited reports of success, with one specific video card, the Radeon HD 5850.

How to enable CUDA only for compute purposes, not for display

I am using an NVIDIA GT 440 GPU. It is used for both display and computation, which hurts performance during computation. Can I enable it only for compute? If so, how can I stop it from driving the display?
It depends -- are you working on Windows or Linux? Do you have any other display adapters (graphics cards) in the machine?
If you're on Linux, you can run without the X server (i.e., from a text console) and SSH into the box (or attach your display to another adapter).
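On an Ubuntu system of that era, that amounts to something like the following (lightdm is Ubuntu's default display manager; substitute whichever one your distribution uses, and my_cuda_app is a placeholder name):
# switch to a text console (e.g. Ctrl+Alt+F1) or SSH in, then stop the display manager
sudo service lightdm stop
# the GPU is now free of X; run the compute job
./my_cuda_app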
If you're on Windows, you need a second display adapter. As long as your display is connected to your GT 440, there's no way to use it only for computational purposes. That also rules out Remote Desktop, which won't work at all unless you have a Tesla card, because of the way the WDDM (Windows Display Driver Model) was designed: it can't be accessed from within Session 0, which is where the RDP service runs.
I'm using the Intel integrated graphics for display and the GPU for compute on Linux.
You'll need to set the integrated graphics on the motherboard as primary in the BIOS. This will leave your GPU free. It depends on the hardware you have available. =)
How much does it affect performance? I checked before; the display in Windows only takes up a small amount of memory (less than 10 MB).
Check that you have write permission on the /dev/nvidia* devices. The CUDA C Getting Started Guide for Linux contains a script that automatically sets the correct permissions at startup.
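A quick manual check along those lines (the chmod is a blunt temporary fix; the startup script from the guide is the cleaner solution):
# the device nodes should show crw-rw-rw-
ls -l /dev/nvidia*
# if they don't, as a stopgap:
sudo chmod a+rw /dev/nvidia*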

How to make VirtualBox or VMware (or any other virtualization software) to use a native guest network driver?

I don't know if what I want to achieve is actually feasible or not. I have an RTL8192CE wireless network Mini PCI card, which definitely doesn't work properly on Linux (running Ubuntu 12.04 64-bit (Precise Pangolin)). I have already tried everything I could think of: I downloaded the latest drivers from the Realtek homepage, tried using NDISwrapper with several different sets of Windows drivers, and also tried using generic wireless backports, etc. None of it solved my problem.
It does, on the other hand, work perfectly on Windows... I dual-boot Windows 7 and Ubuntu 12.04, both 64-bit. Apparently, there is a bug in Ubuntu related to this card.
I want to know whether there is a way to use a virtualized Windows installation (Windows XP or Windows 7, preferably not Windows Vista) under my Ubuntu 12.04 64-bit that uses a native Windows driver (since the network card works perfectly in Windows). The virtualization software can be either VirtualBox (preferred), VMware or any other. There isn't any problem if I have to manually configure that by shell scripting or anything similar.
So, to make it clearer, I have a VirtualBox installed in my Ubuntu 12.04 (my host), which I use to run Windows 7 (my guest). I wanted to know whether this virtualized (guest) Windows 7 could have "direct" access to my wireless interface -- such as the dual-booted Windows 7 I have installed, without passing through the Ubuntu drivers.
Apparently I could not achieve that by using VirtualBox's guest additions, could I?
PS: I believe none of VirtualBox's networking modes (NAT, bridged networking, internal networking and host-only networking) would allow me to do that, am I correct? How could I solve that problem?
What you are asking for is called PCI Passthrough in VirtualBox, and it should be considered a very advanced topic. I have experimented with this feature before in VirtualBox and VMware ESXi (make that vSphere...), and it can be extremely fragile.
I would suggest you spend some time reading the VirtualBox manual section on this (Chapter 9: Advanced Topics). There are some limitations you will want to be aware of, and this is an area of virtualization that is still very young and immature. Off hand, here are some of the rather strict requirements before you can even begin:
Your hardware must have an IOMMU (Intel calls it VT-d; AMD calls it AMD-Vi)
Your guest must be configured with hardware assist enabled (VT-x or AMD-V)
Your host Linux kernel must be built to utilize the IOMMU hardware
If your hardware/software meets those rather strict guidelines, give it a shot. What will happen is your guest will be effectively given direct access to your wireless PCI card and it will show up directly as a PCI device to your guest. You will install and use the drivers exactly as you would if Windows were your host operating system instead of your guest.
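Concretely, on the host you would first confirm that the kernel sees the IOMMU, then attach the card by its PCI address with VBoxManage. The addresses below are examples; substitute the ones lspci reports for your wireless card and a free slot in the guest:
# verify the IOMMU is active (boot with intel_iommu=on if nothing shows up)
dmesg | grep -e DMAR -e IOMMU
# attach host device 02:00.0 to the guest at virtual address 01:05.0
VBoxManage modifyvm "Windows 7" --pciattach 02:00.0@01:05.0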
Reference - Chapter 9: Advanced Topics - PCI Passthrough
https://www.virtualbox.org/manual/ch09.html#pcipassthrough
