Using PyTorch CUDA on MacBook Pro - macOS

I am using a MacBook Pro (16-inch, 2019) running macOS 10.15.5 (19F96).
GPUs:
AMD Radeon Pro 5300M
Intel UHD Graphics 630
I am trying to use PyTorch with CUDA on my Mac.
All of the guides I have seen assume that I have an NVIDIA graphics card.
I found this issue: https://github.com/pytorch/pytorch/issues/10657, but it looks like I would need to install ROCm, and according to its list of supported operating systems, ROCm only runs on Linux.
Is it possible to run PyTorch on the GPU on a Mac with an AMD graphics card?

No.
CUDA works only with supported NVIDIA GPUs, not with AMD GPUs.
There is an ongoing effort to support acceleration for AMD GPUs with PyTorch (via ROCm, which does not work on macOS).

CUDA is a GPU computing framework developed by NVIDIA for NVIDIA GPUs; the same goes for the cuDNN library.
At the moment, you cannot use GPU acceleration with PyTorch on an AMD GPU, i.e. without an NVIDIA GPU. The operating system is not the problem: it doesn't matter that you are on macOS, it is a matter of what GPU you have.
What you can do, though, is either purchase an external NVIDIA GPU or use a cluster. For example, Google Colab offers PyTorch compatibility.
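For illustration, a minimal sketch of how PyTorch code typically selects the CUDA device when one is present (for example on Colab) and falls back to the CPU otherwise; the tiny linear model and tensor shapes are arbitrary placeholders:

import torch

# Use CUDA when an NVIDIA GPU and driver are present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# Models and tensors are then moved to the selected device.
model = torch.nn.Linear(3, 2).to(device)
x = torch.randn(4, 3, device=device)
print(model(x))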

PyTorch now supports training using Metal.
Announcement: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
To get started, install the latest nightly build of PyTorch: https://pytorch.org/get-started/locally/
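As a rough idea of what this looks like in code, here is a minimal sketch assuming a recent enough build on a Metal-capable Mac; only the "mps" device name comes from the announcement, the tiny model and random batch are placeholders:

import torch

# The "mps" backend is only present in recent builds on a supported macOS version.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch, just to show tensors being placed on the Metal device.
x = torch.randn(32, 10, device=device)
y = torch.randn(32, 1, device=device)

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
print(device, loss.item())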
Answer pre May 2022
Unfortunately, no GPU acceleration is available when using PyTorch on macOS. CUDA has not been available on macOS for a while, and it only runs on NVIDIA GPUs anyway. AMD's equivalent library, ROCm, requires Linux.
If you are working with macOS 12.0 or later and are willing to use TensorFlow instead, you can use the Mac-optimized build of TensorFlow, which supports GPU training using Apple's own GPU acceleration library, Metal.
Currently, you need Python 3.8 (<=3.7 and >=3.9 don't work) to run it. To install, run:
pip3 install tensorflow-macos
pip3 install tensorflow-metal
You may need to uninstall existing TensorFlow distributions first, or work in a virtual environment.
Then you can simply run:
import tensorflow as tf
tf.test.is_gpu_available() # should return True
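If is_gpu_available() is deprecated in your TensorFlow version, an equivalent sanity check (a sketch, the exact device string may differ) is to list the physical devices and run a small op on the GPU:

import tensorflow as tf

# The Metal GPU should show up as a physical GPU device.
print(tf.config.list_physical_devices("GPU"))

# Run a trivial op and confirm where it was placed.
with tf.device("/GPU:0"):
    c = tf.matmul(tf.random.normal((1024, 1024)), tf.random.normal((1024, 1024)))
print(c.device)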

It will be possible in about 4 months, around March 2022. See Soumith's reply to this question on GitHub: https://github.com/pytorch/pytorch/issues/47702

Related

Can I use Alea.cuBase / Alea GPU with CUDA 8.0?

I just tried to run the Alea TK samples on a machine with a GTX 1070, and:
CUDA 7.5 installs, but doesn't seem to work there. NVIDIA says CUDA 8.0 RC should be used with this GPU: https://devtalk.nvidia.com/default/topic/949823/cuda-setup-and-installation/when-the-cuda-toolkit-will-support-gtx1070-graphics-card-/
CUDA 8.0 also installs successfully there, but it seems all the bindings in Alea.cuBase are to CUDA 7.5, i.e. basically all samples fail on the attempt to load CUDA 7.5's "cu*64_75.dll" libraries, even though the 8.0 version includes similar ones with an "_80" suffix.
The same samples run without any issues on machines with less recent GPUs (and thus CUDA 7.5).
Is there any way to address this, or should I wait for an updated version of Alea.cuBase?
The GTX 1070 has a new GPU of the Pascal architecture in it. Starting with Alea GPU V3 beta 17 we also support Pascal. Give it a try. CUDA 8 should also work, but you have to use the new Alea GPU version 3 beta release. The old Alea GPU v2.2 is not capable of compiling for Pascal.

Would TensorFlow utilize GPU on a Mac if installed on a VM?

From TensorFlow's "Getting Started" page:
# Only CPU-version is available at the moment.
$ pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
I'm not super familiar with using GPUs or CUDA libraries, but if I installed TensorFlow inside a Linux VM (say, the precise32 box available through Vagrant), would TensorFlow utilize the GPU when running inside that VM?
Probably not. VirtualBox, for example, does not support PCI passthrough on a macOS host, only on a Linux host (and even then, I'd... uh, not get my hopes up). macOS ends up so tightly integrated with its GPU(s) that I'd be very dubious that any VM can do it at this point.
As an update: TensorFlow can now use GPUs on Mac OS X. The relevant PR is https://github.com/tensorflow/tensorflow/pull/664, and after a brew install coreutils the Linux "build from source" installation instructions should work. I see a 10x speedup compared to the CPU version with an NVIDIA GeForce 960 and an Intel i7-6700K.
Edit (downdate?): Starting with macOS Mojave, due to some API changes and what appears to be a long-standing beef between Apple and NVIDIA, drivers for NVIDIA graphics cards are no longer available. No NVIDIA means no CUDA, which means no TensorFlow, nor really any other respectable machine learning. Something like Google Colaboratory appears to be the way to go for now.

OpenGL 3.3 Ubuntu (Virtual Machine)

I need OpenGL 3.3 or higher to use GLSL 3.3. My problem is that I am on Mac OS X, which doesn't let me use a version of OpenGL higher than 2.1. I've installed an Ubuntu virtual machine inside my system, but it also only has OpenGL 2.1. I don't understand what's going on, because I have an AMD Radeon HD 6490M 256 MB, which is compatible with OpenGL 4.1. Is there any way that I can use OpenGL 3.3 or higher without doing a disk partition?
I don't understand what's going on because I have an AMD Radeon HD 6490M 256 MB which is compatible with OpenGL 4.1
The GPU serves the host machine. The virtual machine sees only some dumb framebuffer device, or whatever OpenGL API the host makes available to the VM, passed through to the guest.
If you want to leverage the OpenGL 4 capabilities, you must install an OS that can access the GPU natively. Also, if you want to run Linux you'll have to install the proprietary fglrx drivers (also called Catalyst for Linux), as the open-source drivers that ship as the distribution default haven't caught up yet.
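One way to see what the guest actually exposes is to request a 3.3 core-profile context and print what comes back. A sketch using the glfw and PyOpenGL Python bindings (an assumption; any language binding, or a tool like glxinfo, shows the same thing):

import glfw
from OpenGL.GL import glGetString, GL_VERSION, GL_RENDERER

# Ask for a 3.3 core profile; in a VM exposing only a 2.1 virtual driver,
# either window creation fails or a lower version is reported.
glfw.init()
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)

window = glfw.create_window(64, 64, "gl-version-check", None, None)
if not window:
    print("3.3 core context not available")
else:
    glfw.make_context_current(window)
    print(glGetString(GL_VERSION), glGetString(GL_RENDERER))
glfw.terminate()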
Is there any way that I can use OpenGL 3.3 or higher without doing a disk partition?
Upgrade to OS X 10.9 when it comes out, or grab the beta.
Or find some VM software for OS X that supports VGA passthrough.
If you're willing to repartition, you can install Windows or Linux natively and use the drivers from AMD.

Offline compilation for AMD and NVIDIA OpenCL Kernels without cards installed

I was trying to figure out a way to perform offline compilation of OpenCL kernels without installing graphics cards. I have installed the SDKs.
Does anyone have any experience with compiling OpenCL kernels without having a graphics card installed, for either NVIDIA or AMD?
I asked a similar question on the AMD forums
(http://devgurus.amd.com/message/1284379).
The NVIDIA forums have been inaccessible for a long time, so I couldn't get any help from there.
Thanks
AMD has an OpenCL extension for compiling binaries for devices that are not present on the system. The extension is called cl_amd_offline_devices. Pass the property CL_CONTEXT_OFFLINE_DEVICES_AMD when creating a context, and all of AMD's supported devices are reported and can be used to create binaries as if they were present on the system.
Check out their OpenCL programming guide at http://developer.amd.com/tools/hc/AMDAPPSDK/assets/AMD_Accelerated_Parallel_Processing_OpenCL_Programming_Guide.pdf for more info.
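To check whether an installed AMD platform actually advertises this extension, you can inspect the platform's extension string. A small sketch with the pyopencl bindings (an assumption; the C API does the same via clGetPlatformInfo):

import pyopencl as cl

# The extension name comes from the answer above; platforms without it
# cannot create contexts with offline devices.
for platform in cl.get_platforms():
    supported = "cl_amd_offline_devices" in platform.extensions
    print(platform.name, "- cl_amd_offline_devices:", supported)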
There is no need for a graphics card: you can compile OpenCL programs for the CPU too. If you have an Intel or AMD CPU, this works. Download the latest OpenCL SDK from the corresponding manufacturer's website and compile your OpenCL program against the CPU device (a sketch follows the list below):
Intel OpenCL SDK
AMD APP
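With one of these runtimes installed, a kernel can be built against the CPU device and the resulting device binary saved, with no GPU in the machine. A minimal sketch using pyopencl (an assumption; the kernel source and output file name are placeholders):

import pyopencl as cl

kernel_src = """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

# Collect CPU devices exposed by the installed Intel/AMD runtime.
cpu_devices = []
for platform in cl.get_platforms():
    try:
        cpu_devices += platform.get_devices(device_type=cl.device_type.CPU)
    except cl.Error:
        pass  # this platform has no CPU device

ctx = cl.Context(devices=cpu_devices[:1])

# Compile now and keep the device binary for later deployment.
program = cl.Program(ctx, kernel_src).build()
binary = program.get_info(cl.program_info.BINARIES)[0]
with open("add_cpu.bin", "wb") as f:
    f.write(binary)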

Install an AMD OpenCL CPU driver with an NVIDIA graphics card

I have seen this question many times but never found an answer for Windows.
I recently ported my CUDA code to OpenCL.
When testing with an ATI card, the Catalyst drivers contain a CPU OpenCL driver, so I can run the OpenCL code on the CPU.
When testing with an NVIDIA card, there is no driver for the CPU.
The question is: how can I install (and deploy) a CPU driver when running with an NVIDIA card?
Thanks a lot
To use OpenCL on the CPU you don't need any driver; you only need an OpenCL runtime that supports the CPU, which (in the case of AMD/ATI) is part of the APP SDK. It can be installed no matter what GPU you have. Your end users would also have to install the APP SDK: currently, there is no way to install the OpenCL runtime only.
If you have an Intel CPU, you are better off trying the Intel OpenCL SDK, which has a separate installer. That said, the AMD APP SDK works quite well on Intel CPUs, but not vice versa.
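Once the APP SDK (or Intel's runtime) is installed alongside the NVIDIA driver, both show up as separate OpenCL platforms and the application can pick the CPU device explicitly. A sketch with pyopencl (an assumption; the same enumeration is done in C with clGetPlatformIDs/clGetDeviceIDs):

import pyopencl as cl

# The NVIDIA GPU driver and the AMD/Intel CPU runtime appear as separate platforms.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "-", device.name, "-", cl.device_type.to_string(device.type))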
